linux-kernel.vger.kernel.org archive mirror
* [patch 0/5] Changes based on review comments for PAT pfnmap tracking
@ 2008-12-19 21:47 venkatesh.pallipadi
  2008-12-19 21:47 ` [patch 1/5] x86 PAT: clarify is_linear_pfn_mapping() interface venkatesh.pallipadi
                   ` (6 more replies)
  0 siblings, 7 replies; 8+ messages in thread
From: venkatesh.pallipadi @ 2008-12-19 21:47 UTC (permalink / raw)
  To: mingo, tglx, hpa, akpm, npiggin, hugh
  Cc: arjan, jbarnes, rdreier, jeremy, linux-kernel,
	Venkatesh Pallipadi, Suresh Siddha

Incremental patches to address the review comments from Nick Piggin
on the v3 version of the x86 PAT pfnmap changes patchset, here:

http://lkml.indiana.edu/hypermail/linux/kernel/0812.2/01330.html

-- 



* [patch 1/5] x86 PAT: clarify is_linear_pfn_mapping() interface
  2008-12-19 21:47 [patch 0/5] Changes based on review comments for PAT pfnmap tracking venkatesh.pallipadi
@ 2008-12-19 21:47 ` venkatesh.pallipadi
  2008-12-19 21:47 ` [patch 2/5] x86 PAT: Modify follow_phys to return phys_addr prot and return value venkatesh.pallipadi
                   ` (5 subsequent siblings)
  6 siblings, 0 replies; 8+ messages in thread
From: venkatesh.pallipadi @ 2008-12-19 21:47 UTC (permalink / raw)
  To: mingo, tglx, hpa, akpm, npiggin, hugh
  Cc: arjan, jbarnes, rdreier, jeremy, linux-kernel,
	Venkatesh Pallipadi, Suresh Siddha

[-- Attachment #1: linear_pfnmap_comments.patch --]
[-- Type: text/plain, Size: 1643 bytes --]

Incremental patches to address the review comments from Nick Piggin
on the v3 version of the x86 PAT pfnmap changes patchset, here:

http://lkml.indiana.edu/hypermail/linux/kernel/0812.2/01330.html

This patch:

Clarify is_linear_pfn_mapping() and its usage.

It is used by the x86 PAT code for performance reasons. Identifying a
pfnmap as linear over the entire vma speeds up reserving and freeing
the memtype for the region.

Signed-off-by: Venkatesh Pallipadi <venkatesh.pallipadi@intel.com>
Signed-off-by: Suresh Siddha <suresh.b.siddha@intel.com>

---
 include/linux/mm.h |    8 ++++++++
 1 file changed, 8 insertions(+)

Index: linux-2.6/include/linux/mm.h
===================================================================
--- linux-2.6.orig/include/linux/mm.h	2008-12-19 09:55:33.000000000 -0800
+++ linux-2.6/include/linux/mm.h	2008-12-19 09:56:02.000000000 -0800
@@ -145,6 +145,14 @@ extern pgprot_t protection_map[16];
 #define FAULT_FLAG_WRITE	0x01	/* Fault was a write access */
 #define FAULT_FLAG_NONLINEAR	0x02	/* Fault was via a nonlinear mapping */
 
+/*
+ * This interface is used by x86 PAT code to identify a pfn mapping that is
+ * linear over entire vma. This is to optimize PAT code that deals with
+ * marking the physical region with a particular prot. This is not for generic
+ * mm use. Note also that this check will not work if the pfn mapping is
+ * linear for a vma starting at physical address 0. In which case PAT code
+ * falls back to slow path of reserving physical range page by page.
+ */
 static inline int is_linear_pfn_mapping(struct vm_area_struct *vma)
 {
 	return ((vma->vm_flags & VM_PFNMAP) && vma->vm_pgoff);

-- 



* [patch 2/5] x86 PAT: Modify follow_phys to return phys_addr prot and return value
  2008-12-19 21:47 [patch 0/5] Changes based on review comments for PAT pfnmap tracking venkatesh.pallipadi
  2008-12-19 21:47 ` [patch 1/5] x86 PAT: clarify is_linear_pfn_mapping() interface venkatesh.pallipadi
@ 2008-12-19 21:47 ` venkatesh.pallipadi
  2008-12-19 21:47 ` [patch 3/5] x86 PAT: remove follow_pfnmap_pte in favor of follow_phys venkatesh.pallipadi
                   ` (4 subsequent siblings)
  6 siblings, 0 replies; 8+ messages in thread
From: venkatesh.pallipadi @ 2008-12-19 21:47 UTC (permalink / raw)
  To: mingo, tglx, hpa, akpm, npiggin, hugh
  Cc: arjan, jbarnes, rdreier, jeremy, linux-kernel,
	Venkatesh Pallipadi, Suresh Siddha

[-- Attachment #1: modify_follow_phys.patch --]
[-- Type: text/plain, Size: 3570 bytes --]

follow_phys does much the same work as follow_pfnmap_pte. Make a minor change
to follow_phys so that it can be used in place of follow_pfnmap_pte.
Returning the physical address with 0 as the error value does not work for
follow_phys, as a pte may legitimately map physical address 0.

Signed-off-by: Venkatesh Pallipadi <venkatesh.pallipadi@intel.com>
Signed-off-by: Suresh Siddha <suresh.b.siddha@intel.com>

---
 include/linux/mm.h |    2 ++
 mm/memory.c        |   31 ++++++++++++++-----------------
 2 files changed, 16 insertions(+), 17 deletions(-)

Index: linux-2.6/include/linux/mm.h
===================================================================
--- linux-2.6.orig/include/linux/mm.h	2008-12-19 09:56:02.000000000 -0800
+++ linux-2.6/include/linux/mm.h	2008-12-19 11:04:32.000000000 -0800
@@ -804,6 +804,8 @@ int copy_page_range(struct mm_struct *ds
 			struct vm_area_struct *vma);
 void unmap_mapping_range(struct address_space *mapping,
 		loff_t const holebegin, loff_t const holelen, int even_cows);
+int follow_phys(struct vm_area_struct *vma, unsigned long address,
+		unsigned int flags, unsigned long *prot, resource_size_t *phys);
 int generic_access_phys(struct vm_area_struct *vma, unsigned long addr,
 			void *buf, int len, int write);
 
Index: linux-2.6/mm/memory.c
===================================================================
--- linux-2.6.orig/mm/memory.c	2008-12-19 09:55:29.000000000 -0800
+++ linux-2.6/mm/memory.c	2008-12-19 11:05:40.000000000 -0800
@@ -2981,9 +2981,9 @@ int in_gate_area_no_task(unsigned long a
 #endif	/* __HAVE_ARCH_GATE_AREA */
 
 #ifdef CONFIG_HAVE_IOREMAP_PROT
-static resource_size_t follow_phys(struct vm_area_struct *vma,
-			unsigned long address, unsigned int flags,
-			unsigned long *prot)
+int follow_phys(struct vm_area_struct *vma,
+		unsigned long address, unsigned int flags,
+		unsigned long *prot, resource_size_t *phys)
 {
 	pgd_t *pgd;
 	pud_t *pud;
@@ -2992,24 +2992,26 @@ static resource_size_t follow_phys(struc
 	spinlock_t *ptl;
 	resource_size_t phys_addr = 0;
 	struct mm_struct *mm = vma->vm_mm;
+	int ret = -EINVAL;
 
-	VM_BUG_ON(!(vma->vm_flags & (VM_IO | VM_PFNMAP)));
+	if (!(vma->vm_flags & (VM_IO | VM_PFNMAP)))
+		goto out;
 
 	pgd = pgd_offset(mm, address);
 	if (pgd_none(*pgd) || unlikely(pgd_bad(*pgd)))
-		goto no_page_table;
+		goto out;
 
 	pud = pud_offset(pgd, address);
 	if (pud_none(*pud) || unlikely(pud_bad(*pud)))
-		goto no_page_table;
+		goto out;
 
 	pmd = pmd_offset(pud, address);
 	if (pmd_none(*pmd) || unlikely(pmd_bad(*pmd)))
-		goto no_page_table;
+		goto out;
 
 	/* We cannot handle huge page PFN maps. Luckily they don't exist. */
 	if (pmd_huge(*pmd))
-		goto no_page_table;
+		goto out;
 
 	ptep = pte_offset_map_lock(mm, pmd, address, &ptl);
 	if (!ptep)
@@ -3024,13 +3026,13 @@ static resource_size_t follow_phys(struc
 	phys_addr <<= PAGE_SHIFT; /* Shift here to avoid overflow on PAE */
 
 	*prot = pgprot_val(pte_pgprot(pte));
+	*phys = phys_addr;
+	ret = 0;
 
 unlock:
 	pte_unmap_unlock(ptep, ptl);
 out:
-	return phys_addr;
-no_page_table:
-	return 0;
+	return ret;
 }
 
 int generic_access_phys(struct vm_area_struct *vma, unsigned long addr,
@@ -3041,12 +3043,7 @@ int generic_access_phys(struct vm_area_s
 	void *maddr;
 	int offset = addr & (PAGE_SIZE-1);
 
-	if (!(vma->vm_flags & (VM_IO | VM_PFNMAP)))
-		return -EINVAL;
-
-	phys_addr = follow_phys(vma, addr, write, &prot);
-
-	if (!phys_addr)
+	if (follow_phys(vma, addr, write, &prot, &phys_addr))
 		return -EINVAL;
 
 	maddr = ioremap_prot(phys_addr, PAGE_SIZE, prot);

-- 



* [patch 3/5] x86 PAT: remove follow_pfnmap_pte in favor of follow_phys
  2008-12-19 21:47 [patch 0/5] Changes based on review comments for PAT pfnmap tracking venkatesh.pallipadi
  2008-12-19 21:47 ` [patch 1/5] x86 PAT: clarify is_linear_pfn_mapping() interface venkatesh.pallipadi
  2008-12-19 21:47 ` [patch 2/5] x86 PAT: Modify follow_phys to return phys_addr prot and return value venkatesh.pallipadi
@ 2008-12-19 21:47 ` venkatesh.pallipadi
  2008-12-19 21:47 ` [patch 4/5] x86 PAT: Move track untrack pfnmap stubs to asm-generic venkatesh.pallipadi
                   ` (3 subsequent siblings)
  6 siblings, 0 replies; 8+ messages in thread
From: venkatesh.pallipadi @ 2008-12-19 21:47 UTC (permalink / raw)
  To: mingo, tglx, hpa, akpm, npiggin, hugh
  Cc: arjan, jbarnes, rdreier, jeremy, linux-kernel,
	Venkatesh Pallipadi, Suresh Siddha

[-- Attachment #1: reuse_follow_phys_in_pat.patch --]
[-- Type: text/plain, Size: 5816 bytes --]

Replace follow_pfnmap_pte in the PAT code with follow_phys. follow_phys also
returns the protection, eliminating the need for the pte_pgprot call. Using
follow_phys also eliminates the need for pte_pa.

Signed-off-by: Venkatesh Pallipadi <venkatesh.pallipadi@intel.com>
Signed-off-by: Suresh Siddha <suresh.b.siddha@intel.com>

---
 arch/x86/include/asm/pgtable.h |    5 ----
 arch/x86/mm/pat.c              |   30 ++++++++++------------------
 include/linux/mm.h             |    3 --
 mm/memory.c                    |   43 -----------------------------------------
 4 files changed, 11 insertions(+), 70 deletions(-)

Index: linux-2.6/include/linux/mm.h
===================================================================
--- linux-2.6.orig/include/linux/mm.h	2008-12-19 09:56:08.000000000 -0800
+++ linux-2.6/include/linux/mm.h	2008-12-19 09:58:16.000000000 -0800
@@ -1239,9 +1239,6 @@ struct page *follow_page(struct vm_area_
 #define FOLL_GET	0x04	/* do get_page on page */
 #define FOLL_ANON	0x08	/* give ZERO_PAGE if no pgtable */
 
-int follow_pfnmap_pte(struct vm_area_struct *vma,
-				unsigned long address, pte_t *ret_ptep);
-
 typedef int (*pte_fn_t)(pte_t *pte, pgtable_t token, unsigned long addr,
 			void *data);
 extern int apply_to_page_range(struct mm_struct *mm, unsigned long address,
Index: linux-2.6/mm/memory.c
===================================================================
--- linux-2.6.orig/mm/memory.c	2008-12-19 09:56:08.000000000 -0800
+++ linux-2.6/mm/memory.c	2008-12-19 09:58:16.000000000 -0800
@@ -1168,49 +1168,6 @@ no_page_table:
 	return page;
 }
 
-int follow_pfnmap_pte(struct vm_area_struct *vma, unsigned long address,
-			pte_t *ret_ptep)
-{
-	pgd_t *pgd;
-	pud_t *pud;
-	pmd_t *pmd;
-	pte_t *ptep, pte;
-	spinlock_t *ptl;
-	struct page *page;
-	struct mm_struct *mm = vma->vm_mm;
-
-	if (!is_pfn_mapping(vma))
-		goto err;
-
-	page = NULL;
-	pgd = pgd_offset(mm, address);
-	if (pgd_none(*pgd) || unlikely(pgd_bad(*pgd)))
-		goto err;
-
-	pud = pud_offset(pgd, address);
-	if (pud_none(*pud) || unlikely(pud_bad(*pud)))
-		goto err;
-
-	pmd = pmd_offset(pud, address);
-	if (pmd_none(*pmd) || unlikely(pmd_bad(*pmd)))
-		goto err;
-
-	ptep = pte_offset_map_lock(mm, pmd, address, &ptl);
-
-	pte = *ptep;
-	if (!pte_present(pte))
-		goto err_unlock;
-
-	*ret_ptep = pte;
-	pte_unmap_unlock(ptep, ptl);
-	return 0;
-
-err_unlock:
-	pte_unmap_unlock(ptep, ptl);
-err:
-	return -EINVAL;
-}
-
 /* Can we do the FOLL_ANON optimization? */
 static inline int use_zero_page(struct vm_area_struct *vma)
 {
Index: linux-2.6/arch/x86/mm/pat.c
===================================================================
--- linux-2.6.orig/arch/x86/mm/pat.c	2008-12-19 09:55:25.000000000 -0800
+++ linux-2.6/arch/x86/mm/pat.c	2008-12-19 09:58:16.000000000 -0800
@@ -685,8 +685,7 @@ int track_pfn_vma_copy(struct vm_area_st
 	int retval = 0;
 	unsigned long i, j;
 	u64 paddr;
-	pgprot_t prot;
-	pte_t pte;
+	unsigned long prot;
 	unsigned long vma_start = vma->vm_start;
 	unsigned long vma_end = vma->vm_end;
 	unsigned long vma_size = vma_end - vma_start;
@@ -696,26 +695,22 @@ int track_pfn_vma_copy(struct vm_area_st
 
 	if (is_linear_pfn_mapping(vma)) {
 		/*
-		 * reserve the whole chunk starting from vm_pgoff,
-		 * But, we have to get the protection from pte.
+		 * reserve the whole chunk covered by vma. We need the
+		 * starting address and protection from pte.
 		 */
-		if (follow_pfnmap_pte(vma, vma_start, &pte)) {
+		if (follow_phys(vma, vma_start, 0, &prot, &paddr)) {
 			WARN_ON_ONCE(1);
-			return -1;
+			return -EINVAL;
 		}
-		prot = pte_pgprot(pte);
-		paddr = (u64)vma->vm_pgoff << PAGE_SHIFT;
-		return reserve_pfn_range(paddr, vma_size, prot);
+		return reserve_pfn_range(paddr, vma_size, __pgprot(prot));
 	}
 
 	/* reserve entire vma page by page, using pfn and prot from pte */
 	for (i = 0; i < vma_size; i += PAGE_SIZE) {
-		if (follow_pfnmap_pte(vma, vma_start + i, &pte))
+		if (follow_phys(vma, vma_start + i, 0, &prot, &paddr))
 			continue;
 
-		paddr = pte_pa(pte);
-		prot = pte_pgprot(pte);
-		retval = reserve_pfn_range(paddr, PAGE_SIZE, prot);
+		retval = reserve_pfn_range(paddr, PAGE_SIZE, __pgprot(prot));
 		if (retval)
 			goto cleanup_ret;
 	}
@@ -724,10 +719,9 @@ int track_pfn_vma_copy(struct vm_area_st
 cleanup_ret:
 	/* Reserve error: Cleanup partial reservation and return error */
 	for (j = 0; j < i; j += PAGE_SIZE) {
-		if (follow_pfnmap_pte(vma, vma_start + j, &pte))
+		if (follow_phys(vma, vma_start + j, 0, &prot, &paddr))
 			continue;
 
-		paddr = pte_pa(pte);
 		free_pfn_range(paddr, PAGE_SIZE);
 	}
 
@@ -797,6 +791,7 @@ void untrack_pfn_vma(struct vm_area_stru
 {
 	unsigned long i;
 	u64 paddr;
+	unsigned long prot;
 	unsigned long vma_start = vma->vm_start;
 	unsigned long vma_end = vma->vm_end;
 	unsigned long vma_size = vma_end - vma_start;
@@ -821,12 +816,9 @@ void untrack_pfn_vma(struct vm_area_stru
 	} else {
 		/* free entire vma, page by page, using the pfn from pte */
 		for (i = 0; i < vma_size; i += PAGE_SIZE) {
-			pte_t pte;
-
-			if (follow_pfnmap_pte(vma, vma_start + i, &pte))
+			if (follow_phys(vma, vma_start + i, 0, &prot, &paddr))
 				continue;
 
-			paddr = pte_pa(pte);
 			free_pfn_range(paddr, PAGE_SIZE);
 		}
 	}
Index: linux-2.6/arch/x86/include/asm/pgtable.h
===================================================================
--- linux-2.6.orig/arch/x86/include/asm/pgtable.h	2008-12-19 09:55:25.000000000 -0800
+++ linux-2.6/arch/x86/include/asm/pgtable.h	2008-12-19 09:58:16.000000000 -0800
@@ -230,11 +230,6 @@ static inline unsigned long pte_pfn(pte_
 	return (pte_val(pte) & PTE_PFN_MASK) >> PAGE_SHIFT;
 }
 
-static inline u64 pte_pa(pte_t pte)
-{
-	return pte_val(pte) & PTE_PFN_MASK;
-}
-
 #define pte_page(pte)	pfn_to_page(pte_pfn(pte))
 
 static inline int pmd_large(pmd_t pte)

-- 



* [patch 4/5] x86 PAT: Move track untrack pfnmap stubs to asm-generic
  2008-12-19 21:47 [patch 0/5] Changes based on review comments for PAT pfnmap tracking venkatesh.pallipadi
                   ` (2 preceding siblings ...)
  2008-12-19 21:47 ` [patch 3/5] x86 PAT: remove follow_pfnmap_pte in favor of follow_phys venkatesh.pallipadi
@ 2008-12-19 21:47 ` venkatesh.pallipadi
  2008-12-19 21:47 ` [patch 5/5] x86 PAT: pfnmap documentation update changes venkatesh.pallipadi
                   ` (2 subsequent siblings)
  6 siblings, 0 replies; 8+ messages in thread
From: venkatesh.pallipadi @ 2008-12-19 21:47 UTC (permalink / raw)
  To: mingo, tglx, hpa, akpm, npiggin, hugh
  Cc: arjan, jbarnes, rdreier, jeremy, linux-kernel,
	Venkatesh Pallipadi, Suresh Siddha

[-- Attachment #1: generic_pfn_range_comments.patch --]
[-- Type: text/plain, Size: 6518 bytes --]

Move the track and untrack pfn stub routines from memory.c to asm-generic.
Also add unlikely() to the pfnmap-related checks in the fork and exit paths.

Signed-off-by: Venkatesh Pallipadi <venkatesh.pallipadi@intel.com>
Signed-off-by: Suresh Siddha <suresh.b.siddha@intel.com>

---
 arch/x86/include/asm/pgtable.h |    6 +----
 include/asm-generic/pgtable.h  |   46 +++++++++++++++++++++++++++++++++++++++
 include/linux/mm.h             |    6 -----
 mm/memory.c                    |   48 +----------------------------------------
 4 files changed, 50 insertions(+), 56 deletions(-)

Index: linux-2.6/include/linux/mm.h
===================================================================
--- linux-2.6.orig/include/linux/mm.h	2008-12-19 09:58:16.000000000 -0800
+++ linux-2.6/include/linux/mm.h	2008-12-19 10:03:59.000000000 -0800
@@ -163,12 +163,6 @@ static inline int is_pfn_mapping(struct 
 	return (vma->vm_flags & VM_PFNMAP);
 }
 
-extern int track_pfn_vma_new(struct vm_area_struct *vma, pgprot_t prot,
-				unsigned long pfn, unsigned long size);
-extern int track_pfn_vma_copy(struct vm_area_struct *vma);
-extern void untrack_pfn_vma(struct vm_area_struct *vma, unsigned long pfn,
-				unsigned long size);
-
 /*
  * vm_fault is filled by the the pagefault handler and passed to the vma's
  * ->fault function. The vma's ->fault is responsible for returning a bitmask
Index: linux-2.6/mm/memory.c
===================================================================
--- linux-2.6.orig/mm/memory.c	2008-12-19 09:58:16.000000000 -0800
+++ linux-2.6/mm/memory.c	2008-12-19 10:03:59.000000000 -0800
@@ -99,50 +99,6 @@ int randomize_va_space __read_mostly =
 					2;
 #endif
 
-#ifndef track_pfn_vma_new
-/*
- * Interface that can be used by architecture code to keep track of
- * memory type of pfn mappings (remap_pfn_range, vm_insert_pfn)
- *
- * track_pfn_vma_new is called when a _new_ pfn mapping is being established
- * for physical range indicated by pfn and size.
- */
-int track_pfn_vma_new(struct vm_area_struct *vma, pgprot_t prot,
-			unsigned long pfn, unsigned long size)
-{
-	return 0;
-}
-#endif
-
-#ifndef track_pfn_vma_copy
-/*
- * Interface that can be used by architecture code to keep track of
- * memory type of pfn mappings (remap_pfn_range, vm_insert_pfn)
- *
- * track_pfn_vma_copy is called when vma that is covering the pfnmap gets
- * copied through copy_page_range().
- */
-int track_pfn_vma_copy(struct vm_area_struct *vma)
-{
-	return 0;
-}
-#endif
-
-#ifndef untrack_pfn_vma
-/*
- * Interface that can be used by architecture code to keep track of
- * memory type of pfn mappings (remap_pfn_range, vm_insert_pfn)
- *
- * untrack_pfn_vma is called while unmapping a pfnmap for a region.
- * untrack can be called for a specific region indicated by pfn and size or
- * can be for the entire vma (in which case size can be zero).
- */
-void untrack_pfn_vma(struct vm_area_struct *vma, unsigned long pfn,
-			unsigned long size)
-{
-}
-#endif
-
 static int __init disable_randmaps(char *s)
 {
 	randomize_va_space = 0;
@@ -713,7 +669,7 @@ int copy_page_range(struct mm_struct *ds
 	if (is_vm_hugetlb_page(vma))
 		return copy_hugetlb_page_range(dst_mm, src_mm, vma);
 
-	if (is_pfn_mapping(vma)) {
+	if (unlikely(is_pfn_mapping(vma))) {
 		/*
 		 * We do not free on error cases below as remove_vma
 		 * gets called on error from higher level routine
@@ -969,7 +925,7 @@ unsigned long unmap_vmas(struct mmu_gath
 		if (vma->vm_flags & VM_ACCOUNT)
 			*nr_accounted += (end - start) >> PAGE_SHIFT;
 
-		if (is_pfn_mapping(vma))
+		if (unlikely(is_pfn_mapping(vma)))
 			untrack_pfn_vma(vma, 0, 0);
 
 		while (start != end) {
Index: linux-2.6/include/asm-generic/pgtable.h
===================================================================
--- linux-2.6.orig/include/asm-generic/pgtable.h	2008-12-19 09:55:21.000000000 -0800
+++ linux-2.6/include/asm-generic/pgtable.h	2008-12-19 10:15:08.000000000 -0800
@@ -293,6 +293,52 @@ static inline void ptep_modify_prot_comm
 #define arch_flush_lazy_cpu_mode()	do {} while (0)
 #endif
 
+#ifndef __HAVE_PFNMAP_TRACKING
+/*
+ * Interface that can be used by architecture code to keep track of
+ * memory type of pfn mappings (remap_pfn_range, vm_insert_pfn)
+ *
+ * track_pfn_vma_new is called when a _new_ pfn mapping is being established
+ * for physical range indicated by pfn and size.
+ */
+static inline int track_pfn_vma_new(struct vm_area_struct *vma, pgprot_t prot,
+					unsigned long pfn, unsigned long size)
+{
+	return 0;
+}
+
+/*
+ * Interface that can be used by architecture code to keep track of
+ * memory type of pfn mappings (remap_pfn_range, vm_insert_pfn)
+ *
+ * track_pfn_vma_copy is called when vma that is covering the pfnmap gets
+ * copied through copy_page_range().
+ */
+static inline int track_pfn_vma_copy(struct vm_area_struct *vma)
+{
+	return 0;
+}
+
+/*
+ * Interface that can be used by architecture code to keep track of
+ * memory type of pfn mappings (remap_pfn_range, vm_insert_pfn)
+ *
+ * untrack_pfn_vma is called while unmapping a pfnmap for a region.
+ * untrack can be called for a specific region indicated by pfn and size or
+ * can be for the entire vma (in which case size can be zero).
+ */
+static inline void untrack_pfn_vma(struct vm_area_struct *vma,
+					unsigned long pfn, unsigned long size)
+{
+}
+#else
+extern int track_pfn_vma_new(struct vm_area_struct *vma, pgprot_t prot,
+				unsigned long pfn, unsigned long size);
+extern int track_pfn_vma_copy(struct vm_area_struct *vma);
+extern void untrack_pfn_vma(struct vm_area_struct *vma, unsigned long pfn,
+				unsigned long size);
+#endif
+
 #endif /* !__ASSEMBLY__ */
 
 #endif /* _ASM_GENERIC_PGTABLE_H */
Index: linux-2.6/arch/x86/include/asm/pgtable.h
===================================================================
--- linux-2.6.orig/arch/x86/include/asm/pgtable.h	2008-12-19 09:58:16.000000000 -0800
+++ linux-2.6/arch/x86/include/asm/pgtable.h	2008-12-19 10:15:14.000000000 -0800
@@ -339,12 +339,10 @@ static inline pgprot_t pgprot_modify(pgp
 
 #define canon_pgprot(p) __pgprot(pgprot_val(p) & __supported_pte_mask)
 
+#ifndef __ASSEMBLY__
 /* Indicate that x86 has its own track and untrack pfn vma functions */
-#define track_pfn_vma_new track_pfn_vma_new
-#define track_pfn_vma_copy track_pfn_vma_copy
-#define untrack_pfn_vma untrack_pfn_vma
+#define __HAVE_PFNMAP_TRACKING
 
-#ifndef __ASSEMBLY__
 #define __HAVE_PHYS_MEM_ACCESS_PROT
 struct file;
 pgprot_t phys_mem_access_prot(struct file *file, unsigned long pfn,

-- 



* [patch 5/5] x86 PAT: pfnmap documentation update changes
  2008-12-19 21:47 [patch 0/5] Changes based on review comments for PAT pfnmap tracking venkatesh.pallipadi
                   ` (3 preceding siblings ...)
  2008-12-19 21:47 ` [patch 4/5] x86 PAT: Move track untrack pfnmap stubs to asm-generic venkatesh.pallipadi
@ 2008-12-19 21:47 ` venkatesh.pallipadi
  2008-12-19 23:44 ` [patch 0/5] Changes based on review comments for PAT pfnmap tracking H. Peter Anvin
  2008-12-22  3:59 ` Nick Piggin
  6 siblings, 0 replies; 8+ messages in thread
From: venkatesh.pallipadi @ 2008-12-19 21:47 UTC (permalink / raw)
  To: mingo, tglx, hpa, akpm, npiggin, hugh
  Cc: arjan, jbarnes, rdreier, jeremy, linux-kernel,
	Venkatesh Pallipadi, Suresh Siddha

[-- Attachment #1: documentation_updates_comments.patch --]
[-- Type: text/plain, Size: 1935 bytes --]

Documentation updates as per Randy Dunlap's comments.

Signed-off-by: Venkatesh Pallipadi <venkatesh.pallipadi@intel.com>
Signed-off-by: Suresh Siddha <suresh.b.siddha@intel.com>

---
 Documentation/x86/pat.txt |   10 +++++-----
 1 file changed, 5 insertions(+), 5 deletions(-)

Index: linux-2.6/Documentation/x86/pat.txt
===================================================================
--- linux-2.6.orig/Documentation/x86/pat.txt	2008-12-19 09:55:04.000000000 -0800
+++ linux-2.6/Documentation/x86/pat.txt	2008-12-19 10:15:46.000000000 -0800
@@ -82,23 +82,23 @@ pci proc               |    --    |    -
 
 Advanced APIs for drivers
 -------------------------
-A. Exporting pages to user with remap_pfn_range, io_remap_pfn_range,
+A. Exporting pages to users with remap_pfn_range, io_remap_pfn_range,
 vm_insert_pfn
 
-Drivers wanting to export some pages to userspace, do it by using mmap
+Drivers wanting to export some pages to userspace do it by using mmap
 interface and a combination of
 1) pgprot_noncached()
 2) io_remap_pfn_range() or remap_pfn_range() or vm_insert_pfn()
 
-With pat support, a new API pgprot_writecombine is being added. So, driver can
+With PAT support, a new API pgprot_writecombine is being added. So, drivers can
 continue to use the above sequence, with either pgprot_noncached() or
 pgprot_writecombine() in step 1, followed by step 2.
 
 In addition, step 2 internally tracks the region as UC or WC in memtype
 list in order to ensure no conflicting mapping.
 
-Note that this set of APIs only work with IO (non RAM) regions. If driver
-wants to export RAM region, it has to do set_memory_uc() or set_memory_wc()
+Note that this set of APIs only works with IO (non RAM) regions. If driver
+wants to export a RAM region, it has to do set_memory_uc() or set_memory_wc()
 as step 0 above and also track the usage of those pages and use set_memory_wb()
 before the page is freed to free pool.
 

-- 



* Re: [patch 0/5] Changes based on review comments for PAT pfnmap tracking
  2008-12-19 21:47 [patch 0/5] Changes based on review comments for PAT pfnmap tracking venkatesh.pallipadi
                   ` (4 preceding siblings ...)
  2008-12-19 21:47 ` [patch 5/5] x86 PAT: pfnmap documentation update changes venkatesh.pallipadi
@ 2008-12-19 23:44 ` H. Peter Anvin
  2008-12-22  3:59 ` Nick Piggin
  6 siblings, 0 replies; 8+ messages in thread
From: H. Peter Anvin @ 2008-12-19 23:44 UTC (permalink / raw)
  To: venkatesh.pallipadi
  Cc: mingo, tglx, akpm, npiggin, hugh, arjan, jbarnes, rdreier,
	jeremy, linux-kernel, Suresh Siddha

venkatesh.pallipadi@intel.com wrote:
> Incremental patches to address the review comments from Nick Piggin
> for v3 version of x86 PAT pfnmap changes patchset here
> 
> http://lkml.indiana.edu/hypermail/linux/kernel/0812.2/01330.html
> 

Applied to tip:x86/pat2, thanks!

	-hpa


* Re: [patch 0/5] Changes based on review comments for PAT pfnmap tracking
  2008-12-19 21:47 [patch 0/5] Changes based on review comments for PAT pfnmap tracking venkatesh.pallipadi
                   ` (5 preceding siblings ...)
  2008-12-19 23:44 ` [patch 0/5] Changes based on review comments for PAT pfnmap tracking H. Peter Anvin
@ 2008-12-22  3:59 ` Nick Piggin
  6 siblings, 0 replies; 8+ messages in thread
From: Nick Piggin @ 2008-12-22  3:59 UTC (permalink / raw)
  To: venkatesh.pallipadi
  Cc: mingo, tglx, hpa, akpm, hugh, arjan, jbarnes, rdreier, jeremy,
	linux-kernel, Suresh Siddha

On Fri, Dec 19, 2008 at 01:47:25PM -0800, venkatesh.pallipadi@intel.com wrote:
> Incremental patches to address the review comments from Nick Piggin
> for v3 version of x86 PAT pfnmap changes patchset here
> 
> http://lkml.indiana.edu/hypermail/linux/kernel/0812.2/01330.html

Thanks. Feel free to put Reviewed-by: / Acked-by: Nick Piggin <npiggin@suse.de>
on the generic mm patches which I reviewed after these changes if you like.

