linux-mm.kvack.org archive mirror
* [RFC PATCH 0/9] powerpc: mm: Numa faults support for ppc64
@ 2013-10-22 11:28 Aneesh Kumar K.V
  2013-10-22 11:28 ` [RFC PATCH 1/9] powerpc: Use HPTE constants when updating hpte bits Aneesh Kumar K.V
                   ` (8 more replies)
  0 siblings, 9 replies; 10+ messages in thread
From: Aneesh Kumar K.V @ 2013-10-22 11:28 UTC (permalink / raw)
  To: benh, paulus, linux-mm; +Cc: linuxppc-dev

Hi,

This patch series adds support for NUMA faults on the ppc64 architecture. We
steal the _PAGE_COHERENCE bit and use it to indicate _PAGE_NUMA. When setting
_PAGE_NUMA we clear the _PAGE_PRESENT bit and also invalidate the hpte entry
for the page. The next fault on that page will then be handled as a numa fault.
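
In pseudo-C, the transition performed on the Linux PTE is roughly the
following (a simplified sketch of the pte_mknuma()/pte_mknonnuma() helpers
added in patch 5; the real helpers additionally refuse to mark a non-present
pte):

	/* arm a numa hinting fault on this pte */
	pte_val(pte) |= _PAGE_NUMA;		/* reuses the old _PAGE_COHERENCE bit */
	pte_val(pte) &= ~_PAGE_PRESENT;		/* hash_page() will no longer fill an hpte */
	/* ... the existing hpte entry, if any, is invalidated ... */

	/* handling the fault does the reverse */
	pte_val(pte) &= ~_PAGE_NUMA;
	pte_val(pte) |= _PAGE_PRESENT | _PAGE_ACCESSED;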


NOTE:
______
Issue:
I am seeing heavy contention on page_table_lock with this series on a 95-CPU, 4-node box running the autonuma benchmark.

I will be out on vacation until Nov 6 without email access, so I will not be able to respond to review feedback
until then.


lock_stat version 0.3
-------------------------------------------------------------------------------------------------------------------------------------------------------
                      class name    con-bounces    contentions   waittime-min   waittime-max waittime-total    acq-bounces   acquisitions   holdtime-min   holdtime-max holdtime-total
-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------

  &(&mm->page_table_lock)->rlock:     713531791      719610919           0.09     3038193.19 357867523236.3      729709189      750040162    0.0  236991.36  1159646899.68
  ------------------------------
  &(&mm->page_table_lock)->rlock              1          [<c000000000218880>] .anon_vma_prepare+0xb0/0x1e0
  &(&mm->page_table_lock)->rlock             93          [<c000000000207ebc>] .do_numa_page+0x4c/0x190
  &(&mm->page_table_lock)->rlock         301678          [<c0000000002139d4>] .change_protection+0x1d4/0x560
  &(&mm->page_table_lock)->rlock         244524          [<c000000000213be8>] .change_protection+0x3e8/0x560
  ------------------------------
  &(&mm->page_table_lock)->rlock              1          [<c000000000206a38>] .__do_fault+0x198/0x6b0
  &(&mm->page_table_lock)->rlock         704163          [<c0000000002139d4>] .change_protection+0x1d4/0x560
  &(&mm->page_table_lock)->rlock         207227          [<c000000000213be8>] .change_protection+0x3e8/0x560
  &(&mm->page_table_lock)->rlock             95          [<c000000000207ebc>] .do_numa_page+0x4c/0x190
 
-aneesh

--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org.  For more info on Linux MM,
see: http://www.linux-mm.org/ .
Don't email: <dont@kvack.org>

^ permalink raw reply	[flat|nested] 10+ messages in thread

* [RFC PATCH 1/9] powerpc: Use HPTE constants when updating hpte bits
  2013-10-22 11:28 [RFC PATCH 0/9] powerpc: mm: Numa faults support for ppc64 Aneesh Kumar K.V
@ 2013-10-22 11:28 ` Aneesh Kumar K.V
  2013-10-22 11:28 ` [RFC PATCH 2/9] powerpc: Free up _PAGE_COHERENCE for numa fault use later Aneesh Kumar K.V
                   ` (7 subsequent siblings)
  8 siblings, 0 replies; 10+ messages in thread
From: Aneesh Kumar K.V @ 2013-10-22 11:28 UTC (permalink / raw)
  To: benh, paulus, linux-mm; +Cc: linuxppc-dev, Aneesh Kumar K.V

From: "Aneesh Kumar K.V" <aneesh.kumar@linux.vnet.ibm.com>

Even though the Linux PTE bits and the hash PTE bits have the same value
here, use the hash PTE constants when updating the hash PTE.
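
For reference, the two constants happen to have the same numeric value on
hash64 today, which is why the old code worked; the relevant definitions are
roughly as below (worth re-checking should they ever diverge):

	#define _PAGE_COHERENT	0x0010	/* Linux PTE: M, enforce memory coherence */
	#define HPTE_R_M	ASM_CONST(0x0000000000000010)	/* hash PTE M bit */

So "hpte_r &= ~_PAGE_COHERENT" and "hpte_r &= ~HPTE_R_M" clear the same bit,
but only the latter makes it obvious that hpte_r holds hash PTE bits.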

Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
---
 arch/powerpc/platforms/cell/beat_htab.c | 4 ++--
 arch/powerpc/platforms/pseries/lpar.c   | 3 ++-
 2 files changed, 4 insertions(+), 3 deletions(-)

diff --git a/arch/powerpc/platforms/cell/beat_htab.c b/arch/powerpc/platforms/cell/beat_htab.c
index c34ee4e..d4d245c 100644
--- a/arch/powerpc/platforms/cell/beat_htab.c
+++ b/arch/powerpc/platforms/cell/beat_htab.c
@@ -111,7 +111,7 @@ static long beat_lpar_hpte_insert(unsigned long hpte_group,
 		DBG_LOW(" hpte_v=%016lx, hpte_r=%016lx\n", hpte_v, hpte_r);
 
 	if (rflags & _PAGE_NO_CACHE)
-		hpte_r &= ~_PAGE_COHERENT;
+		hpte_r &= ~HPTE_R_M;
 
 	raw_spin_lock(&beat_htab_lock);
 	lpar_rc = beat_read_mask(hpte_group);
@@ -337,7 +337,7 @@ static long beat_lpar_hpte_insert_v3(unsigned long hpte_group,
 		DBG_LOW(" hpte_v=%016lx, hpte_r=%016lx\n", hpte_v, hpte_r);
 
 	if (rflags & _PAGE_NO_CACHE)
-		hpte_r &= ~_PAGE_COHERENT;
+		hpte_r &= ~HPTE_R_M;
 
 	/* insert into not-volted entry */
 	lpar_rc = beat_insert_htab_entry3(0, hpte_group, hpte_v, hpte_r,
diff --git a/arch/powerpc/platforms/pseries/lpar.c b/arch/powerpc/platforms/pseries/lpar.c
index 356bc75..c8fbef23 100644
--- a/arch/powerpc/platforms/pseries/lpar.c
+++ b/arch/powerpc/platforms/pseries/lpar.c
@@ -153,7 +153,8 @@ static long pSeries_lpar_hpte_insert(unsigned long hpte_group,
 
 	/* Make pHyp happy */
 	if ((rflags & _PAGE_NO_CACHE) && !(rflags & _PAGE_WRITETHRU))
-		hpte_r &= ~_PAGE_COHERENT;
+		hpte_r &= ~HPTE_R_M;
+
 	if (firmware_has_feature(FW_FEATURE_XCMO) && !(hpte_r & HPTE_R_N))
 		flags |= H_COALESCE_CAND;
 
-- 
1.8.3.2

^ permalink raw reply related	[flat|nested] 10+ messages in thread

* [RFC PATCH 2/9] powerpc: Free up _PAGE_COHERENCE for numa fault use later
  2013-10-22 11:28 [RFC PATCH 0/9] powerpc: mm: Numa faults support for ppc64 Aneesh Kumar K.V
  2013-10-22 11:28 ` [RFC PATCH 1/9] powerpc: Use HPTE constants when updating hpte bits Aneesh Kumar K.V
@ 2013-10-22 11:28 ` Aneesh Kumar K.V
  2013-10-22 11:28 ` [RFC PATCH 3/9] mm: Move change_prot_numa outside CONFIG_ARCH_USES_NUMA_PROT_NONE Aneesh Kumar K.V
                   ` (6 subsequent siblings)
  8 siblings, 0 replies; 10+ messages in thread
From: Aneesh Kumar K.V @ 2013-10-22 11:28 UTC (permalink / raw)
  To: benh, paulus, linux-mm; +Cc: linuxppc-dev, Aneesh Kumar K.V

From: "Aneesh Kumar K.V" <aneesh.kumar@linux.vnet.ibm.com>

Always set memory coherence on hash64 configs. If a platform cannot
have memory coherence always enabled, it can infer that from
_PAGE_NO_CACHE and _PAGE_WRITETHRU, as lpar does. So we don't really
need a separate bit for tracking _PAGE_COHERENCE.
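
Put differently, after this patch the generic flag conversion unconditionally
sets M, and a platform that must not pass M together with I simply strips it
again, along the lines of (both snippets are from this series):

	/* generic: htab_convert_pte_flags() */
	rflags |= HPTE_R_C | HPTE_R_M;		/* memory coherence always enabled */

	/* platform: pSeries_lpar_hpte_insert(), to keep pHyp happy */
	if ((rflags & _PAGE_NO_CACHE) && !(rflags & _PAGE_WRITETHRU))
		hpte_r &= ~HPTE_R_M;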

Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
---
 arch/powerpc/include/asm/pte-hash64.h |  2 +-
 arch/powerpc/mm/hash_low_64.S         | 15 ++++++++++++---
 arch/powerpc/mm/hash_utils_64.c       |  7 ++++---
 arch/powerpc/mm/hugepage-hash64.c     |  6 +++++-
 arch/powerpc/mm/hugetlbpage-hash64.c  |  4 ++++
 5 files changed, 26 insertions(+), 8 deletions(-)

diff --git a/arch/powerpc/include/asm/pte-hash64.h b/arch/powerpc/include/asm/pte-hash64.h
index 0419eeb..55aea0c 100644
--- a/arch/powerpc/include/asm/pte-hash64.h
+++ b/arch/powerpc/include/asm/pte-hash64.h
@@ -19,7 +19,7 @@
 #define _PAGE_FILE		0x0002 /* (!present only) software: pte holds file offset */
 #define _PAGE_EXEC		0x0004 /* No execute on POWER4 and newer (we invert) */
 #define _PAGE_GUARDED		0x0008
-#define _PAGE_COHERENT		0x0010 /* M: enforce memory coherence (SMP systems) */
+/* We can derive Memory coherence from _PAGE_NO_CACHE */
 #define _PAGE_NO_CACHE		0x0020 /* I: cache inhibit */
 #define _PAGE_WRITETHRU		0x0040 /* W: cache write-through */
 #define _PAGE_DIRTY		0x0080 /* C: page changed */
diff --git a/arch/powerpc/mm/hash_low_64.S b/arch/powerpc/mm/hash_low_64.S
index d3cbda6..1136d26 100644
--- a/arch/powerpc/mm/hash_low_64.S
+++ b/arch/powerpc/mm/hash_low_64.S
@@ -148,7 +148,10 @@ END_MMU_FTR_SECTION_IFSET(MMU_FTR_1T_SEGMENT)
 	and	r0,r0,r4		/* _PAGE_RW & _PAGE_DIRTY ->r0 bit 30*/
 	andc	r0,r30,r0		/* r0 = pte & ~r0 */
 	rlwimi	r3,r0,32-1,31,31	/* Insert result into PP lsb */
-	ori	r3,r3,HPTE_R_C		/* Always add "C" bit for perf. */
+	/*
+	 * Always add "C" bit for perf. Memory coherence is always enabled
+	 */
+	ori	r3,r3,HPTE_R_C | HPTE_R_M
 
 	/* We eventually do the icache sync here (maybe inline that
 	 * code rather than call a C function...) 
@@ -457,7 +460,10 @@ END_MMU_FTR_SECTION_IFSET(MMU_FTR_1T_SEGMENT)
 	and	r0,r0,r4		/* _PAGE_RW & _PAGE_DIRTY ->r0 bit 30*/
 	andc	r0,r3,r0		/* r0 = pte & ~r0 */
 	rlwimi	r3,r0,32-1,31,31	/* Insert result into PP lsb */
-	ori	r3,r3,HPTE_R_C		/* Always add "C" bit for perf. */
+	/*
+	 * Always add "C" bit for perf. Memory coherence is always enabled
+	 */
+	ori	r3,r3,HPTE_R_C | HPTE_R_M
 
 	/* We eventually do the icache sync here (maybe inline that
 	 * code rather than call a C function...)
@@ -795,7 +801,10 @@ END_MMU_FTR_SECTION_IFSET(MMU_FTR_1T_SEGMENT)
 	and	r0,r0,r4		/* _PAGE_RW & _PAGE_DIRTY ->r0 bit 30*/
 	andc	r0,r30,r0		/* r0 = pte & ~r0 */
 	rlwimi	r3,r0,32-1,31,31	/* Insert result into PP lsb */
-	ori	r3,r3,HPTE_R_C		/* Always add "C" bit for perf. */
+	/*
+	 * Always add "C" bit for perf. Memory coherence is always enabled
+	 */
+	ori	r3,r3,HPTE_R_C | HPTE_R_M
 
 	/* We eventually do the icache sync here (maybe inline that
 	 * code rather than call a C function...)
diff --git a/arch/powerpc/mm/hash_utils_64.c b/arch/powerpc/mm/hash_utils_64.c
index bde8b55..fb176e9 100644
--- a/arch/powerpc/mm/hash_utils_64.c
+++ b/arch/powerpc/mm/hash_utils_64.c
@@ -169,9 +169,10 @@ static unsigned long htab_convert_pte_flags(unsigned long pteflags)
 	if ((pteflags & _PAGE_USER) && !((pteflags & _PAGE_RW) &&
 					 (pteflags & _PAGE_DIRTY)))
 		rflags |= 1;
-
-	/* Always add C */
-	return rflags | HPTE_R_C;
+	/*
+	 * Always add "C" bit for perf. Memory coherence is always enabled
+	 */
+	return rflags | HPTE_R_C | HPTE_R_M;
 }
 
 int htab_bolt_mapping(unsigned long vstart, unsigned long vend,
diff --git a/arch/powerpc/mm/hugepage-hash64.c b/arch/powerpc/mm/hugepage-hash64.c
index 34de9e0..826893f 100644
--- a/arch/powerpc/mm/hugepage-hash64.c
+++ b/arch/powerpc/mm/hugepage-hash64.c
@@ -127,7 +127,11 @@ repeat:
 
 		/* Add in WIMG bits */
 		rflags |= (new_pmd & (_PAGE_WRITETHRU | _PAGE_NO_CACHE |
-				      _PAGE_COHERENT | _PAGE_GUARDED));
+				      _PAGE_GUARDED));
+		/*
+		 * enable the memory coherence always
+		 */
+		rflags |= HPTE_R_M;
 
 		/* Insert into the hash table, primary slot */
 		slot = ppc_md.hpte_insert(hpte_group, vpn, pa, rflags, 0,
diff --git a/arch/powerpc/mm/hugetlbpage-hash64.c b/arch/powerpc/mm/hugetlbpage-hash64.c
index 0b7fb67..a5bcf93 100644
--- a/arch/powerpc/mm/hugetlbpage-hash64.c
+++ b/arch/powerpc/mm/hugetlbpage-hash64.c
@@ -99,6 +99,10 @@ int __hash_page_huge(unsigned long ea, unsigned long access, unsigned long vsid,
 		/* Add in WIMG bits */
 		rflags |= (new_pte & (_PAGE_WRITETHRU | _PAGE_NO_CACHE |
 				      _PAGE_COHERENT | _PAGE_GUARDED));
+		/*
+		 * enable the memory coherence always
+		 */
+		rflags |= HPTE_R_M;
 
 		slot = hpte_insert_repeating(hash, vpn, pa, rflags, 0,
 					     mmu_psize, ssize);
-- 
1.8.3.2

^ permalink raw reply related	[flat|nested] 10+ messages in thread

* [RFC PATCH 3/9] mm: Move change_prot_numa outside CONFIG_ARCH_USES_NUMA_PROT_NONE
  2013-10-22 11:28 [RFC PATCH 0/9] powerpc: mm: Numa faults support for ppc64 Aneesh Kumar K.V
  2013-10-22 11:28 ` [RFC PATCH 1/9] powerpc: Use HPTE constants when updating hpte bits Aneesh Kumar K.V
  2013-10-22 11:28 ` [RFC PATCH 2/9] powerpc: Free up _PAGE_COHERENCE for numa fault use later Aneesh Kumar K.V
@ 2013-10-22 11:28 ` Aneesh Kumar K.V
  2013-10-22 11:28 ` [RFC PATCH 4/9] powerpc: mm: Only check for _PAGE_PRESENT in set_pte/pmd functions Aneesh Kumar K.V
                   ` (5 subsequent siblings)
  8 siblings, 0 replies; 10+ messages in thread
From: Aneesh Kumar K.V @ 2013-10-22 11:28 UTC (permalink / raw)
  To: benh, paulus, linux-mm; +Cc: linuxppc-dev, Aneesh Kumar K.V

From: "Aneesh Kumar K.V" <aneesh.kumar@linux.vnet.ibm.com>

change_prot_numa should work even if _PAGE_NUMA != _PAGE_PROTNONE.
On archs like ppc64 that don't use _PAGE_PROTNONE and that also have
a separate hardware page table outside the Linux page table, we just
need to make sure that change_prot_numa flushes the hardware page
table entry so that the next access to the page results in a numa
fault.

Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
---
 include/linux/mm.h | 3 ---
 mm/mempolicy.c     | 9 ---------
 2 files changed, 12 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index 8b6e55e..5ab0e22 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -1668,11 +1668,8 @@ static inline pgprot_t vm_get_page_prot(unsigned long vm_flags)
 }
 #endif
 
-#ifdef CONFIG_ARCH_USES_NUMA_PROT_NONE
 unsigned long change_prot_numa(struct vm_area_struct *vma,
 			unsigned long start, unsigned long end);
-#endif
-
 struct vm_area_struct *find_extend_vma(struct mm_struct *, unsigned long addr);
 int remap_pfn_range(struct vm_area_struct *, unsigned long addr,
 			unsigned long pfn, unsigned long size, pgprot_t);
diff --git a/mm/mempolicy.c b/mm/mempolicy.c
index 0472964..efb4300 100644
--- a/mm/mempolicy.c
+++ b/mm/mempolicy.c
@@ -612,7 +612,6 @@ static inline int queue_pages_pgd_range(struct vm_area_struct *vma,
 	return 0;
 }
 
-#ifdef CONFIG_ARCH_USES_NUMA_PROT_NONE
 /*
  * This is used to mark a range of virtual addresses to be inaccessible.
  * These are later cleared by a NUMA hinting fault. Depending on these
@@ -626,7 +625,6 @@ unsigned long change_prot_numa(struct vm_area_struct *vma,
 			unsigned long addr, unsigned long end)
 {
 	int nr_updated;
-	BUILD_BUG_ON(_PAGE_NUMA != _PAGE_PROTNONE);
 
 	nr_updated = change_protection(vma, addr, end, vma->vm_page_prot, 0, 1);
 	if (nr_updated)
@@ -634,13 +632,6 @@ unsigned long change_prot_numa(struct vm_area_struct *vma,
 
 	return nr_updated;
 }
-#else
-static unsigned long change_prot_numa(struct vm_area_struct *vma,
-			unsigned long addr, unsigned long end)
-{
-	return 0;
-}
-#endif /* CONFIG_ARCH_USES_NUMA_PROT_NONE */
 
 /*
  * Walk through page tables and collect pages to be migrated.
-- 
1.8.3.2

^ permalink raw reply related	[flat|nested] 10+ messages in thread

* [RFC PATCH 4/9] powerpc: mm: Only check for _PAGE_PRESENT in set_pte/pmd functions
  2013-10-22 11:28 [RFC PATCH 0/9] powerpc: mm: Numa faults support for ppc64 Aneesh Kumar K.V
                   ` (2 preceding siblings ...)
  2013-10-22 11:28 ` [RFC PATCH 3/9] mm: Move change_prot_numa outside CONFIG_ARCH_USES_NUMA_PROT_NONE Aneesh Kumar K.V
@ 2013-10-22 11:28 ` Aneesh Kumar K.V
  2013-10-22 11:28 ` [RFC PATCH 5/9] powerpc: mm: book3s: Enable _PAGE_NUMA for book3s Aneesh Kumar K.V
                   ` (4 subsequent siblings)
  8 siblings, 0 replies; 10+ messages in thread
From: Aneesh Kumar K.V @ 2013-10-22 11:28 UTC (permalink / raw)
  To: benh, paulus, linux-mm; +Cc: linuxppc-dev, Aneesh Kumar K.V

From: "Aneesh Kumar K.V" <aneesh.kumar@linux.vnet.ibm.com>

We want to make sure we don't use these functions when updating a pte
or pmd entry that has a valid hpte entry, because these functions
don't invalidate it. So limit the check to the _PAGE_PRESENT bit.
The numa fault core changes use these functions to update the
_PAGE_NUMA bit; that should be fine because when _PAGE_NUMA is set we
can be sure that no hpte entry is present.
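
The issue becomes visible once patch 5 is applied: pte_present() there also
reports _PAGE_NUMA ptes as present, so the old assert would trigger on valid
numa updates even though such ptes cannot have an hpte behind them. A rough
illustration:

	/* with patch 5 applied, for a pte just marked for a numa fault */
	pte = pte_mknuma(pte);			/* _PAGE_NUMA set, _PAGE_PRESENT cleared */

	WARN_ON(pte_present(pte));		/* would fire: _PAGE_NUMA counts as present */
	WARN_ON(pte_val(pte) & _PAGE_PRESENT);	/* stays quiet: no hpte can exist for it */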

Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
---
 arch/powerpc/mm/pgtable.c    | 2 +-
 arch/powerpc/mm/pgtable_64.c | 2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/arch/powerpc/mm/pgtable.c b/arch/powerpc/mm/pgtable.c
index edda589..10c09b6 100644
--- a/arch/powerpc/mm/pgtable.c
+++ b/arch/powerpc/mm/pgtable.c
@@ -187,7 +187,7 @@ void set_pte_at(struct mm_struct *mm, unsigned long addr, pte_t *ptep,
 		pte_t pte)
 {
 #ifdef CONFIG_DEBUG_VM
-	WARN_ON(pte_present(*ptep));
+	WARN_ON(pte_val(*ptep) & _PAGE_PRESENT);
 #endif
 	/* Note: mm->context.id might not yet have been assigned as
 	 * this context might not have been activated yet when this
diff --git a/arch/powerpc/mm/pgtable_64.c b/arch/powerpc/mm/pgtable_64.c
index 536eec72..56b7586 100644
--- a/arch/powerpc/mm/pgtable_64.c
+++ b/arch/powerpc/mm/pgtable_64.c
@@ -686,7 +686,7 @@ void set_pmd_at(struct mm_struct *mm, unsigned long addr,
 		pmd_t *pmdp, pmd_t pmd)
 {
 #ifdef CONFIG_DEBUG_VM
-	WARN_ON(!pmd_none(*pmdp));
+	WARN_ON(pmd_val(*pmdp) & _PAGE_PRESENT);
 	assert_spin_locked(&mm->page_table_lock);
 	WARN_ON(!pmd_trans_huge(pmd));
 #endif
-- 
1.8.3.2

^ permalink raw reply related	[flat|nested] 10+ messages in thread

* [RFC PATCH 5/9] powerpc: mm: book3s: Enable _PAGE_NUMA for book3s
  2013-10-22 11:28 [RFC PATCH 0/9] powerpc: mm: Numa faults support for ppc64 Aneesh Kumar K.V
                   ` (3 preceding siblings ...)
  2013-10-22 11:28 ` [RFC PATCH 4/9] powerpc: mm: Only check for _PAGE_PRESENT in set_pte/pmd functions Aneesh Kumar K.V
@ 2013-10-22 11:28 ` Aneesh Kumar K.V
  2013-10-22 11:28 ` [RFC PATCH 6/9] powerpc: mm: book3s: Disable hugepaged pmd format " Aneesh Kumar K.V
                   ` (3 subsequent siblings)
  8 siblings, 0 replies; 10+ messages in thread
From: Aneesh Kumar K.V @ 2013-10-22 11:28 UTC (permalink / raw)
  To: benh, paulus, linux-mm; +Cc: linuxppc-dev, Aneesh Kumar K.V

From: "Aneesh Kumar K.V" <aneesh.kumar@linux.vnet.ibm.com>

We steal the _PAGE_COHERENCE bit and use it to indicate NUMA ptes.
This patch still disables numa hinting via pmd entries; that requires
further changes to the pmd entry format, which are done in later
patches.
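
With these helpers the fault path on hash64 becomes, roughly: hash_page()
sees _PAGE_PRESENT clear and bails out, the generic fault handler then
notices pte_numa() and hands the fault to do_numa_page(), which records the
access and makes the pte normal again. A simplified view of the generic side:

	/* mm/memory.c pte fault dispatch (simplified) */
	if (pte_numa(entry))
		return do_numa_page(mm, vma, address, entry, pte, pmd);

	/* ... and do_numa_page() eventually undoes pte_mknuma(): */
	pte = pte_mknonnuma(pte);
	set_pte_at(mm, addr, ptep, pte);
	update_mmu_cache(vma, addr, ptep);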

Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
---
 arch/powerpc/include/asm/pgtable.h     | 66 +++++++++++++++++++++++++++++++++-
 arch/powerpc/include/asm/pte-hash64.h  |  6 ++++
 arch/powerpc/platforms/Kconfig.cputype |  1 +
 3 files changed, 72 insertions(+), 1 deletion(-)

diff --git a/arch/powerpc/include/asm/pgtable.h b/arch/powerpc/include/asm/pgtable.h
index 7d6eacf..9d87125 100644
--- a/arch/powerpc/include/asm/pgtable.h
+++ b/arch/powerpc/include/asm/pgtable.h
@@ -3,6 +3,7 @@
 #ifdef __KERNEL__
 
 #ifndef __ASSEMBLY__
+#include <linux/mmdebug.h>
 #include <asm/processor.h>		/* For TASK_SIZE */
 #include <asm/mmu.h>
 #include <asm/page.h>
@@ -33,10 +34,73 @@ static inline int pte_dirty(pte_t pte)		{ return pte_val(pte) & _PAGE_DIRTY; }
 static inline int pte_young(pte_t pte)		{ return pte_val(pte) & _PAGE_ACCESSED; }
 static inline int pte_file(pte_t pte)		{ return pte_val(pte) & _PAGE_FILE; }
 static inline int pte_special(pte_t pte)	{ return pte_val(pte) & _PAGE_SPECIAL; }
-static inline int pte_present(pte_t pte)	{ return pte_val(pte) & _PAGE_PRESENT; }
 static inline int pte_none(pte_t pte)		{ return (pte_val(pte) & ~_PTE_NONE_MASK) == 0; }
 static inline pgprot_t pte_pgprot(pte_t pte)	{ return __pgprot(pte_val(pte) & PAGE_PROT_BITS); }
 
+#ifdef CONFIG_NUMA_BALANCING
+
+static inline int pte_present(pte_t pte)
+{
+	return pte_val(pte) & (_PAGE_PRESENT | _PAGE_NUMA);
+}
+
+#define pte_numa pte_numa
+static inline int pte_numa(pte_t pte)
+{
+	return (pte_val(pte) &
+		(_PAGE_NUMA|_PAGE_PRESENT)) == _PAGE_NUMA;
+}
+
+#define pte_mknonnuma pte_mknonnuma
+static inline pte_t pte_mknonnuma(pte_t pte)
+{
+	pte_val(pte) &= ~_PAGE_NUMA;
+	pte_val(pte) |=  _PAGE_PRESENT | _PAGE_ACCESSED;
+	return pte;
+}
+
+#define pte_mknuma pte_mknuma
+static inline pte_t pte_mknuma(pte_t pte)
+{
+	/*
+	 * We should not set _PAGE_NUMA on non present ptes. Also clear the
+	 * present bit so that hash_page will return 1 and we collect this
+	 * as numa fault.
+	 */
+	if (pte_present(pte)) {
+		pte_val(pte) |= _PAGE_NUMA;
+		pte_val(pte) &= ~_PAGE_PRESENT;
+	} else
+		VM_BUG_ON(1);
+	return pte;
+}
+
+#define pmd_numa pmd_numa
+static inline int pmd_numa(pmd_t pmd)
+{
+	return 0;
+}
+
+#define pmd_mknonnuma pmd_mknonnuma
+static inline pmd_t pmd_mknonnuma(pmd_t pmd)
+{
+	return pmd;
+}
+
+#define pmd_mknuma pmd_mknuma
+static inline pmd_t pmd_mknuma(pmd_t pmd)
+{
+	return pmd;
+}
+
+# else
+
+static inline int pte_present(pte_t pte)
+{
+	return pte_val(pte) & _PAGE_PRESENT;
+}
+#endif /* CONFIG_NUMA_BALANCING */
+
 /* Conversion functions: convert a page and protection to a page entry,
  * and a page entry and page directory to the page they refer to.
  *
diff --git a/arch/powerpc/include/asm/pte-hash64.h b/arch/powerpc/include/asm/pte-hash64.h
index 55aea0c..2505d8e 100644
--- a/arch/powerpc/include/asm/pte-hash64.h
+++ b/arch/powerpc/include/asm/pte-hash64.h
@@ -27,6 +27,12 @@
 #define _PAGE_RW		0x0200 /* software: user write access allowed */
 #define _PAGE_BUSY		0x0800 /* software: PTE & hash are busy */
 
+/*
+ * Used for tracking numa faults
+ */
+#define _PAGE_NUMA	0x00000010 /* Gather numa placement stats */
+
+
 /* No separate kernel read-only */
 #define _PAGE_KERNEL_RW		(_PAGE_RW | _PAGE_DIRTY) /* user access blocked by key */
 #define _PAGE_KERNEL_RO		 _PAGE_KERNEL_RW
diff --git a/arch/powerpc/platforms/Kconfig.cputype b/arch/powerpc/platforms/Kconfig.cputype
index 6704e2e..c9d6223 100644
--- a/arch/powerpc/platforms/Kconfig.cputype
+++ b/arch/powerpc/platforms/Kconfig.cputype
@@ -72,6 +72,7 @@ config PPC_BOOK3S_64
 	select PPC_HAVE_PMU_SUPPORT
 	select SYS_SUPPORTS_HUGETLBFS
 	select HAVE_ARCH_TRANSPARENT_HUGEPAGE if PPC_64K_PAGES
+	select ARCH_SUPPORTS_NUMA_BALANCING
 
 config PPC_BOOK3E_64
 	bool "Embedded processors"
-- 
1.8.3.2

^ permalink raw reply related	[flat|nested] 10+ messages in thread

* [RFC PATCH 6/9] powerpc: mm: book3s: Disable hugepaged pmd format for book3s
  2013-10-22 11:28 [RFC PATCH 0/9] powerpc: mm: Numa faults support for ppc64 Aneesh Kumar K.V
                   ` (4 preceding siblings ...)
  2013-10-22 11:28 ` [RFC PATCH 5/9] powerpc: mm: book3s: Enable _PAGE_NUMA for book3s Aneesh Kumar K.V
@ 2013-10-22 11:28 ` Aneesh Kumar K.V
  2013-10-22 11:28 ` [RFC PATCH 7/9] mm: numafaults: Use change_pmd_protnuma for updating _PAGE_NUMA for regular pmds Aneesh Kumar K.V
                   ` (2 subsequent siblings)
  8 siblings, 0 replies; 10+ messages in thread
From: Aneesh Kumar K.V @ 2013-10-22 11:28 UTC (permalink / raw)
  To: benh, paulus, linux-mm; +Cc: linuxppc-dev, Aneesh Kumar K.V

From: "Aneesh Kumar K.V" <aneesh.kumar@linux.vnet.ibm.com>

After commit e2b3d202d1dba8f3546ed28224ce485bc50010be we have the
following possible formats for a pmd entry:

(1) invalid (all zeroes)
(2) pointer to next table, as normal; bottom 6 bits == 0
(3) leaf pte for huge page, bottom two bits != 00
(4) hugepd pointer, bottom two bits == 00, next 4 bits indicate size of table

On book3s we don't really use format (4). For NUMA balancing we need to
tag pmd entries that point to the next table with _PAGE_NUMA, for
performance reasons (see commit 9532fec118d485ea37ab6e3ea372d68cd8b4cd0d).
This patch enables that by disabling hugepd support for book3s when
NUMA_BALANCING is enabled. Ideally we want to get rid of the hugepd pointer
format completely.
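
The clash is easy to see from the formats above: a pmd that points to a PTE
page has its bottom 6 bits clear, but once numa balancing tags it with
_PAGE_NUMA (0x10) it starts to satisfy the old hugepd_ok() test. A sketch of
the misclassification this patch avoids:

	/* a normal pmd pointing to a PTE page (format 2): bottom 6 bits are 0 */
	pmd_val(pmd) |= _PAGE_NUMA;	/* numa balancing tags it with bit 0x10 */

	/* the old hugepd_ok() test now matches: bottom two bits are still 00 and
	 * the "size of table" bits are non-zero, so this pmd would wrongly be
	 * treated as a hugepd pointer */
	((pmd_val(pmd) & 0x3) == 0x0) && ((pmd_val(pmd) & HUGEPD_SHIFT_MASK) != 0);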

Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
---
 arch/powerpc/include/asm/page.h | 11 +++++++++++
 arch/powerpc/mm/hugetlbpage.c   |  8 +++++++-
 2 files changed, 18 insertions(+), 1 deletion(-)

diff --git a/arch/powerpc/include/asm/page.h b/arch/powerpc/include/asm/page.h
index b9f4262..791ab56 100644
--- a/arch/powerpc/include/asm/page.h
+++ b/arch/powerpc/include/asm/page.h
@@ -369,11 +369,22 @@ typedef struct { signed long pd; } hugepd_t;
 #ifdef CONFIG_PPC_BOOK3S_64
 static inline int hugepd_ok(hugepd_t hpd)
 {
+#ifdef CONFIG_NUMA_BALANCING
+	/*
+	 * In order to enable batch handling of pte numa faults, Numa balancing
+	 * code use the _PAGE_NUMA bit even on pmd that is pointing to PTE PAGE.
+	 * 9532fec118d485ea37ab6e3ea372d68cd8b4cd0d. After commit
+	 * e2b3d202d1dba8f3546ed28224ce485bc50010be we really don't need to
+	 * support hugepd for ppc64.
+	 */
+	return 0;
+#else
 	/*
 	 * hugepd pointer, bottom two bits == 00 and next 4 bits
 	 * indicate size of table
 	 */
 	return (((hpd.pd & 0x3) == 0x0) && ((hpd.pd & HUGEPD_SHIFT_MASK) != 0));
+#endif
 }
 #else
 static inline int hugepd_ok(hugepd_t hpd)
diff --git a/arch/powerpc/mm/hugetlbpage.c b/arch/powerpc/mm/hugetlbpage.c
index d67db4b..71bd214 100644
--- a/arch/powerpc/mm/hugetlbpage.c
+++ b/arch/powerpc/mm/hugetlbpage.c
@@ -235,8 +235,14 @@ pte_t *huge_pte_alloc(struct mm_struct *mm, unsigned long addr, unsigned long sz
 	if (!hpdp)
 		return NULL;
 
+#ifdef CONFIG_NUMA_BALANCING
+	/*
+	 * We cannot support hugepd format with numa balancing support
+	 * enabled.
+	 */
+	return NULL;
+#endif
 	BUG_ON(!hugepd_none(*hpdp) && !hugepd_ok(*hpdp));
-
 	if (hugepd_none(*hpdp) && __hugepte_alloc(mm, hpdp, addr, pdshift, pshift))
 		return NULL;
 
-- 
1.8.3.2

^ permalink raw reply related	[flat|nested] 10+ messages in thread

* [RFC PATCH 7/9] mm: numafaults: Use change_pmd_protnuma for updating _PAGE_NUMA for regular pmds
  2013-10-22 11:28 [RFC PATCH 0/9] powerpc: mm: Numa faults support for ppc64 Aneesh Kumar K.V
                   ` (5 preceding siblings ...)
  2013-10-22 11:28 ` [RFC PATCH 6/9] powerpc: mm: book3s: Disable hugepaged pmd format " Aneesh Kumar K.V
@ 2013-10-22 11:28 ` Aneesh Kumar K.V
  2013-10-22 11:28 ` [RFC PATCH 8/9] powerpc: mm: Support setting _PAGE_NUMA bit on pmd entry which are pointer to PTE page Aneesh Kumar K.V
  2013-10-22 11:28 ` [RFC PATCH 9/9] powerpc: mm: Enable numa faulting for hugepages Aneesh Kumar K.V
  8 siblings, 0 replies; 10+ messages in thread
From: Aneesh Kumar K.V @ 2013-10-22 11:28 UTC (permalink / raw)
  To: benh, paulus, linux-mm; +Cc: linuxppc-dev, Aneesh Kumar K.V

From: "Aneesh Kumar K.V" <aneesh.kumar@linux.vnet.ibm.com>

Archs like ppc64 have a different layout for pmd entries that point to a
PTE page, hence add a separate function for modifying them.

Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
---
 arch/powerpc/include/asm/pgtable.h | 17 +++++++++++++++++
 include/asm-generic/pgtable.h      | 20 ++++++++++++++++++++
 mm/memory.c                        |  2 +-
 mm/mprotect.c                      | 24 ++++++------------------
 4 files changed, 44 insertions(+), 19 deletions(-)

diff --git a/arch/powerpc/include/asm/pgtable.h b/arch/powerpc/include/asm/pgtable.h
index 9d87125..67ea8fb 100644
--- a/arch/powerpc/include/asm/pgtable.h
+++ b/arch/powerpc/include/asm/pgtable.h
@@ -75,6 +75,23 @@ static inline pte_t pte_mknuma(pte_t pte)
 	return pte;
 }
 
+#define change_pmd_protnuma change_pmd_protnuma
+static inline void change_pmd_protnuma(struct mm_struct *mm, unsigned long addr,
+				       pmd_t *pmdp, int prot_numa)
+{
+	/*
+	 * We don't track the _PAGE_PRESENT bit here
+	 */
+	unsigned long val = pmd_val(*pmdp);
+
+	if (prot_numa)
+		val |= _PAGE_NUMA;
+	else
+		val &= ~_PAGE_NUMA;
+	pmd_set(pmdp, val);
+}
+
+
 #define pmd_numa pmd_numa
 static inline int pmd_numa(pmd_t pmd)
 {
diff --git a/include/asm-generic/pgtable.h b/include/asm-generic/pgtable.h
index f330d28..568a8c4 100644
--- a/include/asm-generic/pgtable.h
+++ b/include/asm-generic/pgtable.h
@@ -697,6 +697,18 @@ static inline pmd_t pmd_mknuma(pmd_t pmd)
 	return pmd_clear_flags(pmd, _PAGE_PRESENT);
 }
 #endif
+
+#ifndef change_pmd_protnuma
+static inline void change_pmd_protnuma(struct mm_struct *mm, unsigned long addr,
+				       pmd_t *pmd, int prot_numa)
+{
+	if (prot_numa)
+		set_pmd_at(mm, addr & PMD_MASK, pmd, pmd_mknuma(*pmd));
+	else
+		set_pmd_at(mm, addr & PMD_MASK, pmd, pmd_mknonnuma(*pmd));
+}
+
+#endif
 #else
 extern int pte_numa(pte_t pte);
 extern int pmd_numa(pmd_t pmd);
@@ -704,6 +716,8 @@ extern pte_t pte_mknonnuma(pte_t pte);
 extern pmd_t pmd_mknonnuma(pmd_t pmd);
 extern pte_t pte_mknuma(pte_t pte);
 extern pmd_t pmd_mknuma(pmd_t pmd);
+extern void change_pmd_protnuma(struct mm_struct *mm, unsigned long addr,
+				pmd_t *pmd, int prot_numa);
 #endif /* CONFIG_ARCH_USES_NUMA_PROT_NONE */
 #else
 static inline int pmd_numa(pmd_t pmd)
@@ -735,6 +749,12 @@ static inline pmd_t pmd_mknuma(pmd_t pmd)
 {
 	return pmd;
 }
+
+static inline void change_pmd_protnuma(struct mm_struct *mm, unsigned long addr,
+				       pmd_t *pmd, int prot_numa)
+{
+	BUG();
+}
 #endif /* CONFIG_NUMA_BALANCING */
 
 #endif /* CONFIG_MMU */
diff --git a/mm/memory.c b/mm/memory.c
index ca00039..e930e50 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -3605,7 +3605,7 @@ static int do_pmd_numa_page(struct mm_struct *mm, struct vm_area_struct *vma,
 	spin_lock(&mm->page_table_lock);
 	pmd = *pmdp;
 	if (pmd_numa(pmd)) {
-		set_pmd_at(mm, _addr, pmdp, pmd_mknonnuma(pmd));
+		change_pmd_protnuma(mm, _addr, pmdp, 0);
 		numa = true;
 	}
 	spin_unlock(&mm->page_table_lock);
diff --git a/mm/mprotect.c b/mm/mprotect.c
index 94722a4..88de575 100644
--- a/mm/mprotect.c
+++ b/mm/mprotect.c
@@ -112,22 +112,6 @@ static unsigned long change_pte_range(struct vm_area_struct *vma, pmd_t *pmd,
 	return pages;
 }
 
-#ifdef CONFIG_NUMA_BALANCING
-static inline void change_pmd_protnuma(struct mm_struct *mm, unsigned long addr,
-				       pmd_t *pmd)
-{
-	spin_lock(&mm->page_table_lock);
-	set_pmd_at(mm, addr & PMD_MASK, pmd, pmd_mknuma(*pmd));
-	spin_unlock(&mm->page_table_lock);
-}
-#else
-static inline void change_pmd_protnuma(struct mm_struct *mm, unsigned long addr,
-				       pmd_t *pmd)
-{
-	BUG();
-}
-#endif /* CONFIG_NUMA_BALANCING */
-
 static inline unsigned long change_pmd_range(struct vm_area_struct *vma,
 		pud_t *pud, unsigned long addr, unsigned long end,
 		pgprot_t newprot, int dirty_accountable, int prot_numa)
@@ -161,8 +145,12 @@ static inline unsigned long change_pmd_range(struct vm_area_struct *vma,
 		 * node. This allows a regular PMD to be handled as one fault
 		 * and effectively batches the taking of the PTL
 		 */
-		if (prot_numa && all_same_node)
-			change_pmd_protnuma(vma->vm_mm, addr, pmd);
+		if (prot_numa && all_same_node) {
+			spin_lock(&vma->vm_mm->page_table_lock);
+			change_pmd_protnuma(vma->vm_mm, addr, pmd, 1);
+			spin_unlock(&vma->vm_mm->page_table_lock);
+
+		}
 	} while (pmd++, addr = next, addr != end);
 
 	return pages;
-- 
1.8.3.2

^ permalink raw reply related	[flat|nested] 10+ messages in thread

* [RFC PATCH 8/9] powerpc: mm: Support setting _PAGE_NUMA bit on pmd entry which are pointer to PTE page
  2013-10-22 11:28 [RFC PATCH 0/9] powerpc: mm: Numa faults support for ppc64 Aneesh Kumar K.V
                   ` (6 preceding siblings ...)
  2013-10-22 11:28 ` [RFC PATCH 7/9] mm: numafaults: Use change_pmd_protnuma for updating _PAGE_NUMA for regular pmds Aneesh Kumar K.V
@ 2013-10-22 11:28 ` Aneesh Kumar K.V
  2013-10-22 11:28 ` [RFC PATCH 9/9] powerpc: mm: Enable numa faulting for hugepages Aneesh Kumar K.V
  8 siblings, 0 replies; 10+ messages in thread
From: Aneesh Kumar K.V @ 2013-10-22 11:28 UTC (permalink / raw)
  To: benh, paulus, linux-mm; +Cc: linuxppc-dev, Aneesh Kumar K.V

From: "Aneesh Kumar K.V" <aneesh.kumar@linux.vnet.ibm.com>

Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
---
 arch/powerpc/include/asm/pgtable-ppc64.h | 18 ++++++++++++++++--
 1 file changed, 16 insertions(+), 2 deletions(-)

diff --git a/arch/powerpc/include/asm/pgtable-ppc64.h b/arch/powerpc/include/asm/pgtable-ppc64.h
index 46db094..f828944 100644
--- a/arch/powerpc/include/asm/pgtable-ppc64.h
+++ b/arch/powerpc/include/asm/pgtable-ppc64.h
@@ -150,8 +150,22 @@
 
 #define pmd_set(pmdp, pmdval) 	(pmd_val(*(pmdp)) = (pmdval))
 #define pmd_none(pmd)		(!pmd_val(pmd))
-#define	pmd_bad(pmd)		(!is_kernel_addr(pmd_val(pmd)) \
-				 || (pmd_val(pmd) & PMD_BAD_BITS))
+
+static inline int pmd_bad(pmd_t pmd)
+{
+#ifdef CONFIG_NUMA_BALANCING
+	/*
+	 * For numa balancing we can have this set
+	 */
+	if (pmd_val(pmd) & _PAGE_NUMA)
+		return 0;
+#endif
+	if (!is_kernel_addr(pmd_val(pmd)) ||
+	    (pmd_val(pmd) & PMD_BAD_BITS))
+		return 1;
+	return 0;
+}
+
 #define	pmd_present(pmd)	(pmd_val(pmd) != 0)
 #define	pmd_clear(pmdp)		(pmd_val(*(pmdp)) = 0)
 #define pmd_page_vaddr(pmd)	(pmd_val(pmd) & ~PMD_MASKED_BITS)
-- 
1.8.3.2

^ permalink raw reply related	[flat|nested] 10+ messages in thread

* [RFC PATCH 9/9] powerpc: mm: Enable numa faulting for hugepages
  2013-10-22 11:28 [RFC PATCH 0/9] powerpc: mm: Numa faults support for ppc64 Aneesh Kumar K.V
                   ` (7 preceding siblings ...)
  2013-10-22 11:28 ` [RFC PATCH 8/9] powerpc: mm: Support setting _PAGE_NUMA bit on pmd entry which are pointer to PTE page Aneesh Kumar K.V
@ 2013-10-22 11:28 ` Aneesh Kumar K.V
  8 siblings, 0 replies; 10+ messages in thread
From: Aneesh Kumar K.V @ 2013-10-22 11:28 UTC (permalink / raw)
  To: benh, paulus, linux-mm; +Cc: linuxppc-dev, Aneesh Kumar K.V

From: "Aneesh Kumar K.V" <aneesh.kumar@linux.vnet.ibm.com>

Provide numa-related functions for updating pmd entries.

Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
---
 arch/powerpc/include/asm/pgtable.h | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/arch/powerpc/include/asm/pgtable.h b/arch/powerpc/include/asm/pgtable.h
index 67ea8fb..aa3add7 100644
--- a/arch/powerpc/include/asm/pgtable.h
+++ b/arch/powerpc/include/asm/pgtable.h
@@ -95,19 +95,19 @@ static inline void change_pmd_protnuma(struct mm_struct *mm, unsigned long addr,
 #define pmd_numa pmd_numa
 static inline int pmd_numa(pmd_t pmd)
 {
-	return 0;
+	return pte_numa(pmd_pte(pmd));
 }
 
 #define pmd_mknonnuma pmd_mknonnuma
 static inline pmd_t pmd_mknonnuma(pmd_t pmd)
 {
-	return pmd;
+	return pte_pmd(pte_mknonnuma(pmd_pte(pmd)));
 }
 
 #define pmd_mknuma pmd_mknuma
 static inline pmd_t pmd_mknuma(pmd_t pmd)
 {
-	return pmd;
+	return pte_pmd(pte_mknuma(pmd_pte(pmd)));
 }
 
 # else
-- 
1.8.3.2

^ permalink raw reply related	[flat|nested] 10+ messages in thread


Thread overview: 10+ messages
2013-10-22 11:28 [RFC PATCH 0/9] powerpc: mm: Numa faults support for ppc64 Aneesh Kumar K.V
2013-10-22 11:28 ` [RFC PATCH 1/9] powerpc: Use HPTE constants when updating hpte bits Aneesh Kumar K.V
2013-10-22 11:28 ` [RFC PATCH 2/9] powerpc: Free up _PAGE_COHERENCE for numa fault use later Aneesh Kumar K.V
2013-10-22 11:28 ` [RFC PATCH 3/9] mm: Move change_prot_numa outside CONFIG_ARCH_USES_NUMA_PROT_NONE Aneesh Kumar K.V
2013-10-22 11:28 ` [RFC PATCH 4/9] powerpc: mm: Only check for _PAGE_PRESENT in set_pte/pmd functions Aneesh Kumar K.V
2013-10-22 11:28 ` [RFC PATCH 5/9] powerpc: mm: book3s: Enable _PAGE_NUMA for book3s Aneesh Kumar K.V
2013-10-22 11:28 ` [RFC PATCH 6/9] powerpc: mm: book3s: Disable hugepaged pmd format " Aneesh Kumar K.V
2013-10-22 11:28 ` [RFC PATCH 7/9] mm: numafaults: Use change_pmd_protnuma for updating _PAGE_NUMA for regular pmds Aneesh Kumar K.V
2013-10-22 11:28 ` [RFC PATCH 8/9] powerpc: mm: Support setting _PAGE_NUMA bit on pmd entry which are pointer to PTE page Aneesh Kumar K.V
2013-10-22 11:28 ` [RFC PATCH 9/9] powerpc: mm: Enable numa faulting for hugepages Aneesh Kumar K.V
