linuxppc-dev.lists.ozlabs.org archive mirror
 help / color / mirror / Atom feed
* [PATCH -V5 00/25] THP support for PPC64
@ 2013-04-04  5:57 Aneesh Kumar K.V
  2013-04-04  5:57 ` [PATCH -V5 01/25] powerpc: Use signed formatting when printing error Aneesh Kumar K.V
                   ` (27 more replies)
  0 siblings, 28 replies; 73+ messages in thread
From: Aneesh Kumar K.V @ 2013-04-04  5:57 UTC (permalink / raw)
  To: benh, paulus; +Cc: linux-mm, linuxppc-dev

Hi,

This patchset adds transparent hugepage support for PPC64.

TODO:
* hash preload support in update_mmu_cache_pmd (we don't do that for hugetlb)

Some numbers:

The latency measurements code from Anton  found at
http://ozlabs.org/~anton/junkcode/latency2001.c

THP disabled 64K page size
------------------------
[root@llmp24l02 ~]# ./latency2001 8G
 8589934592    731.73 cycles    205.77 ns
[root@llmp24l02 ~]# ./latency2001 8G
 8589934592    743.39 cycles    209.05 ns
[root@llmp24l02 ~]#

THP disabled large page via hugetlbfs
-------------------------------------
[root@llmp24l02 ~]# ./latency2001  -l 8G
 8589934592    416.09 cycles    117.01 ns
[root@llmp24l02 ~]# ./latency2001  -l 8G
 8589934592    415.74 cycles    116.91 ns

THP enabled 64K page size.
----------------
[root@llmp24l02 ~]# ./latency2001 8G
 8589934592    405.07 cycles    113.91 ns
[root@llmp24l02 ~]# ./latency2001 8G
 8589934592    411.82 cycles    115.81 ns
[root@llmp24l02 ~]#

We are close to hugetlbfs in latency and we can achieve this with zero
config/page reservation. Most of the allocations above are fault allocated.

Another test that does 50000000 random access over 1GB area goes from
2.65 seconds to 1.07 seconds with this patchset.

split_huge_page impact:
---------------------
To look at the performance impact of large page invalidate, I tried the below
experiment. The test involved, accessing a large contiguous region of memory
location as below

    for (i = 0; i < size; i += PAGE_SIZE)
	data[i] = i;

We wanted to access the data in sequential order so that we look at the
worst case THP performance. Accesing the data in sequential order implies
we have the Page table cached and overhead of TLB miss is as minimal as
possible. We also don't touch the entire page, because that can result in
cache evict.

After we touched the full range as above, we now call mprotect on each
of that page. A mprotect will result in a hugepage split. This should
allow us to measure the impact of hugepage split.

    for (i = 0; i < size; i += PAGE_SIZE)
	 mprotect(&data[i], PAGE_SIZE, PROT_READ);

Split hugepage impact: 
---------------------
THP enabled: 2.851561705 seconds for test completion
THP disable: 3.599146098 seconds for test completion

We are 20.7% better than non THP case even when we have all the large pages split.

Detailed output:

THP enabled:
---------------------------------------
[root@llmp24l02 ~]# cat /proc/vmstat  | grep thp
thp_fault_alloc 0
thp_fault_fallback 0
thp_collapse_alloc 0
thp_collapse_alloc_failed 0
thp_split 0
thp_zero_page_alloc 0
thp_zero_page_alloc_failed 0
[root@llmp24l02 ~]# /root/thp/tools/perf/perf stat -e page-faults,dTLB-load-misses ./split-huge-page-mpro 20G                                                                      
time taken to touch all the data in ns: 2763096913 

 Performance counter stats for './split-huge-page-mpro 20G':

             1,581 page-faults                                                 
             3,159 dTLB-load-misses                                            

       2.851561705 seconds time elapsed

[root@llmp24l02 ~]# 
[root@llmp24l02 ~]# cat /proc/vmstat  | grep thp
thp_fault_alloc 1279
thp_fault_fallback 0
thp_collapse_alloc 0
thp_collapse_alloc_failed 0
thp_split 1279
thp_zero_page_alloc 0
thp_zero_page_alloc_failed 0
[root@llmp24l02 ~]# 

    77.05%  split-huge-page  [kernel.kallsyms]     [k] .clear_user_page                        
     7.10%  split-huge-page  [kernel.kallsyms]     [k] .perf_event_mmap_ctx                    
     1.51%  split-huge-page  split-huge-page-mpro  [.] 0x0000000000000a70                      
     0.96%  split-huge-page  [unknown]             [H] 0x000000000157e3bc                      
     0.81%  split-huge-page  [kernel.kallsyms]     [k] .up_write                               
     0.76%  split-huge-page  [kernel.kallsyms]     [k] .perf_event_mmap                        
     0.76%  split-huge-page  [kernel.kallsyms]     [k] .down_write                             
     0.74%  split-huge-page  [kernel.kallsyms]     [k] .lru_add_page_tail                      
     0.61%  split-huge-page  [kernel.kallsyms]     [k] .split_huge_page                        
     0.59%  split-huge-page  [kernel.kallsyms]     [k] .change_protection                      
     0.51%  split-huge-page  [kernel.kallsyms]     [k] .release_pages                          


     0.96%  split-huge-page  [unknown]             [H] 0x000000000157e3bc                      
            |          
            |--79.44%-- reloc_start
            |          |          
            |          |--86.54%-- .__pSeries_lpar_hugepage_invalidate
            |          |          .pSeries_lpar_hugepage_invalidate
            |          |          .hpte_need_hugepage_flush
            |          |          .split_huge_page
            |          |          .__split_huge_page_pmd
            |          |          .vma_adjust
            |          |          .vma_merge
            |          |          .mprotect_fixup
            |          |          .SyS_mprotect


THP disabled:
---------------
[root@llmp24l02 ~]# echo never > /sys/kernel/mm/transparent_hugepage/enabled
[root@llmp24l02 ~]# /root/thp/tools/perf/perf stat -e page-faults,dTLB-load-misses ./split-huge-page-mpro 20G
time taken to touch all the data in ns: 3513767220 

 Performance counter stats for './split-huge-page-mpro 20G':

          3,27,726 page-faults                                                 
          3,29,654 dTLB-load-misses                                            

       3.599146098 seconds time elapsed

[root@llmp24l02 ~]#

Changes from V4:
* Fix bad page error in page_table_alloc
  BUG: Bad page state in process stream  pfn:f1a59
  page:f0000000034dc378 count:1 mapcount:0 mapping:          (null) index:0x0
  [c000000f322c77d0] [c00000000015e198] .bad_page+0xe8/0x140
  [c000000f322c7860] [c00000000015e3c4] .free_pages_prepare+0x1d4/0x1e0
  [c000000f322c7910] [c000000000160450] .free_hot_cold_page+0x50/0x230
  [c000000f322c79c0] [c00000000003ad18] .page_table_alloc+0x168/0x1c0

Changes from V3:
* PowerNV boot fixes

Change from V2:
* Change patch "powerpc: Reduce PTE table memory wastage" to use much simpler approach
  for PTE page sharing.
* Changes to handle huge pages in KVM code.
* Address other review comments

Changes from V1
* Address review comments
* More patch split
* Add batch hpte invalidate for hugepages.

Changes from RFC V2:
* Address review comments
* More code cleanup and patch split

Changes from RFC V1:
* HugeTLB fs now works
* Compile issues fixed
* rebased to v3.8
* Patch series reorded so that ppc64 cleanups and MM THP changes are moved
  early in the series. This should help in picking those patches early.

Thanks,
-aneesh

^ permalink raw reply	[flat|nested] 73+ messages in thread

* [PATCH -V5 01/25] powerpc: Use signed formatting when printing error
  2013-04-04  5:57 [PATCH -V5 00/25] THP support for PPC64 Aneesh Kumar K.V
@ 2013-04-04  5:57 ` Aneesh Kumar K.V
  2013-04-04  5:57 ` [PATCH -V5 02/25] powerpc: Save DAR and DSISR in pt_regs on MCE Aneesh Kumar K.V
                   ` (26 subsequent siblings)
  27 siblings, 0 replies; 73+ messages in thread
From: Aneesh Kumar K.V @ 2013-04-04  5:57 UTC (permalink / raw)
  To: benh, paulus; +Cc: linux-mm, linuxppc-dev, Aneesh Kumar K.V

From: "Aneesh Kumar K.V" <aneesh.kumar@linux.vnet.ibm.com>

PAPR defines these errors as negative values. So print them accordingly
for easy debugging.

Acked-by: Paul Mackerras <paulus@samba.org>
Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
---
 arch/powerpc/platforms/pseries/lpar.c |    2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/powerpc/platforms/pseries/lpar.c b/arch/powerpc/platforms/pseries/lpar.c
index 0da39fe..a77c35b 100644
--- a/arch/powerpc/platforms/pseries/lpar.c
+++ b/arch/powerpc/platforms/pseries/lpar.c
@@ -155,7 +155,7 @@ static long pSeries_lpar_hpte_insert(unsigned long hpte_group,
 	 */
 	if (unlikely(lpar_rc != H_SUCCESS)) {
 		if (!(vflags & HPTE_V_BOLTED))
-			pr_devel(" lpar err %lu\n", lpar_rc);
+			pr_devel(" lpar err %ld\n", lpar_rc);
 		return -2;
 	}
 	if (!(vflags & HPTE_V_BOLTED))
-- 
1.7.10

^ permalink raw reply related	[flat|nested] 73+ messages in thread

* [PATCH -V5 02/25] powerpc: Save DAR and DSISR in pt_regs on MCE
  2013-04-04  5:57 [PATCH -V5 00/25] THP support for PPC64 Aneesh Kumar K.V
  2013-04-04  5:57 ` [PATCH -V5 01/25] powerpc: Use signed formatting when printing error Aneesh Kumar K.V
@ 2013-04-04  5:57 ` Aneesh Kumar K.V
  2013-04-04  5:57 ` [PATCH -V5 03/25] powerpc: Don't hard code the size of pte page Aneesh Kumar K.V
                   ` (25 subsequent siblings)
  27 siblings, 0 replies; 73+ messages in thread
From: Aneesh Kumar K.V @ 2013-04-04  5:57 UTC (permalink / raw)
  To: benh, paulus; +Cc: linux-mm, linuxppc-dev, Aneesh Kumar K.V

From: "Aneesh Kumar K.V" <aneesh.kumar@linux.vnet.ibm.com>

We were not saving DAR and DSISR on MCE. Save then and also print the values
along with exception details in xmon.

Acked-by: Paul Mackerras <paulus@samba.org>
Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
---
 arch/powerpc/kernel/exceptions-64s.S |    9 +++++++++
 arch/powerpc/xmon/xmon.c             |    2 +-
 2 files changed, 10 insertions(+), 1 deletion(-)

diff --git a/arch/powerpc/kernel/exceptions-64s.S b/arch/powerpc/kernel/exceptions-64s.S
index 0e9c48c..d02e730 100644
--- a/arch/powerpc/kernel/exceptions-64s.S
+++ b/arch/powerpc/kernel/exceptions-64s.S
@@ -640,9 +640,18 @@ slb_miss_user_pseries:
 	.align	7
 	.globl machine_check_common
 machine_check_common:
+
+	mfspr	r10,SPRN_DAR
+	std	r10,PACA_EXGEN+EX_DAR(r13)
+	mfspr	r10,SPRN_DSISR
+	stw	r10,PACA_EXGEN+EX_DSISR(r13)
 	EXCEPTION_PROLOG_COMMON(0x200, PACA_EXMC)
 	FINISH_NAP
 	DISABLE_INTS
+	ld	r3,PACA_EXGEN+EX_DAR(r13)
+	lwz	r4,PACA_EXGEN+EX_DSISR(r13)
+	std	r3,_DAR(r1)
+	std	r4,_DSISR(r1)
 	bl	.save_nvgprs
 	addi	r3,r1,STACK_FRAME_OVERHEAD
 	bl	.machine_check_exception
diff --git a/arch/powerpc/xmon/xmon.c b/arch/powerpc/xmon/xmon.c
index 1f8d2f1..a72e490 100644
--- a/arch/powerpc/xmon/xmon.c
+++ b/arch/powerpc/xmon/xmon.c
@@ -1423,7 +1423,7 @@ static void excprint(struct pt_regs *fp)
 	printf("    sp: %lx\n", fp->gpr[1]);
 	printf("   msr: %lx\n", fp->msr);
 
-	if (trap == 0x300 || trap == 0x380 || trap == 0x600) {
+	if (trap == 0x300 || trap == 0x380 || trap == 0x600 || trap == 0x200) {
 		printf("   dar: %lx\n", fp->dar);
 		if (trap != 0x380)
 			printf(" dsisr: %lx\n", fp->dsisr);
-- 
1.7.10

^ permalink raw reply related	[flat|nested] 73+ messages in thread

* [PATCH -V5 03/25] powerpc: Don't hard code the size of pte page
  2013-04-04  5:57 [PATCH -V5 00/25] THP support for PPC64 Aneesh Kumar K.V
  2013-04-04  5:57 ` [PATCH -V5 01/25] powerpc: Use signed formatting when printing error Aneesh Kumar K.V
  2013-04-04  5:57 ` [PATCH -V5 02/25] powerpc: Save DAR and DSISR in pt_regs on MCE Aneesh Kumar K.V
@ 2013-04-04  5:57 ` Aneesh Kumar K.V
  2013-04-04  5:57 ` [PATCH -V5 04/25] powerpc: Reduce the PTE_INDEX_SIZE Aneesh Kumar K.V
                   ` (24 subsequent siblings)
  27 siblings, 0 replies; 73+ messages in thread
From: Aneesh Kumar K.V @ 2013-04-04  5:57 UTC (permalink / raw)
  To: benh, paulus; +Cc: linux-mm, linuxppc-dev, Aneesh Kumar K.V

From: "Aneesh Kumar K.V" <aneesh.kumar@linux.vnet.ibm.com>

USE PTRS_PER_PTE to indicate the size of pte page. To support THP,
later patches will be changing PTRS_PER_PTE value.

Acked-by: Paul Mackerras <paulus@samba.org>
Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
---
 arch/powerpc/include/asm/pgtable.h |    6 ++++++
 arch/powerpc/mm/hash_low_64.S      |    4 ++--
 2 files changed, 8 insertions(+), 2 deletions(-)

diff --git a/arch/powerpc/include/asm/pgtable.h b/arch/powerpc/include/asm/pgtable.h
index a9cbd3b..4b52726 100644
--- a/arch/powerpc/include/asm/pgtable.h
+++ b/arch/powerpc/include/asm/pgtable.h
@@ -17,6 +17,12 @@ struct mm_struct;
 #  include <asm/pgtable-ppc32.h>
 #endif
 
+/*
+ * We save the slot number & secondary bit in the second half of the
+ * PTE page. We use the 8 bytes per each pte entry.
+ */
+#define PTE_PAGE_HIDX_OFFSET (PTRS_PER_PTE * 8)
+
 #ifndef __ASSEMBLY__
 
 #include <asm/tlbflush.h>
diff --git a/arch/powerpc/mm/hash_low_64.S b/arch/powerpc/mm/hash_low_64.S
index 7443481..abdd5e2 100644
--- a/arch/powerpc/mm/hash_low_64.S
+++ b/arch/powerpc/mm/hash_low_64.S
@@ -490,7 +490,7 @@ END_FTR_SECTION(CPU_FTR_NOEXECUTE|CPU_FTR_COHERENT_ICACHE, CPU_FTR_NOEXECUTE)
 	beq	htab_inval_old_hpte
 
 	ld	r6,STK_PARAM(R6)(r1)
-	ori	r26,r6,0x8000		/* Load the hidx mask */
+	ori	r26,r6,PTE_PAGE_HIDX_OFFSET /* Load the hidx mask. */
 	ld	r26,0(r26)
 	addi	r5,r25,36		/* Check actual HPTE_SUB bit, this */
 	rldcr.	r0,r31,r5,0		/* must match pgtable.h definition */
@@ -607,7 +607,7 @@ htab_pte_insert_ok:
 	sld	r4,r4,r5
 	andc	r26,r26,r4
 	or	r26,r26,r3
-	ori	r5,r6,0x8000
+	ori	r5,r6,PTE_PAGE_HIDX_OFFSET
 	std	r26,0(r5)
 	lwsync
 	std	r30,0(r6)
-- 
1.7.10

^ permalink raw reply related	[flat|nested] 73+ messages in thread

* [PATCH -V5 04/25] powerpc: Reduce the PTE_INDEX_SIZE
  2013-04-04  5:57 [PATCH -V5 00/25] THP support for PPC64 Aneesh Kumar K.V
                   ` (2 preceding siblings ...)
  2013-04-04  5:57 ` [PATCH -V5 03/25] powerpc: Don't hard code the size of pte page Aneesh Kumar K.V
@ 2013-04-04  5:57 ` Aneesh Kumar K.V
  2013-04-11  7:10   ` David Gibson
  2013-04-04  5:57 ` [PATCH -V5 05/25] powerpc: Move the pte free routines from common header Aneesh Kumar K.V
                   ` (23 subsequent siblings)
  27 siblings, 1 reply; 73+ messages in thread
From: Aneesh Kumar K.V @ 2013-04-04  5:57 UTC (permalink / raw)
  To: benh, paulus; +Cc: linux-mm, linuxppc-dev, Aneesh Kumar K.V

From: "Aneesh Kumar K.V" <aneesh.kumar@linux.vnet.ibm.com>

This make one PMD cover 16MB range. That helps in easier implementation of THP
on power. THP core code make use of one pmd entry to track the hugepage and
the range mapped by a single pmd entry should be equal to the hugepage size
supported by the hardware.

Acked-by: Paul Mackerras <paulus@samba.org>
Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
---
 arch/powerpc/include/asm/pgtable-ppc64-64k.h |    4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/arch/powerpc/include/asm/pgtable-ppc64-64k.h b/arch/powerpc/include/asm/pgtable-ppc64-64k.h
index be4e287..3c529b4 100644
--- a/arch/powerpc/include/asm/pgtable-ppc64-64k.h
+++ b/arch/powerpc/include/asm/pgtable-ppc64-64k.h
@@ -4,10 +4,10 @@
 #include <asm-generic/pgtable-nopud.h>
 
 
-#define PTE_INDEX_SIZE  12
+#define PTE_INDEX_SIZE  8
 #define PMD_INDEX_SIZE  12
 #define PUD_INDEX_SIZE	0
-#define PGD_INDEX_SIZE  6
+#define PGD_INDEX_SIZE  10
 
 #ifndef __ASSEMBLY__
 #define PTE_TABLE_SIZE	(sizeof(real_pte_t) << PTE_INDEX_SIZE)
-- 
1.7.10

^ permalink raw reply related	[flat|nested] 73+ messages in thread

* [PATCH -V5 05/25] powerpc: Move the pte free routines from common header
  2013-04-04  5:57 [PATCH -V5 00/25] THP support for PPC64 Aneesh Kumar K.V
                   ` (3 preceding siblings ...)
  2013-04-04  5:57 ` [PATCH -V5 04/25] powerpc: Reduce the PTE_INDEX_SIZE Aneesh Kumar K.V
@ 2013-04-04  5:57 ` Aneesh Kumar K.V
  2013-04-04  5:57 ` [PATCH -V5 06/25] powerpc: Reduce PTE table memory wastage Aneesh Kumar K.V
                   ` (22 subsequent siblings)
  27 siblings, 0 replies; 73+ messages in thread
From: Aneesh Kumar K.V @ 2013-04-04  5:57 UTC (permalink / raw)
  To: benh, paulus; +Cc: linux-mm, linuxppc-dev, Aneesh Kumar K.V

From: "Aneesh Kumar K.V" <aneesh.kumar@linux.vnet.ibm.com>

This patch moves the common code to 32/64 bit headers and also duplicate
4K_PAGES and 64K_PAGES section. We will later change the 64 bit 64K_PAGES
version to support smaller PTE fragments. The patch doesn't introduce
any functional changes.

Acked-by: Paul Mackerras <paulus@samba.org>
Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
---
 arch/powerpc/include/asm/pgalloc-32.h |   45 ++++++++++
 arch/powerpc/include/asm/pgalloc-64.h |  157 ++++++++++++++++++++++++++++++---
 arch/powerpc/include/asm/pgalloc.h    |   46 +---------
 3 files changed, 189 insertions(+), 59 deletions(-)

diff --git a/arch/powerpc/include/asm/pgalloc-32.h b/arch/powerpc/include/asm/pgalloc-32.h
index 580cf73..27b2386 100644
--- a/arch/powerpc/include/asm/pgalloc-32.h
+++ b/arch/powerpc/include/asm/pgalloc-32.h
@@ -37,6 +37,17 @@ extern void pgd_free(struct mm_struct *mm, pgd_t *pgd);
 extern pte_t *pte_alloc_one_kernel(struct mm_struct *mm, unsigned long addr);
 extern pgtable_t pte_alloc_one(struct mm_struct *mm, unsigned long addr);
 
+static inline void pte_free_kernel(struct mm_struct *mm, pte_t *pte)
+{
+	free_page((unsigned long)pte);
+}
+
+static inline void pte_free(struct mm_struct *mm, pgtable_t ptepage)
+{
+	pgtable_page_dtor(ptepage);
+	__free_page(ptepage);
+}
+
 static inline void pgtable_free(void *table, unsigned index_size)
 {
 	BUG_ON(index_size); /* 32-bit doesn't use this */
@@ -45,4 +56,38 @@ static inline void pgtable_free(void *table, unsigned index_size)
 
 #define check_pgt_cache()	do { } while (0)
 
+#ifdef CONFIG_SMP
+static inline void pgtable_free_tlb(struct mmu_gather *tlb,
+				    void *table, int shift)
+{
+	unsigned long pgf = (unsigned long)table;
+	BUG_ON(shift > MAX_PGTABLE_INDEX_SIZE);
+	pgf |= shift;
+	tlb_remove_table(tlb, (void *)pgf);
+}
+
+static inline void __tlb_remove_table(void *_table)
+{
+	void *table = (void *)((unsigned long)_table & ~MAX_PGTABLE_INDEX_SIZE);
+	unsigned shift = (unsigned long)_table & MAX_PGTABLE_INDEX_SIZE;
+
+	pgtable_free(table, shift);
+}
+#else
+static inline void pgtable_free_tlb(struct mmu_gather *tlb,
+				    void *table, int shift)
+{
+	pgtable_free(table, shift);
+}
+#endif
+
+static inline void __pte_free_tlb(struct mmu_gather *tlb, pgtable_t table,
+				  unsigned long address)
+{
+	struct page *page = page_address(table);
+
+	tlb_flush_pgtable(tlb, address);
+	pgtable_page_dtor(page);
+	pgtable_free_tlb(tlb, page, 0);
+}
 #endif /* _ASM_POWERPC_PGALLOC_32_H */
diff --git a/arch/powerpc/include/asm/pgalloc-64.h b/arch/powerpc/include/asm/pgalloc-64.h
index 292725c..cdbf555 100644
--- a/arch/powerpc/include/asm/pgalloc-64.h
+++ b/arch/powerpc/include/asm/pgalloc-64.h
@@ -72,8 +72,83 @@ static inline void pud_populate(struct mm_struct *mm, pud_t *pud, pmd_t *pmd)
 #define pmd_populate_kernel(mm, pmd, pte) pmd_set(pmd, (unsigned long)(pte))
 #define pmd_pgtable(pmd) pmd_page(pmd)
 
+static inline pte_t *pte_alloc_one_kernel(struct mm_struct *mm,
+					  unsigned long address)
+{
+	return (pte_t *)__get_free_page(GFP_KERNEL | __GFP_REPEAT | __GFP_ZERO);
+}
+
+static inline pgtable_t pte_alloc_one(struct mm_struct *mm,
+				      unsigned long address)
+{
+	struct page *page;
+	pte_t *pte;
+
+	pte = pte_alloc_one_kernel(mm, address);
+	if (!pte)
+		return NULL;
+	page = virt_to_page(pte);
+	pgtable_page_ctor(page);
+	return page;
+}
+
+static inline void pte_free_kernel(struct mm_struct *mm, pte_t *pte)
+{
+	free_page((unsigned long)pte);
+}
+
+static inline void pte_free(struct mm_struct *mm, pgtable_t ptepage)
+{
+	pgtable_page_dtor(ptepage);
+	__free_page(ptepage);
+}
+
+static inline void pgtable_free(void *table, unsigned index_size)
+{
+	if (!index_size)
+		free_page((unsigned long)table);
+	else {
+		BUG_ON(index_size > MAX_PGTABLE_INDEX_SIZE);
+		kmem_cache_free(PGT_CACHE(index_size), table);
+	}
+}
+
+#ifdef CONFIG_SMP
+static inline void pgtable_free_tlb(struct mmu_gather *tlb,
+				    void *table, int shift)
+{
+	unsigned long pgf = (unsigned long)table;
+	BUG_ON(shift > MAX_PGTABLE_INDEX_SIZE);
+	pgf |= shift;
+	tlb_remove_table(tlb, (void *)pgf);
+}
+
+static inline void __tlb_remove_table(void *_table)
+{
+	void *table = (void *)((unsigned long)_table & ~MAX_PGTABLE_INDEX_SIZE);
+	unsigned shift = (unsigned long)_table & MAX_PGTABLE_INDEX_SIZE;
+
+	pgtable_free(table, shift);
+}
+#else /* !CONFIG_SMP */
+static inline void pgtable_free_tlb(struct mmu_gather *tlb,
+				    void *table, int shift)
+{
+	pgtable_free(table, shift);
+}
+#endif /* CONFIG_SMP */
+
+static inline void __pte_free_tlb(struct mmu_gather *tlb, pgtable_t table,
+				  unsigned long address)
+{
+	struct page *page = page_address(table);
+
+	tlb_flush_pgtable(tlb, address);
+	pgtable_page_dtor(page);
+	pgtable_free_tlb(tlb, page, 0);
+}
 
-#else /* CONFIG_PPC_64K_PAGES */
+#else /* if CONFIG_PPC_64K_PAGES */
 
 #define pud_populate(mm, pud, pmd)	pud_set(pud, (unsigned long)pmd)
 
@@ -83,31 +158,25 @@ static inline void pmd_populate_kernel(struct mm_struct *mm, pmd_t *pmd,
 	pmd_set(pmd, (unsigned long)pte);
 }
 
-#define pmd_populate(mm, pmd, pte_page) \
-	pmd_populate_kernel(mm, pmd, page_address(pte_page))
-#define pmd_pgtable(pmd) pmd_page(pmd)
-
-#endif /* CONFIG_PPC_64K_PAGES */
-
-static inline pmd_t *pmd_alloc_one(struct mm_struct *mm, unsigned long addr)
+static inline void pmd_populate(struct mm_struct *mm, pmd_t *pmd,
+				pgtable_t pte_page)
 {
-	return kmem_cache_alloc(PGT_CACHE(PMD_INDEX_SIZE),
-				GFP_KERNEL|__GFP_REPEAT);
+	pmd_populate_kernel(mm, pmd, page_address(pte_page));
 }
 
-static inline void pmd_free(struct mm_struct *mm, pmd_t *pmd)
+static inline pgtable_t pmd_pgtable(pmd_t pmd)
 {
-	kmem_cache_free(PGT_CACHE(PMD_INDEX_SIZE), pmd);
+	return pmd_page(pmd);
 }
 
 static inline pte_t *pte_alloc_one_kernel(struct mm_struct *mm,
 					  unsigned long address)
 {
-        return (pte_t *)__get_free_page(GFP_KERNEL | __GFP_REPEAT | __GFP_ZERO);
+	return (pte_t *)__get_free_page(GFP_KERNEL | __GFP_REPEAT | __GFP_ZERO);
 }
 
 static inline pgtable_t pte_alloc_one(struct mm_struct *mm,
-					unsigned long address)
+				      unsigned long address)
 {
 	struct page *page;
 	pte_t *pte;
@@ -120,6 +189,17 @@ static inline pgtable_t pte_alloc_one(struct mm_struct *mm,
 	return page;
 }
 
+static inline void pte_free_kernel(struct mm_struct *mm, pte_t *pte)
+{
+	free_page((unsigned long)pte);
+}
+
+static inline void pte_free(struct mm_struct *mm, pgtable_t ptepage)
+{
+	pgtable_page_dtor(ptepage);
+	__free_page(ptepage);
+}
+
 static inline void pgtable_free(void *table, unsigned index_size)
 {
 	if (!index_size)
@@ -130,6 +210,55 @@ static inline void pgtable_free(void *table, unsigned index_size)
 	}
 }
 
+#ifdef CONFIG_SMP
+static inline void pgtable_free_tlb(struct mmu_gather *tlb,
+				    void *table, int shift)
+{
+	unsigned long pgf = (unsigned long)table;
+	BUG_ON(shift > MAX_PGTABLE_INDEX_SIZE);
+	pgf |= shift;
+	tlb_remove_table(tlb, (void *)pgf);
+}
+
+static inline void __tlb_remove_table(void *_table)
+{
+	void *table = (void *)((unsigned long)_table & ~MAX_PGTABLE_INDEX_SIZE);
+	unsigned shift = (unsigned long)_table & MAX_PGTABLE_INDEX_SIZE;
+
+	pgtable_free(table, shift);
+}
+#else /* !CONFIG_SMP */
+static inline void pgtable_free_tlb(struct mmu_gather *tlb,
+				    void *table, int shift)
+{
+	pgtable_free(table, shift);
+}
+#endif /* CONFIG_SMP */
+
+static inline void __pte_free_tlb(struct mmu_gather *tlb, pgtable_t table,
+				  unsigned long address)
+{
+	struct page *page = page_address(table);
+
+	tlb_flush_pgtable(tlb, address);
+	pgtable_page_dtor(page);
+	pgtable_free_tlb(tlb, page, 0);
+}
+
+#endif /* CONFIG_PPC_64K_PAGES */
+
+static inline pmd_t *pmd_alloc_one(struct mm_struct *mm, unsigned long addr)
+{
+	return kmem_cache_alloc(PGT_CACHE(PMD_INDEX_SIZE),
+				GFP_KERNEL|__GFP_REPEAT);
+}
+
+static inline void pmd_free(struct mm_struct *mm, pmd_t *pmd)
+{
+	kmem_cache_free(PGT_CACHE(PMD_INDEX_SIZE), pmd);
+}
+
+
 #define __pmd_free_tlb(tlb, pmd, addr)		      \
 	pgtable_free_tlb(tlb, pmd, PMD_INDEX_SIZE)
 #ifndef CONFIG_PPC_64K_PAGES
diff --git a/arch/powerpc/include/asm/pgalloc.h b/arch/powerpc/include/asm/pgalloc.h
index bf301ac..e9a9f60 100644
--- a/arch/powerpc/include/asm/pgalloc.h
+++ b/arch/powerpc/include/asm/pgalloc.h
@@ -3,6 +3,7 @@
 #ifdef __KERNEL__
 
 #include <linux/mm.h>
+#include <asm-generic/tlb.h>
 
 #ifdef CONFIG_PPC_BOOK3E
 extern void tlb_flush_pgtable(struct mmu_gather *tlb, unsigned long address);
@@ -13,56 +14,11 @@ static inline void tlb_flush_pgtable(struct mmu_gather *tlb,
 }
 #endif /* !CONFIG_PPC_BOOK3E */
 
-static inline void pte_free_kernel(struct mm_struct *mm, pte_t *pte)
-{
-	free_page((unsigned long)pte);
-}
-
-static inline void pte_free(struct mm_struct *mm, pgtable_t ptepage)
-{
-	pgtable_page_dtor(ptepage);
-	__free_page(ptepage);
-}
-
 #ifdef CONFIG_PPC64
 #include <asm/pgalloc-64.h>
 #else
 #include <asm/pgalloc-32.h>
 #endif
 
-#ifdef CONFIG_SMP
-struct mmu_gather;
-extern void tlb_remove_table(struct mmu_gather *, void *);
-
-static inline void pgtable_free_tlb(struct mmu_gather *tlb, void *table, int shift)
-{
-	unsigned long pgf = (unsigned long)table;
-	BUG_ON(shift > MAX_PGTABLE_INDEX_SIZE);
-	pgf |= shift;
-	tlb_remove_table(tlb, (void *)pgf);
-}
-
-static inline void __tlb_remove_table(void *_table)
-{
-	void *table = (void *)((unsigned long)_table & ~MAX_PGTABLE_INDEX_SIZE);
-	unsigned shift = (unsigned long)_table & MAX_PGTABLE_INDEX_SIZE;
-
-	pgtable_free(table, shift);
-}
-#else /* CONFIG_SMP */
-static inline void pgtable_free_tlb(struct mmu_gather *tlb, void *table, unsigned shift)
-{
-	pgtable_free(table, shift);
-}
-#endif /* !CONFIG_SMP */
-
-static inline void __pte_free_tlb(struct mmu_gather *tlb, struct page *ptepage,
-				  unsigned long address)
-{
-	tlb_flush_pgtable(tlb, address);
-	pgtable_page_dtor(ptepage);
-	pgtable_free_tlb(tlb, page_address(ptepage), 0);
-}
-
 #endif /* __KERNEL__ */
 #endif /* _ASM_POWERPC_PGALLOC_H */
-- 
1.7.10

^ permalink raw reply related	[flat|nested] 73+ messages in thread

* [PATCH -V5 06/25] powerpc: Reduce PTE table memory wastage
  2013-04-04  5:57 [PATCH -V5 00/25] THP support for PPC64 Aneesh Kumar K.V
                   ` (4 preceding siblings ...)
  2013-04-04  5:57 ` [PATCH -V5 05/25] powerpc: Move the pte free routines from common header Aneesh Kumar K.V
@ 2013-04-04  5:57 ` Aneesh Kumar K.V
  2013-04-10  4:46   ` David Gibson
  2013-04-10  7:14   ` Michael Ellerman
  2013-04-04  5:57 ` [PATCH -V5 07/25] powerpc: Use encode avpn where we need only avpn values Aneesh Kumar K.V
                   ` (21 subsequent siblings)
  27 siblings, 2 replies; 73+ messages in thread
From: Aneesh Kumar K.V @ 2013-04-04  5:57 UTC (permalink / raw)
  To: benh, paulus; +Cc: linux-mm, linuxppc-dev, Aneesh Kumar K.V

From: "Aneesh Kumar K.V" <aneesh.kumar@linux.vnet.ibm.com>

We allocate one page for the last level of linux page table. With THP and
large page size of 16MB, that would mean we are wasting large part
of that page. To map 16MB area, we only need a PTE space of 2K with 64K
page size. This patch reduce the space wastage by sharing the page
allocated for the last level of linux page table with multiple pmd
entries. We call these smaller chunks PTE page fragments and allocated
page, PTE page.

In order to support systems which doesn't have 64K HPTE support, we also
add another 2K to PTE page fragment. The second half of the PTE fragments
is used for storing slot and secondary bit information of an HPTE. With this
we now have a 4K PTE fragment.

We use a simple approach to share the PTE page. On allocation, we bump the
PTE page refcount to 16 and share the PTE page with the next 16 pte alloc
request. This should help in the node locality of the PTE page fragment,
assuming that the immediate pte alloc request will mostly come from the
same NUMA node. We don't try to reuse the freed PTE page fragment. Hence
we could be waisting some space.

Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
---
 arch/powerpc/include/asm/mmu-book3e.h |    4 +
 arch/powerpc/include/asm/mmu-hash64.h |    4 +
 arch/powerpc/include/asm/page.h       |    4 +
 arch/powerpc/include/asm/pgalloc-64.h |   72 ++++-------------
 arch/powerpc/kernel/setup_64.c        |    4 +-
 arch/powerpc/mm/mmu_context_hash64.c  |   35 +++++++++
 arch/powerpc/mm/pgtable_64.c          |  137 +++++++++++++++++++++++++++++++++
 7 files changed, 202 insertions(+), 58 deletions(-)

diff --git a/arch/powerpc/include/asm/mmu-book3e.h b/arch/powerpc/include/asm/mmu-book3e.h
index 99d43e0..affbd68 100644
--- a/arch/powerpc/include/asm/mmu-book3e.h
+++ b/arch/powerpc/include/asm/mmu-book3e.h
@@ -231,6 +231,10 @@ typedef struct {
 	u64 high_slices_psize;  /* 4 bits per slice for now */
 	u16 user_psize;         /* page size index */
 #endif
+#ifdef CONFIG_PPC_64K_PAGES
+	/* for 4K PTE fragment support */
+	struct page *pgtable_page;
+#endif
 } mm_context_t;
 
 /* Page size definitions, common between 32 and 64-bit
diff --git a/arch/powerpc/include/asm/mmu-hash64.h b/arch/powerpc/include/asm/mmu-hash64.h
index 35bb51e..300ac3c 100644
--- a/arch/powerpc/include/asm/mmu-hash64.h
+++ b/arch/powerpc/include/asm/mmu-hash64.h
@@ -498,6 +498,10 @@ typedef struct {
 	unsigned long acop;	/* mask of enabled coprocessor types */
 	unsigned int cop_pid;	/* pid value used with coprocessors */
 #endif /* CONFIG_PPC_ICSWX */
+#ifdef CONFIG_PPC_64K_PAGES
+	/* for 4K PTE fragment support */
+	struct page *pgtable_page;
+#endif
 } mm_context_t;
 
 
diff --git a/arch/powerpc/include/asm/page.h b/arch/powerpc/include/asm/page.h
index f072e97..38e7ff6 100644
--- a/arch/powerpc/include/asm/page.h
+++ b/arch/powerpc/include/asm/page.h
@@ -378,7 +378,11 @@ void arch_free_page(struct page *page, int order);
 
 struct vm_area_struct;
 
+#ifdef CONFIG_PPC_64K_PAGES
+typedef pte_t *pgtable_t;
+#else
 typedef struct page *pgtable_t;
+#endif
 
 #include <asm-generic/memory_model.h>
 #endif /* __ASSEMBLY__ */
diff --git a/arch/powerpc/include/asm/pgalloc-64.h b/arch/powerpc/include/asm/pgalloc-64.h
index cdbf555..3418989 100644
--- a/arch/powerpc/include/asm/pgalloc-64.h
+++ b/arch/powerpc/include/asm/pgalloc-64.h
@@ -150,6 +150,13 @@ static inline void __pte_free_tlb(struct mmu_gather *tlb, pgtable_t table,
 
 #else /* if CONFIG_PPC_64K_PAGES */
 
+extern pte_t *page_table_alloc(struct mm_struct *, unsigned long, int);
+extern void page_table_free(struct mm_struct *, unsigned long *, int);
+extern void pgtable_free_tlb(struct mmu_gather *tlb, void *table, int shift);
+#ifdef CONFIG_SMP
+extern void __tlb_remove_table(void *_table);
+#endif
+
 #define pud_populate(mm, pud, pmd)	pud_set(pud, (unsigned long)pmd)
 
 static inline void pmd_populate_kernel(struct mm_struct *mm, pmd_t *pmd,
@@ -161,90 +168,42 @@ static inline void pmd_populate_kernel(struct mm_struct *mm, pmd_t *pmd,
 static inline void pmd_populate(struct mm_struct *mm, pmd_t *pmd,
 				pgtable_t pte_page)
 {
-	pmd_populate_kernel(mm, pmd, page_address(pte_page));
+	pmd_set(pmd, (unsigned long)pte_page);
 }
 
 static inline pgtable_t pmd_pgtable(pmd_t pmd)
 {
-	return pmd_page(pmd);
+	return (pgtable_t)(pmd_val(pmd) & -sizeof(pte_t)*PTRS_PER_PTE);
 }
 
 static inline pte_t *pte_alloc_one_kernel(struct mm_struct *mm,
 					  unsigned long address)
 {
-	return (pte_t *)__get_free_page(GFP_KERNEL | __GFP_REPEAT | __GFP_ZERO);
+	return (pte_t *)page_table_alloc(mm, address, 1);
 }
 
 static inline pgtable_t pte_alloc_one(struct mm_struct *mm,
-				      unsigned long address)
+					unsigned long address)
 {
-	struct page *page;
-	pte_t *pte;
-
-	pte = pte_alloc_one_kernel(mm, address);
-	if (!pte)
-		return NULL;
-	page = virt_to_page(pte);
-	pgtable_page_ctor(page);
-	return page;
+	return (pgtable_t)page_table_alloc(mm, address, 0);
 }
 
 static inline void pte_free_kernel(struct mm_struct *mm, pte_t *pte)
 {
-	free_page((unsigned long)pte);
+	page_table_free(mm, (unsigned long *)pte, 1);
 }
 
 static inline void pte_free(struct mm_struct *mm, pgtable_t ptepage)
 {
-	pgtable_page_dtor(ptepage);
-	__free_page(ptepage);
-}
-
-static inline void pgtable_free(void *table, unsigned index_size)
-{
-	if (!index_size)
-		free_page((unsigned long)table);
-	else {
-		BUG_ON(index_size > MAX_PGTABLE_INDEX_SIZE);
-		kmem_cache_free(PGT_CACHE(index_size), table);
-	}
+	page_table_free(mm, (unsigned long *)ptepage, 0);
 }
 
-#ifdef CONFIG_SMP
-static inline void pgtable_free_tlb(struct mmu_gather *tlb,
-				    void *table, int shift)
-{
-	unsigned long pgf = (unsigned long)table;
-	BUG_ON(shift > MAX_PGTABLE_INDEX_SIZE);
-	pgf |= shift;
-	tlb_remove_table(tlb, (void *)pgf);
-}
-
-static inline void __tlb_remove_table(void *_table)
-{
-	void *table = (void *)((unsigned long)_table & ~MAX_PGTABLE_INDEX_SIZE);
-	unsigned shift = (unsigned long)_table & MAX_PGTABLE_INDEX_SIZE;
-
-	pgtable_free(table, shift);
-}
-#else /* !CONFIG_SMP */
-static inline void pgtable_free_tlb(struct mmu_gather *tlb,
-				    void *table, int shift)
-{
-	pgtable_free(table, shift);
-}
-#endif /* CONFIG_SMP */
-
 static inline void __pte_free_tlb(struct mmu_gather *tlb, pgtable_t table,
 				  unsigned long address)
 {
-	struct page *page = page_address(table);
-
 	tlb_flush_pgtable(tlb, address);
-	pgtable_page_dtor(page);
-	pgtable_free_tlb(tlb, page, 0);
+	pgtable_free_tlb(tlb, table, 0);
 }
-
 #endif /* CONFIG_PPC_64K_PAGES */
 
 static inline pmd_t *pmd_alloc_one(struct mm_struct *mm, unsigned long addr)
@@ -258,7 +217,6 @@ static inline void pmd_free(struct mm_struct *mm, pmd_t *pmd)
 	kmem_cache_free(PGT_CACHE(PMD_INDEX_SIZE), pmd);
 }
 
-
 #define __pmd_free_tlb(tlb, pmd, addr)		      \
 	pgtable_free_tlb(tlb, pmd, PMD_INDEX_SIZE)
 #ifndef CONFIG_PPC_64K_PAGES
diff --git a/arch/powerpc/kernel/setup_64.c b/arch/powerpc/kernel/setup_64.c
index 6da881b..04d833c 100644
--- a/arch/powerpc/kernel/setup_64.c
+++ b/arch/powerpc/kernel/setup_64.c
@@ -575,7 +575,9 @@ void __init setup_arch(char **cmdline_p)
 	init_mm.end_code = (unsigned long) _etext;
 	init_mm.end_data = (unsigned long) _edata;
 	init_mm.brk = klimit;
-	
+#ifdef CONFIG_PPC_64K_PAGES
+	init_mm.context.pgtable_page = NULL;
+#endif
 	irqstack_early_init();
 	exc_lvl_early_init();
 	emergency_stack_init();
diff --git a/arch/powerpc/mm/mmu_context_hash64.c b/arch/powerpc/mm/mmu_context_hash64.c
index 59cd773..fbfdca2 100644
--- a/arch/powerpc/mm/mmu_context_hash64.c
+++ b/arch/powerpc/mm/mmu_context_hash64.c
@@ -86,6 +86,9 @@ int init_new_context(struct task_struct *tsk, struct mm_struct *mm)
 	spin_lock_init(mm->context.cop_lockp);
 #endif /* CONFIG_PPC_ICSWX */
 
+#ifdef CONFIG_PPC_64K_PAGES
+	mm->context.pgtable_page = NULL;
+#endif
 	return 0;
 }
 
@@ -97,13 +100,45 @@ void __destroy_context(int context_id)
 }
 EXPORT_SYMBOL_GPL(__destroy_context);
 
+#ifdef CONFIG_PPC_64K_PAGES
+static void destroy_pagetable_page(struct mm_struct *mm)
+{
+	int count;
+	struct page *page;
+
+	page = mm->context.pgtable_page;
+	if (!page)
+		return;
+
+	/* drop all the pending references */
+	count = atomic_read(&page->_mapcount) + 1;
+	/* We allow PTE_FRAG_NR(16) fragments from a PTE page */
+	count = atomic_sub_return(16 - count, &page->_count);
+	if (!count) {
+		pgtable_page_dtor(page);
+		reset_page_mapcount(page);
+		free_hot_cold_page(page, 0);
+	}
+}
+
+#else
+static inline void destroy_pagetable_page(struct mm_struct *mm)
+{
+	return;
+}
+#endif
+
+
 void destroy_context(struct mm_struct *mm)
 {
+
 #ifdef CONFIG_PPC_ICSWX
 	drop_cop(mm->context.acop, mm);
 	kfree(mm->context.cop_lockp);
 	mm->context.cop_lockp = NULL;
 #endif /* CONFIG_PPC_ICSWX */
+
+	destroy_pagetable_page(mm);
 	__destroy_context(mm->context.id);
 	subpage_prot_free(mm);
 	mm->context.id = MMU_NO_CONTEXT;
diff --git a/arch/powerpc/mm/pgtable_64.c b/arch/powerpc/mm/pgtable_64.c
index e212a27..e79840b 100644
--- a/arch/powerpc/mm/pgtable_64.c
+++ b/arch/powerpc/mm/pgtable_64.c
@@ -337,3 +337,140 @@ EXPORT_SYMBOL(__ioremap_at);
 EXPORT_SYMBOL(iounmap);
 EXPORT_SYMBOL(__iounmap);
 EXPORT_SYMBOL(__iounmap_at);
+
+#ifdef CONFIG_PPC_64K_PAGES
+/*
+ * we support 16 fragments per PTE page. This is limited by how many
+ * bits we can pack in page->_mapcount. We use the first half for
+ * tracking the usage for rcu page table free.
+ */
+#define PTE_FRAG_NR	16
+/*
+ * We use a 2K PTE page fragment and another 2K for storing
+ * real_pte_t hash index
+ */
+#define PTE_FRAG_SIZE (2 * PTRS_PER_PTE * sizeof(pte_t))
+
+static pte_t *get_from_cache(struct mm_struct *mm)
+{
+	int index;
+	pte_t *ret = NULL;
+	struct page *page;
+
+	spin_lock(&mm->page_table_lock);
+	page = mm->context.pgtable_page;
+	if (page) {
+		void *p = page_address(page);
+		index = atomic_add_return(1, &page->_mapcount);
+		ret = (pte_t *) (p + (index * PTE_FRAG_SIZE));
+		/*
+		 * If we have taken up all the fragments mark PTE page NULL
+		 */
+		if (index == PTE_FRAG_NR - 1)
+			mm->context.pgtable_page = NULL;
+	}
+	spin_unlock(&mm->page_table_lock);
+	return ret;
+}
+
+static pte_t *__alloc_for_cache(struct mm_struct *mm, int kernel)
+{
+	pte_t *ret = NULL;
+	struct page *page = alloc_page(GFP_KERNEL | __GFP_NOTRACK |
+				       __GFP_REPEAT | __GFP_ZERO);
+	if (!page)
+		return NULL;
+
+	spin_lock(&mm->page_table_lock);
+	/*
+	 * If we find pgtable_page set, we return
+	 * the allocated page with single fragement
+	 * count.
+	 */
+	if (likely(!mm->context.pgtable_page)) {
+		atomic_set(&page->_count, PTE_FRAG_NR);
+		atomic_set(&page->_mapcount, 0);
+		mm->context.pgtable_page = page;
+	}
+	spin_unlock(&mm->page_table_lock);
+
+	ret = (unsigned long *)page_address(page);
+	if (!kernel)
+		pgtable_page_ctor(page);
+
+	return ret;
+}
+
+pte_t *page_table_alloc(struct mm_struct *mm, unsigned long vmaddr, int kernel)
+{
+	pte_t *pte;
+
+	pte = get_from_cache(mm);
+	if (pte)
+		return pte;
+
+	return __alloc_for_cache(mm, kernel);
+}
+
+void page_table_free(struct mm_struct *mm, unsigned long *table, int kernel)
+{
+	struct page *page = virt_to_page(table);
+	if (put_page_testzero(page)) {
+		if (!kernel)
+			pgtable_page_dtor(page);
+		reset_page_mapcount(page);
+		free_hot_cold_page(page, 0);
+	}
+}
+
+#ifdef CONFIG_SMP
+static void page_table_free_rcu(void *table)
+{
+	struct page *page = virt_to_page(table);
+	if (put_page_testzero(page)) {
+		pgtable_page_dtor(page);
+		reset_page_mapcount(page);
+		free_hot_cold_page(page, 0);
+	}
+}
+
+void pgtable_free_tlb(struct mmu_gather *tlb, void *table, int shift)
+{
+	unsigned long pgf = (unsigned long)table;
+
+	BUG_ON(shift > MAX_PGTABLE_INDEX_SIZE);
+	pgf |= shift;
+	tlb_remove_table(tlb, (void *)pgf);
+}
+
+void __tlb_remove_table(void *_table)
+{
+	void *table = (void *)((unsigned long)_table & ~MAX_PGTABLE_INDEX_SIZE);
+	unsigned shift = (unsigned long)_table & MAX_PGTABLE_INDEX_SIZE;
+
+	if (!shift)
+		/* PTE page needs special handling */
+		page_table_free_rcu(table);
+	else {
+		BUG_ON(shift > MAX_PGTABLE_INDEX_SIZE);
+		kmem_cache_free(PGT_CACHE(shift), table);
+	}
+}
+#else
+void pgtable_free_tlb(struct mmu_gather *tlb, void *table, int shift)
+{
+	if (!shift) {
+		/* PTE page needs special handling */
+		struct page *page = virt_to_page(table);
+		if (put_page_testzero(page)) {
+			pgtable_page_dtor(page);
+			reset_page_mapcount(page);
+			free_hot_cold_page(page, 0);
+		}
+	} else {
+		BUG_ON(shift > MAX_PGTABLE_INDEX_SIZE);
+		kmem_cache_free(PGT_CACHE(shift), table);
+	}
+}
+#endif
+#endif /* CONFIG_PPC_64K_PAGES */
-- 
1.7.10

^ permalink raw reply related	[flat|nested] 73+ messages in thread

* [PATCH -V5 07/25] powerpc: Use encode avpn where we need only avpn values
  2013-04-04  5:57 [PATCH -V5 00/25] THP support for PPC64 Aneesh Kumar K.V
                   ` (5 preceding siblings ...)
  2013-04-04  5:57 ` [PATCH -V5 06/25] powerpc: Reduce PTE table memory wastage Aneesh Kumar K.V
@ 2013-04-04  5:57 ` Aneesh Kumar K.V
  2013-04-04  5:57 ` [PATCH -V5 08/25] powerpc: Decode the pte-lp-encoding bits correctly Aneesh Kumar K.V
                   ` (20 subsequent siblings)
  27 siblings, 0 replies; 73+ messages in thread
From: Aneesh Kumar K.V @ 2013-04-04  5:57 UTC (permalink / raw)
  To: benh, paulus; +Cc: linux-mm, linuxppc-dev, Aneesh Kumar K.V

From: "Aneesh Kumar K.V" <aneesh.kumar@linux.vnet.ibm.com>

In all these cases we are doing something similar to

HPTE_V_COMPARE(hpte_v, want_v) which ignores the HPTE_V_LARGE bit

With MPSS support we would need actual page size to set HPTE_V_LARGE
bit and that won't be available in most of these cases. Since we are ignoring
HPTE_V_LARGE bit, use the  avpn value instead. There should not be any change
in behaviour after this patch.

Acked-by: Paul Mackerras <paulus@samba.org>
Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
---
 arch/powerpc/mm/hash_native_64.c        |    8 ++++----
 arch/powerpc/platforms/cell/beat_htab.c |   10 +++++-----
 arch/powerpc/platforms/ps3/htab.c       |    2 +-
 3 files changed, 10 insertions(+), 10 deletions(-)

diff --git a/arch/powerpc/mm/hash_native_64.c b/arch/powerpc/mm/hash_native_64.c
index ffc1e00..9d8983a 100644
--- a/arch/powerpc/mm/hash_native_64.c
+++ b/arch/powerpc/mm/hash_native_64.c
@@ -252,7 +252,7 @@ static long native_hpte_updatepp(unsigned long slot, unsigned long newpp,
 	unsigned long hpte_v, want_v;
 	int ret = 0;
 
-	want_v = hpte_encode_v(vpn, psize, ssize);
+	want_v = hpte_encode_avpn(vpn, psize, ssize);
 
 	DBG_LOW("    update(vpn=%016lx, avpnv=%016lx, group=%lx, newpp=%lx)",
 		vpn, want_v & HPTE_V_AVPN, slot, newpp);
@@ -288,7 +288,7 @@ static long native_hpte_find(unsigned long vpn, int psize, int ssize)
 	unsigned long want_v, hpte_v;
 
 	hash = hpt_hash(vpn, mmu_psize_defs[psize].shift, ssize);
-	want_v = hpte_encode_v(vpn, psize, ssize);
+	want_v = hpte_encode_avpn(vpn, psize, ssize);
 
 	/* Bolted mappings are only ever in the primary group */
 	slot = (hash & htab_hash_mask) * HPTES_PER_GROUP;
@@ -348,7 +348,7 @@ static void native_hpte_invalidate(unsigned long slot, unsigned long vpn,
 
 	DBG_LOW("    invalidate(vpn=%016lx, hash: %lx)\n", vpn, slot);
 
-	want_v = hpte_encode_v(vpn, psize, ssize);
+	want_v = hpte_encode_avpn(vpn, psize, ssize);
 	native_lock_hpte(hptep);
 	hpte_v = hptep->v;
 
@@ -520,7 +520,7 @@ static void native_flush_hash_range(unsigned long number, int local)
 			slot = (hash & htab_hash_mask) * HPTES_PER_GROUP;
 			slot += hidx & _PTEIDX_GROUP_IX;
 			hptep = htab_address + slot;
-			want_v = hpte_encode_v(vpn, psize, ssize);
+			want_v = hpte_encode_avpn(vpn, psize, ssize);
 			native_lock_hpte(hptep);
 			hpte_v = hptep->v;
 			if (!HPTE_V_COMPARE(hpte_v, want_v) ||
diff --git a/arch/powerpc/platforms/cell/beat_htab.c b/arch/powerpc/platforms/cell/beat_htab.c
index 0f6f839..472f9a7 100644
--- a/arch/powerpc/platforms/cell/beat_htab.c
+++ b/arch/powerpc/platforms/cell/beat_htab.c
@@ -191,7 +191,7 @@ static long beat_lpar_hpte_updatepp(unsigned long slot,
 	u64 dummy0, dummy1;
 	unsigned long want_v;
 
-	want_v = hpte_encode_v(vpn, psize, MMU_SEGSIZE_256M);
+	want_v = hpte_encode_avpn(vpn, psize, MMU_SEGSIZE_256M);
 
 	DBG_LOW("    update: "
 		"avpnv=%016lx, slot=%016lx, psize: %d, newpp %016lx ... ",
@@ -228,7 +228,7 @@ static long beat_lpar_hpte_find(unsigned long vpn, int psize)
 	unsigned long want_v, hpte_v;
 
 	hash = hpt_hash(vpn, mmu_psize_defs[psize].shift, MMU_SEGSIZE_256M);
-	want_v = hpte_encode_v(vpn, psize, MMU_SEGSIZE_256M);
+	want_v = hpte_encode_avpn(vpn, psize, MMU_SEGSIZE_256M);
 
 	for (j = 0; j < 2; j++) {
 		slot = (hash & htab_hash_mask) * HPTES_PER_GROUP;
@@ -283,7 +283,7 @@ static void beat_lpar_hpte_invalidate(unsigned long slot, unsigned long vpn,
 
 	DBG_LOW("    inval : slot=%lx, va=%016lx, psize: %d, local: %d\n",
 		slot, va, psize, local);
-	want_v = hpte_encode_v(vpn, psize, MMU_SEGSIZE_256M);
+	want_v = hpte_encode_avpn(vpn, psize, MMU_SEGSIZE_256M);
 
 	raw_spin_lock_irqsave(&beat_htab_lock, flags);
 	dummy1 = beat_lpar_hpte_getword0(slot);
@@ -372,7 +372,7 @@ static long beat_lpar_hpte_updatepp_v3(unsigned long slot,
 	unsigned long want_v;
 	unsigned long pss;
 
-	want_v = hpte_encode_v(vpn, psize, MMU_SEGSIZE_256M);
+	want_v = hpte_encode_avpn(vpn, psize, MMU_SEGSIZE_256M);
 	pss = (psize == MMU_PAGE_4K) ? -1UL : mmu_psize_defs[psize].penc;
 
 	DBG_LOW("    update: "
@@ -402,7 +402,7 @@ static void beat_lpar_hpte_invalidate_v3(unsigned long slot, unsigned long vpn,
 
 	DBG_LOW("    inval : slot=%lx, vpn=%016lx, psize: %d, local: %d\n",
 		slot, vpn, psize, local);
-	want_v = hpte_encode_v(vpn, psize, MMU_SEGSIZE_256M);
+	want_v = hpte_encode_avpn(vpn, psize, MMU_SEGSIZE_256M);
 	pss = (psize == MMU_PAGE_4K) ? -1UL : mmu_psize_defs[psize].penc;
 
 	lpar_rc = beat_invalidate_htab_entry3(0, slot, want_v, pss);
diff --git a/arch/powerpc/platforms/ps3/htab.c b/arch/powerpc/platforms/ps3/htab.c
index d00d7b0..07a4bba 100644
--- a/arch/powerpc/platforms/ps3/htab.c
+++ b/arch/powerpc/platforms/ps3/htab.c
@@ -115,7 +115,7 @@ static long ps3_hpte_updatepp(unsigned long slot, unsigned long newpp,
 	unsigned long flags;
 	long ret;
 
-	want_v = hpte_encode_v(vpn, psize, ssize);
+	want_v = hpte_encode_avpn(vpn, psize, ssize);
 
 	spin_lock_irqsave(&ps3_htab_lock, flags);
 
-- 
1.7.10

^ permalink raw reply related	[flat|nested] 73+ messages in thread

* [PATCH -V5 08/25] powerpc: Decode the pte-lp-encoding bits correctly.
  2013-04-04  5:57 [PATCH -V5 00/25] THP support for PPC64 Aneesh Kumar K.V
                   ` (6 preceding siblings ...)
  2013-04-04  5:57 ` [PATCH -V5 07/25] powerpc: Use encode avpn where we need only avpn values Aneesh Kumar K.V
@ 2013-04-04  5:57 ` Aneesh Kumar K.V
  2013-04-10  7:19   ` David Gibson
  2013-04-04  5:57 ` [PATCH -V5 09/25] powerpc: Fix hpte_decode to use the correct decoding for page sizes Aneesh Kumar K.V
                   ` (19 subsequent siblings)
  27 siblings, 1 reply; 73+ messages in thread
From: Aneesh Kumar K.V @ 2013-04-04  5:57 UTC (permalink / raw)
  To: benh, paulus; +Cc: linux-mm, linuxppc-dev, Aneesh Kumar K.V

From: "Aneesh Kumar K.V" <aneesh.kumar@linux.vnet.ibm.com>

We look at both the segment base page size and actual page size and store
the pte-lp-encodings in an array per base page size.

We also update all relevant functions to take actual page size argument
so that we can use the correct PTE LP encoding in HPTE. This should also
get the basic Multiple Page Size per Segment (MPSS) support. This is needed
to enable THP on ppc64.

Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
---
 arch/powerpc/include/asm/machdep.h      |    3 +-
 arch/powerpc/include/asm/mmu-hash64.h   |   33 ++++----
 arch/powerpc/kvm/book3s_hv.c            |    2 +-
 arch/powerpc/mm/hash_low_64.S           |   18 ++--
 arch/powerpc/mm/hash_native_64.c        |  138 ++++++++++++++++++++++---------
 arch/powerpc/mm/hash_utils_64.c         |  121 +++++++++++++++++----------
 arch/powerpc/mm/hugetlbpage-hash64.c    |    4 +-
 arch/powerpc/platforms/cell/beat_htab.c |   16 ++--
 arch/powerpc/platforms/ps3/htab.c       |    6 +-
 arch/powerpc/platforms/pseries/lpar.c   |    6 +-
 10 files changed, 230 insertions(+), 117 deletions(-)

diff --git a/arch/powerpc/include/asm/machdep.h b/arch/powerpc/include/asm/machdep.h
index 19d9d96..6cee6e0 100644
--- a/arch/powerpc/include/asm/machdep.h
+++ b/arch/powerpc/include/asm/machdep.h
@@ -50,7 +50,8 @@ struct machdep_calls {
 				       unsigned long prpn,
 				       unsigned long rflags,
 				       unsigned long vflags,
-				       int psize, int ssize);
+				       int psize, int apsize,
+				       int ssize);
 	long		(*hpte_remove)(unsigned long hpte_group);
 	void            (*hpte_removebolted)(unsigned long ea,
 					     int psize, int ssize);
diff --git a/arch/powerpc/include/asm/mmu-hash64.h b/arch/powerpc/include/asm/mmu-hash64.h
index 300ac3c..e42f4a3 100644
--- a/arch/powerpc/include/asm/mmu-hash64.h
+++ b/arch/powerpc/include/asm/mmu-hash64.h
@@ -154,7 +154,7 @@ extern unsigned long htab_hash_mask;
 struct mmu_psize_def
 {
 	unsigned int	shift;	/* number of bits */
-	unsigned int	penc;	/* HPTE encoding */
+	int		penc[MMU_PAGE_COUNT];	/* HPTE encoding */
 	unsigned int	tlbiel;	/* tlbiel supported for that page size */
 	unsigned long	avpnm;	/* bits to mask out in AVPN in the HPTE */
 	unsigned long	sllp;	/* SLB L||LP (exact mask to use in slbmte) */
@@ -181,6 +181,13 @@ struct mmu_psize_def
  */
 #define VPN_SHIFT	12
 
+/*
+ * HPTE Large Page (LP) details
+ */
+#define LP_SHIFT	12
+#define LP_BITS		8
+#define LP_MASK(i)	((0xFF >> (i)) << LP_SHIFT)
+
 #ifndef __ASSEMBLY__
 
 static inline int segment_shift(int ssize)
@@ -237,14 +244,14 @@ static inline unsigned long hpte_encode_avpn(unsigned long vpn, int psize,
 
 /*
  * This function sets the AVPN and L fields of the HPTE  appropriately
- * for the page size
+ * using the base page size and actual page size.
  */
-static inline unsigned long hpte_encode_v(unsigned long vpn,
-					  int psize, int ssize)
+static inline unsigned long hpte_encode_v(unsigned long vpn, int base_psize,
+					  int actual_psize, int ssize)
 {
 	unsigned long v;
-	v = hpte_encode_avpn(vpn, psize, ssize);
-	if (psize != MMU_PAGE_4K)
+	v = hpte_encode_avpn(vpn, base_psize, ssize);
+	if (actual_psize != MMU_PAGE_4K)
 		v |= HPTE_V_LARGE;
 	return v;
 }
@@ -254,19 +261,17 @@ static inline unsigned long hpte_encode_v(unsigned long vpn,
  * for the page size. We assume the pa is already "clean" that is properly
  * aligned for the requested page size
  */
-static inline unsigned long hpte_encode_r(unsigned long pa, int psize)
+static inline unsigned long hpte_encode_r(unsigned long pa, int base_psize,
+					  int actual_psize)
 {
-	unsigned long r;
-
 	/* A 4K page needs no special encoding */
-	if (psize == MMU_PAGE_4K)
+	if (actual_psize == MMU_PAGE_4K)
 		return pa & HPTE_R_RPN;
 	else {
-		unsigned int penc = mmu_psize_defs[psize].penc;
-		unsigned int shift = mmu_psize_defs[psize].shift;
-		return (pa & ~((1ul << shift) - 1)) | (penc << 12);
+		unsigned int penc = mmu_psize_defs[base_psize].penc[actual_psize];
+		unsigned int shift = mmu_psize_defs[actual_psize].shift;
+		return (pa & ~((1ul << shift) - 1)) | (penc << LP_SHIFT);
 	}
-	return r;
 }
 
 /*
diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c
index 71d0c90..48f6d99 100644
--- a/arch/powerpc/kvm/book3s_hv.c
+++ b/arch/powerpc/kvm/book3s_hv.c
@@ -1515,7 +1515,7 @@ static void kvmppc_add_seg_page_size(struct kvm_ppc_one_seg_page_size **sps,
 	(*sps)->page_shift = def->shift;
 	(*sps)->slb_enc = def->sllp;
 	(*sps)->enc[0].page_shift = def->shift;
-	(*sps)->enc[0].pte_enc = def->penc;
+	(*sps)->enc[0].pte_enc = def->penc[linux_psize];
 	(*sps)++;
 }
 
diff --git a/arch/powerpc/mm/hash_low_64.S b/arch/powerpc/mm/hash_low_64.S
index abdd5e2..0e980ac 100644
--- a/arch/powerpc/mm/hash_low_64.S
+++ b/arch/powerpc/mm/hash_low_64.S
@@ -196,7 +196,8 @@ htab_insert_pte:
 	mr	r4,r29			/* Retrieve vpn */
 	li	r7,0			/* !bolted, !secondary */
 	li	r8,MMU_PAGE_4K		/* page size */
-	ld	r9,STK_PARAM(R9)(r1)	/* segment size */
+	li	r9,MMU_PAGE_4K		/* actual page size */
+	ld	r10,STK_PARAM(R9)(r1)	/* segment size */
 _GLOBAL(htab_call_hpte_insert1)
 	bl	.			/* Patched by htab_finish_init() */
 	cmpdi	0,r3,0
@@ -219,7 +220,8 @@ _GLOBAL(htab_call_hpte_insert1)
 	mr	r4,r29			/* Retrieve vpn */
 	li	r7,HPTE_V_SECONDARY	/* !bolted, secondary */
 	li	r8,MMU_PAGE_4K		/* page size */
-	ld	r9,STK_PARAM(R9)(r1)	/* segment size */
+	li	r9,MMU_PAGE_4K		/* actual page size */
+	ld	r10,STK_PARAM(R9)(r1)	/* segment size */
 _GLOBAL(htab_call_hpte_insert2)
 	bl	.			/* Patched by htab_finish_init() */
 	cmpdi	0,r3,0
@@ -515,7 +517,8 @@ htab_special_pfn:
 	mr	r4,r29			/* Retrieve vpn */
 	li	r7,0			/* !bolted, !secondary */
 	li	r8,MMU_PAGE_4K		/* page size */
-	ld	r9,STK_PARAM(R9)(r1)	/* segment size */
+	li	r9,MMU_PAGE_4K		/* actual page size */
+	ld	r10,STK_PARAM(R9)(r1)	/* segment size */
 _GLOBAL(htab_call_hpte_insert1)
 	bl	.			/* patched by htab_finish_init() */
 	cmpdi	0,r3,0
@@ -542,7 +545,8 @@ _GLOBAL(htab_call_hpte_insert1)
 	mr	r4,r29			/* Retrieve vpn */
 	li	r7,HPTE_V_SECONDARY	/* !bolted, secondary */
 	li	r8,MMU_PAGE_4K		/* page size */
-	ld	r9,STK_PARAM(R9)(r1)	/* segment size */
+	li	r9,MMU_PAGE_4K		/* actual page size */
+	ld	r10,STK_PARAM(R9)(r1)	/* segment size */
 _GLOBAL(htab_call_hpte_insert2)
 	bl	.			/* patched by htab_finish_init() */
 	cmpdi	0,r3,0
@@ -840,7 +844,8 @@ ht64_insert_pte:
 	mr	r4,r29			/* Retrieve vpn */
 	li	r7,0			/* !bolted, !secondary */
 	li	r8,MMU_PAGE_64K
-	ld	r9,STK_PARAM(R9)(r1)	/* segment size */
+	li	r9,MMU_PAGE_64K		/* actual page size */
+	ld	r10,STK_PARAM(R9)(r1)	/* segment size */
 _GLOBAL(ht64_call_hpte_insert1)
 	bl	.			/* patched by htab_finish_init() */
 	cmpdi	0,r3,0
@@ -863,7 +868,8 @@ _GLOBAL(ht64_call_hpte_insert1)
 	mr	r4,r29			/* Retrieve vpn */
 	li	r7,HPTE_V_SECONDARY	/* !bolted, secondary */
 	li	r8,MMU_PAGE_64K
-	ld	r9,STK_PARAM(R9)(r1)	/* segment size */
+	li	r9,MMU_PAGE_64K		/* actual page size */
+	ld	r10,STK_PARAM(R9)(r1)	/* segment size */
 _GLOBAL(ht64_call_hpte_insert2)
 	bl	.			/* patched by htab_finish_init() */
 	cmpdi	0,r3,0
diff --git a/arch/powerpc/mm/hash_native_64.c b/arch/powerpc/mm/hash_native_64.c
index 9d8983a..aa0499b 100644
--- a/arch/powerpc/mm/hash_native_64.c
+++ b/arch/powerpc/mm/hash_native_64.c
@@ -39,7 +39,7 @@
 
 DEFINE_RAW_SPINLOCK(native_tlbie_lock);
 
-static inline void __tlbie(unsigned long vpn, int psize, int ssize)
+static inline void __tlbie(unsigned long vpn, int psize, int apsize, int ssize)
 {
 	unsigned long va;
 	unsigned int penc;
@@ -68,7 +68,7 @@ static inline void __tlbie(unsigned long vpn, int psize, int ssize)
 		break;
 	default:
 		/* We need 14 to 14 + i bits of va */
-		penc = mmu_psize_defs[psize].penc;
+		penc = mmu_psize_defs[psize].penc[apsize];
 		va &= ~((1ul << mmu_psize_defs[psize].shift) - 1);
 		va |= penc << 12;
 		va |= ssize << 8;
@@ -80,7 +80,7 @@ static inline void __tlbie(unsigned long vpn, int psize, int ssize)
 	}
 }
 
-static inline void __tlbiel(unsigned long vpn, int psize, int ssize)
+static inline void __tlbiel(unsigned long vpn, int psize, int apsize, int ssize)
 {
 	unsigned long va;
 	unsigned int penc;
@@ -102,7 +102,7 @@ static inline void __tlbiel(unsigned long vpn, int psize, int ssize)
 		break;
 	default:
 		/* We need 14 to 14 + i bits of va */
-		penc = mmu_psize_defs[psize].penc;
+		penc = mmu_psize_defs[psize].penc[apsize];
 		va &= ~((1ul << mmu_psize_defs[psize].shift) - 1);
 		va |= penc << 12;
 		va |= ssize << 8;
@@ -114,7 +114,8 @@ static inline void __tlbiel(unsigned long vpn, int psize, int ssize)
 
 }
 
-static inline void tlbie(unsigned long vpn, int psize, int ssize, int local)
+static inline void tlbie(unsigned long vpn, int psize, int apsize,
+			 int ssize, int local)
 {
 	unsigned int use_local = local && mmu_has_feature(MMU_FTR_TLBIEL);
 	int lock_tlbie = !mmu_has_feature(MMU_FTR_LOCKLESS_TLBIE);
@@ -125,10 +126,10 @@ static inline void tlbie(unsigned long vpn, int psize, int ssize, int local)
 		raw_spin_lock(&native_tlbie_lock);
 	asm volatile("ptesync": : :"memory");
 	if (use_local) {
-		__tlbiel(vpn, psize, ssize);
+		__tlbiel(vpn, psize, apsize, ssize);
 		asm volatile("ptesync": : :"memory");
 	} else {
-		__tlbie(vpn, psize, ssize);
+		__tlbie(vpn, psize, apsize, ssize);
 		asm volatile("eieio; tlbsync; ptesync": : :"memory");
 	}
 	if (lock_tlbie && !use_local)
@@ -156,7 +157,7 @@ static inline void native_unlock_hpte(struct hash_pte *hptep)
 
 static long native_hpte_insert(unsigned long hpte_group, unsigned long vpn,
 			unsigned long pa, unsigned long rflags,
-			unsigned long vflags, int psize, int ssize)
+			unsigned long vflags, int psize, int apsize, int ssize)
 {
 	struct hash_pte *hptep = htab_address + hpte_group;
 	unsigned long hpte_v, hpte_r;
@@ -183,8 +184,8 @@ static long native_hpte_insert(unsigned long hpte_group, unsigned long vpn,
 	if (i == HPTES_PER_GROUP)
 		return -1;
 
-	hpte_v = hpte_encode_v(vpn, psize, ssize) | vflags | HPTE_V_VALID;
-	hpte_r = hpte_encode_r(pa, psize) | rflags;
+	hpte_v = hpte_encode_v(vpn, psize, apsize, ssize) | vflags | HPTE_V_VALID;
+	hpte_r = hpte_encode_r(pa, psize, apsize) | rflags;
 
 	if (!(vflags & HPTE_V_BOLTED)) {
 		DBG_LOW(" i=%x hpte_v=%016lx, hpte_r=%016lx\n",
@@ -244,6 +245,48 @@ static long native_hpte_remove(unsigned long hpte_group)
 	return i;
 }
 
+static inline int hpte_actual_psize(struct hash_pte *hptep, int psize)
+{
+	int i, shift;
+	unsigned int mask;
+	/* Look at the 8 bit LP value */
+	unsigned int lp = (hptep->r >> LP_SHIFT) & ((1 << LP_BITS) - 1);
+
+	if (!(hptep->v & HPTE_V_VALID))
+		return -1;
+
+	/* First check if it is large page */
+	if (!(hptep->v & HPTE_V_LARGE))
+		return MMU_PAGE_4K;
+
+	/* start from 1 ignoring MMU_PAGE_4K */
+	for (i = 1; i < MMU_PAGE_COUNT; i++) {
+		/* valid entries have a shift value */
+		if (!mmu_psize_defs[i].shift)
+			continue;
+
+		/* invalid penc */
+		if (mmu_psize_defs[psize].penc[i] == -1)
+			continue;
+		/*
+		 * encoding bits per actual page size
+		 *        PTE LP     actual page size
+		 *    rrrr rrrz		>=8KB
+		 *    rrrr rrzz		>=16KB
+		 *    rrrr rzzz		>=32KB
+		 *    rrrr zzzz		>=64KB
+		 * .......
+		 */
+		shift = mmu_psize_defs[i].shift - LP_SHIFT;
+		if (shift > LP_BITS)
+			shift = LP_BITS;
+		mask = (1 << shift) - 1;
+		if ((lp & mask) == mmu_psize_defs[psize].penc[i])
+			return i;
+	}
+	return -1;
+}
+
 static long native_hpte_updatepp(unsigned long slot, unsigned long newpp,
 				 unsigned long vpn, int psize, int ssize,
 				 int local)
@@ -251,6 +294,7 @@ static long native_hpte_updatepp(unsigned long slot, unsigned long newpp,
 	struct hash_pte *hptep = htab_address + slot;
 	unsigned long hpte_v, want_v;
 	int ret = 0;
+	int actual_psize;
 
 	want_v = hpte_encode_avpn(vpn, psize, ssize);
 
@@ -260,9 +304,13 @@ static long native_hpte_updatepp(unsigned long slot, unsigned long newpp,
 	native_lock_hpte(hptep);
 
 	hpte_v = hptep->v;
-
+	actual_psize = hpte_actual_psize(hptep, psize);
+	if (actual_psize < 0) {
+		native_unlock_hpte(hptep);
+		return -1;
+	}
 	/* Even if we miss, we need to invalidate the TLB */
-	if (!HPTE_V_COMPARE(hpte_v, want_v) || !(hpte_v & HPTE_V_VALID)) {
+	if (!HPTE_V_COMPARE(hpte_v, want_v)) {
 		DBG_LOW(" -> miss\n");
 		ret = -1;
 	} else {
@@ -274,7 +322,7 @@ static long native_hpte_updatepp(unsigned long slot, unsigned long newpp,
 	native_unlock_hpte(hptep);
 
 	/* Ensure it is out of the tlb too. */
-	tlbie(vpn, psize, ssize, local);
+	tlbie(vpn, psize, actual_psize, ssize, local);
 
 	return ret;
 }
@@ -315,6 +363,7 @@ static long native_hpte_find(unsigned long vpn, int psize, int ssize)
 static void native_hpte_updateboltedpp(unsigned long newpp, unsigned long ea,
 				       int psize, int ssize)
 {
+	int actual_psize;
 	unsigned long vpn;
 	unsigned long vsid;
 	long slot;
@@ -327,13 +376,16 @@ static void native_hpte_updateboltedpp(unsigned long newpp, unsigned long ea,
 	if (slot == -1)
 		panic("could not find page to bolt\n");
 	hptep = htab_address + slot;
+	actual_psize = hpte_actual_psize(hptep, psize);
+	if (actual_psize < 0)
+		return;
 
 	/* Update the HPTE */
 	hptep->r = (hptep->r & ~(HPTE_R_PP | HPTE_R_N)) |
 		(newpp & (HPTE_R_PP | HPTE_R_N));
 
 	/* Ensure it is out of the tlb too. */
-	tlbie(vpn, psize, ssize, 0);
+	tlbie(vpn, psize, actual_psize, ssize, 0);
 }
 
 static void native_hpte_invalidate(unsigned long slot, unsigned long vpn,
@@ -343,6 +395,7 @@ static void native_hpte_invalidate(unsigned long slot, unsigned long vpn,
 	unsigned long hpte_v;
 	unsigned long want_v;
 	unsigned long flags;
+	int actual_psize;
 
 	local_irq_save(flags);
 
@@ -352,35 +405,38 @@ static void native_hpte_invalidate(unsigned long slot, unsigned long vpn,
 	native_lock_hpte(hptep);
 	hpte_v = hptep->v;
 
+	actual_psize = hpte_actual_psize(hptep, psize);
+	if (actual_psize < 0) {
+		native_unlock_hpte(hptep);
+		local_irq_restore(flags);
+		return;
+	}
 	/* Even if we miss, we need to invalidate the TLB */
-	if (!HPTE_V_COMPARE(hpte_v, want_v) || !(hpte_v & HPTE_V_VALID))
+	if (!HPTE_V_COMPARE(hpte_v, want_v))
 		native_unlock_hpte(hptep);
 	else
 		/* Invalidate the hpte. NOTE: this also unlocks it */
 		hptep->v = 0;
 
 	/* Invalidate the TLB */
-	tlbie(vpn, psize, ssize, local);
+	tlbie(vpn, psize, actual_psize, ssize, local);
 
 	local_irq_restore(flags);
 }
 
-#define LP_SHIFT	12
-#define LP_BITS		8
-#define LP_MASK(i)	((0xFF >> (i)) << LP_SHIFT)
-
 static void hpte_decode(struct hash_pte *hpte, unsigned long slot,
-			int *psize, int *ssize, unsigned long *vpn)
+			int *psize, int *apsize, int *ssize, unsigned long *vpn)
 {
 	unsigned long avpn, pteg, vpi;
 	unsigned long hpte_r = hpte->r;
 	unsigned long hpte_v = hpte->v;
 	unsigned long vsid, seg_off;
-	int i, size, shift, penc;
+	int i, size, a_size, shift, penc;
 
-	if (!(hpte_v & HPTE_V_LARGE))
-		size = MMU_PAGE_4K;
-	else {
+	if (!(hpte_v & HPTE_V_LARGE)) {
+		size   = MMU_PAGE_4K;
+		a_size = MMU_PAGE_4K;
+	} else {
 		for (i = 0; i < LP_BITS; i++) {
 			if ((hpte_r & LP_MASK(i+1)) == LP_MASK(i+1))
 				break;
@@ -388,19 +444,26 @@ static void hpte_decode(struct hash_pte *hpte, unsigned long slot,
 		penc = LP_MASK(i+1) >> LP_SHIFT;
 		for (size = 0; size < MMU_PAGE_COUNT; size++) {
 
-			/* 4K pages are not represented by LP */
-			if (size == MMU_PAGE_4K)
-				continue;
-
 			/* valid entries have a shift value */
 			if (!mmu_psize_defs[size].shift)
 				continue;
+			for (a_size = 0; a_size < MMU_PAGE_COUNT; a_size++) {
 
-			if (penc == mmu_psize_defs[size].penc)
-				break;
+				/* 4K pages are not represented by LP */
+				if (a_size == MMU_PAGE_4K)
+					continue;
+
+				/* valid entries have a shift value */
+				if (!mmu_psize_defs[a_size].shift)
+					continue;
+
+				if (penc == mmu_psize_defs[size].penc[a_size])
+					goto out;
+			}
 		}
 	}
 
+out:
 	/* This works for all page sizes, and for 256M and 1T segments */
 	*ssize = hpte_v >> HPTE_V_SSIZE_SHIFT;
 	shift = mmu_psize_defs[size].shift;
@@ -433,7 +496,8 @@ static void hpte_decode(struct hash_pte *hpte, unsigned long slot,
 	default:
 		*vpn = size = 0;
 	}
-	*psize = size;
+	*psize  = size;
+	*apsize = a_size;
 }
 
 /*
@@ -451,7 +515,7 @@ static void native_hpte_clear(void)
 	struct hash_pte *hptep = htab_address;
 	unsigned long hpte_v;
 	unsigned long pteg_count;
-	int psize, ssize;
+	int psize, apsize, ssize;
 
 	pteg_count = htab_hash_mask + 1;
 
@@ -477,9 +541,9 @@ static void native_hpte_clear(void)
 		 * already hold the native_tlbie_lock.
 		 */
 		if (hpte_v & HPTE_V_VALID) {
-			hpte_decode(hptep, slot, &psize, &ssize, &vpn);
+			hpte_decode(hptep, slot, &psize, &apsize, &ssize, &vpn);
 			hptep->v = 0;
-			__tlbie(vpn, psize, ssize);
+			__tlbie(vpn, psize, apsize, ssize);
 		}
 	}
 
@@ -540,7 +604,7 @@ static void native_flush_hash_range(unsigned long number, int local)
 
 			pte_iterate_hashed_subpages(pte, psize,
 						    vpn, index, shift) {
-				__tlbiel(vpn, psize, ssize);
+				__tlbiel(vpn, psize, psize, ssize);
 			} pte_iterate_hashed_end();
 		}
 		asm volatile("ptesync":::"memory");
@@ -557,7 +621,7 @@ static void native_flush_hash_range(unsigned long number, int local)
 
 			pte_iterate_hashed_subpages(pte, psize,
 						    vpn, index, shift) {
-				__tlbie(vpn, psize, ssize);
+				__tlbie(vpn, psize, psize, ssize);
 			} pte_iterate_hashed_end();
 		}
 		asm volatile("eieio; tlbsync; ptesync":::"memory");
diff --git a/arch/powerpc/mm/hash_utils_64.c b/arch/powerpc/mm/hash_utils_64.c
index bfeab83..a5a5067 100644
--- a/arch/powerpc/mm/hash_utils_64.c
+++ b/arch/powerpc/mm/hash_utils_64.c
@@ -125,7 +125,7 @@ static struct mmu_psize_def mmu_psize_defaults_old[] = {
 	[MMU_PAGE_4K] = {
 		.shift	= 12,
 		.sllp	= 0,
-		.penc	= 0,
+		.penc   = {[MMU_PAGE_4K] = 0, [1 ... MMU_PAGE_COUNT - 1] = -1},
 		.avpnm	= 0,
 		.tlbiel = 0,
 	},
@@ -139,14 +139,15 @@ static struct mmu_psize_def mmu_psize_defaults_gp[] = {
 	[MMU_PAGE_4K] = {
 		.shift	= 12,
 		.sllp	= 0,
-		.penc	= 0,
+		.penc   = {[MMU_PAGE_4K] = 0, [1 ... MMU_PAGE_COUNT - 1] = -1},
 		.avpnm	= 0,
 		.tlbiel = 1,
 	},
 	[MMU_PAGE_16M] = {
 		.shift	= 24,
 		.sllp	= SLB_VSID_L,
-		.penc	= 0,
+		.penc   = {[0 ... MMU_PAGE_16M - 1] = -1, [MMU_PAGE_16M] = 0,
+			    [MMU_PAGE_16M + 1 ... MMU_PAGE_COUNT - 1] = -1 },
 		.avpnm	= 0x1UL,
 		.tlbiel = 0,
 	},
@@ -208,7 +209,7 @@ int htab_bolt_mapping(unsigned long vstart, unsigned long vend,
 
 		BUG_ON(!ppc_md.hpte_insert);
 		ret = ppc_md.hpte_insert(hpteg, vpn, paddr, tprot,
-					 HPTE_V_BOLTED, psize, ssize);
+					 HPTE_V_BOLTED, psize, psize, ssize);
 
 		if (ret < 0)
 			break;
@@ -275,6 +276,30 @@ static void __init htab_init_seg_sizes(void)
 	of_scan_flat_dt(htab_dt_scan_seg_sizes, NULL);
 }
 
+static int __init get_idx_from_shift(unsigned int shift)
+{
+	int idx = -1;
+
+	switch (shift) {
+	case 0xc:
+		idx = MMU_PAGE_4K;
+		break;
+	case 0x10:
+		idx = MMU_PAGE_64K;
+		break;
+	case 0x14:
+		idx = MMU_PAGE_1M;
+		break;
+	case 0x18:
+		idx = MMU_PAGE_16M;
+		break;
+	case 0x22:
+		idx = MMU_PAGE_16G;
+		break;
+	}
+	return idx;
+}
+
 static int __init htab_dt_scan_page_sizes(unsigned long node,
 					  const char *uname, int depth,
 					  void *data)
@@ -294,60 +319,61 @@ static int __init htab_dt_scan_page_sizes(unsigned long node,
 		size /= 4;
 		cur_cpu_spec->mmu_features &= ~(MMU_FTR_16M_PAGE);
 		while(size > 0) {
-			unsigned int shift = prop[0];
+			unsigned int base_shift = prop[0];
 			unsigned int slbenc = prop[1];
 			unsigned int lpnum = prop[2];
-			unsigned int lpenc = 0;
 			struct mmu_psize_def *def;
-			int idx = -1;
+			int idx, base_idx;
 
 			size -= 3; prop += 3;
-			while(size > 0 && lpnum) {
-				if (prop[0] == shift)
-					lpenc = prop[1];
-				prop += 2; size -= 2;
-				lpnum--;
+			base_idx = get_idx_from_shift(base_shift);
+			if (base_idx < 0) {
+				/*
+				 * skip the pte encoding also
+				 */
+				prop += lpnum * 2; size -= lpnum * 2;
+				continue;
 			}
-			switch(shift) {
-			case 0xc:
-				idx = MMU_PAGE_4K;
-				break;
-			case 0x10:
-				idx = MMU_PAGE_64K;
-				break;
-			case 0x14:
-				idx = MMU_PAGE_1M;
-				break;
-			case 0x18:
-				idx = MMU_PAGE_16M;
+			def = &mmu_psize_defs[base_idx];
+			if (base_idx == MMU_PAGE_16M)
 				cur_cpu_spec->mmu_features |= MMU_FTR_16M_PAGE;
-				break;
-			case 0x22:
-				idx = MMU_PAGE_16G;
-				break;
-			}
-			if (idx < 0)
-				continue;
-			def = &mmu_psize_defs[idx];
-			def->shift = shift;
-			if (shift <= 23)
+
+			def->shift = base_shift;
+			if (base_shift <= 23)
 				def->avpnm = 0;
 			else
-				def->avpnm = (1 << (shift - 23)) - 1;
+				def->avpnm = (1 << (base_shift - 23)) - 1;
 			def->sllp = slbenc;
-			def->penc = lpenc;
-			/* We don't know for sure what's up with tlbiel, so
+			/*
+			 * We don't know for sure what's up with tlbiel, so
 			 * for now we only set it for 4K and 64K pages
 			 */
-			if (idx == MMU_PAGE_4K || idx == MMU_PAGE_64K)
+			if (base_idx == MMU_PAGE_4K || base_idx == MMU_PAGE_64K)
 				def->tlbiel = 1;
 			else
 				def->tlbiel = 0;
 
-			DBG(" %d: shift=%02x, sllp=%04lx, avpnm=%08lx, "
-			    "tlbiel=%d, penc=%d\n",
-			    idx, shift, def->sllp, def->avpnm, def->tlbiel,
-			    def->penc);
+			while (size > 0 && lpnum) {
+				unsigned int shift = prop[0];
+				int penc  = prop[1];
+
+				prop += 2; size -= 2;
+				lpnum--;
+
+				idx = get_idx_from_shift(shift);
+				if (idx < 0)
+					continue;
+
+				if (penc == -1)
+					pr_err("Invalid penc for base_shift=%d "
+					       "shift=%d\n", base_shift, shift);
+
+				def->penc[idx] = penc;
+				DBG(" %d: shift=%02x, sllp=%04lx, "
+				    "avpnm=%08lx, tlbiel=%d, penc=%d\n",
+				    idx, shift, def->sllp, def->avpnm,
+				    def->tlbiel, def->penc[idx]);
+			}
 		}
 		return 1;
 	}
@@ -396,10 +422,21 @@ static int __init htab_dt_scan_hugepage_blocks(unsigned long node,
 }
 #endif /* CONFIG_HUGETLB_PAGE */
 
+static void mmu_psize_set_default_penc(void)
+{
+	int bpsize, apsize;
+	for (bpsize = 0; bpsize < MMU_PAGE_COUNT; bpsize++)
+		for (apsize = 0; apsize < MMU_PAGE_COUNT; apsize++)
+			mmu_psize_defs[bpsize].penc[apsize] = -1;
+}
+
 static void __init htab_init_page_sizes(void)
 {
 	int rc;
 
+	/* se the invalid penc to -1 */
+	mmu_psize_set_default_penc();
+
 	/* Default to 4K pages only */
 	memcpy(mmu_psize_defs, mmu_psize_defaults_old,
 	       sizeof(mmu_psize_defaults_old));
diff --git a/arch/powerpc/mm/hugetlbpage-hash64.c b/arch/powerpc/mm/hugetlbpage-hash64.c
index cecad34..e0d52ee 100644
--- a/arch/powerpc/mm/hugetlbpage-hash64.c
+++ b/arch/powerpc/mm/hugetlbpage-hash64.c
@@ -103,7 +103,7 @@ repeat:
 
 		/* Insert into the hash table, primary slot */
 		slot = ppc_md.hpte_insert(hpte_group, vpn, pa, rflags, 0,
-					  mmu_psize, ssize);
+					  mmu_psize, mmu_psize, ssize);
 
 		/* Primary is full, try the secondary */
 		if (unlikely(slot == -1)) {
@@ -111,7 +111,7 @@ repeat:
 				      HPTES_PER_GROUP) & ~0x7UL;
 			slot = ppc_md.hpte_insert(hpte_group, vpn, pa, rflags,
 						  HPTE_V_SECONDARY,
-						  mmu_psize, ssize);
+						  mmu_psize, mmu_psize, ssize);
 			if (slot == -1) {
 				if (mftb() & 0x1)
 					hpte_group = ((hash & htab_hash_mask) *
diff --git a/arch/powerpc/platforms/cell/beat_htab.c b/arch/powerpc/platforms/cell/beat_htab.c
index 472f9a7..246e1d8 100644
--- a/arch/powerpc/platforms/cell/beat_htab.c
+++ b/arch/powerpc/platforms/cell/beat_htab.c
@@ -90,7 +90,7 @@ static inline unsigned int beat_read_mask(unsigned hpte_group)
 static long beat_lpar_hpte_insert(unsigned long hpte_group,
 				  unsigned long vpn, unsigned long pa,
 				  unsigned long rflags, unsigned long vflags,
-				  int psize, int ssize)
+				  int psize, int apsize, int ssize)
 {
 	unsigned long lpar_rc;
 	u64 hpte_v, hpte_r, slot;
@@ -103,9 +103,9 @@ static long beat_lpar_hpte_insert(unsigned long hpte_group,
 			"rflags=%lx, vflags=%lx, psize=%d)\n",
 		hpte_group, va, pa, rflags, vflags, psize);
 
-	hpte_v = hpte_encode_v(vpn, psize, MMU_SEGSIZE_256M) |
+	hpte_v = hpte_encode_v(vpn, psize, apsize, MMU_SEGSIZE_256M) |
 		vflags | HPTE_V_VALID;
-	hpte_r = hpte_encode_r(pa, psize) | rflags;
+	hpte_r = hpte_encode_r(pa, psize, apsize) | rflags;
 
 	if (!(vflags & HPTE_V_BOLTED))
 		DBG_LOW(" hpte_v=%016lx, hpte_r=%016lx\n", hpte_v, hpte_r);
@@ -314,7 +314,7 @@ void __init hpte_init_beat(void)
 static long beat_lpar_hpte_insert_v3(unsigned long hpte_group,
 				  unsigned long vpn, unsigned long pa,
 				  unsigned long rflags, unsigned long vflags,
-				  int psize, int ssize)
+				  int psize, int apsize, int ssize)
 {
 	unsigned long lpar_rc;
 	u64 hpte_v, hpte_r, slot;
@@ -327,9 +327,9 @@ static long beat_lpar_hpte_insert_v3(unsigned long hpte_group,
 			"rflags=%lx, vflags=%lx, psize=%d)\n",
 		hpte_group, vpn, pa, rflags, vflags, psize);
 
-	hpte_v = hpte_encode_v(vpn, psize, MMU_SEGSIZE_256M) |
+	hpte_v = hpte_encode_v(vpn, psize, apsize, MMU_SEGSIZE_256M) |
 		vflags | HPTE_V_VALID;
-	hpte_r = hpte_encode_r(pa, psize) | rflags;
+	hpte_r = hpte_encode_r(pa, psize, apsize) | rflags;
 
 	if (!(vflags & HPTE_V_BOLTED))
 		DBG_LOW(" hpte_v=%016lx, hpte_r=%016lx\n", hpte_v, hpte_r);
@@ -373,7 +373,7 @@ static long beat_lpar_hpte_updatepp_v3(unsigned long slot,
 	unsigned long pss;
 
 	want_v = hpte_encode_avpn(vpn, psize, MMU_SEGSIZE_256M);
-	pss = (psize == MMU_PAGE_4K) ? -1UL : mmu_psize_defs[psize].penc;
+	pss = (psize == MMU_PAGE_4K) ? -1UL : mmu_psize_defs[psize].penc[psize];
 
 	DBG_LOW("    update: "
 		"avpnv=%016lx, slot=%016lx, psize: %d, newpp %016lx ... ",
@@ -403,7 +403,7 @@ static void beat_lpar_hpte_invalidate_v3(unsigned long slot, unsigned long vpn,
 	DBG_LOW("    inval : slot=%lx, vpn=%016lx, psize: %d, local: %d\n",
 		slot, vpn, psize, local);
 	want_v = hpte_encode_avpn(vpn, psize, MMU_SEGSIZE_256M);
-	pss = (psize == MMU_PAGE_4K) ? -1UL : mmu_psize_defs[psize].penc;
+	pss = (psize == MMU_PAGE_4K) ? -1UL : mmu_psize_defs[psize].penc[psize];
 
 	lpar_rc = beat_invalidate_htab_entry3(0, slot, want_v, pss);
 
diff --git a/arch/powerpc/platforms/ps3/htab.c b/arch/powerpc/platforms/ps3/htab.c
index 07a4bba..44f06d2 100644
--- a/arch/powerpc/platforms/ps3/htab.c
+++ b/arch/powerpc/platforms/ps3/htab.c
@@ -45,7 +45,7 @@ static DEFINE_SPINLOCK(ps3_htab_lock);
 
 static long ps3_hpte_insert(unsigned long hpte_group, unsigned long vpn,
 	unsigned long pa, unsigned long rflags, unsigned long vflags,
-	int psize, int ssize)
+	int psize, int apsize, int ssize)
 {
 	int result;
 	u64 hpte_v, hpte_r;
@@ -61,8 +61,8 @@ static long ps3_hpte_insert(unsigned long hpte_group, unsigned long vpn,
 	 */
 	vflags &= ~HPTE_V_SECONDARY;
 
-	hpte_v = hpte_encode_v(vpn, psize, ssize) | vflags | HPTE_V_VALID;
-	hpte_r = hpte_encode_r(ps3_mm_phys_to_lpar(pa), psize) | rflags;
+	hpte_v = hpte_encode_v(vpn, psize, apsize, ssize) | vflags | HPTE_V_VALID;
+	hpte_r = hpte_encode_r(ps3_mm_phys_to_lpar(pa), psize, apsize) | rflags;
 
 	spin_lock_irqsave(&ps3_htab_lock, flags);
 
diff --git a/arch/powerpc/platforms/pseries/lpar.c b/arch/powerpc/platforms/pseries/lpar.c
index a77c35b..3daced3 100644
--- a/arch/powerpc/platforms/pseries/lpar.c
+++ b/arch/powerpc/platforms/pseries/lpar.c
@@ -109,7 +109,7 @@ void vpa_init(int cpu)
 static long pSeries_lpar_hpte_insert(unsigned long hpte_group,
 				     unsigned long vpn, unsigned long pa,
 				     unsigned long rflags, unsigned long vflags,
-				     int psize, int ssize)
+				     int psize, int apsize, int ssize)
 {
 	unsigned long lpar_rc;
 	unsigned long flags;
@@ -121,8 +121,8 @@ static long pSeries_lpar_hpte_insert(unsigned long hpte_group,
 			 "pa=%016lx, rflags=%lx, vflags=%lx, psize=%d)\n",
 			 hpte_group, vpn,  pa, rflags, vflags, psize);
 
-	hpte_v = hpte_encode_v(vpn, psize, ssize) | vflags | HPTE_V_VALID;
-	hpte_r = hpte_encode_r(pa, psize) | rflags;
+	hpte_v = hpte_encode_v(vpn, psize, apsize, ssize) | vflags | HPTE_V_VALID;
+	hpte_r = hpte_encode_r(pa, psize, apsize) | rflags;
 
 	if (!(vflags & HPTE_V_BOLTED))
 		pr_devel(" hpte_v=%016lx, hpte_r=%016lx\n", hpte_v, hpte_r);
-- 
1.7.10

^ permalink raw reply related	[flat|nested] 73+ messages in thread

* [PATCH -V5 09/25] powerpc: Fix hpte_decode to use the correct decoding for page sizes
  2013-04-04  5:57 [PATCH -V5 00/25] THP support for PPC64 Aneesh Kumar K.V
                   ` (7 preceding siblings ...)
  2013-04-04  5:57 ` [PATCH -V5 08/25] powerpc: Decode the pte-lp-encoding bits correctly Aneesh Kumar K.V
@ 2013-04-04  5:57 ` Aneesh Kumar K.V
  2013-04-11  3:20   ` David Gibson
  2013-04-04  5:57 ` [PATCH -V5 10/25] powerpc: print both base and actual page size on hash failure Aneesh Kumar K.V
                   ` (18 subsequent siblings)
  27 siblings, 1 reply; 73+ messages in thread
From: Aneesh Kumar K.V @ 2013-04-04  5:57 UTC (permalink / raw)
  To: benh, paulus; +Cc: linux-mm, linuxppc-dev, Aneesh Kumar K.V

From: "Aneesh Kumar K.V" <aneesh.kumar@linux.vnet.ibm.com>

As per ISA doc, we encode base and actual page size in the LP bits of
PTE. The number of bit used to encode the page sizes depend on actual
page size.  ISA doc lists this as

   PTE LP     actual page size
rrrr rrrz 	>=8KB
rrrr rrzz	>=16KB
rrrr rzzz 	>=32KB
rrrr zzzz 	>=64KB
rrrz zzzz 	>=128KB
rrzz zzzz 	>=256KB
rzzz zzzz	>=512KB
zzzz zzzz 	>=1MB

ISA doc also says
"The values of the “z” bits used to specify each size, along with all possible
values of “r” bits in the LP field, must result in LP values distinct from
other LP values for other sizes."

based on the above update hpte_decode to use the correct decoding for LP bits.

Acked-by: Paul Mackerras <paulus@samba.org>
Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
---
 arch/powerpc/mm/hash_native_64.c |   38 ++++++++++++++++++++++++--------------
 1 file changed, 24 insertions(+), 14 deletions(-)

diff --git a/arch/powerpc/mm/hash_native_64.c b/arch/powerpc/mm/hash_native_64.c
index aa0499b..b461b2d 100644
--- a/arch/powerpc/mm/hash_native_64.c
+++ b/arch/powerpc/mm/hash_native_64.c
@@ -428,41 +428,51 @@ static void hpte_decode(struct hash_pte *hpte, unsigned long slot,
 			int *psize, int *apsize, int *ssize, unsigned long *vpn)
 {
 	unsigned long avpn, pteg, vpi;
-	unsigned long hpte_r = hpte->r;
 	unsigned long hpte_v = hpte->v;
 	unsigned long vsid, seg_off;
-	int i, size, a_size, shift, penc;
+	int size, a_size, shift, mask;
+	/* Look at the 8 bit LP value */
+	unsigned int lp = (hpte->r >> LP_SHIFT) & ((1 << LP_BITS) - 1);
 
 	if (!(hpte_v & HPTE_V_LARGE)) {
 		size   = MMU_PAGE_4K;
 		a_size = MMU_PAGE_4K;
 	} else {
-		for (i = 0; i < LP_BITS; i++) {
-			if ((hpte_r & LP_MASK(i+1)) == LP_MASK(i+1))
-				break;
-		}
-		penc = LP_MASK(i+1) >> LP_SHIFT;
 		for (size = 0; size < MMU_PAGE_COUNT; size++) {
 
 			/* valid entries have a shift value */
 			if (!mmu_psize_defs[size].shift)
 				continue;
-			for (a_size = 0; a_size < MMU_PAGE_COUNT; a_size++) {
-
-				/* 4K pages are not represented by LP */
-				if (a_size == MMU_PAGE_4K)
-					continue;
 
+			/* start from 1 ignoring MMU_PAGE_4K */
+			for (a_size = 1; a_size < MMU_PAGE_COUNT; a_size++) {
 				/* valid entries have a shift value */
 				if (!mmu_psize_defs[a_size].shift)
 					continue;
 
-				if (penc == mmu_psize_defs[size].penc[a_size])
+				/* invalid penc */
+				if (mmu_psize_defs[size].penc[a_size] == -1)
+					continue;
+				/*
+				 * encoding bits per actual page size
+				 *        PTE LP     actual page size
+				 *    rrrr rrrz		>=8KB
+				 *    rrrr rrzz		>=16KB
+				 *    rrrr rzzz		>=32KB
+				 *    rrrr zzzz		>=64KB
+				 * .......
+				 */
+				shift = mmu_psize_defs[a_size].shift - LP_SHIFT;
+				if (shift > LP_BITS)
+					shift = LP_BITS;
+				mask = (1 << shift) - 1;
+				if ((lp & mask) ==
+				    mmu_psize_defs[size].penc[a_size]) {
 					goto out;
+				}
 			}
 		}
 	}
-
 out:
 	/* This works for all page sizes, and for 256M and 1T segments */
 	*ssize = hpte_v >> HPTE_V_SSIZE_SHIFT;
-- 
1.7.10

^ permalink raw reply related	[flat|nested] 73+ messages in thread

* [PATCH -V5 10/25] powerpc: print both base and actual page size on hash failure
  2013-04-04  5:57 [PATCH -V5 00/25] THP support for PPC64 Aneesh Kumar K.V
                   ` (8 preceding siblings ...)
  2013-04-04  5:57 ` [PATCH -V5 09/25] powerpc: Fix hpte_decode to use the correct decoding for page sizes Aneesh Kumar K.V
@ 2013-04-04  5:57 ` Aneesh Kumar K.V
  2013-04-11  3:21   ` David Gibson
  2013-04-04  5:57 ` [PATCH -V5 11/25] powerpc: Print page size info during boot Aneesh Kumar K.V
                   ` (17 subsequent siblings)
  27 siblings, 1 reply; 73+ messages in thread
From: Aneesh Kumar K.V @ 2013-04-04  5:57 UTC (permalink / raw)
  To: benh, paulus; +Cc: linux-mm, linuxppc-dev, Aneesh Kumar K.V

From: "Aneesh Kumar K.V" <aneesh.kumar@linux.vnet.ibm.com>

Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
---
 arch/powerpc/include/asm/mmu-hash64.h |    3 ++-
 arch/powerpc/mm/hash_utils_64.c       |   12 +++++++-----
 arch/powerpc/mm/hugetlbpage-hash64.c  |    2 +-
 3 files changed, 10 insertions(+), 7 deletions(-)

diff --git a/arch/powerpc/include/asm/mmu-hash64.h b/arch/powerpc/include/asm/mmu-hash64.h
index e42f4a3..e187254 100644
--- a/arch/powerpc/include/asm/mmu-hash64.h
+++ b/arch/powerpc/include/asm/mmu-hash64.h
@@ -324,7 +324,8 @@ int __hash_page_huge(unsigned long ea, unsigned long access, unsigned long vsid,
 		     unsigned int shift, unsigned int mmu_psize);
 extern void hash_failure_debug(unsigned long ea, unsigned long access,
 			       unsigned long vsid, unsigned long trap,
-			       int ssize, int psize, unsigned long pte);
+			       int ssize, int psize, int lpsize,
+			       unsigned long pte);
 extern int htab_bolt_mapping(unsigned long vstart, unsigned long vend,
 			     unsigned long pstart, unsigned long prot,
 			     int psize, int ssize);
diff --git a/arch/powerpc/mm/hash_utils_64.c b/arch/powerpc/mm/hash_utils_64.c
index a5a5067..56ff4bb 100644
--- a/arch/powerpc/mm/hash_utils_64.c
+++ b/arch/powerpc/mm/hash_utils_64.c
@@ -933,14 +933,14 @@ static inline int subpage_protection(struct mm_struct *mm, unsigned long ea)
 
 void hash_failure_debug(unsigned long ea, unsigned long access,
 			unsigned long vsid, unsigned long trap,
-			int ssize, int psize, unsigned long pte)
+			int ssize, int psize, int lpsize, unsigned long pte)
 {
 	if (!printk_ratelimit())
 		return;
 	pr_info("mm: Hashing failure ! EA=0x%lx access=0x%lx current=%s\n",
 		ea, access, current->comm);
-	pr_info("    trap=0x%lx vsid=0x%lx ssize=%d psize=%d pte=0x%lx\n",
-		trap, vsid, ssize, psize, pte);
+	pr_info("    trap=0x%lx vsid=0x%lx ssize=%d base psize=%d psize %d pte=0x%lx\n",
+		trap, vsid, ssize, psize, lpsize, pte);
 }
 
 /* Result code is:
@@ -1113,7 +1113,7 @@ int hash_page(unsigned long ea, unsigned long access, unsigned long trap)
 	 */
 	if (rc == -1)
 		hash_failure_debug(ea, access, vsid, trap, ssize, psize,
-				   pte_val(*ptep));
+				   psize, pte_val(*ptep));
 #ifndef CONFIG_PPC_64K_PAGES
 	DBG_LOW(" o-pte: %016lx\n", pte_val(*ptep));
 #else
@@ -1191,7 +1191,9 @@ void hash_preload(struct mm_struct *mm, unsigned long ea,
 	 */
 	if (rc == -1)
 		hash_failure_debug(ea, access, vsid, trap, ssize,
-				   mm->context.user_psize, pte_val(*ptep));
+				   mm->context.user_psize,
+				   mm->context.user_psize,
+				   pte_val(*ptep));
 
 	local_irq_restore(flags);
 }
diff --git a/arch/powerpc/mm/hugetlbpage-hash64.c b/arch/powerpc/mm/hugetlbpage-hash64.c
index e0d52ee..06ecb55 100644
--- a/arch/powerpc/mm/hugetlbpage-hash64.c
+++ b/arch/powerpc/mm/hugetlbpage-hash64.c
@@ -129,7 +129,7 @@ repeat:
 		if (unlikely(slot == -2)) {
 			*ptep = __pte(old_pte);
 			hash_failure_debug(ea, access, vsid, trap, ssize,
-					   mmu_psize, old_pte);
+					   mmu_psize, mmu_psize, old_pte);
 			return -1;
 		}
 
-- 
1.7.10

^ permalink raw reply related	[flat|nested] 73+ messages in thread

* [PATCH -V5 11/25] powerpc: Print page size info during boot
  2013-04-04  5:57 [PATCH -V5 00/25] THP support for PPC64 Aneesh Kumar K.V
                   ` (9 preceding siblings ...)
  2013-04-04  5:57 ` [PATCH -V5 10/25] powerpc: print both base and actual page size on hash failure Aneesh Kumar K.V
@ 2013-04-04  5:57 ` Aneesh Kumar K.V
  2013-04-04  5:57 ` [PATCH -V5 12/25] powerpc: Return all the valid pte ecndoing in KVM_PPC_GET_SMMU_INFO ioctl Aneesh Kumar K.V
                   ` (16 subsequent siblings)
  27 siblings, 0 replies; 73+ messages in thread
From: Aneesh Kumar K.V @ 2013-04-04  5:57 UTC (permalink / raw)
  To: benh, paulus; +Cc: linux-mm, linuxppc-dev, Aneesh Kumar K.V

From: "Aneesh Kumar K.V" <aneesh.kumar@linux.vnet.ibm.com>

This gives hint about different base and actual page size combination
supported by the platform.

Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
---
 arch/powerpc/mm/hash_utils_64.c |   10 +++++-----
 1 file changed, 5 insertions(+), 5 deletions(-)

diff --git a/arch/powerpc/mm/hash_utils_64.c b/arch/powerpc/mm/hash_utils_64.c
index 56ff4bb..1f2ebbd 100644
--- a/arch/powerpc/mm/hash_utils_64.c
+++ b/arch/powerpc/mm/hash_utils_64.c
@@ -315,7 +315,7 @@ static int __init htab_dt_scan_page_sizes(unsigned long node,
 	prop = (u32 *)of_get_flat_dt_prop(node,
 					  "ibm,segment-page-sizes", &size);
 	if (prop != NULL) {
-		DBG("Page sizes from device-tree:\n");
+		pr_info("Page sizes from device-tree:\n");
 		size /= 4;
 		cur_cpu_spec->mmu_features &= ~(MMU_FTR_16M_PAGE);
 		while(size > 0) {
@@ -369,10 +369,10 @@ static int __init htab_dt_scan_page_sizes(unsigned long node,
 					       "shift=%d\n", base_shift, shift);
 
 				def->penc[idx] = penc;
-				DBG(" %d: shift=%02x, sllp=%04lx, "
-				    "avpnm=%08lx, tlbiel=%d, penc=%d\n",
-				    idx, shift, def->sllp, def->avpnm,
-				    def->tlbiel, def->penc[idx]);
+				pr_info("base_shift=%d: shift=%d, sllp=0x%04lx,"
+					" avpnm=0x%08lx, tlbiel=%d, penc=%d\n",
+					base_shift, shift, def->sllp,
+					def->avpnm, def->tlbiel, def->penc[idx]);
 			}
 		}
 		return 1;
-- 
1.7.10

^ permalink raw reply related	[flat|nested] 73+ messages in thread

* [PATCH -V5 12/25] powerpc: Return all the valid pte ecndoing in KVM_PPC_GET_SMMU_INFO ioctl
  2013-04-04  5:57 [PATCH -V5 00/25] THP support for PPC64 Aneesh Kumar K.V
                   ` (10 preceding siblings ...)
  2013-04-04  5:57 ` [PATCH -V5 11/25] powerpc: Print page size info during boot Aneesh Kumar K.V
@ 2013-04-04  5:57 ` Aneesh Kumar K.V
  2013-04-11  3:24   ` David Gibson
  2013-04-04  5:57 ` [PATCH -V5 13/25] powerpc: Update tlbie/tlbiel as per ISA doc Aneesh Kumar K.V
                   ` (15 subsequent siblings)
  27 siblings, 1 reply; 73+ messages in thread
From: Aneesh Kumar K.V @ 2013-04-04  5:57 UTC (permalink / raw)
  To: benh, paulus; +Cc: linux-mm, linuxppc-dev, Aneesh Kumar K.V

From: "Aneesh Kumar K.V" <aneesh.kumar@linux.vnet.ibm.com>

Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
---
 arch/powerpc/kvm/book3s_hv.c |   14 ++++++++++++--
 1 file changed, 12 insertions(+), 2 deletions(-)

diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c
index 48f6d99..f472414 100644
--- a/arch/powerpc/kvm/book3s_hv.c
+++ b/arch/powerpc/kvm/book3s_hv.c
@@ -1508,14 +1508,24 @@ long kvm_vm_ioctl_allocate_rma(struct kvm *kvm, struct kvm_allocate_rma *ret)
 static void kvmppc_add_seg_page_size(struct kvm_ppc_one_seg_page_size **sps,
 				     int linux_psize)
 {
+	int i, index = 0;
 	struct mmu_psize_def *def = &mmu_psize_defs[linux_psize];
 
 	if (!def->shift)
 		return;
 	(*sps)->page_shift = def->shift;
 	(*sps)->slb_enc = def->sllp;
-	(*sps)->enc[0].page_shift = def->shift;
-	(*sps)->enc[0].pte_enc = def->penc[linux_psize];
+	for (i = 0; i < MMU_PAGE_COUNT; i++) {
+		if (def->penc[i] != -1) {
+			if (index >= KVM_PPC_PAGE_SIZES_MAX_SZ) {
+				WARN_ON(1);
+				break;
+			}
+			(*sps)->enc[index].page_shift = mmu_psize_defs[i].shift;
+			(*sps)->enc[index].pte_enc = def->penc[i];
+			index++;
+		}
+	}
 	(*sps)++;
 }
 
-- 
1.7.10

^ permalink raw reply related	[flat|nested] 73+ messages in thread

* [PATCH -V5 13/25] powerpc: Update tlbie/tlbiel as per ISA doc
  2013-04-04  5:57 [PATCH -V5 00/25] THP support for PPC64 Aneesh Kumar K.V
                   ` (11 preceding siblings ...)
  2013-04-04  5:57 ` [PATCH -V5 12/25] powerpc: Return all the valid pte ecndoing in KVM_PPC_GET_SMMU_INFO ioctl Aneesh Kumar K.V
@ 2013-04-04  5:57 ` Aneesh Kumar K.V
  2013-04-11  3:30   ` David Gibson
  2013-04-04  5:57 ` [PATCH -V5 14/25] mm/THP: HPAGE_SHIFT is not a #define on some arch Aneesh Kumar K.V
                   ` (14 subsequent siblings)
  27 siblings, 1 reply; 73+ messages in thread
From: Aneesh Kumar K.V @ 2013-04-04  5:57 UTC (permalink / raw)
  To: benh, paulus; +Cc: linux-mm, linuxppc-dev, Aneesh Kumar K.V

From: "Aneesh Kumar K.V" <aneesh.kumar@linux.vnet.ibm.com>

This make sure we handle multiple page size segment correctly.

Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
---
 arch/powerpc/mm/hash_native_64.c |   30 ++++++++++++++++++++++++++++--
 1 file changed, 28 insertions(+), 2 deletions(-)

diff --git a/arch/powerpc/mm/hash_native_64.c b/arch/powerpc/mm/hash_native_64.c
index b461b2d..ac84fa6 100644
--- a/arch/powerpc/mm/hash_native_64.c
+++ b/arch/powerpc/mm/hash_native_64.c
@@ -61,7 +61,10 @@ static inline void __tlbie(unsigned long vpn, int psize, int apsize, int ssize)
 
 	switch (psize) {
 	case MMU_PAGE_4K:
+		/* clear out bits after (52) [0....52.....63] */
+		va &= ~((1ul << (64 - 52)) - 1);
 		va |= ssize << 8;
+		va |= mmu_psize_defs[apsize].sllp << 6;
 		asm volatile(ASM_FTR_IFCLR("tlbie %0,0", PPC_TLBIE(%1,%0), %2)
 			     : : "r" (va), "r"(0), "i" (CPU_FTR_ARCH_206)
 			     : "memory");
@@ -69,9 +72,19 @@ static inline void __tlbie(unsigned long vpn, int psize, int apsize, int ssize)
 	default:
 		/* We need 14 to 14 + i bits of va */
 		penc = mmu_psize_defs[psize].penc[apsize];
-		va &= ~((1ul << mmu_psize_defs[psize].shift) - 1);
+		va &= ~((1ul << mmu_psize_defs[apsize].shift) - 1);
 		va |= penc << 12;
 		va |= ssize << 8;
+		/* Add AVAL part */
+		if (psize != apsize) {
+			/*
+			 * MPSS, 64K base page size and 16MB parge page size
+			 * We don't need all the bits, but this seems to work.
+			 * vpn cover upto 65 bits of va. (0...65) and we need
+			 * 58..64 bits of va.
+			 */
+			va |= (vpn & 0xfe);
+		}
 		va |= 1; /* L */
 		asm volatile(ASM_FTR_IFCLR("tlbie %0,1", PPC_TLBIE(%1,%0), %2)
 			     : : "r" (va), "r"(0), "i" (CPU_FTR_ARCH_206)
@@ -96,16 +109,29 @@ static inline void __tlbiel(unsigned long vpn, int psize, int apsize, int ssize)
 
 	switch (psize) {
 	case MMU_PAGE_4K:
+		/* clear out bits after(52) [0....52.....63] */
+		va &= ~((1ul << (64 - 52)) - 1);
 		va |= ssize << 8;
+		va |= mmu_psize_defs[apsize].sllp << 6;
 		asm volatile(".long 0x7c000224 | (%0 << 11) | (0 << 21)"
 			     : : "r"(va) : "memory");
 		break;
 	default:
 		/* We need 14 to 14 + i bits of va */
 		penc = mmu_psize_defs[psize].penc[apsize];
-		va &= ~((1ul << mmu_psize_defs[psize].shift) - 1);
+		va &= ~((1ul << mmu_psize_defs[apsize].shift) - 1);
 		va |= penc << 12;
 		va |= ssize << 8;
+		/* Add AVAL part */
+		if (psize != apsize) {
+			/*
+			 * MPSS, 64K base page size and 16MB parge page size
+			 * We don't need all the bits, but this seems to work.
+			 * vpn cover upto 65 bits of va. (0...65) and we need
+			 * 58..64 bits of va.
+			 */
+			va |= (vpn & 0xfe);
+		}
 		va |= 1; /* L */
 		asm volatile(".long 0x7c000224 | (%0 << 11) | (1 << 21)"
 			     : : "r"(va) : "memory");
-- 
1.7.10

^ permalink raw reply related	[flat|nested] 73+ messages in thread

* [PATCH -V5 14/25] mm/THP: HPAGE_SHIFT is not a #define on some arch
  2013-04-04  5:57 [PATCH -V5 00/25] THP support for PPC64 Aneesh Kumar K.V
                   ` (12 preceding siblings ...)
  2013-04-04  5:57 ` [PATCH -V5 13/25] powerpc: Update tlbie/tlbiel as per ISA doc Aneesh Kumar K.V
@ 2013-04-04  5:57 ` Aneesh Kumar K.V
  2013-04-11  3:36   ` David Gibson
  2013-04-04  5:57 ` [PATCH -V5 15/25] mm/THP: Add pmd args to pgtable deposit and withdraw APIs Aneesh Kumar K.V
                   ` (13 subsequent siblings)
  27 siblings, 1 reply; 73+ messages in thread
From: Aneesh Kumar K.V @ 2013-04-04  5:57 UTC (permalink / raw)
  To: benh, paulus; +Cc: Andrea Arcangeli, linux-mm, linuxppc-dev, Aneesh Kumar K.V

From: "Aneesh Kumar K.V" <aneesh.kumar@linux.vnet.ibm.com>

On archs like powerpc that support different hugepage sizes, HPAGE_SHIFT
and other derived values like HPAGE_PMD_ORDER are not constants. So move
that to hugepage_init

Cc: Andrea Arcangeli <aarcange@redhat.com>
Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
---
 include/linux/huge_mm.h |    3 ---
 mm/huge_memory.c        |    9 ++++++---
 2 files changed, 6 insertions(+), 6 deletions(-)

diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h
index 1d76f8c..0022b70 100644
--- a/include/linux/huge_mm.h
+++ b/include/linux/huge_mm.h
@@ -119,9 +119,6 @@ extern void __split_huge_page_pmd(struct vm_area_struct *vma,
 	} while (0)
 extern void split_huge_page_pmd_mm(struct mm_struct *mm, unsigned long address,
 		pmd_t *pmd);
-#if HPAGE_PMD_ORDER > MAX_ORDER
-#error "hugepages can't be allocated by the buddy allocator"
-#endif
 extern int hugepage_madvise(struct vm_area_struct *vma,
 			    unsigned long *vm_flags, int advice);
 extern void __vma_adjust_trans_huge(struct vm_area_struct *vma,
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index b5783d8..1940ee0 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -44,7 +44,7 @@ unsigned long transparent_hugepage_flags __read_mostly =
 	(1<<TRANSPARENT_HUGEPAGE_USE_ZERO_PAGE_FLAG);
 
 /* default scan 8*512 pte (or vmas) every 30 second */
-static unsigned int khugepaged_pages_to_scan __read_mostly = HPAGE_PMD_NR*8;
+static unsigned int khugepaged_pages_to_scan __read_mostly;
 static unsigned int khugepaged_pages_collapsed;
 static unsigned int khugepaged_full_scans;
 static unsigned int khugepaged_scan_sleep_millisecs __read_mostly = 10000;
@@ -59,7 +59,7 @@ static DECLARE_WAIT_QUEUE_HEAD(khugepaged_wait);
  * it would have happened if the vma was large enough during page
  * fault.
  */
-static unsigned int khugepaged_max_ptes_none __read_mostly = HPAGE_PMD_NR-1;
+static unsigned int khugepaged_max_ptes_none __read_mostly;
 
 static int khugepaged(void *none);
 static int mm_slots_hash_init(void);
@@ -621,11 +621,14 @@ static int __init hugepage_init(void)
 	int err;
 	struct kobject *hugepage_kobj;
 
-	if (!has_transparent_hugepage()) {
+	if (!has_transparent_hugepage() || (HPAGE_PMD_ORDER > MAX_ORDER)) {
 		transparent_hugepage_flags = 0;
 		return -EINVAL;
 	}
 
+	khugepaged_pages_to_scan = HPAGE_PMD_NR*8;
+	khugepaged_max_ptes_none = HPAGE_PMD_NR-1;
+
 	err = hugepage_init_sysfs(&hugepage_kobj);
 	if (err)
 		return err;
-- 
1.7.10

^ permalink raw reply related	[flat|nested] 73+ messages in thread

* [PATCH -V5 15/25] mm/THP: Add pmd args to pgtable deposit and withdraw APIs
  2013-04-04  5:57 [PATCH -V5 00/25] THP support for PPC64 Aneesh Kumar K.V
                   ` (13 preceding siblings ...)
  2013-04-04  5:57 ` [PATCH -V5 14/25] mm/THP: HPAGE_SHIFT is not a #define on some arch Aneesh Kumar K.V
@ 2013-04-04  5:57 ` Aneesh Kumar K.V
  2013-04-11  3:40   ` David Gibson
  2013-04-04  5:57 ` [PATCH -V5 16/25] mm/THP: withdraw the pgtable after pmdp related operations Aneesh Kumar K.V
                   ` (12 subsequent siblings)
  27 siblings, 1 reply; 73+ messages in thread
From: Aneesh Kumar K.V @ 2013-04-04  5:57 UTC (permalink / raw)
  To: benh, paulus; +Cc: Andrea Arcangeli, linux-mm, linuxppc-dev, Aneesh Kumar K.V

From: "Aneesh Kumar K.V" <aneesh.kumar@linux.vnet.ibm.com>

This will be later used by powerpc THP support. In powerpc we want to use
pgtable for storing the hash index values. So instead of adding them to
mm_context list, we would like to store them in the second half of pmd

Cc: Andrea Arcangeli <aarcange@redhat.com>
Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
---
 arch/s390/include/asm/pgtable.h     |    5 +++--
 arch/s390/mm/pgtable.c              |    5 +++--
 arch/sparc/include/asm/pgtable_64.h |    5 +++--
 arch/sparc/mm/tlb.c                 |    5 +++--
 include/asm-generic/pgtable.h       |    5 +++--
 mm/huge_memory.c                    |   18 +++++++++---------
 mm/pgtable-generic.c                |    5 +++--
 7 files changed, 27 insertions(+), 21 deletions(-)

diff --git a/arch/s390/include/asm/pgtable.h b/arch/s390/include/asm/pgtable.h
index 098adbb..883296e 100644
--- a/arch/s390/include/asm/pgtable.h
+++ b/arch/s390/include/asm/pgtable.h
@@ -1232,10 +1232,11 @@ static inline void __pmd_idte(unsigned long address, pmd_t *pmdp)
 #define SEGMENT_RW	__pgprot(_HPAGE_TYPE_RW)
 
 #define __HAVE_ARCH_PGTABLE_DEPOSIT
-extern void pgtable_trans_huge_deposit(struct mm_struct *mm, pgtable_t pgtable);
+extern void pgtable_trans_huge_deposit(struct mm_struct *mm, pmd_t *pmdp,
+				       pgtable_t pgtable);
 
 #define __HAVE_ARCH_PGTABLE_WITHDRAW
-extern pgtable_t pgtable_trans_huge_withdraw(struct mm_struct *mm);
+extern pgtable_t pgtable_trans_huge_withdraw(struct mm_struct *mm, pmd_t *pmdp);
 
 static inline int pmd_trans_splitting(pmd_t pmd)
 {
diff --git a/arch/s390/mm/pgtable.c b/arch/s390/mm/pgtable.c
index ae44d2a..9ab3224 100644
--- a/arch/s390/mm/pgtable.c
+++ b/arch/s390/mm/pgtable.c
@@ -920,7 +920,8 @@ void pmdp_splitting_flush(struct vm_area_struct *vma, unsigned long address,
 	}
 }
 
-void pgtable_trans_huge_deposit(struct mm_struct *mm, pgtable_t pgtable)
+void pgtable_trans_huge_deposit(struct mm_struct *mm, pmd_t *pmdp,
+				pgtable_t pgtable)
 {
 	struct list_head *lh = (struct list_head *) pgtable;
 
@@ -934,7 +935,7 @@ void pgtable_trans_huge_deposit(struct mm_struct *mm, pgtable_t pgtable)
 	mm->pmd_huge_pte = pgtable;
 }
 
-pgtable_t pgtable_trans_huge_withdraw(struct mm_struct *mm)
+pgtable_t pgtable_trans_huge_withdraw(struct mm_struct *mm, pmd_t *pmdp)
 {
 	struct list_head *lh;
 	pgtable_t pgtable;
diff --git a/arch/sparc/include/asm/pgtable_64.h b/arch/sparc/include/asm/pgtable_64.h
index 08fcce9..4c86de2 100644
--- a/arch/sparc/include/asm/pgtable_64.h
+++ b/arch/sparc/include/asm/pgtable_64.h
@@ -853,10 +853,11 @@ extern void update_mmu_cache_pmd(struct vm_area_struct *vma, unsigned long addr,
 				 pmd_t *pmd);
 
 #define __HAVE_ARCH_PGTABLE_DEPOSIT
-extern void pgtable_trans_huge_deposit(struct mm_struct *mm, pgtable_t pgtable);
+extern void pgtable_trans_huge_deposit(struct mm_struct *mm, pmd_t *pmdp,
+				       pgtable_t pgtable);
 
 #define __HAVE_ARCH_PGTABLE_WITHDRAW
-extern pgtable_t pgtable_trans_huge_withdraw(struct mm_struct *mm);
+extern pgtable_t pgtable_trans_huge_withdraw(struct mm_struct *mm, pmd_t *pmdp);
 #endif
 
 /* Encode and de-code a swap entry */
diff --git a/arch/sparc/mm/tlb.c b/arch/sparc/mm/tlb.c
index 3e8fec3..79922f4 100644
--- a/arch/sparc/mm/tlb.c
+++ b/arch/sparc/mm/tlb.c
@@ -150,7 +150,8 @@ void set_pmd_at(struct mm_struct *mm, unsigned long addr,
 	}
 }
 
-void pgtable_trans_huge_deposit(struct mm_struct *mm, pgtable_t pgtable)
+void pgtable_trans_huge_deposit(struct mm_struct *mm, pmd_t *pmdp,
+				pgtable_t pgtable)
 {
 	struct list_head *lh = (struct list_head *) pgtable;
 
@@ -164,7 +165,7 @@ void pgtable_trans_huge_deposit(struct mm_struct *mm, pgtable_t pgtable)
 	mm->pmd_huge_pte = pgtable;
 }
 
-pgtable_t pgtable_trans_huge_withdraw(struct mm_struct *mm)
+pgtable_t pgtable_trans_huge_withdraw(struct mm_struct *mm, pmd_t *pmdp)
 {
 	struct list_head *lh;
 	pgtable_t pgtable;
diff --git a/include/asm-generic/pgtable.h b/include/asm-generic/pgtable.h
index 5cf680a..6f87e9e 100644
--- a/include/asm-generic/pgtable.h
+++ b/include/asm-generic/pgtable.h
@@ -163,11 +163,12 @@ extern void pmdp_splitting_flush(struct vm_area_struct *vma,
 #endif
 
 #ifndef __HAVE_ARCH_PGTABLE_DEPOSIT
-extern void pgtable_trans_huge_deposit(struct mm_struct *mm, pgtable_t pgtable);
+extern void pgtable_trans_huge_deposit(struct mm_struct *mm, pmd_t *pmdp,
+				       pgtable_t pgtable);
 #endif
 
 #ifndef __HAVE_ARCH_PGTABLE_WITHDRAW
-extern pgtable_t pgtable_trans_huge_withdraw(struct mm_struct *mm);
+extern pgtable_t pgtable_trans_huge_withdraw(struct mm_struct *mm, pmd_t *pmdp);
 #endif
 
 #ifndef __HAVE_ARCH_PMDP_INVALIDATE
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 1940ee0..e91b763 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -742,7 +742,7 @@ static int __do_huge_pmd_anonymous_page(struct mm_struct *mm,
 		 */
 		page_add_new_anon_rmap(page, vma, haddr);
 		set_pmd_at(mm, haddr, pmd, entry);
-		pgtable_trans_huge_deposit(mm, pgtable);
+		pgtable_trans_huge_deposit(mm, pmd, pgtable);
 		add_mm_counter(mm, MM_ANONPAGES, HPAGE_PMD_NR);
 		mm->nr_ptes++;
 		spin_unlock(&mm->page_table_lock);
@@ -784,7 +784,7 @@ static bool set_huge_zero_page(pgtable_t pgtable, struct mm_struct *mm,
 	entry = pmd_wrprotect(entry);
 	entry = pmd_mkhuge(entry);
 	set_pmd_at(mm, haddr, pmd, entry);
-	pgtable_trans_huge_deposit(mm, pgtable);
+	pgtable_trans_huge_deposit(mm, pmd, pgtable);
 	mm->nr_ptes++;
 	return true;
 }
@@ -929,7 +929,7 @@ int copy_huge_pmd(struct mm_struct *dst_mm, struct mm_struct *src_mm,
 	pmdp_set_wrprotect(src_mm, addr, src_pmd);
 	pmd = pmd_mkold(pmd_wrprotect(pmd));
 	set_pmd_at(dst_mm, addr, dst_pmd, pmd);
-	pgtable_trans_huge_deposit(dst_mm, pgtable);
+	pgtable_trans_huge_deposit(dst_mm, dst_pmd, pgtable);
 	dst_mm->nr_ptes++;
 
 	ret = 0;
@@ -999,7 +999,7 @@ static int do_huge_pmd_wp_zero_page_fallback(struct mm_struct *mm,
 	pmdp_clear_flush(vma, haddr, pmd);
 	/* leave pmd empty until pte is filled */
 
-	pgtable = pgtable_trans_huge_withdraw(mm);
+	pgtable = pgtable_trans_huge_withdraw(mm, pmd);
 	pmd_populate(mm, &_pmd, pgtable);
 
 	for (i = 0; i < HPAGE_PMD_NR; i++, haddr += PAGE_SIZE) {
@@ -1094,10 +1094,10 @@ static int do_huge_pmd_wp_page_fallback(struct mm_struct *mm,
 		goto out_free_pages;
 	VM_BUG_ON(!PageHead(page));
 
+	pgtable = pgtable_trans_huge_withdraw(mm, pmd);
 	pmdp_clear_flush(vma, haddr, pmd);
 	/* leave pmd empty until pte is filled */
 
-	pgtable = pgtable_trans_huge_withdraw(mm);
 	pmd_populate(mm, &_pmd, pgtable);
 
 	for (i = 0; i < HPAGE_PMD_NR; i++, haddr += PAGE_SIZE) {
@@ -1380,7 +1380,7 @@ int zap_huge_pmd(struct mmu_gather *tlb, struct vm_area_struct *vma,
 		struct page *page;
 		pgtable_t pgtable;
 		pmd_t orig_pmd;
-		pgtable = pgtable_trans_huge_withdraw(tlb->mm);
+		pgtable = pgtable_trans_huge_withdraw(tlb->mm, pmd);
 		orig_pmd = pmdp_get_and_clear(tlb->mm, addr, pmd);
 		tlb_remove_pmd_tlb_entry(tlb, pmd, addr);
 		if (is_huge_zero_pmd(orig_pmd)) {
@@ -1712,7 +1712,7 @@ static int __split_huge_page_map(struct page *page,
 	pmd = page_check_address_pmd(page, mm, address,
 				     PAGE_CHECK_ADDRESS_PMD_SPLITTING_FLAG);
 	if (pmd) {
-		pgtable = pgtable_trans_huge_withdraw(mm);
+		pgtable = pgtable_trans_huge_withdraw(mm, pmd);
 		pmd_populate(mm, &_pmd, pgtable);
 
 		haddr = address;
@@ -2400,7 +2400,7 @@ static void collapse_huge_page(struct mm_struct *mm,
 	page_add_new_anon_rmap(new_page, vma, address);
 	set_pmd_at(mm, address, pmd, _pmd);
 	update_mmu_cache_pmd(vma, address, pmd);
-	pgtable_trans_huge_deposit(mm, pgtable);
+	pgtable_trans_huge_deposit(mm, pmd, pgtable);
 	spin_unlock(&mm->page_table_lock);
 
 	*hpage = NULL;
@@ -2706,7 +2706,7 @@ static void __split_huge_zero_page_pmd(struct vm_area_struct *vma,
 	pmdp_clear_flush(vma, haddr, pmd);
 	/* leave pmd empty until pte is filled */
 
-	pgtable = pgtable_trans_huge_withdraw(mm);
+	pgtable = pgtable_trans_huge_withdraw(mm, pmd);
 	pmd_populate(mm, &_pmd, pgtable);
 
 	for (i = 0; i < HPAGE_PMD_NR; i++, haddr += PAGE_SIZE) {
diff --git a/mm/pgtable-generic.c b/mm/pgtable-generic.c
index 0c8323f..e1a6e4f 100644
--- a/mm/pgtable-generic.c
+++ b/mm/pgtable-generic.c
@@ -124,7 +124,8 @@ void pmdp_splitting_flush(struct vm_area_struct *vma, unsigned long address,
 
 #ifndef __HAVE_ARCH_PGTABLE_DEPOSIT
 #ifdef CONFIG_TRANSPARENT_HUGEPAGE
-void pgtable_trans_huge_deposit(struct mm_struct *mm, pgtable_t pgtable)
+void pgtable_trans_huge_deposit(struct mm_struct *mm, pmd_t *pmdp,
+				pgtable_t pgtable)
 {
 	assert_spin_locked(&mm->page_table_lock);
 
@@ -141,7 +142,7 @@ void pgtable_trans_huge_deposit(struct mm_struct *mm, pgtable_t pgtable)
 #ifndef __HAVE_ARCH_PGTABLE_WITHDRAW
 #ifdef CONFIG_TRANSPARENT_HUGEPAGE
 /* no "address" argument so destroys page coloring of some arch */
-pgtable_t pgtable_trans_huge_withdraw(struct mm_struct *mm)
+pgtable_t pgtable_trans_huge_withdraw(struct mm_struct *mm, pmd_t *pmdp)
 {
 	pgtable_t pgtable;
 
-- 
1.7.10

^ permalink raw reply related	[flat|nested] 73+ messages in thread

* [PATCH -V5 16/25] mm/THP: withdraw the pgtable after pmdp related operations
  2013-04-04  5:57 [PATCH -V5 00/25] THP support for PPC64 Aneesh Kumar K.V
                   ` (14 preceding siblings ...)
  2013-04-04  5:57 ` [PATCH -V5 15/25] mm/THP: Add pmd args to pgtable deposit and withdraw APIs Aneesh Kumar K.V
@ 2013-04-04  5:57 ` Aneesh Kumar K.V
  2013-04-04  5:57 ` [PATCH -V5 17/25] powerpc/THP: Implement transparent hugepages for ppc64 Aneesh Kumar K.V
                   ` (11 subsequent siblings)
  27 siblings, 0 replies; 73+ messages in thread
From: Aneesh Kumar K.V @ 2013-04-04  5:57 UTC (permalink / raw)
  To: benh, paulus; +Cc: Andrea Arcangeli, linux-mm, linuxppc-dev, Aneesh Kumar K.V

From: "Aneesh Kumar K.V" <aneesh.kumar@linux.vnet.ibm.com>

For architectures like ppc64 we look at deposited pgtable when
calling pmdp_get_and_clear. So do the pgtable_trans_huge_withdraw
after finishing pmdp related operations.

Cc: Andrea Arcangeli <aarcange@redhat.com>
Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
---
 mm/huge_memory.c |    3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index e91b763..5c7cd7d 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -1380,9 +1380,10 @@ int zap_huge_pmd(struct mmu_gather *tlb, struct vm_area_struct *vma,
 		struct page *page;
 		pgtable_t pgtable;
 		pmd_t orig_pmd;
-		pgtable = pgtable_trans_huge_withdraw(tlb->mm, pmd);
+
 		orig_pmd = pmdp_get_and_clear(tlb->mm, addr, pmd);
 		tlb_remove_pmd_tlb_entry(tlb, pmd, addr);
+		pgtable = pgtable_trans_huge_withdraw(tlb->mm, pmd);
 		if (is_huge_zero_pmd(orig_pmd)) {
 			tlb->mm->nr_ptes--;
 			spin_unlock(&tlb->mm->page_table_lock);
-- 
1.7.10

^ permalink raw reply related	[flat|nested] 73+ messages in thread

* [PATCH -V5 17/25] powerpc/THP: Implement transparent hugepages for ppc64
  2013-04-04  5:57 [PATCH -V5 00/25] THP support for PPC64 Aneesh Kumar K.V
                   ` (15 preceding siblings ...)
  2013-04-04  5:57 ` [PATCH -V5 16/25] mm/THP: withdraw the pgtable after pmdp related operations Aneesh Kumar K.V
@ 2013-04-04  5:57 ` Aneesh Kumar K.V
  2013-04-11  5:38   ` David Gibson
  2013-04-04  5:57 ` [PATCH -V5 18/25] powerpc/THP: Double the PMD table size for THP Aneesh Kumar K.V
                   ` (10 subsequent siblings)
  27 siblings, 1 reply; 73+ messages in thread
From: Aneesh Kumar K.V @ 2013-04-04  5:57 UTC (permalink / raw)
  To: benh, paulus; +Cc: linux-mm, linuxppc-dev, Aneesh Kumar K.V

From: "Aneesh Kumar K.V" <aneesh.kumar@linux.vnet.ibm.com>

We now have pmd entries covering to 16MB range. To implement THP on powerpc,
we double the size of PMD. The second half is used to deposit the pgtable (PTE page).
We also use the depoisted PTE page for tracking the HPTE information. The information
include [ secondary group | 3 bit hidx | valid ]. We use one byte per each HPTE entry.
With 16MB hugepage and 64K HPTE we need 256 entries and with 4K HPTE we need
4096 entries. Both will fit in a 4K PTE page.

Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
---
 arch/powerpc/include/asm/page.h              |    2 +-
 arch/powerpc/include/asm/pgtable-ppc64-64k.h |    3 +-
 arch/powerpc/include/asm/pgtable-ppc64.h     |    2 +-
 arch/powerpc/include/asm/pgtable.h           |  240 ++++++++++++++++++++
 arch/powerpc/mm/pgtable.c                    |  314 ++++++++++++++++++++++++++
 arch/powerpc/mm/pgtable_64.c                 |   13 ++
 arch/powerpc/platforms/Kconfig.cputype       |    1 +
 7 files changed, 572 insertions(+), 3 deletions(-)

diff --git a/arch/powerpc/include/asm/page.h b/arch/powerpc/include/asm/page.h
index 38e7ff6..b927447 100644
--- a/arch/powerpc/include/asm/page.h
+++ b/arch/powerpc/include/asm/page.h
@@ -40,7 +40,7 @@
 #ifdef CONFIG_HUGETLB_PAGE
 extern unsigned int HPAGE_SHIFT;
 #else
-#define HPAGE_SHIFT PAGE_SHIFT
+#define HPAGE_SHIFT PMD_SHIFT
 #endif
 #define HPAGE_SIZE		((1UL) << HPAGE_SHIFT)
 #define HPAGE_MASK		(~(HPAGE_SIZE - 1))
diff --git a/arch/powerpc/include/asm/pgtable-ppc64-64k.h b/arch/powerpc/include/asm/pgtable-ppc64-64k.h
index 3c529b4..5c5541a 100644
--- a/arch/powerpc/include/asm/pgtable-ppc64-64k.h
+++ b/arch/powerpc/include/asm/pgtable-ppc64-64k.h
@@ -33,7 +33,8 @@
 #define PGDIR_MASK	(~(PGDIR_SIZE-1))
 
 /* Bits to mask out from a PMD to get to the PTE page */
-#define PMD_MASKED_BITS		0x1ff
+/* PMDs point to PTE table fragments which are 4K aligned.  */
+#define PMD_MASKED_BITS		0xfff
 /* Bits to mask out from a PGD/PUD to get to the PMD page */
 #define PUD_MASKED_BITS		0x1ff
 
diff --git a/arch/powerpc/include/asm/pgtable-ppc64.h b/arch/powerpc/include/asm/pgtable-ppc64.h
index 0182c20..c0747c7 100644
--- a/arch/powerpc/include/asm/pgtable-ppc64.h
+++ b/arch/powerpc/include/asm/pgtable-ppc64.h
@@ -150,7 +150,7 @@
 #define	pmd_present(pmd)	(pmd_val(pmd) != 0)
 #define	pmd_clear(pmdp)		(pmd_val(*(pmdp)) = 0)
 #define pmd_page_vaddr(pmd)	(pmd_val(pmd) & ~PMD_MASKED_BITS)
-#define pmd_page(pmd)		virt_to_page(pmd_page_vaddr(pmd))
+extern struct page *pmd_page(pmd_t pmd);
 
 #define pud_set(pudp, pudval)	(pud_val(*(pudp)) = (pudval))
 #define pud_none(pud)		(!pud_val(pud))
diff --git a/arch/powerpc/include/asm/pgtable.h b/arch/powerpc/include/asm/pgtable.h
index 4b52726..9fbe2a7 100644
--- a/arch/powerpc/include/asm/pgtable.h
+++ b/arch/powerpc/include/asm/pgtable.h
@@ -23,7 +23,247 @@ struct mm_struct;
  */
 #define PTE_PAGE_HIDX_OFFSET (PTRS_PER_PTE * 8)
 
+/* A large part matches with pte bits */
+#define PMD_HUGE_PRESENT	0x001 /* software: pte contains a translation */
+#define PMD_HUGE_USER		0x002 /* matches one of the PP bits */
+#define PMD_HUGE_FILE		0x002 /* (!present only) software: pte holds file offset */
+#define PMD_HUGE_EXEC		0x004 /* No execute on POWER4 and newer (we invert) */
+#define PMD_HUGE_SPLITTING	0x008
+#define PMD_HUGE_SAO		0x010 /* strong Access order */
+#define PMD_HUGE_HASHPTE	0x020
+#define PMD_ISHUGE		0x040
+#define PMD_HUGE_DIRTY		0x080 /* C: page changed */
+#define PMD_HUGE_ACCESSED	0x100 /* R: page referenced */
+#define PMD_HUGE_RW		0x200 /* software: user write access allowed */
+#define PMD_HUGE_BUSY		0x800 /* software: PTE & hash are busy */
+#define PMD_HUGE_HPTEFLAGS	(PMD_HUGE_BUSY | PMD_HUGE_HASHPTE)
+/*
+ * We keep both the pmd and pte rpn shift same, eventhough we use only
+ * lower 12 bits for hugepage flags at pmd level
+ */
+#define PMD_HUGE_RPN_SHIFT	PTE_RPN_SHIFT
+#define HUGE_PAGE_SIZE		(ASM_CONST(1) << 24)
+#define HUGE_PAGE_MASK		(~(HUGE_PAGE_SIZE - 1))
+
 #ifndef __ASSEMBLY__
+extern void hpte_need_hugepage_flush(struct mm_struct *mm, unsigned long addr,
+				     pmd_t *pmdp);
+#ifdef CONFIG_TRANSPARENT_HUGEPAGE
+extern pmd_t pfn_pmd(unsigned long pfn, pgprot_t pgprot);
+extern pmd_t mk_pmd(struct page *page, pgprot_t pgprot);
+extern pmd_t pmd_modify(pmd_t pmd, pgprot_t newprot);
+extern void set_pmd_at(struct mm_struct *mm, unsigned long addr,
+		       pmd_t *pmdp, pmd_t pmd);
+extern void update_mmu_cache_pmd(struct vm_area_struct *vma, unsigned long addr,
+				 pmd_t *pmd);
+static inline int pmd_large(pmd_t pmd)
+{
+	return (pmd_val(pmd) & (PMD_ISHUGE | PMD_HUGE_PRESENT)) ==
+		(PMD_ISHUGE | PMD_HUGE_PRESENT);
+}
+
+static inline int pmd_trans_splitting(pmd_t pmd)
+{
+	return (pmd_val(pmd) & (PMD_ISHUGE|PMD_HUGE_SPLITTING)) ==
+		(PMD_ISHUGE|PMD_HUGE_SPLITTING);
+}
+
+static inline int pmd_trans_huge(pmd_t pmd)
+{
+	return pmd_val(pmd) & PMD_ISHUGE;
+}
+/* We will enable it in the last patch */
+#define has_transparent_hugepage() 0
+#else
+#define pmd_large(pmd)		0
+#define has_transparent_hugepage() 0
+#endif /* CONFIG_TRANSPARENT_HUGEPAGE */
+
+static inline unsigned long pmd_pfn(pmd_t pmd)
+{
+	/*
+	 * Only called for hugepage pmd
+	 */
+	return pmd_val(pmd) >> PMD_HUGE_RPN_SHIFT;
+}
+
+static inline int pmd_young(pmd_t pmd)
+{
+	return pmd_val(pmd) & PMD_HUGE_ACCESSED;
+}
+
+static inline pmd_t pmd_mkhuge(pmd_t pmd)
+{
+	/* Do nothing, mk_pmd() does this part.  */
+	return pmd;
+}
+
+#define __HAVE_ARCH_PMD_WRITE
+static inline int pmd_write(pmd_t pmd)
+{
+	return pmd_val(pmd) & PMD_HUGE_RW;
+}
+
+static inline pmd_t pmd_mkold(pmd_t pmd)
+{
+	pmd_val(pmd) &= ~PMD_HUGE_ACCESSED;
+	return pmd;
+}
+
+static inline pmd_t pmd_wrprotect(pmd_t pmd)
+{
+	pmd_val(pmd) &= ~PMD_HUGE_RW;
+	return pmd;
+}
+
+static inline pmd_t pmd_mkdirty(pmd_t pmd)
+{
+	pmd_val(pmd) |= PMD_HUGE_DIRTY;
+	return pmd;
+}
+
+static inline pmd_t pmd_mkyoung(pmd_t pmd)
+{
+	pmd_val(pmd) |= PMD_HUGE_ACCESSED;
+	return pmd;
+}
+
+static inline pmd_t pmd_mkwrite(pmd_t pmd)
+{
+	pmd_val(pmd) |= PMD_HUGE_RW;
+	return pmd;
+}
+
+static inline pmd_t pmd_mknotpresent(pmd_t pmd)
+{
+	pmd_val(pmd) &= ~PMD_HUGE_PRESENT;
+	return pmd;
+}
+
+static inline pmd_t pmd_mksplitting(pmd_t pmd)
+{
+	pmd_val(pmd) |= PMD_HUGE_SPLITTING;
+	return pmd;
+}
+
+/*
+ * Set the dirty and/or accessed bits atomically in a linux hugepage PMD, this
+ * function doesn't need to flush the hash entry
+ */
+static inline void __pmdp_set_access_flags(pmd_t *pmdp, pmd_t entry)
+{
+	unsigned long bits = pmd_val(entry) & (PMD_HUGE_DIRTY |
+					       PMD_HUGE_ACCESSED |
+					       PMD_HUGE_RW | PMD_HUGE_EXEC);
+#ifdef PTE_ATOMIC_UPDATES
+	unsigned long old, tmp;
+
+	__asm__ __volatile__(
+	"1:	ldarx	%0,0,%4\n\
+		andi.	%1,%0,%6\n\
+		bne-	1b \n\
+		or	%0,%3,%0\n\
+		stdcx.	%0,0,%4\n\
+		bne-	1b"
+	:"=&r" (old), "=&r" (tmp), "=m" (*pmdp)
+	:"r" (bits), "r" (pmdp), "m" (*pmdp), "i" (PMD_HUGE_BUSY)
+	:"cc");
+#else
+	unsigned long old = pmd_val(*pmdp);
+	*pmdp = __pmd(old | bits);
+#endif
+}
+
+#define __HAVE_ARCH_PMD_SAME
+static inline int pmd_same(pmd_t pmd_a, pmd_t pmd_b)
+{
+	return (((pmd_val(pmd_a) ^ pmd_val(pmd_b)) & ~PMD_HUGE_HPTEFLAGS) == 0);
+}
+
+#define __HAVE_ARCH_PMDP_SET_ACCESS_FLAGS
+extern int pmdp_set_access_flags(struct vm_area_struct *vma,
+				 unsigned long address, pmd_t *pmdp,
+				 pmd_t entry, int dirty);
+
+static inline unsigned long pmd_hugepage_update(struct mm_struct *mm,
+						unsigned long addr,
+						pmd_t *pmdp, unsigned long clr)
+{
+#ifdef PTE_ATOMIC_UPDATES
+	unsigned long old, tmp;
+
+	__asm__ __volatile__(
+	"1:	ldarx	%0,0,%3\n\
+		andi.	%1,%0,%6\n\
+		bne-	1b \n\
+		andc	%1,%0,%4 \n\
+		stdcx.	%1,0,%3 \n\
+		bne-	1b"
+	: "=&r" (old), "=&r" (tmp), "=m" (*pmdp)
+	: "r" (pmdp), "r" (clr), "m" (*pmdp), "i" (PMD_HUGE_BUSY)
+	: "cc" );
+#else
+	unsigned long old = pmd_val(*pmdp);
+	*pmdp = __pmd(old & ~clr);
+#endif
+
+#ifdef CONFIG_PPC_STD_MMU_64
+	if (old & PMD_HUGE_HASHPTE)
+		hpte_need_hugepage_flush(mm, addr, pmdp);
+#endif
+	return old;
+}
+
+static inline int __pmdp_test_and_clear_young(struct mm_struct *mm,
+					      unsigned long addr, pmd_t *pmdp)
+{
+	unsigned long old;
+
+	if ((pmd_val(*pmdp) & (PMD_HUGE_ACCESSED | PMD_HUGE_HASHPTE)) == 0)
+		return 0;
+	old = pmd_hugepage_update(mm, addr, pmdp, PMD_HUGE_ACCESSED);
+	return ((old & PMD_HUGE_ACCESSED) != 0);
+}
+
+#define __HAVE_ARCH_PMDP_TEST_AND_CLEAR_YOUNG
+extern int pmdp_test_and_clear_young(struct vm_area_struct *vma,
+				     unsigned long address, pmd_t *pmdp);
+#define __HAVE_ARCH_PMDP_CLEAR_YOUNG_FLUSH
+extern int pmdp_clear_flush_young(struct vm_area_struct *vma,
+				  unsigned long address, pmd_t *pmdp);
+
+#define __HAVE_ARCH_PMDP_GET_AND_CLEAR
+static inline pmd_t pmdp_get_and_clear(struct mm_struct *mm,
+				       unsigned long addr, pmd_t *pmdp)
+{
+	unsigned long old = pmd_hugepage_update(mm, addr, pmdp, ~0UL);
+	return __pmd(old);
+}
+
+#define __HAVE_ARCH_PMDP_SET_WRPROTECT
+static inline void pmdp_set_wrprotect(struct mm_struct *mm, unsigned long addr,
+				      pmd_t *pmdp)
+{
+
+	if ((pmd_val(*pmdp) & PMD_HUGE_RW) == 0)
+		return;
+
+	pmd_hugepage_update(mm, addr, pmdp, PMD_HUGE_RW);
+}
+
+#define __HAVE_ARCH_PMDP_SPLITTING_FLUSH
+extern void pmdp_splitting_flush(struct vm_area_struct *vma,
+				 unsigned long address, pmd_t *pmdp);
+
+#define __HAVE_ARCH_PGTABLE_DEPOSIT
+extern void pgtable_trans_huge_deposit(struct mm_struct *mm, pmd_t *pmdp,
+				       pgtable_t pgtable);
+#define __HAVE_ARCH_PGTABLE_WITHDRAW
+extern pgtable_t pgtable_trans_huge_withdraw(struct mm_struct *mm, pmd_t *pmdp);
+
+#define __HAVE_ARCH_PMDP_INVALIDATE
+extern void pmdp_invalidate(struct vm_area_struct *vma, unsigned long address,
+			    pmd_t *pmdp);
 
 #include <asm/tlbflush.h>
 
diff --git a/arch/powerpc/mm/pgtable.c b/arch/powerpc/mm/pgtable.c
index 214130a..9f33780 100644
--- a/arch/powerpc/mm/pgtable.c
+++ b/arch/powerpc/mm/pgtable.c
@@ -31,6 +31,7 @@
 #include <asm/pgalloc.h>
 #include <asm/tlbflush.h>
 #include <asm/tlb.h>
+#include <asm/machdep.h>
 
 #include "mmu_decl.h"
 
@@ -240,3 +241,316 @@ void assert_pte_locked(struct mm_struct *mm, unsigned long addr)
 }
 #endif /* CONFIG_DEBUG_VM */
 
+
+#ifdef CONFIG_TRANSPARENT_HUGEPAGE
+static pmd_t set_hugepage_access_flags_filter(pmd_t pmd,
+					      struct vm_area_struct *vma,
+					      int dirty)
+{
+	return pmd;
+}
+
+/*
+ * This is called when relaxing access to a hugepage. It's also called in the page
+ * fault path when we don't hit any of the major fault cases, ie, a minor
+ * update of _PAGE_ACCESSED, _PAGE_DIRTY, etc... The generic code will have
+ * handled those two for us, we additionally deal with missing execute
+ * permission here on some processors
+ */
+int pmdp_set_access_flags(struct vm_area_struct *vma, unsigned long address,
+			  pmd_t *pmdp, pmd_t entry, int dirty)
+{
+	int changed;
+	entry = set_hugepage_access_flags_filter(entry, vma, dirty);
+	changed = !pmd_same(*(pmdp), entry);
+	if (changed) {
+		__pmdp_set_access_flags(pmdp, entry);
+		/*
+		 * Since we are not supporting SW TLB systems, we don't
+		 * have any thing similar to flush_tlb_page_nohash()
+		 */
+	}
+	return changed;
+}
+
+int pmdp_test_and_clear_young(struct vm_area_struct *vma,
+			      unsigned long address, pmd_t *pmdp)
+{
+	return __pmdp_test_and_clear_young(vma->vm_mm, address, pmdp);
+}
+
+/*
+ * We currently remove entries from the hashtable regardless of whether
+ * the entry was young or dirty. The generic routines only flush if the
+ * entry was young or dirty which is not good enough.
+ *
+ * We should be more intelligent about this but for the moment we override
+ * these functions and force a tlb flush unconditionally
+ */
+int pmdp_clear_flush_young(struct vm_area_struct *vma,
+				  unsigned long address, pmd_t *pmdp)
+{
+	return __pmdp_test_and_clear_young(vma->vm_mm, address, pmdp);
+}
+
+/*
+ * We mark the pmd splitting and invalidate all the hpte
+ * entries for this hugepage.
+ */
+void pmdp_splitting_flush(struct vm_area_struct *vma,
+			  unsigned long address, pmd_t *pmdp)
+{
+	unsigned long old, tmp;
+
+	VM_BUG_ON(address & ~HPAGE_PMD_MASK);
+#ifdef PTE_ATOMIC_UPDATES
+
+	__asm__ __volatile__(
+	"1:	ldarx	%0,0,%3\n\
+		andi.	%1,%0,%6\n\
+		bne-	1b \n\
+		ori	%1,%0,%4 \n\
+		stdcx.	%1,0,%3 \n\
+		bne-	1b"
+	: "=&r" (old), "=&r" (tmp), "=m" (*pmdp)
+	: "r" (pmdp), "i" (PMD_HUGE_SPLITTING), "m" (*pmdp), "i" (PMD_HUGE_BUSY)
+	: "cc" );
+#else
+	old = pmd_val(*pmdp);
+	*pmdp = __pmd(old | PMD_HUGE_SPLITTING);
+#endif
+	/*
+	 * If we didn't had the splitting flag set, go and flush the
+	 * HPTE entries and serialize against gup fast.
+	 */
+	if (!(old & PMD_HUGE_SPLITTING)) {
+#ifdef CONFIG_PPC_STD_MMU_64
+		/* We need to flush the hpte */
+		if (old & PMD_HUGE_HASHPTE)
+			hpte_need_hugepage_flush(vma->vm_mm, address, pmdp);
+#endif
+		/* need tlb flush only to serialize against gup-fast */
+		flush_tlb_range(vma, address, address + HPAGE_PMD_SIZE);
+	}
+}
+
+/*
+ * We want to put the pgtable in pmd and use pgtable for tracking
+ * the base page size hptes
+ */
+void pgtable_trans_huge_deposit(struct mm_struct *mm, pmd_t *pmdp,
+				pgtable_t pgtable)
+{
+	unsigned long *pgtable_slot;
+	assert_spin_locked(&mm->page_table_lock);
+	/*
+	 * we store the pgtable in the second half of PMD
+	 */
+	pgtable_slot = pmdp + PTRS_PER_PMD;
+	*pgtable_slot = (unsigned long)pgtable;
+}
+
+#define PTE_FRAG_SIZE (2 * PTRS_PER_PTE * sizeof(pte_t))
+pgtable_t pgtable_trans_huge_withdraw(struct mm_struct *mm, pmd_t *pmdp)
+{
+	pgtable_t pgtable;
+	unsigned long *pgtable_slot;
+
+	assert_spin_locked(&mm->page_table_lock);
+	pgtable_slot = pmdp + PTRS_PER_PMD;
+	pgtable = (pgtable_t) *pgtable_slot;
+	/*
+	 * We store HPTE information in the deposited PTE fragment.
+	 * zero out the content on withdraw.
+	 */
+	memset(pgtable, 0, PTE_FRAG_SIZE);
+	return pgtable;
+}
+
+/*
+ * Since we are looking at latest ppc64, we don't need to worry about
+ * i/d cache coherency on exec fault
+ */
+static pmd_t set_pmd_filter(pmd_t pmd, unsigned long addr)
+{
+	pmd = __pmd(pmd_val(pmd) & ~PMD_HUGE_HPTEFLAGS);
+	return pmd;
+}
+
+/*
+ * We can make it less convoluted than __set_pte_at, because
+ * we can ignore lot of hardware here, because this is only for
+ * MPSS
+ */
+static inline void __set_pmd_at(struct mm_struct *mm, unsigned long addr,
+				pmd_t *pmdp, pmd_t pmd, int percpu)
+{
+	/*
+	 * There is nothing in hash page table now, so nothing to
+	 * invalidate, set_pte_at is used for adding new entry.
+	 * For updating we should use update_hugepage_pmd()
+	 */
+	*pmdp = pmd;
+}
+
+/*
+ * set a new huge pmd. We should not be called for updating
+ * an existing pmd entry. That should go via pmd_hugepage_update.
+ */
+void set_pmd_at(struct mm_struct *mm, unsigned long addr,
+		pmd_t *pmdp, pmd_t pmd)
+{
+	/*
+	 * Note: mm->context.id might not yet have been assigned as
+	 * this context might not have been activated yet when this
+	 * is called.
+	 */
+	pmd = set_pmd_filter(pmd, addr);
+
+	__set_pmd_at(mm, addr, pmdp, pmd, 0);
+
+}
+
+void pmdp_invalidate(struct vm_area_struct *vma, unsigned long address,
+		     pmd_t *pmdp)
+{
+	pmd_hugepage_update(vma->vm_mm, address, pmdp, PMD_HUGE_PRESENT);
+	flush_tlb_range(vma, address, address + HPAGE_PMD_SIZE);
+}
+
+/*
+ * A linux hugepage PMD was changed and the corresponding hash table entry
+ * neesd to be flushed.
+ *
+ * The linux hugepage PMD now include the pmd entries followed by the address
+ * to the stashed pgtable_t. The stashed pgtable_t contains the hpte bits.
+ * [ secondary group | 3 bit hidx | valid ]. We use one byte per each HPTE entry.
+ * With 16MB hugepage and 64K HPTE we need 256 entries and with 4K HPTE we need
+ * 4096 entries. Both will fit in a 4K pgtable_t.
+ */
+void hpte_need_hugepage_flush(struct mm_struct *mm, unsigned long addr,
+			      pmd_t *pmdp)
+{
+	int ssize, i;
+	unsigned long s_addr;
+	unsigned int psize, valid;
+	unsigned char *hpte_slot_array;
+	unsigned long hidx, vpn, vsid, hash, shift, slot;
+
+	/*
+	 * Flush all the hptes mapping this hugepage
+	 */
+	s_addr = addr & HUGE_PAGE_MASK;
+	/*
+	 * The hpte hindex are stored in the pgtable whose address is in the
+	 * second half of the PMD
+	 */
+	hpte_slot_array = *(char **)(pmdp + PTRS_PER_PMD);
+
+	/* get the base page size */
+	psize = get_slice_psize(mm, s_addr);
+	shift = mmu_psize_defs[psize].shift;
+
+	for (i = 0; i < HUGE_PAGE_SIZE/(1ul << shift); i++) {
+		/*
+		 * 8 bits per each hpte entries
+		 * 000| [ secondary group (one bit) | hidx (3 bits) | valid bit]
+		 */
+		valid = hpte_slot_array[i] & 0x1;
+		if (!valid)
+			continue;
+		hidx =  hpte_slot_array[i]  >> 1;
+
+		/* get the vpn */
+		addr = s_addr + (i * (1ul << shift));
+		if (!is_kernel_addr(addr)) {
+			ssize = user_segment_size(addr);
+			vsid = get_vsid(mm->context.id, addr, ssize);
+			WARN_ON(vsid == 0);
+		} else {
+			vsid = get_kernel_vsid(addr, mmu_kernel_ssize);
+			ssize = mmu_kernel_ssize;
+		}
+
+		vpn = hpt_vpn(addr, vsid, ssize);
+		hash = hpt_hash(vpn, shift, ssize);
+		if (hidx & _PTEIDX_SECONDARY)
+			hash = ~hash;
+
+		slot = (hash & htab_hash_mask) * HPTES_PER_GROUP;
+		slot += hidx & _PTEIDX_GROUP_IX;
+		ppc_md.hpte_invalidate(slot, vpn, psize, ssize, 0);
+	}
+}
+
+static pmd_t pmd_set_protbits(pmd_t pmd, pgprot_t pgprot)
+{
+	unsigned long pmd_prot = 0;
+	unsigned long prot = pgprot_val(pgprot);
+
+	if (prot & _PAGE_PRESENT)
+		pmd_prot |= PMD_HUGE_PRESENT;
+	if (prot & _PAGE_USER)
+		pmd_prot |= PMD_HUGE_USER;
+	if (prot & _PAGE_FILE)
+		pmd_prot |= PMD_HUGE_FILE;
+	if (prot & _PAGE_EXEC)
+		pmd_prot |= PMD_HUGE_EXEC;
+	/*
+	 * _PAGE_COHERENT should always be set
+	 */
+	VM_BUG_ON(!(prot & _PAGE_COHERENT));
+
+	if (prot & _PAGE_SAO)
+		pmd_prot |= PMD_HUGE_SAO;
+	if (prot & _PAGE_DIRTY)
+		pmd_prot |= PMD_HUGE_DIRTY;
+	if (prot & _PAGE_ACCESSED)
+		pmd_prot |= PMD_HUGE_ACCESSED;
+	if (prot & _PAGE_RW)
+		pmd_prot |= PMD_HUGE_RW;
+
+	pmd_val(pmd) |= pmd_prot;
+	return pmd;
+}
+
+pmd_t pfn_pmd(unsigned long pfn, pgprot_t pgprot)
+{
+	pmd_t pmd;
+
+	pmd_val(pmd) = pfn << PMD_HUGE_RPN_SHIFT;
+	pmd_val(pmd) |= PMD_ISHUGE;
+	pmd = pmd_set_protbits(pmd, pgprot);
+	return pmd;
+}
+
+pmd_t mk_pmd(struct page *page, pgprot_t pgprot)
+{
+	return pfn_pmd(page_to_pfn(page), pgprot);
+}
+
+pmd_t pmd_modify(pmd_t pmd, pgprot_t newprot)
+{
+	/* FIXME!! why are this bits cleared ? */
+	pmd_val(pmd) &= ~(PMD_HUGE_PRESENT |
+			  PMD_HUGE_RW |
+			  PMD_HUGE_EXEC);
+	pmd = pmd_set_protbits(pmd, newprot);
+	return pmd;
+}
+
+/*
+ * This is called at the end of handling a user page fault, when the
+ * fault has been handled by updating a HUGE PMD entry in the linux page tables.
+ * We use it to preload an HPTE into the hash table corresponding to
+ * the updated linux HUGE PMD entry.
+ */
+void update_mmu_cache_pmd(struct vm_area_struct *vma, unsigned long addr,
+			  pmd_t *pmd)
+{
+	/* FIXME!!
+	 * Will be done in a later patch
+	 */
+}
+
+#endif /* CONFIG_TRANSPARENT_HUGEPAGE */
diff --git a/arch/powerpc/mm/pgtable_64.c b/arch/powerpc/mm/pgtable_64.c
index e79840b..6fc3488 100644
--- a/arch/powerpc/mm/pgtable_64.c
+++ b/arch/powerpc/mm/pgtable_64.c
@@ -338,6 +338,19 @@ EXPORT_SYMBOL(iounmap);
 EXPORT_SYMBOL(__iounmap);
 EXPORT_SYMBOL(__iounmap_at);
 
+/*
+ * For hugepage we have pfn in the pmd, we use PMD_HUGE_RPN_SHIFT bits for flags
+ * For PTE page, we have a PTE_FRAG_SIZE (4K) aligned virtual address.
+ */
+struct page *pmd_page(pmd_t pmd)
+{
+#ifdef CONFIG_TRANSPARENT_HUGEPAGE
+	if (pmd_val(pmd) & PMD_ISHUGE)
+		return pfn_to_page(pmd_pfn(pmd));
+#endif
+	return virt_to_page(pmd_page_vaddr(pmd));
+}
+
 #ifdef CONFIG_PPC_64K_PAGES
 /*
  * we support 16 fragments per PTE page. This is limited by how many
diff --git a/arch/powerpc/platforms/Kconfig.cputype b/arch/powerpc/platforms/Kconfig.cputype
index 72afd28..90ee19b 100644
--- a/arch/powerpc/platforms/Kconfig.cputype
+++ b/arch/powerpc/platforms/Kconfig.cputype
@@ -71,6 +71,7 @@ config PPC_BOOK3S_64
 	select PPC_FPU
 	select PPC_HAVE_PMU_SUPPORT
 	select SYS_SUPPORTS_HUGETLBFS
+	select HAVE_ARCH_TRANSPARENT_HUGEPAGE if PPC_64K_PAGES
 
 config PPC_BOOK3E_64
 	bool "Embedded processors"
-- 
1.7.10

^ permalink raw reply related	[flat|nested] 73+ messages in thread

* [PATCH -V5 18/25] powerpc/THP: Double the PMD table size for THP
  2013-04-04  5:57 [PATCH -V5 00/25] THP support for PPC64 Aneesh Kumar K.V
                   ` (16 preceding siblings ...)
  2013-04-04  5:57 ` [PATCH -V5 17/25] powerpc/THP: Implement transparent hugepages for ppc64 Aneesh Kumar K.V
@ 2013-04-04  5:57 ` Aneesh Kumar K.V
  2013-04-11  6:18   ` David Gibson
  2013-04-04  5:57 ` [PATCH -V5 19/25] powerpc/THP: Differentiate THP PMD entries from HUGETLB PMD entries Aneesh Kumar K.V
                   ` (9 subsequent siblings)
  27 siblings, 1 reply; 73+ messages in thread
From: Aneesh Kumar K.V @ 2013-04-04  5:57 UTC (permalink / raw)
  To: benh, paulus; +Cc: linux-mm, linuxppc-dev, Aneesh Kumar K.V

From: "Aneesh Kumar K.V" <aneesh.kumar@linux.vnet.ibm.com>

THP code does PTE page allocation along with large page request and deposit them
for later use. This is to ensure that we won't have any failures when we split
hugepages to regular pages.

On powerpc we want to use the deposited PTE page for storing hash pte slot and
secondary bit information for the HPTEs. We use the second half
of the pmd table to save the deposted PTE page.

Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
---
 arch/powerpc/include/asm/pgalloc-64.h    |    6 +++---
 arch/powerpc/include/asm/pgtable-ppc64.h |    6 +++++-
 arch/powerpc/mm/init_64.c                |    9 ++++++---
 3 files changed, 14 insertions(+), 7 deletions(-)

diff --git a/arch/powerpc/include/asm/pgalloc-64.h b/arch/powerpc/include/asm/pgalloc-64.h
index 3418989..46c6ffa 100644
--- a/arch/powerpc/include/asm/pgalloc-64.h
+++ b/arch/powerpc/include/asm/pgalloc-64.h
@@ -208,17 +208,17 @@ static inline void __pte_free_tlb(struct mmu_gather *tlb, pgtable_t table,
 
 static inline pmd_t *pmd_alloc_one(struct mm_struct *mm, unsigned long addr)
 {
-	return kmem_cache_alloc(PGT_CACHE(PMD_INDEX_SIZE),
+	return kmem_cache_alloc(PGT_CACHE(PMD_CACHE_INDEX),
 				GFP_KERNEL|__GFP_REPEAT);
 }
 
 static inline void pmd_free(struct mm_struct *mm, pmd_t *pmd)
 {
-	kmem_cache_free(PGT_CACHE(PMD_INDEX_SIZE), pmd);
+	kmem_cache_free(PGT_CACHE(PMD_CACHE_INDEX), pmd);
 }
 
 #define __pmd_free_tlb(tlb, pmd, addr)		      \
-	pgtable_free_tlb(tlb, pmd, PMD_INDEX_SIZE)
+	pgtable_free_tlb(tlb, pmd, PMD_CACHE_INDEX)
 #ifndef CONFIG_PPC_64K_PAGES
 #define __pud_free_tlb(tlb, pud, addr)		      \
 	pgtable_free_tlb(tlb, pud, PUD_INDEX_SIZE)
diff --git a/arch/powerpc/include/asm/pgtable-ppc64.h b/arch/powerpc/include/asm/pgtable-ppc64.h
index c0747c7..d4e845c 100644
--- a/arch/powerpc/include/asm/pgtable-ppc64.h
+++ b/arch/powerpc/include/asm/pgtable-ppc64.h
@@ -20,7 +20,11 @@
                 	    PUD_INDEX_SIZE + PGD_INDEX_SIZE + PAGE_SHIFT)
 #define PGTABLE_RANGE (ASM_CONST(1) << PGTABLE_EADDR_SIZE)
 
-
+#ifdef CONFIG_TRANSPARENT_HUGEPAGE
+#define PMD_CACHE_INDEX	(PMD_INDEX_SIZE + 1)
+#else
+#define PMD_CACHE_INDEX	PMD_INDEX_SIZE
+#endif
 /*
  * Define the address range of the kernel non-linear virtual area
  */
diff --git a/arch/powerpc/mm/init_64.c b/arch/powerpc/mm/init_64.c
index 95a4529..7608b0d 100644
--- a/arch/powerpc/mm/init_64.c
+++ b/arch/powerpc/mm/init_64.c
@@ -88,7 +88,11 @@ static void pgd_ctor(void *addr)
 
 static void pmd_ctor(void *addr)
 {
+#ifdef CONFIG_TRANSPARENT_HUGEPAGE
+	memset(addr, 0, PMD_TABLE_SIZE * 2);
+#else
 	memset(addr, 0, PMD_TABLE_SIZE);
+#endif
 }
 
 struct kmem_cache *pgtable_cache[MAX_PGTABLE_INDEX_SIZE];
@@ -138,10 +142,9 @@ void pgtable_cache_add(unsigned shift, void (*ctor)(void *))
 void pgtable_cache_init(void)
 {
 	pgtable_cache_add(PGD_INDEX_SIZE, pgd_ctor);
-	pgtable_cache_add(PMD_INDEX_SIZE, pmd_ctor);
-	if (!PGT_CACHE(PGD_INDEX_SIZE) || !PGT_CACHE(PMD_INDEX_SIZE))
+	pgtable_cache_add(PMD_CACHE_INDEX, pmd_ctor);
+	if (!PGT_CACHE(PGD_INDEX_SIZE) || !PGT_CACHE(PMD_CACHE_INDEX))
 		panic("Couldn't allocate pgtable caches");
-
 	/* In all current configs, when the PUD index exists it's the
 	 * same size as either the pgd or pmd index.  Verify that the
 	 * initialization above has also created a PUD cache.  This
-- 
1.7.10

^ permalink raw reply related	[flat|nested] 73+ messages in thread

* [PATCH -V5 19/25] powerpc/THP: Differentiate THP PMD entries from HUGETLB PMD entries
  2013-04-04  5:57 [PATCH -V5 00/25] THP support for PPC64 Aneesh Kumar K.V
                   ` (17 preceding siblings ...)
  2013-04-04  5:57 ` [PATCH -V5 18/25] powerpc/THP: Double the PMD table size for THP Aneesh Kumar K.V
@ 2013-04-04  5:57 ` Aneesh Kumar K.V
  2013-04-10  7:21   ` Michael Ellerman
  2013-04-12  1:28   ` David Gibson
  2013-04-04  5:57 ` [PATCH -V5 20/25] powerpc/THP: Add code to handle HPTE faults for large pages Aneesh Kumar K.V
                   ` (8 subsequent siblings)
  27 siblings, 2 replies; 73+ messages in thread
From: Aneesh Kumar K.V @ 2013-04-04  5:57 UTC (permalink / raw)
  To: benh, paulus; +Cc: linux-mm, linuxppc-dev, Aneesh Kumar K.V

From: "Aneesh Kumar K.V" <aneesh.kumar@linux.vnet.ibm.com>

HUGETLB clear the top bit of PMD entries and use that to indicate
a HUGETLB page directory. Since we store pfns in PMDs for THP,
we would have the top bit cleared by default. Add the top bit mask
for THP PMD entries and clear that when we are looking for pmd_pfn.

Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
---
 arch/powerpc/include/asm/pgtable.h |   16 +++++++++++++---
 arch/powerpc/mm/pgtable.c          |    5 ++++-
 arch/powerpc/mm/pgtable_64.c       |    2 +-
 3 files changed, 18 insertions(+), 5 deletions(-)

diff --git a/arch/powerpc/include/asm/pgtable.h b/arch/powerpc/include/asm/pgtable.h
index 9fbe2a7..9681de4 100644
--- a/arch/powerpc/include/asm/pgtable.h
+++ b/arch/powerpc/include/asm/pgtable.h
@@ -31,7 +31,7 @@ struct mm_struct;
 #define PMD_HUGE_SPLITTING	0x008
 #define PMD_HUGE_SAO		0x010 /* strong Access order */
 #define PMD_HUGE_HASHPTE	0x020
-#define PMD_ISHUGE		0x040
+#define _PMD_ISHUGE		0x040
 #define PMD_HUGE_DIRTY		0x080 /* C: page changed */
 #define PMD_HUGE_ACCESSED	0x100 /* R: page referenced */
 #define PMD_HUGE_RW		0x200 /* software: user write access allowed */
@@ -44,6 +44,14 @@ struct mm_struct;
 #define PMD_HUGE_RPN_SHIFT	PTE_RPN_SHIFT
 #define HUGE_PAGE_SIZE		(ASM_CONST(1) << 24)
 #define HUGE_PAGE_MASK		(~(HUGE_PAGE_SIZE - 1))
+/*
+ * HugeTLB looks at the top bit of the Linux page table entries to
+ * decide whether it is a huge page directory or not. Mark HUGE
+ * PMD to differentiate
+ */
+#define PMD_HUGE_NOT_HUGETLB	(ASM_CONST(1) << 63)
+#define PMD_ISHUGE		(_PMD_ISHUGE | PMD_HUGE_NOT_HUGETLB)
+#define PMD_HUGE_PROTBITS	(0xfff | PMD_HUGE_NOT_HUGETLB)
 
 #ifndef __ASSEMBLY__
 extern void hpte_need_hugepage_flush(struct mm_struct *mm, unsigned long addr,
@@ -70,8 +78,9 @@ static inline int pmd_trans_splitting(pmd_t pmd)
 
 static inline int pmd_trans_huge(pmd_t pmd)
 {
-	return pmd_val(pmd) & PMD_ISHUGE;
+	return ((pmd_val(pmd) & PMD_ISHUGE) ==  PMD_ISHUGE);
 }
+
 /* We will enable it in the last patch */
 #define has_transparent_hugepage() 0
 #else
@@ -84,7 +93,8 @@ static inline unsigned long pmd_pfn(pmd_t pmd)
 	/*
 	 * Only called for hugepage pmd
 	 */
-	return pmd_val(pmd) >> PMD_HUGE_RPN_SHIFT;
+	unsigned long val = pmd_val(pmd) & ~PMD_HUGE_PROTBITS;
+	return val  >> PMD_HUGE_RPN_SHIFT;
 }
 
 static inline int pmd_young(pmd_t pmd)
diff --git a/arch/powerpc/mm/pgtable.c b/arch/powerpc/mm/pgtable.c
index 9f33780..cf3ca8e 100644
--- a/arch/powerpc/mm/pgtable.c
+++ b/arch/powerpc/mm/pgtable.c
@@ -517,7 +517,10 @@ static pmd_t pmd_set_protbits(pmd_t pmd, pgprot_t pgprot)
 pmd_t pfn_pmd(unsigned long pfn, pgprot_t pgprot)
 {
 	pmd_t pmd;
-
+	/*
+	 * We cannot support that many PFNs
+	 */
+	VM_BUG_ON(pfn & PMD_HUGE_NOT_HUGETLB);
 	pmd_val(pmd) = pfn << PMD_HUGE_RPN_SHIFT;
 	pmd_val(pmd) |= PMD_ISHUGE;
 	pmd = pmd_set_protbits(pmd, pgprot);
diff --git a/arch/powerpc/mm/pgtable_64.c b/arch/powerpc/mm/pgtable_64.c
index 6fc3488..cd53020 100644
--- a/arch/powerpc/mm/pgtable_64.c
+++ b/arch/powerpc/mm/pgtable_64.c
@@ -345,7 +345,7 @@ EXPORT_SYMBOL(__iounmap_at);
 struct page *pmd_page(pmd_t pmd)
 {
 #ifdef CONFIG_TRANSPARENT_HUGEPAGE
-	if (pmd_val(pmd) & PMD_ISHUGE)
+	if ((pmd_val(pmd) & PMD_ISHUGE) == PMD_ISHUGE)
 		return pfn_to_page(pmd_pfn(pmd));
 #endif
 	return virt_to_page(pmd_page_vaddr(pmd));
-- 
1.7.10

^ permalink raw reply related	[flat|nested] 73+ messages in thread

* [PATCH -V5 20/25] powerpc/THP: Add code to handle HPTE faults for large pages
  2013-04-04  5:57 [PATCH -V5 00/25] THP support for PPC64 Aneesh Kumar K.V
                   ` (18 preceding siblings ...)
  2013-04-04  5:57 ` [PATCH -V5 19/25] powerpc/THP: Differentiate THP PMD entries from HUGETLB PMD entries Aneesh Kumar K.V
@ 2013-04-04  5:57 ` Aneesh Kumar K.V
  2013-04-12  4:01   ` David Gibson
  2013-04-04  5:57 ` [PATCH -V5 21/25] powerpc: Handle hugepage in perf callchain Aneesh Kumar K.V
                   ` (7 subsequent siblings)
  27 siblings, 1 reply; 73+ messages in thread
From: Aneesh Kumar K.V @ 2013-04-04  5:57 UTC (permalink / raw)
  To: benh, paulus; +Cc: linux-mm, linuxppc-dev, Aneesh Kumar K.V

From: "Aneesh Kumar K.V" <aneesh.kumar@linux.vnet.ibm.com>

We now have pmd entries covering to 16MB range. To implement THP on powerpc,
we double the size of PMD. The second half is used to deposit the pgtable (PTE page).
We also use the depoisted PTE page for tracking the HPTE information. The information
include [ secondary group | 3 bit hidx | valid ]. We use one byte per each HPTE entry.
With 16MB hugepage and 64K HPTE we need 256 entries and with 4K HPTE we need
4096 entries. Both will fit in a 4K PTE page.

Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
---
 arch/powerpc/include/asm/mmu-hash64.h    |    5 +
 arch/powerpc/include/asm/pgtable-ppc64.h |   31 +----
 arch/powerpc/kernel/io-workarounds.c     |    3 +-
 arch/powerpc/kvm/book3s_64_mmu_hv.c      |    2 +-
 arch/powerpc/kvm/book3s_hv_rm_mmu.c      |    4 +-
 arch/powerpc/mm/Makefile                 |    1 +
 arch/powerpc/mm/hash_utils_64.c          |   16 ++-
 arch/powerpc/mm/hugepage-hash64.c        |  185 ++++++++++++++++++++++++++++++
 arch/powerpc/mm/hugetlbpage.c            |   31 ++++-
 arch/powerpc/mm/pgtable.c                |   38 ++++++
 arch/powerpc/mm/tlb_hash64.c             |    5 +-
 arch/powerpc/perf/callchain.c            |    2 +-
 arch/powerpc/platforms/pseries/eeh.c     |    5 +-
 13 files changed, 286 insertions(+), 42 deletions(-)
 create mode 100644 arch/powerpc/mm/hugepage-hash64.c

diff --git a/arch/powerpc/include/asm/mmu-hash64.h b/arch/powerpc/include/asm/mmu-hash64.h
index e187254..a74a3de 100644
--- a/arch/powerpc/include/asm/mmu-hash64.h
+++ b/arch/powerpc/include/asm/mmu-hash64.h
@@ -322,6 +322,11 @@ extern int hash_page(unsigned long ea, unsigned long access, unsigned long trap)
 int __hash_page_huge(unsigned long ea, unsigned long access, unsigned long vsid,
 		     pte_t *ptep, unsigned long trap, int local, int ssize,
 		     unsigned int shift, unsigned int mmu_psize);
+#ifdef CONFIG_TRANSPARENT_HUGEPAGE
+extern int __hash_page_thp(unsigned long ea, unsigned long access,
+			   unsigned long vsid, pmd_t *pmdp, unsigned long trap,
+			   int local, int ssize, unsigned int psize);
+#endif
 extern void hash_failure_debug(unsigned long ea, unsigned long access,
 			       unsigned long vsid, unsigned long trap,
 			       int ssize, int psize, int lpsize,
diff --git a/arch/powerpc/include/asm/pgtable-ppc64.h b/arch/powerpc/include/asm/pgtable-ppc64.h
index d4e845c..9b81283 100644
--- a/arch/powerpc/include/asm/pgtable-ppc64.h
+++ b/arch/powerpc/include/asm/pgtable-ppc64.h
@@ -345,39 +345,18 @@ static inline void __ptep_set_access_flags(pte_t *ptep, pte_t entry)
 void pgtable_cache_add(unsigned shift, void (*ctor)(void *));
 void pgtable_cache_init(void);
 
-/*
- * find_linux_pte returns the address of a linux pte for a given
- * effective address and directory.  If not found, it returns zero.
- */
-static inline pte_t *find_linux_pte(pgd_t *pgdir, unsigned long ea)
-{
-	pgd_t *pg;
-	pud_t *pu;
-	pmd_t *pm;
-	pte_t *pt = NULL;
-
-	pg = pgdir + pgd_index(ea);
-	if (!pgd_none(*pg)) {
-		pu = pud_offset(pg, ea);
-		if (!pud_none(*pu)) {
-			pm = pmd_offset(pu, ea);
-			if (pmd_present(*pm))
-				pt = pte_offset_kernel(pm, ea);
-		}
-	}
-	return pt;
-}
-
+pte_t *find_linux_pte(pgd_t *pgdir, unsigned long ea, unsigned int *thp);
 #ifdef CONFIG_HUGETLB_PAGE
 pte_t *find_linux_pte_or_hugepte(pgd_t *pgdir, unsigned long ea,
-				 unsigned *shift);
+				 unsigned *shift, unsigned int *hugepage);
 #else
 static inline pte_t *find_linux_pte_or_hugepte(pgd_t *pgdir, unsigned long ea,
-					       unsigned *shift)
+					       unsigned *shift,
+					       unsigned int *hugepage)
 {
 	if (shift)
 		*shift = 0;
-	return find_linux_pte(pgdir, ea);
+	return find_linux_pte(pgdir, ea, hugepage);
 }
 #endif /* !CONFIG_HUGETLB_PAGE */
 
diff --git a/arch/powerpc/kernel/io-workarounds.c b/arch/powerpc/kernel/io-workarounds.c
index 50e90b7..a9c904f 100644
--- a/arch/powerpc/kernel/io-workarounds.c
+++ b/arch/powerpc/kernel/io-workarounds.c
@@ -70,7 +70,8 @@ struct iowa_bus *iowa_mem_find_bus(const PCI_IO_ADDR addr)
 		if (vaddr < PHB_IO_BASE || vaddr >= PHB_IO_END)
 			return NULL;
 
-		ptep = find_linux_pte(init_mm.pgd, vaddr);
+		/* we won't find hugepages here */
+		ptep = find_linux_pte(init_mm.pgd, vaddr, NULL);
 		if (ptep == NULL)
 			paddr = 0;
 		else
diff --git a/arch/powerpc/kvm/book3s_64_mmu_hv.c b/arch/powerpc/kvm/book3s_64_mmu_hv.c
index 8cc18ab..4f2a7dc 100644
--- a/arch/powerpc/kvm/book3s_64_mmu_hv.c
+++ b/arch/powerpc/kvm/book3s_64_mmu_hv.c
@@ -683,7 +683,7 @@ int kvmppc_book3s_hv_page_fault(struct kvm_run *run, struct kvm_vcpu *vcpu,
 			 */
 			rcu_read_lock_sched();
 			ptep = find_linux_pte_or_hugepte(current->mm->pgd,
-							 hva, NULL);
+							 hva, NULL, NULL);
 			if (ptep && pte_present(*ptep)) {
 				pte = kvmppc_read_update_linux_pte(ptep, 1);
 				if (pte_write(pte))
diff --git a/arch/powerpc/kvm/book3s_hv_rm_mmu.c b/arch/powerpc/kvm/book3s_hv_rm_mmu.c
index 19c93ba..7c8e1ed 100644
--- a/arch/powerpc/kvm/book3s_hv_rm_mmu.c
+++ b/arch/powerpc/kvm/book3s_hv_rm_mmu.c
@@ -27,7 +27,7 @@ static void *real_vmalloc_addr(void *x)
 	unsigned long addr = (unsigned long) x;
 	pte_t *p;
 
-	p = find_linux_pte(swapper_pg_dir, addr);
+	p = find_linux_pte(swapper_pg_dir, addr, NULL);
 	if (!p || !pte_present(*p))
 		return NULL;
 	/* assume we don't have huge pages in vmalloc space... */
@@ -152,7 +152,7 @@ static pte_t lookup_linux_pte(pgd_t *pgdir, unsigned long hva,
 	unsigned long ps = *pte_sizep;
 	unsigned int shift;
 
-	ptep = find_linux_pte_or_hugepte(pgdir, hva, &shift);
+	ptep = find_linux_pte_or_hugepte(pgdir, hva, &shift, NULL);
 	if (!ptep)
 		return __pte(0);
 	if (shift)
diff --git a/arch/powerpc/mm/Makefile b/arch/powerpc/mm/Makefile
index 3787b61..997deb4 100644
--- a/arch/powerpc/mm/Makefile
+++ b/arch/powerpc/mm/Makefile
@@ -33,6 +33,7 @@ obj-y				+= hugetlbpage.o
 obj-$(CONFIG_PPC_STD_MMU_64)	+= hugetlbpage-hash64.o
 obj-$(CONFIG_PPC_BOOK3E_MMU)	+= hugetlbpage-book3e.o
 endif
+obj-$(CONFIG_TRANSPARENT_HUGEPAGE) += hugepage-hash64.o
 obj-$(CONFIG_PPC_SUBPAGE_PROT)	+= subpage-prot.o
 obj-$(CONFIG_NOT_COHERENT_CACHE) += dma-noncoherent.o
 obj-$(CONFIG_HIGHMEM)		+= highmem.o
diff --git a/arch/powerpc/mm/hash_utils_64.c b/arch/powerpc/mm/hash_utils_64.c
index 1f2ebbd..cd3ecd8 100644
--- a/arch/powerpc/mm/hash_utils_64.c
+++ b/arch/powerpc/mm/hash_utils_64.c
@@ -955,7 +955,7 @@ int hash_page(unsigned long ea, unsigned long access, unsigned long trap)
 	unsigned long vsid;
 	struct mm_struct *mm;
 	pte_t *ptep;
-	unsigned hugeshift;
+	unsigned hugeshift, hugepage;
 	const struct cpumask *tmp;
 	int rc, user_region = 0, local = 0;
 	int psize, ssize;
@@ -1021,7 +1021,7 @@ int hash_page(unsigned long ea, unsigned long access, unsigned long trap)
 #endif /* CONFIG_PPC_64K_PAGES */
 
 	/* Get PTE and page size from page tables */
-	ptep = find_linux_pte_or_hugepte(pgdir, ea, &hugeshift);
+	ptep = find_linux_pte_or_hugepte(pgdir, ea, &hugeshift, &hugepage);
 	if (ptep == NULL || !pte_present(*ptep)) {
 		DBG_LOW(" no PTE !\n");
 		return 1;
@@ -1044,6 +1044,12 @@ int hash_page(unsigned long ea, unsigned long access, unsigned long trap)
 					ssize, hugeshift, psize);
 #endif /* CONFIG_HUGETLB_PAGE */
 
+#ifdef CONFIG_TRANSPARENT_HUGEPAGE
+	if (hugepage)
+		return __hash_page_thp(ea, access, vsid, (pmd_t *)ptep,
+				       trap, local, ssize, psize);
+#endif
+
 #ifndef CONFIG_PPC_64K_PAGES
 	DBG_LOW(" i-pte: %016lx\n", pte_val(*ptep));
 #else
@@ -1149,7 +1155,11 @@ void hash_preload(struct mm_struct *mm, unsigned long ea,
 	pgdir = mm->pgd;
 	if (pgdir == NULL)
 		return;
-	ptep = find_linux_pte(pgdir, ea);
+	/*
+	 * We haven't implemented update_mmu_cache_pmd yet. We get called
+	 * only for non hugepages. Hence can ignore THP here
+	 */
+	ptep = find_linux_pte(pgdir, ea, NULL);
 	if (!ptep)
 		return;
 
diff --git a/arch/powerpc/mm/hugepage-hash64.c b/arch/powerpc/mm/hugepage-hash64.c
new file mode 100644
index 0000000..3f6140d
--- /dev/null
+++ b/arch/powerpc/mm/hugepage-hash64.c
@@ -0,0 +1,185 @@
+/*
+ * Copyright IBM Corporation, 2013
+ * Author Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of version 2.1 of the GNU Lesser General Public License
+ * as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it would be useful, but
+ * WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
+ *
+ */
+
+/*
+ * PPC64 THP Support for hash based MMUs
+ */
+#include <linux/mm.h>
+#include <asm/machdep.h>
+
+/*
+ * The linux hugepage PMD now include the pmd entries followed by the address
+ * to the stashed pgtable_t. The stashed pgtable_t contains the hpte bits.
+ * [ secondary group | 3 bit hidx | valid ]. We use one byte per each HPTE entry.
+ * With 16MB hugepage and 64K HPTE we need 256 entries and with 4K HPTE we need
+ * 4096 entries. Both will fit in a 4K pgtable_t.
+ */
+int __hash_page_thp(unsigned long ea, unsigned long access, unsigned long vsid,
+		    pmd_t *pmdp, unsigned long trap, int local, int ssize,
+		    unsigned int psize)
+{
+	unsigned int index, valid;
+	unsigned char *hpte_slot_array;
+	unsigned long rflags, pa, hidx;
+	unsigned long old_pmd, new_pmd;
+	int ret, lpsize = MMU_PAGE_16M;
+	unsigned long vpn, hash, shift, slot;
+
+	/*
+	 * atomically mark the linux large page PMD busy and dirty
+	 */
+	do {
+		old_pmd = pmd_val(*pmdp);
+		/* If PMD busy, retry the access */
+		if (unlikely(old_pmd & PMD_HUGE_BUSY))
+			return 0;
+		/* If PMD permissions don't match, take page fault */
+		if (unlikely(access & ~old_pmd))
+			return 1;
+		/*
+		 * Try to lock the PTE, add ACCESSED and DIRTY if it was
+		 * a write access
+		 */
+		new_pmd = old_pmd | PMD_HUGE_BUSY | PMD_HUGE_ACCESSED;
+		if (access & _PAGE_RW)
+			new_pmd |= PMD_HUGE_DIRTY;
+	} while (old_pmd != __cmpxchg_u64((unsigned long *)pmdp,
+					  old_pmd, new_pmd));
+	/*
+	 * PP bits. PMD_HUGE_USER is already PP bit 0x2, so we only
+	 * need to add in 0x1 if it's a read-only user page
+	 */
+	rflags = new_pmd & PMD_HUGE_USER;
+	if ((new_pmd & PMD_HUGE_USER) && !((new_pmd & PMD_HUGE_RW) &&
+					   (new_pmd & PMD_HUGE_DIRTY)))
+		rflags |= 0x1;
+	/*
+	 * PMD_HUGE_EXEC -> HW_NO_EXEC since it's inverted
+	 */
+	rflags |= ((new_pmd & PMD_HUGE_EXEC) ? 0 : HPTE_R_N);
+
+#if 0 /* FIXME!! */
+	if (!cpu_has_feature(CPU_FTR_COHERENT_ICACHE)) {
+
+		/*
+		 * No CPU has hugepages but lacks no execute, so we
+		 * don't need to worry about that case
+		 */
+		rflags = hash_page_do_lazy_icache(rflags, __pte(old_pte), trap);
+	}
+#endif
+	/*
+	 * Find the slot index details for this ea, using base page size.
+	 */
+	shift = mmu_psize_defs[psize].shift;
+	index = (ea & (HUGE_PAGE_SIZE - 1)) >> shift;
+	BUG_ON(index > 4096);
+
+	vpn = hpt_vpn(ea, vsid, ssize);
+	hash = hpt_hash(vpn, shift, ssize);
+	/*
+	 * The hpte hindex are stored in the pgtable whose address is in the
+	 * second half of the PMD
+	 */
+	hpte_slot_array = *(char **)(pmdp + PTRS_PER_PMD);
+
+	valid = hpte_slot_array[index]  & 0x1;
+	if (unlikely(valid)) {
+		/* update the hpte bits */
+		hidx =  hpte_slot_array[index]  >> 1;
+		if (hidx & _PTEIDX_SECONDARY)
+			hash = ~hash;
+		slot = (hash & htab_hash_mask) * HPTES_PER_GROUP;
+		slot += hidx & _PTEIDX_GROUP_IX;
+
+		ret = ppc_md.hpte_updatepp(slot, rflags, vpn,
+					   psize, ssize, local);
+		/*
+		 * We failed to update, try to insert a new entry.
+		 */
+		if (ret == -1) {
+			/*
+			 * large pte is marked busy, so we can be sure
+			 * nobody is looking at hpte_slot_array. hence we can
+			 * safely update this here.
+			 */
+			hpte_slot_array[index] = 0;
+			valid = 0;
+		}
+	}
+
+	if (likely(!valid)) {
+		unsigned long hpte_group;
+
+		/* insert new entry */
+		pa = pmd_pfn(__pmd(old_pmd)) << PAGE_SHIFT;
+repeat:
+		hpte_group = ((hash & htab_hash_mask) * HPTES_PER_GROUP) & ~0x7UL;
+
+		/* clear the busy bits and set the hash pte bits */
+		new_pmd = (new_pmd & ~PMD_HUGE_HPTEFLAGS) | PMD_HUGE_HASHPTE;
+
+		/*
+		 * WIMG bits.
+		 * We always have _PAGE_COHERENT enabled for system RAM
+		 */
+		rflags |= _PAGE_COHERENT;
+
+		if (new_pmd & PMD_HUGE_SAO)
+			rflags |= _PAGE_SAO;
+
+		/* Insert into the hash table, primary slot */
+		slot = ppc_md.hpte_insert(hpte_group, vpn, pa, rflags, 0,
+					  psize, lpsize, ssize);
+		/*
+		 * Primary is full, try the secondary
+		 */
+		if (unlikely(slot == -1)) {
+			hpte_group = ((~hash & htab_hash_mask) *
+				      HPTES_PER_GROUP) & ~0x7UL;
+			slot = ppc_md.hpte_insert(hpte_group, vpn, pa,
+						  rflags, HPTE_V_SECONDARY,
+						  psize, lpsize, ssize);
+			if (slot == -1) {
+				if (mftb() & 0x1)
+					hpte_group = ((hash & htab_hash_mask) *
+						      HPTES_PER_GROUP) & ~0x7UL;
+
+				ppc_md.hpte_remove(hpte_group);
+				goto repeat;
+			}
+		}
+		/*
+		 * Hypervisor failure. Restore old pmd and return -1
+		 * similar to __hash_page_*
+		 */
+		if (unlikely(slot == -2)) {
+			*pmdp = __pmd(old_pmd);
+			hash_failure_debug(ea, access, vsid, trap, ssize,
+					   psize, lpsize, old_pmd);
+			return -1;
+		}
+		/*
+		 * large pte is marked busy, so we can be sure
+		 * nobody is looking at hpte_slot_array. hence we can
+		 * safely update this here.
+		 */
+		hpte_slot_array[index] = slot << 1 | 0x1;
+	}
+	/*
+	 * No need to use ldarx/stdcx here
+	 */
+	*pmdp = __pmd(new_pmd & ~PMD_HUGE_BUSY);
+	return 0;
+}
diff --git a/arch/powerpc/mm/hugetlbpage.c b/arch/powerpc/mm/hugetlbpage.c
index 1a6de0a..7f11fa0 100644
--- a/arch/powerpc/mm/hugetlbpage.c
+++ b/arch/powerpc/mm/hugetlbpage.c
@@ -67,7 +67,8 @@ static inline unsigned int mmu_psize_to_shift(unsigned int mmu_psize)
 
 #define hugepd_none(hpd)	((hpd).pd == 0)
 
-pte_t *find_linux_pte_or_hugepte(pgd_t *pgdir, unsigned long ea, unsigned *shift)
+pte_t *find_linux_pte_or_hugepte(pgd_t *pgdir, unsigned long ea,
+				 unsigned *shift, unsigned int *hugepage)
 {
 	pgd_t *pg;
 	pud_t *pu;
@@ -77,6 +78,8 @@ pte_t *find_linux_pte_or_hugepte(pgd_t *pgdir, unsigned long ea, unsigned *shift
 
 	if (shift)
 		*shift = 0;
+	if (hugepage)
+		*hugepage = 0;
 
 	pg = pgdir + pgd_index(ea);
 	if (is_hugepd(pg)) {
@@ -91,12 +94,24 @@ pte_t *find_linux_pte_or_hugepte(pgd_t *pgdir, unsigned long ea, unsigned *shift
 			pm = pmd_offset(pu, ea);
 			if (is_hugepd(pm))
 				hpdp = (hugepd_t *)pm;
-			else if (!pmd_none(*pm)) {
+			else if (pmd_large(*pm)) {
+				/* THP page */
+				if (hugepage) {
+					*hugepage = 1;
+					/*
+					 * This should be ok, except for few
+					 * flags. Most of the pte and hugepage
+					 * pmd bits overlap. We don't use the
+					 * returned value as pte_t in the caller.
+					 */
+					return (pte_t *)pm;
+				} else
+					return NULL;
+			} else if (!pmd_none(*pm)) {
 				return pte_offset_kernel(pm, ea);
 			}
 		}
 	}
-
 	if (!hpdp)
 		return NULL;
 
@@ -108,7 +123,8 @@ EXPORT_SYMBOL_GPL(find_linux_pte_or_hugepte);
 
 pte_t *huge_pte_offset(struct mm_struct *mm, unsigned long addr)
 {
-	return find_linux_pte_or_hugepte(mm->pgd, addr, NULL);
+	/* Only called for HugeTLB pages, hence can ignore THP */
+	return find_linux_pte_or_hugepte(mm->pgd, addr, NULL, NULL);
 }
 
 static int __hugepte_alloc(struct mm_struct *mm, hugepd_t *hpdp,
@@ -613,8 +629,11 @@ follow_huge_addr(struct mm_struct *mm, unsigned long address, int write)
 	struct page *page;
 	unsigned shift;
 	unsigned long mask;
-
-	ptep = find_linux_pte_or_hugepte(mm->pgd, address, &shift);
+	/*
+	 * Transparent hugepages are handled by generic code. We can skip them
+	 * here.
+	 */
+	ptep = find_linux_pte_or_hugepte(mm->pgd, address, &shift, NULL);
 
 	/* Verify it is a huge page else bail. */
 	if (!ptep || !shift)
diff --git a/arch/powerpc/mm/pgtable.c b/arch/powerpc/mm/pgtable.c
index cf3ca8e..fbff062 100644
--- a/arch/powerpc/mm/pgtable.c
+++ b/arch/powerpc/mm/pgtable.c
@@ -557,3 +557,41 @@ void update_mmu_cache_pmd(struct vm_area_struct *vma, unsigned long addr,
 }
 
 #endif /* CONFIG_TRANSPARENT_HUGEPAGE */
+
+/*
+ * find_linux_pte returns the address of a linux pte for a given
+ * effective address and directory.  If not found, it returns zero.
+ */
+pte_t *find_linux_pte(pgd_t *pgdir, unsigned long ea, unsigned int *hugepage)
+{
+	pgd_t *pg;
+	pud_t *pu;
+	pmd_t *pm;
+	pte_t *pt = NULL;
+
+	if (hugepage)
+		*hugepage = 0;
+	pg = pgdir + pgd_index(ea);
+	if (!pgd_none(*pg)) {
+		pu = pud_offset(pg, ea);
+		if (!pud_none(*pu)) {
+			pm = pmd_offset(pu, ea);
+			if (pmd_large(*pm)) {
+				/* THP page */
+				if (hugepage) {
+					*hugepage = 1;
+					/*
+					 * This should be ok, except for few
+					 * flags. Most of the pte and hugepage
+					 * pmd bits overlap. We don't use the
+					 * returned value as pte_t in the caller.
+					 */
+					return (pte_t *)pm;
+				} else
+					return NULL;
+			} else if (pmd_present(*pm))
+				pt = pte_offset_kernel(pm, ea);
+		}
+	}
+	return pt;
+}
diff --git a/arch/powerpc/mm/tlb_hash64.c b/arch/powerpc/mm/tlb_hash64.c
index 023ec8a..be0066f 100644
--- a/arch/powerpc/mm/tlb_hash64.c
+++ b/arch/powerpc/mm/tlb_hash64.c
@@ -206,7 +206,10 @@ void __flush_hash_table_range(struct mm_struct *mm, unsigned long start,
 	local_irq_save(flags);
 	arch_enter_lazy_mmu_mode();
 	for (; start < end; start += PAGE_SIZE) {
-		pte_t *ptep = find_linux_pte(mm->pgd, start);
+		/*
+		 * We won't find hugepages here.
+		 */
+		pte_t *ptep = find_linux_pte(mm->pgd, start, NULL);
 		unsigned long pte;
 
 		if (ptep == NULL)
diff --git a/arch/powerpc/perf/callchain.c b/arch/powerpc/perf/callchain.c
index 74d1e78..578cac7 100644
--- a/arch/powerpc/perf/callchain.c
+++ b/arch/powerpc/perf/callchain.c
@@ -125,7 +125,7 @@ static int read_user_stack_slow(void __user *ptr, void *ret, int nb)
 	if (!pgdir)
 		return -EFAULT;
 
-	ptep = find_linux_pte_or_hugepte(pgdir, addr, &shift);
+	ptep = find_linux_pte_or_hugepte(pgdir, addr, &shift, NULL);
 	if (!shift)
 		shift = PAGE_SHIFT;
 
diff --git a/arch/powerpc/platforms/pseries/eeh.c b/arch/powerpc/platforms/pseries/eeh.c
index 9a04322..44c931a 100644
--- a/arch/powerpc/platforms/pseries/eeh.c
+++ b/arch/powerpc/platforms/pseries/eeh.c
@@ -261,7 +261,10 @@ static inline unsigned long eeh_token_to_phys(unsigned long token)
 	pte_t *ptep;
 	unsigned long pa;
 
-	ptep = find_linux_pte(init_mm.pgd, token);
+	/*
+	 * We won't find hugepages here
+	 */
+	ptep = find_linux_pte(init_mm.pgd, token, NULL);
 	if (!ptep)
 		return token;
 	pa = pte_pfn(*ptep) << PAGE_SHIFT;
-- 
1.7.10

^ permalink raw reply related	[flat|nested] 73+ messages in thread

* [PATCH -V5 21/25] powerpc: Handle hugepage in perf callchain
  2013-04-04  5:57 [PATCH -V5 00/25] THP support for PPC64 Aneesh Kumar K.V
                   ` (19 preceding siblings ...)
  2013-04-04  5:57 ` [PATCH -V5 20/25] powerpc/THP: Add code to handle HPTE faults for large pages Aneesh Kumar K.V
@ 2013-04-04  5:57 ` Aneesh Kumar K.V
  2013-04-12  1:34   ` David Gibson
  2013-04-04  5:58 ` [PATCH -V5 22/25] powerpc/THP: get_user_pages_fast changes Aneesh Kumar K.V
                   ` (6 subsequent siblings)
  27 siblings, 1 reply; 73+ messages in thread
From: Aneesh Kumar K.V @ 2013-04-04  5:57 UTC (permalink / raw)
  To: benh, paulus; +Cc: linux-mm, linuxppc-dev, Aneesh Kumar K.V

From: "Aneesh Kumar K.V" <aneesh.kumar@linux.vnet.ibm.com>

Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
---
 arch/powerpc/perf/callchain.c |   32 +++++++++++++++++++++-----------
 1 file changed, 21 insertions(+), 11 deletions(-)

diff --git a/arch/powerpc/perf/callchain.c b/arch/powerpc/perf/callchain.c
index 578cac7..99262ce 100644
--- a/arch/powerpc/perf/callchain.c
+++ b/arch/powerpc/perf/callchain.c
@@ -115,7 +115,7 @@ static int read_user_stack_slow(void __user *ptr, void *ret, int nb)
 {
 	pgd_t *pgdir;
 	pte_t *ptep, pte;
-	unsigned shift;
+	unsigned shift, hugepage;
 	unsigned long addr = (unsigned long) ptr;
 	unsigned long offset;
 	unsigned long pfn;
@@ -125,20 +125,30 @@ static int read_user_stack_slow(void __user *ptr, void *ret, int nb)
 	if (!pgdir)
 		return -EFAULT;
 
-	ptep = find_linux_pte_or_hugepte(pgdir, addr, &shift, NULL);
+	ptep = find_linux_pte_or_hugepte(pgdir, addr, &shift, &hugepage);
 	if (!shift)
 		shift = PAGE_SHIFT;
 
-	/* align address to page boundary */
-	offset = addr & ((1UL << shift) - 1);
-	addr -= offset;
-
-	if (ptep == NULL)
-		return -EFAULT;
-	pte = *ptep;
-	if (!pte_present(pte) || !(pte_val(pte) & _PAGE_USER))
+	if (!ptep)
 		return -EFAULT;
-	pfn = pte_pfn(pte);
+
+	if (hugepage) {
+		pmd_t pmd = *(pmd_t *)ptep;
+		shift = mmu_psize_defs[MMU_PAGE_16M].shift;
+		offset = addr & ((1UL << shift) - 1);
+
+		if (!pmd_large(pmd) || !(pmd_val(pmd) & PMD_HUGE_USER))
+			return -EFAULT;
+		pfn = pmd_pfn(pmd);
+	} else {
+		offset = addr & ((1UL << shift) - 1);
+
+		pte = *ptep;
+		if (!pte_present(pte) || !(pte_val(pte) & _PAGE_USER))
+			return -EFAULT;
+		pfn = pte_pfn(pte);
+	}
+
 	if (!page_is_ram(pfn))
 		return -EFAULT;
 
-- 
1.7.10

^ permalink raw reply related	[flat|nested] 73+ messages in thread

* [PATCH -V5 22/25] powerpc/THP: get_user_pages_fast changes
  2013-04-04  5:57 [PATCH -V5 00/25] THP support for PPC64 Aneesh Kumar K.V
                   ` (20 preceding siblings ...)
  2013-04-04  5:57 ` [PATCH -V5 21/25] powerpc: Handle hugepage in perf callchain Aneesh Kumar K.V
@ 2013-04-04  5:58 ` Aneesh Kumar K.V
  2013-04-12  1:41   ` David Gibson
  2013-04-04  5:58 ` [PATCH -V5 23/25] powerpc/THP: Enable THP on PPC64 Aneesh Kumar K.V
                   ` (5 subsequent siblings)
  27 siblings, 1 reply; 73+ messages in thread
From: Aneesh Kumar K.V @ 2013-04-04  5:58 UTC (permalink / raw)
  To: benh, paulus; +Cc: linux-mm, linuxppc-dev, Aneesh Kumar K.V

From: "Aneesh Kumar K.V" <aneesh.kumar@linux.vnet.ibm.com>

handle large pages for get_user_pages_fast. Also take care of large page splitting.

Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
---
 arch/powerpc/mm/gup.c |   84 +++++++++++++++++++++++++++++++++++++++++++++++--
 1 file changed, 82 insertions(+), 2 deletions(-)

diff --git a/arch/powerpc/mm/gup.c b/arch/powerpc/mm/gup.c
index d7efdbf..835c1ae 100644
--- a/arch/powerpc/mm/gup.c
+++ b/arch/powerpc/mm/gup.c
@@ -55,6 +55,72 @@ static noinline int gup_pte_range(pmd_t pmd, unsigned long addr,
 	return 1;
 }
 
+#ifdef CONFIG_TRANSPARENT_HUGEPAGE
+static inline int gup_huge_pmd(pmd_t *pmdp, unsigned long addr,
+			       unsigned long end, int write,
+			       struct page **pages, int *nr)
+{
+	int refs;
+	pmd_t pmd;
+	unsigned long mask;
+	struct page *head, *page, *tail;
+
+	pmd = *pmdp;
+	mask = PMD_HUGE_PRESENT | PMD_HUGE_USER;
+	if (write)
+		mask |= PMD_HUGE_RW;
+
+	if ((pmd_val(pmd) & mask) != mask)
+		return 0;
+
+	/* large pages are never "special" */
+	VM_BUG_ON(!pfn_valid(pmd_pfn(pmd)));
+
+	refs = 0;
+	head = pmd_page(pmd);
+	page = head + ((addr & ~PMD_MASK) >> PAGE_SHIFT);
+	tail = page;
+	do {
+		VM_BUG_ON(compound_head(page) != head);
+		pages[*nr] = page;
+		(*nr)++;
+		page++;
+		refs++;
+	} while (addr += PAGE_SIZE, addr != end);
+
+	if (!page_cache_add_speculative(head, refs)) {
+		*nr -= refs;
+		return 0;
+	}
+
+	if (unlikely(pmd_val(pmd) != pmd_val(*pmdp))) {
+		*nr -= refs;
+		while (refs--)
+			put_page(head);
+		return 0;
+	}
+	/*
+	 * Any tail page need their mapcount reference taken before we
+	 * return.
+	 */
+	while (refs--) {
+		if (PageTail(tail))
+			get_huge_page_tail(tail);
+		tail++;
+	}
+
+	return 1;
+}
+#else
+
+static inline int gup_huge_pmd(pmd_t *pmdp, unsigned long addr,
+			       unsigned long end, int write,
+			       struct page **pages, int *nr)
+{
+	return 1;
+}
+#endif
+
 static int gup_pmd_range(pud_t pud, unsigned long addr, unsigned long end,
 		int write, struct page **pages, int *nr)
 {
@@ -66,9 +132,23 @@ static int gup_pmd_range(pud_t pud, unsigned long addr, unsigned long end,
 		pmd_t pmd = *pmdp;
 
 		next = pmd_addr_end(addr, end);
-		if (pmd_none(pmd))
+		/*
+		 * The pmd_trans_splitting() check below explains why
+		 * pmdp_splitting_flush has to flush the tlb, to stop
+		 * this gup-fast code from running while we set the
+		 * splitting bit in the pmd. Returning zero will take
+		 * the slow path that will call wait_split_huge_page()
+		 * if the pmd is still in splitting state. gup-fast
+		 * can't because it has irq disabled and
+		 * wait_split_huge_page() would never return as the
+		 * tlb flush IPI wouldn't run.
+		 */
+		if (pmd_none(pmd) || pmd_trans_splitting(pmd))
 			return 0;
-		if (is_hugepd(pmdp)) {
+		if (unlikely(pmd_large(pmd))) {
+			if (!gup_huge_pmd(pmdp, addr, next, write, pages, nr))
+				return 0;
+		} else if (is_hugepd(pmdp)) {
 			if (!gup_hugepd((hugepd_t *)pmdp, PMD_SHIFT,
 					addr, next, write, pages, nr))
 				return 0;
-- 
1.7.10

^ permalink raw reply related	[flat|nested] 73+ messages in thread

* [PATCH -V5 23/25] powerpc/THP: Enable THP on PPC64
  2013-04-04  5:57 [PATCH -V5 00/25] THP support for PPC64 Aneesh Kumar K.V
                   ` (21 preceding siblings ...)
  2013-04-04  5:58 ` [PATCH -V5 22/25] powerpc/THP: get_user_pages_fast changes Aneesh Kumar K.V
@ 2013-04-04  5:58 ` Aneesh Kumar K.V
  2013-04-04  5:58 ` [PATCH -V5 24/25] powerpc: Optimize hugepage invalidate Aneesh Kumar K.V
                   ` (4 subsequent siblings)
  27 siblings, 0 replies; 73+ messages in thread
From: Aneesh Kumar K.V @ 2013-04-04  5:58 UTC (permalink / raw)
  To: benh, paulus; +Cc: linux-mm, linuxppc-dev, Aneesh Kumar K.V

From: "Aneesh Kumar K.V" <aneesh.kumar@linux.vnet.ibm.com>

We enable only if the we support 16MB page size.

Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
---
 arch/powerpc/include/asm/pgtable.h |   31 +++++++++++++++++++++++++++++--
 1 file changed, 29 insertions(+), 2 deletions(-)

diff --git a/arch/powerpc/include/asm/pgtable.h b/arch/powerpc/include/asm/pgtable.h
index 9681de4..5617dee 100644
--- a/arch/powerpc/include/asm/pgtable.h
+++ b/arch/powerpc/include/asm/pgtable.h
@@ -81,8 +81,35 @@ static inline int pmd_trans_huge(pmd_t pmd)
 	return ((pmd_val(pmd) & PMD_ISHUGE) ==  PMD_ISHUGE);
 }
 
-/* We will enable it in the last patch */
-#define has_transparent_hugepage() 0
+static inline int has_transparent_hugepage(void)
+{
+	if (!mmu_has_feature(MMU_FTR_16M_PAGE))
+		return 0;
+	/*
+	 * We support THP only if HPAGE_SHIFT is 16MB.
+	 */
+	if (!HPAGE_SHIFT || (HPAGE_SHIFT != mmu_psize_defs[MMU_PAGE_16M].shift))
+		return 0;
+	/*
+	 * We need to make sure that we support 16MB hugepage in a segement
+	 * with base page size 64K or 4K. We only enable THP with a PAGE_SIZE
+	 * of 64K.
+	 */
+	/*
+	 * If we have 64K HPTE, we will be using that by default
+	 */
+	if (mmu_psize_defs[MMU_PAGE_64K].shift &&
+	    (mmu_psize_defs[MMU_PAGE_64K].penc[MMU_PAGE_16M] == -1))
+		return 0;
+	/*
+	 * Ok we only have 4K HPTE
+	 */
+	if (mmu_psize_defs[MMU_PAGE_4K].penc[MMU_PAGE_16M] == -1)
+		return 0;
+
+	return 1;
+}
+
 #else
 #define pmd_large(pmd)		0
 #define has_transparent_hugepage() 0
-- 
1.7.10

^ permalink raw reply related	[flat|nested] 73+ messages in thread

* [PATCH -V5 24/25] powerpc: Optimize hugepage invalidate
  2013-04-04  5:57 [PATCH -V5 00/25] THP support for PPC64 Aneesh Kumar K.V
                   ` (22 preceding siblings ...)
  2013-04-04  5:58 ` [PATCH -V5 23/25] powerpc/THP: Enable THP on PPC64 Aneesh Kumar K.V
@ 2013-04-04  5:58 ` Aneesh Kumar K.V
  2013-04-12  4:21   ` David Gibson
  2013-04-04  5:58 ` [PATCH -V5 25/25] powerpc: Handle hugepages in kvm Aneesh Kumar K.V
                   ` (3 subsequent siblings)
  27 siblings, 1 reply; 73+ messages in thread
From: Aneesh Kumar K.V @ 2013-04-04  5:58 UTC (permalink / raw)
  To: benh, paulus; +Cc: linux-mm, linuxppc-dev, Aneesh Kumar K.V

From: "Aneesh Kumar K.V" <aneesh.kumar@linux.vnet.ibm.com>

Hugepage invalidate involves invalidating multiple hpte entries.
Optimize the operation using H_BULK_REMOVE on lpar platforms.
On native, reduce the number of tlb flush.

Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
---
 arch/powerpc/include/asm/machdep.h    |    3 +
 arch/powerpc/mm/hash_native_64.c      |   78 ++++++++++++++++++++
 arch/powerpc/mm/pgtable.c             |   13 +++-
 arch/powerpc/platforms/pseries/lpar.c |  126 +++++++++++++++++++++++++++++++--
 4 files changed, 210 insertions(+), 10 deletions(-)

diff --git a/arch/powerpc/include/asm/machdep.h b/arch/powerpc/include/asm/machdep.h
index 6cee6e0..3bc7816 100644
--- a/arch/powerpc/include/asm/machdep.h
+++ b/arch/powerpc/include/asm/machdep.h
@@ -56,6 +56,9 @@ struct machdep_calls {
 	void            (*hpte_removebolted)(unsigned long ea,
 					     int psize, int ssize);
 	void		(*flush_hash_range)(unsigned long number, int local);
+	void		(*hugepage_invalidate)(struct mm_struct *mm,
+					       unsigned char *hpte_slot_array,
+					       unsigned long addr, int psize);
 
 	/* special for kexec, to be called in real mode, linear mapping is
 	 * destroyed as well */
diff --git a/arch/powerpc/mm/hash_native_64.c b/arch/powerpc/mm/hash_native_64.c
index ac84fa6..59f29bf 100644
--- a/arch/powerpc/mm/hash_native_64.c
+++ b/arch/powerpc/mm/hash_native_64.c
@@ -450,6 +450,83 @@ static void native_hpte_invalidate(unsigned long slot, unsigned long vpn,
 	local_irq_restore(flags);
 }
 
+static void native_hugepage_invalidate(struct mm_struct *mm,
+				       unsigned char *hpte_slot_array,
+				       unsigned long addr, int psize)
+{
+	int ssize = 0, i;
+	int lock_tlbie;
+	struct hash_pte *hptep;
+	int actual_psize = MMU_PAGE_16M;
+	unsigned int max_hpte_count, valid;
+	unsigned long flags, s_addr = addr;
+	unsigned long hpte_v, want_v, shift;
+	unsigned long hidx, vpn = 0, vsid, hash, slot;
+
+	shift = mmu_psize_defs[psize].shift;
+	max_hpte_count = HUGE_PAGE_SIZE/(1ul << shift);
+
+	local_irq_save(flags);
+	for (i = 0; i < max_hpte_count; i++) {
+		/*
+		 * 8 bits per each hpte entries
+		 * 000| [ secondary group (one bit) | hidx (3 bits) | valid bit]
+		 */
+		valid = hpte_slot_array[i] & 0x1;
+		if (!valid)
+			continue;
+		hidx =  hpte_slot_array[i]  >> 1;
+
+		/* get the vpn */
+		addr = s_addr + (i * (1ul << shift));
+		if (!is_kernel_addr(addr)) {
+			ssize = user_segment_size(addr);
+			vsid = get_vsid(mm->context.id, addr, ssize);
+			WARN_ON(vsid == 0);
+		} else {
+			vsid = get_kernel_vsid(addr, mmu_kernel_ssize);
+			ssize = mmu_kernel_ssize;
+		}
+
+		vpn = hpt_vpn(addr, vsid, ssize);
+		hash = hpt_hash(vpn, shift, ssize);
+		if (hidx & _PTEIDX_SECONDARY)
+			hash = ~hash;
+
+		slot = (hash & htab_hash_mask) * HPTES_PER_GROUP;
+		slot += hidx & _PTEIDX_GROUP_IX;
+
+		hptep = htab_address + slot;
+		want_v = hpte_encode_avpn(vpn, psize, ssize);
+		native_lock_hpte(hptep);
+		hpte_v = hptep->v;
+
+		/* Even if we miss, we need to invalidate the TLB */
+		if (!HPTE_V_COMPARE(hpte_v, want_v) || !(hpte_v & HPTE_V_VALID))
+			native_unlock_hpte(hptep);
+		else
+			/* Invalidate the hpte. NOTE: this also unlocks it */
+			hptep->v = 0;
+	}
+	/*
+	 * Since this is a hugepage, we just need a single tlbie.
+	 * use the last vpn.
+	 */
+	lock_tlbie = !mmu_has_feature(MMU_FTR_LOCKLESS_TLBIE);
+	if (lock_tlbie)
+		raw_spin_lock(&native_tlbie_lock);
+
+	asm volatile("ptesync":::"memory");
+	__tlbie(vpn, psize, actual_psize, ssize);
+	asm volatile("eieio; tlbsync; ptesync":::"memory");
+
+	if (lock_tlbie)
+		raw_spin_unlock(&native_tlbie_lock);
+
+	local_irq_restore(flags);
+}
+
+
 static void hpte_decode(struct hash_pte *hpte, unsigned long slot,
 			int *psize, int *apsize, int *ssize, unsigned long *vpn)
 {
@@ -678,4 +755,5 @@ void __init hpte_init_native(void)
 	ppc_md.hpte_remove	= native_hpte_remove;
 	ppc_md.hpte_clear_all	= native_hpte_clear;
 	ppc_md.flush_hash_range = native_flush_hash_range;
+	ppc_md.hugepage_invalidate   = native_hugepage_invalidate;
 }
diff --git a/arch/powerpc/mm/pgtable.c b/arch/powerpc/mm/pgtable.c
index fbff062..386cab8 100644
--- a/arch/powerpc/mm/pgtable.c
+++ b/arch/powerpc/mm/pgtable.c
@@ -433,6 +433,7 @@ void hpte_need_hugepage_flush(struct mm_struct *mm, unsigned long addr,
 {
 	int ssize, i;
 	unsigned long s_addr;
+	int max_hpte_count;
 	unsigned int psize, valid;
 	unsigned char *hpte_slot_array;
 	unsigned long hidx, vpn, vsid, hash, shift, slot;
@@ -446,12 +447,18 @@ void hpte_need_hugepage_flush(struct mm_struct *mm, unsigned long addr,
 	 * second half of the PMD
 	 */
 	hpte_slot_array = *(char **)(pmdp + PTRS_PER_PMD);
-
 	/* get the base page size */
 	psize = get_slice_psize(mm, s_addr);
-	shift = mmu_psize_defs[psize].shift;
 
-	for (i = 0; i < HUGE_PAGE_SIZE/(1ul << shift); i++) {
+	if (ppc_md.hugepage_invalidate)
+		return ppc_md.hugepage_invalidate(mm, hpte_slot_array,
+						  s_addr, psize);
+	/*
+	 * No bluk hpte removal support, invalidate each entry
+	 */
+	shift = mmu_psize_defs[psize].shift;
+	max_hpte_count = HUGE_PAGE_SIZE/(1ul << shift);
+	for (i = 0; i < max_hpte_count; i++) {
 		/*
 		 * 8 bits per each hpte entries
 		 * 000| [ secondary group (one bit) | hidx (3 bits) | valid bit]
diff --git a/arch/powerpc/platforms/pseries/lpar.c b/arch/powerpc/platforms/pseries/lpar.c
index 3daced3..5fcc621 100644
--- a/arch/powerpc/platforms/pseries/lpar.c
+++ b/arch/powerpc/platforms/pseries/lpar.c
@@ -45,6 +45,13 @@
 #include "plpar_wrappers.h"
 #include "pseries.h"
 
+/* Flag bits for H_BULK_REMOVE */
+#define HBR_REQUEST	0x4000000000000000UL
+#define HBR_RESPONSE	0x8000000000000000UL
+#define HBR_END		0xc000000000000000UL
+#define HBR_AVPN	0x0200000000000000UL
+#define HBR_ANDCOND	0x0100000000000000UL
+
 
 /* in hvCall.S */
 EXPORT_SYMBOL(plpar_hcall);
@@ -339,6 +346,117 @@ static void pSeries_lpar_hpte_invalidate(unsigned long slot, unsigned long vpn,
 	BUG_ON(lpar_rc != H_SUCCESS);
 }
 
+/*
+ * Limit iterations holding pSeries_lpar_tlbie_lock to 3. We also need
+ * to make sure that we avoid bouncing the hypervisor tlbie lock.
+ */
+#define PPC64_HUGE_HPTE_BATCH 12
+
+static void __pSeries_lpar_hugepage_invalidate(unsigned long *slot,
+					     unsigned long *vpn, int count,
+					     int psize, int ssize)
+{
+	unsigned long param[9];
+	int i = 0, pix = 0, rc;
+	unsigned long flags = 0;
+	int lock_tlbie = !mmu_has_feature(MMU_FTR_LOCKLESS_TLBIE);
+
+	if (lock_tlbie)
+		spin_lock_irqsave(&pSeries_lpar_tlbie_lock, flags);
+
+	for (i = 0; i < count; i++) {
+
+		if (!firmware_has_feature(FW_FEATURE_BULK_REMOVE)) {
+			pSeries_lpar_hpte_invalidate(slot[i], vpn[i], psize,
+						     ssize, 0);
+		} else {
+			param[pix] = HBR_REQUEST | HBR_AVPN | slot[i];
+			param[pix+1] = hpte_encode_avpn(vpn[i], psize, ssize);
+			pix += 2;
+			if (pix == 8) {
+				rc = plpar_hcall9(H_BULK_REMOVE, param,
+						  param[0], param[1], param[2],
+						  param[3], param[4], param[5],
+						  param[6], param[7]);
+				BUG_ON(rc != H_SUCCESS);
+				pix = 0;
+			}
+		}
+	}
+	if (pix) {
+		param[pix] = HBR_END;
+		rc = plpar_hcall9(H_BULK_REMOVE, param, param[0], param[1],
+				  param[2], param[3], param[4], param[5],
+				  param[6], param[7]);
+		BUG_ON(rc != H_SUCCESS);
+	}
+
+	if (lock_tlbie)
+		spin_unlock_irqrestore(&pSeries_lpar_tlbie_lock, flags);
+}
+
+static void pSeries_lpar_hugepage_invalidate(struct mm_struct *mm,
+				       unsigned char *hpte_slot_array,
+				       unsigned long addr, int psize)
+{
+	int ssize = 0, i, index = 0;
+	unsigned long s_addr = addr;
+	unsigned int max_hpte_count, valid;
+	unsigned long vpn_array[PPC64_HUGE_HPTE_BATCH];
+	unsigned long slot_array[PPC64_HUGE_HPTE_BATCH];
+	unsigned long shift, hidx, vpn = 0, vsid, hash, slot;
+
+	shift = mmu_psize_defs[psize].shift;
+	max_hpte_count = HUGE_PAGE_SIZE/(1ul << shift);
+
+	for (i = 0; i < max_hpte_count; i++) {
+		/*
+		 * 8 bits per each hpte entries
+		 * 000| [ secondary group (one bit) | hidx (3 bits) | valid bit]
+		 */
+		valid = hpte_slot_array[i] & 0x1;
+		if (!valid)
+			continue;
+		hidx =  hpte_slot_array[i]  >> 1;
+
+		/* get the vpn */
+		addr = s_addr + (i * (1ul << shift));
+		if (!is_kernel_addr(addr)) {
+			ssize = user_segment_size(addr);
+			vsid = get_vsid(mm->context.id, addr, ssize);
+			WARN_ON(vsid == 0);
+		} else {
+			vsid = get_kernel_vsid(addr, mmu_kernel_ssize);
+			ssize = mmu_kernel_ssize;
+		}
+
+		vpn = hpt_vpn(addr, vsid, ssize);
+		hash = hpt_hash(vpn, shift, ssize);
+		if (hidx & _PTEIDX_SECONDARY)
+			hash = ~hash;
+
+		slot = (hash & htab_hash_mask) * HPTES_PER_GROUP;
+		slot += hidx & _PTEIDX_GROUP_IX;
+
+		slot_array[index] = slot;
+		vpn_array[index] = vpn;
+		if (index == PPC64_HUGE_HPTE_BATCH - 1) {
+			/*
+			 * Now do a bluk invalidate
+			 */
+			__pSeries_lpar_hugepage_invalidate(slot_array,
+							   vpn_array,
+							   PPC64_HUGE_HPTE_BATCH,
+							   psize, ssize);
+			index = 0;
+		} else
+			index++;
+	}
+	if (index)
+		__pSeries_lpar_hugepage_invalidate(slot_array, vpn_array,
+						   index, psize, ssize);
+}
+
 static void pSeries_lpar_hpte_removebolted(unsigned long ea,
 					   int psize, int ssize)
 {
@@ -354,13 +472,6 @@ static void pSeries_lpar_hpte_removebolted(unsigned long ea,
 	pSeries_lpar_hpte_invalidate(slot, vpn, psize, ssize, 0);
 }
 
-/* Flag bits for H_BULK_REMOVE */
-#define HBR_REQUEST	0x4000000000000000UL
-#define HBR_RESPONSE	0x8000000000000000UL
-#define HBR_END		0xc000000000000000UL
-#define HBR_AVPN	0x0200000000000000UL
-#define HBR_ANDCOND	0x0100000000000000UL
-
 /*
  * Take a spinlock around flushes to avoid bouncing the hypervisor tlbie
  * lock.
@@ -446,6 +557,7 @@ void __init hpte_init_lpar(void)
 	ppc_md.hpte_removebolted = pSeries_lpar_hpte_removebolted;
 	ppc_md.flush_hash_range	= pSeries_lpar_flush_hash_range;
 	ppc_md.hpte_clear_all   = pSeries_lpar_hptab_clear;
+	ppc_md.hugepage_invalidate = pSeries_lpar_hugepage_invalidate;
 }
 
 #ifdef CONFIG_PPC_SMLPAR
-- 
1.7.10

^ permalink raw reply related	[flat|nested] 73+ messages in thread

* [PATCH -V5 25/25] powerpc: Handle hugepages in kvm
  2013-04-04  5:57 [PATCH -V5 00/25] THP support for PPC64 Aneesh Kumar K.V
                   ` (23 preceding siblings ...)
  2013-04-04  5:58 ` [PATCH -V5 24/25] powerpc: Optimize hugepage invalidate Aneesh Kumar K.V
@ 2013-04-04  5:58 ` Aneesh Kumar K.V
  2013-04-04  6:00 ` [PATCH -V5 00/25] THP support for PPC64 Simon Jeons
                   ` (2 subsequent siblings)
  27 siblings, 0 replies; 73+ messages in thread
From: Aneesh Kumar K.V @ 2013-04-04  5:58 UTC (permalink / raw)
  To: benh, paulus; +Cc: linux-mm, linuxppc-dev, Aneesh Kumar K.V

From: "Aneesh Kumar K.V" <aneesh.kumar@linux.vnet.ibm.com>

We could possibly avoid some of these changes because most of the HUGE PMD bits
map to PTE bits.

Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
---
 arch/powerpc/include/asm/kvm_book3s_64.h |   31 ++++++++++++
 arch/powerpc/kvm/book3s_64_mmu_hv.c      |   12 ++++-
 arch/powerpc/kvm/book3s_hv_rm_mmu.c      |   75 ++++++++++++++++++++++--------
 3 files changed, 97 insertions(+), 21 deletions(-)

diff --git a/arch/powerpc/include/asm/kvm_book3s_64.h b/arch/powerpc/include/asm/kvm_book3s_64.h
index 38bec1d..1c5c799 100644
--- a/arch/powerpc/include/asm/kvm_book3s_64.h
+++ b/arch/powerpc/include/asm/kvm_book3s_64.h
@@ -110,6 +110,7 @@ static inline unsigned long compute_tlbie_rb(unsigned long v, unsigned long r,
 	return rb;
 }
 
+/* FIXME !! should we use hpte_actual_psize or hpte decode ? */
 static inline unsigned long hpte_page_size(unsigned long h, unsigned long l)
 {
 	/* only handle 4k, 64k and 16M pages for now */
@@ -189,6 +190,36 @@ static inline pte_t kvmppc_read_update_linux_pte(pte_t *p, int writing)
 	return pte;
 }
 
+/*
+ * Lock and read a linux hugepage PMD.  If it's present and writable, atomically
+ * set dirty and referenced bits and return the PMD, otherwise return 0.
+ */
+static inline pmd_t kvmppc_read_update_linux_hugepmd(pmd_t *p, int writing)
+{
+	pmd_t pmd, tmp;
+
+	/* wait until _PAGE_BUSY is clear then set it atomically */
+	__asm__ __volatile__ (
+		"1:	ldarx	%0,0,%3\n"
+		"	andi.	%1,%0,%4\n"
+		"	bne-	1b\n"
+		"	ori	%1,%0,%4\n"
+		"	stdcx.	%1,0,%3\n"
+		"	bne-	1b"
+		: "=&r" (pmd), "=&r" (tmp), "=m" (*p)
+		: "r" (p), "i" (PMD_HUGE_BUSY)
+		: "cc");
+
+	if (pmd_large(pmd)) {
+		pmd = pmd_mkyoung(pmd);
+		if (writing && pmd_write(pmd))
+			pmd = pte_mkdirty(pmd);
+	}
+
+	*p = pmd;	/* clears PMD_HUGE_BUSY */
+	return pmd;
+}
+
 /* Return HPTE cache control bits corresponding to Linux pte bits */
 static inline unsigned long hpte_cache_bits(unsigned long pte_val)
 {
diff --git a/arch/powerpc/kvm/book3s_64_mmu_hv.c b/arch/powerpc/kvm/book3s_64_mmu_hv.c
index 4f2a7dc..da006da 100644
--- a/arch/powerpc/kvm/book3s_64_mmu_hv.c
+++ b/arch/powerpc/kvm/book3s_64_mmu_hv.c
@@ -675,6 +675,7 @@ int kvmppc_book3s_hv_page_fault(struct kvm_run *run, struct kvm_vcpu *vcpu,
 		}
 		/* if the guest wants write access, see if that is OK */
 		if (!writing && hpte_is_writable(r)) {
+			int hugepage;
 			pte_t *ptep, pte;
 
 			/*
@@ -683,11 +684,18 @@ int kvmppc_book3s_hv_page_fault(struct kvm_run *run, struct kvm_vcpu *vcpu,
 			 */
 			rcu_read_lock_sched();
 			ptep = find_linux_pte_or_hugepte(current->mm->pgd,
-							 hva, NULL, NULL);
-			if (ptep && pte_present(*ptep)) {
+							 hva, NULL, &hugepage);
+			if (!hugepage && ptep && pte_present(*ptep)) {
 				pte = kvmppc_read_update_linux_pte(ptep, 1);
 				if (pte_write(pte))
 					write_ok = 1;
+			} else if (hugepage && ptep) {
+				pmd_t pmd = *(pmd_t *)ptep;
+				if (pmd_large(pmd)) {
+					pmd = kvmppc_read_update_linux_hugepmd((pmd_t *)ptep, 1);
+					if (pmd_write(pmd))
+						write_ok = 1;
+				}
 			}
 			rcu_read_unlock_sched();
 		}
diff --git a/arch/powerpc/kvm/book3s_hv_rm_mmu.c b/arch/powerpc/kvm/book3s_hv_rm_mmu.c
index 7c8e1ed..e9d4e3a 100644
--- a/arch/powerpc/kvm/book3s_hv_rm_mmu.c
+++ b/arch/powerpc/kvm/book3s_hv_rm_mmu.c
@@ -146,24 +146,37 @@ static void remove_revmap_chain(struct kvm *kvm, long pte_index,
 }
 
 static pte_t lookup_linux_pte(pgd_t *pgdir, unsigned long hva,
-			      int writing, unsigned long *pte_sizep)
+			      int writing, unsigned long *pte_sizep,
+			      int *hugepage)
 {
 	pte_t *ptep;
 	unsigned long ps = *pte_sizep;
 	unsigned int shift;
 
-	ptep = find_linux_pte_or_hugepte(pgdir, hva, &shift, NULL);
+	ptep = find_linux_pte_or_hugepte(pgdir, hva, &shift, hugepage);
 	if (!ptep)
 		return __pte(0);
-	if (shift)
-		*pte_sizep = 1ul << shift;
-	else
-		*pte_sizep = PAGE_SIZE;
+	if (*hugepage) {
+		*pte_sizep = 1ul << 24;
+	} else {
+		if (shift)
+			*pte_sizep = 1ul << shift;
+		else
+			*pte_sizep = PAGE_SIZE;
+	}
 	if (ps > *pte_sizep)
 		return __pte(0);
-	if (!pte_present(*ptep))
-		return __pte(0);
-	return kvmppc_read_update_linux_pte(ptep, writing);
+
+	if (*hugepage) {
+		pmd_t *pmdp = (pmd_t *)ptep;
+		if (!pmd_large(*pmdp))
+			return __pmd(0);
+		return kvmppc_read_update_linux_hugepmd(pmdp, writing);
+	} else {
+		if (!pte_present(*ptep))
+			return __pte(0);
+		return kvmppc_read_update_linux_pte(ptep, writing);
+	}
 }
 
 static inline void unlock_hpte(unsigned long *hpte, unsigned long hpte_v)
@@ -239,18 +252,34 @@ long kvmppc_do_h_enter(struct kvm *kvm, unsigned long flags,
 		pte_size = PAGE_SIZE << (pa & KVMPPC_PAGE_ORDER_MASK);
 		pa &= PAGE_MASK;
 	} else {
+		int hugepage;
+
 		/* Translate to host virtual address */
 		hva = __gfn_to_hva_memslot(memslot, gfn);
 
 		/* Look up the Linux PTE for the backing page */
 		pte_size = psize;
-		pte = lookup_linux_pte(pgdir, hva, writing, &pte_size);
-		if (pte_present(pte)) {
-			if (writing && !pte_write(pte))
-				/* make the actual HPTE be read-only */
-				ptel = hpte_make_readonly(ptel);
-			is_io = hpte_cache_bits(pte_val(pte));
-			pa = pte_pfn(pte) << PAGE_SHIFT;
+		pte = lookup_linux_pte(pgdir, hva, writing, &pte_size, &hugepage);
+		if (hugepage) {
+			pmd_t pmd = (pmd_t)pte;
+			if (!pmd_large(pmd)) {
+				if (writing && !pmd_write(pmd))
+					/* make the actual HPTE be read-only */
+					ptel = hpte_make_readonly(ptel);
+				/*
+				 * we support hugepage only for RAM
+				 */
+				is_io = 0;
+				pa = pmd_pfn(pmd) << PAGE_SHIFT;
+			}
+		} else {
+			if (pte_present(pte)) {
+				if (writing && !pte_write(pte))
+					/* make the actual HPTE be read-only */
+					ptel = hpte_make_readonly(ptel);
+				is_io = hpte_cache_bits(pte_val(pte));
+				pa = pte_pfn(pte) << PAGE_SHIFT;
+			}
 		}
 	}
 
@@ -645,10 +674,18 @@ long kvmppc_h_protect(struct kvm_vcpu *vcpu, unsigned long flags,
 			gfn = ((r & HPTE_R_RPN) & ~(psize - 1)) >> PAGE_SHIFT;
 			memslot = __gfn_to_memslot(kvm_memslots(kvm), gfn);
 			if (memslot) {
+				int hugepage;
 				hva = __gfn_to_hva_memslot(memslot, gfn);
-				pte = lookup_linux_pte(pgdir, hva, 1, &psize);
-				if (pte_present(pte) && !pte_write(pte))
-					r = hpte_make_readonly(r);
+				pte = lookup_linux_pte(pgdir, hva, 1,
+						       &psize, &hugepage);
+				if (hugepage) {
+					pmd_t pmd = (pmd_t)pte;
+					if (pmd_large(pmd) && !pmd_write(pmd))
+						r = hpte_make_readonly(r);
+				} else {
+					if (pte_present(pte) && !pte_write(pte))
+						r = hpte_make_readonly(r);
+				}
 			}
 		}
 	}
-- 
1.7.10

^ permalink raw reply related	[flat|nested] 73+ messages in thread

* Re: [PATCH -V5 00/25] THP support for PPC64
  2013-04-04  5:57 [PATCH -V5 00/25] THP support for PPC64 Aneesh Kumar K.V
                   ` (24 preceding siblings ...)
  2013-04-04  5:58 ` [PATCH -V5 25/25] powerpc: Handle hugepages in kvm Aneesh Kumar K.V
@ 2013-04-04  6:00 ` Simon Jeons
  2013-04-04  6:10   ` Aneesh Kumar K.V
  2013-04-04  6:14 ` Simon Jeons
  2013-04-19  1:55 ` Simon Jeons
  27 siblings, 1 reply; 73+ messages in thread
From: Simon Jeons @ 2013-04-04  6:00 UTC (permalink / raw)
  To: Aneesh Kumar K.V; +Cc: paulus, linuxppc-dev, linux-mm

Hi Aneesh,
On 04/04/2013 01:57 PM, Aneesh Kumar K.V wrote:
> Hi,
>
> This patchset adds transparent hugepage support for PPC64.
>
> TODO:
> * hash preload support in update_mmu_cache_pmd (we don't do that for hugetlb)
>
> Some numbers:
>
> The latency measurements code from Anton  found at
> http://ozlabs.org/~anton/junkcode/latency2001.c

Is there test case against x86?

>
> THP disabled 64K page size
> ------------------------
> [root@llmp24l02 ~]# ./latency2001 8G
>   8589934592    731.73 cycles    205.77 ns
> [root@llmp24l02 ~]# ./latency2001 8G
>   8589934592    743.39 cycles    209.05 ns
> [root@llmp24l02 ~]#
>
> THP disabled large page via hugetlbfs
> -------------------------------------
> [root@llmp24l02 ~]# ./latency2001  -l 8G
>   8589934592    416.09 cycles    117.01 ns
> [root@llmp24l02 ~]# ./latency2001  -l 8G
>   8589934592    415.74 cycles    116.91 ns
>
> THP enabled 64K page size.
> ----------------
> [root@llmp24l02 ~]# ./latency2001 8G
>   8589934592    405.07 cycles    113.91 ns
> [root@llmp24l02 ~]# ./latency2001 8G
>   8589934592    411.82 cycles    115.81 ns
> [root@llmp24l02 ~]#
>
> We are close to hugetlbfs in latency and we can achieve this with zero
> config/page reservation. Most of the allocations above are fault allocated.
>
> Another test that does 50000000 random access over 1GB area goes from
> 2.65 seconds to 1.07 seconds with this patchset.
>
> split_huge_page impact:
> ---------------------
> To look at the performance impact of large page invalidate, I tried the below
> experiment. The test involved, accessing a large contiguous region of memory
> location as below
>
>      for (i = 0; i < size; i += PAGE_SIZE)
> 	data[i] = i;
>
> We wanted to access the data in sequential order so that we look at the
> worst case THP performance. Accesing the data in sequential order implies
> we have the Page table cached and overhead of TLB miss is as minimal as
> possible. We also don't touch the entire page, because that can result in
> cache evict.
>
> After we touched the full range as above, we now call mprotect on each
> of that page. A mprotect will result in a hugepage split. This should
> allow us to measure the impact of hugepage split.
>
>      for (i = 0; i < size; i += PAGE_SIZE)
> 	 mprotect(&data[i], PAGE_SIZE, PROT_READ);
>
> Split hugepage impact:
> ---------------------
> THP enabled: 2.851561705 seconds for test completion
> THP disable: 3.599146098 seconds for test completion
>
> We are 20.7% better than non THP case even when we have all the large pages split.
>
> Detailed output:
>
> THP enabled:
> ---------------------------------------
> [root@llmp24l02 ~]# cat /proc/vmstat  | grep thp
> thp_fault_alloc 0
> thp_fault_fallback 0
> thp_collapse_alloc 0
> thp_collapse_alloc_failed 0
> thp_split 0
> thp_zero_page_alloc 0
> thp_zero_page_alloc_failed 0
> [root@llmp24l02 ~]# /root/thp/tools/perf/perf stat -e page-faults,dTLB-load-misses ./split-huge-page-mpro 20G
> time taken to touch all the data in ns: 2763096913
>
>   Performance counter stats for './split-huge-page-mpro 20G':
>
>               1,581 page-faults
>               3,159 dTLB-load-misses
>
>         2.851561705 seconds time elapsed
>
> [root@llmp24l02 ~]#
> [root@llmp24l02 ~]# cat /proc/vmstat  | grep thp
> thp_fault_alloc 1279
> thp_fault_fallback 0
> thp_collapse_alloc 0
> thp_collapse_alloc_failed 0
> thp_split 1279
> thp_zero_page_alloc 0
> thp_zero_page_alloc_failed 0
> [root@llmp24l02 ~]#
>
>      77.05%  split-huge-page  [kernel.kallsyms]     [k] .clear_user_page
>       7.10%  split-huge-page  [kernel.kallsyms]     [k] .perf_event_mmap_ctx
>       1.51%  split-huge-page  split-huge-page-mpro  [.] 0x0000000000000a70
>       0.96%  split-huge-page  [unknown]             [H] 0x000000000157e3bc
>       0.81%  split-huge-page  [kernel.kallsyms]     [k] .up_write
>       0.76%  split-huge-page  [kernel.kallsyms]     [k] .perf_event_mmap
>       0.76%  split-huge-page  [kernel.kallsyms]     [k] .down_write
>       0.74%  split-huge-page  [kernel.kallsyms]     [k] .lru_add_page_tail
>       0.61%  split-huge-page  [kernel.kallsyms]     [k] .split_huge_page
>       0.59%  split-huge-page  [kernel.kallsyms]     [k] .change_protection
>       0.51%  split-huge-page  [kernel.kallsyms]     [k] .release_pages
>
>
>       0.96%  split-huge-page  [unknown]             [H] 0x000000000157e3bc
>              |
>              |--79.44%-- reloc_start
>              |          |
>              |          |--86.54%-- .__pSeries_lpar_hugepage_invalidate
>              |          |          .pSeries_lpar_hugepage_invalidate
>              |          |          .hpte_need_hugepage_flush
>              |          |          .split_huge_page
>              |          |          .__split_huge_page_pmd
>              |          |          .vma_adjust
>              |          |          .vma_merge
>              |          |          .mprotect_fixup
>              |          |          .SyS_mprotect
>
>
> THP disabled:
> ---------------
> [root@llmp24l02 ~]# echo never > /sys/kernel/mm/transparent_hugepage/enabled
> [root@llmp24l02 ~]# /root/thp/tools/perf/perf stat -e page-faults,dTLB-load-misses ./split-huge-page-mpro 20G
> time taken to touch all the data in ns: 3513767220
>
>   Performance counter stats for './split-huge-page-mpro 20G':
>
>            3,27,726 page-faults
>            3,29,654 dTLB-load-misses
>
>         3.599146098 seconds time elapsed
>
> [root@llmp24l02 ~]#
>
> Changes from V4:
> * Fix bad page error in page_table_alloc
>    BUG: Bad page state in process stream  pfn:f1a59
>    page:f0000000034dc378 count:1 mapcount:0 mapping:          (null) index:0x0
>    [c000000f322c77d0] [c00000000015e198] .bad_page+0xe8/0x140
>    [c000000f322c7860] [c00000000015e3c4] .free_pages_prepare+0x1d4/0x1e0
>    [c000000f322c7910] [c000000000160450] .free_hot_cold_page+0x50/0x230
>    [c000000f322c79c0] [c00000000003ad18] .page_table_alloc+0x168/0x1c0
>
> Changes from V3:
> * PowerNV boot fixes
>
> Change from V2:
> * Change patch "powerpc: Reduce PTE table memory wastage" to use much simpler approach
>    for PTE page sharing.
> * Changes to handle huge pages in KVM code.
> * Address other review comments
>
> Changes from V1
> * Address review comments
> * More patch split
> * Add batch hpte invalidate for hugepages.
>
> Changes from RFC V2:
> * Address review comments
> * More code cleanup and patch split
>
> Changes from RFC V1:
> * HugeTLB fs now works
> * Compile issues fixed
> * rebased to v3.8
> * Patch series reorded so that ppc64 cleanups and MM THP changes are moved
>    early in the series. This should help in picking those patches early.
>
> Thanks,
> -aneesh
>
> --
> To unsubscribe, send a message with 'unsubscribe linux-mm' in
> the body to majordomo@kvack.org.  For more info on Linux MM,
> see: http://www.linux-mm.org/ .
> Don't email: <a href=mailto:"dont@kvack.org"> email@kvack.org </a>

^ permalink raw reply	[flat|nested] 73+ messages in thread

* Re: [PATCH -V5 00/25] THP support for PPC64
  2013-04-04  6:00 ` [PATCH -V5 00/25] THP support for PPC64 Simon Jeons
@ 2013-04-04  6:10   ` Aneesh Kumar K.V
  0 siblings, 0 replies; 73+ messages in thread
From: Aneesh Kumar K.V @ 2013-04-04  6:10 UTC (permalink / raw)
  To: Simon Jeons; +Cc: paulus, linuxppc-dev, linux-mm

Simon Jeons <simon.jeons@gmail.com> writes:

> Hi Aneesh,
> On 04/04/2013 01:57 PM, Aneesh Kumar K.V wrote:
>> Hi,
>>
>> This patchset adds transparent hugepage support for PPC64.
>>
>> TODO:
>> * hash preload support in update_mmu_cache_pmd (we don't do that for hugetlb)
>>
>> Some numbers:
>>
>> The latency measurements code from Anton  found at
>> http://ozlabs.org/~anton/junkcode/latency2001.c
>
> Is there test case against x86?
>

That test should work even with x86

-aneesh

^ permalink raw reply	[flat|nested] 73+ messages in thread

* Re: [PATCH -V5 00/25] THP support for PPC64
  2013-04-04  5:57 [PATCH -V5 00/25] THP support for PPC64 Aneesh Kumar K.V
                   ` (25 preceding siblings ...)
  2013-04-04  6:00 ` [PATCH -V5 00/25] THP support for PPC64 Simon Jeons
@ 2013-04-04  6:14 ` Simon Jeons
  2013-04-04  8:38   ` Aneesh Kumar K.V
  2013-04-19  1:55 ` Simon Jeons
  27 siblings, 1 reply; 73+ messages in thread
From: Simon Jeons @ 2013-04-04  6:14 UTC (permalink / raw)
  To: Aneesh Kumar K.V; +Cc: paulus, linuxppc-dev, linux-mm

Hi Aneesh,
On 04/04/2013 01:57 PM, Aneesh Kumar K.V wrote:
> Hi,
>
> This patchset adds transparent hugepage support for PPC64.
>
> TODO:
> * hash preload support in update_mmu_cache_pmd (we don't do that for hugetlb)
>
> Some numbers:
>
> The latency measurements code from Anton  found at
> http://ozlabs.org/~anton/junkcode/latency2001.c
>
> THP disabled 64K page size
> ------------------------
> [root@llmp24l02 ~]# ./latency2001 8G
>   8589934592    731.73 cycles    205.77 ns
> [root@llmp24l02 ~]# ./latency2001 8G
>   8589934592    743.39 cycles    209.05 ns

Could you explain what's the meaning of result?

> [root@llmp24l02 ~]#
>
> THP disabled large page via hugetlbfs
> -------------------------------------
> [root@llmp24l02 ~]# ./latency2001  -l 8G
>   8589934592    416.09 cycles    117.01 ns
> [root@llmp24l02 ~]# ./latency2001  -l 8G
>   8589934592    415.74 cycles    116.91 ns
>
> THP enabled 64K page size.
> ----------------
> [root@llmp24l02 ~]# ./latency2001 8G
>   8589934592    405.07 cycles    113.91 ns
> [root@llmp24l02 ~]# ./latency2001 8G
>   8589934592    411.82 cycles    115.81 ns
> [root@llmp24l02 ~]#
>
> We are close to hugetlbfs in latency and we can achieve this with zero
> config/page reservation. Most of the allocations above are fault allocated.
>
> Another test that does 50000000 random access over 1GB area goes from
> 2.65 seconds to 1.07 seconds with this patchset.
>
> split_huge_page impact:
> ---------------------
> To look at the performance impact of large page invalidate, I tried the below
> experiment. The test involved, accessing a large contiguous region of memory
> location as below
>
>      for (i = 0; i < size; i += PAGE_SIZE)
> 	data[i] = i;
>
> We wanted to access the data in sequential order so that we look at the
> worst case THP performance. Accesing the data in sequential order implies
> we have the Page table cached and overhead of TLB miss is as minimal as
> possible. We also don't touch the entire page, because that can result in
> cache evict.
>
> After we touched the full range as above, we now call mprotect on each
> of that page. A mprotect will result in a hugepage split. This should
> allow us to measure the impact of hugepage split.
>
>      for (i = 0; i < size; i += PAGE_SIZE)
> 	 mprotect(&data[i], PAGE_SIZE, PROT_READ);
>
> Split hugepage impact:
> ---------------------
> THP enabled: 2.851561705 seconds for test completion
> THP disable: 3.599146098 seconds for test completion
>
> We are 20.7% better than non THP case even when we have all the large pages split.
>
> Detailed output:
>
> THP enabled:
> ---------------------------------------
> [root@llmp24l02 ~]# cat /proc/vmstat  | grep thp
> thp_fault_alloc 0
> thp_fault_fallback 0
> thp_collapse_alloc 0
> thp_collapse_alloc_failed 0
> thp_split 0
> thp_zero_page_alloc 0
> thp_zero_page_alloc_failed 0
> [root@llmp24l02 ~]# /root/thp/tools/perf/perf stat -e page-faults,dTLB-load-misses ./split-huge-page-mpro 20G
> time taken to touch all the data in ns: 2763096913
>
>   Performance counter stats for './split-huge-page-mpro 20G':
>
>               1,581 page-faults
>               3,159 dTLB-load-misses
>
>         2.851561705 seconds time elapsed
>
> [root@llmp24l02 ~]#
> [root@llmp24l02 ~]# cat /proc/vmstat  | grep thp
> thp_fault_alloc 1279
> thp_fault_fallback 0
> thp_collapse_alloc 0
> thp_collapse_alloc_failed 0
> thp_split 1279
> thp_zero_page_alloc 0
> thp_zero_page_alloc_failed 0
> [root@llmp24l02 ~]#
>
>      77.05%  split-huge-page  [kernel.kallsyms]     [k] .clear_user_page
>       7.10%  split-huge-page  [kernel.kallsyms]     [k] .perf_event_mmap_ctx
>       1.51%  split-huge-page  split-huge-page-mpro  [.] 0x0000000000000a70
>       0.96%  split-huge-page  [unknown]             [H] 0x000000000157e3bc
>       0.81%  split-huge-page  [kernel.kallsyms]     [k] .up_write
>       0.76%  split-huge-page  [kernel.kallsyms]     [k] .perf_event_mmap
>       0.76%  split-huge-page  [kernel.kallsyms]     [k] .down_write
>       0.74%  split-huge-page  [kernel.kallsyms]     [k] .lru_add_page_tail
>       0.61%  split-huge-page  [kernel.kallsyms]     [k] .split_huge_page
>       0.59%  split-huge-page  [kernel.kallsyms]     [k] .change_protection
>       0.51%  split-huge-page  [kernel.kallsyms]     [k] .release_pages
>
>
>       0.96%  split-huge-page  [unknown]             [H] 0x000000000157e3bc
>              |
>              |--79.44%-- reloc_start
>              |          |
>              |          |--86.54%-- .__pSeries_lpar_hugepage_invalidate
>              |          |          .pSeries_lpar_hugepage_invalidate
>              |          |          .hpte_need_hugepage_flush
>              |          |          .split_huge_page
>              |          |          .__split_huge_page_pmd
>              |          |          .vma_adjust
>              |          |          .vma_merge
>              |          |          .mprotect_fixup
>              |          |          .SyS_mprotect
>
>
> THP disabled:
> ---------------
> [root@llmp24l02 ~]# echo never > /sys/kernel/mm/transparent_hugepage/enabled
> [root@llmp24l02 ~]# /root/thp/tools/perf/perf stat -e page-faults,dTLB-load-misses ./split-huge-page-mpro 20G
> time taken to touch all the data in ns: 3513767220
>
>   Performance counter stats for './split-huge-page-mpro 20G':
>
>            3,27,726 page-faults
>            3,29,654 dTLB-load-misses
>
>         3.599146098 seconds time elapsed
>
> [root@llmp24l02 ~]#
>
> Changes from V4:
> * Fix bad page error in page_table_alloc
>    BUG: Bad page state in process stream  pfn:f1a59
>    page:f0000000034dc378 count:1 mapcount:0 mapping:          (null) index:0x0
>    [c000000f322c77d0] [c00000000015e198] .bad_page+0xe8/0x140
>    [c000000f322c7860] [c00000000015e3c4] .free_pages_prepare+0x1d4/0x1e0
>    [c000000f322c7910] [c000000000160450] .free_hot_cold_page+0x50/0x230
>    [c000000f322c79c0] [c00000000003ad18] .page_table_alloc+0x168/0x1c0
>
> Changes from V3:
> * PowerNV boot fixes
>
> Change from V2:
> * Change patch "powerpc: Reduce PTE table memory wastage" to use much simpler approach
>    for PTE page sharing.
> * Changes to handle huge pages in KVM code.
> * Address other review comments
>
> Changes from V1
> * Address review comments
> * More patch split
> * Add batch hpte invalidate for hugepages.
>
> Changes from RFC V2:
> * Address review comments
> * More code cleanup and patch split
>
> Changes from RFC V1:
> * HugeTLB fs now works
> * Compile issues fixed
> * rebased to v3.8
> * Patch series reorded so that ppc64 cleanups and MM THP changes are moved
>    early in the series. This should help in picking those patches early.
>
> Thanks,
> -aneesh
>
> --
> To unsubscribe, send a message with 'unsubscribe linux-mm' in
> the body to majordomo@kvack.org.  For more info on Linux MM,
> see: http://www.linux-mm.org/ .
> Don't email: <a href=mailto:"dont@kvack.org"> email@kvack.org </a>

^ permalink raw reply	[flat|nested] 73+ messages in thread

* Re: [PATCH -V5 00/25] THP support for PPC64
  2013-04-04  6:14 ` Simon Jeons
@ 2013-04-04  8:38   ` Aneesh Kumar K.V
  0 siblings, 0 replies; 73+ messages in thread
From: Aneesh Kumar K.V @ 2013-04-04  8:38 UTC (permalink / raw)
  To: Simon Jeons; +Cc: paulus, linuxppc-dev, linux-mm

Simon Jeons <simon.jeons@gmail.com> writes:

> Hi Aneesh,
> On 04/04/2013 01:57 PM, Aneesh Kumar K.V wrote:
>> Hi,
>>
>> This patchset adds transparent hugepage support for PPC64.
>>
>> TODO:
>> * hash preload support in update_mmu_cache_pmd (we don't do that for hugetlb)
>>
>> Some numbers:
>>
>> The latency measurements code from Anton  found at
>> http://ozlabs.org/~anton/junkcode/latency2001.c
>>
>> THP disabled 64K page size
>> ------------------------
>> [root@llmp24l02 ~]# ./latency2001 8G
>>   8589934592    731.73 cycles    205.77 ns
>> [root@llmp24l02 ~]# ./latency2001 8G
>>   8589934592    743.39 cycles    209.05 ns
>
> Could you explain what's the meaning of result?
>

That is the total memory range, cycles taken to access an address and
time taken to access. That numbers shows the overhead of tlb miss.

you can find the source at http://ozlabs.org/~anton/junkcode/latency2001.c


-aneesh

^ permalink raw reply	[flat|nested] 73+ messages in thread

* Re: [PATCH -V5 06/25] powerpc: Reduce PTE table memory wastage
  2013-04-04  5:57 ` [PATCH -V5 06/25] powerpc: Reduce PTE table memory wastage Aneesh Kumar K.V
@ 2013-04-10  4:46   ` David Gibson
  2013-04-10  6:29     ` Aneesh Kumar K.V
  2013-04-10  7:14   ` Michael Ellerman
  1 sibling, 1 reply; 73+ messages in thread
From: David Gibson @ 2013-04-10  4:46 UTC (permalink / raw)
  To: Aneesh Kumar K.V; +Cc: paulus, linuxppc-dev, linux-mm

[-- Attachment #1: Type: text/plain, Size: 15308 bytes --]

On Thu, Apr 04, 2013 at 11:27:44AM +0530, Aneesh Kumar K.V wrote:
> From: "Aneesh Kumar K.V" <aneesh.kumar@linux.vnet.ibm.com>
> 
> We allocate one page for the last level of linux page table. With THP and
> large page size of 16MB, that would mean we are wasting large part
> of that page. To map 16MB area, we only need a PTE space of 2K with 64K
> page size. This patch reduce the space wastage by sharing the page
> allocated for the last level of linux page table with multiple pmd
> entries. We call these smaller chunks PTE page fragments and allocated
> page, PTE page.
> 
> In order to support systems which doesn't have 64K HPTE support, we also
> add another 2K to PTE page fragment. The second half of the PTE fragments
> is used for storing slot and secondary bit information of an HPTE. With this
> we now have a 4K PTE fragment.
> 
> We use a simple approach to share the PTE page. On allocation, we bump the
> PTE page refcount to 16 and share the PTE page with the next 16 pte alloc
> request. This should help in the node locality of the PTE page fragment,
> assuming that the immediate pte alloc request will mostly come from the
> same NUMA node. We don't try to reuse the freed PTE page fragment. Hence
> we could be waisting some space.
> 
> Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
> ---
>  arch/powerpc/include/asm/mmu-book3e.h |    4 +
>  arch/powerpc/include/asm/mmu-hash64.h |    4 +
>  arch/powerpc/include/asm/page.h       |    4 +
>  arch/powerpc/include/asm/pgalloc-64.h |   72 ++++-------------
>  arch/powerpc/kernel/setup_64.c        |    4 +-
>  arch/powerpc/mm/mmu_context_hash64.c  |   35 +++++++++
>  arch/powerpc/mm/pgtable_64.c          |  137 +++++++++++++++++++++++++++++++++
>  7 files changed, 202 insertions(+), 58 deletions(-)
> 
> diff --git a/arch/powerpc/include/asm/mmu-book3e.h b/arch/powerpc/include/asm/mmu-book3e.h
> index 99d43e0..affbd68 100644
> --- a/arch/powerpc/include/asm/mmu-book3e.h
> +++ b/arch/powerpc/include/asm/mmu-book3e.h
> @@ -231,6 +231,10 @@ typedef struct {
>  	u64 high_slices_psize;  /* 4 bits per slice for now */
>  	u16 user_psize;         /* page size index */
>  #endif
> +#ifdef CONFIG_PPC_64K_PAGES
> +	/* for 4K PTE fragment support */
> +	struct page *pgtable_page;
> +#endif
>  } mm_context_t;
>  
>  /* Page size definitions, common between 32 and 64-bit
> diff --git a/arch/powerpc/include/asm/mmu-hash64.h b/arch/powerpc/include/asm/mmu-hash64.h
> index 35bb51e..300ac3c 100644
> --- a/arch/powerpc/include/asm/mmu-hash64.h
> +++ b/arch/powerpc/include/asm/mmu-hash64.h
> @@ -498,6 +498,10 @@ typedef struct {
>  	unsigned long acop;	/* mask of enabled coprocessor types */
>  	unsigned int cop_pid;	/* pid value used with coprocessors */
>  #endif /* CONFIG_PPC_ICSWX */
> +#ifdef CONFIG_PPC_64K_PAGES
> +	/* for 4K PTE fragment support */
> +	struct page *pgtable_page;
> +#endif
>  } mm_context_t;
>  
>  
> diff --git a/arch/powerpc/include/asm/page.h b/arch/powerpc/include/asm/page.h
> index f072e97..38e7ff6 100644
> --- a/arch/powerpc/include/asm/page.h
> +++ b/arch/powerpc/include/asm/page.h
> @@ -378,7 +378,11 @@ void arch_free_page(struct page *page, int order);
>  
>  struct vm_area_struct;
>  
> +#ifdef CONFIG_PPC_64K_PAGES
> +typedef pte_t *pgtable_t;
> +#else
>  typedef struct page *pgtable_t;
> +#endif

Ugh, that's pretty horrible, though I don't see an easy way around it.

>  #include <asm-generic/memory_model.h>
>  #endif /* __ASSEMBLY__ */
> diff --git a/arch/powerpc/include/asm/pgalloc-64.h b/arch/powerpc/include/asm/pgalloc-64.h
> index cdbf555..3418989 100644
> --- a/arch/powerpc/include/asm/pgalloc-64.h
> +++ b/arch/powerpc/include/asm/pgalloc-64.h
> @@ -150,6 +150,13 @@ static inline void __pte_free_tlb(struct mmu_gather *tlb, pgtable_t table,
>  
>  #else /* if CONFIG_PPC_64K_PAGES */
>  
> +extern pte_t *page_table_alloc(struct mm_struct *, unsigned long, int);
> +extern void page_table_free(struct mm_struct *, unsigned long *, int);
> +extern void pgtable_free_tlb(struct mmu_gather *tlb, void *table, int shift);
> +#ifdef CONFIG_SMP
> +extern void __tlb_remove_table(void *_table);
> +#endif
> +
>  #define pud_populate(mm, pud, pmd)	pud_set(pud, (unsigned long)pmd)
>  
>  static inline void pmd_populate_kernel(struct mm_struct *mm, pmd_t *pmd,
> @@ -161,90 +168,42 @@ static inline void pmd_populate_kernel(struct mm_struct *mm, pmd_t *pmd,
>  static inline void pmd_populate(struct mm_struct *mm, pmd_t *pmd,
>  				pgtable_t pte_page)
>  {
> -	pmd_populate_kernel(mm, pmd, page_address(pte_page));
> +	pmd_set(pmd, (unsigned long)pte_page);
>  }
>  
>  static inline pgtable_t pmd_pgtable(pmd_t pmd)
>  {
> -	return pmd_page(pmd);
> +	return (pgtable_t)(pmd_val(pmd) & -sizeof(pte_t)*PTRS_PER_PTE);
>  }
>  
>  static inline pte_t *pte_alloc_one_kernel(struct mm_struct *mm,
>  					  unsigned long address)
>  {
> -	return (pte_t *)__get_free_page(GFP_KERNEL | __GFP_REPEAT | __GFP_ZERO);
> +	return (pte_t *)page_table_alloc(mm, address, 1);
>  }
>  
>  static inline pgtable_t pte_alloc_one(struct mm_struct *mm,
> -				      unsigned long address)
> +					unsigned long address)
>  {
> -	struct page *page;
> -	pte_t *pte;
> -
> -	pte = pte_alloc_one_kernel(mm, address);
> -	if (!pte)
> -		return NULL;
> -	page = virt_to_page(pte);
> -	pgtable_page_ctor(page);
> -	return page;
> +	return (pgtable_t)page_table_alloc(mm, address, 0);
>  }
>  
>  static inline void pte_free_kernel(struct mm_struct *mm, pte_t *pte)
>  {
> -	free_page((unsigned long)pte);
> +	page_table_free(mm, (unsigned long *)pte, 1);
>  }
>  
>  static inline void pte_free(struct mm_struct *mm, pgtable_t ptepage)
>  {
> -	pgtable_page_dtor(ptepage);
> -	__free_page(ptepage);
> -}
> -
> -static inline void pgtable_free(void *table, unsigned index_size)
> -{
> -	if (!index_size)
> -		free_page((unsigned long)table);
> -	else {
> -		BUG_ON(index_size > MAX_PGTABLE_INDEX_SIZE);
> -		kmem_cache_free(PGT_CACHE(index_size), table);
> -	}
> +	page_table_free(mm, (unsigned long *)ptepage, 0);
>  }
>  
> -#ifdef CONFIG_SMP
> -static inline void pgtable_free_tlb(struct mmu_gather *tlb,
> -				    void *table, int shift)
> -{
> -	unsigned long pgf = (unsigned long)table;
> -	BUG_ON(shift > MAX_PGTABLE_INDEX_SIZE);
> -	pgf |= shift;
> -	tlb_remove_table(tlb, (void *)pgf);
> -}
> -
> -static inline void __tlb_remove_table(void *_table)
> -{
> -	void *table = (void *)((unsigned long)_table & ~MAX_PGTABLE_INDEX_SIZE);
> -	unsigned shift = (unsigned long)_table & MAX_PGTABLE_INDEX_SIZE;
> -
> -	pgtable_free(table, shift);
> -}
> -#else /* !CONFIG_SMP */
> -static inline void pgtable_free_tlb(struct mmu_gather *tlb,
> -				    void *table, int shift)
> -{
> -	pgtable_free(table, shift);
> -}
> -#endif /* CONFIG_SMP */
> -
>  static inline void __pte_free_tlb(struct mmu_gather *tlb, pgtable_t table,
>  				  unsigned long address)
>  {
> -	struct page *page = page_address(table);
> -
>  	tlb_flush_pgtable(tlb, address);
> -	pgtable_page_dtor(page);
> -	pgtable_free_tlb(tlb, page, 0);
> +	pgtable_free_tlb(tlb, table, 0);
>  }
> -
>  #endif /* CONFIG_PPC_64K_PAGES */
>  
>  static inline pmd_t *pmd_alloc_one(struct mm_struct *mm, unsigned long addr)
> @@ -258,7 +217,6 @@ static inline void pmd_free(struct mm_struct *mm, pmd_t *pmd)
>  	kmem_cache_free(PGT_CACHE(PMD_INDEX_SIZE), pmd);
>  }
>  
> -
>  #define __pmd_free_tlb(tlb, pmd, addr)		      \
>  	pgtable_free_tlb(tlb, pmd, PMD_INDEX_SIZE)
>  #ifndef CONFIG_PPC_64K_PAGES
> diff --git a/arch/powerpc/kernel/setup_64.c b/arch/powerpc/kernel/setup_64.c
> index 6da881b..04d833c 100644
> --- a/arch/powerpc/kernel/setup_64.c
> +++ b/arch/powerpc/kernel/setup_64.c
> @@ -575,7 +575,9 @@ void __init setup_arch(char **cmdline_p)
>  	init_mm.end_code = (unsigned long) _etext;
>  	init_mm.end_data = (unsigned long) _edata;
>  	init_mm.brk = klimit;
> -	
> +#ifdef CONFIG_PPC_64K_PAGES
> +	init_mm.context.pgtable_page = NULL;
> +#endif
>  	irqstack_early_init();
>  	exc_lvl_early_init();
>  	emergency_stack_init();
> diff --git a/arch/powerpc/mm/mmu_context_hash64.c b/arch/powerpc/mm/mmu_context_hash64.c
> index 59cd773..fbfdca2 100644
> --- a/arch/powerpc/mm/mmu_context_hash64.c
> +++ b/arch/powerpc/mm/mmu_context_hash64.c
> @@ -86,6 +86,9 @@ int init_new_context(struct task_struct *tsk, struct mm_struct *mm)
>  	spin_lock_init(mm->context.cop_lockp);
>  #endif /* CONFIG_PPC_ICSWX */
>  
> +#ifdef CONFIG_PPC_64K_PAGES
> +	mm->context.pgtable_page = NULL;
> +#endif
>  	return 0;
>  }
>  
> @@ -97,13 +100,45 @@ void __destroy_context(int context_id)
>  }
>  EXPORT_SYMBOL_GPL(__destroy_context);
>  
> +#ifdef CONFIG_PPC_64K_PAGES
> +static void destroy_pagetable_page(struct mm_struct *mm)
> +{
> +	int count;
> +	struct page *page;
> +
> +	page = mm->context.pgtable_page;
> +	if (!page)
> +		return;
> +
> +	/* drop all the pending references */
> +	count = atomic_read(&page->_mapcount) + 1;
> +	/* We allow PTE_FRAG_NR(16) fragments from a PTE page */
> +	count = atomic_sub_return(16 - count, &page->_count);

You should really move PTE_FRAG_NR to a header so you can actually use
it here rather than hard coding 16.

It took me a fair while to convince myself that there is no race here
with something altering mapcount and count between the atomic_read()
and the atomic_sub_return().  It could do with a comment to explain
why that is safe.

Re-using the mapcount field for your index also seems odd, and it took
me a while to convince myself that that's safe too.  Wouldn't it be
simpler to store a pointer to the next sub-page in the mm_context
instead? You can get from that to the struct page easily enough with a
shift and pfn_to_page().

> +	if (!count) {
> +		pgtable_page_dtor(page);
> +		reset_page_mapcount(page);
> +		free_hot_cold_page(page, 0);

It would be nice to use put_page() somehow instead of duplicating its
logic, though I realise the sparc code you've based this on does the
same thing.

> +	}
> +}
> +
> +#else
> +static inline void destroy_pagetable_page(struct mm_struct *mm)
> +{
> +	return;
> +}
> +#endif
> +
> +
>  void destroy_context(struct mm_struct *mm)
>  {
> +
>  #ifdef CONFIG_PPC_ICSWX
>  	drop_cop(mm->context.acop, mm);
>  	kfree(mm->context.cop_lockp);
>  	mm->context.cop_lockp = NULL;
>  #endif /* CONFIG_PPC_ICSWX */
> +
> +	destroy_pagetable_page(mm);
>  	__destroy_context(mm->context.id);
>  	subpage_prot_free(mm);
>  	mm->context.id = MMU_NO_CONTEXT;
> diff --git a/arch/powerpc/mm/pgtable_64.c b/arch/powerpc/mm/pgtable_64.c
> index e212a27..e79840b 100644
> --- a/arch/powerpc/mm/pgtable_64.c
> +++ b/arch/powerpc/mm/pgtable_64.c
> @@ -337,3 +337,140 @@ EXPORT_SYMBOL(__ioremap_at);
>  EXPORT_SYMBOL(iounmap);
>  EXPORT_SYMBOL(__iounmap);
>  EXPORT_SYMBOL(__iounmap_at);
> +
> +#ifdef CONFIG_PPC_64K_PAGES
> +/*
> + * we support 16 fragments per PTE page. This is limited by how many
> + * bits we can pack in page->_mapcount. We use the first half for
> + * tracking the usage for rcu page table free.
> + */
> +#define PTE_FRAG_NR	16
> +/*
> + * We use a 2K PTE page fragment and another 2K for storing
> + * real_pte_t hash index
> + */
> +#define PTE_FRAG_SIZE (2 * PTRS_PER_PTE * sizeof(pte_t))
> +
> +static pte_t *get_from_cache(struct mm_struct *mm)
> +{
> +	int index;
> +	pte_t *ret = NULL;
> +	struct page *page;
> +
> +	spin_lock(&mm->page_table_lock);
> +	page = mm->context.pgtable_page;
> +	if (page) {
> +		void *p = page_address(page);
> +		index = atomic_add_return(1, &page->_mapcount);
> +		ret = (pte_t *) (p + (index * PTE_FRAG_SIZE));
> +		/*
> +		 * If we have taken up all the fragments mark PTE page NULL
> +		 */
> +		if (index == PTE_FRAG_NR - 1)
> +			mm->context.pgtable_page = NULL;
> +	}
> +	spin_unlock(&mm->page_table_lock);
> +	return ret;
> +}
> +
> +static pte_t *__alloc_for_cache(struct mm_struct *mm, int kernel)
> +{
> +	pte_t *ret = NULL;
> +	struct page *page = alloc_page(GFP_KERNEL | __GFP_NOTRACK |
> +				       __GFP_REPEAT | __GFP_ZERO);
> +	if (!page)
> +		return NULL;
> +
> +	spin_lock(&mm->page_table_lock);
> +	/*
> +	 * If we find pgtable_page set, we return
> +	 * the allocated page with single fragement
> +	 * count.
> +	 */
> +	if (likely(!mm->context.pgtable_page)) {
> +		atomic_set(&page->_count, PTE_FRAG_NR);
> +		atomic_set(&page->_mapcount, 0);
> +		mm->context.pgtable_page = page;
> +	}

.. and in the unlikely case where there *is* a pgtable_page already
set, what then?  Seems like you should BUG_ON, or at least return NULL
- as it is you will return the first sub-page of that page again,
which is very likely in use.

> +	spin_unlock(&mm->page_table_lock);
> +
> +	ret = (unsigned long *)page_address(page);
> +	if (!kernel)
> +		pgtable_page_ctor(page);
> +
> +	return ret;
> +}
> +
> +pte_t *page_table_alloc(struct mm_struct *mm, unsigned long vmaddr, int kernel)
> +{
> +	pte_t *pte;
> +
> +	pte = get_from_cache(mm);
> +	if (pte)
> +		return pte;
> +
> +	return __alloc_for_cache(mm, kernel);
> +}
> +
> +void page_table_free(struct mm_struct *mm, unsigned long *table, int kernel)
> +{
> +	struct page *page = virt_to_page(table);
> +	if (put_page_testzero(page)) {
> +		if (!kernel)
> +			pgtable_page_dtor(page);
> +		reset_page_mapcount(page);
> +		free_hot_cold_page(page, 0);
> +	}
> +}
> +
> +#ifdef CONFIG_SMP
> +static void page_table_free_rcu(void *table)
> +{
> +	struct page *page = virt_to_page(table);
> +	if (put_page_testzero(page)) {
> +		pgtable_page_dtor(page);
> +		reset_page_mapcount(page);
> +		free_hot_cold_page(page, 0);
> +	}
> +}
> +
> +void pgtable_free_tlb(struct mmu_gather *tlb, void *table, int shift)
> +{
> +	unsigned long pgf = (unsigned long)table;
> +
> +	BUG_ON(shift > MAX_PGTABLE_INDEX_SIZE);
> +	pgf |= shift;
> +	tlb_remove_table(tlb, (void *)pgf);
> +}
> +
> +void __tlb_remove_table(void *_table)
> +{
> +	void *table = (void *)((unsigned long)_table & ~MAX_PGTABLE_INDEX_SIZE);
> +	unsigned shift = (unsigned long)_table & MAX_PGTABLE_INDEX_SIZE;
> +
> +	if (!shift)
> +		/* PTE page needs special handling */
> +		page_table_free_rcu(table);
> +	else {
> +		BUG_ON(shift > MAX_PGTABLE_INDEX_SIZE);
> +		kmem_cache_free(PGT_CACHE(shift), table);
> +	}
> +}
> +#else
> +void pgtable_free_tlb(struct mmu_gather *tlb, void *table, int shift)
> +{
> +	if (!shift) {
> +		/* PTE page needs special handling */
> +		struct page *page = virt_to_page(table);
> +		if (put_page_testzero(page)) {
> +			pgtable_page_dtor(page);
> +			reset_page_mapcount(page);
> +			free_hot_cold_page(page, 0);
> +		}
> +	} else {
> +		BUG_ON(shift > MAX_PGTABLE_INDEX_SIZE);
> +		kmem_cache_free(PGT_CACHE(shift), table);
> +	}
> +}
> +#endif
> +#endif /* CONFIG_PPC_64K_PAGES */

-- 
David Gibson			| I'll have my music baroque, and my code
david AT gibson.dropbear.id.au	| minimalist, thank you.  NOT _the_ _other_
				| _way_ _around_!
http://www.ozlabs.org/~dgibson

[-- Attachment #2: Digital signature --]
[-- Type: application/pgp-signature, Size: 198 bytes --]

^ permalink raw reply	[flat|nested] 73+ messages in thread

* Re: [PATCH -V5 06/25] powerpc: Reduce PTE table memory wastage
  2013-04-10  4:46   ` David Gibson
@ 2013-04-10  6:29     ` Aneesh Kumar K.V
  2013-04-10  7:04       ` David Gibson
  0 siblings, 1 reply; 73+ messages in thread
From: Aneesh Kumar K.V @ 2013-04-10  6:29 UTC (permalink / raw)
  To: David Gibson; +Cc: paulus, linuxppc-dev, linux-mm

David Gibson <dwg@au1.ibm.com> writes:

> On Thu, Apr 04, 2013 at 11:27:44AM +0530, Aneesh Kumar K.V wrote:
>> From: "Aneesh Kumar K.V" <aneesh.kumar@linux.vnet.ibm.com>
>> 
>> We allocate one page for the last level of linux page table. With THP and
>> large page size of 16MB, that would mean we are wasting large part
>> of that page. To map 16MB area, we only need a PTE space of 2K with 64K
>> page size. This patch reduce the space wastage by sharing the page
>> allocated for the last level of linux page table with multiple pmd
>> entries. We call these smaller chunks PTE page fragments and allocated
>> page, PTE page.
>> 
>> In order to support systems which doesn't have 64K HPTE support, we also
>> add another 2K to PTE page fragment. The second half of the PTE fragments
>> is used for storing slot and secondary bit information of an HPTE. With this
>> we now have a 4K PTE fragment.
>> 
>> We use a simple approach to share the PTE page. On allocation, we bump the
>> PTE page refcount to 16 and share the PTE page with the next 16 pte alloc
>> request. This should help in the node locality of the PTE page fragment,
>> assuming that the immediate pte alloc request will mostly come from the
>> same NUMA node. We don't try to reuse the freed PTE page fragment. Hence
>> we could be waisting some space.
>> 
>> Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
>> ---
>>  arch/powerpc/include/asm/mmu-book3e.h |    4 +
>>  arch/powerpc/include/asm/mmu-hash64.h |    4 +
>>  arch/powerpc/include/asm/page.h       |    4 +
>>  arch/powerpc/include/asm/pgalloc-64.h |   72 ++++-------------
>>  arch/powerpc/kernel/setup_64.c        |    4 +-
>>  arch/powerpc/mm/mmu_context_hash64.c  |   35 +++++++++
>>  arch/powerpc/mm/pgtable_64.c          |  137 +++++++++++++++++++++++++++++++++
>>  7 files changed, 202 insertions(+), 58 deletions(-)
>> 
>> diff --git a/arch/powerpc/include/asm/mmu-book3e.h b/arch/powerpc/include/asm/mmu-book3e.h
>> index 99d43e0..affbd68 100644
>> --- a/arch/powerpc/include/asm/mmu-book3e.h
>> +++ b/arch/powerpc/include/asm/mmu-book3e.h
>> @@ -231,6 +231,10 @@ typedef struct {
>>  	u64 high_slices_psize;  /* 4 bits per slice for now */
>>  	u16 user_psize;         /* page size index */
>>  #endif
>> +#ifdef CONFIG_PPC_64K_PAGES
>> +	/* for 4K PTE fragment support */
>> +	struct page *pgtable_page;
>> +#endif
>>  } mm_context_t;
>>  
>>  /* Page size definitions, common between 32 and 64-bit
>> diff --git a/arch/powerpc/include/asm/mmu-hash64.h b/arch/powerpc/include/asm/mmu-hash64.h
>> index 35bb51e..300ac3c 100644
>> --- a/arch/powerpc/include/asm/mmu-hash64.h
>> +++ b/arch/powerpc/include/asm/mmu-hash64.h
>> @@ -498,6 +498,10 @@ typedef struct {
>>  	unsigned long acop;	/* mask of enabled coprocessor types */
>>  	unsigned int cop_pid;	/* pid value used with coprocessors */
>>  #endif /* CONFIG_PPC_ICSWX */
>> +#ifdef CONFIG_PPC_64K_PAGES
>> +	/* for 4K PTE fragment support */
>> +	struct page *pgtable_page;
>> +#endif
>>  } mm_context_t;
>>  
>>  
>> diff --git a/arch/powerpc/include/asm/page.h b/arch/powerpc/include/asm/page.h
>> index f072e97..38e7ff6 100644
>> --- a/arch/powerpc/include/asm/page.h
>> +++ b/arch/powerpc/include/asm/page.h
>> @@ -378,7 +378,11 @@ void arch_free_page(struct page *page, int order);
>>  
>>  struct vm_area_struct;
>>  
>> +#ifdef CONFIG_PPC_64K_PAGES
>> +typedef pte_t *pgtable_t;
>> +#else
>>  typedef struct page *pgtable_t;
>> +#endif
>
> Ugh, that's pretty horrible, though I don't see an easy way around it.
>
>>  #include <asm-generic/memory_model.h>
>>  #endif /* __ASSEMBLY__ */
>> diff --git a/arch/powerpc/include/asm/pgalloc-64.h b/arch/powerpc/include/asm/pgalloc-64.h
>> index cdbf555..3418989 100644
>> --- a/arch/powerpc/include/asm/pgalloc-64.h
>> +++ b/arch/powerpc/include/asm/pgalloc-64.h
>> @@ -150,6 +150,13 @@ static inline void __pte_free_tlb(struct mmu_gather *tlb, pgtable_t table,
>>  
>>  #else /* if CONFIG_PPC_64K_PAGES */
>>  
>> +extern pte_t *page_table_alloc(struct mm_struct *, unsigned long, int);
>> +extern void page_table_free(struct mm_struct *, unsigned long *, int);
>> +extern void pgtable_free_tlb(struct mmu_gather *tlb, void *table, int shift);
>> +#ifdef CONFIG_SMP
>> +extern void __tlb_remove_table(void *_table);
>> +#endif
>> +
>>  #define pud_populate(mm, pud, pmd)	pud_set(pud, (unsigned long)pmd)
>>  
>>  static inline void pmd_populate_kernel(struct mm_struct *mm, pmd_t *pmd,
>> @@ -161,90 +168,42 @@ static inline void pmd_populate_kernel(struct mm_struct *mm, pmd_t *pmd,
>>  static inline void pmd_populate(struct mm_struct *mm, pmd_t *pmd,
>>  				pgtable_t pte_page)
>>  {
>> -	pmd_populate_kernel(mm, pmd, page_address(pte_page));
>> +	pmd_set(pmd, (unsigned long)pte_page);
>>  }
>>  
>>  static inline pgtable_t pmd_pgtable(pmd_t pmd)
>>  {
>> -	return pmd_page(pmd);
>> +	return (pgtable_t)(pmd_val(pmd) & -sizeof(pte_t)*PTRS_PER_PTE);
>>  }
>>  
>>  static inline pte_t *pte_alloc_one_kernel(struct mm_struct *mm,
>>  					  unsigned long address)
>>  {
>> -	return (pte_t *)__get_free_page(GFP_KERNEL | __GFP_REPEAT | __GFP_ZERO);
>> +	return (pte_t *)page_table_alloc(mm, address, 1);
>>  }
>>  
>>  static inline pgtable_t pte_alloc_one(struct mm_struct *mm,
>> -				      unsigned long address)
>> +					unsigned long address)
>>  {
>> -	struct page *page;
>> -	pte_t *pte;
>> -
>> -	pte = pte_alloc_one_kernel(mm, address);
>> -	if (!pte)
>> -		return NULL;
>> -	page = virt_to_page(pte);
>> -	pgtable_page_ctor(page);
>> -	return page;
>> +	return (pgtable_t)page_table_alloc(mm, address, 0);
>>  }
>>  
>>  static inline void pte_free_kernel(struct mm_struct *mm, pte_t *pte)
>>  {
>> -	free_page((unsigned long)pte);
>> +	page_table_free(mm, (unsigned long *)pte, 1);
>>  }
>>  
>>  static inline void pte_free(struct mm_struct *mm, pgtable_t ptepage)
>>  {
>> -	pgtable_page_dtor(ptepage);
>> -	__free_page(ptepage);
>> -}
>> -
>> -static inline void pgtable_free(void *table, unsigned index_size)
>> -{
>> -	if (!index_size)
>> -		free_page((unsigned long)table);
>> -	else {
>> -		BUG_ON(index_size > MAX_PGTABLE_INDEX_SIZE);
>> -		kmem_cache_free(PGT_CACHE(index_size), table);
>> -	}
>> +	page_table_free(mm, (unsigned long *)ptepage, 0);
>>  }
>>  
>> -#ifdef CONFIG_SMP
>> -static inline void pgtable_free_tlb(struct mmu_gather *tlb,
>> -				    void *table, int shift)
>> -{
>> -	unsigned long pgf = (unsigned long)table;
>> -	BUG_ON(shift > MAX_PGTABLE_INDEX_SIZE);
>> -	pgf |= shift;
>> -	tlb_remove_table(tlb, (void *)pgf);
>> -}
>> -
>> -static inline void __tlb_remove_table(void *_table)
>> -{
>> -	void *table = (void *)((unsigned long)_table & ~MAX_PGTABLE_INDEX_SIZE);
>> -	unsigned shift = (unsigned long)_table & MAX_PGTABLE_INDEX_SIZE;
>> -
>> -	pgtable_free(table, shift);
>> -}
>> -#else /* !CONFIG_SMP */
>> -static inline void pgtable_free_tlb(struct mmu_gather *tlb,
>> -				    void *table, int shift)
>> -{
>> -	pgtable_free(table, shift);
>> -}
>> -#endif /* CONFIG_SMP */
>> -
>>  static inline void __pte_free_tlb(struct mmu_gather *tlb, pgtable_t table,
>>  				  unsigned long address)
>>  {
>> -	struct page *page = page_address(table);
>> -
>>  	tlb_flush_pgtable(tlb, address);
>> -	pgtable_page_dtor(page);
>> -	pgtable_free_tlb(tlb, page, 0);
>> +	pgtable_free_tlb(tlb, table, 0);
>>  }
>> -
>>  #endif /* CONFIG_PPC_64K_PAGES */
>>  
>>  static inline pmd_t *pmd_alloc_one(struct mm_struct *mm, unsigned long addr)
>> @@ -258,7 +217,6 @@ static inline void pmd_free(struct mm_struct *mm, pmd_t *pmd)
>>  	kmem_cache_free(PGT_CACHE(PMD_INDEX_SIZE), pmd);
>>  }
>>  
>> -
>>  #define __pmd_free_tlb(tlb, pmd, addr)		      \
>>  	pgtable_free_tlb(tlb, pmd, PMD_INDEX_SIZE)
>>  #ifndef CONFIG_PPC_64K_PAGES
>> diff --git a/arch/powerpc/kernel/setup_64.c b/arch/powerpc/kernel/setup_64.c
>> index 6da881b..04d833c 100644
>> --- a/arch/powerpc/kernel/setup_64.c
>> +++ b/arch/powerpc/kernel/setup_64.c
>> @@ -575,7 +575,9 @@ void __init setup_arch(char **cmdline_p)
>>  	init_mm.end_code = (unsigned long) _etext;
>>  	init_mm.end_data = (unsigned long) _edata;
>>  	init_mm.brk = klimit;
>> -	
>> +#ifdef CONFIG_PPC_64K_PAGES
>> +	init_mm.context.pgtable_page = NULL;
>> +#endif
>>  	irqstack_early_init();
>>  	exc_lvl_early_init();
>>  	emergency_stack_init();
>> diff --git a/arch/powerpc/mm/mmu_context_hash64.c b/arch/powerpc/mm/mmu_context_hash64.c
>> index 59cd773..fbfdca2 100644
>> --- a/arch/powerpc/mm/mmu_context_hash64.c
>> +++ b/arch/powerpc/mm/mmu_context_hash64.c
>> @@ -86,6 +86,9 @@ int init_new_context(struct task_struct *tsk, struct mm_struct *mm)
>>  	spin_lock_init(mm->context.cop_lockp);
>>  #endif /* CONFIG_PPC_ICSWX */
>>  
>> +#ifdef CONFIG_PPC_64K_PAGES
>> +	mm->context.pgtable_page = NULL;
>> +#endif
>>  	return 0;
>>  }
>>  
>> @@ -97,13 +100,45 @@ void __destroy_context(int context_id)
>>  }
>>  EXPORT_SYMBOL_GPL(__destroy_context);
>>  
>> +#ifdef CONFIG_PPC_64K_PAGES
>> +static void destroy_pagetable_page(struct mm_struct *mm)
>> +{
>> +	int count;
>> +	struct page *page;
>> +
>> +	page = mm->context.pgtable_page;
>> +	if (!page)
>> +		return;
>> +
>> +	/* drop all the pending references */
>> +	count = atomic_read(&page->_mapcount) + 1;
>> +	/* We allow PTE_FRAG_NR(16) fragments from a PTE page */
>> +	count = atomic_sub_return(16 - count, &page->_count);
>
> You should really move PTE_FRAG_NR to a header so you can actually use
> it here rather than hard coding 16.
>
> It took me a fair while to convince myself that there is no race here
> with something altering mapcount and count between the atomic_read()
> and the atomic_sub_return().  It could do with a comment to explain
> why that is safe.
>
> Re-using the mapcount field for your index also seems odd, and it took
> me a while to convince myself that that's safe too.  Wouldn't it be
> simpler to store a pointer to the next sub-page in the mm_context
> instead? You can get from that to the struct page easily enough with a
> shift and pfn_to_page().

I found using _mapcount simpler in this case. I was looking at it not
as an index, but rather how may fragments are mapped/used already. Using
subpage pointer in mm->context.xyz means, we have to calculate the
number of fragments used/mapped via the pointer. We need the fragment
count so that we can drop page reference count correctly here.


>
>> +	if (!count) {
>> +		pgtable_page_dtor(page);
>> +		reset_page_mapcount(page);
>> +		free_hot_cold_page(page, 0);
>
> It would be nice to use put_page() somehow instead of duplicating its
> logic, though I realise the sparc code you've based this on does the
> same thing.

That is not exactly put_page. We can avoid lots of check in this
specific case.

>
>> +	}
>> +}
>> +
>> +#else
>> +static inline void destroy_pagetable_page(struct mm_struct *mm)
>> +{
>> +	return;
>> +}
>> +#endif
>> +
>> +
>>  void destroy_context(struct mm_struct *mm)
>>  {
>> +
>>  #ifdef CONFIG_PPC_ICSWX
>>  	drop_cop(mm->context.acop, mm);
>>  	kfree(mm->context.cop_lockp);
>>  	mm->context.cop_lockp = NULL;
>>  #endif /* CONFIG_PPC_ICSWX */
>> +
>> +	destroy_pagetable_page(mm);
>>  	__destroy_context(mm->context.id);
>>  	subpage_prot_free(mm);
>>  	mm->context.id = MMU_NO_CONTEXT;
>> diff --git a/arch/powerpc/mm/pgtable_64.c b/arch/powerpc/mm/pgtable_64.c
>> index e212a27..e79840b 100644
>> --- a/arch/powerpc/mm/pgtable_64.c
>> +++ b/arch/powerpc/mm/pgtable_64.c
>> @@ -337,3 +337,140 @@ EXPORT_SYMBOL(__ioremap_at);
>>  EXPORT_SYMBOL(iounmap);
>>  EXPORT_SYMBOL(__iounmap);
>>  EXPORT_SYMBOL(__iounmap_at);
>> +
>> +#ifdef CONFIG_PPC_64K_PAGES
>> +/*
>> + * we support 16 fragments per PTE page. This is limited by how many
>> + * bits we can pack in page->_mapcount. We use the first half for
>> + * tracking the usage for rcu page table free.
>> + */
>> +#define PTE_FRAG_NR	16
>> +/*
>> + * We use a 2K PTE page fragment and another 2K for storing
>> + * real_pte_t hash index
>> + */
>> +#define PTE_FRAG_SIZE (2 * PTRS_PER_PTE * sizeof(pte_t))
>> +
>> +static pte_t *get_from_cache(struct mm_struct *mm)
>> +{
>> +	int index;
>> +	pte_t *ret = NULL;
>> +	struct page *page;
>> +
>> +	spin_lock(&mm->page_table_lock);
>> +	page = mm->context.pgtable_page;
>> +	if (page) {
>> +		void *p = page_address(page);
>> +		index = atomic_add_return(1, &page->_mapcount);
>> +		ret = (pte_t *) (p + (index * PTE_FRAG_SIZE));
>> +		/*
>> +		 * If we have taken up all the fragments mark PTE page NULL
>> +		 */
>> +		if (index == PTE_FRAG_NR - 1)
>> +			mm->context.pgtable_page = NULL;
>> +	}
>> +	spin_unlock(&mm->page_table_lock);
>> +	return ret;
>> +}
>> +
>> +static pte_t *__alloc_for_cache(struct mm_struct *mm, int kernel)
>> +{
>> +	pte_t *ret = NULL;
>> +	struct page *page = alloc_page(GFP_KERNEL | __GFP_NOTRACK |
>> +				       __GFP_REPEAT | __GFP_ZERO);
>> +	if (!page)
>> +		return NULL;
>> +
>> +	spin_lock(&mm->page_table_lock);
>> +	/*
>> +	 * If we find pgtable_page set, we return
>> +	 * the allocated page with single fragement
>> +	 * count.
>> +	 */
>> +	if (likely(!mm->context.pgtable_page)) {
>> +		atomic_set(&page->_count, PTE_FRAG_NR);
>> +		atomic_set(&page->_mapcount, 0);
>> +		mm->context.pgtable_page = page;
>> +	}
>
> .. and in the unlikely case where there *is* a pgtable_page already
> set, what then?  Seems like you should BUG_ON, or at least return NULL
> - as it is you will return the first sub-page of that page again,
> which is very likely in use.


As explained in the comment above, we return with the allocated page
with fragment count set to 1. So we end up having only one fragment. The
other option I had was to to free the allocated page and do a
get_from_cache under the page_table_lock. But since we already allocated
the page, why not use that ?. It also keep the code similar to sparc.


>
>> +	spin_unlock(&mm->page_table_lock);
>> +
>> +	ret = (unsigned long *)page_address(page);
>> +	if (!kernel)
>> +		pgtable_page_ctor(page);
>> +
>> +	return ret;
>> +}
>> +
>> +pte_t *page_table_alloc(struct mm_struct *mm, unsigned long vmaddr, int kernel)
>> +{
>> +	pte_t *pte;
>> +
>> +	pte = get_from_cache(mm);
>> +	if (pte)
>> +		return pte;
>> +
>> +	return __alloc_for_cache(mm, kernel);
>> +}
>> +
>> +void page_table_free(struct mm_struct *mm, unsigned long *table, int kernel)
>> +{
>> +	struct page *page = virt_to_page(table);
>> +	if (put_page_testzero(page)) {
>> +		if (!kernel)
>> +			pgtable_page_dtor(page);
>> +		reset_page_mapcount(page);
>> +		free_hot_cold_page(page, 0);
>> +	}
>> +}
>> +
>> +#ifdef CONFIG_SMP
>> +static void page_table_free_rcu(void *table)
>> +{
>> +	struct page *page = virt_to_page(table);
>> +	if (put_page_testzero(page)) {
>> +		pgtable_page_dtor(page);
>> +		reset_page_mapcount(page);
>> +		free_hot_cold_page(page, 0);
>> +	}
>> +}
>> +
>> +void pgtable_free_tlb(struct mmu_gather *tlb, void *table, int shift)
>> +{
>> +	unsigned long pgf = (unsigned long)table;
>> +
>> +	BUG_ON(shift > MAX_PGTABLE_INDEX_SIZE);
>> +	pgf |= shift;
>> +	tlb_remove_table(tlb, (void *)pgf);
>> +}
>> +
>> +void __tlb_remove_table(void *_table)
>> +{
>> +	void *table = (void *)((unsigned long)_table & ~MAX_PGTABLE_INDEX_SIZE);
>> +	unsigned shift = (unsigned long)_table & MAX_PGTABLE_INDEX_SIZE;
>> +
>> +	if (!shift)
>> +		/* PTE page needs special handling */
>> +		page_table_free_rcu(table);
>> +	else {
>> +		BUG_ON(shift > MAX_PGTABLE_INDEX_SIZE);
>> +		kmem_cache_free(PGT_CACHE(shift), table);
>> +	}
>> +}
>> +#else
>> +void pgtable_free_tlb(struct mmu_gather *tlb, void *table, int shift)
>> +{
>> +	if (!shift) {
>> +		/* PTE page needs special handling */
>> +		struct page *page = virt_to_page(table);
>> +		if (put_page_testzero(page)) {
>> +			pgtable_page_dtor(page);
>> +			reset_page_mapcount(page);
>> +			free_hot_cold_page(page, 0);
>> +		}
>> +	} else {
>> +		BUG_ON(shift > MAX_PGTABLE_INDEX_SIZE);
>> +		kmem_cache_free(PGT_CACHE(shift), table);
>> +	}
>> +}
>> +#endif
>> +#endif /* CONFIG_PPC_64K_PAGES */
>
> -- 
> David Gibson			| I'll have my music baroque, and my code
> david AT gibson.dropbear.id.au	| minimalist, thank you.  NOT _the_ _other_
> 				| _way_ _around_!
> http://www.ozlabs.org/~dgibson

^ permalink raw reply	[flat|nested] 73+ messages in thread

* Re: [PATCH -V5 06/25] powerpc: Reduce PTE table memory wastage
  2013-04-10  6:29     ` Aneesh Kumar K.V
@ 2013-04-10  7:04       ` David Gibson
  2013-04-10  7:53         ` Aneesh Kumar K.V
  0 siblings, 1 reply; 73+ messages in thread
From: David Gibson @ 2013-04-10  7:04 UTC (permalink / raw)
  To: Aneesh Kumar K.V; +Cc: paulus, linuxppc-dev, linux-mm

[-- Attachment #1: Type: text/plain, Size: 4253 bytes --]

On Wed, Apr 10, 2013 at 11:59:29AM +0530, Aneesh Kumar K.V wrote:
> David Gibson <dwg@au1.ibm.com> writes:
> > On Thu, Apr 04, 2013 at 11:27:44AM +0530, Aneesh Kumar K.V wrote:
[snip]
> >> @@ -97,13 +100,45 @@ void __destroy_context(int context_id)
> >>  }
> >>  EXPORT_SYMBOL_GPL(__destroy_context);
> >>  
> >> +#ifdef CONFIG_PPC_64K_PAGES
> >> +static void destroy_pagetable_page(struct mm_struct *mm)
> >> +{
> >> +	int count;
> >> +	struct page *page;
> >> +
> >> +	page = mm->context.pgtable_page;
> >> +	if (!page)
> >> +		return;
> >> +
> >> +	/* drop all the pending references */
> >> +	count = atomic_read(&page->_mapcount) + 1;
> >> +	/* We allow PTE_FRAG_NR(16) fragments from a PTE page */
> >> +	count = atomic_sub_return(16 - count, &page->_count);
> >
> > You should really move PTE_FRAG_NR to a header so you can actually use
> > it here rather than hard coding 16.
> >
> > It took me a fair while to convince myself that there is no race here
> > with something altering mapcount and count between the atomic_read()
> > and the atomic_sub_return().  It could do with a comment to explain
> > why that is safe.
> >
> > Re-using the mapcount field for your index also seems odd, and it took
> > me a while to convince myself that that's safe too.  Wouldn't it be
> > simpler to store a pointer to the next sub-page in the mm_context
> > instead? You can get from that to the struct page easily enough with a
> > shift and pfn_to_page().
> 
> I found using _mapcount simpler in this case. I was looking at it not
> as an index, but rather how may fragments are mapped/used already.

Except that it's actually (#fragments - 1).  Using subpage pointer
makes the fragments calculation (very slightly) harder, but the
calculation of the table address easier.  More importantly it avoids
adding effectively an extra variable - which is then shoehorned into a
structure not really designed to hold it.

> Using
> subpage pointer in mm->context.xyz means, we have to calculate the
> number of fragments used/mapped via the pointer. We need the fragment
> count so that we can drop page reference count correctly here.
> 
> 
> >
> >> +	if (!count) {
> >> +		pgtable_page_dtor(page);
> >> +		reset_page_mapcount(page);
> >> +		free_hot_cold_page(page, 0);
> >
> > It would be nice to use put_page() somehow instead of duplicating its
> > logic, though I realise the sparc code you've based this on does the
> > same thing.
> 
> That is not exactly put_page. We can avoid lots of check in this
> specific case.

[snip]
> >> +static pte_t *__alloc_for_cache(struct mm_struct *mm, int kernel)
> >> +{
> >> +	pte_t *ret = NULL;
> >> +	struct page *page = alloc_page(GFP_KERNEL | __GFP_NOTRACK |
> >> +				       __GFP_REPEAT | __GFP_ZERO);
> >> +	if (!page)
> >> +		return NULL;
> >> +
> >> +	spin_lock(&mm->page_table_lock);
> >> +	/*
> >> +	 * If we find pgtable_page set, we return
> >> +	 * the allocated page with single fragement
> >> +	 * count.
> >> +	 */
> >> +	if (likely(!mm->context.pgtable_page)) {
> >> +		atomic_set(&page->_count, PTE_FRAG_NR);
> >> +		atomic_set(&page->_mapcount, 0);
> >> +		mm->context.pgtable_page = page;
> >> +	}
> >
> > .. and in the unlikely case where there *is* a pgtable_page already
> > set, what then?  Seems like you should BUG_ON, or at least return NULL
> > - as it is you will return the first sub-page of that page again,
> > which is very likely in use.
> 
> 
> As explained in the comment above, we return with the allocated page
> with fragment count set to 1. So we end up having only one fragment. The
> other option I had was to to free the allocated page and do a
> get_from_cache under the page_table_lock. But since we already allocated
> the page, why not use that ?. It also keep the code similar to
> sparc.

My point is that I can't see any circumstance under which we should
ever hit this case.  Which means if we do something is badly messed up
and we should BUG() (or at least WARN()).

-- 
David Gibson			| I'll have my music baroque, and my code
david AT gibson.dropbear.id.au	| minimalist, thank you.  NOT _the_ _other_
				| _way_ _around_!
http://www.ozlabs.org/~dgibson

[-- Attachment #2: Digital signature --]
[-- Type: application/pgp-signature, Size: 198 bytes --]

^ permalink raw reply	[flat|nested] 73+ messages in thread

* Re: [PATCH -V5 06/25] powerpc: Reduce PTE table memory wastage
  2013-04-04  5:57 ` [PATCH -V5 06/25] powerpc: Reduce PTE table memory wastage Aneesh Kumar K.V
  2013-04-10  4:46   ` David Gibson
@ 2013-04-10  7:14   ` Michael Ellerman
  2013-04-10  7:54     ` Aneesh Kumar K.V
  1 sibling, 1 reply; 73+ messages in thread
From: Michael Ellerman @ 2013-04-10  7:14 UTC (permalink / raw)
  To: Aneesh Kumar K.V; +Cc: paulus, linuxppc-dev, linux-mm

On Thu, Apr 04, 2013 at 11:27:44AM +0530, Aneesh Kumar K.V wrote:
> From: "Aneesh Kumar K.V" <aneesh.kumar@linux.vnet.ibm.com>
> 
> We allocate one page for the last level of linux page table. With THP and
> large page size of 16MB, that would mean we are wasting large part
> of that page. To map 16MB area, we only need a PTE space of 2K with 64K
> page size. This patch reduce the space wastage by sharing the page
> allocated for the last level of linux page table with multiple pmd
> entries. We call these smaller chunks PTE page fragments and allocated
> page, PTE page.

This is not compiling for me:

arch/powerpc/mm/mmu_context_hash64.c:118:3: error: implicit declaration of function 'reset_page_mapcount'

And similar.

cheers

^ permalink raw reply	[flat|nested] 73+ messages in thread

* Re: [PATCH -V5 08/25] powerpc: Decode the pte-lp-encoding bits correctly.
  2013-04-04  5:57 ` [PATCH -V5 08/25] powerpc: Decode the pte-lp-encoding bits correctly Aneesh Kumar K.V
@ 2013-04-10  7:19   ` David Gibson
  2013-04-10  8:11     ` Aneesh Kumar K.V
  0 siblings, 1 reply; 73+ messages in thread
From: David Gibson @ 2013-04-10  7:19 UTC (permalink / raw)
  To: Aneesh Kumar K.V; +Cc: paulus, linuxppc-dev, linux-mm

[-- Attachment #1: Type: text/plain, Size: 31290 bytes --]

On Thu, Apr 04, 2013 at 11:27:46AM +0530, Aneesh Kumar K.V wrote:
> From: "Aneesh Kumar K.V" <aneesh.kumar@linux.vnet.ibm.com>
> 
> We look at both the segment base page size and actual page size and store
> the pte-lp-encodings in an array per base page size.
> 
> We also update all relevant functions to take actual page size argument
> so that we can use the correct PTE LP encoding in HPTE. This should also
> get the basic Multiple Page Size per Segment (MPSS) support. This is needed
> to enable THP on ppc64.
> 
> Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
> ---
>  arch/powerpc/include/asm/machdep.h      |    3 +-
>  arch/powerpc/include/asm/mmu-hash64.h   |   33 ++++----
>  arch/powerpc/kvm/book3s_hv.c            |    2 +-
>  arch/powerpc/mm/hash_low_64.S           |   18 ++--
>  arch/powerpc/mm/hash_native_64.c        |  138 ++++++++++++++++++++++---------
>  arch/powerpc/mm/hash_utils_64.c         |  121 +++++++++++++++++----------
>  arch/powerpc/mm/hugetlbpage-hash64.c    |    4 +-
>  arch/powerpc/platforms/cell/beat_htab.c |   16 ++--
>  arch/powerpc/platforms/ps3/htab.c       |    6 +-
>  arch/powerpc/platforms/pseries/lpar.c   |    6 +-
>  10 files changed, 230 insertions(+), 117 deletions(-)
> 
> diff --git a/arch/powerpc/include/asm/machdep.h b/arch/powerpc/include/asm/machdep.h
> index 19d9d96..6cee6e0 100644
> --- a/arch/powerpc/include/asm/machdep.h
> +++ b/arch/powerpc/include/asm/machdep.h
> @@ -50,7 +50,8 @@ struct machdep_calls {
>  				       unsigned long prpn,
>  				       unsigned long rflags,
>  				       unsigned long vflags,
> -				       int psize, int ssize);
> +				       int psize, int apsize,
> +				       int ssize);
>  	long		(*hpte_remove)(unsigned long hpte_group);
>  	void            (*hpte_removebolted)(unsigned long ea,
>  					     int psize, int ssize);
> diff --git a/arch/powerpc/include/asm/mmu-hash64.h b/arch/powerpc/include/asm/mmu-hash64.h
> index 300ac3c..e42f4a3 100644
> --- a/arch/powerpc/include/asm/mmu-hash64.h
> +++ b/arch/powerpc/include/asm/mmu-hash64.h
> @@ -154,7 +154,7 @@ extern unsigned long htab_hash_mask;
>  struct mmu_psize_def
>  {
>  	unsigned int	shift;	/* number of bits */
> -	unsigned int	penc;	/* HPTE encoding */
> +	int		penc[MMU_PAGE_COUNT];	/* HPTE encoding */
>  	unsigned int	tlbiel;	/* tlbiel supported for that page size */
>  	unsigned long	avpnm;	/* bits to mask out in AVPN in the HPTE */
>  	unsigned long	sllp;	/* SLB L||LP (exact mask to use in slbmte) */
> @@ -181,6 +181,13 @@ struct mmu_psize_def
>   */
>  #define VPN_SHIFT	12
>  
> +/*
> + * HPTE Large Page (LP) details
> + */
> +#define LP_SHIFT	12
> +#define LP_BITS		8
> +#define LP_MASK(i)	((0xFF >> (i)) << LP_SHIFT)
> +
>  #ifndef __ASSEMBLY__
>  
>  static inline int segment_shift(int ssize)
> @@ -237,14 +244,14 @@ static inline unsigned long hpte_encode_avpn(unsigned long vpn, int psize,
>  
>  /*
>   * This function sets the AVPN and L fields of the HPTE  appropriately
> - * for the page size
> + * using the base page size and actual page size.
>   */
> -static inline unsigned long hpte_encode_v(unsigned long vpn,
> -					  int psize, int ssize)
> +static inline unsigned long hpte_encode_v(unsigned long vpn, int base_psize,
> +					  int actual_psize, int ssize)
>  {
>  	unsigned long v;
> -	v = hpte_encode_avpn(vpn, psize, ssize);
> -	if (psize != MMU_PAGE_4K)
> +	v = hpte_encode_avpn(vpn, base_psize, ssize);
> +	if (actual_psize != MMU_PAGE_4K)
>  		v |= HPTE_V_LARGE;
>  	return v;
>  }
> @@ -254,19 +261,17 @@ static inline unsigned long hpte_encode_v(unsigned long vpn,
>   * for the page size. We assume the pa is already "clean" that is properly
>   * aligned for the requested page size
>   */
> -static inline unsigned long hpte_encode_r(unsigned long pa, int psize)
> +static inline unsigned long hpte_encode_r(unsigned long pa, int base_psize,
> +					  int actual_psize)
>  {
> -	unsigned long r;
> -
>  	/* A 4K page needs no special encoding */
> -	if (psize == MMU_PAGE_4K)
> +	if (actual_psize == MMU_PAGE_4K)
>  		return pa & HPTE_R_RPN;
>  	else {
> -		unsigned int penc = mmu_psize_defs[psize].penc;
> -		unsigned int shift = mmu_psize_defs[psize].shift;
> -		return (pa & ~((1ul << shift) - 1)) | (penc << 12);
> +		unsigned int penc = mmu_psize_defs[base_psize].penc[actual_psize];
> +		unsigned int shift = mmu_psize_defs[actual_psize].shift;
> +		return (pa & ~((1ul << shift) - 1)) | (penc << LP_SHIFT);
>  	}
> -	return r;
>  }
>  
>  /*
> diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c
> index 71d0c90..48f6d99 100644
> --- a/arch/powerpc/kvm/book3s_hv.c
> +++ b/arch/powerpc/kvm/book3s_hv.c
> @@ -1515,7 +1515,7 @@ static void kvmppc_add_seg_page_size(struct kvm_ppc_one_seg_page_size **sps,
>  	(*sps)->page_shift = def->shift;
>  	(*sps)->slb_enc = def->sllp;
>  	(*sps)->enc[0].page_shift = def->shift;
> -	(*sps)->enc[0].pte_enc = def->penc;
> +	(*sps)->enc[0].pte_enc = def->penc[linux_psize];
>  	(*sps)++;
>  }
>  
> diff --git a/arch/powerpc/mm/hash_low_64.S b/arch/powerpc/mm/hash_low_64.S
> index abdd5e2..0e980ac 100644
> --- a/arch/powerpc/mm/hash_low_64.S
> +++ b/arch/powerpc/mm/hash_low_64.S
> @@ -196,7 +196,8 @@ htab_insert_pte:
>  	mr	r4,r29			/* Retrieve vpn */
>  	li	r7,0			/* !bolted, !secondary */
>  	li	r8,MMU_PAGE_4K		/* page size */
> -	ld	r9,STK_PARAM(R9)(r1)	/* segment size */
> +	li	r9,MMU_PAGE_4K		/* actual page size */
> +	ld	r10,STK_PARAM(R9)(r1)	/* segment size */
>  _GLOBAL(htab_call_hpte_insert1)
>  	bl	.			/* Patched by htab_finish_init() */
>  	cmpdi	0,r3,0
> @@ -219,7 +220,8 @@ _GLOBAL(htab_call_hpte_insert1)
>  	mr	r4,r29			/* Retrieve vpn */
>  	li	r7,HPTE_V_SECONDARY	/* !bolted, secondary */
>  	li	r8,MMU_PAGE_4K		/* page size */
> -	ld	r9,STK_PARAM(R9)(r1)	/* segment size */
> +	li	r9,MMU_PAGE_4K		/* actual page size */
> +	ld	r10,STK_PARAM(R9)(r1)	/* segment size */
>  _GLOBAL(htab_call_hpte_insert2)
>  	bl	.			/* Patched by htab_finish_init() */
>  	cmpdi	0,r3,0
> @@ -515,7 +517,8 @@ htab_special_pfn:
>  	mr	r4,r29			/* Retrieve vpn */
>  	li	r7,0			/* !bolted, !secondary */
>  	li	r8,MMU_PAGE_4K		/* page size */
> -	ld	r9,STK_PARAM(R9)(r1)	/* segment size */
> +	li	r9,MMU_PAGE_4K		/* actual page size */
> +	ld	r10,STK_PARAM(R9)(r1)	/* segment size */
>  _GLOBAL(htab_call_hpte_insert1)
>  	bl	.			/* patched by htab_finish_init() */
>  	cmpdi	0,r3,0
> @@ -542,7 +545,8 @@ _GLOBAL(htab_call_hpte_insert1)
>  	mr	r4,r29			/* Retrieve vpn */
>  	li	r7,HPTE_V_SECONDARY	/* !bolted, secondary */
>  	li	r8,MMU_PAGE_4K		/* page size */
> -	ld	r9,STK_PARAM(R9)(r1)	/* segment size */
> +	li	r9,MMU_PAGE_4K		/* actual page size */
> +	ld	r10,STK_PARAM(R9)(r1)	/* segment size */
>  _GLOBAL(htab_call_hpte_insert2)
>  	bl	.			/* patched by htab_finish_init() */
>  	cmpdi	0,r3,0
> @@ -840,7 +844,8 @@ ht64_insert_pte:
>  	mr	r4,r29			/* Retrieve vpn */
>  	li	r7,0			/* !bolted, !secondary */
>  	li	r8,MMU_PAGE_64K
> -	ld	r9,STK_PARAM(R9)(r1)	/* segment size */
> +	li	r9,MMU_PAGE_64K		/* actual page size */
> +	ld	r10,STK_PARAM(R9)(r1)	/* segment size */
>  _GLOBAL(ht64_call_hpte_insert1)
>  	bl	.			/* patched by htab_finish_init() */
>  	cmpdi	0,r3,0
> @@ -863,7 +868,8 @@ _GLOBAL(ht64_call_hpte_insert1)
>  	mr	r4,r29			/* Retrieve vpn */
>  	li	r7,HPTE_V_SECONDARY	/* !bolted, secondary */
>  	li	r8,MMU_PAGE_64K
> -	ld	r9,STK_PARAM(R9)(r1)	/* segment size */
> +	li	r9,MMU_PAGE_64K		/* actual page size */
> +	ld	r10,STK_PARAM(R9)(r1)	/* segment size */
>  _GLOBAL(ht64_call_hpte_insert2)
>  	bl	.			/* patched by htab_finish_init() */
>  	cmpdi	0,r3,0
> diff --git a/arch/powerpc/mm/hash_native_64.c b/arch/powerpc/mm/hash_native_64.c
> index 9d8983a..aa0499b 100644
> --- a/arch/powerpc/mm/hash_native_64.c
> +++ b/arch/powerpc/mm/hash_native_64.c
> @@ -39,7 +39,7 @@
>  
>  DEFINE_RAW_SPINLOCK(native_tlbie_lock);
>  
> -static inline void __tlbie(unsigned long vpn, int psize, int ssize)
> +static inline void __tlbie(unsigned long vpn, int psize, int apsize, int ssize)
>  {
>  	unsigned long va;
>  	unsigned int penc;
> @@ -68,7 +68,7 @@ static inline void __tlbie(unsigned long vpn, int psize, int ssize)
>  		break;
>  	default:
>  		/* We need 14 to 14 + i bits of va */
> -		penc = mmu_psize_defs[psize].penc;
> +		penc = mmu_psize_defs[psize].penc[apsize];
>  		va &= ~((1ul << mmu_psize_defs[psize].shift) - 1);
>  		va |= penc << 12;
>  		va |= ssize << 8;
> @@ -80,7 +80,7 @@ static inline void __tlbie(unsigned long vpn, int psize, int ssize)
>  	}
>  }
>  
> -static inline void __tlbiel(unsigned long vpn, int psize, int ssize)
> +static inline void __tlbiel(unsigned long vpn, int psize, int apsize, int ssize)
>  {
>  	unsigned long va;
>  	unsigned int penc;
> @@ -102,7 +102,7 @@ static inline void __tlbiel(unsigned long vpn, int psize, int ssize)
>  		break;
>  	default:
>  		/* We need 14 to 14 + i bits of va */
> -		penc = mmu_psize_defs[psize].penc;
> +		penc = mmu_psize_defs[psize].penc[apsize];
>  		va &= ~((1ul << mmu_psize_defs[psize].shift) - 1);
>  		va |= penc << 12;
>  		va |= ssize << 8;
> @@ -114,7 +114,8 @@ static inline void __tlbiel(unsigned long vpn, int psize, int ssize)
>  
>  }
>  
> -static inline void tlbie(unsigned long vpn, int psize, int ssize, int local)
> +static inline void tlbie(unsigned long vpn, int psize, int apsize,
> +			 int ssize, int local)
>  {
>  	unsigned int use_local = local && mmu_has_feature(MMU_FTR_TLBIEL);
>  	int lock_tlbie = !mmu_has_feature(MMU_FTR_LOCKLESS_TLBIE);
> @@ -125,10 +126,10 @@ static inline void tlbie(unsigned long vpn, int psize, int ssize, int local)
>  		raw_spin_lock(&native_tlbie_lock);
>  	asm volatile("ptesync": : :"memory");
>  	if (use_local) {
> -		__tlbiel(vpn, psize, ssize);
> +		__tlbiel(vpn, psize, apsize, ssize);
>  		asm volatile("ptesync": : :"memory");
>  	} else {
> -		__tlbie(vpn, psize, ssize);
> +		__tlbie(vpn, psize, apsize, ssize);
>  		asm volatile("eieio; tlbsync; ptesync": : :"memory");
>  	}
>  	if (lock_tlbie && !use_local)
> @@ -156,7 +157,7 @@ static inline void native_unlock_hpte(struct hash_pte *hptep)
>  
>  static long native_hpte_insert(unsigned long hpte_group, unsigned long vpn,
>  			unsigned long pa, unsigned long rflags,
> -			unsigned long vflags, int psize, int ssize)
> +			unsigned long vflags, int psize, int apsize, int ssize)
>  {
>  	struct hash_pte *hptep = htab_address + hpte_group;
>  	unsigned long hpte_v, hpte_r;
> @@ -183,8 +184,8 @@ static long native_hpte_insert(unsigned long hpte_group, unsigned long vpn,
>  	if (i == HPTES_PER_GROUP)
>  		return -1;
>  
> -	hpte_v = hpte_encode_v(vpn, psize, ssize) | vflags | HPTE_V_VALID;
> -	hpte_r = hpte_encode_r(pa, psize) | rflags;
> +	hpte_v = hpte_encode_v(vpn, psize, apsize, ssize) | vflags | HPTE_V_VALID;
> +	hpte_r = hpte_encode_r(pa, psize, apsize) | rflags;
>  
>  	if (!(vflags & HPTE_V_BOLTED)) {
>  		DBG_LOW(" i=%x hpte_v=%016lx, hpte_r=%016lx\n",
> @@ -244,6 +245,48 @@ static long native_hpte_remove(unsigned long hpte_group)
>  	return i;
>  }
>  
> +static inline int hpte_actual_psize(struct hash_pte *hptep, int psize)
> +{
> +	int i, shift;
> +	unsigned int mask;
> +	/* Look at the 8 bit LP value */
> +	unsigned int lp = (hptep->r >> LP_SHIFT) & ((1 << LP_BITS) - 1);
> +
> +	if (!(hptep->v & HPTE_V_VALID))
> +		return -1;

Folding the validity check into the size check seems confusing to me.

> +	/* First check if it is large page */
> +	if (!(hptep->v & HPTE_V_LARGE))
> +		return MMU_PAGE_4K;
> +
> +	/* start from 1 ignoring MMU_PAGE_4K */
> +	for (i = 1; i < MMU_PAGE_COUNT; i++) {
> +		/* valid entries have a shift value */
> +		if (!mmu_psize_defs[i].shift)
> +			continue;

Isn't this check redundant with the one below?

> +		/* invalid penc */
> +		if (mmu_psize_defs[psize].penc[i] == -1)
> +			continue;
> +		/*
> +		 * encoding bits per actual page size
> +		 *        PTE LP     actual page size
> +		 *    rrrr rrrz		>=8KB
> +		 *    rrrr rrzz		>=16KB
> +		 *    rrrr rzzz		>=32KB
> +		 *    rrrr zzzz		>=64KB
> +		 * .......
> +		 */
> +		shift = mmu_psize_defs[i].shift - LP_SHIFT;
> +		if (shift > LP_BITS)
> +			shift = LP_BITS;
> +		mask = (1 << shift) - 1;
> +		if ((lp & mask) == mmu_psize_defs[psize].penc[i])
> +			return i;
> +	}

Shouldn't we have a BUG() or something here.  If we get here we've
somehow created a PTE with LP bits we can't interpret, yes?

> +	return -1;
> +}
> +
>  static long native_hpte_updatepp(unsigned long slot, unsigned long newpp,
>  				 unsigned long vpn, int psize, int ssize,
>  				 int local)
> @@ -251,6 +294,7 @@ static long native_hpte_updatepp(unsigned long slot, unsigned long newpp,
>  	struct hash_pte *hptep = htab_address + slot;
>  	unsigned long hpte_v, want_v;
>  	int ret = 0;
> +	int actual_psize;
>  
>  	want_v = hpte_encode_avpn(vpn, psize, ssize);
>  
> @@ -260,9 +304,13 @@ static long native_hpte_updatepp(unsigned long slot, unsigned long newpp,
>  	native_lock_hpte(hptep);
>  
>  	hpte_v = hptep->v;
> -
> +	actual_psize = hpte_actual_psize(hptep, psize);
> +	if (actual_psize < 0) {
> +		native_unlock_hpte(hptep);
> +		return -1;
> +	}

Wouldn't it make more sense to only do the psize lookup once you've
found a matching hpte?

>  	/* Even if we miss, we need to invalidate the TLB */
> -	if (!HPTE_V_COMPARE(hpte_v, want_v) || !(hpte_v & HPTE_V_VALID)) {
> +	if (!HPTE_V_COMPARE(hpte_v, want_v)) {
>  		DBG_LOW(" -> miss\n");
>  		ret = -1;
>  	} else {
> @@ -274,7 +322,7 @@ static long native_hpte_updatepp(unsigned long slot, unsigned long newpp,
>  	native_unlock_hpte(hptep);
>  
>  	/* Ensure it is out of the tlb too. */
> -	tlbie(vpn, psize, ssize, local);
> +	tlbie(vpn, psize, actual_psize, ssize, local);
>  
>  	return ret;
>  }
> @@ -315,6 +363,7 @@ static long native_hpte_find(unsigned long vpn, int psize, int ssize)
>  static void native_hpte_updateboltedpp(unsigned long newpp, unsigned long ea,
>  				       int psize, int ssize)
>  {
> +	int actual_psize;
>  	unsigned long vpn;
>  	unsigned long vsid;
>  	long slot;
> @@ -327,13 +376,16 @@ static void native_hpte_updateboltedpp(unsigned long newpp, unsigned long ea,
>  	if (slot == -1)
>  		panic("could not find page to bolt\n");
>  	hptep = htab_address + slot;
> +	actual_psize = hpte_actual_psize(hptep, psize);
> +	if (actual_psize < 0)
> +		return;
>  
>  	/* Update the HPTE */
>  	hptep->r = (hptep->r & ~(HPTE_R_PP | HPTE_R_N)) |
>  		(newpp & (HPTE_R_PP | HPTE_R_N));
>  
>  	/* Ensure it is out of the tlb too. */
> -	tlbie(vpn, psize, ssize, 0);
> +	tlbie(vpn, psize, actual_psize, ssize, 0);
>  }
>  
>  static void native_hpte_invalidate(unsigned long slot, unsigned long vpn,
> @@ -343,6 +395,7 @@ static void native_hpte_invalidate(unsigned long slot, unsigned long vpn,
>  	unsigned long hpte_v;
>  	unsigned long want_v;
>  	unsigned long flags;
> +	int actual_psize;
>  
>  	local_irq_save(flags);
>  
> @@ -352,35 +405,38 @@ static void native_hpte_invalidate(unsigned long slot, unsigned long vpn,
>  	native_lock_hpte(hptep);
>  	hpte_v = hptep->v;
>  
> +	actual_psize = hpte_actual_psize(hptep, psize);
> +	if (actual_psize < 0) {
> +		native_unlock_hpte(hptep);
> +		local_irq_restore(flags);
> +		return;
> +	}
>  	/* Even if we miss, we need to invalidate the TLB */
> -	if (!HPTE_V_COMPARE(hpte_v, want_v) || !(hpte_v & HPTE_V_VALID))
> +	if (!HPTE_V_COMPARE(hpte_v, want_v))
>  		native_unlock_hpte(hptep);
>  	else
>  		/* Invalidate the hpte. NOTE: this also unlocks it */
>  		hptep->v = 0;
>  
>  	/* Invalidate the TLB */
> -	tlbie(vpn, psize, ssize, local);
> +	tlbie(vpn, psize, actual_psize, ssize, local);
>  
>  	local_irq_restore(flags);
>  }
>  
> -#define LP_SHIFT	12
> -#define LP_BITS		8
> -#define LP_MASK(i)	((0xFF >> (i)) << LP_SHIFT)
> -
>  static void hpte_decode(struct hash_pte *hpte, unsigned long slot,
> -			int *psize, int *ssize, unsigned long *vpn)
> +			int *psize, int *apsize, int *ssize, unsigned long *vpn)
>  {
>  	unsigned long avpn, pteg, vpi;
>  	unsigned long hpte_r = hpte->r;
>  	unsigned long hpte_v = hpte->v;
>  	unsigned long vsid, seg_off;
> -	int i, size, shift, penc;
> +	int i, size, a_size, shift, penc;
>  
> -	if (!(hpte_v & HPTE_V_LARGE))
> -		size = MMU_PAGE_4K;
> -	else {
> +	if (!(hpte_v & HPTE_V_LARGE)) {
> +		size   = MMU_PAGE_4K;
> +		a_size = MMU_PAGE_4K;
> +	} else {
>  		for (i = 0; i < LP_BITS; i++) {
>  			if ((hpte_r & LP_MASK(i+1)) == LP_MASK(i+1))
>  				break;
> @@ -388,19 +444,26 @@ static void hpte_decode(struct hash_pte *hpte, unsigned long slot,
>  		penc = LP_MASK(i+1) >> LP_SHIFT;
>  		for (size = 0; size < MMU_PAGE_COUNT; size++) {

>  
> -			/* 4K pages are not represented by LP */
> -			if (size == MMU_PAGE_4K)
> -				continue;
> -
>  			/* valid entries have a shift value */
>  			if (!mmu_psize_defs[size].shift)
>  				continue;
> +			for (a_size = 0; a_size < MMU_PAGE_COUNT; a_size++) {

Can't you resize hpte_actual_psize() here instead of recoding the lookup?

> -			if (penc == mmu_psize_defs[size].penc)
> -				break;
> +				/* 4K pages are not represented by LP */
> +				if (a_size == MMU_PAGE_4K)
> +					continue;
> +
> +				/* valid entries have a shift value */
> +				if (!mmu_psize_defs[a_size].shift)
> +					continue;
> +
> +				if (penc == mmu_psize_defs[size].penc[a_size])
> +					goto out;
> +			}
>  		}
>  	}
>  
> +out:
>  	/* This works for all page sizes, and for 256M and 1T segments */
>  	*ssize = hpte_v >> HPTE_V_SSIZE_SHIFT;
>  	shift = mmu_psize_defs[size].shift;
> @@ -433,7 +496,8 @@ static void hpte_decode(struct hash_pte *hpte, unsigned long slot,
>  	default:
>  		*vpn = size = 0;
>  	}
> -	*psize = size;
> +	*psize  = size;
> +	*apsize = a_size;
>  }
>  
>  /*
> @@ -451,7 +515,7 @@ static void native_hpte_clear(void)
>  	struct hash_pte *hptep = htab_address;
>  	unsigned long hpte_v;
>  	unsigned long pteg_count;
> -	int psize, ssize;
> +	int psize, apsize, ssize;
>  
>  	pteg_count = htab_hash_mask + 1;
>  
> @@ -477,9 +541,9 @@ static void native_hpte_clear(void)
>  		 * already hold the native_tlbie_lock.
>  		 */
>  		if (hpte_v & HPTE_V_VALID) {
> -			hpte_decode(hptep, slot, &psize, &ssize, &vpn);
> +			hpte_decode(hptep, slot, &psize, &apsize, &ssize, &vpn);
>  			hptep->v = 0;
> -			__tlbie(vpn, psize, ssize);
> +			__tlbie(vpn, psize, apsize, ssize);
>  		}
>  	}
>  
> @@ -540,7 +604,7 @@ static void native_flush_hash_range(unsigned long number, int local)
>  
>  			pte_iterate_hashed_subpages(pte, psize,
>  						    vpn, index, shift) {
> -				__tlbiel(vpn, psize, ssize);
> +				__tlbiel(vpn, psize, psize, ssize);
>  			} pte_iterate_hashed_end();
>  		}
>  		asm volatile("ptesync":::"memory");
> @@ -557,7 +621,7 @@ static void native_flush_hash_range(unsigned long number, int local)
>  
>  			pte_iterate_hashed_subpages(pte, psize,
>  						    vpn, index, shift) {
> -				__tlbie(vpn, psize, ssize);
> +				__tlbie(vpn, psize, psize, ssize);
>  			} pte_iterate_hashed_end();
>  		}
>  		asm volatile("eieio; tlbsync; ptesync":::"memory");
> diff --git a/arch/powerpc/mm/hash_utils_64.c b/arch/powerpc/mm/hash_utils_64.c
> index bfeab83..a5a5067 100644
> --- a/arch/powerpc/mm/hash_utils_64.c
> +++ b/arch/powerpc/mm/hash_utils_64.c
> @@ -125,7 +125,7 @@ static struct mmu_psize_def mmu_psize_defaults_old[] = {
>  	[MMU_PAGE_4K] = {
>  		.shift	= 12,
>  		.sllp	= 0,
> -		.penc	= 0,
> +		.penc   = {[MMU_PAGE_4K] = 0, [1 ... MMU_PAGE_COUNT - 1] = -1},
>  		.avpnm	= 0,
>  		.tlbiel = 0,
>  	},
> @@ -139,14 +139,15 @@ static struct mmu_psize_def mmu_psize_defaults_gp[] = {
>  	[MMU_PAGE_4K] = {
>  		.shift	= 12,
>  		.sllp	= 0,
> -		.penc	= 0,
> +		.penc   = {[MMU_PAGE_4K] = 0, [1 ... MMU_PAGE_COUNT - 1] = -1},
>  		.avpnm	= 0,
>  		.tlbiel = 1,
>  	},
>  	[MMU_PAGE_16M] = {
>  		.shift	= 24,
>  		.sllp	= SLB_VSID_L,
> -		.penc	= 0,
> +		.penc   = {[0 ... MMU_PAGE_16M - 1] = -1, [MMU_PAGE_16M] = 0,
> +			    [MMU_PAGE_16M + 1 ... MMU_PAGE_COUNT - 1] = -1 },
>  		.avpnm	= 0x1UL,
>  		.tlbiel = 0,
>  	},
> @@ -208,7 +209,7 @@ int htab_bolt_mapping(unsigned long vstart, unsigned long vend,
>  
>  		BUG_ON(!ppc_md.hpte_insert);
>  		ret = ppc_md.hpte_insert(hpteg, vpn, paddr, tprot,
> -					 HPTE_V_BOLTED, psize, ssize);
> +					 HPTE_V_BOLTED, psize, psize, ssize);
>  
>  		if (ret < 0)
>  			break;
> @@ -275,6 +276,30 @@ static void __init htab_init_seg_sizes(void)
>  	of_scan_flat_dt(htab_dt_scan_seg_sizes, NULL);
>  }
>  
> +static int __init get_idx_from_shift(unsigned int shift)
> +{
> +	int idx = -1;
> +
> +	switch (shift) {
> +	case 0xc:
> +		idx = MMU_PAGE_4K;
> +		break;
> +	case 0x10:
> +		idx = MMU_PAGE_64K;
> +		break;
> +	case 0x14:
> +		idx = MMU_PAGE_1M;
> +		break;
> +	case 0x18:
> +		idx = MMU_PAGE_16M;
> +		break;
> +	case 0x22:
> +		idx = MMU_PAGE_16G;
> +		break;
> +	}
> +	return idx;
> +}
> +
>  static int __init htab_dt_scan_page_sizes(unsigned long node,
>  					  const char *uname, int depth,
>  					  void *data)
> @@ -294,60 +319,61 @@ static int __init htab_dt_scan_page_sizes(unsigned long node,
>  		size /= 4;
>  		cur_cpu_spec->mmu_features &= ~(MMU_FTR_16M_PAGE);
>  		while(size > 0) {
> -			unsigned int shift = prop[0];
> +			unsigned int base_shift = prop[0];
>  			unsigned int slbenc = prop[1];
>  			unsigned int lpnum = prop[2];
> -			unsigned int lpenc = 0;
>  			struct mmu_psize_def *def;
> -			int idx = -1;
> +			int idx, base_idx;
>  
>  			size -= 3; prop += 3;
> -			while(size > 0 && lpnum) {
> -				if (prop[0] == shift)
> -					lpenc = prop[1];
> -				prop += 2; size -= 2;
> -				lpnum--;
> +			base_idx = get_idx_from_shift(base_shift);
> +			if (base_idx < 0) {
> +				/*
> +				 * skip the pte encoding also
> +				 */
> +				prop += lpnum * 2; size -= lpnum * 2;
> +				continue;
>  			}
> -			switch(shift) {
> -			case 0xc:
> -				idx = MMU_PAGE_4K;
> -				break;
> -			case 0x10:
> -				idx = MMU_PAGE_64K;
> -				break;
> -			case 0x14:
> -				idx = MMU_PAGE_1M;
> -				break;
> -			case 0x18:
> -				idx = MMU_PAGE_16M;
> +			def = &mmu_psize_defs[base_idx];
> +			if (base_idx == MMU_PAGE_16M)
>  				cur_cpu_spec->mmu_features |= MMU_FTR_16M_PAGE;
> -				break;
> -			case 0x22:
> -				idx = MMU_PAGE_16G;
> -				break;
> -			}
> -			if (idx < 0)
> -				continue;
> -			def = &mmu_psize_defs[idx];
> -			def->shift = shift;
> -			if (shift <= 23)
> +
> +			def->shift = base_shift;
> +			if (base_shift <= 23)
>  				def->avpnm = 0;
>  			else
> -				def->avpnm = (1 << (shift - 23)) - 1;
> +				def->avpnm = (1 << (base_shift - 23)) - 1;
>  			def->sllp = slbenc;
> -			def->penc = lpenc;
> -			/* We don't know for sure what's up with tlbiel, so
> +			/*
> +			 * We don't know for sure what's up with tlbiel, so
>  			 * for now we only set it for 4K and 64K pages
>  			 */
> -			if (idx == MMU_PAGE_4K || idx == MMU_PAGE_64K)
> +			if (base_idx == MMU_PAGE_4K || base_idx == MMU_PAGE_64K)
>  				def->tlbiel = 1;
>  			else
>  				def->tlbiel = 0;
>  
> -			DBG(" %d: shift=%02x, sllp=%04lx, avpnm=%08lx, "
> -			    "tlbiel=%d, penc=%d\n",
> -			    idx, shift, def->sllp, def->avpnm, def->tlbiel,
> -			    def->penc);
> +			while (size > 0 && lpnum) {
> +				unsigned int shift = prop[0];
> +				int penc  = prop[1];
> +
> +				prop += 2; size -= 2;
> +				lpnum--;
> +
> +				idx = get_idx_from_shift(shift);
> +				if (idx < 0)
> +					continue;
> +
> +				if (penc == -1)
> +					pr_err("Invalid penc for base_shift=%d "
> +					       "shift=%d\n", base_shift, shift);
> +
> +				def->penc[idx] = penc;
> +				DBG(" %d: shift=%02x, sllp=%04lx, "
> +				    "avpnm=%08lx, tlbiel=%d, penc=%d\n",
> +				    idx, shift, def->sllp, def->avpnm,
> +				    def->tlbiel, def->penc[idx]);
> +			}
>  		}
>  		return 1;
>  	}
> @@ -396,10 +422,21 @@ static int __init htab_dt_scan_hugepage_blocks(unsigned long node,
>  }
>  #endif /* CONFIG_HUGETLB_PAGE */
>  
> +static void mmu_psize_set_default_penc(void)
> +{
> +	int bpsize, apsize;
> +	for (bpsize = 0; bpsize < MMU_PAGE_COUNT; bpsize++)
> +		for (apsize = 0; apsize < MMU_PAGE_COUNT; apsize++)
> +			mmu_psize_defs[bpsize].penc[apsize] = -1;
> +}
> +
>  static void __init htab_init_page_sizes(void)
>  {
>  	int rc;
>  
> +	/* se the invalid penc to -1 */
> +	mmu_psize_set_default_penc();
> +
>  	/* Default to 4K pages only */
>  	memcpy(mmu_psize_defs, mmu_psize_defaults_old,
>  	       sizeof(mmu_psize_defaults_old));
> diff --git a/arch/powerpc/mm/hugetlbpage-hash64.c b/arch/powerpc/mm/hugetlbpage-hash64.c
> index cecad34..e0d52ee 100644
> --- a/arch/powerpc/mm/hugetlbpage-hash64.c
> +++ b/arch/powerpc/mm/hugetlbpage-hash64.c
> @@ -103,7 +103,7 @@ repeat:
>  
>  		/* Insert into the hash table, primary slot */
>  		slot = ppc_md.hpte_insert(hpte_group, vpn, pa, rflags, 0,
> -					  mmu_psize, ssize);
> +					  mmu_psize, mmu_psize, ssize);
>  
>  		/* Primary is full, try the secondary */
>  		if (unlikely(slot == -1)) {
> @@ -111,7 +111,7 @@ repeat:
>  				      HPTES_PER_GROUP) & ~0x7UL;
>  			slot = ppc_md.hpte_insert(hpte_group, vpn, pa, rflags,
>  						  HPTE_V_SECONDARY,
> -						  mmu_psize, ssize);
> +						  mmu_psize, mmu_psize, ssize);
>  			if (slot == -1) {
>  				if (mftb() & 0x1)
>  					hpte_group = ((hash & htab_hash_mask) *
> diff --git a/arch/powerpc/platforms/cell/beat_htab.c b/arch/powerpc/platforms/cell/beat_htab.c
> index 472f9a7..246e1d8 100644
> --- a/arch/powerpc/platforms/cell/beat_htab.c
> +++ b/arch/powerpc/platforms/cell/beat_htab.c
> @@ -90,7 +90,7 @@ static inline unsigned int beat_read_mask(unsigned hpte_group)
>  static long beat_lpar_hpte_insert(unsigned long hpte_group,
>  				  unsigned long vpn, unsigned long pa,
>  				  unsigned long rflags, unsigned long vflags,
> -				  int psize, int ssize)
> +				  int psize, int apsize, int ssize)
>  {
>  	unsigned long lpar_rc;
>  	u64 hpte_v, hpte_r, slot;
> @@ -103,9 +103,9 @@ static long beat_lpar_hpte_insert(unsigned long hpte_group,
>  			"rflags=%lx, vflags=%lx, psize=%d)\n",
>  		hpte_group, va, pa, rflags, vflags, psize);
>  
> -	hpte_v = hpte_encode_v(vpn, psize, MMU_SEGSIZE_256M) |
> +	hpte_v = hpte_encode_v(vpn, psize, apsize, MMU_SEGSIZE_256M) |
>  		vflags | HPTE_V_VALID;
> -	hpte_r = hpte_encode_r(pa, psize) | rflags;
> +	hpte_r = hpte_encode_r(pa, psize, apsize) | rflags;
>  
>  	if (!(vflags & HPTE_V_BOLTED))
>  		DBG_LOW(" hpte_v=%016lx, hpte_r=%016lx\n", hpte_v, hpte_r);
> @@ -314,7 +314,7 @@ void __init hpte_init_beat(void)
>  static long beat_lpar_hpte_insert_v3(unsigned long hpte_group,
>  				  unsigned long vpn, unsigned long pa,
>  				  unsigned long rflags, unsigned long vflags,
> -				  int psize, int ssize)
> +				  int psize, int apsize, int ssize)
>  {
>  	unsigned long lpar_rc;
>  	u64 hpte_v, hpte_r, slot;
> @@ -327,9 +327,9 @@ static long beat_lpar_hpte_insert_v3(unsigned long hpte_group,
>  			"rflags=%lx, vflags=%lx, psize=%d)\n",
>  		hpte_group, vpn, pa, rflags, vflags, psize);
>  
> -	hpte_v = hpte_encode_v(vpn, psize, MMU_SEGSIZE_256M) |
> +	hpte_v = hpte_encode_v(vpn, psize, apsize, MMU_SEGSIZE_256M) |
>  		vflags | HPTE_V_VALID;
> -	hpte_r = hpte_encode_r(pa, psize) | rflags;
> +	hpte_r = hpte_encode_r(pa, psize, apsize) | rflags;
>  
>  	if (!(vflags & HPTE_V_BOLTED))
>  		DBG_LOW(" hpte_v=%016lx, hpte_r=%016lx\n", hpte_v, hpte_r);
> @@ -373,7 +373,7 @@ static long beat_lpar_hpte_updatepp_v3(unsigned long slot,
>  	unsigned long pss;
>  
>  	want_v = hpte_encode_avpn(vpn, psize, MMU_SEGSIZE_256M);
> -	pss = (psize == MMU_PAGE_4K) ? -1UL : mmu_psize_defs[psize].penc;
> +	pss = (psize == MMU_PAGE_4K) ? -1UL : mmu_psize_defs[psize].penc[psize];
>  
>  	DBG_LOW("    update: "
>  		"avpnv=%016lx, slot=%016lx, psize: %d, newpp %016lx ... ",
> @@ -403,7 +403,7 @@ static void beat_lpar_hpte_invalidate_v3(unsigned long slot, unsigned long vpn,
>  	DBG_LOW("    inval : slot=%lx, vpn=%016lx, psize: %d, local: %d\n",
>  		slot, vpn, psize, local);
>  	want_v = hpte_encode_avpn(vpn, psize, MMU_SEGSIZE_256M);
> -	pss = (psize == MMU_PAGE_4K) ? -1UL : mmu_psize_defs[psize].penc;
> +	pss = (psize == MMU_PAGE_4K) ? -1UL : mmu_psize_defs[psize].penc[psize];
>  
>  	lpar_rc = beat_invalidate_htab_entry3(0, slot, want_v, pss);
>  
> diff --git a/arch/powerpc/platforms/ps3/htab.c b/arch/powerpc/platforms/ps3/htab.c
> index 07a4bba..44f06d2 100644
> --- a/arch/powerpc/platforms/ps3/htab.c
> +++ b/arch/powerpc/platforms/ps3/htab.c
> @@ -45,7 +45,7 @@ static DEFINE_SPINLOCK(ps3_htab_lock);
>  
>  static long ps3_hpte_insert(unsigned long hpte_group, unsigned long vpn,
>  	unsigned long pa, unsigned long rflags, unsigned long vflags,
> -	int psize, int ssize)
> +	int psize, int apsize, int ssize)
>  {
>  	int result;
>  	u64 hpte_v, hpte_r;
> @@ -61,8 +61,8 @@ static long ps3_hpte_insert(unsigned long hpte_group, unsigned long vpn,
>  	 */
>  	vflags &= ~HPTE_V_SECONDARY;
>  
> -	hpte_v = hpte_encode_v(vpn, psize, ssize) | vflags | HPTE_V_VALID;
> -	hpte_r = hpte_encode_r(ps3_mm_phys_to_lpar(pa), psize) | rflags;
> +	hpte_v = hpte_encode_v(vpn, psize, apsize, ssize) | vflags | HPTE_V_VALID;
> +	hpte_r = hpte_encode_r(ps3_mm_phys_to_lpar(pa), psize, apsize) | rflags;
>  
>  	spin_lock_irqsave(&ps3_htab_lock, flags);
>  
> diff --git a/arch/powerpc/platforms/pseries/lpar.c b/arch/powerpc/platforms/pseries/lpar.c
> index a77c35b..3daced3 100644
> --- a/arch/powerpc/platforms/pseries/lpar.c
> +++ b/arch/powerpc/platforms/pseries/lpar.c
> @@ -109,7 +109,7 @@ void vpa_init(int cpu)
>  static long pSeries_lpar_hpte_insert(unsigned long hpte_group,
>  				     unsigned long vpn, unsigned long pa,
>  				     unsigned long rflags, unsigned long vflags,
> -				     int psize, int ssize)
> +				     int psize, int apsize, int ssize)
>  {
>  	unsigned long lpar_rc;
>  	unsigned long flags;
> @@ -121,8 +121,8 @@ static long pSeries_lpar_hpte_insert(unsigned long hpte_group,
>  			 "pa=%016lx, rflags=%lx, vflags=%lx, psize=%d)\n",
>  			 hpte_group, vpn,  pa, rflags, vflags, psize);
>  
> -	hpte_v = hpte_encode_v(vpn, psize, ssize) | vflags | HPTE_V_VALID;
> -	hpte_r = hpte_encode_r(pa, psize) | rflags;
> +	hpte_v = hpte_encode_v(vpn, psize, apsize, ssize) | vflags | HPTE_V_VALID;
> +	hpte_r = hpte_encode_r(pa, psize, apsize) | rflags;
>  
>  	if (!(vflags & HPTE_V_BOLTED))
>  		pr_devel(" hpte_v=%016lx, hpte_r=%016lx\n", hpte_v, hpte_r);

-- 
David Gibson			| I'll have my music baroque, and my code
david AT gibson.dropbear.id.au	| minimalist, thank you.  NOT _the_ _other_
				| _way_ _around_!
http://www.ozlabs.org/~dgibson

[-- Attachment #2: Digital signature --]
[-- Type: application/pgp-signature, Size: 198 bytes --]

^ permalink raw reply	[flat|nested] 73+ messages in thread

* Re: [PATCH -V5 19/25] powerpc/THP: Differentiate THP PMD entries from HUGETLB PMD entries
  2013-04-04  5:57 ` [PATCH -V5 19/25] powerpc/THP: Differentiate THP PMD entries from HUGETLB PMD entries Aneesh Kumar K.V
@ 2013-04-10  7:21   ` Michael Ellerman
  2013-04-10 18:26     ` Aneesh Kumar K.V
  2013-04-12  1:28   ` David Gibson
  1 sibling, 1 reply; 73+ messages in thread
From: Michael Ellerman @ 2013-04-10  7:21 UTC (permalink / raw)
  To: Aneesh Kumar K.V; +Cc: paulus, linuxppc-dev, linux-mm

On Thu, Apr 04, 2013 at 11:27:57AM +0530, Aneesh Kumar K.V wrote:
> From: "Aneesh Kumar K.V" <aneesh.kumar@linux.vnet.ibm.com>
> 
> HUGETLB clear the top bit of PMD entries and use that to indicate
> a HUGETLB page directory. Since we store pfns in PMDs for THP,
> we would have the top bit cleared by default. Add the top bit mask
> for THP PMD entries and clear that when we are looking for pmd_pfn.
> 
> @@ -44,6 +44,14 @@ struct mm_struct;
>  #define PMD_HUGE_RPN_SHIFT	PTE_RPN_SHIFT
>  #define HUGE_PAGE_SIZE		(ASM_CONST(1) << 24)
>  #define HUGE_PAGE_MASK		(~(HUGE_PAGE_SIZE - 1))
> +/*
> + * HugeTLB looks at the top bit of the Linux page table entries to
> + * decide whether it is a huge page directory or not. Mark HUGE
> + * PMD to differentiate
> + */
> +#define PMD_HUGE_NOT_HUGETLB	(ASM_CONST(1) << 63)
> +#define PMD_ISHUGE		(_PMD_ISHUGE | PMD_HUGE_NOT_HUGETLB)
> +#define PMD_HUGE_PROTBITS	(0xfff | PMD_HUGE_NOT_HUGETLB)
>  
>  #ifndef __ASSEMBLY__
>  extern void hpte_need_hugepage_flush(struct mm_struct *mm, unsigned long addr,
> @@ -84,7 +93,8 @@ static inline unsigned long pmd_pfn(pmd_t pmd)
>  	/*
>  	 * Only called for hugepage pmd
>  	 */
> -	return pmd_val(pmd) >> PMD_HUGE_RPN_SHIFT;
> +	unsigned long val = pmd_val(pmd) & ~PMD_HUGE_PROTBITS;
> +	return val  >> PMD_HUGE_RPN_SHIFT;
>  }

This is breaking the 32-bit build for me (pmac32_defconfig):

arch/powerpc/include/asm/pgtable.h:123:2: error: left shift count >= width of type [-Werror]

cheers

^ permalink raw reply	[flat|nested] 73+ messages in thread

* Re: [PATCH -V5 06/25] powerpc: Reduce PTE table memory wastage
  2013-04-10  7:04       ` David Gibson
@ 2013-04-10  7:53         ` Aneesh Kumar K.V
  2013-04-10 17:47           ` Aneesh Kumar K.V
  2013-04-11  1:12           ` David Gibson
  0 siblings, 2 replies; 73+ messages in thread
From: Aneesh Kumar K.V @ 2013-04-10  7:53 UTC (permalink / raw)
  To: David Gibson; +Cc: paulus, linuxppc-dev, linux-mm

David Gibson <dwg@au1.ibm.com> writes:

> On Wed, Apr 10, 2013 at 11:59:29AM +0530, Aneesh Kumar K.V wrote:
>> David Gibson <dwg@au1.ibm.com> writes:
>> > On Thu, Apr 04, 2013 at 11:27:44AM +0530, Aneesh Kumar K.V wrote:
> [snip]
>> >> @@ -97,13 +100,45 @@ void __destroy_context(int context_id)
>> >>  }
>> >>  EXPORT_SYMBOL_GPL(__destroy_context);
>> >>  
>> >> +#ifdef CONFIG_PPC_64K_PAGES
>> >> +static void destroy_pagetable_page(struct mm_struct *mm)
>> >> +{
>> >> +	int count;
>> >> +	struct page *page;
>> >> +
>> >> +	page = mm->context.pgtable_page;
>> >> +	if (!page)
>> >> +		return;
>> >> +
>> >> +	/* drop all the pending references */
>> >> +	count = atomic_read(&page->_mapcount) + 1;
>> >> +	/* We allow PTE_FRAG_NR(16) fragments from a PTE page */
>> >> +	count = atomic_sub_return(16 - count, &page->_count);
>> >
>> > You should really move PTE_FRAG_NR to a header so you can actually use
>> > it here rather than hard coding 16.
>> >
>> > It took me a fair while to convince myself that there is no race here
>> > with something altering mapcount and count between the atomic_read()
>> > and the atomic_sub_return().  It could do with a comment to explain
>> > why that is safe.
>> >
>> > Re-using the mapcount field for your index also seems odd, and it took
>> > me a while to convince myself that that's safe too.  Wouldn't it be
>> > simpler to store a pointer to the next sub-page in the mm_context
>> > instead? You can get from that to the struct page easily enough with a
>> > shift and pfn_to_page().
>> 
>> I found using _mapcount simpler in this case. I was looking at it not
>> as an index, but rather how may fragments are mapped/used already.
>
> Except that it's actually (#fragments - 1).  Using subpage pointer
> makes the fragments calculation (very slightly) harder, but the
> calculation of the table address easier.  More importantly it avoids
> adding effectively an extra variable - which is then shoehorned into a
> structure not really designed to hold it.

Even with subpage pointer we would need mm->context.pgtable_page or
something similar. We don't add any other extra variable right ?. Let me
try what you are suggesting here and see if that make it simpler.


>> Using
>> subpage pointer in mm->context.xyz means, we have to calculate the
>> number of fragments used/mapped via the pointer. We need the fragment
>> count so that we can drop page reference count correctly here.
>> 
>> 
>> >
>> >> +	if (!count) {
>> >> +		pgtable_page_dtor(page);
>> >> +		reset_page_mapcount(page);
>> >> +		free_hot_cold_page(page, 0);
>> >
>> > It would be nice to use put_page() somehow instead of duplicating its
>> > logic, though I realise the sparc code you've based this on does the
>> > same thing.
>> 
>> That is not exactly put_page. We can avoid lots of check in this
>> specific case.
>
> [snip]
>> >> +static pte_t *__alloc_for_cache(struct mm_struct *mm, int kernel)
>> >> +{
>> >> +	pte_t *ret = NULL;
>> >> +	struct page *page = alloc_page(GFP_KERNEL | __GFP_NOTRACK |
>> >> +				       __GFP_REPEAT | __GFP_ZERO);
>> >> +	if (!page)
>> >> +		return NULL;
>> >> +
>> >> +	spin_lock(&mm->page_table_lock);
>> >> +	/*
>> >> +	 * If we find pgtable_page set, we return
>> >> +	 * the allocated page with single fragement
>> >> +	 * count.
>> >> +	 */
>> >> +	if (likely(!mm->context.pgtable_page)) {
>> >> +		atomic_set(&page->_count, PTE_FRAG_NR);
>> >> +		atomic_set(&page->_mapcount, 0);
>> >> +		mm->context.pgtable_page = page;
>> >> +	}
>> >
>> > .. and in the unlikely case where there *is* a pgtable_page already
>> > set, what then?  Seems like you should BUG_ON, or at least return NULL
>> > - as it is you will return the first sub-page of that page again,
>> > which is very likely in use.
>> 
>> 
>> As explained in the comment above, we return with the allocated page
>> with fragment count set to 1. So we end up having only one fragment. The
>> other option I had was to to free the allocated page and do a
>> get_from_cache under the page_table_lock. But since we already allocated
>> the page, why not use that ?. It also keep the code similar to
>> sparc.
>
> My point is that I can't see any circumstance under which we should
> ever hit this case.  Which means if we do something is badly messed up
> and we should BUG() (or at least WARN()).

A multi threaded test would easily hit that. stream is the test I used.

-aneesh

^ permalink raw reply	[flat|nested] 73+ messages in thread

* Re: [PATCH -V5 06/25] powerpc: Reduce PTE table memory wastage
  2013-04-10  7:14   ` Michael Ellerman
@ 2013-04-10  7:54     ` Aneesh Kumar K.V
  2013-04-10  8:52       ` Aneesh Kumar K.V
  0 siblings, 1 reply; 73+ messages in thread
From: Aneesh Kumar K.V @ 2013-04-10  7:54 UTC (permalink / raw)
  To: Michael Ellerman; +Cc: paulus, linuxppc-dev, linux-mm

Michael Ellerman <michael@ellerman.id.au> writes:

> On Thu, Apr 04, 2013 at 11:27:44AM +0530, Aneesh Kumar K.V wrote:
>> From: "Aneesh Kumar K.V" <aneesh.kumar@linux.vnet.ibm.com>
>> 
>> We allocate one page for the last level of linux page table. With THP and
>> large page size of 16MB, that would mean we are wasting large part
>> of that page. To map 16MB area, we only need a PTE space of 2K with 64K
>> page size. This patch reduce the space wastage by sharing the page
>> allocated for the last level of linux page table with multiple pmd
>> entries. We call these smaller chunks PTE page fragments and allocated
>> page, PTE page.
>
> This is not compiling for me:
>
> arch/powerpc/mm/mmu_context_hash64.c:118:3: error: implicit declaration of function 'reset_page_mapcount'
>

can you share the .config ? I have the git tree at 

git://github.com/kvaneesh/linux.git ppc64-thp-7

-aneesh

^ permalink raw reply	[flat|nested] 73+ messages in thread

* Re: [PATCH -V5 08/25] powerpc: Decode the pte-lp-encoding bits correctly.
  2013-04-10  7:19   ` David Gibson
@ 2013-04-10  8:11     ` Aneesh Kumar K.V
  2013-04-10 17:49       ` Aneesh Kumar K.V
  2013-04-11  1:28       ` David Gibson
  0 siblings, 2 replies; 73+ messages in thread
From: Aneesh Kumar K.V @ 2013-04-10  8:11 UTC (permalink / raw)
  To: David Gibson; +Cc: paulus, linuxppc-dev, linux-mm

David Gibson <dwg@au1.ibm.com> writes:

> On Thu, Apr 04, 2013 at 11:27:46AM +0530, Aneesh Kumar K.V wrote:
>> From: "Aneesh Kumar K.V" <aneesh.kumar@linux.vnet.ibm.com>
>> 
>> We look at both the segment base page size and actual page size and store
>> the pte-lp-encodings in an array per base page size.
>> 
>> We also update all relevant functions to take actual page size argument
>> so that we can use the correct PTE LP encoding in HPTE. This should also
>> get the basic Multiple Page Size per Segment (MPSS) support. This is needed
>> to enable THP on ppc64.
>> 

....

>> +static inline int hpte_actual_psize(struct hash_pte *hptep, int psize)
>> +{
>> +	int i, shift;
>> +	unsigned int mask;
>> +	/* Look at the 8 bit LP value */
>> +	unsigned int lp = (hptep->r >> LP_SHIFT) & ((1 << LP_BITS) - 1);
>> +
>> +	if (!(hptep->v & HPTE_V_VALID))
>> +		return -1;
>
> Folding the validity check into the size check seems confusing to me.

We do end up with invalid hpte with which we call
hpte_actual_psize. So that check is needed. I can either move to caller,
but then i will have to replicate it in all the call sites.


>
>> +	/* First check if it is large page */
>> +	if (!(hptep->v & HPTE_V_LARGE))
>> +		return MMU_PAGE_4K;
>> +
>> +	/* start from 1 ignoring MMU_PAGE_4K */
>> +	for (i = 1; i < MMU_PAGE_COUNT; i++) {
>> +		/* valid entries have a shift value */
>> +		if (!mmu_psize_defs[i].shift)
>> +			continue;
>
> Isn't this check redundant with the one below?

Yes. I guess we can safely assume that if penc is valid then we do
support that specific large page.

I will drop this and keep the penc check. That is more correct check

>
>> +		/* invalid penc */
>> +		if (mmu_psize_defs[psize].penc[i] == -1)
>> +			continue;
>> +		/*
>> +		 * encoding bits per actual page size
>> +		 *        PTE LP     actual page size
>> +		 *    rrrr rrrz		>=8KB
>> +		 *    rrrr rrzz		>=16KB
>> +		 *    rrrr rzzz		>=32KB
>> +		 *    rrrr zzzz		>=64KB
>> +		 * .......
>> +		 */
>> +		shift = mmu_psize_defs[i].shift - LP_SHIFT;
>> +		if (shift > LP_BITS)
>> +			shift = LP_BITS;
>> +		mask = (1 << shift) - 1;
>> +		if ((lp & mask) == mmu_psize_defs[psize].penc[i])
>> +			return i;
>> +	}
>
> Shouldn't we have a BUG() or something here.  If we get here we've
> somehow created a PTE with LP bits we can't interpret, yes?
>

I don't know. Is BUG() the right thing to do ? 


>> +	return -1;
>> +}
>> +
>>  static long native_hpte_updatepp(unsigned long slot, unsigned long newpp,
>>  				 unsigned long vpn, int psize, int ssize,
>>  				 int local)
>> @@ -251,6 +294,7 @@ static long native_hpte_updatepp(unsigned long slot, unsigned long newpp,
>>  	struct hash_pte *hptep = htab_address + slot;
>>  	unsigned long hpte_v, want_v;
>>  	int ret = 0;
>> +	int actual_psize;
>>  
>>  	want_v = hpte_encode_avpn(vpn, psize, ssize);
>>  
>> @@ -260,9 +304,13 @@ static long native_hpte_updatepp(unsigned long slot, unsigned long newpp,
>>  	native_lock_hpte(hptep);
>>  
>>  	hpte_v = hptep->v;
>> -
>> +	actual_psize = hpte_actual_psize(hptep, psize);
>> +	if (actual_psize < 0) {
>> +		native_unlock_hpte(hptep);
>> +		return -1;
>> +	}
>
> Wouldn't it make more sense to only do the psize lookup once you've
> found a matching hpte?

But we need to do psize lookup even if V_COMPARE fail, because we want
to do tlbie in both the case.

>
>>  	/* Even if we miss, we need to invalidate the TLB */
>> -	if (!HPTE_V_COMPARE(hpte_v, want_v) || !(hpte_v & HPTE_V_VALID)) {
>> +	if (!HPTE_V_COMPARE(hpte_v, want_v)) {
>>  		DBG_LOW(" -> miss\n");
>>  		ret = -1;
>>  	} else {
>> @@ -274,7 +322,7 @@ static long native_hpte_updatepp(unsigned long slot, unsigned long newpp,
>>  	native_unlock_hpte(hptep);
>>  
>>  	/* Ensure it is out of the tlb too. */
>> -	tlbie(vpn, psize, ssize, local);
>> +	tlbie(vpn, psize, actual_psize, ssize, local);
>>  
>>  	return ret;
>>  }
>> @@ -315,6 +363,7 @@ static long native_hpte_find(unsigned long vpn, int psize, int ssize)
>>  static void native_hpte_updateboltedpp(unsigned long newpp, unsigned long ea,
>>  				       int psize, int ssize)
>>  {
>> +	int actual_psize;
>>  	unsigned long vpn;
>>  	unsigned long vsid;
>>  	long slot;
>> @@ -327,13 +376,16 @@ static void native_hpte_updateboltedpp(unsigned long newpp, unsigned long ea,
>>  	if (slot == -1)
>>  		panic("could not find page to bolt\n");
>>  	hptep = htab_address + slot;
>> +	actual_psize = hpte_actual_psize(hptep, psize);
>> +	if (actual_psize < 0)
>> +		return;
>>  
>>  	/* Update the HPTE */
>>  	hptep->r = (hptep->r & ~(HPTE_R_PP | HPTE_R_N)) |
>>  		(newpp & (HPTE_R_PP | HPTE_R_N));
>>  
>>  	/* Ensure it is out of the tlb too. */
>> -	tlbie(vpn, psize, ssize, 0);
>> +	tlbie(vpn, psize, actual_psize, ssize, 0);
>>  }
>>  
>>  static void native_hpte_invalidate(unsigned long slot, unsigned long vpn,
>> @@ -343,6 +395,7 @@ static void native_hpte_invalidate(unsigned long slot, unsigned long vpn,
>>  	unsigned long hpte_v;
>>  	unsigned long want_v;
>>  	unsigned long flags;
>> +	int actual_psize;
>>  
>>  	local_irq_save(flags);
>>  
>> @@ -352,35 +405,38 @@ static void native_hpte_invalidate(unsigned long slot, unsigned long vpn,
>>  	native_lock_hpte(hptep);
>>  	hpte_v = hptep->v;
>>  
>> +	actual_psize = hpte_actual_psize(hptep, psize);
>> +	if (actual_psize < 0) {
>> +		native_unlock_hpte(hptep);
>> +		local_irq_restore(flags);
>> +		return;
>> +	}
>>  	/* Even if we miss, we need to invalidate the TLB */
>> -	if (!HPTE_V_COMPARE(hpte_v, want_v) || !(hpte_v & HPTE_V_VALID))
>> +	if (!HPTE_V_COMPARE(hpte_v, want_v))
>>  		native_unlock_hpte(hptep);
>>  	else
>>  		/* Invalidate the hpte. NOTE: this also unlocks it */
>>  		hptep->v = 0;
>>  
>>  	/* Invalidate the TLB */
>> -	tlbie(vpn, psize, ssize, local);
>> +	tlbie(vpn, psize, actual_psize, ssize, local);
>>  
>>  	local_irq_restore(flags);
>>  }
>>  
>> -#define LP_SHIFT	12
>> -#define LP_BITS		8
>> -#define LP_MASK(i)	((0xFF >> (i)) << LP_SHIFT)
>> -
>>  static void hpte_decode(struct hash_pte *hpte, unsigned long slot,
>> -			int *psize, int *ssize, unsigned long *vpn)
>> +			int *psize, int *apsize, int *ssize, unsigned long *vpn)
>>  {
>>  	unsigned long avpn, pteg, vpi;
>>  	unsigned long hpte_r = hpte->r;
>>  	unsigned long hpte_v = hpte->v;
>>  	unsigned long vsid, seg_off;
>> -	int i, size, shift, penc;
>> +	int i, size, a_size, shift, penc;
>>  
>> -	if (!(hpte_v & HPTE_V_LARGE))
>> -		size = MMU_PAGE_4K;
>> -	else {
>> +	if (!(hpte_v & HPTE_V_LARGE)) {
>> +		size   = MMU_PAGE_4K;
>> +		a_size = MMU_PAGE_4K;
>> +	} else {
>>  		for (i = 0; i < LP_BITS; i++) {
>>  			if ((hpte_r & LP_MASK(i+1)) == LP_MASK(i+1))
>>  				break;
>> @@ -388,19 +444,26 @@ static void hpte_decode(struct hash_pte *hpte, unsigned long slot,
>>  		penc = LP_MASK(i+1) >> LP_SHIFT;
>>  		for (size = 0; size < MMU_PAGE_COUNT; size++) {
>
>>  
>> -			/* 4K pages are not represented by LP */
>> -			if (size == MMU_PAGE_4K)
>> -				continue;
>> -
>>  			/* valid entries have a shift value */
>>  			if (!mmu_psize_defs[size].shift)
>>  				continue;
>> +			for (a_size = 0; a_size < MMU_PAGE_COUNT; a_size++) {
>
> Can't you resize hpte_actual_psize() here instead of recoding the
> lookup?

I thought about that, but re-coding avoided some repeated check. But
then, if I follow your review comments of avoiding hpte valid check etc, may
be I can reuse the hpte_actual_psize. Will try this. 


>
>> -			if (penc == mmu_psize_defs[size].penc)
>> -				break;
>> +				/* 4K pages are not represented by LP */
>> +				if (a_size == MMU_PAGE_4K)
>> +					continue;
>> +
>> +				/* valid entries have a shift value */
>> +				if (!mmu_psize_defs[a_size].shift)
>> +					continue;
>> +
>> +				if (penc == mmu_psize_defs[size].penc[a_size])
>> +					goto out;
>> +			}
>>  		}
>>  	}
>>  
>> +out:

-aneesh

^ permalink raw reply	[flat|nested] 73+ messages in thread

* Re: [PATCH -V5 06/25] powerpc: Reduce PTE table memory wastage
  2013-04-10  7:54     ` Aneesh Kumar K.V
@ 2013-04-10  8:52       ` Aneesh Kumar K.V
  0 siblings, 0 replies; 73+ messages in thread
From: Aneesh Kumar K.V @ 2013-04-10  8:52 UTC (permalink / raw)
  To: Michael Ellerman; +Cc: paulus, linuxppc-dev, linux-mm

"Aneesh Kumar K.V" <aneesh.kumar@linux.vnet.ibm.com> writes:

> Michael Ellerman <michael@ellerman.id.au> writes:
>
>> On Thu, Apr 04, 2013 at 11:27:44AM +0530, Aneesh Kumar K.V wrote:
>>> From: "Aneesh Kumar K.V" <aneesh.kumar@linux.vnet.ibm.com>
>>> 
>>> We allocate one page for the last level of linux page table. With THP and
>>> large page size of 16MB, that would mean we are wasting large part
>>> of that page. To map 16MB area, we only need a PTE space of 2K with 64K
>>> page size. This patch reduce the space wastage by sharing the page
>>> allocated for the last level of linux page table with multiple pmd
>>> entries. We call these smaller chunks PTE page fragments and allocated
>>> page, PTE page.
>>
>> This is not compiling for me:
>>
>> arch/powerpc/mm/mmu_context_hash64.c:118:3: error: implicit declaration of function 'reset_page_mapcount'
>>
>
> can you share the .config ? I have the git tree at 
>
> git://github.com/kvaneesh/linux.git ppc64-thp-7

22b751c3d0376e86a377e3a0aa2ddbbe9d2eefc1 . Will rebase to latest linus tree.

-aneesh

^ permalink raw reply	[flat|nested] 73+ messages in thread

* Re: [PATCH -V5 06/25] powerpc: Reduce PTE table memory wastage
  2013-04-10  7:53         ` Aneesh Kumar K.V
@ 2013-04-10 17:47           ` Aneesh Kumar K.V
  2013-04-11  1:20             ` David Gibson
  2013-04-11  1:12           ` David Gibson
  1 sibling, 1 reply; 73+ messages in thread
From: Aneesh Kumar K.V @ 2013-04-10 17:47 UTC (permalink / raw)
  To: David Gibson; +Cc: paulus, linuxppc-dev, linux-mm

"Aneesh Kumar K.V" <aneesh.kumar@linux.vnet.ibm.com> writes:

> David Gibson <dwg@au1.ibm.com> writes:
>
>> On Wed, Apr 10, 2013 at 11:59:29AM +0530, Aneesh Kumar K.V wrote:
>>> David Gibson <dwg@au1.ibm.com> writes:
>>> > On Thu, Apr 04, 2013 at 11:27:44AM +0530, Aneesh Kumar K.V wrote:
>> [snip]
>>> >> @@ -97,13 +100,45 @@ void __destroy_context(int context_id)
>>> >>  }
>>> >>  EXPORT_SYMBOL_GPL(__destroy_context);
>>> >>  
>>> >> +#ifdef CONFIG_PPC_64K_PAGES
>>> >> +static void destroy_pagetable_page(struct mm_struct *mm)
>>> >> +{
>>> >> +	int count;
>>> >> +	struct page *page;
>>> >> +
>>> >> +	page = mm->context.pgtable_page;
>>> >> +	if (!page)
>>> >> +		return;
>>> >> +
>>> >> +	/* drop all the pending references */
>>> >> +	count = atomic_read(&page->_mapcount) + 1;
>>> >> +	/* We allow PTE_FRAG_NR(16) fragments from a PTE page */
>>> >> +	count = atomic_sub_return(16 - count, &page->_count);
>>> >
>>> > You should really move PTE_FRAG_NR to a header so you can actually use
>>> > it here rather than hard coding 16.
>>> >
>>> > It took me a fair while to convince myself that there is no race here
>>> > with something altering mapcount and count between the atomic_read()
>>> > and the atomic_sub_return().  It could do with a comment to explain
>>> > why that is safe.
>>> >
>>> > Re-using the mapcount field for your index also seems odd, and it took
>>> > me a while to convince myself that that's safe too.  Wouldn't it be
>>> > simpler to store a pointer to the next sub-page in the mm_context
>>> > instead? You can get from that to the struct page easily enough with a
>>> > shift and pfn_to_page().
>>> 
>>> I found using _mapcount simpler in this case. I was looking at it not
>>> as an index, but rather how may fragments are mapped/used already.
>>
>> Except that it's actually (#fragments - 1).  Using subpage pointer
>> makes the fragments calculation (very slightly) harder, but the
>> calculation of the table address easier.  More importantly it avoids
>> adding effectively an extra variable - which is then shoehorned into a
>> structure not really designed to hold it.
>
> Even with subpage pointer we would need mm->context.pgtable_page or
> something similar. We don't add any other extra variable right ?. Let me
> try what you are suggesting here and see if that make it simpler.


Here is what I ended up with. I will fold this in next update

diff --git a/arch/powerpc/include/asm/mmu-book3e.h b/arch/powerpc/include/asm/mmu-book3e.h
index affbd68..8bd560c 100644
--- a/arch/powerpc/include/asm/mmu-book3e.h
+++ b/arch/powerpc/include/asm/mmu-book3e.h
@@ -233,7 +233,7 @@ typedef struct {
 #endif
 #ifdef CONFIG_PPC_64K_PAGES
 	/* for 4K PTE fragment support */
-	struct page *pgtable_page;
+	void *pte_frag;
 #endif
 } mm_context_t;
 
diff --git a/arch/powerpc/include/asm/mmu-hash64.h b/arch/powerpc/include/asm/mmu-hash64.h
index f51ed83..af73f06 100644
--- a/arch/powerpc/include/asm/mmu-hash64.h
+++ b/arch/powerpc/include/asm/mmu-hash64.h
@@ -511,7 +511,7 @@ typedef struct {
 #endif /* CONFIG_PPC_ICSWX */
 #ifdef CONFIG_PPC_64K_PAGES
 	/* for 4K PTE fragment support */
-	struct page *pgtable_page;
+	void *pte_frag;
 #endif
 } mm_context_t;
 
diff --git a/arch/powerpc/include/asm/pgalloc-64.h b/arch/powerpc/include/asm/pgalloc-64.h
index 46c6ffa..7b7ac40 100644
--- a/arch/powerpc/include/asm/pgalloc-64.h
+++ b/arch/powerpc/include/asm/pgalloc-64.h
@@ -149,6 +149,16 @@ static inline void __pte_free_tlb(struct mmu_gather *tlb, pgtable_t table,
 }
 
 #else /* if CONFIG_PPC_64K_PAGES */
+/*
+ * we support 16 fragments per PTE page.
+ */
+#define PTE_FRAG_NR	16
+/*
+ * We use a 2K PTE page fragment and another 2K for storing
+ * real_pte_t hash index
+ */
+#define PTE_FRAG_SIZE_SHIFT  12
+#define PTE_FRAG_SIZE (2 * PTRS_PER_PTE * sizeof(pte_t))
 
 extern pte_t *page_table_alloc(struct mm_struct *, unsigned long, int);
 extern void page_table_free(struct mm_struct *, unsigned long *, int);
diff --git a/arch/powerpc/kernel/setup_64.c b/arch/powerpc/kernel/setup_64.c
index 27432fe..e379d3f 100644
--- a/arch/powerpc/kernel/setup_64.c
+++ b/arch/powerpc/kernel/setup_64.c
@@ -584,7 +584,7 @@ void __init setup_arch(char **cmdline_p)
 	init_mm.end_data = (unsigned long) _edata;
 	init_mm.brk = klimit;
 #ifdef CONFIG_PPC_64K_PAGES
-	init_mm.context.pgtable_page = NULL;
+	init_mm.context.pte_frag = NULL;
 #endif
 	irqstack_early_init();
 	exc_lvl_early_init();
diff --git a/arch/powerpc/mm/mmu_context_hash64.c b/arch/powerpc/mm/mmu_context_hash64.c
index 87d96e5..8fe4bc9 100644
--- a/arch/powerpc/mm/mmu_context_hash64.c
+++ b/arch/powerpc/mm/mmu_context_hash64.c
@@ -23,6 +23,7 @@
 #include <linux/slab.h>
 
 #include <asm/mmu_context.h>
+#include <asm/pgalloc.h>
 
 #include "icswx.h"
 
@@ -86,7 +87,7 @@ int init_new_context(struct task_struct *tsk, struct mm_struct *mm)
 #endif /* CONFIG_PPC_ICSWX */
 
 #ifdef CONFIG_PPC_64K_PAGES
-	mm->context.pgtable_page = NULL;
+	mm->context.pte_frag = NULL;
 #endif
 	return 0;
 }
@@ -103,16 +104,19 @@ EXPORT_SYMBOL_GPL(__destroy_context);
 static void destroy_pagetable_page(struct mm_struct *mm)
 {
 	int count;
+	void *pte_frag;
 	struct page *page;
 
-	page = mm->context.pgtable_page;
-	if (!page)
+	pte_frag = mm->context.pte_frag;
+	if (!pte_frag)
 		return;
 
+	page = virt_to_page(pte_frag);
 	/* drop all the pending references */
-	count = atomic_read(&page->_mapcount) + 1;
-	/* We allow PTE_FRAG_NR(16) fragments from a PTE page */
-	count = atomic_sub_return(16 - count, &page->_count);
+	count = ((unsigned long )pte_frag &
+		 (PAGE_SIZE -1)) >> PTE_FRAG_SIZE_SHIFT;
+	/* We allow PTE_FRAG_NR fragments from a PTE page */
+	count = atomic_sub_return(PTE_FRAG_NR - count, &page->_count);
 	if (!count) {
 		pgtable_page_dtor(page);
 		page_mapcount_reset(page);
diff --git a/arch/powerpc/mm/pgtable_64.c b/arch/powerpc/mm/pgtable_64.c
index 34bc11f..d776614 100644
--- a/arch/powerpc/mm/pgtable_64.c
+++ b/arch/powerpc/mm/pgtable_64.c
@@ -352,66 +352,50 @@ struct page *pmd_page(pmd_t pmd)
 }
 
 #ifdef CONFIG_PPC_64K_PAGES
-/*
- * we support 16 fragments per PTE page. This is limited by how many
- * bits we can pack in page->_mapcount. We use the first half for
- * tracking the usage for rcu page table free.
- */
-#define PTE_FRAG_NR	16
-/*
- * We use a 2K PTE page fragment and another 2K for storing
- * real_pte_t hash index
- */
-#define PTE_FRAG_SIZE (2 * PTRS_PER_PTE * sizeof(pte_t))
-
 static pte_t *get_from_cache(struct mm_struct *mm)
 {
-	int index;
-	pte_t *ret = NULL;
-	struct page *page;
+	void *ret = NULL;
 
 	spin_lock(&mm->page_table_lock);
-	page = mm->context.pgtable_page;
-	if (page) {
-		void *p = page_address(page);
-		index = atomic_add_return(1, &page->_mapcount);
-		ret = (pte_t *) (p + (index * PTE_FRAG_SIZE));
+	ret = mm->context.pte_frag;
+	if (ret) {
+		ret += PTE_FRAG_SIZE;
 		/*
 		 * If we have taken up all the fragments mark PTE page NULL
 		 */
-		if (index == PTE_FRAG_NR - 1)
-			mm->context.pgtable_page = NULL;
+		if (((unsigned long )ret & (PAGE_SIZE - 1)) == 0)
+			ret = NULL;
+		mm->context.pte_frag = ret;
 	}
 	spin_unlock(&mm->page_table_lock);
-	return ret;
+	return (pte_t *)ret;
 }
 
 static pte_t *__alloc_for_cache(struct mm_struct *mm, int kernel)
 {
-	pte_t *ret = NULL;
+	void *ret = NULL;
 	struct page *page = alloc_page(GFP_KERNEL | __GFP_NOTRACK |
 				       __GFP_REPEAT | __GFP_ZERO);
 	if (!page)
 		return NULL;
 
+	ret = page_address(page);
 	spin_lock(&mm->page_table_lock);
 	/*
 	 * If we find pgtable_page set, we return
 	 * the allocated page with single fragement
 	 * count.
 	 */
-	if (likely(!mm->context.pgtable_page)) {
+	if (likely(!mm->context.pte_frag)) {
 		atomic_set(&page->_count, PTE_FRAG_NR);
-		atomic_set(&page->_mapcount, 0);
-		mm->context.pgtable_page = page;
+		mm->context.pte_frag = ret + PTE_FRAG_SIZE;
 	}
 	spin_unlock(&mm->page_table_lock);
 
-	ret = (unsigned long *)page_address(page);
 	if (!kernel)
 		pgtable_page_ctor(page);
 
-	return ret;
+	return (pte_t *)ret;
 }
 
 pte_t *page_table_alloc(struct mm_struct *mm, unsigned long vmaddr, int kernel)

^ permalink raw reply related	[flat|nested] 73+ messages in thread

* Re: [PATCH -V5 08/25] powerpc: Decode the pte-lp-encoding bits correctly.
  2013-04-10  8:11     ` Aneesh Kumar K.V
@ 2013-04-10 17:49       ` Aneesh Kumar K.V
  2013-04-11  1:28       ` David Gibson
  1 sibling, 0 replies; 73+ messages in thread
From: Aneesh Kumar K.V @ 2013-04-10 17:49 UTC (permalink / raw)
  To: David Gibson; +Cc: paulus, linuxppc-dev, linux-mm

"Aneesh Kumar K.V" <aneesh.kumar@linux.vnet.ibm.com> writes:

>>>  static void hpte_decode(struct hash_pte *hpte, unsigned long slot,
>>> -			int *psize, int *ssize, unsigned long *vpn)
>>> +			int *psize, int *apsize, int *ssize, unsigned long *vpn)
>>>  {
>>>  	unsigned long avpn, pteg, vpi;
>>>  	unsigned long hpte_r = hpte->r;
>>>  	unsigned long hpte_v = hpte->v;
>>>  	unsigned long vsid, seg_off;
>>> -	int i, size, shift, penc;
>>> +	int i, size, a_size, shift, penc;
>>>  
>>> -	if (!(hpte_v & HPTE_V_LARGE))
>>> -		size = MMU_PAGE_4K;
>>> -	else {
>>> +	if (!(hpte_v & HPTE_V_LARGE)) {
>>> +		size   = MMU_PAGE_4K;
>>> +		a_size = MMU_PAGE_4K;
>>> +	} else {
>>>  		for (i = 0; i < LP_BITS; i++) {
>>>  			if ((hpte_r & LP_MASK(i+1)) == LP_MASK(i+1))
>>>  				break;
>>> @@ -388,19 +444,26 @@ static void hpte_decode(struct hash_pte *hpte, unsigned long slot,
>>>  		penc = LP_MASK(i+1) >> LP_SHIFT;
>>>  		for (size = 0; size < MMU_PAGE_COUNT; size++) {
>>
>>>  
>>> -			/* 4K pages are not represented by LP */
>>> -			if (size == MMU_PAGE_4K)
>>> -				continue;
>>> -
>>>  			/* valid entries have a shift value */
>>>  			if (!mmu_psize_defs[size].shift)
>>>  				continue;
>>> +			for (a_size = 0; a_size < MMU_PAGE_COUNT; a_size++) {
>>
>> Can't you resize hpte_actual_psize() here instead of recoding the
>> lookup?
>
> I thought about that, but re-coding avoided some repeated check. But
> then, if I follow your review comments of avoiding hpte valid check etc, may
> be I can reuse the hpte_actual_psize. Will try this. 
>

How about below ?

diff --git a/arch/powerpc/mm/hash_native_64.c b/arch/powerpc/mm/hash_native_64.c
index 4427ca8..de235d5 100644
--- a/arch/powerpc/mm/hash_native_64.c
+++ b/arch/powerpc/mm/hash_native_64.c
@@ -271,19 +271,10 @@ static long native_hpte_remove(unsigned long hpte_group)
 	return i;
 }
 
-static inline int hpte_actual_psize(struct hash_pte *hptep, int psize)
+static inline int __hpte_actual_psize(unsigned int lp, int psize)
 {
 	int i, shift;
 	unsigned int mask;
-	/* Look at the 8 bit LP value */
-	unsigned int lp = (hptep->r >> LP_SHIFT) & ((1 << LP_BITS) - 1);
-
-	if (!(hptep->v & HPTE_V_VALID))
-		return -1;
-
-	/* First check if it is large page */
-	if (!(hptep->v & HPTE_V_LARGE))
-		return MMU_PAGE_4K;
 
 	/* start from 1 ignoring MMU_PAGE_4K */
 	for (i = 1; i < MMU_PAGE_COUNT; i++) {
@@ -310,6 +301,21 @@ static inline int hpte_actual_psize(struct hash_pte *hptep, int psize)
 	return -1;
 }
 
+static inline int hpte_actual_psize(struct hash_pte *hptep, int psize)
+{
+	/* Look at the 8 bit LP value */
+	unsigned int lp = (hptep->r >> LP_SHIFT) & ((1 << LP_BITS) - 1);
+
+	if (!(hptep->v & HPTE_V_VALID))
+		return -1;
+
+	/* First check if it is large page */
+	if (!(hptep->v & HPTE_V_LARGE))
+		return MMU_PAGE_4K;
+
+	return __hpte_actual_psize(lp, psize);
+}
+
 static long native_hpte_updatepp(unsigned long slot, unsigned long newpp,
 				 unsigned long vpn, int psize, int ssize,
 				 int local)
@@ -530,7 +536,7 @@ static void hpte_decode(struct hash_pte *hpte, unsigned long slot,
 	unsigned long avpn, pteg, vpi;
 	unsigned long hpte_v = hpte->v;
 	unsigned long vsid, seg_off;
-	int size, a_size, shift, mask;
+	int size, a_size, shift;
 	/* Look at the 8 bit LP value */
 	unsigned int lp = (hpte->r >> LP_SHIFT) & ((1 << LP_BITS) - 1);
 
@@ -544,33 +550,11 @@ static void hpte_decode(struct hash_pte *hpte, unsigned long slot,
 			if (!mmu_psize_defs[size].shift)
 				continue;
 
-			/* start from 1 ignoring MMU_PAGE_4K */
-			for (a_size = 1; a_size < MMU_PAGE_COUNT; a_size++) {
-
-				/* invalid penc */
-				if (mmu_psize_defs[size].penc[a_size] == -1)
-					continue;
-				/*
-				 * encoding bits per actual page size
-				 *        PTE LP     actual page size
-				 *    rrrr rrrz		>=8KB
-				 *    rrrr rrzz		>=16KB
-				 *    rrrr rzzz		>=32KB
-				 *    rrrr zzzz		>=64KB
-				 * .......
-				 */
-				shift = mmu_psize_defs[a_size].shift - LP_SHIFT;
-				if (shift > LP_BITS)
-					shift = LP_BITS;
-				mask = (1 << shift) - 1;
-				if ((lp & mask) ==
-				    mmu_psize_defs[size].penc[a_size]) {
-					goto out;
-				}
-			}
+			a_size = __hpte_actual_psize(lp, size);
+			if (a_size != -1)
+				break;
 		}
 	}
-out:
 	/* This works for all page sizes, and for 256M and 1T segments */
 	*ssize = hpte_v >> HPTE_V_SSIZE_SHIFT;
 	shift = mmu_psize_defs[size].shift;

^ permalink raw reply related	[flat|nested] 73+ messages in thread

* Re: [PATCH -V5 19/25] powerpc/THP: Differentiate THP PMD entries from HUGETLB PMD entries
  2013-04-10  7:21   ` Michael Ellerman
@ 2013-04-10 18:26     ` Aneesh Kumar K.V
  0 siblings, 0 replies; 73+ messages in thread
From: Aneesh Kumar K.V @ 2013-04-10 18:26 UTC (permalink / raw)
  To: Michael Ellerman; +Cc: paulus, linuxppc-dev, linux-mm

Michael Ellerman <michael@ellerman.id.au> writes:

> On Thu, Apr 04, 2013 at 11:27:57AM +0530, Aneesh Kumar K.V wrote:
>> From: "Aneesh Kumar K.V" <aneesh.kumar@linux.vnet.ibm.com>
>> 
>> HUGETLB clear the top bit of PMD entries and use that to indicate
>> a HUGETLB page directory. Since we store pfns in PMDs for THP,
>> we would have the top bit cleared by default. Add the top bit mask
>> for THP PMD entries and clear that when we are looking for pmd_pfn.
>> 
>> @@ -44,6 +44,14 @@ struct mm_struct;
>>  #define PMD_HUGE_RPN_SHIFT	PTE_RPN_SHIFT
>>  #define HUGE_PAGE_SIZE		(ASM_CONST(1) << 24)
>>  #define HUGE_PAGE_MASK		(~(HUGE_PAGE_SIZE - 1))
>> +/*
>> + * HugeTLB looks at the top bit of the Linux page table entries to
>> + * decide whether it is a huge page directory or not. Mark HUGE
>> + * PMD to differentiate
>> + */
>> +#define PMD_HUGE_NOT_HUGETLB	(ASM_CONST(1) << 63)
>> +#define PMD_ISHUGE		(_PMD_ISHUGE | PMD_HUGE_NOT_HUGETLB)
>> +#define PMD_HUGE_PROTBITS	(0xfff | PMD_HUGE_NOT_HUGETLB)
>>  
>>  #ifndef __ASSEMBLY__
>>  extern void hpte_need_hugepage_flush(struct mm_struct *mm, unsigned long addr,
>> @@ -84,7 +93,8 @@ static inline unsigned long pmd_pfn(pmd_t pmd)
>>  	/*
>>  	 * Only called for hugepage pmd
>>  	 */
>> -	return pmd_val(pmd) >> PMD_HUGE_RPN_SHIFT;
>> +	unsigned long val = pmd_val(pmd) & ~PMD_HUGE_PROTBITS;
>> +	return val  >> PMD_HUGE_RPN_SHIFT;
>>  }
>
> This is breaking the 32-bit build for me (pmac32_defconfig):
>
> arch/powerpc/include/asm/pgtable.h:123:2: error: left shift count >= width of type [-Werror]
>



diff --git a/arch/powerpc/include/asm/pgtable.h b/arch/powerpc/include/asm/pgtable.h
index 5617dee..30c765a 100644
--- a/arch/powerpc/include/asm/pgtable.h
+++ b/arch/powerpc/include/asm/pgtable.h
@@ -110,11 +110,6 @@ static inline int has_transparent_hugepage(void)
 	return 1;
 }
 
-#else
-#define pmd_large(pmd)		0
-#define has_transparent_hugepage() 0
-#endif /* CONFIG_TRANSPARENT_HUGEPAGE */
-
 static inline unsigned long pmd_pfn(pmd_t pmd)
 {
 	/*
@@ -124,6 +119,11 @@ static inline unsigned long pmd_pfn(pmd_t pmd)
 	return val  >> PMD_HUGE_RPN_SHIFT;
 }
 
+#else
+#define pmd_large(pmd)		0
+#define has_transparent_hugepage() 0
+#endif /* CONFIG_TRANSPARENT_HUGEPAGE */
+
 static inline int pmd_young(pmd_t pmd)
 {
 	return pmd_val(pmd) & PMD_HUGE_ACCESSED;

^ permalink raw reply related	[flat|nested] 73+ messages in thread

* Re: [PATCH -V5 06/25] powerpc: Reduce PTE table memory wastage
  2013-04-10  7:53         ` Aneesh Kumar K.V
  2013-04-10 17:47           ` Aneesh Kumar K.V
@ 2013-04-11  1:12           ` David Gibson
  1 sibling, 0 replies; 73+ messages in thread
From: David Gibson @ 2013-04-11  1:12 UTC (permalink / raw)
  To: Aneesh Kumar K.V; +Cc: paulus, linuxppc-dev, linux-mm

[-- Attachment #1: Type: text/plain, Size: 2041 bytes --]

On Wed, Apr 10, 2013 at 01:23:25PM +0530, Aneesh Kumar K.V wrote:
> David Gibson <dwg@au1.ibm.com> writes:
> > On Wed, Apr 10, 2013 at 11:59:29AM +0530, Aneesh Kumar K.V wrote:
> >> David Gibson <dwg@au1.ibm.com> writes:
> >> > On Thu, Apr 04, 2013 at 11:27:44AM +0530, Aneesh Kumar K.V wrote:
[snip]
> >> > You should really move PTE_FRAG_NR to a header so you can actually use
> >> > it here rather than hard coding 16.
> >> >
> >> > It took me a fair while to convince myself that there is no race here
> >> > with something altering mapcount and count between the atomic_read()
> >> > and the atomic_sub_return().  It could do with a comment to explain
> >> > why that is safe.
> >> >
> >> > Re-using the mapcount field for your index also seems odd, and it took
> >> > me a while to convince myself that that's safe too.  Wouldn't it be
> >> > simpler to store a pointer to the next sub-page in the mm_context
> >> > instead? You can get from that to the struct page easily enough with a
> >> > shift and pfn_to_page().
> >> 
> >> I found using _mapcount simpler in this case. I was looking at it not
> >> as an index, but rather how may fragments are mapped/used already.
> >
> > Except that it's actually (#fragments - 1).  Using subpage pointer
> > makes the fragments calculation (very slightly) harder, but the
> > calculation of the table address easier.  More importantly it avoids
> > adding effectively an extra variable - which is then shoehorned into a
> > structure not really designed to hold it.
> 
> Even with subpage pointer we would need mm->context.pgtable_page or
> something similar. We don't add any other extra variable right ?. Let me
> try what you are suggesting here and see if that make it simpler.

No, because the struct page * can be easily derived from the subpage
pointer.

-- 
David Gibson			| I'll have my music baroque, and my code
david AT gibson.dropbear.id.au	| minimalist, thank you.  NOT _the_ _other_
				| _way_ _around_!
http://www.ozlabs.org/~dgibson

[-- Attachment #2: Digital signature --]
[-- Type: application/pgp-signature, Size: 198 bytes --]

^ permalink raw reply	[flat|nested] 73+ messages in thread

* Re: [PATCH -V5 06/25] powerpc: Reduce PTE table memory wastage
  2013-04-10 17:47           ` Aneesh Kumar K.V
@ 2013-04-11  1:20             ` David Gibson
  0 siblings, 0 replies; 73+ messages in thread
From: David Gibson @ 2013-04-11  1:20 UTC (permalink / raw)
  To: Aneesh Kumar K.V; +Cc: linuxppc-dev, paulus, linux-mm

[-- Attachment #1: Type: text/plain, Size: 2935 bytes --]

On Wed, Apr 10, 2013 at 11:17:30PM +0530, Aneesh Kumar K.V wrote:
> "Aneesh Kumar K.V" <aneesh.kumar@linux.vnet.ibm.com> writes:
> 
> > David Gibson <dwg@au1.ibm.com> writes:
> >
> >> On Wed, Apr 10, 2013 at 11:59:29AM +0530, Aneesh Kumar K.V wrote:
> >>> David Gibson <dwg@au1.ibm.com> writes:
> >>> > On Thu, Apr 04, 2013 at 11:27:44AM +0530, Aneesh Kumar K.V wrote:
> >> [snip]
> >>> >> @@ -97,13 +100,45 @@ void __destroy_context(int context_id)
> >>> >>  }
> >>> >>  EXPORT_SYMBOL_GPL(__destroy_context);
> >>> >>  
> >>> >> +#ifdef CONFIG_PPC_64K_PAGES
> >>> >> +static void destroy_pagetable_page(struct mm_struct *mm)
> >>> >> +{
> >>> >> +	int count;
> >>> >> +	struct page *page;
> >>> >> +
> >>> >> +	page = mm->context.pgtable_page;
> >>> >> +	if (!page)
> >>> >> +		return;
> >>> >> +
> >>> >> +	/* drop all the pending references */
> >>> >> +	count = atomic_read(&page->_mapcount) + 1;
> >>> >> +	/* We allow PTE_FRAG_NR(16) fragments from a PTE page */
> >>> >> +	count = atomic_sub_return(16 - count, &page->_count);
> >>> >
> >>> > You should really move PTE_FRAG_NR to a header so you can actually use
> >>> > it here rather than hard coding 16.
> >>> >
> >>> > It took me a fair while to convince myself that there is no race here
> >>> > with something altering mapcount and count between the atomic_read()
> >>> > and the atomic_sub_return().  It could do with a comment to explain
> >>> > why that is safe.
> >>> >
> >>> > Re-using the mapcount field for your index also seems odd, and it took
> >>> > me a while to convince myself that that's safe too.  Wouldn't it be
> >>> > simpler to store a pointer to the next sub-page in the mm_context
> >>> > instead? You can get from that to the struct page easily enough with a
> >>> > shift and pfn_to_page().
> >>> 
> >>> I found using _mapcount simpler in this case. I was looking at it not
> >>> as an index, but rather how may fragments are mapped/used already.
> >>
> >> Except that it's actually (#fragments - 1).  Using subpage pointer
> >> makes the fragments calculation (very slightly) harder, but the
> >> calculation of the table address easier.  More importantly it avoids
> >> adding effectively an extra variable - which is then shoehorned into a
> >> structure not really designed to hold it.
> >
> > Even with subpage pointer we would need mm->context.pgtable_page or
> > something similar. We don't add any other extra variable right ?. Let me
> > try what you are suggesting here and see if that make it simpler.
> 
> 
> Here is what I ended up with. I will fold this in next update

Yeah, that looks better to me.  Note that ~PAGE_MASK is the more usual
idiom, rather than (PAGE_SIZE - 1).

-- 
David Gibson			| I'll have my music baroque, and my code
david AT gibson.dropbear.id.au	| minimalist, thank you.  NOT _the_ _other_
				| _way_ _around_!
http://www.ozlabs.org/~dgibson

[-- Attachment #2: Digital signature --]
[-- Type: application/pgp-signature, Size: 198 bytes --]

^ permalink raw reply	[flat|nested] 73+ messages in thread

* Re: [PATCH -V5 08/25] powerpc: Decode the pte-lp-encoding bits correctly.
  2013-04-10  8:11     ` Aneesh Kumar K.V
  2013-04-10 17:49       ` Aneesh Kumar K.V
@ 2013-04-11  1:28       ` David Gibson
  1 sibling, 0 replies; 73+ messages in thread
From: David Gibson @ 2013-04-11  1:28 UTC (permalink / raw)
  To: Aneesh Kumar K.V; +Cc: linuxppc-dev, paulus, linux-mm

[-- Attachment #1: Type: text/plain, Size: 5036 bytes --]

On Wed, Apr 10, 2013 at 01:41:16PM +0530, Aneesh Kumar K.V wrote:
> David Gibson <dwg@au1.ibm.com> writes:
> 
> > On Thu, Apr 04, 2013 at 11:27:46AM +0530, Aneesh Kumar K.V wrote:
> >> From: "Aneesh Kumar K.V" <aneesh.kumar@linux.vnet.ibm.com>
> >> 
> >> We look at both the segment base page size and actual page size and store
> >> the pte-lp-encodings in an array per base page size.
> >> 
> >> We also update all relevant functions to take actual page size argument
> >> so that we can use the correct PTE LP encoding in HPTE. This should also
> >> get the basic Multiple Page Size per Segment (MPSS) support. This is needed
> >> to enable THP on ppc64.
> >> 
> 
> ....
> 
> >> +static inline int hpte_actual_psize(struct hash_pte *hptep, int psize)
> >> +{
> >> +	int i, shift;
> >> +	unsigned int mask;
> >> +	/* Look at the 8 bit LP value */
> >> +	unsigned int lp = (hptep->r >> LP_SHIFT) & ((1 << LP_BITS) - 1);
> >> +
> >> +	if (!(hptep->v & HPTE_V_VALID))
> >> +		return -1;
> >
> > Folding the validity check into the size check seems confusing to me.
> 
> We do end up with invalid hpte with which we call
> hpte_actual_psize. So that check is needed. I can either move to caller,
> but then i will have to replicate it in all the call sites.
> 
> 
> >> +	/* First check if it is large page */
> >> +	if (!(hptep->v & HPTE_V_LARGE))
> >> +		return MMU_PAGE_4K;
> >> +
> >> +	/* start from 1 ignoring MMU_PAGE_4K */
> >> +	for (i = 1; i < MMU_PAGE_COUNT; i++) {
> >> +		/* valid entries have a shift value */
> >> +		if (!mmu_psize_defs[i].shift)
> >> +			continue;
> >
> > Isn't this check redundant with the one below?
> 
> Yes. I guess we can safely assume that if penc is valid then we do
> support that specific large page.
> 
> I will drop this and keep the penc check. That is more correct check
> 
> >> +		/* invalid penc */
> >> +		if (mmu_psize_defs[psize].penc[i] == -1)
> >> +			continue;
> >> +		/*
> >> +		 * encoding bits per actual page size
> >> +		 *        PTE LP     actual page size
> >> +		 *    rrrr rrrz		>=8KB
> >> +		 *    rrrr rrzz		>=16KB
> >> +		 *    rrrr rzzz		>=32KB
> >> +		 *    rrrr zzzz		>=64KB
> >> +		 * .......
> >> +		 */
> >> +		shift = mmu_psize_defs[i].shift - LP_SHIFT;
> >> +		if (shift > LP_BITS)
> >> +			shift = LP_BITS;
> >> +		mask = (1 << shift) - 1;
> >> +		if ((lp & mask) == mmu_psize_defs[psize].penc[i])
> >> +			return i;
> >> +	}
> >
> > Shouldn't we have a BUG() or something here.  If we get here we've
> > somehow created a PTE with LP bits we can't interpret, yes?
> 
> I don't know. Is BUG() the right thing to do ? 

Well, it's a situation that should never occur, and it's not clear
what we can do to fix it if it does, so, yeah, I think BUG() is appropriate.

> >> +	return -1;
> >> +}
> >> +
> >>  static long native_hpte_updatepp(unsigned long slot, unsigned long newpp,
> >>  				 unsigned long vpn, int psize, int ssize,
> >>  				 int local)
> >> @@ -251,6 +294,7 @@ static long native_hpte_updatepp(unsigned long slot, unsigned long newpp,
> >>  	struct hash_pte *hptep = htab_address + slot;
> >>  	unsigned long hpte_v, want_v;
> >>  	int ret = 0;
> >> +	int actual_psize;
> >>  
> >>  	want_v = hpte_encode_avpn(vpn, psize, ssize);
> >>  
> >> @@ -260,9 +304,13 @@ static long native_hpte_updatepp(unsigned long slot, unsigned long newpp,
> >>  	native_lock_hpte(hptep);
> >>  
> >>  	hpte_v = hptep->v;
> >> -
> >> +	actual_psize = hpte_actual_psize(hptep, psize);
> >> +	if (actual_psize < 0) {
> >> +		native_unlock_hpte(hptep);
> >> +		return -1;
> >> +	}
> >
> > Wouldn't it make more sense to only do the psize lookup once you've
> > found a matching hpte?
> 
> But we need to do psize lookup even if V_COMPARE fail, because we want
> to do tlbie in both the case.

Ah, yes.  Sorry, misunderstood what this code was doing.

[snip]
> >> @@ -388,19 +444,26 @@ static void hpte_decode(struct hash_pte *hpte, unsigned long slot,
> >>  		penc = LP_MASK(i+1) >> LP_SHIFT;
> >>  		for (size = 0; size < MMU_PAGE_COUNT; size++) {
> >
> >>  
> >> -			/* 4K pages are not represented by LP */
> >> -			if (size == MMU_PAGE_4K)
> >> -				continue;
> >> -
> >>  			/* valid entries have a shift value */
> >>  			if (!mmu_psize_defs[size].shift)
> >>  				continue;
> >> +			for (a_size = 0; a_size < MMU_PAGE_COUNT; a_size++) {
> >
> > Can't you resize hpte_actual_psize() here instead of recoding the
> > lookup?
> 
> I thought about that, but re-coding avoided some repeated check. But
> then, if I follow your review comments of avoiding hpte valid check etc, may
> be I can reuse the hpte_actual_psize. Will try this. 

hpte_decode() is only used in the kexec() path so some repeated simple
tests don't really matter.

-- 
David Gibson			| I'll have my music baroque, and my code
david AT gibson.dropbear.id.au	| minimalist, thank you.  NOT _the_ _other_
				| _way_ _around_!
http://www.ozlabs.org/~dgibson

[-- Attachment #2: Digital signature --]
[-- Type: application/pgp-signature, Size: 198 bytes --]

^ permalink raw reply	[flat|nested] 73+ messages in thread

* Re: [PATCH -V5 09/25] powerpc: Fix hpte_decode to use the correct decoding for page sizes
  2013-04-04  5:57 ` [PATCH -V5 09/25] powerpc: Fix hpte_decode to use the correct decoding for page sizes Aneesh Kumar K.V
@ 2013-04-11  3:20   ` David Gibson
  0 siblings, 0 replies; 73+ messages in thread
From: David Gibson @ 2013-04-11  3:20 UTC (permalink / raw)
  To: Aneesh Kumar K.V; +Cc: paulus, linuxppc-dev, linux-mm

[-- Attachment #1: Type: text/plain, Size: 1210 bytes --]

On Thu, Apr 04, 2013 at 11:27:47AM +0530, Aneesh Kumar K.V wrote:
> From: "Aneesh Kumar K.V" <aneesh.kumar@linux.vnet.ibm.com>
> 
> As per ISA doc, we encode base and actual page size in the LP bits of
> PTE. The number of bit used to encode the page sizes depend on actual
> page size.  ISA doc lists this as
> 
>    PTE LP     actual page size
> rrrr rrrz 	>=8KB
> rrrr rrzz	>=16KB
> rrrr rzzz 	>=32KB
> rrrr zzzz 	>=64KB
> rrrz zzzz 	>=128KB
> rrzz zzzz 	>=256KB
> rzzz zzzz	>=512KB
> zzzz zzzz 	>=1MB
> 
> ISA doc also says
> "The values of the “z” bits used to specify each size, along with all possible
> values of “r” bits in the LP field, must result in LP values distinct from
> other LP values for other sizes."
> 
> based on the above update hpte_decode to use the correct decoding for LP bits.
> 
> Acked-by: Paul Mackerras <paulus@samba.org>
> Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>

Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
-- 
David Gibson			| I'll have my music baroque, and my code
david AT gibson.dropbear.id.au	| minimalist, thank you.  NOT _the_ _other_
				| _way_ _around_!
http://www.ozlabs.org/~dgibson

[-- Attachment #2: Digital signature --]
[-- Type: application/pgp-signature, Size: 198 bytes --]

^ permalink raw reply	[flat|nested] 73+ messages in thread

* Re: [PATCH -V5 10/25] powerpc: print both base and actual page size on hash failure
  2013-04-04  5:57 ` [PATCH -V5 10/25] powerpc: print both base and actual page size on hash failure Aneesh Kumar K.V
@ 2013-04-11  3:21   ` David Gibson
  0 siblings, 0 replies; 73+ messages in thread
From: David Gibson @ 2013-04-11  3:21 UTC (permalink / raw)
  To: Aneesh Kumar K.V; +Cc: paulus, linuxppc-dev, linux-mm

[-- Attachment #1: Type: text/plain, Size: 457 bytes --]

On Thu, Apr 04, 2013 at 11:27:48AM +0530, Aneesh Kumar K.V wrote:
> From: "Aneesh Kumar K.V" <aneesh.kumar@linux.vnet.ibm.com>
> 
> Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>

Reviewed-by: David Gibson <david@gibson.dropbear.id.au>

-- 
David Gibson			| I'll have my music baroque, and my code
david AT gibson.dropbear.id.au	| minimalist, thank you.  NOT _the_ _other_
				| _way_ _around_!
http://www.ozlabs.org/~dgibson

[-- Attachment #2: Digital signature --]
[-- Type: application/pgp-signature, Size: 198 bytes --]

^ permalink raw reply	[flat|nested] 73+ messages in thread

* Re: [PATCH -V5 12/25] powerpc: Return all the valid pte ecndoing in KVM_PPC_GET_SMMU_INFO ioctl
  2013-04-04  5:57 ` [PATCH -V5 12/25] powerpc: Return all the valid pte ecndoing in KVM_PPC_GET_SMMU_INFO ioctl Aneesh Kumar K.V
@ 2013-04-11  3:24   ` David Gibson
  2013-04-11  5:11     ` Aneesh Kumar K.V
  0 siblings, 1 reply; 73+ messages in thread
From: David Gibson @ 2013-04-11  3:24 UTC (permalink / raw)
  To: Aneesh Kumar K.V; +Cc: paulus, linuxppc-dev, linux-mm

[-- Attachment #1: Type: text/plain, Size: 440 bytes --]

On Thu, Apr 04, 2013 at 11:27:50AM +0530, Aneesh Kumar K.V wrote:
> From: "Aneesh Kumar K.V" <aneesh.kumar@linux.vnet.ibm.com>

Surely this can't be correct until the KVM H_ENTER implementation is
updated to cope with the MPSS page sizes.

-- 
David Gibson			| I'll have my music baroque, and my code
david AT gibson.dropbear.id.au	| minimalist, thank you.  NOT _the_ _other_
				| _way_ _around_!
http://www.ozlabs.org/~dgibson

[-- Attachment #2: Digital signature --]
[-- Type: application/pgp-signature, Size: 198 bytes --]

^ permalink raw reply	[flat|nested] 73+ messages in thread

* Re: [PATCH -V5 13/25] powerpc: Update tlbie/tlbiel as per ISA doc
  2013-04-04  5:57 ` [PATCH -V5 13/25] powerpc: Update tlbie/tlbiel as per ISA doc Aneesh Kumar K.V
@ 2013-04-11  3:30   ` David Gibson
  2013-04-11  5:20     ` Aneesh Kumar K.V
  0 siblings, 1 reply; 73+ messages in thread
From: David Gibson @ 2013-04-11  3:30 UTC (permalink / raw)
  To: Aneesh Kumar K.V; +Cc: paulus, linuxppc-dev, linux-mm

[-- Attachment #1: Type: text/plain, Size: 2239 bytes --]

On Thu, Apr 04, 2013 at 11:27:51AM +0530, Aneesh Kumar K.V wrote:
> From: "Aneesh Kumar K.V" <aneesh.kumar@linux.vnet.ibm.com>
> 
> This make sure we handle multiple page size segment correctly.

This needs a much more detailed message.  In what way was the existing
code not matching the ISA documentation?  What consequences did that
have?

> 
> Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
> ---
>  arch/powerpc/mm/hash_native_64.c |   30 ++++++++++++++++++++++++++++--
>  1 file changed, 28 insertions(+), 2 deletions(-)
> 
> diff --git a/arch/powerpc/mm/hash_native_64.c b/arch/powerpc/mm/hash_native_64.c
> index b461b2d..ac84fa6 100644
> --- a/arch/powerpc/mm/hash_native_64.c
> +++ b/arch/powerpc/mm/hash_native_64.c
> @@ -61,7 +61,10 @@ static inline void __tlbie(unsigned long vpn, int psize, int apsize, int ssize)
>  
>  	switch (psize) {
>  	case MMU_PAGE_4K:
> +		/* clear out bits after (52) [0....52.....63] */
> +		va &= ~((1ul << (64 - 52)) - 1);
>  		va |= ssize << 8;
> +		va |= mmu_psize_defs[apsize].sllp << 6;

sllp is the per-segment encoding, so it sure must be looked up via
psize, not apsize.

>  		asm volatile(ASM_FTR_IFCLR("tlbie %0,0", PPC_TLBIE(%1,%0), %2)
>  			     : : "r" (va), "r"(0), "i" (CPU_FTR_ARCH_206)
>  			     : "memory");
> @@ -69,9 +72,19 @@ static inline void __tlbie(unsigned long vpn, int psize, int apsize, int ssize)
>  	default:
>  		/* We need 14 to 14 + i bits of va */
>  		penc = mmu_psize_defs[psize].penc[apsize];
> -		va &= ~((1ul << mmu_psize_defs[psize].shift) - 1);
> +		va &= ~((1ul << mmu_psize_defs[apsize].shift) - 1);
>  		va |= penc << 12;
>  		va |= ssize << 8;
> +		/* Add AVAL part */
> +		if (psize != apsize) {
> +			/*
> +			 * MPSS, 64K base page size and 16MB parge page size
> +			 * We don't need all the bits, but this seems to work.
> +			 * vpn cover upto 65 bits of va. (0...65) and we need
> +			 * 58..64 bits of va.

"seems to work" is not a comment I like to see in core MMU code...

-- 
David Gibson			| I'll have my music baroque, and my code
david AT gibson.dropbear.id.au	| minimalist, thank you.  NOT _the_ _other_
				| _way_ _around_!
http://www.ozlabs.org/~dgibson

[-- Attachment #2: Digital signature --]
[-- Type: application/pgp-signature, Size: 198 bytes --]

^ permalink raw reply	[flat|nested] 73+ messages in thread

* Re: [PATCH -V5 14/25] mm/THP: HPAGE_SHIFT is not a #define on some arch
  2013-04-04  5:57 ` [PATCH -V5 14/25] mm/THP: HPAGE_SHIFT is not a #define on some arch Aneesh Kumar K.V
@ 2013-04-11  3:36   ` David Gibson
  0 siblings, 0 replies; 73+ messages in thread
From: David Gibson @ 2013-04-11  3:36 UTC (permalink / raw)
  To: Aneesh Kumar K.V; +Cc: Andrea Arcangeli, paulus, linuxppc-dev, linux-mm

[-- Attachment #1: Type: text/plain, Size: 703 bytes --]

On Thu, Apr 04, 2013 at 11:27:52AM +0530, Aneesh Kumar K.V wrote:
> From: "Aneesh Kumar K.V" <aneesh.kumar@linux.vnet.ibm.com>
> 
> On archs like powerpc that support different hugepage sizes, HPAGE_SHIFT
> and other derived values like HPAGE_PMD_ORDER are not constants. So move
> that to hugepage_init
> 
> Cc: Andrea Arcangeli <aarcange@redhat.com>
> Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>

Looks ok to me.

Reviewed-by: David Gibson <david@gibson.dropbear.id.au>

-- 
David Gibson			| I'll have my music baroque, and my code
david AT gibson.dropbear.id.au	| minimalist, thank you.  NOT _the_ _other_
				| _way_ _around_!
http://www.ozlabs.org/~dgibson

[-- Attachment #2: Digital signature --]
[-- Type: application/pgp-signature, Size: 198 bytes --]

^ permalink raw reply	[flat|nested] 73+ messages in thread

* Re: [PATCH -V5 15/25] mm/THP: Add pmd args to pgtable deposit and withdraw APIs
  2013-04-04  5:57 ` [PATCH -V5 15/25] mm/THP: Add pmd args to pgtable deposit and withdraw APIs Aneesh Kumar K.V
@ 2013-04-11  3:40   ` David Gibson
  0 siblings, 0 replies; 73+ messages in thread
From: David Gibson @ 2013-04-11  3:40 UTC (permalink / raw)
  To: Aneesh Kumar K.V; +Cc: Andrea Arcangeli, paulus, linuxppc-dev, linux-mm

[-- Attachment #1: Type: text/plain, Size: 754 bytes --]

On Thu, Apr 04, 2013 at 11:27:53AM +0530, Aneesh Kumar K.V wrote:
> From: "Aneesh Kumar K.V" <aneesh.kumar@linux.vnet.ibm.com>
> 
> This will be later used by powerpc THP support. In powerpc we want to use
> pgtable for storing the hash index values. So instead of adding them to
> mm_context list, we would like to store them in the second half of pmd
> 
> Cc: Andrea Arcangeli <aarcange@redhat.com>
> Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>

Looks ok, afaict.

Reviewed-by: David Gibson <david@gibson.dropbear.id.au>

-- 
David Gibson			| I'll have my music baroque, and my code
david AT gibson.dropbear.id.au	| minimalist, thank you.  NOT _the_ _other_
				| _way_ _around_!
http://www.ozlabs.org/~dgibson

[-- Attachment #2: Digital signature --]
[-- Type: application/pgp-signature, Size: 198 bytes --]

^ permalink raw reply	[flat|nested] 73+ messages in thread

* Re: [PATCH -V5 12/25] powerpc: Return all the valid pte ecndoing in KVM_PPC_GET_SMMU_INFO ioctl
  2013-04-11  3:24   ` David Gibson
@ 2013-04-11  5:11     ` Aneesh Kumar K.V
  2013-04-11  5:57       ` David Gibson
  0 siblings, 1 reply; 73+ messages in thread
From: Aneesh Kumar K.V @ 2013-04-11  5:11 UTC (permalink / raw)
  To: David Gibson; +Cc: paulus, linuxppc-dev, linux-mm

David Gibson <dwg@au1.ibm.com> writes:

> On Thu, Apr 04, 2013 at 11:27:50AM +0530, Aneesh Kumar K.V wrote:
>> From: "Aneesh Kumar K.V" <aneesh.kumar@linux.vnet.ibm.com>
>
> Surely this can't be correct until the KVM H_ENTER implementation is
> updated to cope with the MPSS page sizes.

Why ? We are returning info regarding penc values for different
combination. I would guess qemu to only use info related to base page
size. Rest it can ignore right ?. Obviously i haven't tested this
part. So let me know if I should drop this ?

-aneesh

^ permalink raw reply	[flat|nested] 73+ messages in thread

* Re: [PATCH -V5 13/25] powerpc: Update tlbie/tlbiel as per ISA doc
  2013-04-11  3:30   ` David Gibson
@ 2013-04-11  5:20     ` Aneesh Kumar K.V
  2013-04-11  6:16       ` David Gibson
  0 siblings, 1 reply; 73+ messages in thread
From: Aneesh Kumar K.V @ 2013-04-11  5:20 UTC (permalink / raw)
  To: David Gibson; +Cc: paulus, linuxppc-dev, linux-mm

David Gibson <dwg@au1.ibm.com> writes:

> On Thu, Apr 04, 2013 at 11:27:51AM +0530, Aneesh Kumar K.V wrote:
>> From: "Aneesh Kumar K.V" <aneesh.kumar@linux.vnet.ibm.com>
>> 
>> This make sure we handle multiple page size segment correctly.
>
> This needs a much more detailed message.  In what way was the existing
> code not matching the ISA documentation?  What consequences did that
> have?

Mostly to make sure we use the right penc values in tlbie. I did test
these changes on PowerNV. 


>
>> 
>> Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
>> ---
>>  arch/powerpc/mm/hash_native_64.c |   30 ++++++++++++++++++++++++++++--
>>  1 file changed, 28 insertions(+), 2 deletions(-)
>> 
>> diff --git a/arch/powerpc/mm/hash_native_64.c b/arch/powerpc/mm/hash_native_64.c
>> index b461b2d..ac84fa6 100644
>> --- a/arch/powerpc/mm/hash_native_64.c
>> +++ b/arch/powerpc/mm/hash_native_64.c
>> @@ -61,7 +61,10 @@ static inline void __tlbie(unsigned long vpn, int psize, int apsize, int ssize)
>>  
>>  	switch (psize) {
>>  	case MMU_PAGE_4K:
>> +		/* clear out bits after (52) [0....52.....63] */
>> +		va &= ~((1ul << (64 - 52)) - 1);
>>  		va |= ssize << 8;
>> +		va |= mmu_psize_defs[apsize].sllp << 6;
>
> sllp is the per-segment encoding, so it sure must be looked up via
> psize, not apsize.


as per ISA doc, for base page size 4K, RB[56:58] must be set to
SLB[L|LP] encoded for the page size corresponding to the actual page
size specified by the PTE that was used to create the the TLB entry to
be invalidated.


>
>>  		asm volatile(ASM_FTR_IFCLR("tlbie %0,0", PPC_TLBIE(%1,%0), %2)
>>  			     : : "r" (va), "r"(0), "i" (CPU_FTR_ARCH_206)
>>  			     : "memory");
>> @@ -69,9 +72,19 @@ static inline void __tlbie(unsigned long vpn, int psize, int apsize, int ssize)
>>  	default:
>>  		/* We need 14 to 14 + i bits of va */
>>  		penc = mmu_psize_defs[psize].penc[apsize];
>> -		va &= ~((1ul << mmu_psize_defs[psize].shift) - 1);
>> +		va &= ~((1ul << mmu_psize_defs[apsize].shift) - 1);
>>  		va |= penc << 12;
>>  		va |= ssize << 8;
>> +		/* Add AVAL part */
>> +		if (psize != apsize) {
>> +			/*
>> +			 * MPSS, 64K base page size and 16MB parge page size
>> +			 * We don't need all the bits, but this seems to work.
>> +			 * vpn cover upto 65 bits of va. (0...65) and we need
>> +			 * 58..64 bits of va.
>
> "seems to work" is not a comment I like to see in core MMU code...
>

As per ISA spec, the "other bits" in RB[56:62] must be ignored by the
processor. Hence I didn't bother to do zero it out. Since we only
support one MPSS combination, we could easily zero out using 0xf0. 

-aneesh

^ permalink raw reply	[flat|nested] 73+ messages in thread

* Re: [PATCH -V5 17/25] powerpc/THP: Implement transparent hugepages for ppc64
  2013-04-04  5:57 ` [PATCH -V5 17/25] powerpc/THP: Implement transparent hugepages for ppc64 Aneesh Kumar K.V
@ 2013-04-11  5:38   ` David Gibson
  2013-04-11  7:40     ` Aneesh Kumar K.V
  0 siblings, 1 reply; 73+ messages in thread
From: David Gibson @ 2013-04-11  5:38 UTC (permalink / raw)
  To: Aneesh Kumar K.V; +Cc: paulus, linuxppc-dev, linux-mm

[-- Attachment #1: Type: text/plain, Size: 23390 bytes --]

On Thu, Apr 04, 2013 at 11:27:55AM +0530, Aneesh Kumar K.V wrote:
> From: "Aneesh Kumar K.V" <aneesh.kumar@linux.vnet.ibm.com>
> 
> We now have pmd entries covering to 16MB range. To implement THP on powerpc,
> we double the size of PMD. The second half is used to deposit the pgtable (PTE page).
> We also use the depoisted PTE page for tracking the HPTE information. The information
> include [ secondary group | 3 bit hidx | valid ]. We use one byte per each HPTE entry.
> With 16MB hugepage and 64K HPTE we need 256 entries and with 4K HPTE we need
> 4096 entries. Both will fit in a 4K PTE page.
> 
> Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
> ---
>  arch/powerpc/include/asm/page.h              |    2 +-
>  arch/powerpc/include/asm/pgtable-ppc64-64k.h |    3 +-
>  arch/powerpc/include/asm/pgtable-ppc64.h     |    2 +-
>  arch/powerpc/include/asm/pgtable.h           |  240 ++++++++++++++++++++
>  arch/powerpc/mm/pgtable.c                    |  314 ++++++++++++++++++++++++++
>  arch/powerpc/mm/pgtable_64.c                 |   13 ++
>  arch/powerpc/platforms/Kconfig.cputype       |    1 +
>  7 files changed, 572 insertions(+), 3 deletions(-)
> 
> diff --git a/arch/powerpc/include/asm/page.h b/arch/powerpc/include/asm/page.h
> index 38e7ff6..b927447 100644
> --- a/arch/powerpc/include/asm/page.h
> +++ b/arch/powerpc/include/asm/page.h
> @@ -40,7 +40,7 @@
>  #ifdef CONFIG_HUGETLB_PAGE
>  extern unsigned int HPAGE_SHIFT;
>  #else
> -#define HPAGE_SHIFT PAGE_SHIFT
> +#define HPAGE_SHIFT PMD_SHIFT

That looks like it could break everything except the 64k page size
64-bit base.

>  #endif
>  #define HPAGE_SIZE		((1UL) << HPAGE_SHIFT)
>  #define HPAGE_MASK		(~(HPAGE_SIZE - 1))
> diff --git a/arch/powerpc/include/asm/pgtable-ppc64-64k.h b/arch/powerpc/include/asm/pgtable-ppc64-64k.h
> index 3c529b4..5c5541a 100644
> --- a/arch/powerpc/include/asm/pgtable-ppc64-64k.h
> +++ b/arch/powerpc/include/asm/pgtable-ppc64-64k.h
> @@ -33,7 +33,8 @@
>  #define PGDIR_MASK	(~(PGDIR_SIZE-1))
>  
>  /* Bits to mask out from a PMD to get to the PTE page */
> -#define PMD_MASKED_BITS		0x1ff
> +/* PMDs point to PTE table fragments which are 4K aligned.  */
> +#define PMD_MASKED_BITS		0xfff
>  /* Bits to mask out from a PGD/PUD to get to the PMD page */
>  #define PUD_MASKED_BITS		0x1ff
>  
> diff --git a/arch/powerpc/include/asm/pgtable-ppc64.h b/arch/powerpc/include/asm/pgtable-ppc64.h
> index 0182c20..c0747c7 100644
> --- a/arch/powerpc/include/asm/pgtable-ppc64.h
> +++ b/arch/powerpc/include/asm/pgtable-ppc64.h
> @@ -150,7 +150,7 @@
>  #define	pmd_present(pmd)	(pmd_val(pmd) != 0)
>  #define	pmd_clear(pmdp)		(pmd_val(*(pmdp)) = 0)
>  #define pmd_page_vaddr(pmd)	(pmd_val(pmd) & ~PMD_MASKED_BITS)
> -#define pmd_page(pmd)		virt_to_page(pmd_page_vaddr(pmd))
> +extern struct page *pmd_page(pmd_t pmd);

Does unconditionally changing pmd_page() from a macro to an external
function have a noticeable performance impact?

>  #define pud_set(pudp, pudval)	(pud_val(*(pudp)) = (pudval))
>  #define pud_none(pud)		(!pud_val(pud))
> diff --git a/arch/powerpc/include/asm/pgtable.h b/arch/powerpc/include/asm/pgtable.h
> index 4b52726..9fbe2a7 100644
> --- a/arch/powerpc/include/asm/pgtable.h
> +++ b/arch/powerpc/include/asm/pgtable.h
> @@ -23,7 +23,247 @@ struct mm_struct;
>   */
>  #define PTE_PAGE_HIDX_OFFSET (PTRS_PER_PTE * 8)
>  
> +/* A large part matches with pte bits */
> +#define PMD_HUGE_PRESENT	0x001 /* software: pte contains a translation */
> +#define PMD_HUGE_USER		0x002 /* matches one of the PP bits */
> +#define PMD_HUGE_FILE		0x002 /* (!present only) software: pte holds file offset */

Can we actually get hugepage PMDs that are in this state?

> +#define PMD_HUGE_EXEC		0x004 /* No execute on POWER4 and newer (we invert) */
> +#define PMD_HUGE_SPLITTING	0x008
> +#define PMD_HUGE_SAO		0x010 /* strong Access order */
> +#define PMD_HUGE_HASHPTE	0x020
> +#define PMD_ISHUGE		0x040
> +#define PMD_HUGE_DIRTY		0x080 /* C: page changed */
> +#define PMD_HUGE_ACCESSED	0x100 /* R: page referenced */
> +#define PMD_HUGE_RW		0x200 /* software: user write access allowed */
> +#define PMD_HUGE_BUSY		0x800 /* software: PTE & hash are busy */
> +#define PMD_HUGE_HPTEFLAGS	(PMD_HUGE_BUSY | PMD_HUGE_HASHPTE)
> +/*
> + * We keep both the pmd and pte rpn shift same, eventhough we use only
> + * lower 12 bits for hugepage flags at pmd level

Why?

> + */
> +#define PMD_HUGE_RPN_SHIFT	PTE_RPN_SHIFT
> +#define HUGE_PAGE_SIZE		(ASM_CONST(1) << 24)
> +#define HUGE_PAGE_MASK		(~(HUGE_PAGE_SIZE - 1))
> +
>  #ifndef __ASSEMBLY__
> +extern void hpte_need_hugepage_flush(struct mm_struct *mm, unsigned long addr,
> +				     pmd_t *pmdp);
> +#ifdef CONFIG_TRANSPARENT_HUGEPAGE
> +extern pmd_t pfn_pmd(unsigned long pfn, pgprot_t pgprot);
> +extern pmd_t mk_pmd(struct page *page, pgprot_t pgprot);
> +extern pmd_t pmd_modify(pmd_t pmd, pgprot_t newprot);
> +extern void set_pmd_at(struct mm_struct *mm, unsigned long addr,
> +		       pmd_t *pmdp, pmd_t pmd);
> +extern void update_mmu_cache_pmd(struct vm_area_struct *vma, unsigned long addr,
> +				 pmd_t *pmd);
> +static inline int pmd_large(pmd_t pmd)
> +{
> +	return (pmd_val(pmd) & (PMD_ISHUGE | PMD_HUGE_PRESENT)) ==
> +		(PMD_ISHUGE | PMD_HUGE_PRESENT);
> +}
> +
> +static inline int pmd_trans_splitting(pmd_t pmd)
> +{
> +	return (pmd_val(pmd) & (PMD_ISHUGE|PMD_HUGE_SPLITTING)) ==
> +		(PMD_ISHUGE|PMD_HUGE_SPLITTING);
> +}
> +
> +static inline int pmd_trans_huge(pmd_t pmd)
> +{
> +	return pmd_val(pmd) & PMD_ISHUGE;
> +}
> +/* We will enable it in the last patch */
> +#define has_transparent_hugepage() 0
> +#else
> +#define pmd_large(pmd)		0
> +#define has_transparent_hugepage() 0
> +#endif /* CONFIG_TRANSPARENT_HUGEPAGE */
> +
> +static inline unsigned long pmd_pfn(pmd_t pmd)
> +{
> +	/*
> +	 * Only called for hugepage pmd
> +	 */
> +	return pmd_val(pmd) >> PMD_HUGE_RPN_SHIFT;
> +}
> +
> +static inline int pmd_young(pmd_t pmd)
> +{
> +	return pmd_val(pmd) & PMD_HUGE_ACCESSED;
> +}
> +
> +static inline pmd_t pmd_mkhuge(pmd_t pmd)
> +{
> +	/* Do nothing, mk_pmd() does this part.  */
> +	return pmd;
> +}
> +
> +#define __HAVE_ARCH_PMD_WRITE
> +static inline int pmd_write(pmd_t pmd)
> +{
> +	return pmd_val(pmd) & PMD_HUGE_RW;
> +}
> +
> +static inline pmd_t pmd_mkold(pmd_t pmd)
> +{
> +	pmd_val(pmd) &= ~PMD_HUGE_ACCESSED;
> +	return pmd;
> +}
> +
> +static inline pmd_t pmd_wrprotect(pmd_t pmd)
> +{
> +	pmd_val(pmd) &= ~PMD_HUGE_RW;
> +	return pmd;
> +}
> +
> +static inline pmd_t pmd_mkdirty(pmd_t pmd)
> +{
> +	pmd_val(pmd) |= PMD_HUGE_DIRTY;
> +	return pmd;
> +}
> +
> +static inline pmd_t pmd_mkyoung(pmd_t pmd)
> +{
> +	pmd_val(pmd) |= PMD_HUGE_ACCESSED;
> +	return pmd;
> +}
> +
> +static inline pmd_t pmd_mkwrite(pmd_t pmd)
> +{
> +	pmd_val(pmd) |= PMD_HUGE_RW;
> +	return pmd;
> +}
> +
> +static inline pmd_t pmd_mknotpresent(pmd_t pmd)
> +{
> +	pmd_val(pmd) &= ~PMD_HUGE_PRESENT;
> +	return pmd;
> +}
> +
> +static inline pmd_t pmd_mksplitting(pmd_t pmd)
> +{
> +	pmd_val(pmd) |= PMD_HUGE_SPLITTING;
> +	return pmd;
> +}
> +
> +/*
> + * Set the dirty and/or accessed bits atomically in a linux hugepage PMD, this
> + * function doesn't need to flush the hash entry
> + */
> +static inline void __pmdp_set_access_flags(pmd_t *pmdp, pmd_t entry)
> +{
> +	unsigned long bits = pmd_val(entry) & (PMD_HUGE_DIRTY |
> +					       PMD_HUGE_ACCESSED |
> +					       PMD_HUGE_RW | PMD_HUGE_EXEC);
> +#ifdef PTE_ATOMIC_UPDATES
> +	unsigned long old, tmp;
> +
> +	__asm__ __volatile__(
> +	"1:	ldarx	%0,0,%4\n\
> +		andi.	%1,%0,%6\n\
> +		bne-	1b \n\
> +		or	%0,%3,%0\n\
> +		stdcx.	%0,0,%4\n\
> +		bne-	1b"
> +	:"=&r" (old), "=&r" (tmp), "=m" (*pmdp)
> +	:"r" (bits), "r" (pmdp), "m" (*pmdp), "i" (PMD_HUGE_BUSY)
> +	:"cc");
> +#else
> +	unsigned long old = pmd_val(*pmdp);
> +	*pmdp = __pmd(old | bits);
> +#endif
> +}
> +
> +#define __HAVE_ARCH_PMD_SAME
> +static inline int pmd_same(pmd_t pmd_a, pmd_t pmd_b)
> +{
> +	return (((pmd_val(pmd_a) ^ pmd_val(pmd_b)) & ~PMD_HUGE_HPTEFLAGS) == 0);
> +}
> +
> +#define __HAVE_ARCH_PMDP_SET_ACCESS_FLAGS
> +extern int pmdp_set_access_flags(struct vm_area_struct *vma,
> +				 unsigned long address, pmd_t *pmdp,
> +				 pmd_t entry, int dirty);
> +
> +static inline unsigned long pmd_hugepage_update(struct mm_struct *mm,
> +						unsigned long addr,
> +						pmd_t *pmdp, unsigned long clr)
> +{
> +#ifdef PTE_ATOMIC_UPDATES
> +	unsigned long old, tmp;
> +
> +	__asm__ __volatile__(
> +	"1:	ldarx	%0,0,%3\n\
> +		andi.	%1,%0,%6\n\
> +		bne-	1b \n\
> +		andc	%1,%0,%4 \n\
> +		stdcx.	%1,0,%3 \n\
> +		bne-	1b"
> +	: "=&r" (old), "=&r" (tmp), "=m" (*pmdp)
> +	: "r" (pmdp), "r" (clr), "m" (*pmdp), "i" (PMD_HUGE_BUSY)
> +	: "cc" );
> +#else
> +	unsigned long old = pmd_val(*pmdp);
> +	*pmdp = __pmd(old & ~clr);
> +#endif
> +
> +#ifdef CONFIG_PPC_STD_MMU_64
> +	if (old & PMD_HUGE_HASHPTE)
> +		hpte_need_hugepage_flush(mm, addr, pmdp);
> +#endif
> +	return old;
> +}
> +
> +static inline int __pmdp_test_and_clear_young(struct mm_struct *mm,
> +					      unsigned long addr, pmd_t *pmdp)
> +{
> +	unsigned long old;
> +
> +	if ((pmd_val(*pmdp) & (PMD_HUGE_ACCESSED | PMD_HUGE_HASHPTE)) == 0)
> +		return 0;
> +	old = pmd_hugepage_update(mm, addr, pmdp, PMD_HUGE_ACCESSED);
> +	return ((old & PMD_HUGE_ACCESSED) != 0);
> +}
> +
> +#define __HAVE_ARCH_PMDP_TEST_AND_CLEAR_YOUNG
> +extern int pmdp_test_and_clear_young(struct vm_area_struct *vma,
> +				     unsigned long address, pmd_t *pmdp);
> +#define __HAVE_ARCH_PMDP_CLEAR_YOUNG_FLUSH
> +extern int pmdp_clear_flush_young(struct vm_area_struct *vma,
> +				  unsigned long address, pmd_t *pmdp);
> +
> +#define __HAVE_ARCH_PMDP_GET_AND_CLEAR
> +static inline pmd_t pmdp_get_and_clear(struct mm_struct *mm,
> +				       unsigned long addr, pmd_t *pmdp)
> +{
> +	unsigned long old = pmd_hugepage_update(mm, addr, pmdp, ~0UL);
> +	return __pmd(old);
> +}
> +
> +#define __HAVE_ARCH_PMDP_SET_WRPROTECT
> +static inline void pmdp_set_wrprotect(struct mm_struct *mm, unsigned long addr,
> +				      pmd_t *pmdp)
> +{
> +
> +	if ((pmd_val(*pmdp) & PMD_HUGE_RW) == 0)
> +		return;
> +
> +	pmd_hugepage_update(mm, addr, pmdp, PMD_HUGE_RW);
> +}
> +
> +#define __HAVE_ARCH_PMDP_SPLITTING_FLUSH
> +extern void pmdp_splitting_flush(struct vm_area_struct *vma,
> +				 unsigned long address, pmd_t *pmdp);
> +
> +#define __HAVE_ARCH_PGTABLE_DEPOSIT
> +extern void pgtable_trans_huge_deposit(struct mm_struct *mm, pmd_t *pmdp,
> +				       pgtable_t pgtable);
> +#define __HAVE_ARCH_PGTABLE_WITHDRAW
> +extern pgtable_t pgtable_trans_huge_withdraw(struct mm_struct *mm, pmd_t *pmdp);
> +
> +#define __HAVE_ARCH_PMDP_INVALIDATE
> +extern void pmdp_invalidate(struct vm_area_struct *vma, unsigned long address,
> +			    pmd_t *pmdp);
>  
>  #include <asm/tlbflush.h>
>  
> diff --git a/arch/powerpc/mm/pgtable.c b/arch/powerpc/mm/pgtable.c
> index 214130a..9f33780 100644
> --- a/arch/powerpc/mm/pgtable.c
> +++ b/arch/powerpc/mm/pgtable.c
> @@ -31,6 +31,7 @@
>  #include <asm/pgalloc.h>
>  #include <asm/tlbflush.h>
>  #include <asm/tlb.h>
> +#include <asm/machdep.h>
>  
>  #include "mmu_decl.h"
>  
> @@ -240,3 +241,316 @@ void assert_pte_locked(struct mm_struct *mm, unsigned long addr)
>  }
>  #endif /* CONFIG_DEBUG_VM */
>  
> +
> +#ifdef CONFIG_TRANSPARENT_HUGEPAGE
> +static pmd_t set_hugepage_access_flags_filter(pmd_t pmd,
> +					      struct vm_area_struct *vma,
> +					      int dirty)
> +{
> +	return pmd;
> +}

I don't really see why you're splitting out these trivial ...filter()
functions, rather than just doing it inline in the (single) caller.

> +
> +/*
> + * This is called when relaxing access to a hugepage. It's also called in the page
> + * fault path when we don't hit any of the major fault cases, ie, a minor
> + * update of _PAGE_ACCESSED, _PAGE_DIRTY, etc... The generic code will have
> + * handled those two for us, we additionally deal with missing execute
> + * permission here on some processors
> + */
> +int pmdp_set_access_flags(struct vm_area_struct *vma, unsigned long address,
> +			  pmd_t *pmdp, pmd_t entry, int dirty)
> +{
> +	int changed;
> +	entry = set_hugepage_access_flags_filter(entry, vma, dirty);
> +	changed = !pmd_same(*(pmdp), entry);
> +	if (changed) {
> +		__pmdp_set_access_flags(pmdp, entry);
> +		/*
> +		 * Since we are not supporting SW TLB systems, we don't
> +		 * have any thing similar to flush_tlb_page_nohash()
> +		 */
> +	}
> +	return changed;
> +}
> +
> +int pmdp_test_and_clear_young(struct vm_area_struct *vma,
> +			      unsigned long address, pmd_t *pmdp)
> +{
> +	return __pmdp_test_and_clear_young(vma->vm_mm, address, pmdp);
> +}
> +
> +/*
> + * We currently remove entries from the hashtable regardless of whether
> + * the entry was young or dirty. The generic routines only flush if the
> + * entry was young or dirty which is not good enough.
> + *
> + * We should be more intelligent about this but for the moment we override
> + * these functions and force a tlb flush unconditionally
> + */
> +int pmdp_clear_flush_young(struct vm_area_struct *vma,
> +				  unsigned long address, pmd_t *pmdp)
> +{
> +	return __pmdp_test_and_clear_young(vma->vm_mm, address, pmdp);
> +}
> +
> +/*
> + * We mark the pmd splitting and invalidate all the hpte
> + * entries for this hugepage.
> + */
> +void pmdp_splitting_flush(struct vm_area_struct *vma,
> +			  unsigned long address, pmd_t *pmdp)
> +{
> +	unsigned long old, tmp;
> +
> +	VM_BUG_ON(address & ~HPAGE_PMD_MASK);
> +#ifdef PTE_ATOMIC_UPDATES
> +
> +	__asm__ __volatile__(
> +	"1:	ldarx	%0,0,%3\n\
> +		andi.	%1,%0,%6\n\
> +		bne-	1b \n\
> +		ori	%1,%0,%4 \n\
> +		stdcx.	%1,0,%3 \n\
> +		bne-	1b"
> +	: "=&r" (old), "=&r" (tmp), "=m" (*pmdp)
> +	: "r" (pmdp), "i" (PMD_HUGE_SPLITTING), "m" (*pmdp), "i" (PMD_HUGE_BUSY)
> +	: "cc" );
> +#else
> +	old = pmd_val(*pmdp);
> +	*pmdp = __pmd(old | PMD_HUGE_SPLITTING);
> +#endif
> +	/*
> +	 * If we didn't had the splitting flag set, go and flush the
> +	 * HPTE entries and serialize against gup fast.
> +	 */
> +	if (!(old & PMD_HUGE_SPLITTING)) {
> +#ifdef CONFIG_PPC_STD_MMU_64
> +		/* We need to flush the hpte */
> +		if (old & PMD_HUGE_HASHPTE)
> +			hpte_need_hugepage_flush(vma->vm_mm, address, pmdp);
> +#endif
> +		/* need tlb flush only to serialize against gup-fast */
> +		flush_tlb_range(vma, address, address + HPAGE_PMD_SIZE);
> +	}
> +}
> +
> +/*
> + * We want to put the pgtable in pmd and use pgtable for tracking
> + * the base page size hptes
> + */
> +void pgtable_trans_huge_deposit(struct mm_struct *mm, pmd_t *pmdp,
> +				pgtable_t pgtable)
> +{
> +	unsigned long *pgtable_slot;
> +	assert_spin_locked(&mm->page_table_lock);
> +	/*
> +	 * we store the pgtable in the second half of PMD
> +	 */
> +	pgtable_slot = pmdp + PTRS_PER_PMD;
> +	*pgtable_slot = (unsigned long)pgtable;
> +}
> +
> +#define PTE_FRAG_SIZE (2 * PTRS_PER_PTE * sizeof(pte_t))

Another example of why this define should be moved to a header.

> +pgtable_t pgtable_trans_huge_withdraw(struct mm_struct *mm, pmd_t *pmdp)
> +{
> +	pgtable_t pgtable;
> +	unsigned long *pgtable_slot;
> +
> +	assert_spin_locked(&mm->page_table_lock);
> +	pgtable_slot = pmdp + PTRS_PER_PMD;
> +	pgtable = (pgtable_t) *pgtable_slot;
> +	/*
> +	 * We store HPTE information in the deposited PTE fragment.
> +	 * zero out the content on withdraw.
> +	 */
> +	memset(pgtable, 0, PTE_FRAG_SIZE);
> +	return pgtable;
> +}
> +
> +/*
> + * Since we are looking at latest ppc64, we don't need to worry about
> + * i/d cache coherency on exec fault
> + */
> +static pmd_t set_pmd_filter(pmd_t pmd, unsigned long addr)
> +{
> +	pmd = __pmd(pmd_val(pmd) & ~PMD_HUGE_HPTEFLAGS);
> +	return pmd;
> +}
> +
> +/*
> + * We can make it less convoluted than __set_pte_at, because
> + * we can ignore lot of hardware here, because this is only for
> + * MPSS
> + */
> +static inline void __set_pmd_at(struct mm_struct *mm, unsigned long addr,
> +				pmd_t *pmdp, pmd_t pmd, int percpu)
> +{
> +	/*
> +	 * There is nothing in hash page table now, so nothing to
> +	 * invalidate, set_pte_at is used for adding new entry.
> +	 * For updating we should use update_hugepage_pmd()
> +	 */
> +	*pmdp = pmd;
> +}
> +
> +/*
> + * set a new huge pmd. We should not be called for updating
> + * an existing pmd entry. That should go via pmd_hugepage_update.
> + */
> +void set_pmd_at(struct mm_struct *mm, unsigned long addr,
> +		pmd_t *pmdp, pmd_t pmd)
> +{
> +	/*
> +	 * Note: mm->context.id might not yet have been assigned as
> +	 * this context might not have been activated yet when this
> +	 * is called.
> +	 */
> +	pmd = set_pmd_filter(pmd, addr);
> +
> +	__set_pmd_at(mm, addr, pmdp, pmd, 0);
> +
> +}
> +
> +void pmdp_invalidate(struct vm_area_struct *vma, unsigned long address,
> +		     pmd_t *pmdp)
> +{
> +	pmd_hugepage_update(vma->vm_mm, address, pmdp, PMD_HUGE_PRESENT);
> +	flush_tlb_range(vma, address, address + HPAGE_PMD_SIZE);
> +}
> +
> +/*
> + * A linux hugepage PMD was changed and the corresponding hash table entry
> + * neesd to be flushed.
> + *
> + * The linux hugepage PMD now include the pmd entries followed by the address
> + * to the stashed pgtable_t. The stashed pgtable_t contains the hpte bits.
> + * [ secondary group | 3 bit hidx | valid ]. We use one byte per each HPTE entry.
> + * With 16MB hugepage and 64K HPTE we need 256 entries and with 4K HPTE we need
> + * 4096 entries. Both will fit in a 4K pgtable_t.
> + */
> +void hpte_need_hugepage_flush(struct mm_struct *mm, unsigned long addr,
> +			      pmd_t *pmdp)
> +{
> +	int ssize, i;
> +	unsigned long s_addr;
> +	unsigned int psize, valid;
> +	unsigned char *hpte_slot_array;
> +	unsigned long hidx, vpn, vsid, hash, shift, slot;
> +
> +	/*
> +	 * Flush all the hptes mapping this hugepage
> +	 */
> +	s_addr = addr & HUGE_PAGE_MASK;
> +	/*
> +	 * The hpte hindex are stored in the pgtable whose address is in the
> +	 * second half of the PMD
> +	 */
> +	hpte_slot_array = *(char **)(pmdp + PTRS_PER_PMD);
> +
> +	/* get the base page size */
> +	psize = get_slice_psize(mm, s_addr);
> +	shift = mmu_psize_defs[psize].shift;
> +
> +	for (i = 0; i < HUGE_PAGE_SIZE/(1ul << shift); i++) {

HUGE_PAGE_SIZE >> shift would be a simpler way to do this calculation.

> +		/*
> +		 * 8 bits per each hpte entries
> +		 * 000| [ secondary group (one bit) | hidx (3 bits) | valid bit]
> +		 */
> +		valid = hpte_slot_array[i] & 0x1;
> +		if (!valid)
> +			continue;
> +		hidx =  hpte_slot_array[i]  >> 1;
> +
> +		/* get the vpn */
> +		addr = s_addr + (i * (1ul << shift));
> +		if (!is_kernel_addr(addr)) {
> +			ssize = user_segment_size(addr);
> +			vsid = get_vsid(mm->context.id, addr, ssize);
> +			WARN_ON(vsid == 0);
> +		} else {
> +			vsid = get_kernel_vsid(addr, mmu_kernel_ssize);
> +			ssize = mmu_kernel_ssize;
> +		}
> +
> +		vpn = hpt_vpn(addr, vsid, ssize);
> +		hash = hpt_hash(vpn, shift, ssize);
> +		if (hidx & _PTEIDX_SECONDARY)
> +			hash = ~hash;
> +
> +		slot = (hash & htab_hash_mask) * HPTES_PER_GROUP;
> +		slot += hidx & _PTEIDX_GROUP_IX;
> +		ppc_md.hpte_invalidate(slot, vpn, psize, ssize, 0);
> +	}
> +}
> +
> +static pmd_t pmd_set_protbits(pmd_t pmd, pgprot_t pgprot)
> +{
> +	unsigned long pmd_prot = 0;
> +	unsigned long prot = pgprot_val(pgprot);
> +
> +	if (prot & _PAGE_PRESENT)
> +		pmd_prot |= PMD_HUGE_PRESENT;
> +	if (prot & _PAGE_USER)
> +		pmd_prot |= PMD_HUGE_USER;
> +	if (prot & _PAGE_FILE)
> +		pmd_prot |= PMD_HUGE_FILE;
> +	if (prot & _PAGE_EXEC)
> +		pmd_prot |= PMD_HUGE_EXEC;
> +	/*
> +	 * _PAGE_COHERENT should always be set
> +	 */
> +	VM_BUG_ON(!(prot & _PAGE_COHERENT));
> +
> +	if (prot & _PAGE_SAO)
> +		pmd_prot |= PMD_HUGE_SAO;

This looks dubious because _PAGE_SAO is not a single bit.  What
happens if WRITETHRU or NO_CACHE is set without the other?

> +	if (prot & _PAGE_DIRTY)
> +		pmd_prot |= PMD_HUGE_DIRTY;
> +	if (prot & _PAGE_ACCESSED)
> +		pmd_prot |= PMD_HUGE_ACCESSED;
> +	if (prot & _PAGE_RW)
> +		pmd_prot |= PMD_HUGE_RW;
> +
> +	pmd_val(pmd) |= pmd_prot;
> +	return pmd;
> +}
> +
> +pmd_t pfn_pmd(unsigned long pfn, pgprot_t pgprot)
> +{
> +	pmd_t pmd;
> +
> +	pmd_val(pmd) = pfn << PMD_HUGE_RPN_SHIFT;
> +	pmd_val(pmd) |= PMD_ISHUGE;
> +	pmd = pmd_set_protbits(pmd, pgprot);
> +	return pmd;
> +}
> +
> +pmd_t mk_pmd(struct page *page, pgprot_t pgprot)
> +{
> +	return pfn_pmd(page_to_pfn(page), pgprot);
> +}
> +
> +pmd_t pmd_modify(pmd_t pmd, pgprot_t newprot)
> +{
> +	/* FIXME!! why are this bits cleared ? */

You really need to answer this question...

> +	pmd_val(pmd) &= ~(PMD_HUGE_PRESENT |
> +			  PMD_HUGE_RW |
> +			  PMD_HUGE_EXEC);
> +	pmd = pmd_set_protbits(pmd, newprot);
> +	return pmd;
> +}
> +
> +/*
> + * This is called at the end of handling a user page fault, when the
> + * fault has been handled by updating a HUGE PMD entry in the linux page tables.
> + * We use it to preload an HPTE into the hash table corresponding to
> + * the updated linux HUGE PMD entry.
> + */
> +void update_mmu_cache_pmd(struct vm_area_struct *vma, unsigned long addr,
> +			  pmd_t *pmd)
> +{
> +	/* FIXME!!
> +	 * Will be done in a later patch
> +	 */

If you need another patch to make the code in this patch work, they
should probably be folded together.

> +}
> +
> +#endif /* CONFIG_TRANSPARENT_HUGEPAGE */
> diff --git a/arch/powerpc/mm/pgtable_64.c b/arch/powerpc/mm/pgtable_64.c
> index e79840b..6fc3488 100644
> --- a/arch/powerpc/mm/pgtable_64.c
> +++ b/arch/powerpc/mm/pgtable_64.c
> @@ -338,6 +338,19 @@ EXPORT_SYMBOL(iounmap);
>  EXPORT_SYMBOL(__iounmap);
>  EXPORT_SYMBOL(__iounmap_at);
>  
> +/*
> + * For hugepage we have pfn in the pmd, we use PMD_HUGE_RPN_SHIFT bits for flags
> + * For PTE page, we have a PTE_FRAG_SIZE (4K) aligned virtual address.
> + */
> +struct page *pmd_page(pmd_t pmd)
> +{
> +#ifdef CONFIG_TRANSPARENT_HUGEPAGE
> +	if (pmd_val(pmd) & PMD_ISHUGE)
> +		return pfn_to_page(pmd_pfn(pmd));
> +#endif
> +	return virt_to_page(pmd_page_vaddr(pmd));
> +}
> +
>  #ifdef CONFIG_PPC_64K_PAGES
>  /*
>   * we support 16 fragments per PTE page. This is limited by how many
> diff --git a/arch/powerpc/platforms/Kconfig.cputype b/arch/powerpc/platforms/Kconfig.cputype
> index 72afd28..90ee19b 100644
> --- a/arch/powerpc/platforms/Kconfig.cputype
> +++ b/arch/powerpc/platforms/Kconfig.cputype
> @@ -71,6 +71,7 @@ config PPC_BOOK3S_64
>  	select PPC_FPU
>  	select PPC_HAVE_PMU_SUPPORT
>  	select SYS_SUPPORTS_HUGETLBFS
> +	select HAVE_ARCH_TRANSPARENT_HUGEPAGE if PPC_64K_PAGES
>  
>  config PPC_BOOK3E_64
>  	bool "Embedded processors"

-- 
David Gibson			| I'll have my music baroque, and my code
david AT gibson.dropbear.id.au	| minimalist, thank you.  NOT _the_ _other_
				| _way_ _around_!
http://www.ozlabs.org/~dgibson

[-- Attachment #2: Digital signature --]
[-- Type: application/pgp-signature, Size: 198 bytes --]

^ permalink raw reply	[flat|nested] 73+ messages in thread

* Re: [PATCH -V5 12/25] powerpc: Return all the valid pte ecndoing in KVM_PPC_GET_SMMU_INFO ioctl
  2013-04-11  5:11     ` Aneesh Kumar K.V
@ 2013-04-11  5:57       ` David Gibson
  0 siblings, 0 replies; 73+ messages in thread
From: David Gibson @ 2013-04-11  5:57 UTC (permalink / raw)
  To: Aneesh Kumar K.V; +Cc: linuxppc-dev, paulus, linux-mm

[-- Attachment #1: Type: text/plain, Size: 993 bytes --]

On Thu, Apr 11, 2013 at 10:41:57AM +0530, Aneesh Kumar K.V wrote:
> David Gibson <dwg@au1.ibm.com> writes:
> 
> > On Thu, Apr 04, 2013 at 11:27:50AM +0530, Aneesh Kumar K.V wrote:
> >> From: "Aneesh Kumar K.V" <aneesh.kumar@linux.vnet.ibm.com>
> >
> > Surely this can't be correct until the KVM H_ENTER implementation is
> > updated to cope with the MPSS page sizes.
> 
> Why ? We are returning info regarding penc values for different
> combination. I would guess qemu to only use info related to base page
> size. Rest it can ignore right ?. Obviously i haven't tested this
> part. So let me know if I should drop this ?

The guest can't actually use those encodings unless the host's H_ENTER
allows it to, though, so this patch should be moved after extended
that KVM support.

-- 
David Gibson			| I'll have my music baroque, and my code
david AT gibson.dropbear.id.au	| minimalist, thank you.  NOT _the_ _other_
				| _way_ _around_!
http://www.ozlabs.org/~dgibson

[-- Attachment #2: Digital signature --]
[-- Type: application/pgp-signature, Size: 198 bytes --]

^ permalink raw reply	[flat|nested] 73+ messages in thread

* Re: [PATCH -V5 13/25] powerpc: Update tlbie/tlbiel as per ISA doc
  2013-04-11  5:20     ` Aneesh Kumar K.V
@ 2013-04-11  6:16       ` David Gibson
  2013-04-11  6:36         ` Aneesh Kumar K.V
  0 siblings, 1 reply; 73+ messages in thread
From: David Gibson @ 2013-04-11  6:16 UTC (permalink / raw)
  To: Aneesh Kumar K.V; +Cc: linuxppc-dev, paulus, linux-mm

[-- Attachment #1: Type: text/plain, Size: 3495 bytes --]

On Thu, Apr 11, 2013 at 10:50:12AM +0530, Aneesh Kumar K.V wrote:
> David Gibson <dwg@au1.ibm.com> writes:
> 
> > On Thu, Apr 04, 2013 at 11:27:51AM +0530, Aneesh Kumar K.V wrote:
> >> From: "Aneesh Kumar K.V" <aneesh.kumar@linux.vnet.ibm.com>
> >> 
> >> This make sure we handle multiple page size segment correctly.
> >
> > This needs a much more detailed message.  In what way was the existing
> > code not matching the ISA documentation?  What consequences did that
> > have?
> 
> Mostly to make sure we use the right penc values in tlbie. I did test
> these changes on PowerNV. 

A vague description like this is not adequate.  Your commit message
needs to explain what was wrong with the existing behaviour.

> >> Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
> >> ---
> >>  arch/powerpc/mm/hash_native_64.c |   30 ++++++++++++++++++++++++++++--
> >>  1 file changed, 28 insertions(+), 2 deletions(-)
> >> 
> >> diff --git a/arch/powerpc/mm/hash_native_64.c b/arch/powerpc/mm/hash_native_64.c
> >> index b461b2d..ac84fa6 100644
> >> --- a/arch/powerpc/mm/hash_native_64.c
> >> +++ b/arch/powerpc/mm/hash_native_64.c
> >> @@ -61,7 +61,10 @@ static inline void __tlbie(unsigned long vpn, int psize, int apsize, int ssize)
> >>  
> >>  	switch (psize) {
> >>  	case MMU_PAGE_4K:
> >> +		/* clear out bits after (52) [0....52.....63] */
> >> +		va &= ~((1ul << (64 - 52)) - 1);
> >>  		va |= ssize << 8;
> >> +		va |= mmu_psize_defs[apsize].sllp << 6;
> >
> > sllp is the per-segment encoding, so it sure must be looked up via
> > psize, not apsize.
> 
> as per ISA doc, for base page size 4K, RB[56:58] must be set to
> SLB[L|LP] encoded for the page size corresponding to the actual page
> size specified by the PTE that was used to create the the TLB entry to
> be invalidated.

Ok, I see.  Wow, our architecture is even more convoluted than I
thought.  This could really do with a comment, because this is a very
surprising aspect of the architecture.

> >
> >>  		asm volatile(ASM_FTR_IFCLR("tlbie %0,0", PPC_TLBIE(%1,%0), %2)
> >>  			     : : "r" (va), "r"(0), "i" (CPU_FTR_ARCH_206)
> >>  			     : "memory");
> >> @@ -69,9 +72,19 @@ static inline void __tlbie(unsigned long vpn, int psize, int apsize, int ssize)
> >>  	default:
> >>  		/* We need 14 to 14 + i bits of va */
> >>  		penc = mmu_psize_defs[psize].penc[apsize];
> >> -		va &= ~((1ul << mmu_psize_defs[psize].shift) - 1);
> >> +		va &= ~((1ul << mmu_psize_defs[apsize].shift) - 1);
> >>  		va |= penc << 12;
> >>  		va |= ssize << 8;
> >> +		/* Add AVAL part */
> >> +		if (psize != apsize) {
> >> +			/*
> >> +			 * MPSS, 64K base page size and 16MB parge page size
> >> +			 * We don't need all the bits, but this seems to work.
> >> +			 * vpn cover upto 65 bits of va. (0...65) and we need
> >> +			 * 58..64 bits of va.
> >
> > "seems to work" is not a comment I like to see in core MMU code...
> >
> 
> As per ISA spec, the "other bits" in RB[56:62] must be ignored by the
> processor. Hence I didn't bother to do zero it out. Since we only
> support one MPSS combination, we could easily zero out using 0xf0. 

Then update the comment to clearly explain why what you're doing is
correct, not just say it "seems to work".

-- 
David Gibson			| I'll have my music baroque, and my code
david AT gibson.dropbear.id.au	| minimalist, thank you.  NOT _the_ _other_
				| _way_ _around_!
http://www.ozlabs.org/~dgibson

[-- Attachment #2: Digital signature --]
[-- Type: application/pgp-signature, Size: 198 bytes --]

^ permalink raw reply	[flat|nested] 73+ messages in thread

* Re: [PATCH -V5 18/25] powerpc/THP: Double the PMD table size for THP
  2013-04-04  5:57 ` [PATCH -V5 18/25] powerpc/THP: Double the PMD table size for THP Aneesh Kumar K.V
@ 2013-04-11  6:18   ` David Gibson
  0 siblings, 0 replies; 73+ messages in thread
From: David Gibson @ 2013-04-11  6:18 UTC (permalink / raw)
  To: Aneesh Kumar K.V; +Cc: paulus, linuxppc-dev, linux-mm

[-- Attachment #1: Type: text/plain, Size: 840 bytes --]

On Thu, Apr 04, 2013 at 11:27:56AM +0530, Aneesh Kumar K.V wrote:
> From: "Aneesh Kumar K.V" <aneesh.kumar@linux.vnet.ibm.com>
> 
> THP code does PTE page allocation along with large page request and deposit them
> for later use. This is to ensure that we won't have any failures when we split
> hugepages to regular pages.
> 
> On powerpc we want to use the deposited PTE page for storing hash pte slot and
> secondary bit information for the HPTEs. We use the second half
> of the pmd table to save the deposted PTE page.

The previous patch accesses data in that second half of the PMD table,
so this patch should go before it.

-- 
David Gibson			| I'll have my music baroque, and my code
david AT gibson.dropbear.id.au	| minimalist, thank you.  NOT _the_ _other_
				| _way_ _around_!
http://www.ozlabs.org/~dgibson

[-- Attachment #2: Digital signature --]
[-- Type: application/pgp-signature, Size: 198 bytes --]

^ permalink raw reply	[flat|nested] 73+ messages in thread

* Re: [PATCH -V5 13/25] powerpc: Update tlbie/tlbiel as per ISA doc
  2013-04-11  6:16       ` David Gibson
@ 2013-04-11  6:36         ` Aneesh Kumar K.V
  0 siblings, 0 replies; 73+ messages in thread
From: Aneesh Kumar K.V @ 2013-04-11  6:36 UTC (permalink / raw)
  To: David Gibson; +Cc: linuxppc-dev, paulus, linux-mm

David Gibson <dwg@au1.ibm.com> writes:

> On Thu, Apr 11, 2013 at 10:50:12AM +0530, Aneesh Kumar K.V wrote:
>> David Gibson <dwg@au1.ibm.com> writes:
>> 
>> > On Thu, Apr 04, 2013 at 11:27:51AM +0530, Aneesh Kumar K.V wrote:
>> >> From: "Aneesh Kumar K.V" <aneesh.kumar@linux.vnet.ibm.com>
>> >> 
>> >> This make sure we handle multiple page size segment correctly.
>> >
>> > This needs a much more detailed message.  In what way was the existing
>> > code not matching the ISA documentation?  What consequences did that
>> > have?
>> 
>> Mostly to make sure we use the right penc values in tlbie. I did test
>> these changes on PowerNV. 
>
> A vague description like this is not adequate.  Your commit message
> needs to explain what was wrong with the existing behaviour.
>
>> >> Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
>> >> ---
>> >>  arch/powerpc/mm/hash_native_64.c |   30 ++++++++++++++++++++++++++++--
>> >>  1 file changed, 28 insertions(+), 2 deletions(-)
>> >> 
>> >> diff --git a/arch/powerpc/mm/hash_native_64.c b/arch/powerpc/mm/hash_native_64.c
>> >> index b461b2d..ac84fa6 100644
>> >> --- a/arch/powerpc/mm/hash_native_64.c
>> >> +++ b/arch/powerpc/mm/hash_native_64.c
>> >> @@ -61,7 +61,10 @@ static inline void __tlbie(unsigned long vpn, int psize, int apsize, int ssize)
>> >>  
>> >>  	switch (psize) {
>> >>  	case MMU_PAGE_4K:
>> >> +		/* clear out bits after (52) [0....52.....63] */
>> >> +		va &= ~((1ul << (64 - 52)) - 1);
>> >>  		va |= ssize << 8;
>> >> +		va |= mmu_psize_defs[apsize].sllp << 6;
>> >
>> > sllp is the per-segment encoding, so it sure must be looked up via
>> > psize, not apsize.
>> 
>> as per ISA doc, for base page size 4K, RB[56:58] must be set to
>> SLB[L|LP] encoded for the page size corresponding to the actual page
>> size specified by the PTE that was used to create the the TLB entry to
>> be invalidated.
>
> Ok, I see.  Wow, our architecture is even more convoluted than I
> thought.  This could really do with a comment, because this is a very
> surprising aspect of the architecture.
>
>> >
>> >>  		asm volatile(ASM_FTR_IFCLR("tlbie %0,0", PPC_TLBIE(%1,%0), %2)
>> >>  			     : : "r" (va), "r"(0), "i" (CPU_FTR_ARCH_206)
>> >>  			     : "memory");
>> >> @@ -69,9 +72,19 @@ static inline void __tlbie(unsigned long vpn, int psize, int apsize, int ssize)
>> >>  	default:
>> >>  		/* We need 14 to 14 + i bits of va */
>> >>  		penc = mmu_psize_defs[psize].penc[apsize];
>> >> -		va &= ~((1ul << mmu_psize_defs[psize].shift) - 1);
>> >> +		va &= ~((1ul << mmu_psize_defs[apsize].shift) - 1);
>> >>  		va |= penc << 12;
>> >>  		va |= ssize << 8;
>> >> +		/* Add AVAL part */
>> >> +		if (psize != apsize) {
>> >> +			/*
>> >> +			 * MPSS, 64K base page size and 16MB parge page size
>> >> +			 * We don't need all the bits, but this seems to work.
>> >> +			 * vpn cover upto 65 bits of va. (0...65) and we need
>> >> +			 * 58..64 bits of va.
>> >
>> > "seems to work" is not a comment I like to see in core MMU code...
>> >
>> 
>> As per ISA spec, the "other bits" in RB[56:62] must be ignored by the
>> processor. Hence I didn't bother to do zero it out. Since we only
>> support one MPSS combination, we could easily zero out using 0xf0. 
>
> Then update the comment to clearly explain why what you're doing is
> correct, not just say it "seems to work".

Will do.

-aneesh

^ permalink raw reply	[flat|nested] 73+ messages in thread

* Re: [PATCH -V5 04/25] powerpc: Reduce the PTE_INDEX_SIZE
  2013-04-04  5:57 ` [PATCH -V5 04/25] powerpc: Reduce the PTE_INDEX_SIZE Aneesh Kumar K.V
@ 2013-04-11  7:10   ` David Gibson
  0 siblings, 0 replies; 73+ messages in thread
From: David Gibson @ 2013-04-11  7:10 UTC (permalink / raw)
  To: Aneesh Kumar K.V; +Cc: paulus, linuxppc-dev, linux-mm

[-- Attachment #1: Type: text/plain, Size: 2564 bytes --]

On Thu, Apr 04, 2013 at 11:27:42AM +0530, Aneesh Kumar K.V wrote:
> From: "Aneesh Kumar K.V" <aneesh.kumar@linux.vnet.ibm.com>
> 
> This make one PMD cover 16MB range. That helps in easier implementation of THP
> on power. THP core code make use of one pmd entry to track the hugepage and
> the range mapped by a single pmd entry should be equal to the hugepage size
> supported by the hardware.
> 
> Acked-by: Paul Mackerras <paulus@samba.org>
> Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
> ---
>  arch/powerpc/include/asm/pgtable-ppc64-64k.h |    4 ++--
>  1 file changed, 2 insertions(+), 2 deletions(-)
> 
> diff --git a/arch/powerpc/include/asm/pgtable-ppc64-64k.h b/arch/powerpc/include/asm/pgtable-ppc64-64k.h
> index be4e287..3c529b4 100644
> --- a/arch/powerpc/include/asm/pgtable-ppc64-64k.h
> +++ b/arch/powerpc/include/asm/pgtable-ppc64-64k.h
> @@ -4,10 +4,10 @@
>  #include <asm-generic/pgtable-nopud.h>
>  
>  
> -#define PTE_INDEX_SIZE  12
> +#define PTE_INDEX_SIZE  8
>  #define PMD_INDEX_SIZE  12
>  #define PUD_INDEX_SIZE	0
> -#define PGD_INDEX_SIZE  6
> +#define PGD_INDEX_SIZE  10
>  
>  #ifndef __ASSEMBLY__
>  #define PTE_TABLE_SIZE	(sizeof(real_pte_t) << PTE_INDEX_SIZE)

Actually, I've realised there's a much more serious problem here.
This patch as is will break existing hugpage support.  With the
previous numbers we had pagetable levels covering 256M and 1TB.  That
meant that at whichever level we split off a hugepd, it would line up
with the slice/segment boundaries.  Now it won't, and that means that
(explicitly) mapping hugepages and normal pages with correctly
constructed alignments will lead to the normal page fault paths
attempting to walk down hugepds or vice versa which will cause
crashes.

In fact.. with the new boundaries, we will attempt to put explicit 16M
hugepages in a hugepd of 4096 entries covering a total of 64G.  Which
means any attempt to use explicit hugepages in a 32-bit process will
blow up horribly.

The obvious solution is to make explicit hugepages also use your new
hugepage encoding, as a PMD entry pointing directly to the page data.
That's also a good idea, to avoid yet more variants on the pagetable
encoding.  But this conversion of the explicit hugepage code really
needs to be done before attempting to implement THP.

-- 
David Gibson			| I'll have my music baroque, and my code
david AT gibson.dropbear.id.au	| minimalist, thank you.  NOT _the_ _other_
				| _way_ _around_!
http://www.ozlabs.org/~dgibson

[-- Attachment #2: Digital signature --]
[-- Type: application/pgp-signature, Size: 198 bytes --]

^ permalink raw reply	[flat|nested] 73+ messages in thread

* Re: [PATCH -V5 17/25] powerpc/THP: Implement transparent hugepages for ppc64
  2013-04-11  5:38   ` David Gibson
@ 2013-04-11  7:40     ` Aneesh Kumar K.V
  2013-04-12  0:51       ` David Gibson
  0 siblings, 1 reply; 73+ messages in thread
From: Aneesh Kumar K.V @ 2013-04-11  7:40 UTC (permalink / raw)
  To: David Gibson; +Cc: paulus, linuxppc-dev, linux-mm

David Gibson <dwg@au1.ibm.com> writes:

> On Thu, Apr 04, 2013 at 11:27:55AM +0530, Aneesh Kumar K.V wrote:
>> From: "Aneesh Kumar K.V" <aneesh.kumar@linux.vnet.ibm.com>
>> 
>> We now have pmd entries covering to 16MB range. To implement THP on powerpc,
>> we double the size of PMD. The second half is used to deposit the pgtable (PTE page).
>> We also use the depoisted PTE page for tracking the HPTE information. The information
>> include [ secondary group | 3 bit hidx | valid ]. We use one byte per each HPTE entry.
>> With 16MB hugepage and 64K HPTE we need 256 entries and with 4K HPTE we need
>> 4096 entries. Both will fit in a 4K PTE page.
>> 
>> Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
>> ---
>>  arch/powerpc/include/asm/page.h              |    2 +-
>>  arch/powerpc/include/asm/pgtable-ppc64-64k.h |    3 +-
>>  arch/powerpc/include/asm/pgtable-ppc64.h     |    2 +-
>>  arch/powerpc/include/asm/pgtable.h           |  240 ++++++++++++++++++++
>>  arch/powerpc/mm/pgtable.c                    |  314 ++++++++++++++++++++++++++
>>  arch/powerpc/mm/pgtable_64.c                 |   13 ++
>>  arch/powerpc/platforms/Kconfig.cputype       |    1 +
>>  7 files changed, 572 insertions(+), 3 deletions(-)
>> 
>> diff --git a/arch/powerpc/include/asm/page.h b/arch/powerpc/include/asm/page.h
>> index 38e7ff6..b927447 100644
>> --- a/arch/powerpc/include/asm/page.h
>> +++ b/arch/powerpc/include/asm/page.h
>> @@ -40,7 +40,7 @@
>>  #ifdef CONFIG_HUGETLB_PAGE
>>  extern unsigned int HPAGE_SHIFT;
>>  #else
>> -#define HPAGE_SHIFT PAGE_SHIFT
>> +#define HPAGE_SHIFT PMD_SHIFT
>
> That looks like it could break everything except the 64k page size
> 64-bit base.

How about 

diff --git a/arch/powerpc/include/asm/page.h b/arch/powerpc/include/asm/page.h
index b927447..fadb1ce 100644
--- a/arch/powerpc/include/asm/page.h
+++ b/arch/powerpc/include/asm/page.h
@@ -37,10 +37,19 @@
 #define PAGE_SIZE		(ASM_CONST(1) << PAGE_SHIFT)
 
 #ifndef __ASSEMBLY__
-#ifdef CONFIG_HUGETLB_PAGE
+/*
+ * With hugetlbfs enabled we allow the HPAGE_SHIFT to run time
+ * configurable. But we enable THP only with 16MB hugepage.
+ * With only THP configured, we force hugepage size to 16MB.
+ * This should ensure that all subarchs that doesn't support
+ * THP continue to work fine with HPAGE_SHIFT usage.
+ */
+#if defined(CONFIG_HUGETLB_PAGE)
 extern unsigned int HPAGE_SHIFT;
-#else
+#elif defined(CONFIG_TRANSPARENT_HUGEPAGE)
 #define HPAGE_SHIFT PMD_SHIFT
+#else
+#define HPAGE_SHIFT PAGE_SHIFT
 #endif
 #define HPAGE_SIZE		((1UL) << HPAGE_SHIFT)
 #define HPAGE_MASK		(~(HPAGE_SIZE - 1))


>
>>  #endif
>>  #define HPAGE_SIZE		((1UL) << HPAGE_SHIFT)
>>  #define HPAGE_MASK		(~(HPAGE_SIZE - 1))
>> diff --git a/arch/powerpc/include/asm/pgtable-ppc64-64k.h b/arch/powerpc/include/asm/pgtable-ppc64-64k.h
>> index 3c529b4..5c5541a 100644
>> --- a/arch/powerpc/include/asm/pgtable-ppc64-64k.h
>> +++ b/arch/powerpc/include/asm/pgtable-ppc64-64k.h
>> @@ -33,7 +33,8 @@
>>  #define PGDIR_MASK	(~(PGDIR_SIZE-1))
>>  
>>  /* Bits to mask out from a PMD to get to the PTE page */
>> -#define PMD_MASKED_BITS		0x1ff
>> +/* PMDs point to PTE table fragments which are 4K aligned.  */
>> +#define PMD_MASKED_BITS		0xfff
>>  /* Bits to mask out from a PGD/PUD to get to the PMD page */
>>  #define PUD_MASKED_BITS		0x1ff
>>  
>> diff --git a/arch/powerpc/include/asm/pgtable-ppc64.h b/arch/powerpc/include/asm/pgtable-ppc64.h
>> index 0182c20..c0747c7 100644
>> --- a/arch/powerpc/include/asm/pgtable-ppc64.h
>> +++ b/arch/powerpc/include/asm/pgtable-ppc64.h
>> @@ -150,7 +150,7 @@
>>  #define	pmd_present(pmd)	(pmd_val(pmd) != 0)
>>  #define	pmd_clear(pmdp)		(pmd_val(*(pmdp)) = 0)
>>  #define pmd_page_vaddr(pmd)	(pmd_val(pmd) & ~PMD_MASKED_BITS)
>> -#define pmd_page(pmd)		virt_to_page(pmd_page_vaddr(pmd))
>> +extern struct page *pmd_page(pmd_t pmd);
>
> Does unconditionally changing pmd_page() from a macro to an external
> function have a noticeable performance impact?

I did measure performance impact with THP enabled. Didn't do a micro
benchmark to measure impact of this change. Any suggestion what test
results you would like to see ?

>
>>  #define pud_set(pudp, pudval)	(pud_val(*(pudp)) = (pudval))
>>  #define pud_none(pud)		(!pud_val(pud))
>> diff --git a/arch/powerpc/include/asm/pgtable.h b/arch/powerpc/include/asm/pgtable.h
>> index 4b52726..9fbe2a7 100644
>> --- a/arch/powerpc/include/asm/pgtable.h
>> +++ b/arch/powerpc/include/asm/pgtable.h
>> @@ -23,7 +23,247 @@ struct mm_struct;
>>   */
>>  #define PTE_PAGE_HIDX_OFFSET (PTRS_PER_PTE * 8)
>>  
>> +/* A large part matches with pte bits */
>> +#define PMD_HUGE_PRESENT	0x001 /* software: pte contains a translation */
>> +#define PMD_HUGE_USER		0x002 /* matches one of the PP bits */
>> +#define PMD_HUGE_FILE		0x002 /* (!present only) software: pte holds file offset */
>
> Can we actually get hugepage PMDs that are in this state?

Currently we can't, but we would start supporting THP for page cache later.

>
>> +#define PMD_HUGE_EXEC		0x004 /* No execute on POWER4 and newer (we invert) */
>> +#define PMD_HUGE_SPLITTING	0x008
>> +#define PMD_HUGE_SAO		0x010 /* strong Access order */
>> +#define PMD_HUGE_HASHPTE	0x020
>> +#define PMD_ISHUGE		0x040
>> +#define PMD_HUGE_DIRTY		0x080 /* C: page changed */
>> +#define PMD_HUGE_ACCESSED	0x100 /* R: page referenced */
>> +#define PMD_HUGE_RW		0x200 /* software: user write access allowed */
>> +#define PMD_HUGE_BUSY		0x800 /* software: PTE & hash are busy */
>> +#define PMD_HUGE_HPTEFLAGS	(PMD_HUGE_BUSY | PMD_HUGE_HASHPTE)
>> +/*
>> + * We keep both the pmd and pte rpn shift same, eventhough we use only
>> + * lower 12 bits for hugepage flags at pmd level
>
> Why?
>

I was trying to keep PTE level access code to as much similar to huge
PMD level code. hence retained the same RPN SHIFT, since that would work.

>> + */
>> +#define PMD_HUGE_RPN_SHIFT	PTE_RPN_SHIFT
>> +#define HUGE_PAGE_SIZE		(ASM_CONST(1) << 24)
>> +#define HUGE_PAGE_MASK		(~(HUGE_PAGE_SIZE - 1))
>> +

.....

>> diff --git a/arch/powerpc/mm/pgtable.c b/arch/powerpc/mm/pgtable.c
>> index 214130a..9f33780 100644
>> --- a/arch/powerpc/mm/pgtable.c
>> +++ b/arch/powerpc/mm/pgtable.c
>> @@ -31,6 +31,7 @@
>>  #include <asm/pgalloc.h>
>>  #include <asm/tlbflush.h>
>>  #include <asm/tlb.h>
>> +#include <asm/machdep.h>
>>  
>>  #include "mmu_decl.h"
>>  
>> @@ -240,3 +241,316 @@ void assert_pte_locked(struct mm_struct *mm, unsigned long addr)
>>  }
>>  #endif /* CONFIG_DEBUG_VM */
>>  
>> +
>> +#ifdef CONFIG_TRANSPARENT_HUGEPAGE
>> +static pmd_t set_hugepage_access_flags_filter(pmd_t pmd,
>> +					      struct vm_area_struct *vma,
>> +					      int dirty)
>> +{
>> +	return pmd;
>> +}
>
> I don't really see why you're splitting out these trivial ...filter()
> functions, rather than just doing it inline in the (single) caller.


No specific reason other than keeping the hugepmd related code similar
to PTE related access code. This should enable us to easily enable THP
support for subarchs.

>
>> +
>> +/*
>> + * This is called when relaxing access to a hugepage. It's also called in the page
>> + * fault path when we don't hit any of the major fault cases, ie, a minor
>> + * update of _PAGE_ACCESSED, _PAGE_DIRTY, etc... The generic code will have
>> + * handled those two for us, we additionally deal with missing execute
>> + * permission here on some processors
>> + */
>> +int pmdp_set_access_flags(struct vm_area_struct *vma, unsigned long address,
>> +			  pmd_t *pmdp, pmd_t entry, int dirty)
>> +{
>> +	int changed;
>> +	entry = set_hugepage_access_flags_filter(entry, vma, dirty);
>> +	changed = !pmd_same(*(pmdp), entry);
>> +	if (changed) {
>> +		__pmdp_set_access_flags(pmdp, entry);
>> +		/*
>> +		 * Since we are not supporting SW TLB systems, we don't
>> +		 * have any thing similar to flush_tlb_page_nohash()
>> +		 */
>> +	}
>> +	return changed;
>> +}
>> +
>> +int pmdp_test_and_clear_young(struct vm_area_struct *vma,
>> +			      unsigned long address, pmd_t *pmdp)
>> +{
>> +	return __pmdp_test_and_clear_young(vma->vm_mm, address, pmdp);
>> +}
>> +
>> +/*
>> + * We currently remove entries from the hashtable regardless of whether
>> + * the entry was young or dirty. The generic routines only flush if the
>> + * entry was young or dirty which is not good enough.
>> + *
>> + * We should be more intelligent about this but for the moment we override
>> + * these functions and force a tlb flush unconditionally
>> + */
>> +int pmdp_clear_flush_young(struct vm_area_struct *vma,
>> +				  unsigned long address, pmd_t *pmdp)
>> +{
>> +	return __pmdp_test_and_clear_young(vma->vm_mm, address, pmdp);
>> +}
>> +
>> +/*
>> + * We mark the pmd splitting and invalidate all the hpte
>> + * entries for this hugepage.
>> + */
>> +void pmdp_splitting_flush(struct vm_area_struct *vma,
>> +			  unsigned long address, pmd_t *pmdp)
>> +{
>> +	unsigned long old, tmp;
>> +
>> +	VM_BUG_ON(address & ~HPAGE_PMD_MASK);
>> +#ifdef PTE_ATOMIC_UPDATES
>> +
>> +	__asm__ __volatile__(
>> +	"1:	ldarx	%0,0,%3\n\
>> +		andi.	%1,%0,%6\n\
>> +		bne-	1b \n\
>> +		ori	%1,%0,%4 \n\
>> +		stdcx.	%1,0,%3 \n\
>> +		bne-	1b"
>> +	: "=&r" (old), "=&r" (tmp), "=m" (*pmdp)
>> +	: "r" (pmdp), "i" (PMD_HUGE_SPLITTING), "m" (*pmdp), "i" (PMD_HUGE_BUSY)
>> +	: "cc" );
>> +#else
>> +	old = pmd_val(*pmdp);
>> +	*pmdp = __pmd(old | PMD_HUGE_SPLITTING);
>> +#endif
>> +	/*
>> +	 * If we didn't had the splitting flag set, go and flush the
>> +	 * HPTE entries and serialize against gup fast.
>> +	 */
>> +	if (!(old & PMD_HUGE_SPLITTING)) {
>> +#ifdef CONFIG_PPC_STD_MMU_64
>> +		/* We need to flush the hpte */
>> +		if (old & PMD_HUGE_HASHPTE)
>> +			hpte_need_hugepage_flush(vma->vm_mm, address, pmdp);
>> +#endif
>> +		/* need tlb flush only to serialize against gup-fast */
>> +		flush_tlb_range(vma, address, address + HPAGE_PMD_SIZE);
>> +	}
>> +}
>> +
>> +/*
>> + * We want to put the pgtable in pmd and use pgtable for tracking
>> + * the base page size hptes
>> + */
>> +void pgtable_trans_huge_deposit(struct mm_struct *mm, pmd_t *pmdp,
>> +				pgtable_t pgtable)
>> +{
>> +	unsigned long *pgtable_slot;
>> +	assert_spin_locked(&mm->page_table_lock);
>> +	/*
>> +	 * we store the pgtable in the second half of PMD
>> +	 */
>> +	pgtable_slot = pmdp + PTRS_PER_PMD;
>> +	*pgtable_slot = (unsigned long)pgtable;
>> +}
>> +
>> +#define PTE_FRAG_SIZE (2 * PTRS_PER_PTE * sizeof(pte_t))
>
> Another example of why this define should be moved to a header.

will drop. 

>
>> +pgtable_t pgtable_trans_huge_withdraw(struct mm_struct *mm, pmd_t *pmdp)
>> +{
>> +	pgtable_t pgtable;
>> +	unsigned long *pgtable_slot;
>> +
>> +	assert_spin_locked(&mm->page_table_lock);
>> +	pgtable_slot = pmdp + PTRS_PER_PMD;
>> +	pgtable = (pgtable_t) *pgtable_slot;
>> +	/*
>> +	 * We store HPTE information in the deposited PTE fragment.
>> +	 * zero out the content on withdraw.
>> +	 */
>> +	memset(pgtable, 0, PTE_FRAG_SIZE);
>> +	return pgtable;
>> +}
>> +
>> +/*
>> + * Since we are looking at latest ppc64, we don't need to worry about
>> + * i/d cache coherency on exec fault
>> + */
>> +static pmd_t set_pmd_filter(pmd_t pmd, unsigned long addr)
>> +{
>> +	pmd = __pmd(pmd_val(pmd) & ~PMD_HUGE_HPTEFLAGS);
>> +	return pmd;
>> +}
>> +
>> +/*
>> + * We can make it less convoluted than __set_pte_at, because
>> + * we can ignore lot of hardware here, because this is only for
>> + * MPSS
>> + */
>> +static inline void __set_pmd_at(struct mm_struct *mm, unsigned long addr,
>> +				pmd_t *pmdp, pmd_t pmd, int percpu)
>> +{
>> +	/*
>> +	 * There is nothing in hash page table now, so nothing to
>> +	 * invalidate, set_pte_at is used for adding new entry.
>> +	 * For updating we should use update_hugepage_pmd()
>> +	 */
>> +	*pmdp = pmd;
>> +}
>> +
>> +/*
>> + * set a new huge pmd. We should not be called for updating
>> + * an existing pmd entry. That should go via pmd_hugepage_update.
>> + */
>> +void set_pmd_at(struct mm_struct *mm, unsigned long addr,
>> +		pmd_t *pmdp, pmd_t pmd)
>> +{
>> +	/*
>> +	 * Note: mm->context.id might not yet have been assigned as
>> +	 * this context might not have been activated yet when this
>> +	 * is called.
>> +	 */
>> +	pmd = set_pmd_filter(pmd, addr);
>> +
>> +	__set_pmd_at(mm, addr, pmdp, pmd, 0);
>> +
>> +}
>> +
>> +void pmdp_invalidate(struct vm_area_struct *vma, unsigned long address,
>> +		     pmd_t *pmdp)
>> +{
>> +	pmd_hugepage_update(vma->vm_mm, address, pmdp, PMD_HUGE_PRESENT);
>> +	flush_tlb_range(vma, address, address + HPAGE_PMD_SIZE);
>> +}
>> +
>> +/*
>> + * A linux hugepage PMD was changed and the corresponding hash table entry
>> + * neesd to be flushed.
>> + *
>> + * The linux hugepage PMD now include the pmd entries followed by the address
>> + * to the stashed pgtable_t. The stashed pgtable_t contains the hpte bits.
>> + * [ secondary group | 3 bit hidx | valid ]. We use one byte per each HPTE entry.
>> + * With 16MB hugepage and 64K HPTE we need 256 entries and with 4K HPTE we need
>> + * 4096 entries. Both will fit in a 4K pgtable_t.
>> + */
>> +void hpte_need_hugepage_flush(struct mm_struct *mm, unsigned long addr,
>> +			      pmd_t *pmdp)
>> +{
>> +	int ssize, i;
>> +	unsigned long s_addr;
>> +	unsigned int psize, valid;
>> +	unsigned char *hpte_slot_array;
>> +	unsigned long hidx, vpn, vsid, hash, shift, slot;
>> +
>> +	/*
>> +	 * Flush all the hptes mapping this hugepage
>> +	 */
>> +	s_addr = addr & HUGE_PAGE_MASK;
>> +	/*
>> +	 * The hpte hindex are stored in the pgtable whose address is in the
>> +	 * second half of the PMD
>> +	 */
>> +	hpte_slot_array = *(char **)(pmdp + PTRS_PER_PMD);
>> +
>> +	/* get the base page size */
>> +	psize = get_slice_psize(mm, s_addr);
>> +	shift = mmu_psize_defs[psize].shift;
>> +
>> +	for (i = 0; i < HUGE_PAGE_SIZE/(1ul << shift); i++) {
>
> HUGE_PAGE_SIZE >> shift would be a simpler way to do this calculation.

Wonder how I missed that


>
>> +		/*
>> +		 * 8 bits per each hpte entries
>> +		 * 000| [ secondary group (one bit) | hidx (3 bits) | valid bit]
>> +		 */
>> +		valid = hpte_slot_array[i] & 0x1;
>> +		if (!valid)
>> +			continue;
>> +		hidx =  hpte_slot_array[i]  >> 1;
>> +
>> +		/* get the vpn */
>> +		addr = s_addr + (i * (1ul << shift));
>> +		if (!is_kernel_addr(addr)) {
>> +			ssize = user_segment_size(addr);
>> +			vsid = get_vsid(mm->context.id, addr, ssize);
>> +			WARN_ON(vsid == 0);
>> +		} else {
>> +			vsid = get_kernel_vsid(addr, mmu_kernel_ssize);
>> +			ssize = mmu_kernel_ssize;
>> +		}
>> +
>> +		vpn = hpt_vpn(addr, vsid, ssize);
>> +		hash = hpt_hash(vpn, shift, ssize);
>> +		if (hidx & _PTEIDX_SECONDARY)
>> +			hash = ~hash;
>> +
>> +		slot = (hash & htab_hash_mask) * HPTES_PER_GROUP;
>> +		slot += hidx & _PTEIDX_GROUP_IX;
>> +		ppc_md.hpte_invalidate(slot, vpn, psize, ssize, 0);
>> +	}
>> +}
>> +
>> +static pmd_t pmd_set_protbits(pmd_t pmd, pgprot_t pgprot)
>> +{
>> +	unsigned long pmd_prot = 0;
>> +	unsigned long prot = pgprot_val(pgprot);
>> +
>> +	if (prot & _PAGE_PRESENT)
>> +		pmd_prot |= PMD_HUGE_PRESENT;
>> +	if (prot & _PAGE_USER)
>> +		pmd_prot |= PMD_HUGE_USER;
>> +	if (prot & _PAGE_FILE)
>> +		pmd_prot |= PMD_HUGE_FILE;
>> +	if (prot & _PAGE_EXEC)
>> +		pmd_prot |= PMD_HUGE_EXEC;
>> +	/*
>> +	 * _PAGE_COHERENT should always be set
>> +	 */
>> +	VM_BUG_ON(!(prot & _PAGE_COHERENT));
>> +
>> +	if (prot & _PAGE_SAO)
>> +		pmd_prot |= PMD_HUGE_SAO;
>
> This looks dubious because _PAGE_SAO is not a single bit.  What
> happens if WRITETHRU or NO_CACHE is set without the other?

yes that should be 
    if ((prot & _PAGE_SAO) == _PAGE_SAO )


>
>> +	if (prot & _PAGE_DIRTY)
>> +		pmd_prot |= PMD_HUGE_DIRTY;
>> +	if (prot & _PAGE_ACCESSED)
>> +		pmd_prot |= PMD_HUGE_ACCESSED;
>> +	if (prot & _PAGE_RW)
>> +		pmd_prot |= PMD_HUGE_RW;
>> +
>> +	pmd_val(pmd) |= pmd_prot;
>> +	return pmd;
>> +}
>> +
>> +pmd_t pfn_pmd(unsigned long pfn, pgprot_t pgprot)
>> +{
>> +	pmd_t pmd;
>> +
>> +	pmd_val(pmd) = pfn << PMD_HUGE_RPN_SHIFT;
>> +	pmd_val(pmd) |= PMD_ISHUGE;
>> +	pmd = pmd_set_protbits(pmd, pgprot);
>> +	return pmd;
>> +}
>> +
>> +pmd_t mk_pmd(struct page *page, pgprot_t pgprot)
>> +{
>> +	return pfn_pmd(page_to_pfn(page), pgprot);
>> +}
>> +
>> +pmd_t pmd_modify(pmd_t pmd, pgprot_t newprot)
>> +{
>> +	/* FIXME!! why are this bits cleared ? */
>
> You really need to answer this question...

will check. 

>
>> +	pmd_val(pmd) &= ~(PMD_HUGE_PRESENT |
>> +			  PMD_HUGE_RW |
>> +			  PMD_HUGE_EXEC);
>> +	pmd = pmd_set_protbits(pmd, newprot);
>> +	return pmd;
>> +}
>> +
>> +/*
>> + * This is called at the end of handling a user page fault, when the
>> + * fault has been handled by updating a HUGE PMD entry in the linux page tables.
>> + * We use it to preload an HPTE into the hash table corresponding to
>> + * the updated linux HUGE PMD entry.
>> + */
>> +void update_mmu_cache_pmd(struct vm_area_struct *vma, unsigned long addr,
>> +			  pmd_t *pmd)
>> +{
>> +	/* FIXME!!
>> +	 * Will be done in a later patch
>> +	 */
>
> If you need another patch to make the code in this patch work, they
> should probably be folded together.
>

I have that as TODO, we can do a hash_preload for hugepage here. But I
don't see we doing that for HugeTLB. So I haven't yet done that for
hugepage. Do you know why we don't do hash_preload for HugeTLB page ?


>> +}
>> +
>> +#endif /* CONFIG_TRANSPARENT_HUGEPAGE */
>> diff --git a/arch/powerpc/mm/pgtable_64.c b/arch/powerpc/mm/pgtable_64.c
>> index e79840b..6fc3488 100644
>> --- a/arch/powerpc/mm/pgtable_64.c
>> +++ b/arch/powerpc/mm/pgtable_64.c
>> @@ -338,6 +338,19 @@ EXPORT_SYMBOL(iounmap);
>>  EXPORT_SYMBOL(__iounmap);
>>  EXPORT_SYMBOL(__iounmap_at);
>>  
>> +/*
>> + * For hugepage we have pfn in the pmd, we use PMD_HUGE_RPN_SHIFT bits for flags
>> + * For PTE page, we have a PTE_FRAG_SIZE (4K) aligned virtual address.
>> + */
>> +struct page *pmd_page(pmd_t pmd)
>> +{
>> +#ifdef CONFIG_TRANSPARENT_HUGEPAGE
>> +	if (pmd_val(pmd) & PMD_ISHUGE)
>> +		return pfn_to_page(pmd_pfn(pmd));
>> +#endif
>> +	return virt_to_page(pmd_page_vaddr(pmd));
>> +}
>> +
>>  #ifdef CONFIG_PPC_64K_PAGES
>>  /*
>>   * we support 16 fragments per PTE page. This is limited by how many
>> diff --git a/arch/powerpc/platforms/Kconfig.cputype b/arch/powerpc/platforms/Kconfig.cputype
>> index 72afd28..90ee19b 100644
>> --- a/arch/powerpc/platforms/Kconfig.cputype
>> +++ b/arch/powerpc/platforms/Kconfig.cputype
>> @@ -71,6 +71,7 @@ config PPC_BOOK3S_64
>>  	select PPC_FPU
>>  	select PPC_HAVE_PMU_SUPPORT
>>  	select SYS_SUPPORTS_HUGETLBFS
>> +	select HAVE_ARCH_TRANSPARENT_HUGEPAGE if PPC_64K_PAGES
>>  
>>  config PPC_BOOK3E_64
>>  	bool "Embedded processors"

-aneesh

^ permalink raw reply related	[flat|nested] 73+ messages in thread

* Re: [PATCH -V5 17/25] powerpc/THP: Implement transparent hugepages for ppc64
  2013-04-11  7:40     ` Aneesh Kumar K.V
@ 2013-04-12  0:51       ` David Gibson
  2013-04-12  5:06         ` Aneesh Kumar K.V
  0 siblings, 1 reply; 73+ messages in thread
From: David Gibson @ 2013-04-12  0:51 UTC (permalink / raw)
  To: Aneesh Kumar K.V; +Cc: linuxppc-dev, paulus, linux-mm

[-- Attachment #1: Type: text/plain, Size: 20883 bytes --]

On Thu, Apr 11, 2013 at 01:10:29PM +0530, Aneesh Kumar K.V wrote:
> David Gibson <dwg@au1.ibm.com> writes:
> 
> > On Thu, Apr 04, 2013 at 11:27:55AM +0530, Aneesh Kumar K.V wrote:
> >> From: "Aneesh Kumar K.V" <aneesh.kumar@linux.vnet.ibm.com>
> >> 
> >> We now have pmd entries covering to 16MB range. To implement THP on powerpc,
> >> we double the size of PMD. The second half is used to deposit the pgtable (PTE page).
> >> We also use the depoisted PTE page for tracking the HPTE information. The information
> >> include [ secondary group | 3 bit hidx | valid ]. We use one byte per each HPTE entry.
> >> With 16MB hugepage and 64K HPTE we need 256 entries and with 4K HPTE we need
> >> 4096 entries. Both will fit in a 4K PTE page.
> >> 
> >> Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
> >> ---
> >>  arch/powerpc/include/asm/page.h              |    2 +-
> >>  arch/powerpc/include/asm/pgtable-ppc64-64k.h |    3 +-
> >>  arch/powerpc/include/asm/pgtable-ppc64.h     |    2 +-
> >>  arch/powerpc/include/asm/pgtable.h           |  240 ++++++++++++++++++++
> >>  arch/powerpc/mm/pgtable.c                    |  314 ++++++++++++++++++++++++++
> >>  arch/powerpc/mm/pgtable_64.c                 |   13 ++
> >>  arch/powerpc/platforms/Kconfig.cputype       |    1 +
> >>  7 files changed, 572 insertions(+), 3 deletions(-)
> >> 
> >> diff --git a/arch/powerpc/include/asm/page.h b/arch/powerpc/include/asm/page.h
> >> index 38e7ff6..b927447 100644
> >> --- a/arch/powerpc/include/asm/page.h
> >> +++ b/arch/powerpc/include/asm/page.h
> >> @@ -40,7 +40,7 @@
> >>  #ifdef CONFIG_HUGETLB_PAGE
> >>  extern unsigned int HPAGE_SHIFT;
> >>  #else
> >> -#define HPAGE_SHIFT PAGE_SHIFT
> >> +#define HPAGE_SHIFT PMD_SHIFT
> >
> > That looks like it could break everything except the 64k page size
> > 64-bit base.
> 
> How about 

It seems very dubious to me to have transparent hugepages enabled
without explicit hugepages in the first place.

> 
> diff --git a/arch/powerpc/include/asm/page.h b/arch/powerpc/include/asm/page.h
> index b927447..fadb1ce 100644
> --- a/arch/powerpc/include/asm/page.h
> +++ b/arch/powerpc/include/asm/page.h
> @@ -37,10 +37,19 @@
>  #define PAGE_SIZE		(ASM_CONST(1) << PAGE_SHIFT)
>  
>  #ifndef __ASSEMBLY__
> -#ifdef CONFIG_HUGETLB_PAGE
> +/*
> + * With hugetlbfs enabled we allow the HPAGE_SHIFT to run time
> + * configurable. But we enable THP only with 16MB hugepage.
> + * With only THP configured, we force hugepage size to 16MB.
> + * This should ensure that all subarchs that doesn't support
> + * THP continue to work fine with HPAGE_SHIFT usage.
> + */
> +#if defined(CONFIG_HUGETLB_PAGE)
>  extern unsigned int HPAGE_SHIFT;
> -#else
> +#elif defined(CONFIG_TRANSPARENT_HUGEPAGE)
>  #define HPAGE_SHIFT PMD_SHIFT
> +#else
> +#define HPAGE_SHIFT PAGE_SHIFT
>  #endif
>  #define HPAGE_SIZE		((1UL) << HPAGE_SHIFT)
>  #define HPAGE_MASK		(~(HPAGE_SIZE - 1))
> 
> 
> >
> >>  #endif
> >>  #define HPAGE_SIZE		((1UL) << HPAGE_SHIFT)
> >>  #define HPAGE_MASK		(~(HPAGE_SIZE - 1))
> >> diff --git a/arch/powerpc/include/asm/pgtable-ppc64-64k.h b/arch/powerpc/include/asm/pgtable-ppc64-64k.h
> >> index 3c529b4..5c5541a 100644
> >> --- a/arch/powerpc/include/asm/pgtable-ppc64-64k.h
> >> +++ b/arch/powerpc/include/asm/pgtable-ppc64-64k.h
> >> @@ -33,7 +33,8 @@
> >>  #define PGDIR_MASK	(~(PGDIR_SIZE-1))
> >>  
> >>  /* Bits to mask out from a PMD to get to the PTE page */
> >> -#define PMD_MASKED_BITS		0x1ff
> >> +/* PMDs point to PTE table fragments which are 4K aligned.  */
> >> +#define PMD_MASKED_BITS		0xfff
> >>  /* Bits to mask out from a PGD/PUD to get to the PMD page */
> >>  #define PUD_MASKED_BITS		0x1ff
> >>  
> >> diff --git a/arch/powerpc/include/asm/pgtable-ppc64.h b/arch/powerpc/include/asm/pgtable-ppc64.h
> >> index 0182c20..c0747c7 100644
> >> --- a/arch/powerpc/include/asm/pgtable-ppc64.h
> >> +++ b/arch/powerpc/include/asm/pgtable-ppc64.h
> >> @@ -150,7 +150,7 @@
> >>  #define	pmd_present(pmd)	(pmd_val(pmd) != 0)
> >>  #define	pmd_clear(pmdp)		(pmd_val(*(pmdp)) = 0)
> >>  #define pmd_page_vaddr(pmd)	(pmd_val(pmd) & ~PMD_MASKED_BITS)
> >> -#define pmd_page(pmd)		virt_to_page(pmd_page_vaddr(pmd))
> >> +extern struct page *pmd_page(pmd_t pmd);
> >
> > Does unconditionally changing pmd_page() from a macro to an external
> > function have a noticeable performance impact?
> 
> I did measure performance impact with THP enabled. Didn't do a micro
> benchmark to measure impact of this change. Any suggestion what test
> results you would like to see ?
> 
> >
> >>  #define pud_set(pudp, pudval)	(pud_val(*(pudp)) = (pudval))
> >>  #define pud_none(pud)		(!pud_val(pud))
> >> diff --git a/arch/powerpc/include/asm/pgtable.h b/arch/powerpc/include/asm/pgtable.h
> >> index 4b52726..9fbe2a7 100644
> >> --- a/arch/powerpc/include/asm/pgtable.h
> >> +++ b/arch/powerpc/include/asm/pgtable.h
> >> @@ -23,7 +23,247 @@ struct mm_struct;
> >>   */
> >>  #define PTE_PAGE_HIDX_OFFSET (PTRS_PER_PTE * 8)
> >>  
> >> +/* A large part matches with pte bits */
> >> +#define PMD_HUGE_PRESENT	0x001 /* software: pte contains a translation */
> >> +#define PMD_HUGE_USER		0x002 /* matches one of the PP bits */
> >> +#define PMD_HUGE_FILE		0x002 /* (!present only) software: pte holds file offset */
> >
> > Can we actually get hugepage PMDs that are in this state?
> 
> Currently we can't, but we would start supporting THP for page cache later.
> 
> >
> >> +#define PMD_HUGE_EXEC		0x004 /* No execute on POWER4 and newer (we invert) */
> >> +#define PMD_HUGE_SPLITTING	0x008
> >> +#define PMD_HUGE_SAO		0x010 /* strong Access order */
> >> +#define PMD_HUGE_HASHPTE	0x020
> >> +#define PMD_ISHUGE		0x040
> >> +#define PMD_HUGE_DIRTY		0x080 /* C: page changed */
> >> +#define PMD_HUGE_ACCESSED	0x100 /* R: page referenced */
> >> +#define PMD_HUGE_RW		0x200 /* software: user write access allowed */
> >> +#define PMD_HUGE_BUSY		0x800 /* software: PTE & hash are busy */
> >> +#define PMD_HUGE_HPTEFLAGS	(PMD_HUGE_BUSY | PMD_HUGE_HASHPTE)
> >> +/*
> >> + * We keep both the pmd and pte rpn shift same, eventhough we use only
> >> + * lower 12 bits for hugepage flags at pmd level
> >
> > Why?
> >
> 
> I was trying to keep PTE level access code to as much similar to huge
> PMD level code. hence retained the same RPN SHIFT, since that would work.
> 
> >> + */
> >> +#define PMD_HUGE_RPN_SHIFT	PTE_RPN_SHIFT
> >> +#define HUGE_PAGE_SIZE		(ASM_CONST(1) << 24)
> >> +#define HUGE_PAGE_MASK		(~(HUGE_PAGE_SIZE - 1))
> >> +
> 
> .....
> 
> >> diff --git a/arch/powerpc/mm/pgtable.c b/arch/powerpc/mm/pgtable.c
> >> index 214130a..9f33780 100644
> >> --- a/arch/powerpc/mm/pgtable.c
> >> +++ b/arch/powerpc/mm/pgtable.c
> >> @@ -31,6 +31,7 @@
> >>  #include <asm/pgalloc.h>
> >>  #include <asm/tlbflush.h>
> >>  #include <asm/tlb.h>
> >> +#include <asm/machdep.h>
> >>  
> >>  #include "mmu_decl.h"
> >>  
> >> @@ -240,3 +241,316 @@ void assert_pte_locked(struct mm_struct *mm, unsigned long addr)
> >>  }
> >>  #endif /* CONFIG_DEBUG_VM */
> >>  
> >> +
> >> +#ifdef CONFIG_TRANSPARENT_HUGEPAGE
> >> +static pmd_t set_hugepage_access_flags_filter(pmd_t pmd,
> >> +					      struct vm_area_struct *vma,
> >> +					      int dirty)
> >> +{
> >> +	return pmd;
> >> +}
> >
> > I don't really see why you're splitting out these trivial ...filter()
> > functions, rather than just doing it inline in the (single) caller.
> 
> 
> No specific reason other than keeping the hugepmd related code similar
> to PTE related access code. This should enable us to easily enable THP
> support for subarchs.
> 
> >
> >> +
> >> +/*
> >> + * This is called when relaxing access to a hugepage. It's also called in the page
> >> + * fault path when we don't hit any of the major fault cases, ie, a minor
> >> + * update of _PAGE_ACCESSED, _PAGE_DIRTY, etc... The generic code will have
> >> + * handled those two for us, we additionally deal with missing execute
> >> + * permission here on some processors
> >> + */
> >> +int pmdp_set_access_flags(struct vm_area_struct *vma, unsigned long address,
> >> +			  pmd_t *pmdp, pmd_t entry, int dirty)
> >> +{
> >> +	int changed;
> >> +	entry = set_hugepage_access_flags_filter(entry, vma, dirty);
> >> +	changed = !pmd_same(*(pmdp), entry);
> >> +	if (changed) {
> >> +		__pmdp_set_access_flags(pmdp, entry);
> >> +		/*
> >> +		 * Since we are not supporting SW TLB systems, we don't
> >> +		 * have any thing similar to flush_tlb_page_nohash()
> >> +		 */
> >> +	}
> >> +	return changed;
> >> +}
> >> +
> >> +int pmdp_test_and_clear_young(struct vm_area_struct *vma,
> >> +			      unsigned long address, pmd_t *pmdp)
> >> +{
> >> +	return __pmdp_test_and_clear_young(vma->vm_mm, address, pmdp);
> >> +}
> >> +
> >> +/*
> >> + * We currently remove entries from the hashtable regardless of whether
> >> + * the entry was young or dirty. The generic routines only flush if the
> >> + * entry was young or dirty which is not good enough.
> >> + *
> >> + * We should be more intelligent about this but for the moment we override
> >> + * these functions and force a tlb flush unconditionally
> >> + */
> >> +int pmdp_clear_flush_young(struct vm_area_struct *vma,
> >> +				  unsigned long address, pmd_t *pmdp)
> >> +{
> >> +	return __pmdp_test_and_clear_young(vma->vm_mm, address, pmdp);
> >> +}
> >> +
> >> +/*
> >> + * We mark the pmd splitting and invalidate all the hpte
> >> + * entries for this hugepage.
> >> + */
> >> +void pmdp_splitting_flush(struct vm_area_struct *vma,
> >> +			  unsigned long address, pmd_t *pmdp)
> >> +{
> >> +	unsigned long old, tmp;
> >> +
> >> +	VM_BUG_ON(address & ~HPAGE_PMD_MASK);
> >> +#ifdef PTE_ATOMIC_UPDATES
> >> +
> >> +	__asm__ __volatile__(
> >> +	"1:	ldarx	%0,0,%3\n\
> >> +		andi.	%1,%0,%6\n\
> >> +		bne-	1b \n\
> >> +		ori	%1,%0,%4 \n\
> >> +		stdcx.	%1,0,%3 \n\
> >> +		bne-	1b"
> >> +	: "=&r" (old), "=&r" (tmp), "=m" (*pmdp)
> >> +	: "r" (pmdp), "i" (PMD_HUGE_SPLITTING), "m" (*pmdp), "i" (PMD_HUGE_BUSY)
> >> +	: "cc" );
> >> +#else
> >> +	old = pmd_val(*pmdp);
> >> +	*pmdp = __pmd(old | PMD_HUGE_SPLITTING);
> >> +#endif
> >> +	/*
> >> +	 * If we didn't had the splitting flag set, go and flush the
> >> +	 * HPTE entries and serialize against gup fast.
> >> +	 */
> >> +	if (!(old & PMD_HUGE_SPLITTING)) {
> >> +#ifdef CONFIG_PPC_STD_MMU_64
> >> +		/* We need to flush the hpte */
> >> +		if (old & PMD_HUGE_HASHPTE)
> >> +			hpte_need_hugepage_flush(vma->vm_mm, address, pmdp);
> >> +#endif
> >> +		/* need tlb flush only to serialize against gup-fast */
> >> +		flush_tlb_range(vma, address, address + HPAGE_PMD_SIZE);
> >> +	}
> >> +}
> >> +
> >> +/*
> >> + * We want to put the pgtable in pmd and use pgtable for tracking
> >> + * the base page size hptes
> >> + */
> >> +void pgtable_trans_huge_deposit(struct mm_struct *mm, pmd_t *pmdp,
> >> +				pgtable_t pgtable)
> >> +{
> >> +	unsigned long *pgtable_slot;
> >> +	assert_spin_locked(&mm->page_table_lock);
> >> +	/*
> >> +	 * we store the pgtable in the second half of PMD
> >> +	 */
> >> +	pgtable_slot = pmdp + PTRS_PER_PMD;
> >> +	*pgtable_slot = (unsigned long)pgtable;
> >> +}
> >> +
> >> +#define PTE_FRAG_SIZE (2 * PTRS_PER_PTE * sizeof(pte_t))
> >
> > Another example of why this define should be moved to a header.
> 
> will drop. 
> 
> >
> >> +pgtable_t pgtable_trans_huge_withdraw(struct mm_struct *mm, pmd_t *pmdp)
> >> +{
> >> +	pgtable_t pgtable;
> >> +	unsigned long *pgtable_slot;
> >> +
> >> +	assert_spin_locked(&mm->page_table_lock);
> >> +	pgtable_slot = pmdp + PTRS_PER_PMD;
> >> +	pgtable = (pgtable_t) *pgtable_slot;
> >> +	/*
> >> +	 * We store HPTE information in the deposited PTE fragment.
> >> +	 * zero out the content on withdraw.
> >> +	 */
> >> +	memset(pgtable, 0, PTE_FRAG_SIZE);
> >> +	return pgtable;
> >> +}
> >> +
> >> +/*
> >> + * Since we are looking at latest ppc64, we don't need to worry about
> >> + * i/d cache coherency on exec fault
> >> + */
> >> +static pmd_t set_pmd_filter(pmd_t pmd, unsigned long addr)
> >> +{
> >> +	pmd = __pmd(pmd_val(pmd) & ~PMD_HUGE_HPTEFLAGS);
> >> +	return pmd;
> >> +}
> >> +
> >> +/*
> >> + * We can make it less convoluted than __set_pte_at, because
> >> + * we can ignore lot of hardware here, because this is only for
> >> + * MPSS
> >> + */
> >> +static inline void __set_pmd_at(struct mm_struct *mm, unsigned long addr,
> >> +				pmd_t *pmdp, pmd_t pmd, int percpu)
> >> +{
> >> +	/*
> >> +	 * There is nothing in hash page table now, so nothing to
> >> +	 * invalidate, set_pte_at is used for adding new entry.
> >> +	 * For updating we should use update_hugepage_pmd()
> >> +	 */
> >> +	*pmdp = pmd;
> >> +}
> >> +
> >> +/*
> >> + * set a new huge pmd. We should not be called for updating
> >> + * an existing pmd entry. That should go via pmd_hugepage_update.
> >> + */
> >> +void set_pmd_at(struct mm_struct *mm, unsigned long addr,
> >> +		pmd_t *pmdp, pmd_t pmd)
> >> +{
> >> +	/*
> >> +	 * Note: mm->context.id might not yet have been assigned as
> >> +	 * this context might not have been activated yet when this
> >> +	 * is called.
> >> +	 */
> >> +	pmd = set_pmd_filter(pmd, addr);
> >> +
> >> +	__set_pmd_at(mm, addr, pmdp, pmd, 0);
> >> +
> >> +}
> >> +
> >> +void pmdp_invalidate(struct vm_area_struct *vma, unsigned long address,
> >> +		     pmd_t *pmdp)
> >> +{
> >> +	pmd_hugepage_update(vma->vm_mm, address, pmdp, PMD_HUGE_PRESENT);
> >> +	flush_tlb_range(vma, address, address + HPAGE_PMD_SIZE);
> >> +}
> >> +
> >> +/*
> >> + * A linux hugepage PMD was changed and the corresponding hash table entry
> >> + * neesd to be flushed.
> >> + *
> >> + * The linux hugepage PMD now include the pmd entries followed by the address
> >> + * to the stashed pgtable_t. The stashed pgtable_t contains the hpte bits.
> >> + * [ secondary group | 3 bit hidx | valid ]. We use one byte per each HPTE entry.
> >> + * With 16MB hugepage and 64K HPTE we need 256 entries and with 4K HPTE we need
> >> + * 4096 entries. Both will fit in a 4K pgtable_t.
> >> + */
> >> +void hpte_need_hugepage_flush(struct mm_struct *mm, unsigned long addr,
> >> +			      pmd_t *pmdp)
> >> +{
> >> +	int ssize, i;
> >> +	unsigned long s_addr;
> >> +	unsigned int psize, valid;
> >> +	unsigned char *hpte_slot_array;
> >> +	unsigned long hidx, vpn, vsid, hash, shift, slot;
> >> +
> >> +	/*
> >> +	 * Flush all the hptes mapping this hugepage
> >> +	 */
> >> +	s_addr = addr & HUGE_PAGE_MASK;
> >> +	/*
> >> +	 * The hpte hindex are stored in the pgtable whose address is in the
> >> +	 * second half of the PMD
> >> +	 */
> >> +	hpte_slot_array = *(char **)(pmdp + PTRS_PER_PMD);
> >> +
> >> +	/* get the base page size */
> >> +	psize = get_slice_psize(mm, s_addr);
> >> +	shift = mmu_psize_defs[psize].shift;
> >> +
> >> +	for (i = 0; i < HUGE_PAGE_SIZE/(1ul << shift); i++) {
> >
> > HUGE_PAGE_SIZE >> shift would be a simpler way to do this calculation.
> 
> Wonder how I missed that
> 
> 
> >
> >> +		/*
> >> +		 * 8 bits per each hpte entries
> >> +		 * 000| [ secondary group (one bit) | hidx (3 bits) | valid bit]
> >> +		 */
> >> +		valid = hpte_slot_array[i] & 0x1;
> >> +		if (!valid)
> >> +			continue;
> >> +		hidx =  hpte_slot_array[i]  >> 1;
> >> +
> >> +		/* get the vpn */
> >> +		addr = s_addr + (i * (1ul << shift));
> >> +		if (!is_kernel_addr(addr)) {
> >> +			ssize = user_segment_size(addr);
> >> +			vsid = get_vsid(mm->context.id, addr, ssize);
> >> +			WARN_ON(vsid == 0);
> >> +		} else {
> >> +			vsid = get_kernel_vsid(addr, mmu_kernel_ssize);
> >> +			ssize = mmu_kernel_ssize;
> >> +		}
> >> +
> >> +		vpn = hpt_vpn(addr, vsid, ssize);
> >> +		hash = hpt_hash(vpn, shift, ssize);
> >> +		if (hidx & _PTEIDX_SECONDARY)
> >> +			hash = ~hash;
> >> +
> >> +		slot = (hash & htab_hash_mask) * HPTES_PER_GROUP;
> >> +		slot += hidx & _PTEIDX_GROUP_IX;
> >> +		ppc_md.hpte_invalidate(slot, vpn, psize, ssize, 0);
> >> +	}
> >> +}
> >> +
> >> +static pmd_t pmd_set_protbits(pmd_t pmd, pgprot_t pgprot)
> >> +{
> >> +	unsigned long pmd_prot = 0;
> >> +	unsigned long prot = pgprot_val(pgprot);
> >> +
> >> +	if (prot & _PAGE_PRESENT)
> >> +		pmd_prot |= PMD_HUGE_PRESENT;
> >> +	if (prot & _PAGE_USER)
> >> +		pmd_prot |= PMD_HUGE_USER;
> >> +	if (prot & _PAGE_FILE)
> >> +		pmd_prot |= PMD_HUGE_FILE;
> >> +	if (prot & _PAGE_EXEC)
> >> +		pmd_prot |= PMD_HUGE_EXEC;
> >> +	/*
> >> +	 * _PAGE_COHERENT should always be set
> >> +	 */
> >> +	VM_BUG_ON(!(prot & _PAGE_COHERENT));
> >> +
> >> +	if (prot & _PAGE_SAO)
> >> +		pmd_prot |= PMD_HUGE_SAO;
> >
> > This looks dubious because _PAGE_SAO is not a single bit.  What
> > happens if WRITETHRU or NO_CACHE is set without the other?
> 
> yes that should be 
>     if ((prot & _PAGE_SAO) == _PAGE_SAO )
> 
> 
> >
> >> +	if (prot & _PAGE_DIRTY)
> >> +		pmd_prot |= PMD_HUGE_DIRTY;
> >> +	if (prot & _PAGE_ACCESSED)
> >> +		pmd_prot |= PMD_HUGE_ACCESSED;
> >> +	if (prot & _PAGE_RW)
> >> +		pmd_prot |= PMD_HUGE_RW;
> >> +
> >> +	pmd_val(pmd) |= pmd_prot;
> >> +	return pmd;
> >> +}
> >> +
> >> +pmd_t pfn_pmd(unsigned long pfn, pgprot_t pgprot)
> >> +{
> >> +	pmd_t pmd;
> >> +
> >> +	pmd_val(pmd) = pfn << PMD_HUGE_RPN_SHIFT;
> >> +	pmd_val(pmd) |= PMD_ISHUGE;
> >> +	pmd = pmd_set_protbits(pmd, pgprot);
> >> +	return pmd;
> >> +}
> >> +
> >> +pmd_t mk_pmd(struct page *page, pgprot_t pgprot)
> >> +{
> >> +	return pfn_pmd(page_to_pfn(page), pgprot);
> >> +}
> >> +
> >> +pmd_t pmd_modify(pmd_t pmd, pgprot_t newprot)
> >> +{
> >> +	/* FIXME!! why are this bits cleared ? */
> >
> > You really need to answer this question...
> 
> will check. 
> 
> >
> >> +	pmd_val(pmd) &= ~(PMD_HUGE_PRESENT |
> >> +			  PMD_HUGE_RW |
> >> +			  PMD_HUGE_EXEC);
> >> +	pmd = pmd_set_protbits(pmd, newprot);
> >> +	return pmd;
> >> +}
> >> +
> >> +/*
> >> + * This is called at the end of handling a user page fault, when the
> >> + * fault has been handled by updating a HUGE PMD entry in the linux page tables.
> >> + * We use it to preload an HPTE into the hash table corresponding to
> >> + * the updated linux HUGE PMD entry.
> >> + */
> >> +void update_mmu_cache_pmd(struct vm_area_struct *vma, unsigned long addr,
> >> +			  pmd_t *pmd)
> >> +{
> >> +	/* FIXME!!
> >> +	 * Will be done in a later patch
> >> +	 */
> >
> > If you need another patch to make the code in this patch work, they
> > should probably be folded together.
> >
> 
> I have that as TODO, we can do a hash_preload for hugepage here. But I
> don't see we doing that for HugeTLB. So I haven't yet done that for
> hugepage. Do you know why we don't do hash_preload for HugeTLB page ?
> 
> 
> >> +}
> >> +
> >> +#endif /* CONFIG_TRANSPARENT_HUGEPAGE */
> >> diff --git a/arch/powerpc/mm/pgtable_64.c b/arch/powerpc/mm/pgtable_64.c
> >> index e79840b..6fc3488 100644
> >> --- a/arch/powerpc/mm/pgtable_64.c
> >> +++ b/arch/powerpc/mm/pgtable_64.c
> >> @@ -338,6 +338,19 @@ EXPORT_SYMBOL(iounmap);
> >>  EXPORT_SYMBOL(__iounmap);
> >>  EXPORT_SYMBOL(__iounmap_at);
> >>  
> >> +/*
> >> + * For hugepage we have pfn in the pmd, we use PMD_HUGE_RPN_SHIFT bits for flags
> >> + * For PTE page, we have a PTE_FRAG_SIZE (4K) aligned virtual address.
> >> + */
> >> +struct page *pmd_page(pmd_t pmd)
> >> +{
> >> +#ifdef CONFIG_TRANSPARENT_HUGEPAGE
> >> +	if (pmd_val(pmd) & PMD_ISHUGE)
> >> +		return pfn_to_page(pmd_pfn(pmd));
> >> +#endif
> >> +	return virt_to_page(pmd_page_vaddr(pmd));
> >> +}
> >> +
> >>  #ifdef CONFIG_PPC_64K_PAGES
> >>  /*
> >>   * we support 16 fragments per PTE page. This is limited by how many
> >> diff --git a/arch/powerpc/platforms/Kconfig.cputype b/arch/powerpc/platforms/Kconfig.cputype
> >> index 72afd28..90ee19b 100644
> >> --- a/arch/powerpc/platforms/Kconfig.cputype
> >> +++ b/arch/powerpc/platforms/Kconfig.cputype
> >> @@ -71,6 +71,7 @@ config PPC_BOOK3S_64
> >>  	select PPC_FPU
> >>  	select PPC_HAVE_PMU_SUPPORT
> >>  	select SYS_SUPPORTS_HUGETLBFS
> >> +	select HAVE_ARCH_TRANSPARENT_HUGEPAGE if PPC_64K_PAGES
> >>  
> >>  config PPC_BOOK3E_64
> >>  	bool "Embedded processors"
> 
> -aneesh
> 
> _______________________________________________
> Linuxppc-dev mailing list
> Linuxppc-dev@lists.ozlabs.org
> https://lists.ozlabs.org/listinfo/linuxppc-dev
> 

-- 
David Gibson			| I'll have my music baroque, and my code
david AT gibson.dropbear.id.au	| minimalist, thank you.  NOT _the_ _other_
				| _way_ _around_!
http://www.ozlabs.org/~dgibson

[-- Attachment #2: Digital signature --]
[-- Type: application/pgp-signature, Size: 198 bytes --]

^ permalink raw reply	[flat|nested] 73+ messages in thread

* Re: [PATCH -V5 19/25] powerpc/THP: Differentiate THP PMD entries from HUGETLB PMD entries
  2013-04-04  5:57 ` [PATCH -V5 19/25] powerpc/THP: Differentiate THP PMD entries from HUGETLB PMD entries Aneesh Kumar K.V
  2013-04-10  7:21   ` Michael Ellerman
@ 2013-04-12  1:28   ` David Gibson
  1 sibling, 0 replies; 73+ messages in thread
From: David Gibson @ 2013-04-12  1:28 UTC (permalink / raw)
  To: Aneesh Kumar K.V; +Cc: paulus, linuxppc-dev, linux-mm

[-- Attachment #1: Type: text/plain, Size: 4436 bytes --]

On Thu, Apr 04, 2013 at 11:27:57AM +0530, Aneesh Kumar K.V wrote:
> From: "Aneesh Kumar K.V" <aneesh.kumar@linux.vnet.ibm.com>
> 
> HUGETLB clear the top bit of PMD entries and use that to indicate
> a HUGETLB page directory. Since we store pfns in PMDs for THP,
> we would have the top bit cleared by default. Add the top bit mask
> for THP PMD entries and clear that when we are looking for pmd_pfn.
> 
> Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
> ---
>  arch/powerpc/include/asm/pgtable.h |   16 +++++++++++++---
>  arch/powerpc/mm/pgtable.c          |    5 ++++-
>  arch/powerpc/mm/pgtable_64.c       |    2 +-
>  3 files changed, 18 insertions(+), 5 deletions(-)
> 
> diff --git a/arch/powerpc/include/asm/pgtable.h b/arch/powerpc/include/asm/pgtable.h
> index 9fbe2a7..9681de4 100644
> --- a/arch/powerpc/include/asm/pgtable.h
> +++ b/arch/powerpc/include/asm/pgtable.h
> @@ -31,7 +31,7 @@ struct mm_struct;
>  #define PMD_HUGE_SPLITTING	0x008
>  #define PMD_HUGE_SAO		0x010 /* strong Access order */
>  #define PMD_HUGE_HASHPTE	0x020
> -#define PMD_ISHUGE		0x040
> +#define _PMD_ISHUGE		0x040
>  #define PMD_HUGE_DIRTY		0x080 /* C: page changed */
>  #define PMD_HUGE_ACCESSED	0x100 /* R: page referenced */
>  #define PMD_HUGE_RW		0x200 /* software: user write access allowed */
> @@ -44,6 +44,14 @@ struct mm_struct;
>  #define PMD_HUGE_RPN_SHIFT	PTE_RPN_SHIFT
>  #define HUGE_PAGE_SIZE		(ASM_CONST(1) << 24)
>  #define HUGE_PAGE_MASK		(~(HUGE_PAGE_SIZE - 1))
> +/*
> + * HugeTLB looks at the top bit of the Linux page table entries to
> + * decide whether it is a huge page directory or not. Mark HUGE
> + * PMD to differentiate
> + */
> +#define PMD_HUGE_NOT_HUGETLB	(ASM_CONST(1) << 63)
> +#define PMD_ISHUGE		(_PMD_ISHUGE | PMD_HUGE_NOT_HUGETLB)

Having a define which looks like the name of a boolean flag, but is
two bits strikes me as a really bad idea.

This is one of the many confusions that comes with different pagetable
encodings for transparent and non-transparent hugepages.

Hrm.  So your original patch was horribly broken in that your hugepage
PMDs didn't have the top bit set, and so would be confused with hugepd
pointers.  Now you're patching it up by forcing the top bit to 1 for
hugepage PMDs.  Confusing way of going about it.

> +#define PMD_HUGE_PROTBITS	(0xfff | PMD_HUGE_NOT_HUGETLB)
>  
>  #ifndef __ASSEMBLY__
>  extern void hpte_need_hugepage_flush(struct mm_struct *mm, unsigned long addr,
> @@ -70,8 +78,9 @@ static inline int pmd_trans_splitting(pmd_t pmd)
>  
>  static inline int pmd_trans_huge(pmd_t pmd)
>  {
> -	return pmd_val(pmd) & PMD_ISHUGE;
> +	return ((pmd_val(pmd) & PMD_ISHUGE) ==  PMD_ISHUGE);



>  }
> +
>  /* We will enable it in the last patch */
>  #define has_transparent_hugepage() 0
>  #else
> @@ -84,7 +93,8 @@ static inline unsigned long pmd_pfn(pmd_t pmd)
>  	/*
>  	 * Only called for hugepage pmd
>  	 */
> -	return pmd_val(pmd) >> PMD_HUGE_RPN_SHIFT;
> +	unsigned long val = pmd_val(pmd) & ~PMD_HUGE_PROTBITS;
> +	return val  >> PMD_HUGE_RPN_SHIFT;
>  }
>  
>  static inline int pmd_young(pmd_t pmd)
> diff --git a/arch/powerpc/mm/pgtable.c b/arch/powerpc/mm/pgtable.c
> index 9f33780..cf3ca8e 100644
> --- a/arch/powerpc/mm/pgtable.c
> +++ b/arch/powerpc/mm/pgtable.c
> @@ -517,7 +517,10 @@ static pmd_t pmd_set_protbits(pmd_t pmd, pgprot_t pgprot)
>  pmd_t pfn_pmd(unsigned long pfn, pgprot_t pgprot)
>  {
>  	pmd_t pmd;
> -
> +	/*
> +	 * We cannot support that many PFNs
> +	 */
> +	VM_BUG_ON(pfn & PMD_HUGE_NOT_HUGETLB);
>  	pmd_val(pmd) = pfn << PMD_HUGE_RPN_SHIFT;
>  	pmd_val(pmd) |= PMD_ISHUGE;
>  	pmd = pmd_set_protbits(pmd, pgprot);
> diff --git a/arch/powerpc/mm/pgtable_64.c b/arch/powerpc/mm/pgtable_64.c
> index 6fc3488..cd53020 100644
> --- a/arch/powerpc/mm/pgtable_64.c
> +++ b/arch/powerpc/mm/pgtable_64.c
> @@ -345,7 +345,7 @@ EXPORT_SYMBOL(__iounmap_at);
>  struct page *pmd_page(pmd_t pmd)
>  {
>  #ifdef CONFIG_TRANSPARENT_HUGEPAGE
> -	if (pmd_val(pmd) & PMD_ISHUGE)
> +	if ((pmd_val(pmd) & PMD_ISHUGE) == PMD_ISHUGE)
>  		return pfn_to_page(pmd_pfn(pmd));
>  #endif
>  	return virt_to_page(pmd_page_vaddr(pmd));

-- 
David Gibson			| I'll have my music baroque, and my code
david AT gibson.dropbear.id.au	| minimalist, thank you.  NOT _the_ _other_
				| _way_ _around_!
http://www.ozlabs.org/~dgibson

[-- Attachment #2: Digital signature --]
[-- Type: application/pgp-signature, Size: 198 bytes --]

^ permalink raw reply	[flat|nested] 73+ messages in thread

* Re: [PATCH -V5 21/25] powerpc: Handle hugepage in perf callchain
  2013-04-04  5:57 ` [PATCH -V5 21/25] powerpc: Handle hugepage in perf callchain Aneesh Kumar K.V
@ 2013-04-12  1:34   ` David Gibson
  2013-04-12  5:05     ` Aneesh Kumar K.V
  0 siblings, 1 reply; 73+ messages in thread
From: David Gibson @ 2013-04-12  1:34 UTC (permalink / raw)
  To: Aneesh Kumar K.V; +Cc: paulus, linuxppc-dev, linux-mm

[-- Attachment #1: Type: text/plain, Size: 2722 bytes --]

On Thu, Apr 04, 2013 at 11:27:59AM +0530, Aneesh Kumar K.V wrote:
> From: "Aneesh Kumar K.V" <aneesh.kumar@linux.vnet.ibm.com>
> 
> Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
> ---
>  arch/powerpc/perf/callchain.c |   32 +++++++++++++++++++++-----------
>  1 file changed, 21 insertions(+), 11 deletions(-)
> 
> diff --git a/arch/powerpc/perf/callchain.c b/arch/powerpc/perf/callchain.c
> index 578cac7..99262ce 100644
> --- a/arch/powerpc/perf/callchain.c
> +++ b/arch/powerpc/perf/callchain.c
> @@ -115,7 +115,7 @@ static int read_user_stack_slow(void __user *ptr, void *ret, int nb)
>  {
>  	pgd_t *pgdir;
>  	pte_t *ptep, pte;
> -	unsigned shift;
> +	unsigned shift, hugepage;
>  	unsigned long addr = (unsigned long) ptr;
>  	unsigned long offset;
>  	unsigned long pfn;
> @@ -125,20 +125,30 @@ static int read_user_stack_slow(void __user *ptr, void *ret, int nb)
>  	if (!pgdir)
>  		return -EFAULT;
>  
> -	ptep = find_linux_pte_or_hugepte(pgdir, addr, &shift, NULL);
> +	ptep = find_linux_pte_or_hugepte(pgdir, addr, &shift, &hugepage);

So, this patch pretty much demonstrates that your earlier patch adding
the optional hugepage argument and making the existing callers pass
NULL was broken.

Any code which calls this function and doesn't use and handle the
hugepage return value is horribly broken, so permitting the hugepage
parameter to be optional is itself broken.

I think instead you need to have an early patch that replaces
find_linux_pte_or_hugepte with a new, more abstracted interface, so
that code using it will remain correct when hugepage PMDs become
possible.

>  	if (!shift)
>  		shift = PAGE_SHIFT;
>  
> -	/* align address to page boundary */
> -	offset = addr & ((1UL << shift) - 1);
> -	addr -= offset;
> -
> -	if (ptep == NULL)
> -		return -EFAULT;
> -	pte = *ptep;
> -	if (!pte_present(pte) || !(pte_val(pte) & _PAGE_USER))
> +	if (!ptep)
>  		return -EFAULT;
> -	pfn = pte_pfn(pte);
> +
> +	if (hugepage) {
> +		pmd_t pmd = *(pmd_t *)ptep;
> +		shift = mmu_psize_defs[MMU_PAGE_16M].shift;
> +		offset = addr & ((1UL << shift) - 1);
> +
> +		if (!pmd_large(pmd) || !(pmd_val(pmd) & PMD_HUGE_USER))
> +			return -EFAULT;
> +		pfn = pmd_pfn(pmd);
> +	} else {
> +		offset = addr & ((1UL << shift) - 1);
> +
> +		pte = *ptep;
> +		if (!pte_present(pte) || !(pte_val(pte) & _PAGE_USER))
> +			return -EFAULT;
> +		pfn = pte_pfn(pte);
> +	}
> +
>  	if (!page_is_ram(pfn))
>  		return -EFAULT;
>  

-- 
David Gibson			| I'll have my music baroque, and my code
david AT gibson.dropbear.id.au	| minimalist, thank you.  NOT _the_ _other_
				| _way_ _around_!
http://www.ozlabs.org/~dgibson

[-- Attachment #2: Digital signature --]
[-- Type: application/pgp-signature, Size: 198 bytes --]

^ permalink raw reply	[flat|nested] 73+ messages in thread

* Re: [PATCH -V5 22/25] powerpc/THP: get_user_pages_fast changes
  2013-04-04  5:58 ` [PATCH -V5 22/25] powerpc/THP: get_user_pages_fast changes Aneesh Kumar K.V
@ 2013-04-12  1:41   ` David Gibson
  0 siblings, 0 replies; 73+ messages in thread
From: David Gibson @ 2013-04-12  1:41 UTC (permalink / raw)
  To: Aneesh Kumar K.V; +Cc: paulus, linuxppc-dev, linux-mm

[-- Attachment #1: Type: text/plain, Size: 3791 bytes --]

On Thu, Apr 04, 2013 at 11:28:00AM +0530, Aneesh Kumar K.V wrote:
> From: "Aneesh Kumar K.V" <aneesh.kumar@linux.vnet.ibm.com>
> 
> handle large pages for get_user_pages_fast. Also take care of large page splitting.
> 
> Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
> ---
>  arch/powerpc/mm/gup.c |   84 +++++++++++++++++++++++++++++++++++++++++++++++--
>  1 file changed, 82 insertions(+), 2 deletions(-)
> 
> diff --git a/arch/powerpc/mm/gup.c b/arch/powerpc/mm/gup.c
> index d7efdbf..835c1ae 100644
> --- a/arch/powerpc/mm/gup.c
> +++ b/arch/powerpc/mm/gup.c
> @@ -55,6 +55,72 @@ static noinline int gup_pte_range(pmd_t pmd, unsigned long addr,
>  	return 1;
>  }
>  
> +#ifdef CONFIG_TRANSPARENT_HUGEPAGE
> +static inline int gup_huge_pmd(pmd_t *pmdp, unsigned long addr,
> +			       unsigned long end, int write,
> +			       struct page **pages, int *nr)
> +{
> +	int refs;
> +	pmd_t pmd;
> +	unsigned long mask;
> +	struct page *head, *page, *tail;
> +
> +	pmd = *pmdp;
> +	mask = PMD_HUGE_PRESENT | PMD_HUGE_USER;
> +	if (write)
> +		mask |= PMD_HUGE_RW;
> +
> +	if ((pmd_val(pmd) & mask) != mask)
> +		return 0;
> +
> +	/* large pages are never "special" */
> +	VM_BUG_ON(!pfn_valid(pmd_pfn(pmd)));
> +
> +	refs = 0;
> +	head = pmd_page(pmd);
> +	page = head + ((addr & ~PMD_MASK) >> PAGE_SHIFT);
> +	tail = page;
> +	do {
> +		VM_BUG_ON(compound_head(page) != head);
> +		pages[*nr] = page;
> +		(*nr)++;
> +		page++;
> +		refs++;
> +	} while (addr += PAGE_SIZE, addr != end);
> +
> +	if (!page_cache_add_speculative(head, refs)) {
> +		*nr -= refs;
> +		return 0;
> +	}
> +
> +	if (unlikely(pmd_val(pmd) != pmd_val(*pmdp))) {
> +		*nr -= refs;
> +		while (refs--)
> +			put_page(head);
> +		return 0;
> +	}
> +	/*
> +	 * Any tail page need their mapcount reference taken before we
> +	 * return.
> +	 */
> +	while (refs--) {
> +		if (PageTail(tail))
> +			get_huge_page_tail(tail);
> +		tail++;

Is it safe to do this accounting this late?

> +	}
> +
> +	return 1;
> +}
> +#else
> +
> +static inline int gup_huge_pmd(pmd_t *pmdp, unsigned long addr,
> +			       unsigned long end, int write,
> +			       struct page **pages, int *nr)
> +{
> +	return 1;

Should be a BUG() here, since this should never be called if
!CONFIG_TRANSPARENT_HUGEPAGE.

> +}
> +#endif
> +
>  static int gup_pmd_range(pud_t pud, unsigned long addr, unsigned long end,
>  		int write, struct page **pages, int *nr)
>  {
> @@ -66,9 +132,23 @@ static int gup_pmd_range(pud_t pud, unsigned long addr, unsigned long end,
>  		pmd_t pmd = *pmdp;
>  
>  		next = pmd_addr_end(addr, end);
> -		if (pmd_none(pmd))
> +		/*
> +		 * The pmd_trans_splitting() check below explains why
> +		 * pmdp_splitting_flush has to flush the tlb, to stop
> +		 * this gup-fast code from running while we set the
> +		 * splitting bit in the pmd. Returning zero will take
> +		 * the slow path that will call wait_split_huge_page()
> +		 * if the pmd is still in splitting state. gup-fast
> +		 * can't because it has irq disabled and
> +		 * wait_split_huge_page() would never return as the
> +		 * tlb flush IPI wouldn't run.
> +		 */
> +		if (pmd_none(pmd) || pmd_trans_splitting(pmd))
>  			return 0;
> -		if (is_hugepd(pmdp)) {
> +		if (unlikely(pmd_large(pmd))) {
> +			if (!gup_huge_pmd(pmdp, addr, next, write, pages, nr))
> +				return 0;
> +		} else if (is_hugepd(pmdp)) {
>  			if (!gup_hugepd((hugepd_t *)pmdp, PMD_SHIFT,
>  					addr, next, write, pages, nr))
>  				return 0;

-- 
David Gibson			| I'll have my music baroque, and my code
david AT gibson.dropbear.id.au	| minimalist, thank you.  NOT _the_ _other_
				| _way_ _around_!
http://www.ozlabs.org/~dgibson

[-- Attachment #2: Digital signature --]
[-- Type: application/pgp-signature, Size: 198 bytes --]

^ permalink raw reply	[flat|nested] 73+ messages in thread

* Re: [PATCH -V5 20/25] powerpc/THP: Add code to handle HPTE faults for large pages
  2013-04-04  5:57 ` [PATCH -V5 20/25] powerpc/THP: Add code to handle HPTE faults for large pages Aneesh Kumar K.V
@ 2013-04-12  4:01   ` David Gibson
  0 siblings, 0 replies; 73+ messages in thread
From: David Gibson @ 2013-04-12  4:01 UTC (permalink / raw)
  To: Aneesh Kumar K.V; +Cc: paulus, linuxppc-dev, linux-mm

[-- Attachment #1: Type: text/plain, Size: 21539 bytes --]

On Thu, Apr 04, 2013 at 11:27:58AM +0530, Aneesh Kumar K.V wrote:
> From: "Aneesh Kumar K.V" <aneesh.kumar@linux.vnet.ibm.com>
> 
> We now have pmd entries covering to 16MB range. To implement THP on powerpc,
> we double the size of PMD. The second half is used to deposit the pgtable (PTE page).
> We also use the depoisted PTE page for tracking the HPTE information. The information
> include [ secondary group | 3 bit hidx | valid ]. We use one byte per each HPTE entry.
> With 16MB hugepage and 64K HPTE we need 256 entries and with 4K HPTE we need
> 4096 entries. Both will fit in a 4K PTE page.

This description is a duplicate of an earlier path.  Both are
innaccurate for the patches they are now attached to.

> Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
> ---
>  arch/powerpc/include/asm/mmu-hash64.h    |    5 +
>  arch/powerpc/include/asm/pgtable-ppc64.h |   31 +----
>  arch/powerpc/kernel/io-workarounds.c     |    3 +-
>  arch/powerpc/kvm/book3s_64_mmu_hv.c      |    2 +-
>  arch/powerpc/kvm/book3s_hv_rm_mmu.c      |    4 +-
>  arch/powerpc/mm/Makefile                 |    1 +
>  arch/powerpc/mm/hash_utils_64.c          |   16 ++-
>  arch/powerpc/mm/hugepage-hash64.c        |  185 ++++++++++++++++++++++++++++++
>  arch/powerpc/mm/hugetlbpage.c            |   31 ++++-
>  arch/powerpc/mm/pgtable.c                |   38 ++++++
>  arch/powerpc/mm/tlb_hash64.c             |    5 +-
>  arch/powerpc/perf/callchain.c            |    2 +-
>  arch/powerpc/platforms/pseries/eeh.c     |    5 +-
>  13 files changed, 286 insertions(+), 42 deletions(-)
>  create mode 100644 arch/powerpc/mm/hugepage-hash64.c
> 
> diff --git a/arch/powerpc/include/asm/mmu-hash64.h b/arch/powerpc/include/asm/mmu-hash64.h
> index e187254..a74a3de 100644
> --- a/arch/powerpc/include/asm/mmu-hash64.h
> +++ b/arch/powerpc/include/asm/mmu-hash64.h
> @@ -322,6 +322,11 @@ extern int hash_page(unsigned long ea, unsigned long access, unsigned long trap)
>  int __hash_page_huge(unsigned long ea, unsigned long access, unsigned long vsid,
>  		     pte_t *ptep, unsigned long trap, int local, int ssize,
>  		     unsigned int shift, unsigned int mmu_psize);
> +#ifdef CONFIG_TRANSPARENT_HUGEPAGE
> +extern int __hash_page_thp(unsigned long ea, unsigned long access,
> +			   unsigned long vsid, pmd_t *pmdp, unsigned long trap,
> +			   int local, int ssize, unsigned int psize);
> +#endif
>  extern void hash_failure_debug(unsigned long ea, unsigned long access,
>  			       unsigned long vsid, unsigned long trap,
>  			       int ssize, int psize, int lpsize,
> diff --git a/arch/powerpc/include/asm/pgtable-ppc64.h b/arch/powerpc/include/asm/pgtable-ppc64.h
> index d4e845c..9b81283 100644
> --- a/arch/powerpc/include/asm/pgtable-ppc64.h
> +++ b/arch/powerpc/include/asm/pgtable-ppc64.h
> @@ -345,39 +345,18 @@ static inline void __ptep_set_access_flags(pte_t *ptep, pte_t entry)
>  void pgtable_cache_add(unsigned shift, void (*ctor)(void *));
>  void pgtable_cache_init(void);
>  
> -/*
> - * find_linux_pte returns the address of a linux pte for a given
> - * effective address and directory.  If not found, it returns zero.
> - */
> -static inline pte_t *find_linux_pte(pgd_t *pgdir, unsigned long ea)
> -{
> -	pgd_t *pg;
> -	pud_t *pu;
> -	pmd_t *pm;
> -	pte_t *pt = NULL;
> -
> -	pg = pgdir + pgd_index(ea);
> -	if (!pgd_none(*pg)) {
> -		pu = pud_offset(pg, ea);
> -		if (!pud_none(*pu)) {
> -			pm = pmd_offset(pu, ea);
> -			if (pmd_present(*pm))
> -				pt = pte_offset_kernel(pm, ea);
> -		}
> -	}
> -	return pt;
> -}
> -
> +pte_t *find_linux_pte(pgd_t *pgdir, unsigned long ea, unsigned int *thp);
>  #ifdef CONFIG_HUGETLB_PAGE
>  pte_t *find_linux_pte_or_hugepte(pgd_t *pgdir, unsigned long ea,
> -				 unsigned *shift);
> +				 unsigned *shift, unsigned int *hugepage);
>  #else
>  static inline pte_t *find_linux_pte_or_hugepte(pgd_t *pgdir, unsigned long ea,
> -					       unsigned *shift)
> +					       unsigned *shift,
> +					       unsigned int *hugepage)
>  {
>  	if (shift)
>  		*shift = 0;
> -	return find_linux_pte(pgdir, ea);
> +	return find_linux_pte(pgdir, ea, hugepage);
>  }
>  #endif /* !CONFIG_HUGETLB_PAGE */
>  
> diff --git a/arch/powerpc/kernel/io-workarounds.c b/arch/powerpc/kernel/io-workarounds.c
> index 50e90b7..a9c904f 100644
> --- a/arch/powerpc/kernel/io-workarounds.c
> +++ b/arch/powerpc/kernel/io-workarounds.c
> @@ -70,7 +70,8 @@ struct iowa_bus *iowa_mem_find_bus(const PCI_IO_ADDR addr)
>  		if (vaddr < PHB_IO_BASE || vaddr >= PHB_IO_END)
>  			return NULL;
>  
> -		ptep = find_linux_pte(init_mm.pgd, vaddr);
> +		/* we won't find hugepages here */

Explaining why might be a good idea.

> +		ptep = find_linux_pte(init_mm.pgd, vaddr, NULL);
>  		if (ptep == NULL)
>  			paddr = 0;
>  		else
> diff --git a/arch/powerpc/kvm/book3s_64_mmu_hv.c b/arch/powerpc/kvm/book3s_64_mmu_hv.c
> index 8cc18ab..4f2a7dc 100644
> --- a/arch/powerpc/kvm/book3s_64_mmu_hv.c
> +++ b/arch/powerpc/kvm/book3s_64_mmu_hv.c
> @@ -683,7 +683,7 @@ int kvmppc_book3s_hv_page_fault(struct kvm_run *run, struct kvm_vcpu *vcpu,
>  			 */
>  			rcu_read_lock_sched();
>  			ptep = find_linux_pte_or_hugepte(current->mm->pgd,
> -							 hva, NULL);
> +							 hva, NULL, NULL);
>  			if (ptep && pte_present(*ptep)) {
>  				pte = kvmppc_read_update_linux_pte(ptep, 1);
>  				if (pte_write(pte))
> diff --git a/arch/powerpc/kvm/book3s_hv_rm_mmu.c b/arch/powerpc/kvm/book3s_hv_rm_mmu.c
> index 19c93ba..7c8e1ed 100644
> --- a/arch/powerpc/kvm/book3s_hv_rm_mmu.c
> +++ b/arch/powerpc/kvm/book3s_hv_rm_mmu.c
> @@ -27,7 +27,7 @@ static void *real_vmalloc_addr(void *x)
>  	unsigned long addr = (unsigned long) x;
>  	pte_t *p;
>  
> -	p = find_linux_pte(swapper_pg_dir, addr);
> +	p = find_linux_pte(swapper_pg_dir, addr, NULL);

And this one.

>  	if (!p || !pte_present(*p))
>  		return NULL;
>  	/* assume we don't have huge pages in vmalloc space... */
> @@ -152,7 +152,7 @@ static pte_t lookup_linux_pte(pgd_t *pgdir, unsigned long hva,
>  	unsigned long ps = *pte_sizep;
>  	unsigned int shift;
>  
> -	ptep = find_linux_pte_or_hugepte(pgdir, hva, &shift);
> +	ptep = find_linux_pte_or_hugepte(pgdir, hva, &shift, NULL);
>  	if (!ptep)
>  		return __pte(0);
>  	if (shift)
> diff --git a/arch/powerpc/mm/Makefile b/arch/powerpc/mm/Makefile
> index 3787b61..997deb4 100644
> --- a/arch/powerpc/mm/Makefile
> +++ b/arch/powerpc/mm/Makefile
> @@ -33,6 +33,7 @@ obj-y				+= hugetlbpage.o
>  obj-$(CONFIG_PPC_STD_MMU_64)	+= hugetlbpage-hash64.o
>  obj-$(CONFIG_PPC_BOOK3E_MMU)	+= hugetlbpage-book3e.o
>  endif
> +obj-$(CONFIG_TRANSPARENT_HUGEPAGE) += hugepage-hash64.o
>  obj-$(CONFIG_PPC_SUBPAGE_PROT)	+= subpage-prot.o
>  obj-$(CONFIG_NOT_COHERENT_CACHE) += dma-noncoherent.o
>  obj-$(CONFIG_HIGHMEM)		+= highmem.o
> diff --git a/arch/powerpc/mm/hash_utils_64.c b/arch/powerpc/mm/hash_utils_64.c
> index 1f2ebbd..cd3ecd8 100644
> --- a/arch/powerpc/mm/hash_utils_64.c
> +++ b/arch/powerpc/mm/hash_utils_64.c
> @@ -955,7 +955,7 @@ int hash_page(unsigned long ea, unsigned long access, unsigned long trap)
>  	unsigned long vsid;
>  	struct mm_struct *mm;
>  	pte_t *ptep;
> -	unsigned hugeshift;
> +	unsigned hugeshift, hugepage;
>  	const struct cpumask *tmp;
>  	int rc, user_region = 0, local = 0;
>  	int psize, ssize;
> @@ -1021,7 +1021,7 @@ int hash_page(unsigned long ea, unsigned long access, unsigned long trap)
>  #endif /* CONFIG_PPC_64K_PAGES */
>  
>  	/* Get PTE and page size from page tables */
> -	ptep = find_linux_pte_or_hugepte(pgdir, ea, &hugeshift);
> +	ptep = find_linux_pte_or_hugepte(pgdir, ea, &hugeshift, &hugepage);
>  	if (ptep == NULL || !pte_present(*ptep)) {

And so's this, since you don't check the hugepage return before
calling pte_present().

>  		DBG_LOW(" no PTE !\n");
>  		return 1;
> @@ -1044,6 +1044,12 @@ int hash_page(unsigned long ea, unsigned long access, unsigned long trap)
>  					ssize, hugeshift, psize);
>  #endif /* CONFIG_HUGETLB_PAGE */
>  
> +#ifdef CONFIG_TRANSPARENT_HUGEPAGE
> +	if (hugepage)
> +		return __hash_page_thp(ea, access, vsid, (pmd_t *)ptep,
> +				       trap, local, ssize, psize);
> +#endif
> +
>  #ifndef CONFIG_PPC_64K_PAGES
>  	DBG_LOW(" i-pte: %016lx\n", pte_val(*ptep));
>  #else
> @@ -1149,7 +1155,11 @@ void hash_preload(struct mm_struct *mm, unsigned long ea,
>  	pgdir = mm->pgd;
>  	if (pgdir == NULL)
>  		return;
> -	ptep = find_linux_pte(pgdir, ea);
> +	/*
> +	 * We haven't implemented update_mmu_cache_pmd yet. We get called
> +	 * only for non hugepages. Hence can ignore THP here

Uh.. why?  By definition THP will occur in non-hugepage areas.

> +	 */
> +	ptep = find_linux_pte(pgdir, ea, NULL);
>  	if (!ptep)
>  		return;
>  
> diff --git a/arch/powerpc/mm/hugepage-hash64.c b/arch/powerpc/mm/hugepage-hash64.c
> new file mode 100644
> index 0000000..3f6140d
> --- /dev/null
> +++ b/arch/powerpc/mm/hugepage-hash64.c
> @@ -0,0 +1,185 @@
> +/*
> + * Copyright IBM Corporation, 2013
> + * Author Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
> + *
> + * This program is free software; you can redistribute it and/or modify it
> + * under the terms of version 2.1 of the GNU Lesser General Public License
> + * as published by the Free Software Foundation.
> + *
> + * This program is distributed in the hope that it would be useful, but
> + * WITHOUT ANY WARRANTY; without even the implied warranty of
> + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
> + *
> + */
> +
> +/*
> + * PPC64 THP Support for hash based MMUs
> + */
> +#include <linux/mm.h>
> +#include <asm/machdep.h>
> +
> +/*
> + * The linux hugepage PMD now include the pmd entries followed by the address
> + * to the stashed pgtable_t. The stashed pgtable_t contains the hpte bits.
> + * [ secondary group | 3 bit hidx | valid ]. We use one byte per each HPTE entry.
> + * With 16MB hugepage and 64K HPTE we need 256 entries and with 4K HPTE we need
> + * 4096 entries. Both will fit in a 4K pgtable_t.
> + */
> +int __hash_page_thp(unsigned long ea, unsigned long access, unsigned long vsid,
> +		    pmd_t *pmdp, unsigned long trap, int local, int ssize,
> +		    unsigned int psize)
> +{
> +	unsigned int index, valid;
> +	unsigned char *hpte_slot_array;
> +	unsigned long rflags, pa, hidx;
> +	unsigned long old_pmd, new_pmd;
> +	int ret, lpsize = MMU_PAGE_16M;
> +	unsigned long vpn, hash, shift, slot;
> +
> +	/*
> +	 * atomically mark the linux large page PMD busy and dirty
> +	 */
> +	do {
> +		old_pmd = pmd_val(*pmdp);
> +		/* If PMD busy, retry the access */
> +		if (unlikely(old_pmd & PMD_HUGE_BUSY))
> +			return 0;
> +		/* If PMD permissions don't match, take page fault */
> +		if (unlikely(access & ~old_pmd))
> +			return 1;
> +		/*
> +		 * Try to lock the PTE, add ACCESSED and DIRTY if it was
> +		 * a write access
> +		 */
> +		new_pmd = old_pmd | PMD_HUGE_BUSY | PMD_HUGE_ACCESSED;
> +		if (access & _PAGE_RW)
> +			new_pmd |= PMD_HUGE_DIRTY;
> +	} while (old_pmd != __cmpxchg_u64((unsigned long *)pmdp,
> +					  old_pmd, new_pmd));
> +	/*
> +	 * PP bits. PMD_HUGE_USER is already PP bit 0x2, so we only
> +	 * need to add in 0x1 if it's a read-only user page
> +	 */
> +	rflags = new_pmd & PMD_HUGE_USER;
> +	if ((new_pmd & PMD_HUGE_USER) && !((new_pmd & PMD_HUGE_RW) &&
> +					   (new_pmd & PMD_HUGE_DIRTY)))
> +		rflags |= 0x1;
> +	/*
> +	 * PMD_HUGE_EXEC -> HW_NO_EXEC since it's inverted
> +	 */
> +	rflags |= ((new_pmd & PMD_HUGE_EXEC) ? 0 : HPTE_R_N);
> +
> +#if 0 /* FIXME!! */
> +	if (!cpu_has_feature(CPU_FTR_COHERENT_ICACHE)) {
> +
> +		/*
> +		 * No CPU has hugepages but lacks no execute, so we
> +		 * don't need to worry about that case
> +		 */
> +		rflags = hash_page_do_lazy_icache(rflags, __pte(old_pte), trap);
> +	}
> +#endif
> +	/*
> +	 * Find the slot index details for this ea, using base page size.
> +	 */
> +	shift = mmu_psize_defs[psize].shift;
> +	index = (ea & (HUGE_PAGE_SIZE - 1)) >> shift;
> +	BUG_ON(index > 4096);

That needs to be >=, not >.  Also you should probably use the existing
#defines to derive this rather than hard coding 4096.

> +
> +	vpn = hpt_vpn(ea, vsid, ssize);
> +	hash = hpt_hash(vpn, shift, ssize);
> +	/*
> +	 * The hpte hindex are stored in the pgtable whose address is in the
> +	 * second half of the PMD
> +	 */
> +	hpte_slot_array = *(char **)(pmdp + PTRS_PER_PMD);

Hrm.  I gather the contents of the extra pgtable is protected by the
PTE's busy bit.  But what synchronization is necessary for the pgtable
pointer - are there any possible races with the hugepage being split?

> +	valid = hpte_slot_array[index]  & 0x1;
> +	if (unlikely(valid)) {

Why is valid unlikely?  I think you'd be better off leaving this to
the CPU's dynamic branch prediction.

> +		/* update the hpte bits */
> +		hidx =  hpte_slot_array[index]  >> 1;
> +		if (hidx & _PTEIDX_SECONDARY)
> +			hash = ~hash;
> +		slot = (hash & htab_hash_mask) * HPTES_PER_GROUP;
> +		slot += hidx & _PTEIDX_GROUP_IX;
> +
> +		ret = ppc_md.hpte_updatepp(slot, rflags, vpn,
> +					   psize, ssize, local);
> +		/*
> +		 * We failed to update, try to insert a new entry.
> +		 */
> +		if (ret == -1) {
> +			/*
> +			 * large pte is marked busy, so we can be sure
> +			 * nobody is looking at hpte_slot_array. hence we can
> +			 * safely update this here.
> +			 */
> +			hpte_slot_array[index] = 0;
> +			valid = 0;
> +		}
> +	}
> +
> +	if (likely(!valid)) {
> +		unsigned long hpte_group;
> +
> +		/* insert new entry */
> +		pa = pmd_pfn(__pmd(old_pmd)) << PAGE_SHIFT;
> +repeat:
> +		hpte_group = ((hash & htab_hash_mask) * HPTES_PER_GROUP) & ~0x7UL;
> +
> +		/* clear the busy bits and set the hash pte bits */
> +		new_pmd = (new_pmd & ~PMD_HUGE_HPTEFLAGS) | PMD_HUGE_HASHPTE;
> +
> +		/*
> +		 * WIMG bits.
> +		 * We always have _PAGE_COHERENT enabled for system RAM
> +		 */
> +		rflags |= _PAGE_COHERENT;
> +
> +		if (new_pmd & PMD_HUGE_SAO)
> +			rflags |= _PAGE_SAO;
> +
> +		/* Insert into the hash table, primary slot */
> +		slot = ppc_md.hpte_insert(hpte_group, vpn, pa, rflags, 0,
> +					  psize, lpsize, ssize);
> +		/*
> +		 * Primary is full, try the secondary
> +		 */
> +		if (unlikely(slot == -1)) {
> +			hpte_group = ((~hash & htab_hash_mask) *
> +				      HPTES_PER_GROUP) & ~0x7UL;
> +			slot = ppc_md.hpte_insert(hpte_group, vpn, pa,
> +						  rflags, HPTE_V_SECONDARY,
> +						  psize, lpsize, ssize);
> +			if (slot == -1) {
> +				if (mftb() & 0x1)
> +					hpte_group = ((hash & htab_hash_mask) *
> +						      HPTES_PER_GROUP) & ~0x7UL;
> +
> +				ppc_md.hpte_remove(hpte_group);
> +				goto repeat;
> +			}
> +		}
> +		/*
> +		 * Hypervisor failure. Restore old pmd and return -1
> +		 * similar to __hash_page_*
> +		 */
> +		if (unlikely(slot == -2)) {
> +			*pmdp = __pmd(old_pmd);
> +			hash_failure_debug(ea, access, vsid, trap, ssize,
> +					   psize, lpsize, old_pmd);
> +			return -1;
> +		}
> +		/*
> +		 * large pte is marked busy, so we can be sure
> +		 * nobody is looking at hpte_slot_array. hence we can
> +		 * safely update this here.
> +		 */
> +		hpte_slot_array[index] = slot << 1 | 0x1;
> +	}
> +	/*
> +	 * No need to use ldarx/stdcx here
> +	 */
> +	*pmdp = __pmd(new_pmd & ~PMD_HUGE_BUSY);
> +	return 0;
> +}
> diff --git a/arch/powerpc/mm/hugetlbpage.c b/arch/powerpc/mm/hugetlbpage.c
> index 1a6de0a..7f11fa0 100644
> --- a/arch/powerpc/mm/hugetlbpage.c
> +++ b/arch/powerpc/mm/hugetlbpage.c
> @@ -67,7 +67,8 @@ static inline unsigned int mmu_psize_to_shift(unsigned int mmu_psize)
>  
>  #define hugepd_none(hpd)	((hpd).pd == 0)
>  
> -pte_t *find_linux_pte_or_hugepte(pgd_t *pgdir, unsigned long ea, unsigned *shift)
> +pte_t *find_linux_pte_or_hugepte(pgd_t *pgdir, unsigned long ea,
> +				 unsigned *shift, unsigned int *hugepage)
>  {
>  	pgd_t *pg;
>  	pud_t *pu;
> @@ -77,6 +78,8 @@ pte_t *find_linux_pte_or_hugepte(pgd_t *pgdir, unsigned long ea, unsigned *shift
>  
>  	if (shift)
>  		*shift = 0;
> +	if (hugepage)
> +		*hugepage = 0;
>  	pg = pgdir + pgd_index(ea);
>  	if (is_hugepd(pg)) {
> @@ -91,12 +94,24 @@ pte_t *find_linux_pte_or_hugepte(pgd_t *pgdir, unsigned long ea, unsigned *shift
>  			pm = pmd_offset(pu, ea);
>  			if (is_hugepd(pm))
>  				hpdp = (hugepd_t *)pm;
> -			else if (!pmd_none(*pm)) {
> +			else if (pmd_large(*pm)) {
> +				/* THP page */
> +				if (hugepage) {
> +					*hugepage = 1;
> +					/*
> +					 * This should be ok, except for few
> +					 * flags. Most of the pte and hugepage
> +					 * pmd bits overlap. We don't use the
> +					 * returned value as pte_t in the caller.
> +					 */
> +					return (pte_t *)pm;
> +				} else
> +					return NULL;

Ah, so this is what prevents callers who don't supply the hugepage
parameter from being horribly broken.  Hrm.  Seems dangeously subtle
to me.

The parameter name is also really misleading since it's only for
transparent hugepages, not any hugepage.

> +			} else if (!pmd_none(*pm)) {
>  				return pte_offset_kernel(pm, ea);
>  			}
>  		}
>  	}
> -
>  	if (!hpdp)
>  		return NULL;
>  
> @@ -108,7 +123,8 @@ EXPORT_SYMBOL_GPL(find_linux_pte_or_hugepte);
>  
>  pte_t *huge_pte_offset(struct mm_struct *mm, unsigned long addr)
>  {
> -	return find_linux_pte_or_hugepte(mm->pgd, addr, NULL);
> +	/* Only called for HugeTLB pages, hence can ignore THP */
> +	return find_linux_pte_or_hugepte(mm->pgd, addr, NULL, NULL);
>  }
>  
>  static int __hugepte_alloc(struct mm_struct *mm, hugepd_t *hpdp,
> @@ -613,8 +629,11 @@ follow_huge_addr(struct mm_struct *mm, unsigned long address, int write)
>  	struct page *page;
>  	unsigned shift;
>  	unsigned long mask;
> -
> -	ptep = find_linux_pte_or_hugepte(mm->pgd, address, &shift);
> +	/*
> +	 * Transparent hugepages are handled by generic code. We can skip them
> +	 * here.
> +	 */
> +	ptep = find_linux_pte_or_hugepte(mm->pgd, address, &shift, NULL);
>  
>  	/* Verify it is a huge page else bail. */
>  	if (!ptep || !shift)
> diff --git a/arch/powerpc/mm/pgtable.c b/arch/powerpc/mm/pgtable.c
> index cf3ca8e..fbff062 100644
> --- a/arch/powerpc/mm/pgtable.c
> +++ b/arch/powerpc/mm/pgtable.c
> @@ -557,3 +557,41 @@ void update_mmu_cache_pmd(struct vm_area_struct *vma, unsigned long addr,
>  }
>  
>  #endif /* CONFIG_TRANSPARENT_HUGEPAGE */
> +
> +/*
> + * find_linux_pte returns the address of a linux pte for a given
> + * effective address and directory.  If not found, it returns zero.
> + */
> +pte_t *find_linux_pte(pgd_t *pgdir, unsigned long ea, unsigned int *hugepage)
> +{
> +	pgd_t *pg;
> +	pud_t *pu;
> +	pmd_t *pm;
> +	pte_t *pt = NULL;
> +
> +	if (hugepage)
> +		*hugepage = 0;
> +	pg = pgdir + pgd_index(ea);
> +	if (!pgd_none(*pg)) {
> +		pu = pud_offset(pg, ea);
> +		if (!pud_none(*pu)) {
> +			pm = pmd_offset(pu, ea);
> +			if (pmd_large(*pm)) {
> +				/* THP page */
> +				if (hugepage) {
> +					*hugepage = 1;
> +					/*
> +					 * This should be ok, except for few
> +					 * flags. Most of the pte and hugepage
> +					 * pmd bits overlap. We don't use the
> +					 * returned value as pte_t in the caller.
> +					 */
> +					return (pte_t *)pm;
> +				} else
> +					return NULL;
> +			} else if (pmd_present(*pm))
> +				pt = pte_offset_kernel(pm, ea);
> +		}
> +	}
> +	return pt;
> +}
> diff --git a/arch/powerpc/mm/tlb_hash64.c b/arch/powerpc/mm/tlb_hash64.c
> index 023ec8a..be0066f 100644
> --- a/arch/powerpc/mm/tlb_hash64.c
> +++ b/arch/powerpc/mm/tlb_hash64.c
> @@ -206,7 +206,10 @@ void __flush_hash_table_range(struct mm_struct *mm, unsigned long start,
>  	local_irq_save(flags);
>  	arch_enter_lazy_mmu_mode();
>  	for (; start < end; start += PAGE_SIZE) {
> -		pte_t *ptep = find_linux_pte(mm->pgd, start);
> +		/*
> +		 * We won't find hugepages here.
> +		 */
> +		pte_t *ptep = find_linux_pte(mm->pgd, start, NULL);
>  		unsigned long pte;
>  
>  		if (ptep == NULL)
> diff --git a/arch/powerpc/perf/callchain.c b/arch/powerpc/perf/callchain.c
> index 74d1e78..578cac7 100644
> --- a/arch/powerpc/perf/callchain.c
> +++ b/arch/powerpc/perf/callchain.c
> @@ -125,7 +125,7 @@ static int read_user_stack_slow(void __user *ptr, void *ret, int nb)
>  	if (!pgdir)
>  		return -EFAULT;
>  
> -	ptep = find_linux_pte_or_hugepte(pgdir, addr, &shift);
> +	ptep = find_linux_pte_or_hugepte(pgdir, addr, &shift, NULL);
>  	if (!shift)
>  		shift = PAGE_SHIFT;
>  
> diff --git a/arch/powerpc/platforms/pseries/eeh.c b/arch/powerpc/platforms/pseries/eeh.c
> index 9a04322..44c931a 100644
> --- a/arch/powerpc/platforms/pseries/eeh.c
> +++ b/arch/powerpc/platforms/pseries/eeh.c
> @@ -261,7 +261,10 @@ static inline unsigned long eeh_token_to_phys(unsigned long token)
>  	pte_t *ptep;
>  	unsigned long pa;
>  
> -	ptep = find_linux_pte(init_mm.pgd, token);
> +	/*
> +	 * We won't find hugepages here
> +	 */
> +	ptep = find_linux_pte(init_mm.pgd, token, NULL);
>  	if (!ptep)
>  		return token;
>  	pa = pte_pfn(*ptep) << PAGE_SHIFT;

-- 
David Gibson			| I'll have my music baroque, and my code
david AT gibson.dropbear.id.au	| minimalist, thank you.  NOT _the_ _other_
				| _way_ _around_!
http://www.ozlabs.org/~dgibson

[-- Attachment #2: Digital signature --]
[-- Type: application/pgp-signature, Size: 198 bytes --]

^ permalink raw reply	[flat|nested] 73+ messages in thread

* Re: [PATCH -V5 24/25] powerpc: Optimize hugepage invalidate
  2013-04-04  5:58 ` [PATCH -V5 24/25] powerpc: Optimize hugepage invalidate Aneesh Kumar K.V
@ 2013-04-12  4:21   ` David Gibson
  2013-04-14 10:02     ` Aneesh Kumar K.V
  0 siblings, 1 reply; 73+ messages in thread
From: David Gibson @ 2013-04-12  4:21 UTC (permalink / raw)
  To: Aneesh Kumar K.V; +Cc: paulus, linuxppc-dev, linux-mm

[-- Attachment #1: Type: text/plain, Size: 11529 bytes --]

On Thu, Apr 04, 2013 at 11:28:02AM +0530, Aneesh Kumar K.V wrote:
> From: "Aneesh Kumar K.V" <aneesh.kumar@linux.vnet.ibm.com>
> 
> Hugepage invalidate involves invalidating multiple hpte entries.
> Optimize the operation using H_BULK_REMOVE on lpar platforms.
> On native, reduce the number of tlb flush.
> 
> Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
> ---
>  arch/powerpc/include/asm/machdep.h    |    3 +
>  arch/powerpc/mm/hash_native_64.c      |   78 ++++++++++++++++++++
>  arch/powerpc/mm/pgtable.c             |   13 +++-
>  arch/powerpc/platforms/pseries/lpar.c |  126 +++++++++++++++++++++++++++++++--
>  4 files changed, 210 insertions(+), 10 deletions(-)
> 
> diff --git a/arch/powerpc/include/asm/machdep.h b/arch/powerpc/include/asm/machdep.h
> index 6cee6e0..3bc7816 100644
> --- a/arch/powerpc/include/asm/machdep.h
> +++ b/arch/powerpc/include/asm/machdep.h
> @@ -56,6 +56,9 @@ struct machdep_calls {
>  	void            (*hpte_removebolted)(unsigned long ea,
>  					     int psize, int ssize);
>  	void		(*flush_hash_range)(unsigned long number, int local);
> +	void		(*hugepage_invalidate)(struct mm_struct *mm,
> +					       unsigned char *hpte_slot_array,
> +					       unsigned long addr, int psize);
>  
>  	/* special for kexec, to be called in real mode, linear mapping is
>  	 * destroyed as well */
> diff --git a/arch/powerpc/mm/hash_native_64.c b/arch/powerpc/mm/hash_native_64.c
> index ac84fa6..59f29bf 100644
> --- a/arch/powerpc/mm/hash_native_64.c
> +++ b/arch/powerpc/mm/hash_native_64.c
> @@ -450,6 +450,83 @@ static void native_hpte_invalidate(unsigned long slot, unsigned long vpn,
>  	local_irq_restore(flags);
>  }
>  
> +static void native_hugepage_invalidate(struct mm_struct *mm,
> +				       unsigned char *hpte_slot_array,
> +				       unsigned long addr, int psize)
> +{
> +	int ssize = 0, i;
> +	int lock_tlbie;
> +	struct hash_pte *hptep;
> +	int actual_psize = MMU_PAGE_16M;
> +	unsigned int max_hpte_count, valid;
> +	unsigned long flags, s_addr = addr;
> +	unsigned long hpte_v, want_v, shift;
> +	unsigned long hidx, vpn = 0, vsid, hash, slot;
> +
> +	shift = mmu_psize_defs[psize].shift;
> +	max_hpte_count = HUGE_PAGE_SIZE/(1ul << shift);
> +
> +	local_irq_save(flags);
> +	for (i = 0; i < max_hpte_count; i++) {
> +		/*
> +		 * 8 bits per each hpte entries
> +		 * 000| [ secondary group (one bit) | hidx (3 bits) | valid bit]
> +		 */
> +		valid = hpte_slot_array[i] & 0x1;
> +		if (!valid)
> +			continue;
> +		hidx =  hpte_slot_array[i]  >> 1;
> +
> +		/* get the vpn */
> +		addr = s_addr + (i * (1ul << shift));
> +		if (!is_kernel_addr(addr)) {
> +			ssize = user_segment_size(addr);
> +			vsid = get_vsid(mm->context.id, addr, ssize);
> +			WARN_ON(vsid == 0);
> +		} else {
> +			vsid = get_kernel_vsid(addr, mmu_kernel_ssize);
> +			ssize = mmu_kernel_ssize;
> +		}
> +
> +		vpn = hpt_vpn(addr, vsid, ssize);
> +		hash = hpt_hash(vpn, shift, ssize);
> +		if (hidx & _PTEIDX_SECONDARY)
> +			hash = ~hash;
> +
> +		slot = (hash & htab_hash_mask) * HPTES_PER_GROUP;
> +		slot += hidx & _PTEIDX_GROUP_IX;
> +
> +		hptep = htab_address + slot;
> +		want_v = hpte_encode_avpn(vpn, psize, ssize);
> +		native_lock_hpte(hptep);
> +		hpte_v = hptep->v;
> +
> +		/* Even if we miss, we need to invalidate the TLB */
> +		if (!HPTE_V_COMPARE(hpte_v, want_v) || !(hpte_v & HPTE_V_VALID))
> +			native_unlock_hpte(hptep);
> +		else
> +			/* Invalidate the hpte. NOTE: this also unlocks it */
> +			hptep->v = 0;

Shouldn't you be clearing the entry from the slot_array once it is
invalidated in the hash table?

> +	}
> +	/*
> +	 * Since this is a hugepage, we just need a single tlbie.
> +	 * use the last vpn.
> +	 */
> +	lock_tlbie = !mmu_has_feature(MMU_FTR_LOCKLESS_TLBIE);
> +	if (lock_tlbie)
> +		raw_spin_lock(&native_tlbie_lock);
> +
> +	asm volatile("ptesync":::"memory");
> +	__tlbie(vpn, psize, actual_psize, ssize);
> +	asm volatile("eieio; tlbsync; ptesync":::"memory");
> +
> +	if (lock_tlbie)
> +		raw_spin_unlock(&native_tlbie_lock);
> +
> +	local_irq_restore(flags);
> +}
> +
> +
>  static void hpte_decode(struct hash_pte *hpte, unsigned long slot,
>  			int *psize, int *apsize, int *ssize, unsigned long *vpn)
>  {
> @@ -678,4 +755,5 @@ void __init hpte_init_native(void)
>  	ppc_md.hpte_remove	= native_hpte_remove;
>  	ppc_md.hpte_clear_all	= native_hpte_clear;
>  	ppc_md.flush_hash_range = native_flush_hash_range;
> +	ppc_md.hugepage_invalidate   = native_hugepage_invalidate;
>  }
> diff --git a/arch/powerpc/mm/pgtable.c b/arch/powerpc/mm/pgtable.c
> index fbff062..386cab8 100644
> --- a/arch/powerpc/mm/pgtable.c
> +++ b/arch/powerpc/mm/pgtable.c
> @@ -433,6 +433,7 @@ void hpte_need_hugepage_flush(struct mm_struct *mm, unsigned long addr,
>  {
>  	int ssize, i;
>  	unsigned long s_addr;
> +	int max_hpte_count;
>  	unsigned int psize, valid;
>  	unsigned char *hpte_slot_array;
>  	unsigned long hidx, vpn, vsid, hash, shift, slot;
> @@ -446,12 +447,18 @@ void hpte_need_hugepage_flush(struct mm_struct *mm, unsigned long addr,
>  	 * second half of the PMD
>  	 */
>  	hpte_slot_array = *(char **)(pmdp + PTRS_PER_PMD);
> -
>  	/* get the base page size */
>  	psize = get_slice_psize(mm, s_addr);
> -	shift = mmu_psize_defs[psize].shift;
>  
> -	for (i = 0; i < HUGE_PAGE_SIZE/(1ul << shift); i++) {
> +	if (ppc_md.hugepage_invalidate)
> +		return ppc_md.hugepage_invalidate(mm, hpte_slot_array,
> +						  s_addr, psize);
> +	/*
> +	 * No bluk hpte removal support, invalidate each entry
> +	 */
> +	shift = mmu_psize_defs[psize].shift;
> +	max_hpte_count = HUGE_PAGE_SIZE/(1ul << shift);
> +	for (i = 0; i < max_hpte_count; i++) {
>  		/*
>  		 * 8 bits per each hpte entries
>  		 * 000| [ secondary group (one bit) | hidx (3 bits) | valid bit]
> diff --git a/arch/powerpc/platforms/pseries/lpar.c b/arch/powerpc/platforms/pseries/lpar.c
> index 3daced3..5fcc621 100644
> --- a/arch/powerpc/platforms/pseries/lpar.c
> +++ b/arch/powerpc/platforms/pseries/lpar.c
> @@ -45,6 +45,13 @@
>  #include "plpar_wrappers.h"
>  #include "pseries.h"
>  
> +/* Flag bits for H_BULK_REMOVE */
> +#define HBR_REQUEST	0x4000000000000000UL
> +#define HBR_RESPONSE	0x8000000000000000UL
> +#define HBR_END		0xc000000000000000UL
> +#define HBR_AVPN	0x0200000000000000UL
> +#define HBR_ANDCOND	0x0100000000000000UL
> +
>  
>  /* in hvCall.S */
>  EXPORT_SYMBOL(plpar_hcall);
> @@ -339,6 +346,117 @@ static void pSeries_lpar_hpte_invalidate(unsigned long slot, unsigned long vpn,
>  	BUG_ON(lpar_rc != H_SUCCESS);
>  }
>  
> +/*
> + * Limit iterations holding pSeries_lpar_tlbie_lock to 3. We also need
> + * to make sure that we avoid bouncing the hypervisor tlbie lock.
> + */
> +#define PPC64_HUGE_HPTE_BATCH 12
> +
> +static void __pSeries_lpar_hugepage_invalidate(unsigned long *slot,
> +					     unsigned long *vpn, int count,
> +					     int psize, int ssize)
> +{
> +	unsigned long param[9];
> +	int i = 0, pix = 0, rc;
> +	unsigned long flags = 0;
> +	int lock_tlbie = !mmu_has_feature(MMU_FTR_LOCKLESS_TLBIE);
> +
> +	if (lock_tlbie)
> +		spin_lock_irqsave(&pSeries_lpar_tlbie_lock, flags);
> +
> +	for (i = 0; i < count; i++) {
> +
> +		if (!firmware_has_feature(FW_FEATURE_BULK_REMOVE)) {
> +			pSeries_lpar_hpte_invalidate(slot[i], vpn[i], psize,
> +						     ssize, 0);
> +		} else {
> +			param[pix] = HBR_REQUEST | HBR_AVPN | slot[i];
> +			param[pix+1] = hpte_encode_avpn(vpn[i], psize, ssize);
> +			pix += 2;
> +			if (pix == 8) {
> +				rc = plpar_hcall9(H_BULK_REMOVE, param,
> +						  param[0], param[1], param[2],
> +						  param[3], param[4], param[5],
> +						  param[6], param[7]);
> +				BUG_ON(rc != H_SUCCESS);
> +				pix = 0;
> +			}
> +		}
> +	}
> +	if (pix) {
> +		param[pix] = HBR_END;
> +		rc = plpar_hcall9(H_BULK_REMOVE, param, param[0], param[1],
> +				  param[2], param[3], param[4], param[5],
> +				  param[6], param[7]);
> +		BUG_ON(rc != H_SUCCESS);
> +	}
> +
> +	if (lock_tlbie)
> +		spin_unlock_irqrestore(&pSeries_lpar_tlbie_lock, flags);
> +}
> +
> +static void pSeries_lpar_hugepage_invalidate(struct mm_struct *mm,
> +				       unsigned char *hpte_slot_array,
> +				       unsigned long addr, int psize)
> +{
> +	int ssize = 0, i, index = 0;
> +	unsigned long s_addr = addr;
> +	unsigned int max_hpte_count, valid;
> +	unsigned long vpn_array[PPC64_HUGE_HPTE_BATCH];
> +	unsigned long slot_array[PPC64_HUGE_HPTE_BATCH];

These are really too big to be allocating on the stack.  You'd be
better off going direct from the char slot array to the data structure
for H_BULK_REMOVE, rather than introducing this intermediate structure.

> +	unsigned long shift, hidx, vpn = 0, vsid, hash, slot;
> +
> +	shift = mmu_psize_defs[psize].shift;
> +	max_hpte_count = HUGE_PAGE_SIZE/(1ul << shift);
> +
> +	for (i = 0; i < max_hpte_count; i++) {
> +		/*
> +		 * 8 bits per each hpte entries
> +		 * 000| [ secondary group (one bit) | hidx (3 bits) | valid bit]
> +		 */
> +		valid = hpte_slot_array[i] & 0x1;
> +		if (!valid)
> +			continue;
> +		hidx =  hpte_slot_array[i]  >> 1;
> +
> +		/* get the vpn */
> +		addr = s_addr + (i * (1ul << shift));
> +		if (!is_kernel_addr(addr)) {
> +			ssize = user_segment_size(addr);
> +			vsid = get_vsid(mm->context.id, addr, ssize);
> +			WARN_ON(vsid == 0);
> +		} else {
> +			vsid = get_kernel_vsid(addr, mmu_kernel_ssize);
> +			ssize = mmu_kernel_ssize;
> +		}
> +
> +		vpn = hpt_vpn(addr, vsid, ssize);
> +		hash = hpt_hash(vpn, shift, ssize);
> +		if (hidx & _PTEIDX_SECONDARY)
> +			hash = ~hash;
> +
> +		slot = (hash & htab_hash_mask) * HPTES_PER_GROUP;
> +		slot += hidx & _PTEIDX_GROUP_IX;
> +
> +		slot_array[index] = slot;
> +		vpn_array[index] = vpn;
> +		if (index == PPC64_HUGE_HPTE_BATCH - 1) {
> +			/*
> +			 * Now do a bluk invalidate
> +			 */
> +			__pSeries_lpar_hugepage_invalidate(slot_array,
> +							   vpn_array,
> +							   PPC64_HUGE_HPTE_BATCH,
> +							   psize, ssize);
> +			index = 0;
> +		} else
> +			index++;
> +	}
> +	if (index)
> +		__pSeries_lpar_hugepage_invalidate(slot_array, vpn_array,
> +						   index, psize, ssize);
> +}
> +
>  static void pSeries_lpar_hpte_removebolted(unsigned long ea,
>  					   int psize, int ssize)
>  {
> @@ -354,13 +472,6 @@ static void pSeries_lpar_hpte_removebolted(unsigned long ea,
>  	pSeries_lpar_hpte_invalidate(slot, vpn, psize, ssize, 0);
>  }
>  
> -/* Flag bits for H_BULK_REMOVE */
> -#define HBR_REQUEST	0x4000000000000000UL
> -#define HBR_RESPONSE	0x8000000000000000UL
> -#define HBR_END		0xc000000000000000UL
> -#define HBR_AVPN	0x0200000000000000UL
> -#define HBR_ANDCOND	0x0100000000000000UL
> -
>  /*
>   * Take a spinlock around flushes to avoid bouncing the hypervisor tlbie
>   * lock.
> @@ -446,6 +557,7 @@ void __init hpte_init_lpar(void)
>  	ppc_md.hpte_removebolted = pSeries_lpar_hpte_removebolted;
>  	ppc_md.flush_hash_range	= pSeries_lpar_flush_hash_range;
>  	ppc_md.hpte_clear_all   = pSeries_lpar_hptab_clear;
> +	ppc_md.hugepage_invalidate = pSeries_lpar_hugepage_invalidate;
>  }
>  
>  #ifdef CONFIG_PPC_SMLPAR

-- 
David Gibson			| I'll have my music baroque, and my code
david AT gibson.dropbear.id.au	| minimalist, thank you.  NOT _the_ _other_
				| _way_ _around_!
http://www.ozlabs.org/~dgibson

[-- Attachment #2: Digital signature --]
[-- Type: application/pgp-signature, Size: 198 bytes --]

^ permalink raw reply	[flat|nested] 73+ messages in thread

* Re: [PATCH -V5 21/25] powerpc: Handle hugepage in perf callchain
  2013-04-12  1:34   ` David Gibson
@ 2013-04-12  5:05     ` Aneesh Kumar K.V
  0 siblings, 0 replies; 73+ messages in thread
From: Aneesh Kumar K.V @ 2013-04-12  5:05 UTC (permalink / raw)
  To: David Gibson; +Cc: paulus, linuxppc-dev, linux-mm

David Gibson <dwg@au1.ibm.com> writes:

> On Thu, Apr 04, 2013 at 11:27:59AM +0530, Aneesh Kumar K.V wrote:
>> From: "Aneesh Kumar K.V" <aneesh.kumar@linux.vnet.ibm.com>
>> 
>> Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
>> ---
>>  arch/powerpc/perf/callchain.c |   32 +++++++++++++++++++++-----------
>>  1 file changed, 21 insertions(+), 11 deletions(-)
>> 
>> diff --git a/arch/powerpc/perf/callchain.c b/arch/powerpc/perf/callchain.c
>> index 578cac7..99262ce 100644
>> --- a/arch/powerpc/perf/callchain.c
>> +++ b/arch/powerpc/perf/callchain.c
>> @@ -115,7 +115,7 @@ static int read_user_stack_slow(void __user *ptr, void *ret, int nb)
>>  {
>>  	pgd_t *pgdir;
>>  	pte_t *ptep, pte;
>> -	unsigned shift;
>> +	unsigned shift, hugepage;
>>  	unsigned long addr = (unsigned long) ptr;
>>  	unsigned long offset;
>>  	unsigned long pfn;
>> @@ -125,20 +125,30 @@ static int read_user_stack_slow(void __user *ptr, void *ret, int nb)
>>  	if (!pgdir)
>>  		return -EFAULT;
>>  
>> -	ptep = find_linux_pte_or_hugepte(pgdir, addr, &shift, NULL);
>> +	ptep = find_linux_pte_or_hugepte(pgdir, addr, &shift, &hugepage);
>
> So, this patch pretty much demonstrates that your earlier patch adding
> the optional hugepage argument and making the existing callers pass
> NULL was broken.
>
> Any code which calls this function and doesn't use and handle the
> hugepage return value is horribly broken, so permitting the hugepage
> parameter to be optional is itself broken.
>
> I think instead you need to have an early patch that replaces
> find_linux_pte_or_hugepte with a new, more abstracted interface, so
> that code using it will remain correct when hugepage PMDs become
> possible.


The entire thing could have been simple if we supported only one
hugepage size (this is what sparc ended up doing). I guess we don't want
to do that. Also we want to support 16MB and 16GB, which mean we need
hugepd for 16GB at PGD level. My goal was to keep the hugetlb related
code for both 16MB and 16GB similar and consider THP huge page in a
different bucket.

Let me look at again how best I can simplify find_linux_pte_or_hugepte

-aneehs

^ permalink raw reply	[flat|nested] 73+ messages in thread

* Re: [PATCH -V5 17/25] powerpc/THP: Implement transparent hugepages for ppc64
  2013-04-12  0:51       ` David Gibson
@ 2013-04-12  5:06         ` Aneesh Kumar K.V
  2013-04-12  5:39           ` David Gibson
  0 siblings, 1 reply; 73+ messages in thread
From: Aneesh Kumar K.V @ 2013-04-12  5:06 UTC (permalink / raw)
  To: David Gibson; +Cc: linuxppc-dev, paulus, linux-mm

David Gibson <dwg@au1.ibm.com> writes:

> On Thu, Apr 11, 2013 at 01:10:29PM +0530, Aneesh Kumar K.V wrote:
>> David Gibson <dwg@au1.ibm.com> writes:
>> 
>> > On Thu, Apr 04, 2013 at 11:27:55AM +0530, Aneesh Kumar K.V wrote:
>> >> From: "Aneesh Kumar K.V" <aneesh.kumar@linux.vnet.ibm.com>
>> >> 
>> >> We now have pmd entries covering to 16MB range. To implement THP on powerpc,
>> >> we double the size of PMD. The second half is used to deposit the pgtable (PTE page).
>> >> We also use the depoisted PTE page for tracking the HPTE information. The information
>> >> include [ secondary group | 3 bit hidx | valid ]. We use one byte per each HPTE entry.
>> >> With 16MB hugepage and 64K HPTE we need 256 entries and with 4K HPTE we need
>> >> 4096 entries. Both will fit in a 4K PTE page.
>> >> 
>> >> Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
>> >> ---
>> >>  arch/powerpc/include/asm/page.h              |    2 +-
>> >>  arch/powerpc/include/asm/pgtable-ppc64-64k.h |    3 +-
>> >>  arch/powerpc/include/asm/pgtable-ppc64.h     |    2 +-
>> >>  arch/powerpc/include/asm/pgtable.h           |  240 ++++++++++++++++++++
>> >>  arch/powerpc/mm/pgtable.c                    |  314 ++++++++++++++++++++++++++
>> >>  arch/powerpc/mm/pgtable_64.c                 |   13 ++
>> >>  arch/powerpc/platforms/Kconfig.cputype       |    1 +
>> >>  7 files changed, 572 insertions(+), 3 deletions(-)
>> >> 
>> >> diff --git a/arch/powerpc/include/asm/page.h b/arch/powerpc/include/asm/page.h
>> >> index 38e7ff6..b927447 100644
>> >> --- a/arch/powerpc/include/asm/page.h
>> >> +++ b/arch/powerpc/include/asm/page.h
>> >> @@ -40,7 +40,7 @@
>> >>  #ifdef CONFIG_HUGETLB_PAGE
>> >>  extern unsigned int HPAGE_SHIFT;
>> >>  #else
>> >> -#define HPAGE_SHIFT PAGE_SHIFT
>> >> +#define HPAGE_SHIFT PMD_SHIFT
>> >
>> > That looks like it could break everything except the 64k page size
>> > 64-bit base.
>> 
>> How about 
>
> It seems very dubious to me to have transparent hugepages enabled
> without explicit hugepages in the first place.
>

IMHO once we have THP, we will not be using explicit hugepages unless we
want 16GB pages.

-aneesh

^ permalink raw reply	[flat|nested] 73+ messages in thread

* Re: [PATCH -V5 17/25] powerpc/THP: Implement transparent hugepages for ppc64
  2013-04-12  5:06         ` Aneesh Kumar K.V
@ 2013-04-12  5:39           ` David Gibson
  0 siblings, 0 replies; 73+ messages in thread
From: David Gibson @ 2013-04-12  5:39 UTC (permalink / raw)
  To: Aneesh Kumar K.V; +Cc: paulus, linuxppc-dev, linux-mm

[-- Attachment #1: Type: text/plain, Size: 2696 bytes --]

On Fri, Apr 12, 2013 at 10:36:58AM +0530, Aneesh Kumar K.V wrote:
> David Gibson <dwg@au1.ibm.com> writes:
> 
> > On Thu, Apr 11, 2013 at 01:10:29PM +0530, Aneesh Kumar K.V wrote:
> >> David Gibson <dwg@au1.ibm.com> writes:
> >> 
> >> > On Thu, Apr 04, 2013 at 11:27:55AM +0530, Aneesh Kumar K.V wrote:
> >> >> From: "Aneesh Kumar K.V" <aneesh.kumar@linux.vnet.ibm.com>
> >> >> 
> >> >> We now have pmd entries covering to 16MB range. To implement THP on powerpc,
> >> >> we double the size of PMD. The second half is used to deposit the pgtable (PTE page).
> >> >> We also use the depoisted PTE page for tracking the HPTE information. The information
> >> >> include [ secondary group | 3 bit hidx | valid ]. We use one byte per each HPTE entry.
> >> >> With 16MB hugepage and 64K HPTE we need 256 entries and with 4K HPTE we need
> >> >> 4096 entries. Both will fit in a 4K PTE page.
> >> >> 
> >> >> Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
> >> >> ---
> >> >>  arch/powerpc/include/asm/page.h              |    2 +-
> >> >>  arch/powerpc/include/asm/pgtable-ppc64-64k.h |    3 +-
> >> >>  arch/powerpc/include/asm/pgtable-ppc64.h     |    2 +-
> >> >>  arch/powerpc/include/asm/pgtable.h           |  240 ++++++++++++++++++++
> >> >>  arch/powerpc/mm/pgtable.c                    |  314 ++++++++++++++++++++++++++
> >> >>  arch/powerpc/mm/pgtable_64.c                 |   13 ++
> >> >>  arch/powerpc/platforms/Kconfig.cputype       |    1 +
> >> >>  7 files changed, 572 insertions(+), 3 deletions(-)
> >> >> 
> >> >> diff --git a/arch/powerpc/include/asm/page.h b/arch/powerpc/include/asm/page.h
> >> >> index 38e7ff6..b927447 100644
> >> >> --- a/arch/powerpc/include/asm/page.h
> >> >> +++ b/arch/powerpc/include/asm/page.h
> >> >> @@ -40,7 +40,7 @@
> >> >>  #ifdef CONFIG_HUGETLB_PAGE
> >> >>  extern unsigned int HPAGE_SHIFT;
> >> >>  #else
> >> >> -#define HPAGE_SHIFT PAGE_SHIFT
> >> >> +#define HPAGE_SHIFT PMD_SHIFT
> >> >
> >> > That looks like it could break everything except the 64k page size
> >> > 64-bit base.
> >> 
> >> How about 
> >
> > It seems very dubious to me to have transparent hugepages enabled
> > without explicit hugepages in the first place.
> >
> 
> IMHO once we have THP, we will not be using explicit hugepages unless we
> want 16GB pages.

We still can't go breaking the combination in the interim.  Especially
if users are already in the habit of invoking things with
libhugetlbfs.

-- 
David Gibson			| I'll have my music baroque, and my code
david AT gibson.dropbear.id.au	| minimalist, thank you.  NOT _the_ _other_
				| _way_ _around_!
http://www.ozlabs.org/~dgibson

[-- Attachment #2: Digital signature --]
[-- Type: application/pgp-signature, Size: 198 bytes --]

^ permalink raw reply	[flat|nested] 73+ messages in thread

* Re: [PATCH -V5 24/25] powerpc: Optimize hugepage invalidate
  2013-04-12  4:21   ` David Gibson
@ 2013-04-14 10:02     ` Aneesh Kumar K.V
  2013-04-15  1:18       ` David Gibson
  0 siblings, 1 reply; 73+ messages in thread
From: Aneesh Kumar K.V @ 2013-04-14 10:02 UTC (permalink / raw)
  To: David Gibson; +Cc: paulus, linuxppc-dev, linux-mm

David Gibson <dwg@au1.ibm.com> writes:

> On Thu, Apr 04, 2013 at 11:28:02AM +0530, Aneesh Kumar K.V wrote:
>> From: "Aneesh Kumar K.V" <aneesh.kumar@linux.vnet.ibm.com>
>> 
>> Hugepage invalidate involves invalidating multiple hpte entries.
>> Optimize the operation using H_BULK_REMOVE on lpar platforms.
>> On native, reduce the number of tlb flush.
>> 
>> Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
>> ---
>>  arch/powerpc/include/asm/machdep.h    |    3 +
>>  arch/powerpc/mm/hash_native_64.c      |   78 ++++++++++++++++++++
>>  arch/powerpc/mm/pgtable.c             |   13 +++-
>>  arch/powerpc/platforms/pseries/lpar.c |  126 +++++++++++++++++++++++++++++++--
>>  4 files changed, 210 insertions(+), 10 deletions(-)
>> 
>> diff --git a/arch/powerpc/include/asm/machdep.h b/arch/powerpc/include/asm/machdep.h
>> index 6cee6e0..3bc7816 100644
>> --- a/arch/powerpc/include/asm/machdep.h
>> +++ b/arch/powerpc/include/asm/machdep.h
>> @@ -56,6 +56,9 @@ struct machdep_calls {
>>  	void            (*hpte_removebolted)(unsigned long ea,
>>  					     int psize, int ssize);
>>  	void		(*flush_hash_range)(unsigned long number, int local);
>> +	void		(*hugepage_invalidate)(struct mm_struct *mm,
>> +					       unsigned char *hpte_slot_array,
>> +					       unsigned long addr, int psize);
>>  
>>  	/* special for kexec, to be called in real mode, linear mapping is
>>  	 * destroyed as well */
>> diff --git a/arch/powerpc/mm/hash_native_64.c b/arch/powerpc/mm/hash_native_64.c
>> index ac84fa6..59f29bf 100644
>> --- a/arch/powerpc/mm/hash_native_64.c
>> +++ b/arch/powerpc/mm/hash_native_64.c
>> @@ -450,6 +450,83 @@ static void native_hpte_invalidate(unsigned long slot, unsigned long vpn,
>>  	local_irq_restore(flags);
>>  }
>>  
>> +static void native_hugepage_invalidate(struct mm_struct *mm,
>> +				       unsigned char *hpte_slot_array,
>> +				       unsigned long addr, int psize)
>> +{
>> +	int ssize = 0, i;
>> +	int lock_tlbie;
>> +	struct hash_pte *hptep;
>> +	int actual_psize = MMU_PAGE_16M;
>> +	unsigned int max_hpte_count, valid;
>> +	unsigned long flags, s_addr = addr;
>> +	unsigned long hpte_v, want_v, shift;
>> +	unsigned long hidx, vpn = 0, vsid, hash, slot;
>> +
>> +	shift = mmu_psize_defs[psize].shift;
>> +	max_hpte_count = HUGE_PAGE_SIZE/(1ul << shift);
>> +
>> +	local_irq_save(flags);
>> +	for (i = 0; i < max_hpte_count; i++) {
>> +		/*
>> +		 * 8 bits per each hpte entries
>> +		 * 000| [ secondary group (one bit) | hidx (3 bits) | valid bit]
>> +		 */
>> +		valid = hpte_slot_array[i] & 0x1;
>> +		if (!valid)
>> +			continue;
>> +		hidx =  hpte_slot_array[i]  >> 1;
>> +
>> +		/* get the vpn */
>> +		addr = s_addr + (i * (1ul << shift));
>> +		if (!is_kernel_addr(addr)) {
>> +			ssize = user_segment_size(addr);
>> +			vsid = get_vsid(mm->context.id, addr, ssize);
>> +			WARN_ON(vsid == 0);
>> +		} else {
>> +			vsid = get_kernel_vsid(addr, mmu_kernel_ssize);
>> +			ssize = mmu_kernel_ssize;
>> +		}
>> +
>> +		vpn = hpt_vpn(addr, vsid, ssize);
>> +		hash = hpt_hash(vpn, shift, ssize);
>> +		if (hidx & _PTEIDX_SECONDARY)
>> +			hash = ~hash;
>> +
>> +		slot = (hash & htab_hash_mask) * HPTES_PER_GROUP;
>> +		slot += hidx & _PTEIDX_GROUP_IX;
>> +
>> +		hptep = htab_address + slot;
>> +		want_v = hpte_encode_avpn(vpn, psize, ssize);
>> +		native_lock_hpte(hptep);
>> +		hpte_v = hptep->v;
>> +
>> +		/* Even if we miss, we need to invalidate the TLB */
>> +		if (!HPTE_V_COMPARE(hpte_v, want_v) || !(hpte_v & HPTE_V_VALID))
>> +			native_unlock_hpte(hptep);
>> +		else
>> +			/* Invalidate the hpte. NOTE: this also unlocks it */
>> +			hptep->v = 0;
>
> Shouldn't you be clearing the entry from the slot_array once it is
> invalidated in the hash table?

We don't need to do that. We should be fine even if hptes get
invalidated under us. Also inorder to update slot_array i will have to
mark the corresponding hpte busy, so that we can ensure nobody is
looking at the slot array.

>
>> +	}
>> +	/*
>> +	 * Since this is a hugepage, we just need a single tlbie.
>> +	 * use the last vpn.
>> +	 */
>> +	lock_tlbie = !mmu_has_feature(MMU_FTR_LOCKLESS_TLBIE);
>> +	if (lock_tlbie)
>> +		raw_spin_lock(&native_tlbie_lock);
>> +
>> +	asm volatile("ptesync":::"memory");
>> +	__tlbie(vpn, psize, actual_psize, ssize);
>> +	asm volatile("eieio; tlbsync; ptesync":::"memory");
>> +
>> +	if (lock_tlbie)
>> +		raw_spin_unlock(&native_tlbie_lock);
>> +
>> +	local_irq_restore(flags);
>> +}
>> +
>> +
>>  static void hpte_decode(struct hash_pte *hpte, unsigned long slot,
>>  			int *psize, int *apsize, int *ssize, unsigned long *vpn)
>>  {
>> @@ -678,4 +755,5 @@ void __init hpte_init_native(void)
>>  	ppc_md.hpte_remove	= native_hpte_remove;
>>  	ppc_md.hpte_clear_all	= native_hpte_clear;
>>  	ppc_md.flush_hash_range = native_flush_hash_range;
>> +	ppc_md.hugepage_invalidate   = native_hugepage_invalidate;
>>  }
>> diff --git a/arch/powerpc/mm/pgtable.c b/arch/powerpc/mm/pgtable.c
>> index fbff062..386cab8 100644
>> --- a/arch/powerpc/mm/pgtable.c
>> +++ b/arch/powerpc/mm/pgtable.c
>> @@ -433,6 +433,7 @@ void hpte_need_hugepage_flush(struct mm_struct *mm, unsigned long addr,
>>  {
>>  	int ssize, i;
>>  	unsigned long s_addr;
>> +	int max_hpte_count;
>>  	unsigned int psize, valid;
>>  	unsigned char *hpte_slot_array;
>>  	unsigned long hidx, vpn, vsid, hash, shift, slot;
>> @@ -446,12 +447,18 @@ void hpte_need_hugepage_flush(struct mm_struct *mm, unsigned long addr,
>>  	 * second half of the PMD
>>  	 */
>>  	hpte_slot_array = *(char **)(pmdp + PTRS_PER_PMD);
>> -
>>  	/* get the base page size */
>>  	psize = get_slice_psize(mm, s_addr);
>> -	shift = mmu_psize_defs[psize].shift;
>>  
>> -	for (i = 0; i < HUGE_PAGE_SIZE/(1ul << shift); i++) {
>> +	if (ppc_md.hugepage_invalidate)
>> +		return ppc_md.hugepage_invalidate(mm, hpte_slot_array,
>> +						  s_addr, psize);
>> +	/*
>> +	 * No bluk hpte removal support, invalidate each entry
>> +	 */
>> +	shift = mmu_psize_defs[psize].shift;
>> +	max_hpte_count = HUGE_PAGE_SIZE/(1ul << shift);
>> +	for (i = 0; i < max_hpte_count; i++) {
>>  		/*
>>  		 * 8 bits per each hpte entries
>>  		 * 000| [ secondary group (one bit) | hidx (3 bits) | valid bit]
>> diff --git a/arch/powerpc/platforms/pseries/lpar.c b/arch/powerpc/platforms/pseries/lpar.c
>> index 3daced3..5fcc621 100644
>> --- a/arch/powerpc/platforms/pseries/lpar.c
>> +++ b/arch/powerpc/platforms/pseries/lpar.c
>> @@ -45,6 +45,13 @@
>>  #include "plpar_wrappers.h"
>>  #include "pseries.h"
>>  
>> +/* Flag bits for H_BULK_REMOVE */
>> +#define HBR_REQUEST	0x4000000000000000UL
>> +#define HBR_RESPONSE	0x8000000000000000UL
>> +#define HBR_END		0xc000000000000000UL
>> +#define HBR_AVPN	0x0200000000000000UL
>> +#define HBR_ANDCOND	0x0100000000000000UL
>> +
>>  
>>  /* in hvCall.S */
>>  EXPORT_SYMBOL(plpar_hcall);
>> @@ -339,6 +346,117 @@ static void pSeries_lpar_hpte_invalidate(unsigned long slot, unsigned long vpn,
>>  	BUG_ON(lpar_rc != H_SUCCESS);
>>  }
>>  
>> +/*
>> + * Limit iterations holding pSeries_lpar_tlbie_lock to 3. We also need
>> + * to make sure that we avoid bouncing the hypervisor tlbie lock.
>> + */
>> +#define PPC64_HUGE_HPTE_BATCH 12
>> +
>> +static void __pSeries_lpar_hugepage_invalidate(unsigned long *slot,
>> +					     unsigned long *vpn, int count,
>> +					     int psize, int ssize)
>> +{
>> +	unsigned long param[9];
>> +	int i = 0, pix = 0, rc;
>> +	unsigned long flags = 0;
>> +	int lock_tlbie = !mmu_has_feature(MMU_FTR_LOCKLESS_TLBIE);
>> +
>> +	if (lock_tlbie)
>> +		spin_lock_irqsave(&pSeries_lpar_tlbie_lock, flags);
>> +
>> +	for (i = 0; i < count; i++) {
>> +
>> +		if (!firmware_has_feature(FW_FEATURE_BULK_REMOVE)) {
>> +			pSeries_lpar_hpte_invalidate(slot[i], vpn[i], psize,
>> +						     ssize, 0);
>> +		} else {
>> +			param[pix] = HBR_REQUEST | HBR_AVPN | slot[i];
>> +			param[pix+1] = hpte_encode_avpn(vpn[i], psize, ssize);
>> +			pix += 2;
>> +			if (pix == 8) {
>> +				rc = plpar_hcall9(H_BULK_REMOVE, param,
>> +						  param[0], param[1], param[2],
>> +						  param[3], param[4], param[5],
>> +						  param[6], param[7]);
>> +				BUG_ON(rc != H_SUCCESS);
>> +				pix = 0;
>> +			}
>> +		}
>> +	}
>> +	if (pix) {
>> +		param[pix] = HBR_END;
>> +		rc = plpar_hcall9(H_BULK_REMOVE, param, param[0], param[1],
>> +				  param[2], param[3], param[4], param[5],
>> +				  param[6], param[7]);
>> +		BUG_ON(rc != H_SUCCESS);
>> +	}
>> +
>> +	if (lock_tlbie)
>> +		spin_unlock_irqrestore(&pSeries_lpar_tlbie_lock, flags);
>> +}
>> +
>> +static void pSeries_lpar_hugepage_invalidate(struct mm_struct *mm,
>> +				       unsigned char *hpte_slot_array,
>> +				       unsigned long addr, int psize)
>> +{
>> +	int ssize = 0, i, index = 0;
>> +	unsigned long s_addr = addr;
>> +	unsigned int max_hpte_count, valid;
>> +	unsigned long vpn_array[PPC64_HUGE_HPTE_BATCH];
>> +	unsigned long slot_array[PPC64_HUGE_HPTE_BATCH];
>
> These are really too big to be allocating on the stack.  You'd be
> better off going direct from the char slot array to the data structure
> for H_BULK_REMOVE, rather than introducing this intermediate
> structure.

The reason i wanted to do that was to make sure i don't lock/unlock
pSeries_lpar_tlbie_lock that frequently, ie, for ever H_BULK_REMOVE.
The total size taken by both the array is only 192 bytes. Is that big
enough to create trouble ?

>
>> +	unsigned long shift, hidx, vpn = 0, vsid, hash, slot;
>> +
>> +	shift = mmu_psize_defs[psize].shift;
>> +	max_hpte_count = HUGE_PAGE_SIZE/(1ul << shift);
>> +
>> +	for (i = 0; i < max_hpte_count; i++) {
>> +		/*
>> +		 * 8 bits per each hpte entries
>> +		 * 000| [ secondary group (one bit) | hidx (3 bits) | valid bit]
>> +		 */
>> +		valid = hpte_slot_array[i] & 0x1;
>> +		if (!valid)
>> +			continue;
>> +		hidx =  hpte_slot_array[i]  >> 1;
>> +
>> +		/* get the vpn */
>> +		addr = s_addr + (i * (1ul << shift));
>> +		if (!is_kernel_addr(addr)) {
>> +			ssize = user_segment_size(addr);
>> +			vsid = get_vsid(mm->context.id, addr, ssize);
>> +			WARN_ON(vsid == 0);
>> +		} else {
>> +			vsid = get_kernel_vsid(addr, mmu_kernel_ssize);
>> +			ssize = mmu_kernel_ssize;
>> +		}
>> +
>> +		vpn = hpt_vpn(addr, vsid, ssize);
>> +		hash = hpt_hash(vpn, shift, ssize);
>> +		if (hidx & _PTEIDX_SECONDARY)
>> +			hash = ~hash;
>> +
>> +		slot = (hash & htab_hash_mask) * HPTES_PER_GROUP;
>> +		slot += hidx & _PTEIDX_GROUP_IX;
>> +
>> +		slot_array[index] = slot;
>> +		vpn_array[index] = vpn;
>> +		if (index == PPC64_HUGE_HPTE_BATCH - 1) {
>> +			/*
>> +			 * Now do a bluk invalidate
>> +			 */
>> +			__pSeries_lpar_hugepage_invalidate(slot_array,
>> +							   vpn_array,
>> +							   PPC64_HUGE_HPTE_BATCH,
>> +							   psize, ssize);
>> +			index = 0;
>> +		} else
>> +			index++;
>> +	}
>> +	if (index)
>> +		__pSeries_lpar_hugepage_invalidate(slot_array, vpn_array,
>> +						   index, psize, ssize);
>> +}
>> +
>>  static void pSeries_lpar_hpte_removebolted(unsigned long ea,
>>  					   int psize, int ssize)
>>  {
>> @@ -354,13 +472,6 @@ static void pSeries_lpar_hpte_removebolted(unsigned long ea,
>>  	pSeries_lpar_hpte_invalidate(slot, vpn, psize, ssize, 0);
>>  }
>>  
>> -/* Flag bits for H_BULK_REMOVE */
>> -#define HBR_REQUEST	0x4000000000000000UL
>> -#define HBR_RESPONSE	0x8000000000000000UL
>> -#define HBR_END		0xc000000000000000UL
>> -#define HBR_AVPN	0x0200000000000000UL
>> -#define HBR_ANDCOND	0x0100000000000000UL
>> -
>>  /*
>>   * Take a spinlock around flushes to avoid bouncing the hypervisor tlbie
>>   * lock.
>> @@ -446,6 +557,7 @@ void __init hpte_init_lpar(void)
>>  	ppc_md.hpte_removebolted = pSeries_lpar_hpte_removebolted;
>>  	ppc_md.flush_hash_range	= pSeries_lpar_flush_hash_range;
>>  	ppc_md.hpte_clear_all   = pSeries_lpar_hptab_clear;
>> +	ppc_md.hugepage_invalidate = pSeries_lpar_hugepage_invalidate;
>>  }
>>  
>>  #ifdef CONFIG_PPC_SMLPAR

-aneesh

^ permalink raw reply	[flat|nested] 73+ messages in thread

* Re: [PATCH -V5 24/25] powerpc: Optimize hugepage invalidate
  2013-04-14 10:02     ` Aneesh Kumar K.V
@ 2013-04-15  1:18       ` David Gibson
  0 siblings, 0 replies; 73+ messages in thread
From: David Gibson @ 2013-04-15  1:18 UTC (permalink / raw)
  To: Aneesh Kumar K.V; +Cc: linuxppc-dev, paulus, linux-mm

[-- Attachment #1: Type: text/plain, Size: 10497 bytes --]

On Sun, Apr 14, 2013 at 03:32:12PM +0530, Aneesh Kumar K.V wrote:
> David Gibson <dwg@au1.ibm.com> writes:
> 
> > On Thu, Apr 04, 2013 at 11:28:02AM +0530, Aneesh Kumar K.V wrote:
> >> From: "Aneesh Kumar K.V" <aneesh.kumar@linux.vnet.ibm.com>
> >> 
> >> Hugepage invalidate involves invalidating multiple hpte entries.
> >> Optimize the operation using H_BULK_REMOVE on lpar platforms.
> >> On native, reduce the number of tlb flush.
> >> 
> >> Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
> >> ---
> >>  arch/powerpc/include/asm/machdep.h    |    3 +
> >>  arch/powerpc/mm/hash_native_64.c      |   78 ++++++++++++++++++++
> >>  arch/powerpc/mm/pgtable.c             |   13 +++-
> >>  arch/powerpc/platforms/pseries/lpar.c |  126 +++++++++++++++++++++++++++++++--
> >>  4 files changed, 210 insertions(+), 10 deletions(-)
> >> 
> >> diff --git a/arch/powerpc/include/asm/machdep.h b/arch/powerpc/include/asm/machdep.h
> >> index 6cee6e0..3bc7816 100644
> >> --- a/arch/powerpc/include/asm/machdep.h
> >> +++ b/arch/powerpc/include/asm/machdep.h
> >> @@ -56,6 +56,9 @@ struct machdep_calls {
> >>  	void            (*hpte_removebolted)(unsigned long ea,
> >>  					     int psize, int ssize);
> >>  	void		(*flush_hash_range)(unsigned long number, int local);
> >> +	void		(*hugepage_invalidate)(struct mm_struct *mm,
> >> +					       unsigned char *hpte_slot_array,
> >> +					       unsigned long addr, int psize);
> >>  
> >>  	/* special for kexec, to be called in real mode, linear mapping is
> >>  	 * destroyed as well */
> >> diff --git a/arch/powerpc/mm/hash_native_64.c b/arch/powerpc/mm/hash_native_64.c
> >> index ac84fa6..59f29bf 100644
> >> --- a/arch/powerpc/mm/hash_native_64.c
> >> +++ b/arch/powerpc/mm/hash_native_64.c
> >> @@ -450,6 +450,83 @@ static void native_hpte_invalidate(unsigned long slot, unsigned long vpn,
> >>  	local_irq_restore(flags);
> >>  }
> >>  
> >> +static void native_hugepage_invalidate(struct mm_struct *mm,
> >> +				       unsigned char *hpte_slot_array,
> >> +				       unsigned long addr, int psize)
> >> +{
> >> +	int ssize = 0, i;
> >> +	int lock_tlbie;
> >> +	struct hash_pte *hptep;
> >> +	int actual_psize = MMU_PAGE_16M;
> >> +	unsigned int max_hpte_count, valid;
> >> +	unsigned long flags, s_addr = addr;
> >> +	unsigned long hpte_v, want_v, shift;
> >> +	unsigned long hidx, vpn = 0, vsid, hash, slot;
> >> +
> >> +	shift = mmu_psize_defs[psize].shift;
> >> +	max_hpte_count = HUGE_PAGE_SIZE/(1ul << shift);
> >> +
> >> +	local_irq_save(flags);
> >> +	for (i = 0; i < max_hpte_count; i++) {
> >> +		/*
> >> +		 * 8 bits per each hpte entries
> >> +		 * 000| [ secondary group (one bit) | hidx (3 bits) | valid bit]
> >> +		 */
> >> +		valid = hpte_slot_array[i] & 0x1;
> >> +		if (!valid)
> >> +			continue;
> >> +		hidx =  hpte_slot_array[i]  >> 1;
> >> +
> >> +		/* get the vpn */
> >> +		addr = s_addr + (i * (1ul << shift));
> >> +		if (!is_kernel_addr(addr)) {
> >> +			ssize = user_segment_size(addr);
> >> +			vsid = get_vsid(mm->context.id, addr, ssize);
> >> +			WARN_ON(vsid == 0);
> >> +		} else {
> >> +			vsid = get_kernel_vsid(addr, mmu_kernel_ssize);
> >> +			ssize = mmu_kernel_ssize;
> >> +		}
> >> +
> >> +		vpn = hpt_vpn(addr, vsid, ssize);
> >> +		hash = hpt_hash(vpn, shift, ssize);
> >> +		if (hidx & _PTEIDX_SECONDARY)
> >> +			hash = ~hash;
> >> +
> >> +		slot = (hash & htab_hash_mask) * HPTES_PER_GROUP;
> >> +		slot += hidx & _PTEIDX_GROUP_IX;
> >> +
> >> +		hptep = htab_address + slot;
> >> +		want_v = hpte_encode_avpn(vpn, psize, ssize);
> >> +		native_lock_hpte(hptep);
> >> +		hpte_v = hptep->v;
> >> +
> >> +		/* Even if we miss, we need to invalidate the TLB */
> >> +		if (!HPTE_V_COMPARE(hpte_v, want_v) || !(hpte_v & HPTE_V_VALID))
> >> +			native_unlock_hpte(hptep);
> >> +		else
> >> +			/* Invalidate the hpte. NOTE: this also unlocks it */
> >> +			hptep->v = 0;
> >
> > Shouldn't you be clearing the entry from the slot_array once it is
> > invalidated in the hash table?
> 
> We don't need to do that. We should be fine even if hptes get
> invalidated under us. Also inorder to update slot_array i will have to
> mark the corresponding hpte busy, so that we can ensure nobody is
> looking at the slot array.

Hm, ok.

> >> +	}
> >> +	/*
> >> +	 * Since this is a hugepage, we just need a single tlbie.
> >> +	 * use the last vpn.
> >> +	 */
> >> +	lock_tlbie = !mmu_has_feature(MMU_FTR_LOCKLESS_TLBIE);
> >> +	if (lock_tlbie)
> >> +		raw_spin_lock(&native_tlbie_lock);
> >> +
> >> +	asm volatile("ptesync":::"memory");
> >> +	__tlbie(vpn, psize, actual_psize, ssize);
> >> +	asm volatile("eieio; tlbsync; ptesync":::"memory");
> >> +
> >> +	if (lock_tlbie)
> >> +		raw_spin_unlock(&native_tlbie_lock);
> >> +
> >> +	local_irq_restore(flags);
> >> +}
> >> +
> >> +
> >>  static void hpte_decode(struct hash_pte *hpte, unsigned long slot,
> >>  			int *psize, int *apsize, int *ssize, unsigned long *vpn)
> >>  {
> >> @@ -678,4 +755,5 @@ void __init hpte_init_native(void)
> >>  	ppc_md.hpte_remove	= native_hpte_remove;
> >>  	ppc_md.hpte_clear_all	= native_hpte_clear;
> >>  	ppc_md.flush_hash_range = native_flush_hash_range;
> >> +	ppc_md.hugepage_invalidate   = native_hugepage_invalidate;
> >>  }
> >> diff --git a/arch/powerpc/mm/pgtable.c b/arch/powerpc/mm/pgtable.c
> >> index fbff062..386cab8 100644
> >> --- a/arch/powerpc/mm/pgtable.c
> >> +++ b/arch/powerpc/mm/pgtable.c
> >> @@ -433,6 +433,7 @@ void hpte_need_hugepage_flush(struct mm_struct *mm, unsigned long addr,
> >>  {
> >>  	int ssize, i;
> >>  	unsigned long s_addr;
> >> +	int max_hpte_count;
> >>  	unsigned int psize, valid;
> >>  	unsigned char *hpte_slot_array;
> >>  	unsigned long hidx, vpn, vsid, hash, shift, slot;
> >> @@ -446,12 +447,18 @@ void hpte_need_hugepage_flush(struct mm_struct *mm, unsigned long addr,
> >>  	 * second half of the PMD
> >>  	 */
> >>  	hpte_slot_array = *(char **)(pmdp + PTRS_PER_PMD);
> >> -
> >>  	/* get the base page size */
> >>  	psize = get_slice_psize(mm, s_addr);
> >> -	shift = mmu_psize_defs[psize].shift;
> >>  
> >> -	for (i = 0; i < HUGE_PAGE_SIZE/(1ul << shift); i++) {
> >> +	if (ppc_md.hugepage_invalidate)
> >> +		return ppc_md.hugepage_invalidate(mm, hpte_slot_array,
> >> +						  s_addr, psize);
> >> +	/*
> >> +	 * No bluk hpte removal support, invalidate each entry
> >> +	 */
> >> +	shift = mmu_psize_defs[psize].shift;
> >> +	max_hpte_count = HUGE_PAGE_SIZE/(1ul << shift);
> >> +	for (i = 0; i < max_hpte_count; i++) {
> >>  		/*
> >>  		 * 8 bits per each hpte entries
> >>  		 * 000| [ secondary group (one bit) | hidx (3 bits) | valid bit]
> >> diff --git a/arch/powerpc/platforms/pseries/lpar.c b/arch/powerpc/platforms/pseries/lpar.c
> >> index 3daced3..5fcc621 100644
> >> --- a/arch/powerpc/platforms/pseries/lpar.c
> >> +++ b/arch/powerpc/platforms/pseries/lpar.c
> >> @@ -45,6 +45,13 @@
> >>  #include "plpar_wrappers.h"
> >>  #include "pseries.h"
> >>  
> >> +/* Flag bits for H_BULK_REMOVE */
> >> +#define HBR_REQUEST	0x4000000000000000UL
> >> +#define HBR_RESPONSE	0x8000000000000000UL
> >> +#define HBR_END		0xc000000000000000UL
> >> +#define HBR_AVPN	0x0200000000000000UL
> >> +#define HBR_ANDCOND	0x0100000000000000UL
> >> +
> >>  
> >>  /* in hvCall.S */
> >>  EXPORT_SYMBOL(plpar_hcall);
> >> @@ -339,6 +346,117 @@ static void pSeries_lpar_hpte_invalidate(unsigned long slot, unsigned long vpn,
> >>  	BUG_ON(lpar_rc != H_SUCCESS);
> >>  }
> >>  
> >> +/*
> >> + * Limit iterations holding pSeries_lpar_tlbie_lock to 3. We also need
> >> + * to make sure that we avoid bouncing the hypervisor tlbie lock.
> >> + */
> >> +#define PPC64_HUGE_HPTE_BATCH 12
> >> +
> >> +static void __pSeries_lpar_hugepage_invalidate(unsigned long *slot,
> >> +					     unsigned long *vpn, int count,
> >> +					     int psize, int ssize)
> >> +{
> >> +	unsigned long param[9];
> >> +	int i = 0, pix = 0, rc;
> >> +	unsigned long flags = 0;
> >> +	int lock_tlbie = !mmu_has_feature(MMU_FTR_LOCKLESS_TLBIE);
> >> +
> >> +	if (lock_tlbie)
> >> +		spin_lock_irqsave(&pSeries_lpar_tlbie_lock, flags);
> >> +
> >> +	for (i = 0; i < count; i++) {
> >> +
> >> +		if (!firmware_has_feature(FW_FEATURE_BULK_REMOVE)) {
> >> +			pSeries_lpar_hpte_invalidate(slot[i], vpn[i], psize,
> >> +						     ssize, 0);
> >> +		} else {
> >> +			param[pix] = HBR_REQUEST | HBR_AVPN | slot[i];
> >> +			param[pix+1] = hpte_encode_avpn(vpn[i], psize, ssize);
> >> +			pix += 2;
> >> +			if (pix == 8) {
> >> +				rc = plpar_hcall9(H_BULK_REMOVE, param,
> >> +						  param[0], param[1], param[2],
> >> +						  param[3], param[4], param[5],
> >> +						  param[6], param[7]);
> >> +				BUG_ON(rc != H_SUCCESS);
> >> +				pix = 0;
> >> +			}
> >> +		}
> >> +	}
> >> +	if (pix) {
> >> +		param[pix] = HBR_END;
> >> +		rc = plpar_hcall9(H_BULK_REMOVE, param, param[0], param[1],
> >> +				  param[2], param[3], param[4], param[5],
> >> +				  param[6], param[7]);
> >> +		BUG_ON(rc != H_SUCCESS);
> >> +	}
> >> +
> >> +	if (lock_tlbie)
> >> +		spin_unlock_irqrestore(&pSeries_lpar_tlbie_lock, flags);
> >> +}
> >> +
> >> +static void pSeries_lpar_hugepage_invalidate(struct mm_struct *mm,
> >> +				       unsigned char *hpte_slot_array,
> >> +				       unsigned long addr, int psize)
> >> +{
> >> +	int ssize = 0, i, index = 0;
> >> +	unsigned long s_addr = addr;
> >> +	unsigned int max_hpte_count, valid;
> >> +	unsigned long vpn_array[PPC64_HUGE_HPTE_BATCH];
> >> +	unsigned long slot_array[PPC64_HUGE_HPTE_BATCH];
> >
> > These are really too big to be allocating on the stack.  You'd be
> > better off going direct from the char slot array to the data structure
> > for H_BULK_REMOVE, rather than introducing this intermediate
> > structure.
> 
> The reason i wanted to do that was to make sure i don't lock/unlock
> pSeries_lpar_tlbie_lock that frequently, ie, for ever H_BULK_REMOVE.
> The total size taken by both the array is only 192 bytes. Is that big
> enough to create trouble ?

Oh, sorry, I missed the batch invalidate.  I think 192 bytes is
borderline - Paul or Ben might know better.

-- 
David Gibson			| I'll have my music baroque, and my code
david AT gibson.dropbear.id.au	| minimalist, thank you.  NOT _the_ _other_
				| _way_ _around_!
http://www.ozlabs.org/~dgibson

[-- Attachment #2: Digital signature --]
[-- Type: application/pgp-signature, Size: 198 bytes --]

^ permalink raw reply	[flat|nested] 73+ messages in thread

* Re: [PATCH -V5 00/25] THP support for PPC64
  2013-04-04  5:57 [PATCH -V5 00/25] THP support for PPC64 Aneesh Kumar K.V
                   ` (26 preceding siblings ...)
  2013-04-04  6:14 ` Simon Jeons
@ 2013-04-19  1:55 ` Simon Jeons
  27 siblings, 0 replies; 73+ messages in thread
From: Simon Jeons @ 2013-04-19  1:55 UTC (permalink / raw)
  To: Aneesh Kumar K.V; +Cc: paulus, linuxppc-dev, linux-mm

Hi Aneesh,
On 04/04/2013 01:57 PM, Aneesh Kumar K.V wrote:
> Hi,
>
> This patchset adds transparent hugepage support for PPC64.
>
> TODO:
> * hash preload support in update_mmu_cache_pmd (we don't do that for hugetlb)
>
> Some numbers:
>
> The latency measurements code from Anton  found at
> http://ozlabs.org/~anton/junkcode/latency2001.c
>
> THP disabled 64K page size
> ------------------------
> [root@llmp24l02 ~]# ./latency2001 8G
>   8589934592    731.73 cycles    205.77 ns
> [root@llmp24l02 ~]# ./latency2001 8G
>   8589934592    743.39 cycles    209.05 ns
> [root@llmp24l02 ~]#
>
> THP disabled large page via hugetlbfs
> -------------------------------------
> [root@llmp24l02 ~]# ./latency2001  -l 8G
>   8589934592    416.09 cycles    117.01 ns
> [root@llmp24l02 ~]# ./latency2001  -l 8G
>   8589934592    415.74 cycles    116.91 ns
>
> THP enabled 64K page size.
> ----------------
> [root@llmp24l02 ~]# ./latency2001 8G
>   8589934592    405.07 cycles    113.91 ns
> [root@llmp24l02 ~]# ./latency2001 8G
>   8589934592    411.82 cycles    115.81 ns
> [root@llmp24l02 ~]#
>
> We are close to hugetlbfs in latency and we can achieve this with zero
> config/page reservation. Most of the allocations above are fault allocated.
>
> Another test that does 50000000 random access over 1GB area goes from
> 2.65 seconds to 1.07 seconds with this patchset.
>
> split_huge_page impact:
> ---------------------
> To look at the performance impact of large page invalidate, I tried the below
> experiment. The test involved, accessing a large contiguous region of memory
> location as below
>
>      for (i = 0; i < size; i += PAGE_SIZE)
> 	data[i] = i;
>
> We wanted to access the data in sequential order so that we look at the
> worst case THP performance. Accesing the data in sequential order implies
> we have the Page table cached and overhead of TLB miss is as minimal as
> possible. We also don't touch the entire page, because that can result in
> cache evict.
>
> After we touched the full range as above, we now call mprotect on each
> of that page. A mprotect will result in a hugepage split. This should
> allow us to measure the impact of hugepage split.
>
>      for (i = 0; i < size; i += PAGE_SIZE)
> 	 mprotect(&data[i], PAGE_SIZE, PROT_READ);
>
> Split hugepage impact:
> ---------------------
> THP enabled: 2.851561705 seconds for test completion
> THP disable: 3.599146098 seconds for test completion
>
> We are 20.7% better than non THP case even when we have all the large pages split.
>
> Detailed output:
>
> THP enabled:
> ---------------------------------------
> [root@llmp24l02 ~]# cat /proc/vmstat  | grep thp
> thp_fault_alloc 0
> thp_fault_fallback 0
> thp_collapse_alloc 0
> thp_collapse_alloc_failed 0
> thp_split 0
> thp_zero_page_alloc 0
> thp_zero_page_alloc_failed 0
> [root@llmp24l02 ~]# /root/thp/tools/perf/perf stat -e page-faults,dTLB-load-misses ./split-huge-page-mpro 20G
> time taken to touch all the data in ns: 2763096913
>
>   Performance counter stats for './split-huge-page-mpro 20G':
>
>               1,581 page-faults
>               3,159 dTLB-load-misses
>
>         2.851561705 seconds time elapsed
>
> [root@llmp24l02 ~]#
> [root@llmp24l02 ~]# cat /proc/vmstat  | grep thp
> thp_fault_alloc 1279
> thp_fault_fallback 0
> thp_collapse_alloc 0
> thp_collapse_alloc_failed 0
> thp_split 1279
> thp_zero_page_alloc 0
> thp_zero_page_alloc_failed 0
> [root@llmp24l02 ~]#
>
>      77.05%  split-huge-page  [kernel.kallsyms]     [k] .clear_user_page
>       7.10%  split-huge-page  [kernel.kallsyms]     [k] .perf_event_mmap_ctx
>       1.51%  split-huge-page  split-huge-page-mpro  [.] 0x0000000000000a70
>       0.96%  split-huge-page  [unknown]             [H] 0x000000000157e3bc
>       0.81%  split-huge-page  [kernel.kallsyms]     [k] .up_write
>       0.76%  split-huge-page  [kernel.kallsyms]     [k] .perf_event_mmap
>       0.76%  split-huge-page  [kernel.kallsyms]     [k] .down_write
>       0.74%  split-huge-page  [kernel.kallsyms]     [k] .lru_add_page_tail
>       0.61%  split-huge-page  [kernel.kallsyms]     [k] .split_huge_page
>       0.59%  split-huge-page  [kernel.kallsyms]     [k] .change_protection
>       0.51%  split-huge-page  [kernel.kallsyms]     [k] .release_pages
>
>
>       0.96%  split-huge-page  [unknown]             [H] 0x000000000157e3bc
>              |
>              |--79.44%-- reloc_start
>              |          |
>              |          |--86.54%-- .__pSeries_lpar_hugepage_invalidate
>              |          |          .pSeries_lpar_hugepage_invalidate
>              |          |          .hpte_need_hugepage_flush
>              |          |          .split_huge_page
>              |          |          .__split_huge_page_pmd
>              |          |          .vma_adjust
>              |          |          .vma_merge
>              |          |          .mprotect_fixup
>              |          |          .SyS_mprotect
>
>
> THP disabled:
> ---------------
> [root@llmp24l02 ~]# echo never > /sys/kernel/mm/transparent_hugepage/enabled
> [root@llmp24l02 ~]# /root/thp/tools/perf/perf stat -e page-faults,dTLB-load-misses ./split-huge-page-mpro 20G
> time taken to touch all the data in ns: 3513767220
>
>   Performance counter stats for './split-huge-page-mpro 20G':
>
>            3,27,726 page-faults
>            3,29,654 dTLB-load-misses
>
>         3.599146098 seconds time elapsed
>
> [root@llmp24l02 ~]#

Thanks for your great work. One question about page table of ppc64:
Why x86 use tree based page table and ppc64 use hash based page table?

>
> Changes from V4:
> * Fix bad page error in page_table_alloc
>    BUG: Bad page state in process stream  pfn:f1a59
>    page:f0000000034dc378 count:1 mapcount:0 mapping:          (null) index:0x0
>    [c000000f322c77d0] [c00000000015e198] .bad_page+0xe8/0x140
>    [c000000f322c7860] [c00000000015e3c4] .free_pages_prepare+0x1d4/0x1e0
>    [c000000f322c7910] [c000000000160450] .free_hot_cold_page+0x50/0x230
>    [c000000f322c79c0] [c00000000003ad18] .page_table_alloc+0x168/0x1c0
>
> Changes from V3:
> * PowerNV boot fixes
>
> Change from V2:
> * Change patch "powerpc: Reduce PTE table memory wastage" to use much simpler approach
>    for PTE page sharing.
> * Changes to handle huge pages in KVM code.
> * Address other review comments
>
> Changes from V1
> * Address review comments
> * More patch split
> * Add batch hpte invalidate for hugepages.
>
> Changes from RFC V2:
> * Address review comments
> * More code cleanup and patch split
>
> Changes from RFC V1:
> * HugeTLB fs now works
> * Compile issues fixed
> * rebased to v3.8
> * Patch series reorded so that ppc64 cleanups and MM THP changes are moved
>    early in the series. This should help in picking those patches early.
>
> Thanks,
> -aneesh
>
> --
> To unsubscribe, send a message with 'unsubscribe linux-mm' in
> the body to majordomo@kvack.org.  For more info on Linux MM,
> see: http://www.linux-mm.org/ .
> Don't email: <a href=mailto:"dont@kvack.org"> email@kvack.org </a>

^ permalink raw reply	[flat|nested] 73+ messages in thread

end of thread, other threads:[~2013-04-19  1:56 UTC | newest]

Thread overview: 73+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2013-04-04  5:57 [PATCH -V5 00/25] THP support for PPC64 Aneesh Kumar K.V
2013-04-04  5:57 ` [PATCH -V5 01/25] powerpc: Use signed formatting when printing error Aneesh Kumar K.V
2013-04-04  5:57 ` [PATCH -V5 02/25] powerpc: Save DAR and DSISR in pt_regs on MCE Aneesh Kumar K.V
2013-04-04  5:57 ` [PATCH -V5 03/25] powerpc: Don't hard code the size of pte page Aneesh Kumar K.V
2013-04-04  5:57 ` [PATCH -V5 04/25] powerpc: Reduce the PTE_INDEX_SIZE Aneesh Kumar K.V
2013-04-11  7:10   ` David Gibson
2013-04-04  5:57 ` [PATCH -V5 05/25] powerpc: Move the pte free routines from common header Aneesh Kumar K.V
2013-04-04  5:57 ` [PATCH -V5 06/25] powerpc: Reduce PTE table memory wastage Aneesh Kumar K.V
2013-04-10  4:46   ` David Gibson
2013-04-10  6:29     ` Aneesh Kumar K.V
2013-04-10  7:04       ` David Gibson
2013-04-10  7:53         ` Aneesh Kumar K.V
2013-04-10 17:47           ` Aneesh Kumar K.V
2013-04-11  1:20             ` David Gibson
2013-04-11  1:12           ` David Gibson
2013-04-10  7:14   ` Michael Ellerman
2013-04-10  7:54     ` Aneesh Kumar K.V
2013-04-10  8:52       ` Aneesh Kumar K.V
2013-04-04  5:57 ` [PATCH -V5 07/25] powerpc: Use encode avpn where we need only avpn values Aneesh Kumar K.V
2013-04-04  5:57 ` [PATCH -V5 08/25] powerpc: Decode the pte-lp-encoding bits correctly Aneesh Kumar K.V
2013-04-10  7:19   ` David Gibson
2013-04-10  8:11     ` Aneesh Kumar K.V
2013-04-10 17:49       ` Aneesh Kumar K.V
2013-04-11  1:28       ` David Gibson
2013-04-04  5:57 ` [PATCH -V5 09/25] powerpc: Fix hpte_decode to use the correct decoding for page sizes Aneesh Kumar K.V
2013-04-11  3:20   ` David Gibson
2013-04-04  5:57 ` [PATCH -V5 10/25] powerpc: print both base and actual page size on hash failure Aneesh Kumar K.V
2013-04-11  3:21   ` David Gibson
2013-04-04  5:57 ` [PATCH -V5 11/25] powerpc: Print page size info during boot Aneesh Kumar K.V
2013-04-04  5:57 ` [PATCH -V5 12/25] powerpc: Return all the valid pte ecndoing in KVM_PPC_GET_SMMU_INFO ioctl Aneesh Kumar K.V
2013-04-11  3:24   ` David Gibson
2013-04-11  5:11     ` Aneesh Kumar K.V
2013-04-11  5:57       ` David Gibson
2013-04-04  5:57 ` [PATCH -V5 13/25] powerpc: Update tlbie/tlbiel as per ISA doc Aneesh Kumar K.V
2013-04-11  3:30   ` David Gibson
2013-04-11  5:20     ` Aneesh Kumar K.V
2013-04-11  6:16       ` David Gibson
2013-04-11  6:36         ` Aneesh Kumar K.V
2013-04-04  5:57 ` [PATCH -V5 14/25] mm/THP: HPAGE_SHIFT is not a #define on some arch Aneesh Kumar K.V
2013-04-11  3:36   ` David Gibson
2013-04-04  5:57 ` [PATCH -V5 15/25] mm/THP: Add pmd args to pgtable deposit and withdraw APIs Aneesh Kumar K.V
2013-04-11  3:40   ` David Gibson
2013-04-04  5:57 ` [PATCH -V5 16/25] mm/THP: withdraw the pgtable after pmdp related operations Aneesh Kumar K.V
2013-04-04  5:57 ` [PATCH -V5 17/25] powerpc/THP: Implement transparent hugepages for ppc64 Aneesh Kumar K.V
2013-04-11  5:38   ` David Gibson
2013-04-11  7:40     ` Aneesh Kumar K.V
2013-04-12  0:51       ` David Gibson
2013-04-12  5:06         ` Aneesh Kumar K.V
2013-04-12  5:39           ` David Gibson
2013-04-04  5:57 ` [PATCH -V5 18/25] powerpc/THP: Double the PMD table size for THP Aneesh Kumar K.V
2013-04-11  6:18   ` David Gibson
2013-04-04  5:57 ` [PATCH -V5 19/25] powerpc/THP: Differentiate THP PMD entries from HUGETLB PMD entries Aneesh Kumar K.V
2013-04-10  7:21   ` Michael Ellerman
2013-04-10 18:26     ` Aneesh Kumar K.V
2013-04-12  1:28   ` David Gibson
2013-04-04  5:57 ` [PATCH -V5 20/25] powerpc/THP: Add code to handle HPTE faults for large pages Aneesh Kumar K.V
2013-04-12  4:01   ` David Gibson
2013-04-04  5:57 ` [PATCH -V5 21/25] powerpc: Handle hugepage in perf callchain Aneesh Kumar K.V
2013-04-12  1:34   ` David Gibson
2013-04-12  5:05     ` Aneesh Kumar K.V
2013-04-04  5:58 ` [PATCH -V5 22/25] powerpc/THP: get_user_pages_fast changes Aneesh Kumar K.V
2013-04-12  1:41   ` David Gibson
2013-04-04  5:58 ` [PATCH -V5 23/25] powerpc/THP: Enable THP on PPC64 Aneesh Kumar K.V
2013-04-04  5:58 ` [PATCH -V5 24/25] powerpc: Optimize hugepage invalidate Aneesh Kumar K.V
2013-04-12  4:21   ` David Gibson
2013-04-14 10:02     ` Aneesh Kumar K.V
2013-04-15  1:18       ` David Gibson
2013-04-04  5:58 ` [PATCH -V5 25/25] powerpc: Handle hugepages in kvm Aneesh Kumar K.V
2013-04-04  6:00 ` [PATCH -V5 00/25] THP support for PPC64 Simon Jeons
2013-04-04  6:10   ` Aneesh Kumar K.V
2013-04-04  6:14 ` Simon Jeons
2013-04-04  8:38   ` Aneesh Kumar K.V
2013-04-19  1:55 ` Simon Jeons

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).