linuxppc-dev.lists.ozlabs.org archive mirror
* [PATCH 1/5] powerpc/mm: Fix pte_pagesize_index() crash on 4K w/64K hash
From: Michael Ellerman @ 2015-08-07  6:19 UTC
  To: linuxppc-dev; +Cc: aneesh.kumar, Benjamin Herrenschmidt, Jeremy Kerr

The powerpc kernel can be built to have either a 4K PAGE_SIZE or a 64K
PAGE_SIZE.

However when built with a 4K PAGE_SIZE there is an additional config
option which can be enabled, PPC_HAS_HASH_64K, which means the kernel
also knows how to hash a 64K page even though the base PAGE_SIZE is 4K.

This is used in one obscure configuration, to support 64K pages for SPU
local store on the Cell processor when the rest of the kernel is using
4K pages.

In this configuration, pte_pagesize_index() is defined to just pass
through its arguments to get_slice_psize(). However pte_pagesize_index()
is called for both user and kernel addresses, whereas get_slice_psize()
only knows how to handle user addresses.

This has been broken forever; however, until recently it happened to
work. That was because in get_slice_psize() the large kernel address
would cause the right shift of the slice mask to return zero.

However in commit 7aa0727f3302 "powerpc/mm: Increase the slice range to
64TB", the get_slice_psize() code was changed so that instead of a right
shift we do an array lookup based on the address. When passed a kernel
address this means we index way off the end of the slice array and
return random junk.
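
For illustration, a heavily simplified sketch of the new lookup (this is
not the kernel source; the real code lives in arch/powerpc/mm/slice.c, and
the names and widths here are approximations):

	#define SLICE_HIGH_SHIFT	40	/* 1TB slices */

	/* 64TB / 1TB = 64 high slices, with two 4-bit page size values
	 * packed per byte, so the per-mm array is only 32 bytes long. */
	static unsigned int high_slice_psize(unsigned char *psizes, unsigned long addr)
	{
		unsigned long i = addr >> SLICE_HIGH_SHIFT;	/* 0..63 for user addrs */

		/* For a kernel address like 0xc000000000000000, i is about
		 * 0xc00000, i.e. megabytes past the end of the array. */
		return (psizes[i >> 1] >> ((i & 1) * 4)) & 0xf;
	}

Under the old right-shift scheme the same kernel address produced zero, and
zero is MMU_PAGE_4K, which is why the bug stayed latent.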

That is only fatal if we happen to hit something non-zero, but when we
do return a non-zero value we confuse the MMU code and eventually cause
a check stop.

This fix is ugly, but simple. When we're called for a kernel address we
return 4K, which is always correct in this configuration, otherwise we
use the slice mask.
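
(For context, the macro is consumed from the hash flush path; from memory,
hpte_need_flush() in arch/powerpc/mm/tlb_hash64.c does roughly:

	psize = pte_pagesize_index(mm, addr, pte);	/* addr may be user or kernel */

and it is called for kernel virtual addresses such as vmalloc space, which
is how they reach get_slice_psize().)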

Fixes: 7aa0727f3302 ("powerpc/mm: Increase the slice range to 64TB")
Reported-by: Cyril Bur <cyrilbur@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
---
 arch/powerpc/include/asm/pgtable-ppc64.h | 14 +++++++++++++-
 1 file changed, 13 insertions(+), 1 deletion(-)

diff --git a/arch/powerpc/include/asm/pgtable-ppc64.h b/arch/powerpc/include/asm/pgtable-ppc64.h
index 3bb7488bd24b..7ee2300ee392 100644
--- a/arch/powerpc/include/asm/pgtable-ppc64.h
+++ b/arch/powerpc/include/asm/pgtable-ppc64.h
@@ -135,7 +135,19 @@
 #define pte_iterate_hashed_end() } while(0)
 
 #ifdef CONFIG_PPC_HAS_HASH_64K
-#define pte_pagesize_index(mm, addr, pte)	get_slice_psize(mm, addr)
+/*
+ * We expect this to be called only for user addresses or kernel virtual
+ * addresses other than the linear mapping.
+ */
+#define pte_pagesize_index(mm, addr, pte)			\
+	({							\
+		unsigned int psize;				\
+		if (is_kernel_addr(addr))			\
+			psize = MMU_PAGE_4K;			\
+		else						\
+			psize = get_slice_psize(mm, addr);	\
+		psize;						\
+	})
 #else
 #define pte_pagesize_index(mm, addr, pte)	MMU_PAGE_4K
 #endif
-- 
2.1.4


* [PATCH 2/5] powerpc/cell: Drop support for 64K local store on 4K kernels
From: Michael Ellerman @ 2015-08-07  6:19 UTC
  To: linuxppc-dev; +Cc: aneesh.kumar, Benjamin Herrenschmidt, Jeremy Kerr

Back in the olden days we added support for using 64K pages to map the
SPU (Synergistic Processing Unit) local store on Cell, when the main
kernel was using 4K pages.

This was useful at the time because distros were using 4K pages, but
using 64K pages on the SPUs could reduce TLB pressure there.

However, these days the number of Cell users is approaching zero, and
supporting this option adds unpleasant complexity to the memory
management code.

So drop the option, CONFIG_SPU_FS_64K_LS, and all related code.

Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
---
 arch/powerpc/include/asm/spu_csa.h              |   6 --
 arch/powerpc/mm/hugetlbpage.c                   |   8 --
 arch/powerpc/platforms/cell/Kconfig             |  15 ---
 arch/powerpc/platforms/cell/spufs/file.c        |  55 -----------
 arch/powerpc/platforms/cell/spufs/lscsa_alloc.c | 124 +-----------------------
 5 files changed, 2 insertions(+), 206 deletions(-)

diff --git a/arch/powerpc/include/asm/spu_csa.h b/arch/powerpc/include/asm/spu_csa.h
index a40fd491250c..51f80b41cda3 100644
--- a/arch/powerpc/include/asm/spu_csa.h
+++ b/arch/powerpc/include/asm/spu_csa.h
@@ -241,12 +241,6 @@ struct spu_priv2_collapsed {
  */
 struct spu_state {
 	struct spu_lscsa *lscsa;
-#ifdef CONFIG_SPU_FS_64K_LS
-	int		use_big_pages;
-	/* One struct page per 64k page */
-#define SPU_LSCSA_NUM_BIG_PAGES	(sizeof(struct spu_lscsa) / 0x10000)
-	struct page	*lscsa_pages[SPU_LSCSA_NUM_BIG_PAGES];
-#endif
 	struct spu_problem_collapsed prob;
 	struct spu_priv1_collapsed priv1;
 	struct spu_priv2_collapsed priv2;
diff --git a/arch/powerpc/mm/hugetlbpage.c b/arch/powerpc/mm/hugetlbpage.c
index bb0bd7025cb8..06c14523b787 100644
--- a/arch/powerpc/mm/hugetlbpage.c
+++ b/arch/powerpc/mm/hugetlbpage.c
@@ -808,14 +808,6 @@ static int __init add_huge_page_size(unsigned long long size)
 	if ((mmu_psize = shift_to_mmu_psize(shift)) < 0)
 		return -EINVAL;
 
-#ifdef CONFIG_SPU_FS_64K_LS
-	/* Disable support for 64K huge pages when 64K SPU local store
-	 * support is enabled as the current implementation conflicts.
-	 */
-	if (shift == PAGE_SHIFT_64K)
-		return -EINVAL;
-#endif /* CONFIG_SPU_FS_64K_LS */
-
 	BUG_ON(mmu_psize_defs[mmu_psize].shift != shift);
 
 	/* Return if huge page size has already been setup */
diff --git a/arch/powerpc/platforms/cell/Kconfig b/arch/powerpc/platforms/cell/Kconfig
index 2f23133ab3d1..b0ac1773cea6 100644
--- a/arch/powerpc/platforms/cell/Kconfig
+++ b/arch/powerpc/platforms/cell/Kconfig
@@ -57,21 +57,6 @@ config SPU_FS
 	  Units on machines implementing the Broadband Processor
 	  Architecture.
 
-config SPU_FS_64K_LS
-	bool "Use 64K pages to map SPE local  store"
-	# we depend on PPC_MM_SLICES for now rather than selecting
-	# it because we depend on hugetlbfs hooks being present. We
-	# will fix that when the generic code has been improved to
-	# not require hijacking hugetlbfs hooks.
-	depends on SPU_FS && PPC_MM_SLICES && !PPC_64K_PAGES
-	default y
-	select PPC_HAS_HASH_64K
-	help
-	  This option causes SPE local stores to be mapped in process
-	  address spaces using 64K pages while the rest of the kernel
-	  uses 4K pages. This can improve performances of applications
-	  using multiple SPEs by lowering the TLB pressure on them.
-
 config SPU_BASE
 	bool
 	default n
diff --git a/arch/powerpc/platforms/cell/spufs/file.c b/arch/powerpc/platforms/cell/spufs/file.c
index d966bbe58b8f..5038fd578e65 100644
--- a/arch/powerpc/platforms/cell/spufs/file.c
+++ b/arch/powerpc/platforms/cell/spufs/file.c
@@ -239,23 +239,6 @@ spufs_mem_mmap_fault(struct vm_area_struct *vma, struct vm_fault *vmf)
 	unsigned long address = (unsigned long)vmf->virtual_address;
 	unsigned long pfn, offset;
 
-#ifdef CONFIG_SPU_FS_64K_LS
-	struct spu_state *csa = &ctx->csa;
-	int psize;
-
-	/* Check what page size we are using */
-	psize = get_slice_psize(vma->vm_mm, address);
-
-	/* Some sanity checking */
-	BUG_ON(csa->use_big_pages != (psize == MMU_PAGE_64K));
-
-	/* Wow, 64K, cool, we need to align the address though */
-	if (csa->use_big_pages) {
-		BUG_ON(vma->vm_start & 0xffff);
-		address &= ~0xfffful;
-	}
-#endif /* CONFIG_SPU_FS_64K_LS */
-
 	offset = vmf->pgoff << PAGE_SHIFT;
 	if (offset >= LS_SIZE)
 		return VM_FAULT_SIGBUS;
@@ -310,22 +293,6 @@ static const struct vm_operations_struct spufs_mem_mmap_vmops = {
 
 static int spufs_mem_mmap(struct file *file, struct vm_area_struct *vma)
 {
-#ifdef CONFIG_SPU_FS_64K_LS
-	struct spu_context	*ctx = file->private_data;
-	struct spu_state	*csa = &ctx->csa;
-
-	/* Sanity check VMA alignment */
-	if (csa->use_big_pages) {
-		pr_debug("spufs_mem_mmap 64K, start=0x%lx, end=0x%lx,"
-			 " pgoff=0x%lx\n", vma->vm_start, vma->vm_end,
-			 vma->vm_pgoff);
-		if (vma->vm_start & 0xffff)
-			return -EINVAL;
-		if (vma->vm_pgoff & 0xf)
-			return -EINVAL;
-	}
-#endif /* CONFIG_SPU_FS_64K_LS */
-
 	if (!(vma->vm_flags & VM_SHARED))
 		return -EINVAL;
 
@@ -336,25 +303,6 @@ static int spufs_mem_mmap(struct file *file, struct vm_area_struct *vma)
 	return 0;
 }
 
-#ifdef CONFIG_SPU_FS_64K_LS
-static unsigned long spufs_get_unmapped_area(struct file *file,
-		unsigned long addr, unsigned long len, unsigned long pgoff,
-		unsigned long flags)
-{
-	struct spu_context	*ctx = file->private_data;
-	struct spu_state	*csa = &ctx->csa;
-
-	/* If not using big pages, fallback to normal MM g_u_a */
-	if (!csa->use_big_pages)
-		return current->mm->get_unmapped_area(file, addr, len,
-						      pgoff, flags);
-
-	/* Else, try to obtain a 64K pages slice */
-	return slice_get_unmapped_area(addr, len, flags,
-				       MMU_PAGE_64K, 1);
-}
-#endif /* CONFIG_SPU_FS_64K_LS */
-
 static const struct file_operations spufs_mem_fops = {
 	.open			= spufs_mem_open,
 	.release		= spufs_mem_release,
@@ -362,9 +310,6 @@ static const struct file_operations spufs_mem_fops = {
 	.write			= spufs_mem_write,
 	.llseek			= generic_file_llseek,
 	.mmap			= spufs_mem_mmap,
-#ifdef CONFIG_SPU_FS_64K_LS
-	.get_unmapped_area	= spufs_get_unmapped_area,
-#endif
 };
 
 static int spufs_ps_fault(struct vm_area_struct *vma,
diff --git a/arch/powerpc/platforms/cell/spufs/lscsa_alloc.c b/arch/powerpc/platforms/cell/spufs/lscsa_alloc.c
index 147069938cfe..b847e9403566 100644
--- a/arch/powerpc/platforms/cell/spufs/lscsa_alloc.c
+++ b/arch/powerpc/platforms/cell/spufs/lscsa_alloc.c
@@ -31,7 +31,7 @@
 
 #include "spufs.h"
 
-static int spu_alloc_lscsa_std(struct spu_state *csa)
+int spu_alloc_lscsa(struct spu_state *csa)
 {
 	struct spu_lscsa *lscsa;
 	unsigned char *p;
@@ -48,7 +48,7 @@ static int spu_alloc_lscsa_std(struct spu_state *csa)
 	return 0;
 }
 
-static void spu_free_lscsa_std(struct spu_state *csa)
+void spu_free_lscsa(struct spu_state *csa)
 {
 	/* Clear reserved bit before vfree. */
 	unsigned char *p;
@@ -61,123 +61,3 @@ static void spu_free_lscsa_std(struct spu_state *csa)
 
 	vfree(csa->lscsa);
 }
-
-#ifdef CONFIG_SPU_FS_64K_LS
-
-#define SPU_64K_PAGE_SHIFT	16
-#define SPU_64K_PAGE_ORDER	(SPU_64K_PAGE_SHIFT - PAGE_SHIFT)
-#define SPU_64K_PAGE_COUNT	(1ul << SPU_64K_PAGE_ORDER)
-
-int spu_alloc_lscsa(struct spu_state *csa)
-{
-	struct page	**pgarray;
-	unsigned char	*p;
-	int		i, j, n_4k;
-
-	/* Check availability of 64K pages */
-	if (!spu_64k_pages_available())
-		goto fail;
-
-	csa->use_big_pages = 1;
-
-	pr_debug("spu_alloc_lscsa(csa=0x%p), trying to allocate 64K pages\n",
-		 csa);
-
-	/* First try to allocate our 64K pages. We need 5 of them
-	 * with the current implementation. In the future, we should try
-	 * to separate the lscsa with the actual local store image, thus
-	 * allowing us to require only 4 64K pages per context
-	 */
-	for (i = 0; i < SPU_LSCSA_NUM_BIG_PAGES; i++) {
-		/* XXX This is likely to fail, we should use a special pool
-		 *     similar to what hugetlbfs does.
-		 */
-		csa->lscsa_pages[i] = alloc_pages(GFP_KERNEL,
-						  SPU_64K_PAGE_ORDER);
-		if (csa->lscsa_pages[i] == NULL)
-			goto fail;
-	}
-
-	pr_debug(" success ! creating vmap...\n");
-
-	/* Now we need to create a vmalloc mapping of these for the kernel
-	 * and SPU context switch code to use. Currently, we stick to a
-	 * normal kernel vmalloc mapping, which in our case will be 4K
-	 */
-	n_4k = SPU_64K_PAGE_COUNT * SPU_LSCSA_NUM_BIG_PAGES;
-	pgarray = kmalloc(sizeof(struct page *) * n_4k, GFP_KERNEL);
-	if (pgarray == NULL)
-		goto fail;
-	for (i = 0; i < SPU_LSCSA_NUM_BIG_PAGES; i++)
-		for (j = 0; j < SPU_64K_PAGE_COUNT; j++)
-			/* We assume all the struct page's are contiguous
-			 * which should be hopefully the case for an order 4
-			 * allocation..
-			 */
-			pgarray[i * SPU_64K_PAGE_COUNT + j] =
-				csa->lscsa_pages[i] + j;
-	csa->lscsa = vmap(pgarray, n_4k, VM_USERMAP, PAGE_KERNEL);
-	kfree(pgarray);
-	if (csa->lscsa == NULL)
-		goto fail;
-
-	memset(csa->lscsa, 0, sizeof(struct spu_lscsa));
-
-	/* Set LS pages reserved to allow for user-space mapping.
-	 *
-	 * XXX isn't that a bit obsolete ? I think we should just
-	 * make sure the page count is high enough. Anyway, won't harm
-	 * for now
-	 */
-	for (p = csa->lscsa->ls; p < csa->lscsa->ls + LS_SIZE; p += PAGE_SIZE)
-		SetPageReserved(vmalloc_to_page(p));
-
-	pr_debug(" all good !\n");
-
-	return 0;
-fail:
-	pr_debug("spufs: failed to allocate lscsa 64K pages, falling back\n");
-	spu_free_lscsa(csa);
-	return spu_alloc_lscsa_std(csa);
-}
-
-void spu_free_lscsa(struct spu_state *csa)
-{
-	unsigned char *p;
-	int i;
-
-	if (!csa->use_big_pages) {
-		spu_free_lscsa_std(csa);
-		return;
-	}
-	csa->use_big_pages = 0;
-
-	if (csa->lscsa == NULL)
-		goto free_pages;
-
-	for (p = csa->lscsa->ls; p < csa->lscsa->ls + LS_SIZE; p += PAGE_SIZE)
-		ClearPageReserved(vmalloc_to_page(p));
-
-	vunmap(csa->lscsa);
-	csa->lscsa = NULL;
-
- free_pages:
-
-	for (i = 0; i < SPU_LSCSA_NUM_BIG_PAGES; i++)
-		if (csa->lscsa_pages[i])
-			__free_pages(csa->lscsa_pages[i], SPU_64K_PAGE_ORDER);
-}
-
-#else /* CONFIG_SPU_FS_64K_LS */
-
-int spu_alloc_lscsa(struct spu_state *csa)
-{
-	return spu_alloc_lscsa_std(csa);
-}
-
-void spu_free_lscsa(struct spu_state *csa)
-{
-	spu_free_lscsa_std(csa);
-}
-
-#endif /* !defined(CONFIG_SPU_FS_64K_LS) */
-- 
2.1.4


* [PATCH 3/5] powerpc/mm: Drop the 64K on 4K version of pte_pagesize_index()
From: Michael Ellerman @ 2015-08-07  6:19 UTC
  To: linuxppc-dev; +Cc: aneesh.kumar, Benjamin Herrenschmidt, Jeremy Kerr

Now that support for 64k pages with a 4K kernel is removed, this code is
unreachable.

CONFIG_PPC_HAS_HASH_64K can only be true when CONFIG_PPC_64K_PAGES is
also true.

But when CONFIG_PPC_64K_PAGES is true we include pte-hash64.h which
includes pte-hash64-64k.h, which defines both pte_pagesize_index() and
crucially __real_pte, which means this definition can never be used.
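
Schematically, the include chain is (an illustrative reconstruction, not
the verbatim headers):

	/* pgtable-ppc64.h, simplified: */
	#ifdef CONFIG_PPC_64K_PAGES
	#include <asm/pte-hash64.h>	/* pulls in pte-hash64-64k.h, which
					 * defines __real_pte and its own
					 * pte_pagesize_index() */
	#endif

	#ifndef __real_pte
	/* ... fallback block, containing the definition removed below ... */
	#endif

So whenever CONFIG_PPC_HAS_HASH_64K is set, __real_pte is already defined
and the whole fallback block, including the #ifdef CONFIG_PPC_HAS_HASH_64K
branch removed below, drops out.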

Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
---
 arch/powerpc/include/asm/pgtable-ppc64.h | 12 ------------
 1 file changed, 12 deletions(-)

diff --git a/arch/powerpc/include/asm/pgtable-ppc64.h b/arch/powerpc/include/asm/pgtable-ppc64.h
index 7ee2300ee392..fa1dfb7f7b48 100644
--- a/arch/powerpc/include/asm/pgtable-ppc64.h
+++ b/arch/powerpc/include/asm/pgtable-ppc64.h
@@ -134,23 +134,11 @@
 
 #define pte_iterate_hashed_end() } while(0)
 
-#ifdef CONFIG_PPC_HAS_HASH_64K
 /*
  * We expect this to be called only for user addresses or kernel virtual
  * addresses other than the linear mapping.
  */
-#define pte_pagesize_index(mm, addr, pte)			\
-	({							\
-		unsigned int psize;				\
-		if (is_kernel_addr(addr))			\
-			psize = MMU_PAGE_4K;			\
-		else						\
-			psize = get_slice_psize(mm, addr);	\
-		psize;						\
-	})
-#else
 #define pte_pagesize_index(mm, addr, pte)	MMU_PAGE_4K
-#endif
 
 #endif /* __real_pte */
 
-- 
2.1.4


* [PATCH 4/5] powerpc/mm: Simplify page size kconfig dependencies
From: Michael Ellerman @ 2015-08-07  6:19 UTC
  To: linuxppc-dev; +Cc: aneesh.kumar, Benjamin Herrenschmidt, Jeremy Kerr

For config options with only a single value, guarding the single value
with 'if' is the same as adding a 'depends' statement. And it's more
standard to just use 'depends'.

And if the option has both an 'if' guard and a 'depends' we can collapse
them into a single 'depends' by combining them with &&.
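
As a generic illustration (FOO, BAR and BAZ are made-up symbols, not taken
from the tree; the equivalence holds as long as nothing else selects FOO or
gives it a default):

	# These two spellings behave the same:
	config FOO
		bool "foo" if BAR

	config FOO
		bool "foo"
		depends on BAR

	# And a prompt guard plus an existing 'depends' collapse into one:
	config FOO
		bool "foo" if BAR
		depends on !BAZ
	# ... becomes ...
	config FOO
		bool "foo"
		depends on !BAZ && BAR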

Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
---
 arch/powerpc/Kconfig | 11 ++++++-----
 1 file changed, 6 insertions(+), 5 deletions(-)

diff --git a/arch/powerpc/Kconfig b/arch/powerpc/Kconfig
index 5ef27113b898..3a4ba2809201 100644
--- a/arch/powerpc/Kconfig
+++ b/arch/powerpc/Kconfig
@@ -560,16 +560,17 @@ config PPC_4K_PAGES
 	bool "4k page size"
 
 config PPC_16K_PAGES
-	bool "16k page size" if 44x || PPC_8xx
+	bool "16k page size"
+	depends on 44x || PPC_8xx
 
 config PPC_64K_PAGES
-	bool "64k page size" if 44x || PPC_STD_MMU_64 || PPC_BOOK3E_64
-	depends on !PPC_FSL_BOOK3E
+	bool "64k page size"
+	depends on !PPC_FSL_BOOK3E && (44x || PPC_STD_MMU_64 || PPC_BOOK3E_64)
 	select PPC_HAS_HASH_64K if PPC_STD_MMU_64
 
 config PPC_256K_PAGES
-	bool "256k page size" if 44x
-	depends on !STDBINUTILS
+	bool "256k page size"
+	depends on 44x && !STDBINUTILS
 	help
 	  Make the page size 256k.
 
-- 
2.1.4


* [PATCH 5/5] powerpc/mm: Drop CONFIG_PPC_HAS_HASH_64K
From: Michael Ellerman @ 2015-08-07  6:19 UTC
  To: linuxppc-dev; +Cc: aneesh.kumar, Benjamin Herrenschmidt, Jeremy Kerr

The relation between CONFIG_PPC_HAS_HASH_64K and CONFIG_PPC_64K_PAGES is
painfully complicated.

But if we rearrange it enough we can see that PPC_HAS_HASH_64K
essentially depends on PPC_STD_MMU_64 && PPC_64K_PAGES.

We can then notice that PPC_HAS_HASH_64K is used in files that are only
built for PPC_STD_MMU_64, meaning it's equivalent to PPC_64K_PAGES.

So replace all uses and drop it.
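
Spelled out, using the Kconfig state left by the previous patch (the hunks
below show the same thing):

	config PPC_HAS_HASH_64K
		bool
		depends on PPC64

	config PPC_64K_PAGES
		bool "64k page size"
		depends on !PPC_FSL_BOOK3E && (44x || PPC_STD_MMU_64 || PPC_BOOK3E_64)
		select PPC_HAS_HASH_64K if PPC_STD_MMU_64

	# PPC_HAS_HASH_64K has no prompt and this is its only select, so:
	#   PPC_HAS_HASH_64K  <=>  PPC_64K_PAGES && PPC_STD_MMU_64
	# In files built only when PPC_STD_MMU_64=y, that reduces to:
	#   PPC_HAS_HASH_64K  <=>  PPC_64K_PAGES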

Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
---
 arch/powerpc/Kconfig            |  6 ------
 arch/powerpc/mm/hash_low_64.S   |  4 ++--
 arch/powerpc/mm/hash_utils_64.c | 12 ++++++------
 3 files changed, 8 insertions(+), 14 deletions(-)

diff --git a/arch/powerpc/Kconfig b/arch/powerpc/Kconfig
index 3a4ba2809201..1e69dee29be3 100644
--- a/arch/powerpc/Kconfig
+++ b/arch/powerpc/Kconfig
@@ -514,11 +514,6 @@ config NODES_SPAN_OTHER_NODES
 	def_bool y
 	depends on NEED_MULTIPLE_NODES
 
-config PPC_HAS_HASH_64K
-	bool
-	depends on PPC64
-	default n
-
 config STDBINUTILS
 	bool "Using standard binutils settings"
 	depends on 44x
@@ -566,7 +561,6 @@ config PPC_16K_PAGES
 config PPC_64K_PAGES
 	bool "64k page size"
 	depends on !PPC_FSL_BOOK3E && (44x || PPC_STD_MMU_64 || PPC_BOOK3E_64)
-	select PPC_HAS_HASH_64K if PPC_STD_MMU_64
 
 config PPC_256K_PAGES
 	bool "256k page size"
diff --git a/arch/powerpc/mm/hash_low_64.S b/arch/powerpc/mm/hash_low_64.S
index 463174a4a647..3b49e3295901 100644
--- a/arch/powerpc/mm/hash_low_64.S
+++ b/arch/powerpc/mm/hash_low_64.S
@@ -701,7 +701,7 @@ htab_pte_insert_failure:
 
 #endif /* CONFIG_PPC_64K_PAGES */
 
-#ifdef CONFIG_PPC_HAS_HASH_64K
+#ifdef CONFIG_PPC_64K_PAGES
 
 /*****************************************************************************
  *                                                                           *
@@ -993,7 +993,7 @@ ht64_pte_insert_failure:
 	b	ht64_bail
 
 
-#endif /* CONFIG_PPC_HAS_HASH_64K */
+#endif /* CONFIG_PPC_64K_PAGES */
 
 
 /*****************************************************************************
diff --git a/arch/powerpc/mm/hash_utils_64.c b/arch/powerpc/mm/hash_utils_64.c
index 5ec987f65b2c..aee70171355b 100644
--- a/arch/powerpc/mm/hash_utils_64.c
+++ b/arch/powerpc/mm/hash_utils_64.c
@@ -640,7 +640,7 @@ extern u32 ht64_call_hpte_updatepp[];
 
 static void __init htab_finish_init(void)
 {
-#ifdef CONFIG_PPC_HAS_HASH_64K
+#ifdef CONFIG_PPC_64K_PAGES
 	patch_branch(ht64_call_hpte_insert1,
 		ppc_function_entry(ppc_md.hpte_insert),
 		BRANCH_SET_LINK);
@@ -653,7 +653,7 @@ static void __init htab_finish_init(void)
 	patch_branch(ht64_call_hpte_updatepp,
 		ppc_function_entry(ppc_md.hpte_updatepp),
 		BRANCH_SET_LINK);
-#endif /* CONFIG_PPC_HAS_HASH_64K */
+#endif /* CONFIG_PPC_64K_PAGES */
 
 	patch_branch(htab_call_hpte_insert1,
 		ppc_function_entry(ppc_md.hpte_insert),
@@ -1151,12 +1151,12 @@ int hash_page_mm(struct mm_struct *mm, unsigned long ea,
 		check_paca_psize(ea, mm, psize, user_region);
 #endif /* CONFIG_PPC_64K_PAGES */
 
-#ifdef CONFIG_PPC_HAS_HASH_64K
+#ifdef CONFIG_PPC_64K_PAGES
 	if (psize == MMU_PAGE_64K)
 		rc = __hash_page_64K(ea, access, vsid, ptep, trap,
 				     flags, ssize);
 	else
-#endif /* CONFIG_PPC_HAS_HASH_64K */
+#endif /* CONFIG_PPC_64K_PAGES */
 	{
 		int spp = subpage_protection(mm, ea);
 		if (access & spp)
@@ -1264,12 +1264,12 @@ void hash_preload(struct mm_struct *mm, unsigned long ea,
 		update_flags |= HPTE_LOCAL_UPDATE;
 
 	/* Hash it in */
-#ifdef CONFIG_PPC_HAS_HASH_64K
+#ifdef CONFIG_PPC_64K_PAGES
 	if (mm->context.user_psize == MMU_PAGE_64K)
 		rc = __hash_page_64K(ea, access, vsid, ptep, trap,
 				     update_flags, ssize);
 	else
-#endif /* CONFIG_PPC_HAS_HASH_64K */
+#endif /* CONFIG_PPC_64K_PAGES */
 		rc = __hash_page_4K(ea, access, vsid, ptep, trap, update_flags,
 				    ssize, subpage_protection(mm, ea));
 
-- 
2.1.4


* Re: [PATCH 1/5] powerpc/mm: Fix pte_pagesize_index() crash on 4K w/64K hash
From: Aneesh Kumar K.V @ 2015-08-10  5:33 UTC
  To: Michael Ellerman, linuxppc-dev; +Cc: Benjamin Herrenschmidt, Jeremy Kerr

Michael Ellerman <mpe@ellerman.id.au> writes:

> The powerpc kernel can be built to have either a 4K PAGE_SIZE or a 64K
> PAGE_SIZE.
>
> However when built with a 4K PAGE_SIZE there is an additional config
> option which can be enabled, PPC_HAS_HASH_64K, which means the kernel
> also knows how to hash a 64K page even though the base PAGE_SIZE is 4K.
>
> This is used in one obscure configuration, to support 64K pages for SPU
> local store on the Cell processor when the rest of the kernel is using
> 4K pages.
>
> In this configuration, pte_pagesize_index() is defined to just pass
> through its arguments to get_slice_psize(). However pte_pagesize_index()
> is called for both user and kernel addresses, whereas get_slice_psize()
> only knows how to handle user addresses.
>
> This has been broken forever; however, until recently it happened to
> work. That was because in get_slice_psize() the large kernel address
> would cause the right shift of the slice mask to return zero.
>
> However in commit 7aa0727f3302 "powerpc/mm: Increase the slice range to
> 64TB", the get_slice_psize() code was changed so that instead of a right
> shift we do an array lookup based on the address. When passed a kernel
> address this means we index way off the end of the slice array and
> return random junk.
>
> That is only fatal if we happen to hit something non-zero, but when we
> do return a non-zero value we confuse the MMU code and eventually cause
> a check stop.
>
> This fix is ugly, but simple. When we're called for a kernel address we
> return 4K, which is always correct in this configuration, otherwise we
> use the slice mask.
>
> Fixes: 7aa0727f3302 ("powerpc/mm: Increase the slice range to 64TB")
> Reported-by: Cyril Bur <cyrilbur@gmail.com>
> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>


Reviewed-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>


* Re: [PATCH 3/5] powerpc/mm: Drop the 64K on 4K version of pte_pagesize_index()
From: Aneesh Kumar K.V @ 2015-08-10  5:34 UTC
  To: Michael Ellerman, linuxppc-dev; +Cc: Benjamin Herrenschmidt, Jeremy Kerr

Michael Ellerman <mpe@ellerman.id.au> writes:

> Now that support for 64k pages with a 4K kernel is removed, this code is
> unreachable.
>
> CONFIG_PPC_HAS_HASH_64K can only be true when CONFIG_PPC_64K_PAGES is
> also true.
>
> But when CONFIG_PPC_64K_PAGES is true we include pte-hash64.h which
> includes pte-hash64-64k.h, which defines both pte_pagesize_index() and
> crucially __real_pte, which means this definition can never be used.
>
> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>

Reviewed-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>

* Re: [PATCH 4/5] powerpc/mm: Simplify page size kconfig dependencies
From: Aneesh Kumar K.V @ 2015-08-10  5:36 UTC
  To: Michael Ellerman, linuxppc-dev; +Cc: Benjamin Herrenschmidt, Jeremy Kerr

Michael Ellerman <mpe@ellerman.id.au> writes:

> For config options with only a single value, guarding the single value
> with 'if' is the same as adding a 'depends' statement. And it's more
> standard to just use 'depends'.
>
> And if the option has both an 'if' guard and a 'depends' we can collapse
> them into a single 'depends' by combining them with &&.
>
> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>

Reviewed-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>


* Re: [PATCH 5/5] powerpc/mm: Drop CONFIG_PPC_HAS_HASH_64K
From: Aneesh Kumar K.V @ 2015-08-10  5:37 UTC
  To: Michael Ellerman, linuxppc-dev; +Cc: Benjamin Herrenschmidt, Jeremy Kerr

Michael Ellerman <mpe@ellerman.id.au> writes:

> The relation between CONFIG_PPC_HAS_HASH_64K and CONFIG_PPC_64K_PAGES is
> painfully complicated.
>
> But if we rearrange it enough we can see that PPC_HAS_HASH_64K
> essentially depends on PPC_STD_MMU_64 && PPC_64K_PAGES.
>
> We can then notice that PPC_HAS_HASH_64K is used in files that are only
> built for PPC_STD_MMU_64, meaning it's equivalent to PPC_64K_PAGES.
>
> So replace all uses and drop it.
>
> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>

Reviewed-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>


* Re: [PATCH 2/5] powerpc/cell: Drop support for 64K local store on 4K kernels
From: Jeremy Kerr @ 2015-08-10  8:14 UTC
  To: Michael Ellerman, linuxppc-dev; +Cc: aneesh.kumar, Benjamin Herrenschmidt

Hi Michael,

> Back in the olden days we added support for using 64K pages to map the
> SPU (Synergistic Processing Unit) local store on Cell, when the main
> kernel was using 4K pages.
> 
> This was useful at the time because distros were using 4K pages, but
> using 64K pages on the SPUs could reduce TLB pressure there.
> 
> However, these days the number of Cell users is approaching zero, and
> supporting this option adds unpleasant complexity to the memory
> management code.
> 
> So drop the option, CONFIG_SPU_FS_64K_LS, and all related code.

Yep, I'd be happy to drop this - impact should be little to none.

Acked-by: Jeremy Kerr <jk@ozlabs.org>

Cheers,


Jeremy


* Re: [1/5] powerpc/mm: Fix pte_pagesize_index() crash on 4K w/64K hash
From: Michael Ellerman @ 2015-08-19 23:14 UTC
  To: Michael Ellerman, linuxppc-dev; +Cc: aneesh.kumar, Jeremy Kerr

On Fri, 2015-08-07 at 06:19:43 UTC, Michael Ellerman wrote:
> The powerpc kernel can be built to have either a 4K PAGE_SIZE or a 64K
> PAGE_SIZE.
...

> This fix is ugly, but simple. When we're called for a kernel address we
> return 4K, which is always correct in this configuration, otherwise we
> use the slice mask.
> 
> Fixes: 7aa0727f3302 ("powerpc/mm: Increase the slice range to 64TB")
> Reported-by: Cyril Bur <cyrilbur@gmail.com>
> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
> Reviewed-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>

Series applied to powerpc next.

https://git.kernel.org/powerpc/c/74b5037baa2011a2799e

cheers
