linux-kernel.vger.kernel.org archive mirror
* [PATCH v2 00/11] Reduce ifdef mess in slice.c
@ 2019-04-25 14:29 Christophe Leroy
  2019-04-25 14:29 ` [PATCH v2 01/11] powerpc/mm: fix erroneous duplicate slb_addr_limit init Christophe Leroy
                   ` (10 more replies)
  0 siblings, 11 replies; 23+ messages in thread
From: Christophe Leroy @ 2019-04-25 14:29 UTC (permalink / raw)
  To: Benjamin Herrenschmidt, Paul Mackerras, Michael Ellerman, aneesh.kumar
  Cc: linux-kernel, linuxppc-dev

This series is split out of the v1 series "Reduce ifdef mess in hugetlbpage.c and slice.c".

It is also rebased on top of the series from Aneesh that reduces the context size for Radix.

See http://kisskb.ellerman.id.au/kisskb/branch/chleroy/head/f263887b4ca31f4bb0fe77823e301c28ba27c796/ for build test results across configurations.

Christophe Leroy (11):
  powerpc/mm: fix erroneous duplicate slb_addr_limit init
  powerpc/mm: no slice for nohash/64
  powerpc/mm: hand a context_t over to slice_mask_for_size() instead of
    mm_struct
  powerpc/mm: move slice_mask_for_size() into mmu.h
  powerpc/mm: get rid of mm_ctx_slice_mask_xxx()
  powerpc/mm: remove unnecessary #ifdef CONFIG_PPC64
  powerpc/mm: remove a couple of #ifdef CONFIG_PPC_64K_PAGES in
    mm/slice.c
  powerpc/8xx: get rid of #ifdef CONFIG_HUGETLB_PAGE for slices
  powerpc/mm: define get_slice_psize() all the time
  powerpc/mm: define subarch SLB_ADDR_LIMIT_DEFAULT
  powerpc/mm: drop slice DEBUG

 arch/powerpc/include/asm/book3s/64/mmu.h     |  29 +++---
 arch/powerpc/include/asm/book3s/64/slice.h   |   2 +
 arch/powerpc/include/asm/nohash/32/mmu-8xx.h |  51 +++++------
 arch/powerpc/include/asm/nohash/32/slice.h   |   2 +
 arch/powerpc/include/asm/nohash/64/slice.h   |  12 ---
 arch/powerpc/include/asm/slice.h             |   9 +-
 arch/powerpc/kernel/setup-common.c           |   6 --
 arch/powerpc/mm/hash_utils_64.c              |   2 +-
 arch/powerpc/mm/hugetlbpage.c                |   4 +-
 arch/powerpc/mm/slice.c                      | 132 ++++-----------------------
 arch/powerpc/mm/tlb_nohash.c                 |   4 +-
 arch/powerpc/platforms/Kconfig.cputype       |   4 +
 12 files changed, 69 insertions(+), 188 deletions(-)
 delete mode 100644 arch/powerpc/include/asm/nohash/64/slice.h

-- 
2.13.3


^ permalink raw reply	[flat|nested] 23+ messages in thread

* [PATCH v2 01/11] powerpc/mm: fix erroneous duplicate slb_addr_limit init
  2019-04-25 14:29 [PATCH v2 00/11] Reduce ifdef mess in slice.c Christophe Leroy
@ 2019-04-25 14:29 ` Christophe Leroy
  2019-04-26  6:32   ` Aneesh Kumar K.V
  2019-05-03  6:59   ` Michael Ellerman
  2019-04-25 14:29 ` [PATCH v2 02/11] powerpc/mm: no slice for nohash/64 Christophe Leroy
                   ` (9 subsequent siblings)
  10 siblings, 2 replies; 23+ messages in thread
From: Christophe Leroy @ 2019-04-25 14:29 UTC (permalink / raw)
  To: Benjamin Herrenschmidt, Paul Mackerras, Michael Ellerman, aneesh.kumar
  Cc: linux-kernel, linuxppc-dev

Commit 67fda38f0d68 ("powerpc/mm: Move slb_addr_linit to
early_init_mmu") moved slb_addr_limit init out of setup_arch().

Commit 701101865f5d ("powerpc/mm: Reduce memory usage for mm_context_t
for radix") brought it back into setup_arch() by mistake.

This patch reverts that spurious reinitialisation.

Fixes: 701101865f5d ("powerpc/mm: Reduce memory usage for mm_context_t for radix")
Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
---
 arch/powerpc/kernel/setup-common.c | 6 ------
 1 file changed, 6 deletions(-)

diff --git a/arch/powerpc/kernel/setup-common.c b/arch/powerpc/kernel/setup-common.c
index 1729bf409562..7af085b38cd1 100644
--- a/arch/powerpc/kernel/setup-common.c
+++ b/arch/powerpc/kernel/setup-common.c
@@ -950,12 +950,6 @@ void __init setup_arch(char **cmdline_p)
 	init_mm.end_data = (unsigned long) _edata;
 	init_mm.brk = klimit;
 
-#ifdef CONFIG_PPC_MM_SLICES
-#if defined(CONFIG_PPC_8xx)
-	init_mm.context.slb_addr_limit = DEFAULT_MAP_WINDOW;
-#endif
-#endif
-
 #ifdef CONFIG_SPAPR_TCE_IOMMU
 	mm_iommu_init(&init_mm);
 #endif
-- 
2.13.3



* [PATCH v2 02/11] powerpc/mm: no slice for nohash/64
  2019-04-25 14:29 [PATCH v2 00/11] Reduce ifdef mess in slice.c Christophe Leroy
  2019-04-25 14:29 ` [PATCH v2 01/11] powerpc/mm: fix erroneous duplicate slb_addr_limit init Christophe Leroy
@ 2019-04-25 14:29 ` Christophe Leroy
  2019-04-26  6:33   ` Aneesh Kumar K.V
  2019-04-25 14:29 ` [PATCH v2 03/11] powerpc/mm: hand a context_t over to slice_mask_for_size() instead of mm_struct Christophe Leroy
                   ` (8 subsequent siblings)
  10 siblings, 1 reply; 23+ messages in thread
From: Christophe Leroy @ 2019-04-25 14:29 UTC (permalink / raw)
  To: Benjamin Herrenschmidt, Paul Mackerras, Michael Ellerman, aneesh.kumar
  Cc: linux-kernel, linuxppc-dev

Only nohash/32 and book3s/64 support mm slices.

Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
---
 arch/powerpc/include/asm/nohash/64/slice.h | 12 ------------
 arch/powerpc/include/asm/slice.h           |  4 +---
 arch/powerpc/platforms/Kconfig.cputype     |  4 ++++
 3 files changed, 5 insertions(+), 15 deletions(-)
 delete mode 100644 arch/powerpc/include/asm/nohash/64/slice.h

diff --git a/arch/powerpc/include/asm/nohash/64/slice.h b/arch/powerpc/include/asm/nohash/64/slice.h
deleted file mode 100644
index ad0d6e3cc1c5..000000000000
--- a/arch/powerpc/include/asm/nohash/64/slice.h
+++ /dev/null
@@ -1,12 +0,0 @@
-/* SPDX-License-Identifier: GPL-2.0 */
-#ifndef _ASM_POWERPC_NOHASH_64_SLICE_H
-#define _ASM_POWERPC_NOHASH_64_SLICE_H
-
-#ifdef CONFIG_PPC_64K_PAGES
-#define get_slice_psize(mm, addr)	MMU_PAGE_64K
-#else /* CONFIG_PPC_64K_PAGES */
-#define get_slice_psize(mm, addr)	MMU_PAGE_4K
-#endif /* !CONFIG_PPC_64K_PAGES */
-#define slice_set_user_psize(mm, psize)	do { BUG(); } while (0)
-
-#endif /* _ASM_POWERPC_NOHASH_64_SLICE_H */
diff --git a/arch/powerpc/include/asm/slice.h b/arch/powerpc/include/asm/slice.h
index 44816cbc4198..be8af667098f 100644
--- a/arch/powerpc/include/asm/slice.h
+++ b/arch/powerpc/include/asm/slice.h
@@ -4,9 +4,7 @@
 
 #ifdef CONFIG_PPC_BOOK3S_64
 #include <asm/book3s/64/slice.h>
-#elif defined(CONFIG_PPC64)
-#include <asm/nohash/64/slice.h>
-#elif defined(CONFIG_PPC_MMU_NOHASH)
+#elif defined(CONFIG_PPC_MMU_NOHASH_32)
 #include <asm/nohash/32/slice.h>
 #endif
 
diff --git a/arch/powerpc/platforms/Kconfig.cputype b/arch/powerpc/platforms/Kconfig.cputype
index 00b2bb536c74..04915f51f447 100644
--- a/arch/powerpc/platforms/Kconfig.cputype
+++ b/arch/powerpc/platforms/Kconfig.cputype
@@ -391,6 +391,10 @@ config PPC_MMU_NOHASH
 	def_bool y
 	depends on !PPC_BOOK3S
 
+config PPC_MMU_NOHASH_32
+	def_bool y
+	depends on PPC_MMU_NOHASH && PPC32
+
 config PPC_BOOK3E_MMU
 	def_bool y
 	depends on FSL_BOOKE || PPC_BOOK3E
-- 
2.13.3



* [PATCH v2 03/11] powerpc/mm: hand a context_t over to slice_mask_for_size() instead of mm_struct
  2019-04-25 14:29 [PATCH v2 00/11] Reduce ifdef mess in slice.c Christophe Leroy
  2019-04-25 14:29 ` [PATCH v2 01/11] powerpc/mm: fix erroneous duplicate slb_addr_limit init Christophe Leroy
  2019-04-25 14:29 ` [PATCH v2 02/11] powerpc/mm: no slice for nohash/64 Christophe Leroy
@ 2019-04-25 14:29 ` Christophe Leroy
  2019-04-26  6:34   ` Aneesh Kumar K.V
  2019-04-25 14:29 ` [PATCH v2 04/11] powerpc/mm: move slice_mask_for_size() into mmu.h Christophe Leroy
                   ` (7 subsequent siblings)
  10 siblings, 1 reply; 23+ messages in thread
From: Christophe Leroy @ 2019-04-25 14:29 UTC (permalink / raw)
  To: Benjamin Herrenschmidt, Paul Mackerras, Michael Ellerman, aneesh.kumar
  Cc: linux-kernel, linuxppc-dev

slice_mask_for_size() only uses mm->context, so pass it a pointer to
the context directly. This will help move the function into the
subarch mmu.h in the next patch, by avoiding the need to include the
definition of struct mm_struct.

Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
---
 arch/powerpc/mm/slice.c | 34 +++++++++++++++++-----------------
 1 file changed, 17 insertions(+), 17 deletions(-)

diff --git a/arch/powerpc/mm/slice.c b/arch/powerpc/mm/slice.c
index 35b278082391..8eb7e8b09c75 100644
--- a/arch/powerpc/mm/slice.c
+++ b/arch/powerpc/mm/slice.c
@@ -151,32 +151,32 @@ static void slice_mask_for_free(struct mm_struct *mm, struct slice_mask *ret,
 }
 
 #ifdef CONFIG_PPC_BOOK3S_64
-static struct slice_mask *slice_mask_for_size(struct mm_struct *mm, int psize)
+static struct slice_mask *slice_mask_for_size(mm_context_t *ctx, int psize)
 {
 #ifdef CONFIG_PPC_64K_PAGES
 	if (psize == MMU_PAGE_64K)
-		return mm_ctx_slice_mask_64k(&mm->context);
+		return mm_ctx_slice_mask_64k(ctx);
 #endif
 	if (psize == MMU_PAGE_4K)
-		return mm_ctx_slice_mask_4k(&mm->context);
+		return mm_ctx_slice_mask_4k(ctx);
 #ifdef CONFIG_HUGETLB_PAGE
 	if (psize == MMU_PAGE_16M)
-		return mm_ctx_slice_mask_16m(&mm->context);
+		return mm_ctx_slice_mask_16m(ctx);
 	if (psize == MMU_PAGE_16G)
-		return mm_ctx_slice_mask_16g(&mm->context);
+		return mm_ctx_slice_mask_16g(ctx);
 #endif
 	BUG();
 }
 #elif defined(CONFIG_PPC_8xx)
-static struct slice_mask *slice_mask_for_size(struct mm_struct *mm, int psize)
+static struct slice_mask *slice_mask_for_size(mm_context_t *ctx, int psize)
 {
 	if (psize == mmu_virtual_psize)
-		return &mm->context.mask_base_psize;
+		return &ctx->mask_base_psize;
 #ifdef CONFIG_HUGETLB_PAGE
 	if (psize == MMU_PAGE_512K)
-		return &mm->context.mask_512k;
+		return &ctx->mask_512k;
 	if (psize == MMU_PAGE_8M)
-		return &mm->context.mask_8m;
+		return &ctx->mask_8m;
 #endif
 	BUG();
 }
@@ -246,7 +246,7 @@ static void slice_convert(struct mm_struct *mm,
 	slice_dbg("slice_convert(mm=%p, psize=%d)\n", mm, psize);
 	slice_print_mask(" mask", mask);
 
-	psize_mask = slice_mask_for_size(mm, psize);
+	psize_mask = slice_mask_for_size(&mm->context, psize);
 
 	/* We need to use a spinlock here to protect against
 	 * concurrent 64k -> 4k demotion ...
@@ -263,7 +263,7 @@ static void slice_convert(struct mm_struct *mm,
 
 		/* Update the slice_mask */
 		old_psize = (lpsizes[index] >> (mask_index * 4)) & 0xf;
-		old_mask = slice_mask_for_size(mm, old_psize);
+		old_mask = slice_mask_for_size(&mm->context, old_psize);
 		old_mask->low_slices &= ~(1u << i);
 		psize_mask->low_slices |= 1u << i;
 
@@ -282,7 +282,7 @@ static void slice_convert(struct mm_struct *mm,
 
 		/* Update the slice_mask */
 		old_psize = (hpsizes[index] >> (mask_index * 4)) & 0xf;
-		old_mask = slice_mask_for_size(mm, old_psize);
+		old_mask = slice_mask_for_size(&mm->context, old_psize);
 		__clear_bit(i, old_mask->high_slices);
 		__set_bit(i, psize_mask->high_slices);
 
@@ -538,7 +538,7 @@ unsigned long slice_get_unmapped_area(unsigned long addr, unsigned long len,
 	/* First make up a "good" mask of slices that have the right size
 	 * already
 	 */
-	maskp = slice_mask_for_size(mm, psize);
+	maskp = slice_mask_for_size(&mm->context, psize);
 
 	/*
 	 * Here "good" means slices that are already the right page size,
@@ -565,7 +565,7 @@ unsigned long slice_get_unmapped_area(unsigned long addr, unsigned long len,
 	 * a pointer to good mask for the next code to use.
 	 */
 	if (IS_ENABLED(CONFIG_PPC_64K_PAGES) && psize == MMU_PAGE_64K) {
-		compat_maskp = slice_mask_for_size(mm, MMU_PAGE_4K);
+		compat_maskp = slice_mask_for_size(&mm->context, MMU_PAGE_4K);
 		if (fixed)
 			slice_or_mask(&good_mask, maskp, compat_maskp);
 		else
@@ -760,7 +760,7 @@ void slice_init_new_context_exec(struct mm_struct *mm)
 	/*
 	 * Slice mask cache starts zeroed, fill the default size cache.
 	 */
-	mask = slice_mask_for_size(mm, psize);
+	mask = slice_mask_for_size(&mm->context, psize);
 	mask->low_slices = ~0UL;
 	if (SLICE_NUM_HIGH)
 		bitmap_fill(mask->high_slices, SLICE_NUM_HIGH);
@@ -819,14 +819,14 @@ int slice_is_hugepage_only_range(struct mm_struct *mm, unsigned long addr,
 
 	VM_BUG_ON(radix_enabled());
 
-	maskp = slice_mask_for_size(mm, psize);
+	maskp = slice_mask_for_size(&mm->context, psize);
 #ifdef CONFIG_PPC_64K_PAGES
 	/* We need to account for 4k slices too */
 	if (psize == MMU_PAGE_64K) {
 		const struct slice_mask *compat_maskp;
 		struct slice_mask available;
 
-		compat_maskp = slice_mask_for_size(mm, MMU_PAGE_4K);
+		compat_maskp = slice_mask_for_size(&mm->context, MMU_PAGE_4K);
 		slice_or_mask(&available, maskp, compat_maskp);
 		return !slice_check_range_fits(mm, &available, addr, len);
 	}
-- 
2.13.3



* [PATCH v2 04/11] powerpc/mm: move slice_mask_for_size() into mmu.h
  2019-04-25 14:29 [PATCH v2 00/11] Reduce ifdef mess in slice.c Christophe Leroy
                   ` (2 preceding siblings ...)
  2019-04-25 14:29 ` [PATCH v2 03/11] powerpc/mm: hand a context_t over to slice_mask_for_size() instead of mm_struct Christophe Leroy
@ 2019-04-25 14:29 ` Christophe Leroy
  2019-04-26  6:36   ` Aneesh Kumar K.V
  2019-04-25 14:29 ` [PATCH v2 05/11] powerpc/mm: get rid of mm_ctx_slice_mask_xxx() Christophe Leroy
                   ` (6 subsequent siblings)
  10 siblings, 1 reply; 23+ messages in thread
From: Christophe Leroy @ 2019-04-25 14:29 UTC (permalink / raw)
  To: Benjamin Herrenschmidt, Paul Mackerras, Michael Ellerman, aneesh.kumar
  Cc: linux-kernel, linuxppc-dev

Move slice_mask_for_size() into the subarch mmu.h.

At the same time, replace BUG() with VM_BUG_ON(), as those BUG()s are
not there to catch runtime errors, only errors introduced during the
development cycle.

Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
---
 arch/powerpc/include/asm/book3s/64/mmu.h     | 17 +++++++++++
 arch/powerpc/include/asm/nohash/32/mmu-8xx.h | 42 +++++++++++++++++++---------
 arch/powerpc/mm/slice.c                      | 34 ----------------------
 3 files changed, 46 insertions(+), 47 deletions(-)

diff --git a/arch/powerpc/include/asm/book3s/64/mmu.h b/arch/powerpc/include/asm/book3s/64/mmu.h
index 230a9dec7677..ad00355f874f 100644
--- a/arch/powerpc/include/asm/book3s/64/mmu.h
+++ b/arch/powerpc/include/asm/book3s/64/mmu.h
@@ -203,6 +203,23 @@ static inline struct slice_mask *mm_ctx_slice_mask_16g(mm_context_t *ctx)
 }
 #endif
 
+static inline struct slice_mask *slice_mask_for_size(mm_context_t *ctx, int psize)
+{
+#ifdef CONFIG_PPC_64K_PAGES
+	if (psize == MMU_PAGE_64K)
+		return mm_ctx_slice_mask_64k(ctx);
+#endif
+#ifdef CONFIG_HUGETLB_PAGE
+	if (psize == MMU_PAGE_16M)
+		return mm_ctx_slice_mask_16m(ctx);
+	if (psize == MMU_PAGE_16G)
+		return mm_ctx_slice_mask_16g(ctx);
+#endif
+	VM_BUG_ON(psize != MMU_PAGE_4K);
+
+	return mm_ctx_slice_mask_4k(ctx);
+}
+
 #ifdef CONFIG_PPC_SUBPAGE_PROT
 static inline struct subpage_prot_table *mm_ctx_subpage_prot(mm_context_t *ctx)
 {
diff --git a/arch/powerpc/include/asm/nohash/32/mmu-8xx.h b/arch/powerpc/include/asm/nohash/32/mmu-8xx.h
index c503e2f05e61..a0f6844a1498 100644
--- a/arch/powerpc/include/asm/nohash/32/mmu-8xx.h
+++ b/arch/powerpc/include/asm/nohash/32/mmu-8xx.h
@@ -184,7 +184,23 @@
 #define LOW_SLICE_ARRAY_SZ	SLICE_ARRAY_SIZE
 #endif
 
+#if defined(CONFIG_PPC_4K_PAGES)
+#define mmu_virtual_psize	MMU_PAGE_4K
+#elif defined(CONFIG_PPC_16K_PAGES)
+#define mmu_virtual_psize	MMU_PAGE_16K
+#define PTE_FRAG_NR		4
+#define PTE_FRAG_SIZE_SHIFT	12
+#define PTE_FRAG_SIZE		(1UL << 12)
+#else
+#error "Unsupported PAGE_SIZE"
+#endif
+
+#define mmu_linear_psize	MMU_PAGE_8M
+
 #ifndef __ASSEMBLY__
+
+#include <linux/mmdebug.h>
+
 struct slice_mask {
 	u64 low_slices;
 	DECLARE_BITMAP(high_slices, 0);
@@ -255,6 +271,19 @@ static inline struct slice_mask *mm_ctx_slice_mask_8m(mm_context_t *ctx)
 	return &ctx->mask_8m;
 }
 #endif
+
+static inline struct slice_mask *slice_mask_for_size(mm_context_t *ctx, int psize)
+{
+#ifdef CONFIG_HUGETLB_PAGE
+	if (psize == MMU_PAGE_512K)
+		return &ctx->mask_512k;
+	if (psize == MMU_PAGE_8M)
+		return &ctx->mask_8m;
+#endif
+	VM_BUG_ON(psize != mmu_virtual_psize);
+
+	return &ctx->mask_base_psize;
+}
 #endif /* CONFIG_PPC_MM_SLICE */
 
 #define PHYS_IMMR_BASE (mfspr(SPRN_IMMR) & 0xfff80000)
@@ -306,17 +335,4 @@ extern s32 patch__itlbmiss_perf, patch__dtlbmiss_perf;
 
 #endif /* !__ASSEMBLY__ */
 
-#if defined(CONFIG_PPC_4K_PAGES)
-#define mmu_virtual_psize	MMU_PAGE_4K
-#elif defined(CONFIG_PPC_16K_PAGES)
-#define mmu_virtual_psize	MMU_PAGE_16K
-#define PTE_FRAG_NR		4
-#define PTE_FRAG_SIZE_SHIFT	12
-#define PTE_FRAG_SIZE		(1UL << 12)
-#else
-#error "Unsupported PAGE_SIZE"
-#endif
-
-#define mmu_linear_psize	MMU_PAGE_8M
-
 #endif /* _ASM_POWERPC_MMU_8XX_H_ */
diff --git a/arch/powerpc/mm/slice.c b/arch/powerpc/mm/slice.c
index 8eb7e8b09c75..31de91b65a64 100644
--- a/arch/powerpc/mm/slice.c
+++ b/arch/powerpc/mm/slice.c
@@ -150,40 +150,6 @@ static void slice_mask_for_free(struct mm_struct *mm, struct slice_mask *ret,
 			__set_bit(i, ret->high_slices);
 }
 
-#ifdef CONFIG_PPC_BOOK3S_64
-static struct slice_mask *slice_mask_for_size(mm_context_t *ctx, int psize)
-{
-#ifdef CONFIG_PPC_64K_PAGES
-	if (psize == MMU_PAGE_64K)
-		return mm_ctx_slice_mask_64k(ctx);
-#endif
-	if (psize == MMU_PAGE_4K)
-		return mm_ctx_slice_mask_4k(ctx);
-#ifdef CONFIG_HUGETLB_PAGE
-	if (psize == MMU_PAGE_16M)
-		return mm_ctx_slice_mask_16m(ctx);
-	if (psize == MMU_PAGE_16G)
-		return mm_ctx_slice_mask_16g(ctx);
-#endif
-	BUG();
-}
-#elif defined(CONFIG_PPC_8xx)
-static struct slice_mask *slice_mask_for_size(mm_context_t *ctx, int psize)
-{
-	if (psize == mmu_virtual_psize)
-		return &ctx->mask_base_psize;
-#ifdef CONFIG_HUGETLB_PAGE
-	if (psize == MMU_PAGE_512K)
-		return &ctx->mask_512k;
-	if (psize == MMU_PAGE_8M)
-		return &ctx->mask_8m;
-#endif
-	BUG();
-}
-#else
-#error "Must define the slice masks for page sizes supported by the platform"
-#endif
-
 static bool slice_check_range_fits(struct mm_struct *mm,
 			   const struct slice_mask *available,
 			   unsigned long start, unsigned long len)
-- 
2.13.3



* [PATCH v2 05/11] powerpc/mm: get rid of mm_ctx_slice_mask_xxx()
  2019-04-25 14:29 [PATCH v2 00/11] Reduce ifdef mess in slice.c Christophe Leroy
                   ` (3 preceding siblings ...)
  2019-04-25 14:29 ` [PATCH v2 04/11] powerpc/mm: move slice_mask_for_size() into mmu.h Christophe Leroy
@ 2019-04-25 14:29 ` Christophe Leroy
  2019-04-26  6:37   ` Aneesh Kumar K.V
  2019-04-25 14:29 ` [PATCH v2 06/11] powerpc/mm: remove unnecessary #ifdef CONFIG_PPC64 Christophe Leroy
                   ` (5 subsequent siblings)
  10 siblings, 1 reply; 23+ messages in thread
From: Christophe Leroy @ 2019-04-25 14:29 UTC (permalink / raw)
  To: Benjamin Herrenschmidt, Paul Mackerras, Michael Ellerman, aneesh.kumar
  Cc: linux-kernel, linuxppc-dev

Now that slice_mask_for_size() is in mmu.h, the mm_ctx_slice_mask_xxx()
helpers are not needed anymore, so drop them. Note that the 8xx ones
were not used anyway.

Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
---
 arch/powerpc/include/asm/book3s/64/mmu.h     | 32 ++++------------------------
 arch/powerpc/include/asm/nohash/32/mmu-8xx.h | 17 ---------------
 2 files changed, 4 insertions(+), 45 deletions(-)

diff --git a/arch/powerpc/include/asm/book3s/64/mmu.h b/arch/powerpc/include/asm/book3s/64/mmu.h
index ad00355f874f..e3d7f1404e20 100644
--- a/arch/powerpc/include/asm/book3s/64/mmu.h
+++ b/arch/powerpc/include/asm/book3s/64/mmu.h
@@ -179,45 +179,21 @@ static inline void mm_ctx_set_slb_addr_limit(mm_context_t *ctx, unsigned long li
 	ctx->hash_context->slb_addr_limit = limit;
 }
 
-#ifdef CONFIG_PPC_64K_PAGES
-static inline struct slice_mask *mm_ctx_slice_mask_64k(mm_context_t *ctx)
-{
-	return &ctx->hash_context->mask_64k;
-}
-#endif
-
-static inline struct slice_mask *mm_ctx_slice_mask_4k(mm_context_t *ctx)
-{
-	return &ctx->hash_context->mask_4k;
-}
-
-#ifdef CONFIG_HUGETLB_PAGE
-static inline struct slice_mask *mm_ctx_slice_mask_16m(mm_context_t *ctx)
-{
-	return &ctx->hash_context->mask_16m;
-}
-
-static inline struct slice_mask *mm_ctx_slice_mask_16g(mm_context_t *ctx)
-{
-	return &ctx->hash_context->mask_16g;
-}
-#endif
-
 static inline struct slice_mask *slice_mask_for_size(mm_context_t *ctx, int psize)
 {
 #ifdef CONFIG_PPC_64K_PAGES
 	if (psize == MMU_PAGE_64K)
-		return mm_ctx_slice_mask_64k(ctx);
+		return &ctx->hash_context->mask_64k;
 #endif
 #ifdef CONFIG_HUGETLB_PAGE
 	if (psize == MMU_PAGE_16M)
-		return mm_ctx_slice_mask_16m(ctx);
+		return &ctx->hash_context->mask_16m;
 	if (psize == MMU_PAGE_16G)
-		return mm_ctx_slice_mask_16g(ctx);
+		return &ctx->hash_context->mask_16g;
 #endif
 	VM_BUG_ON(psize != MMU_PAGE_4K);
 
-	return mm_ctx_slice_mask_4k(ctx);
+	return &ctx->hash_context->mask_4k;
 }
 
 #ifdef CONFIG_PPC_SUBPAGE_PROT
diff --git a/arch/powerpc/include/asm/nohash/32/mmu-8xx.h b/arch/powerpc/include/asm/nohash/32/mmu-8xx.h
index a0f6844a1498..beded4df1f50 100644
--- a/arch/powerpc/include/asm/nohash/32/mmu-8xx.h
+++ b/arch/powerpc/include/asm/nohash/32/mmu-8xx.h
@@ -255,23 +255,6 @@ static inline void mm_ctx_set_slb_addr_limit(mm_context_t *ctx, unsigned long li
 	ctx->slb_addr_limit = limit;
 }
 
-static inline struct slice_mask *mm_ctx_slice_mask_base(mm_context_t *ctx)
-{
-	return &ctx->mask_base_psize;
-}
-
-#ifdef CONFIG_HUGETLB_PAGE
-static inline struct slice_mask *mm_ctx_slice_mask_512k(mm_context_t *ctx)
-{
-	return &ctx->mask_512k;
-}
-
-static inline struct slice_mask *mm_ctx_slice_mask_8m(mm_context_t *ctx)
-{
-	return &ctx->mask_8m;
-}
-#endif
-
 static inline struct slice_mask *slice_mask_for_size(mm_context_t *ctx, int psize)
 {
 #ifdef CONFIG_HUGETLB_PAGE
-- 
2.13.3



* [PATCH v2 06/11] powerpc/mm: remove unnecessary #ifdef CONFIG_PPC64
  2019-04-25 14:29 [PATCH v2 00/11] Reduce ifdef mess in slice.c Christophe Leroy
                   ` (4 preceding siblings ...)
  2019-04-25 14:29 ` [PATCH v2 05/11] powerpc/mm: get rid of mm_ctx_slice_mask_xxx() Christophe Leroy
@ 2019-04-25 14:29 ` Christophe Leroy
  2019-04-25 14:29 ` [PATCH v2 07/11] powerpc/mm: remove a couple of #ifdef CONFIG_PPC_64K_PAGES in mm/slice.c Christophe Leroy
                   ` (4 subsequent siblings)
  10 siblings, 0 replies; 23+ messages in thread
From: Christophe Leroy @ 2019-04-25 14:29 UTC (permalink / raw)
  To: Benjamin Herrenschmidt, Paul Mackerras, Michael Ellerman, aneesh.kumar
  Cc: linux-kernel, linuxppc-dev

For PPC32 this is a no-op; GCC should be smart enough to optimise it out.

Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
---
 arch/powerpc/mm/slice.c | 4 +---
 1 file changed, 1 insertion(+), 3 deletions(-)

diff --git a/arch/powerpc/mm/slice.c b/arch/powerpc/mm/slice.c
index 31de91b65a64..840c4118a185 100644
--- a/arch/powerpc/mm/slice.c
+++ b/arch/powerpc/mm/slice.c
@@ -118,13 +118,11 @@ static int slice_high_has_vma(struct mm_struct *mm, unsigned long slice)
 	unsigned long start = slice << SLICE_HIGH_SHIFT;
 	unsigned long end = start + (1ul << SLICE_HIGH_SHIFT);
 
-#ifdef CONFIG_PPC64
 	/* Hack, so that each addresses is controlled by exactly one
 	 * of the high or low area bitmaps, the first high area starts
 	 * at 4GB, not 0 */
 	if (start == 0)
-		start = SLICE_LOW_TOP;
-#endif
+		start = (unsigned long)SLICE_LOW_TOP;
 
 	return !slice_area_is_free(mm, start, end - start);
 }
-- 
2.13.3



* [PATCH v2 07/11] powerpc/mm: remove a couple of #ifdef CONFIG_PPC_64K_PAGES in mm/slice.c
  2019-04-25 14:29 [PATCH v2 00/11] Reduce ifdef mess in slice.c Christophe Leroy
                   ` (5 preceding siblings ...)
  2019-04-25 14:29 ` [PATCH v2 06/11] powerpc/mm: remove unnecessary #ifdef CONFIG_PPC64 Christophe Leroy
@ 2019-04-25 14:29 ` Christophe Leroy
  2019-04-26  6:40   ` Aneesh Kumar K.V
  2019-04-25 14:29 ` [PATCH v2 08/11] powerpc/8xx: get rid of #ifdef CONFIG_HUGETLB_PAGE for slices Christophe Leroy
                   ` (3 subsequent siblings)
  10 siblings, 1 reply; 23+ messages in thread
From: Christophe Leroy @ 2019-04-25 14:29 UTC (permalink / raw)
  To: Benjamin Herrenschmidt, Paul Mackerras, Michael Ellerman, aneesh.kumar
  Cc: linux-kernel, linuxppc-dev

This patch replaces a couple of #ifdef CONFIG_PPC_64K_PAGES blocks
with IS_ENABLED(CONFIG_PPC_64K_PAGES) to improve code maintainability.

Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
---
 arch/powerpc/mm/slice.c | 10 ++++------
 1 file changed, 4 insertions(+), 6 deletions(-)

diff --git a/arch/powerpc/mm/slice.c b/arch/powerpc/mm/slice.c
index 840c4118a185..ace97d953040 100644
--- a/arch/powerpc/mm/slice.c
+++ b/arch/powerpc/mm/slice.c
@@ -606,14 +606,13 @@ unsigned long slice_get_unmapped_area(unsigned long addr, unsigned long len,
 	newaddr = slice_find_area(mm, len, &potential_mask,
 				  psize, topdown, high_limit);
 
-#ifdef CONFIG_PPC_64K_PAGES
-	if (newaddr == -ENOMEM && psize == MMU_PAGE_64K) {
+	if (IS_ENABLED(CONFIG_PPC_64K_PAGES) && newaddr == -ENOMEM &&
+	    psize == MMU_PAGE_64K) {
 		/* retry the search with 4k-page slices included */
 		slice_or_mask(&potential_mask, &potential_mask, compat_maskp);
 		newaddr = slice_find_area(mm, len, &potential_mask,
 					  psize, topdown, high_limit);
 	}
-#endif
 
 	if (newaddr == -ENOMEM)
 		return -ENOMEM;
@@ -784,9 +783,9 @@ int slice_is_hugepage_only_range(struct mm_struct *mm, unsigned long addr,
 	VM_BUG_ON(radix_enabled());
 
 	maskp = slice_mask_for_size(&mm->context, psize);
-#ifdef CONFIG_PPC_64K_PAGES
+
 	/* We need to account for 4k slices too */
-	if (psize == MMU_PAGE_64K) {
+	if (IS_ENABLED(CONFIG_PPC_64K_PAGES) && psize == MMU_PAGE_64K) {
 		const struct slice_mask *compat_maskp;
 		struct slice_mask available;
 
@@ -794,7 +793,6 @@ int slice_is_hugepage_only_range(struct mm_struct *mm, unsigned long addr,
 		slice_or_mask(&available, maskp, compat_maskp);
 		return !slice_check_range_fits(mm, &available, addr, len);
 	}
-#endif
 
 	return !slice_check_range_fits(mm, maskp, addr, len);
 }
-- 
2.13.3



* [PATCH v2 08/11] powerpc/8xx: get rid of #ifdef CONFIG_HUGETLB_PAGE for slices
  2019-04-25 14:29 [PATCH v2 00/11] Reduce ifdef mess in slice.c Christophe Leroy
                   ` (6 preceding siblings ...)
  2019-04-25 14:29 ` [PATCH v2 07/11] powerpc/mm: remove a couple of #ifdef CONFIG_PPC_64K_PAGES in mm/slice.c Christophe Leroy
@ 2019-04-25 14:29 ` Christophe Leroy
  2019-04-25 14:29 ` [PATCH v2 09/11] powerpc/mm: define get_slice_psize() all the time Christophe Leroy
                   ` (2 subsequent siblings)
  10 siblings, 0 replies; 23+ messages in thread
From: Christophe Leroy @ 2019-04-25 14:29 UTC (permalink / raw)
  To: Benjamin Herrenschmidt, Paul Mackerras, Michael Ellerman, aneesh.kumar
  Cc: linux-kernel, linuxppc-dev

The 8xx only selects CONFIG_PPC_MM_SLICES when CONFIG_HUGETLB_PAGE is
set, so the #ifdef CONFIG_HUGETLB_PAGE around the hugepage slice masks
is redundant.

Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
---
 arch/powerpc/include/asm/nohash/32/mmu-8xx.h | 4 ----
 1 file changed, 4 deletions(-)

diff --git a/arch/powerpc/include/asm/nohash/32/mmu-8xx.h b/arch/powerpc/include/asm/nohash/32/mmu-8xx.h
index beded4df1f50..0224fc7633b0 100644
--- a/arch/powerpc/include/asm/nohash/32/mmu-8xx.h
+++ b/arch/powerpc/include/asm/nohash/32/mmu-8xx.h
@@ -216,10 +216,8 @@ typedef struct {
 	unsigned char high_slices_psize[0];
 	unsigned long slb_addr_limit;
 	struct slice_mask mask_base_psize; /* 4k or 16k */
-# ifdef CONFIG_HUGETLB_PAGE
 	struct slice_mask mask_512k;
 	struct slice_mask mask_8m;
-# endif
 #endif
 	void *pte_frag;
 } mm_context_t;
@@ -257,12 +255,10 @@ static inline void mm_ctx_set_slb_addr_limit(mm_context_t *ctx, unsigned long li
 
 static inline struct slice_mask *slice_mask_for_size(mm_context_t *ctx, int psize)
 {
-#ifdef CONFIG_HUGETLB_PAGE
 	if (psize == MMU_PAGE_512K)
 		return &ctx->mask_512k;
 	if (psize == MMU_PAGE_8M)
 		return &ctx->mask_8m;
-#endif
 	VM_BUG_ON(psize != mmu_virtual_psize);
 
 	return &ctx->mask_base_psize;
-- 
2.13.3



* [PATCH v2 09/11] powerpc/mm: define get_slice_psize() all the time
  2019-04-25 14:29 [PATCH v2 00/11] Reduce ifdef mess in slice.c Christophe Leroy
                   ` (7 preceding siblings ...)
  2019-04-25 14:29 ` [PATCH v2 08/11] powerpc/8xx: get rid of #ifdef CONFIG_HUGETLB_PAGE for slices Christophe Leroy
@ 2019-04-25 14:29 ` Christophe Leroy
  2019-04-26  6:42   ` Aneesh Kumar K.V
  2019-04-25 14:29 ` [PATCH v2 10/11] powerpc/mm: define subarch SLB_ADDR_LIMIT_DEFAULT Christophe Leroy
  2019-04-25 14:29 ` [PATCH v2 11/11] powerpc/mm: drop slice DEBUG Christophe Leroy
  10 siblings, 1 reply; 23+ messages in thread
From: Christophe Leroy @ 2019-04-25 14:29 UTC (permalink / raw)
  To: Benjamin Herrenschmidt, Paul Mackerras, Michael Ellerman, aneesh.kumar
  Cc: linux-kernel, linuxppc-dev

get_slice_psize() can be defined regardless of CONFIG_PPC_MM_SLICES,
which avoids #ifdefs at the call sites.

Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
---
 arch/powerpc/include/asm/slice.h | 5 +++++
 arch/powerpc/mm/hugetlbpage.c    | 4 +---
 2 files changed, 6 insertions(+), 3 deletions(-)

diff --git a/arch/powerpc/include/asm/slice.h b/arch/powerpc/include/asm/slice.h
index be8af667098f..c6f466f4c241 100644
--- a/arch/powerpc/include/asm/slice.h
+++ b/arch/powerpc/include/asm/slice.h
@@ -36,6 +36,11 @@ void slice_setup_new_exec(void);
 
 static inline void slice_init_new_context_exec(struct mm_struct *mm) {}
 
+static inline unsigned int get_slice_psize(struct mm_struct *mm, unsigned long addr)
+{
+	return 0;
+}
+
 #endif /* CONFIG_PPC_MM_SLICES */
 
 #endif /* __ASSEMBLY__ */
diff --git a/arch/powerpc/mm/hugetlbpage.c b/arch/powerpc/mm/hugetlbpage.c
index 9e732bb2c84a..5f67e7a4d1cc 100644
--- a/arch/powerpc/mm/hugetlbpage.c
+++ b/arch/powerpc/mm/hugetlbpage.c
@@ -578,14 +578,12 @@ unsigned long hugetlb_get_unmapped_area(struct file *file, unsigned long addr,
 
 unsigned long vma_mmu_pagesize(struct vm_area_struct *vma)
 {
-#ifdef CONFIG_PPC_MM_SLICES
 	/* With radix we don't use slice, so derive it from vma*/
-	if (!radix_enabled()) {
+	if (IS_ENABLED(CONFIG_PPC_MM_SLICES) && !radix_enabled()) {
 		unsigned int psize = get_slice_psize(vma->vm_mm, vma->vm_start);
 
 		return 1UL << mmu_psize_to_shift(psize);
 	}
-#endif
 	return vma_kernel_pagesize(vma);
 }
 
-- 
2.13.3



* [PATCH v2 10/11] powerpc/mm: define subarch SLB_ADDR_LIMIT_DEFAULT
  2019-04-25 14:29 [PATCH v2 00/11] Reduce ifdef mess in slice.c Christophe Leroy
                   ` (8 preceding siblings ...)
  2019-04-25 14:29 ` [PATCH v2 09/11] powerpc/mm: define get_slice_psize() all the time Christophe Leroy
@ 2019-04-25 14:29 ` Christophe Leroy
  2019-04-26  6:43   ` Aneesh Kumar K.V
  2019-04-25 14:29 ` [PATCH v2 11/11] powerpc/mm: drop slice DEBUG Christophe Leroy
  10 siblings, 1 reply; 23+ messages in thread
From: Christophe Leroy @ 2019-04-25 14:29 UTC (permalink / raw)
  To: Benjamin Herrenschmidt, Paul Mackerras, Michael Ellerman, aneesh.kumar
  Cc: linux-kernel, linuxppc-dev

This patch defines a subarch-specific SLB_ADDR_LIMIT_DEFAULT to remove
the #ifdefs around the setup of mm->context.slb_addr_limit.

It also generalises the use of the mm_ctx_set_slb_addr_limit() helper.

Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
---
 arch/powerpc/include/asm/book3s/64/slice.h | 2 ++
 arch/powerpc/include/asm/nohash/32/slice.h | 2 ++
 arch/powerpc/mm/hash_utils_64.c            | 2 +-
 arch/powerpc/mm/slice.c                    | 6 +-----
 arch/powerpc/mm/tlb_nohash.c               | 4 +---
 5 files changed, 7 insertions(+), 9 deletions(-)

diff --git a/arch/powerpc/include/asm/book3s/64/slice.h b/arch/powerpc/include/asm/book3s/64/slice.h
index 062e11136e9c..f0d3194ba41b 100644
--- a/arch/powerpc/include/asm/book3s/64/slice.h
+++ b/arch/powerpc/include/asm/book3s/64/slice.h
@@ -11,4 +11,6 @@
 #define SLICE_NUM_HIGH		(H_PGTABLE_RANGE >> SLICE_HIGH_SHIFT)
 #define GET_HIGH_SLICE_INDEX(addr)	((addr) >> SLICE_HIGH_SHIFT)
 
+#define SLB_ADDR_LIMIT_DEFAULT	DEFAULT_MAP_WINDOW_USER64
+
 #endif /* _ASM_POWERPC_BOOK3S_64_SLICE_H */
diff --git a/arch/powerpc/include/asm/nohash/32/slice.h b/arch/powerpc/include/asm/nohash/32/slice.h
index 777d62e40ac0..39eb0154ae2d 100644
--- a/arch/powerpc/include/asm/nohash/32/slice.h
+++ b/arch/powerpc/include/asm/nohash/32/slice.h
@@ -13,6 +13,8 @@
 #define SLICE_NUM_HIGH		0ul
 #define GET_HIGH_SLICE_INDEX(addr)	(addr & 0)
 
+#define SLB_ADDR_LIMIT_DEFAULT	DEFAULT_MAP_WINDOW
+
 #endif /* CONFIG_PPC_MM_SLICES */
 
 #endif /* _ASM_POWERPC_NOHASH_32_SLICE_H */
diff --git a/arch/powerpc/mm/hash_utils_64.c b/arch/powerpc/mm/hash_utils_64.c
index f727197de713..884246e3bf0b 100644
--- a/arch/powerpc/mm/hash_utils_64.c
+++ b/arch/powerpc/mm/hash_utils_64.c
@@ -1050,7 +1050,7 @@ void __init hash__early_init_mmu(void)
 	htab_initialize();
 
 	init_mm.context.hash_context = &init_hash_mm_context;
-	init_mm.context.hash_context->slb_addr_limit = DEFAULT_MAP_WINDOW_USER64;
+	mm_ctx_set_slb_addr_limit(&init_mm.context, SLB_ADDR_LIMIT_DEFAULT);
 
 	pr_info("Initializing hash mmu with SLB\n");
 	/* Initialize SLB management */
diff --git a/arch/powerpc/mm/slice.c b/arch/powerpc/mm/slice.c
index ace97d953040..97fbf7b54422 100644
--- a/arch/powerpc/mm/slice.c
+++ b/arch/powerpc/mm/slice.c
@@ -704,11 +704,7 @@ void slice_init_new_context_exec(struct mm_struct *mm)
 	 * case of fork it is just inherited from the mm being
 	 * duplicated.
 	 */
-#ifdef CONFIG_PPC64
-	mm_ctx_set_slb_addr_limit(&mm->context, DEFAULT_MAP_WINDOW_USER64);
-#else
-	mm->context.slb_addr_limit = DEFAULT_MAP_WINDOW;
-#endif
+	mm_ctx_set_slb_addr_limit(&mm->context, SLB_ADDR_LIMIT_DEFAULT);
 	mm_ctx_set_user_psize(&mm->context, psize);
 
 	/*
diff --git a/arch/powerpc/mm/tlb_nohash.c b/arch/powerpc/mm/tlb_nohash.c
index 088e0a6b5ade..ba4bff11191f 100644
--- a/arch/powerpc/mm/tlb_nohash.c
+++ b/arch/powerpc/mm/tlb_nohash.c
@@ -802,9 +802,7 @@ void __init early_init_mmu(void)
 #endif
 
 #ifdef CONFIG_PPC_MM_SLICES
-#if defined(CONFIG_PPC_8xx)
-	init_mm.context.slb_addr_limit = DEFAULT_MAP_WINDOW;
-#endif
+	mm_ctx_set_slb_addr_limit(&init_mm.context, SLB_ADDR_LIMIT_DEFAULT);
 #endif
 }
 #endif /* CONFIG_PPC64 */
-- 
2.13.3


^ permalink raw reply related	[flat|nested] 23+ messages in thread

* [PATCH v2 11/11] powerpc/mm: drop slice DEBUG
  2019-04-25 14:29 [PATCH v2 00/11] Reduce ifdef mess in slice.c Christophe Leroy
                   ` (9 preceding siblings ...)
  2019-04-25 14:29 ` [PATCH v2 10/11] powerpc/mm: define subarch SLB_ADDR_LIMIT_DEFAULT Christophe Leroy
@ 2019-04-25 14:29 ` Christophe Leroy
  2019-04-26  6:44   ` Aneesh Kumar K.V
  10 siblings, 1 reply; 23+ messages in thread
From: Christophe Leroy @ 2019-04-25 14:29 UTC (permalink / raw)
  To: Benjamin Herrenschmidt, Paul Mackerras, Michael Ellerman, aneesh.kumar
  Cc: linux-kernel, linuxppc-dev

Slice support is now a mature functionality. Drop the DEBUG code.
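
The kind of scaffolding being dropped is the classic file-local debug switch
(a simplified userspace sketch, mirroring the removed slice_dbg()/
slice_print_mask() pattern; once the code is mature, pr_devel() alone, itself
compiled out unless DEBUG is defined for the file, makes the extra wrapper
redundant):

```c
#include <assert.h>
#include <stdio.h>

/* File-local DEBUG switch wrapping every trace - the pattern removed by
 * this patch.  Without -DDEBUG the macro expands to nothing. */
#ifdef DEBUG
static int _slice_debug = 1;
#define slice_dbg(fmt, ...) \
	do { if (_slice_debug) fprintf(stderr, fmt, ##__VA_ARGS__); } while (0)
#else
#define slice_dbg(fmt, ...) do { } while (0)
#endif

static int slice_convert_demo(int psize)
{
	slice_dbg("slice_convert(psize=%d)\n", psize); /* gone without -DDEBUG */
	return psize;
}
```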

Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
---
 arch/powerpc/mm/slice.c | 62 ++++---------------------------------------------
 1 file changed, 4 insertions(+), 58 deletions(-)

diff --git a/arch/powerpc/mm/slice.c b/arch/powerpc/mm/slice.c
index 97fbf7b54422..a9d803738b65 100644
--- a/arch/powerpc/mm/slice.c
+++ b/arch/powerpc/mm/slice.c
@@ -41,28 +41,6 @@
 
 static DEFINE_SPINLOCK(slice_convert_lock);
 
-#ifdef DEBUG
-int _slice_debug = 1;
-
-static void slice_print_mask(const char *label, const struct slice_mask *mask)
-{
-	if (!_slice_debug)
-		return;
-	pr_devel("%s low_slice: %*pbl\n", label,
-			(int)SLICE_NUM_LOW, &mask->low_slices);
-	pr_devel("%s high_slice: %*pbl\n", label,
-			(int)SLICE_NUM_HIGH, mask->high_slices);
-}
-
-#define slice_dbg(fmt...) do { if (_slice_debug) pr_devel(fmt); } while (0)
-
-#else
-
-static void slice_print_mask(const char *label, const struct slice_mask *mask) {}
-#define slice_dbg(fmt...)
-
-#endif
-
 static inline bool slice_addr_is_low(unsigned long addr)
 {
 	u64 tmp = (u64)addr;
@@ -207,9 +185,6 @@ static void slice_convert(struct mm_struct *mm,
 	unsigned long i, flags;
 	int old_psize;
 
-	slice_dbg("slice_convert(mm=%p, psize=%d)\n", mm, psize);
-	slice_print_mask(" mask", mask);
-
 	psize_mask = slice_mask_for_size(&mm->context, psize);
 
 	/* We need to use a spinlock here to protect against
@@ -255,10 +230,6 @@ static void slice_convert(struct mm_struct *mm,
 				(((unsigned long)psize) << (mask_index * 4));
 	}
 
-	slice_dbg(" lsps=%lx, hsps=%lx\n",
-		  (unsigned long)mm_ctx_low_slices(&mm->context),
-		  (unsigned long)mm_ctx_high_slices(&mm->context));
-
 	spin_unlock_irqrestore(&slice_convert_lock, flags);
 
 	copro_flush_all_slbs(mm);
@@ -485,14 +456,9 @@ unsigned long slice_get_unmapped_area(unsigned long addr, unsigned long len,
 	BUG_ON(mm_ctx_slb_addr_limit(&mm->context) == 0);
 	VM_BUG_ON(radix_enabled());
 
-	slice_dbg("slice_get_unmapped_area(mm=%p, psize=%d...\n", mm, psize);
-	slice_dbg(" addr=%lx, len=%lx, flags=%lx, topdown=%d\n",
-		  addr, len, flags, topdown);
-
 	/* If hint, make sure it matches our alignment restrictions */
 	if (!fixed && addr) {
 		addr = _ALIGN_UP(addr, page_size);
-		slice_dbg(" aligned addr=%lx\n", addr);
 		/* Ignore hint if it's too large or overlaps a VMA */
 		if (addr > high_limit - len || addr < mmap_min_addr ||
 		    !slice_area_is_free(mm, addr, len))
@@ -538,17 +504,12 @@ unsigned long slice_get_unmapped_area(unsigned long addr, unsigned long len,
 		slice_copy_mask(&good_mask, maskp);
 	}
 
-	slice_print_mask(" good_mask", &good_mask);
-	if (compat_maskp)
-		slice_print_mask(" compat_mask", compat_maskp);
-
 	/* First check hint if it's valid or if we have MAP_FIXED */
 	if (addr != 0 || fixed) {
 		/* Check if we fit in the good mask. If we do, we just return,
 		 * nothing else to do
 		 */
 		if (slice_check_range_fits(mm, &good_mask, addr, len)) {
-			slice_dbg(" fits good !\n");
 			newaddr = addr;
 			goto return_addr;
 		}
@@ -558,13 +519,10 @@ unsigned long slice_get_unmapped_area(unsigned long addr, unsigned long len,
 		 */
 		newaddr = slice_find_area(mm, len, &good_mask,
 					  psize, topdown, high_limit);
-		if (newaddr != -ENOMEM) {
-			/* Found within the good mask, we don't have to setup,
-			 * we thus return directly
-			 */
-			slice_dbg(" found area at 0x%lx\n", newaddr);
+
+		/* Found within good mask, don't have to setup, thus return directly */
+		if (newaddr != -ENOMEM)
 			goto return_addr;
-		}
 	}
 	/*
 	 * We don't fit in the good mask, check what other slices are
@@ -572,11 +530,9 @@ unsigned long slice_get_unmapped_area(unsigned long addr, unsigned long len,
 	 */
 	slice_mask_for_free(mm, &potential_mask, high_limit);
 	slice_or_mask(&potential_mask, &potential_mask, &good_mask);
-	slice_print_mask(" potential", &potential_mask);
 
 	if (addr != 0 || fixed) {
 		if (slice_check_range_fits(mm, &potential_mask, addr, len)) {
-			slice_dbg(" fits potential !\n");
 			newaddr = addr;
 			goto convert;
 		}
@@ -586,18 +542,14 @@ unsigned long slice_get_unmapped_area(unsigned long addr, unsigned long len,
 	if (fixed)
 		return -EBUSY;
 
-	slice_dbg(" search...\n");
-
 	/* If we had a hint that didn't work out, see if we can fit
 	 * anywhere in the good area.
 	 */
 	if (addr) {
 		newaddr = slice_find_area(mm, len, &good_mask,
 					  psize, topdown, high_limit);
-		if (newaddr != -ENOMEM) {
-			slice_dbg(" found area at 0x%lx\n", newaddr);
+		if (newaddr != -ENOMEM)
 			goto return_addr;
-		}
 	}
 
 	/* Now let's see if we can find something in the existing slices
@@ -618,8 +570,6 @@ unsigned long slice_get_unmapped_area(unsigned long addr, unsigned long len,
 		return -ENOMEM;
 
 	slice_range_to_mask(newaddr, len, &potential_mask);
-	slice_dbg(" found potential area at 0x%lx\n", newaddr);
-	slice_print_mask(" mask", &potential_mask);
 
  convert:
 	/*
@@ -697,8 +647,6 @@ void slice_init_new_context_exec(struct mm_struct *mm)
 	struct slice_mask *mask;
 	unsigned int psize = mmu_virtual_psize;
 
-	slice_dbg("slice_init_new_context_exec(mm=%p)\n", mm);
-
 	/*
 	 * In the case of exec, use the default limit. In the
 	 * case of fork it is just inherited from the mm being
@@ -730,8 +678,6 @@ void slice_setup_new_exec(void)
 {
 	struct mm_struct *mm = current->mm;
 
-	slice_dbg("slice_setup_new_exec(mm=%p)\n", mm);
-
 	if (!is_32bit_task())
 		return;
 
-- 
2.13.3


^ permalink raw reply related	[flat|nested] 23+ messages in thread

* Re: [PATCH v2 01/11] powerpc/mm: fix erroneous duplicate slb_addr_limit init
  2019-04-25 14:29 ` [PATCH v2 01/11] powerpc/mm: fix erroneous duplicate slb_addr_limit init Christophe Leroy
@ 2019-04-26  6:32   ` Aneesh Kumar K.V
  2019-05-03  6:59   ` Michael Ellerman
  1 sibling, 0 replies; 23+ messages in thread
From: Aneesh Kumar K.V @ 2019-04-26  6:32 UTC (permalink / raw)
  To: Christophe Leroy, Benjamin Herrenschmidt, Paul Mackerras,
	Michael Ellerman
  Cc: linuxppc-dev, linux-kernel

Christophe Leroy <christophe.leroy@c-s.fr> writes:

> Commit 67fda38f0d68 ("powerpc/mm: Move slb_addr_linit to
> early_init_mmu") moved slb_addr_limit init out of setup_arch().
>
> Commit 701101865f5d ("powerpc/mm: Reduce memory usage for mm_context_t
> for radix") brought it back into setup_arch() by error.
>
> This patch reverts that erroneous regression.
>

Reviewed-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>

> Fixes: 701101865f5d ("powerpc/mm: Reduce memory usage for mm_context_t for radix")
> Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
> ---
>  arch/powerpc/kernel/setup-common.c | 6 ------
>  1 file changed, 6 deletions(-)
>
> diff --git a/arch/powerpc/kernel/setup-common.c b/arch/powerpc/kernel/setup-common.c
> index 1729bf409562..7af085b38cd1 100644
> --- a/arch/powerpc/kernel/setup-common.c
> +++ b/arch/powerpc/kernel/setup-common.c
> @@ -950,12 +950,6 @@ void __init setup_arch(char **cmdline_p)
>  	init_mm.end_data = (unsigned long) _edata;
>  	init_mm.brk = klimit;
>  
> -#ifdef CONFIG_PPC_MM_SLICES
> -#if defined(CONFIG_PPC_8xx)
> -	init_mm.context.slb_addr_limit = DEFAULT_MAP_WINDOW;
> -#endif
> -#endif
> -
>  #ifdef CONFIG_SPAPR_TCE_IOMMU
>  	mm_iommu_init(&init_mm);
>  #endif
> -- 
> 2.13.3


^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: [PATCH v2 02/11] powerpc/mm: no slice for nohash/64
  2019-04-25 14:29 ` [PATCH v2 02/11] powerpc/mm: no slice for nohash/64 Christophe Leroy
@ 2019-04-26  6:33   ` Aneesh Kumar K.V
  0 siblings, 0 replies; 23+ messages in thread
From: Aneesh Kumar K.V @ 2019-04-26  6:33 UTC (permalink / raw)
  To: Christophe Leroy, Benjamin Herrenschmidt, Paul Mackerras,
	Michael Ellerman
  Cc: linuxppc-dev, linux-kernel

Christophe Leroy <christophe.leroy@c-s.fr> writes:

> Only nohash/32 and book3s/64 support mm slices.
>

Reviewed-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>

> Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
> ---
>  arch/powerpc/include/asm/nohash/64/slice.h | 12 ------------
>  arch/powerpc/include/asm/slice.h           |  4 +---
>  arch/powerpc/platforms/Kconfig.cputype     |  4 ++++
>  3 files changed, 5 insertions(+), 15 deletions(-)
>  delete mode 100644 arch/powerpc/include/asm/nohash/64/slice.h
>
> diff --git a/arch/powerpc/include/asm/nohash/64/slice.h b/arch/powerpc/include/asm/nohash/64/slice.h
> deleted file mode 100644
> index ad0d6e3cc1c5..000000000000
> --- a/arch/powerpc/include/asm/nohash/64/slice.h
> +++ /dev/null
> @@ -1,12 +0,0 @@
> -/* SPDX-License-Identifier: GPL-2.0 */
> -#ifndef _ASM_POWERPC_NOHASH_64_SLICE_H
> -#define _ASM_POWERPC_NOHASH_64_SLICE_H
> -
> -#ifdef CONFIG_PPC_64K_PAGES
> -#define get_slice_psize(mm, addr)	MMU_PAGE_64K
> -#else /* CONFIG_PPC_64K_PAGES */
> -#define get_slice_psize(mm, addr)	MMU_PAGE_4K
> -#endif /* !CONFIG_PPC_64K_PAGES */
> -#define slice_set_user_psize(mm, psize)	do { BUG(); } while (0)
> -
> -#endif /* _ASM_POWERPC_NOHASH_64_SLICE_H */
> diff --git a/arch/powerpc/include/asm/slice.h b/arch/powerpc/include/asm/slice.h
> index 44816cbc4198..be8af667098f 100644
> --- a/arch/powerpc/include/asm/slice.h
> +++ b/arch/powerpc/include/asm/slice.h
> @@ -4,9 +4,7 @@
>  
>  #ifdef CONFIG_PPC_BOOK3S_64
>  #include <asm/book3s/64/slice.h>
> -#elif defined(CONFIG_PPC64)
> -#include <asm/nohash/64/slice.h>
> -#elif defined(CONFIG_PPC_MMU_NOHASH)
> +#elif defined(CONFIG_PPC_MMU_NOHASH_32)
>  #include <asm/nohash/32/slice.h>
>  #endif
>  
> diff --git a/arch/powerpc/platforms/Kconfig.cputype b/arch/powerpc/platforms/Kconfig.cputype
> index 00b2bb536c74..04915f51f447 100644
> --- a/arch/powerpc/platforms/Kconfig.cputype
> +++ b/arch/powerpc/platforms/Kconfig.cputype
> @@ -391,6 +391,10 @@ config PPC_MMU_NOHASH
>  	def_bool y
>  	depends on !PPC_BOOK3S
>  
> +config PPC_MMU_NOHASH_32
> +	def_bool y
> +	depends on PPC_MMU_NOHASH && PPC32
> +
>  config PPC_BOOK3E_MMU
>  	def_bool y
>  	depends on FSL_BOOKE || PPC_BOOK3E
> -- 
> 2.13.3


^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: [PATCH v2 03/11] powerpc/mm: hand a context_t over to slice_mask_for_size() instead of mm_struct
  2019-04-25 14:29 ` [PATCH v2 03/11] powerpc/mm: hand a context_t over to slice_mask_for_size() instead of mm_struct Christophe Leroy
@ 2019-04-26  6:34   ` Aneesh Kumar K.V
  0 siblings, 0 replies; 23+ messages in thread
From: Aneesh Kumar K.V @ 2019-04-26  6:34 UTC (permalink / raw)
  To: Christophe Leroy, Benjamin Herrenschmidt, Paul Mackerras,
	Michael Ellerman
  Cc: linuxppc-dev, linux-kernel

Christophe Leroy <christophe.leroy@c-s.fr> writes:

> slice_mask_for_size() only uses mm->context, so hand it a pointer to
> the context directly. This will help move the function into the
> subarch mmu.h in the next patch, by avoiding having to include the
> definition of struct mm_struct.
>
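
The header-dependency point can be sketched in plain C (hypothetical names;
only the narrower parameter type matters): a function taking the member's type
can live in a header that never needs the full struct mm_struct definition,
while only callers, which already have the full struct, pass &mm->context.

```c
#include <assert.h>

/* This part could sit in a subarch mmu.h: it only needs mm_context_t. */
typedef struct { unsigned int user_psize; } mm_context_t;

static inline unsigned int ctx_user_psize(mm_context_t *ctx)
{
	return ctx->user_psize;
}

/* Only the caller needs the full struct, and hands over &mm->context. */
struct mm_struct { mm_context_t context; };

static unsigned int mm_user_psize(struct mm_struct *mm)
{
	return ctx_user_psize(&mm->context);
}
```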

Reviewed-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>

> Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
> ---
>  arch/powerpc/mm/slice.c | 34 +++++++++++++++++-----------------
>  1 file changed, 17 insertions(+), 17 deletions(-)
>
> diff --git a/arch/powerpc/mm/slice.c b/arch/powerpc/mm/slice.c
> index 35b278082391..8eb7e8b09c75 100644
> --- a/arch/powerpc/mm/slice.c
> +++ b/arch/powerpc/mm/slice.c
> @@ -151,32 +151,32 @@ static void slice_mask_for_free(struct mm_struct *mm, struct slice_mask *ret,
>  }
>  
>  #ifdef CONFIG_PPC_BOOK3S_64
> -static struct slice_mask *slice_mask_for_size(struct mm_struct *mm, int psize)
> +static struct slice_mask *slice_mask_for_size(mm_context_t *ctx, int psize)
>  {
>  #ifdef CONFIG_PPC_64K_PAGES
>  	if (psize == MMU_PAGE_64K)
> -		return mm_ctx_slice_mask_64k(&mm->context);
> +		return mm_ctx_slice_mask_64k(&ctx);
>  #endif
>  	if (psize == MMU_PAGE_4K)
> -		return mm_ctx_slice_mask_4k(&mm->context);
> +		return mm_ctx_slice_mask_4k(&ctx);
>  #ifdef CONFIG_HUGETLB_PAGE
>  	if (psize == MMU_PAGE_16M)
> -		return mm_ctx_slice_mask_16m(&mm->context);
> +		return mm_ctx_slice_mask_16m(&ctx);
>  	if (psize == MMU_PAGE_16G)
> -		return mm_ctx_slice_mask_16g(&mm->context);
> +		return mm_ctx_slice_mask_16g(&ctx);
>  #endif
>  	BUG();
>  }
>  #elif defined(CONFIG_PPC_8xx)
> -static struct slice_mask *slice_mask_for_size(struct mm_struct *mm, int psize)
> +static struct slice_mask *slice_mask_for_size(mm_context_t *ctx, int psize)
>  {
>  	if (psize == mmu_virtual_psize)
> -		return &mm->context.mask_base_psize;
> +		return &ctx->mask_base_psize;
>  #ifdef CONFIG_HUGETLB_PAGE
>  	if (psize == MMU_PAGE_512K)
> -		return &mm->context.mask_512k;
> +		return &ctx->mask_512k;
>  	if (psize == MMU_PAGE_8M)
> -		return &mm->context.mask_8m;
> +		return &ctx->mask_8m;
>  #endif
>  	BUG();
>  }
> @@ -246,7 +246,7 @@ static void slice_convert(struct mm_struct *mm,
>  	slice_dbg("slice_convert(mm=%p, psize=%d)\n", mm, psize);
>  	slice_print_mask(" mask", mask);
>  
> -	psize_mask = slice_mask_for_size(mm, psize);
> +	psize_mask = slice_mask_for_size(&mm->context, psize);
>  
>  	/* We need to use a spinlock here to protect against
>  	 * concurrent 64k -> 4k demotion ...
> @@ -263,7 +263,7 @@ static void slice_convert(struct mm_struct *mm,
>  
>  		/* Update the slice_mask */
>  		old_psize = (lpsizes[index] >> (mask_index * 4)) & 0xf;
> -		old_mask = slice_mask_for_size(mm, old_psize);
> +		old_mask = slice_mask_for_size(&mm->context, old_psize);
>  		old_mask->low_slices &= ~(1u << i);
>  		psize_mask->low_slices |= 1u << i;
>  
> @@ -282,7 +282,7 @@ static void slice_convert(struct mm_struct *mm,
>  
>  		/* Update the slice_mask */
>  		old_psize = (hpsizes[index] >> (mask_index * 4)) & 0xf;
> -		old_mask = slice_mask_for_size(mm, old_psize);
> +		old_mask = slice_mask_for_size(&mm->context, old_psize);
>  		__clear_bit(i, old_mask->high_slices);
>  		__set_bit(i, psize_mask->high_slices);
>  
> @@ -538,7 +538,7 @@ unsigned long slice_get_unmapped_area(unsigned long addr, unsigned long len,
>  	/* First make up a "good" mask of slices that have the right size
>  	 * already
>  	 */
> -	maskp = slice_mask_for_size(mm, psize);
> +	maskp = slice_mask_for_size(&mm->context, psize);
>  
>  	/*
>  	 * Here "good" means slices that are already the right page size,
> @@ -565,7 +565,7 @@ unsigned long slice_get_unmapped_area(unsigned long addr, unsigned long len,
>  	 * a pointer to good mask for the next code to use.
>  	 */
>  	if (IS_ENABLED(CONFIG_PPC_64K_PAGES) && psize == MMU_PAGE_64K) {
> -		compat_maskp = slice_mask_for_size(mm, MMU_PAGE_4K);
> +		compat_maskp = slice_mask_for_size(&mm->context, MMU_PAGE_4K);
>  		if (fixed)
>  			slice_or_mask(&good_mask, maskp, compat_maskp);
>  		else
> @@ -760,7 +760,7 @@ void slice_init_new_context_exec(struct mm_struct *mm)
>  	/*
>  	 * Slice mask cache starts zeroed, fill the default size cache.
>  	 */
> -	mask = slice_mask_for_size(mm, psize);
> +	mask = slice_mask_for_size(&mm->context, psize);
>  	mask->low_slices = ~0UL;
>  	if (SLICE_NUM_HIGH)
>  		bitmap_fill(mask->high_slices, SLICE_NUM_HIGH);
> @@ -819,14 +819,14 @@ int slice_is_hugepage_only_range(struct mm_struct *mm, unsigned long addr,
>  
>  	VM_BUG_ON(radix_enabled());
>  
> -	maskp = slice_mask_for_size(mm, psize);
> +	maskp = slice_mask_for_size(&mm->context, psize);
>  #ifdef CONFIG_PPC_64K_PAGES
>  	/* We need to account for 4k slices too */
>  	if (psize == MMU_PAGE_64K) {
>  		const struct slice_mask *compat_maskp;
>  		struct slice_mask available;
>  
> -		compat_maskp = slice_mask_for_size(mm, MMU_PAGE_4K);
> +		compat_maskp = slice_mask_for_size(&mm->context, MMU_PAGE_4K);
>  		slice_or_mask(&available, maskp, compat_maskp);
>  		return !slice_check_range_fits(mm, &available, addr, len);
>  	}
> -- 
> 2.13.3


^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: [PATCH v2 04/11] powerpc/mm: move slice_mask_for_size() into mmu.h
  2019-04-25 14:29 ` [PATCH v2 04/11] powerpc/mm: move slice_mask_for_size() into mmu.h Christophe Leroy
@ 2019-04-26  6:36   ` Aneesh Kumar K.V
  0 siblings, 0 replies; 23+ messages in thread
From: Aneesh Kumar K.V @ 2019-04-26  6:36 UTC (permalink / raw)
  To: Christophe Leroy, Benjamin Herrenschmidt, Paul Mackerras,
	Michael Ellerman
  Cc: linuxppc-dev, linux-kernel

Christophe Leroy <christophe.leroy@c-s.fr> writes:

> Move slice_mask_for_size() into subarch mmu.h
>
> At the same time, replace BUG() with VM_BUG_ON(), as those BUG()s are
> not there to catch runtime errors but to catch errors introduced
> during the development cycle only.
>
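
The BUG() -> VM_BUG_ON() change can be sketched like this (the CONFIG name is
invented for the demo; in the kernel the switch is CONFIG_DEBUG_VM): the check
fires only in debug builds and costs nothing in production, which suits
development-time invariants.

```c
#include <assert.h>

/* Debug-only assertion: active with -DCONFIG_DEBUG_VM_DEMO, a no-op
 * otherwise (the condition is still parsed, so it cannot bit-rot). */
#ifdef CONFIG_DEBUG_VM_DEMO
#define VM_BUG_ON(cond) assert(!(cond))
#else
#define VM_BUG_ON(cond) do { (void)(cond); } while (0)
#endif

static unsigned int mask_for_psize_demo(unsigned int psize)
{
	VM_BUG_ON(psize != 4 && psize != 64);	/* development-time check */
	return psize == 64 ? 0xffffu : 0x00ffu;
}
```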
Reviewed-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>

> Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
> ---
>  arch/powerpc/include/asm/book3s/64/mmu.h     | 17 +++++++++++
>  arch/powerpc/include/asm/nohash/32/mmu-8xx.h | 42 +++++++++++++++++++---------
>  arch/powerpc/mm/slice.c                      | 34 ----------------------
>  3 files changed, 46 insertions(+), 47 deletions(-)
>
> diff --git a/arch/powerpc/include/asm/book3s/64/mmu.h b/arch/powerpc/include/asm/book3s/64/mmu.h
> index 230a9dec7677..ad00355f874f 100644
> --- a/arch/powerpc/include/asm/book3s/64/mmu.h
> +++ b/arch/powerpc/include/asm/book3s/64/mmu.h
> @@ -203,6 +203,23 @@ static inline struct slice_mask *mm_ctx_slice_mask_16g(mm_context_t *ctx)
>  }
>  #endif
>  
> +static inline struct slice_mask *slice_mask_for_size(mm_context_t *ctx, int psize)
> +{
> +#ifdef CONFIG_PPC_64K_PAGES
> +	if (psize == MMU_PAGE_64K)
> +		return mm_ctx_slice_mask_64k(&ctx);
> +#endif
> +#ifdef CONFIG_HUGETLB_PAGE
> +	if (psize == MMU_PAGE_16M)
> +		return mm_ctx_slice_mask_16m(&ctx);
> +	if (psize == MMU_PAGE_16G)
> +		return mm_ctx_slice_mask_16g(&ctx);
> +#endif
> +	VM_BUG_ON(psize != MMU_PAGE_4K);
> +
> +	return mm_ctx_slice_mask_4k(&ctx);
> +}
> +
>  #ifdef CONFIG_PPC_SUBPAGE_PROT
>  static inline struct subpage_prot_table *mm_ctx_subpage_prot(mm_context_t *ctx)
>  {
> diff --git a/arch/powerpc/include/asm/nohash/32/mmu-8xx.h b/arch/powerpc/include/asm/nohash/32/mmu-8xx.h
> index c503e2f05e61..a0f6844a1498 100644
> --- a/arch/powerpc/include/asm/nohash/32/mmu-8xx.h
> +++ b/arch/powerpc/include/asm/nohash/32/mmu-8xx.h
> @@ -184,7 +184,23 @@
>  #define LOW_SLICE_ARRAY_SZ	SLICE_ARRAY_SIZE
>  #endif
>  
> +#if defined(CONFIG_PPC_4K_PAGES)
> +#define mmu_virtual_psize	MMU_PAGE_4K
> +#elif defined(CONFIG_PPC_16K_PAGES)
> +#define mmu_virtual_psize	MMU_PAGE_16K
> +#define PTE_FRAG_NR		4
> +#define PTE_FRAG_SIZE_SHIFT	12
> +#define PTE_FRAG_SIZE		(1UL << 12)
> +#else
> +#error "Unsupported PAGE_SIZE"
> +#endif
> +
> +#define mmu_linear_psize	MMU_PAGE_8M
> +
>  #ifndef __ASSEMBLY__
> +
> +#include <linux/mmdebug.h>
> +
>  struct slice_mask {
>  	u64 low_slices;
>  	DECLARE_BITMAP(high_slices, 0);
> @@ -255,6 +271,19 @@ static inline struct slice_mask *mm_ctx_slice_mask_8m(mm_context_t *ctx)
>  	return &ctx->mask_8m;
>  }
>  #endif
> +
> +static inline struct slice_mask *slice_mask_for_size(mm_context_t *ctx, int psize)
> +{
> +#ifdef CONFIG_HUGETLB_PAGE
> +	if (psize == MMU_PAGE_512K)
> +		return &ctx->mask_512k;
> +	if (psize == MMU_PAGE_8M)
> +		return &ctx->mask_8m;
> +#endif
> +	VM_BUG_ON(psize != mmu_virtual_psize);
> +
> +	return &ctx->mask_base_psize;
> +}
>  #endif /* CONFIG_PPC_MM_SLICE */
>  
>  #define PHYS_IMMR_BASE (mfspr(SPRN_IMMR) & 0xfff80000)
> @@ -306,17 +335,4 @@ extern s32 patch__itlbmiss_perf, patch__dtlbmiss_perf;
>  
>  #endif /* !__ASSEMBLY__ */
>  
> -#if defined(CONFIG_PPC_4K_PAGES)
> -#define mmu_virtual_psize	MMU_PAGE_4K
> -#elif defined(CONFIG_PPC_16K_PAGES)
> -#define mmu_virtual_psize	MMU_PAGE_16K
> -#define PTE_FRAG_NR		4
> -#define PTE_FRAG_SIZE_SHIFT	12
> -#define PTE_FRAG_SIZE		(1UL << 12)
> -#else
> -#error "Unsupported PAGE_SIZE"
> -#endif
> -
> -#define mmu_linear_psize	MMU_PAGE_8M
> -
>  #endif /* _ASM_POWERPC_MMU_8XX_H_ */
> diff --git a/arch/powerpc/mm/slice.c b/arch/powerpc/mm/slice.c
> index 8eb7e8b09c75..31de91b65a64 100644
> --- a/arch/powerpc/mm/slice.c
> +++ b/arch/powerpc/mm/slice.c
> @@ -150,40 +150,6 @@ static void slice_mask_for_free(struct mm_struct *mm, struct slice_mask *ret,
>  			__set_bit(i, ret->high_slices);
>  }
>  
> -#ifdef CONFIG_PPC_BOOK3S_64
> -static struct slice_mask *slice_mask_for_size(mm_context_t *ctx, int psize)
> -{
> -#ifdef CONFIG_PPC_64K_PAGES
> -	if (psize == MMU_PAGE_64K)
> -		return mm_ctx_slice_mask_64k(&ctx);
> -#endif
> -	if (psize == MMU_PAGE_4K)
> -		return mm_ctx_slice_mask_4k(&ctx);
> -#ifdef CONFIG_HUGETLB_PAGE
> -	if (psize == MMU_PAGE_16M)
> -		return mm_ctx_slice_mask_16m(&ctx);
> -	if (psize == MMU_PAGE_16G)
> -		return mm_ctx_slice_mask_16g(&ctx);
> -#endif
> -	BUG();
> -}
> -#elif defined(CONFIG_PPC_8xx)
> -static struct slice_mask *slice_mask_for_size(mm_context_t *ctx, int psize)
> -{
> -	if (psize == mmu_virtual_psize)
> -		return &ctx->mask_base_psize;
> -#ifdef CONFIG_HUGETLB_PAGE
> -	if (psize == MMU_PAGE_512K)
> -		return &ctx->mask_512k;
> -	if (psize == MMU_PAGE_8M)
> -		return &ctx->mask_8m;
> -#endif
> -	BUG();
> -}
> -#else
> -#error "Must define the slice masks for page sizes supported by the platform"
> -#endif
> -
>  static bool slice_check_range_fits(struct mm_struct *mm,
>  			   const struct slice_mask *available,
>  			   unsigned long start, unsigned long len)
> -- 
> 2.13.3


^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: [PATCH v2 05/11] powerpc/mm: get rid of mm_ctx_slice_mask_xxx()
  2019-04-25 14:29 ` [PATCH v2 05/11] powerpc/mm: get rid of mm_ctx_slice_mask_xxx() Christophe Leroy
@ 2019-04-26  6:37   ` Aneesh Kumar K.V
  0 siblings, 0 replies; 23+ messages in thread
From: Aneesh Kumar K.V @ 2019-04-26  6:37 UTC (permalink / raw)
  To: Christophe Leroy, Benjamin Herrenschmidt, Paul Mackerras,
	Michael Ellerman
  Cc: linuxppc-dev, linux-kernel

Christophe Leroy <christophe.leroy@c-s.fr> writes:

> Now that slice_mask_for_size() is in mmu.h, the mm_ctx_slice_mask_xxx()
> are not needed anymore, so drop them. Note that the 8xx ones were
> not used anyway.
>

Reviewed-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>

> Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
> ---
>  arch/powerpc/include/asm/book3s/64/mmu.h     | 32 ++++------------------------
>  arch/powerpc/include/asm/nohash/32/mmu-8xx.h | 17 ---------------
>  2 files changed, 4 insertions(+), 45 deletions(-)
>
> diff --git a/arch/powerpc/include/asm/book3s/64/mmu.h b/arch/powerpc/include/asm/book3s/64/mmu.h
> index ad00355f874f..e3d7f1404e20 100644
> --- a/arch/powerpc/include/asm/book3s/64/mmu.h
> +++ b/arch/powerpc/include/asm/book3s/64/mmu.h
> @@ -179,45 +179,21 @@ static inline void mm_ctx_set_slb_addr_limit(mm_context_t *ctx, unsigned long li
>  	ctx->hash_context->slb_addr_limit = limit;
>  }
>  
> -#ifdef CONFIG_PPC_64K_PAGES
> -static inline struct slice_mask *mm_ctx_slice_mask_64k(mm_context_t *ctx)
> -{
> -	return &ctx->hash_context->mask_64k;
> -}
> -#endif
> -
> -static inline struct slice_mask *mm_ctx_slice_mask_4k(mm_context_t *ctx)
> -{
> -	return &ctx->hash_context->mask_4k;
> -}
> -
> -#ifdef CONFIG_HUGETLB_PAGE
> -static inline struct slice_mask *mm_ctx_slice_mask_16m(mm_context_t *ctx)
> -{
> -	return &ctx->hash_context->mask_16m;
> -}
> -
> -static inline struct slice_mask *mm_ctx_slice_mask_16g(mm_context_t *ctx)
> -{
> -	return &ctx->hash_context->mask_16g;
> -}
> -#endif
> -
>  static inline struct slice_mask *slice_mask_for_size(mm_context_t *ctx, int psize)
>  {
>  #ifdef CONFIG_PPC_64K_PAGES
>  	if (psize == MMU_PAGE_64K)
> -		return mm_ctx_slice_mask_64k(&ctx);
> +		return &ctx->hash_context->mask_64k;
>  #endif
>  #ifdef CONFIG_HUGETLB_PAGE
>  	if (psize == MMU_PAGE_16M)
> -		return mm_ctx_slice_mask_16m(&ctx);
> +		return &ctx->hash_context->mask_16m;
>  	if (psize == MMU_PAGE_16G)
> -		return mm_ctx_slice_mask_16g(&ctx);
> +		return &ctx->hash_context->mask_16g;
>  #endif
>  	VM_BUG_ON(psize != MMU_PAGE_4K);
>  
> -	return mm_ctx_slice_mask_4k(&ctx);
> +	return &ctx->hash_context->mask_4k;
>  }
>  
>  #ifdef CONFIG_PPC_SUBPAGE_PROT
> diff --git a/arch/powerpc/include/asm/nohash/32/mmu-8xx.h b/arch/powerpc/include/asm/nohash/32/mmu-8xx.h
> index a0f6844a1498..beded4df1f50 100644
> --- a/arch/powerpc/include/asm/nohash/32/mmu-8xx.h
> +++ b/arch/powerpc/include/asm/nohash/32/mmu-8xx.h
> @@ -255,23 +255,6 @@ static inline void mm_ctx_set_slb_addr_limit(mm_context_t *ctx, unsigned long li
>  	ctx->slb_addr_limit = limit;
>  }
>  
> -static inline struct slice_mask *mm_ctx_slice_mask_base(mm_context_t *ctx)
> -{
> -	return &ctx->mask_base_psize;
> -}
> -
> -#ifdef CONFIG_HUGETLB_PAGE
> -static inline struct slice_mask *mm_ctx_slice_mask_512k(mm_context_t *ctx)
> -{
> -	return &ctx->mask_512k;
> -}
> -
> -static inline struct slice_mask *mm_ctx_slice_mask_8m(mm_context_t *ctx)
> -{
> -	return &ctx->mask_8m;
> -}
> -#endif
> -
>  static inline struct slice_mask *slice_mask_for_size(mm_context_t *ctx, int psize)
>  {
>  #ifdef CONFIG_HUGETLB_PAGE
> -- 
> 2.13.3


^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: [PATCH v2 07/11] powerpc/mm: remove a couple of #ifdef CONFIG_PPC_64K_PAGES in mm/slice.c
  2019-04-25 14:29 ` [PATCH v2 07/11] powerpc/mm: remove a couple of #ifdef CONFIG_PPC_64K_PAGES in mm/slice.c Christophe Leroy
@ 2019-04-26  6:40   ` Aneesh Kumar K.V
  0 siblings, 0 replies; 23+ messages in thread
From: Aneesh Kumar K.V @ 2019-04-26  6:40 UTC (permalink / raw)
  To: Christophe Leroy, Benjamin Herrenschmidt, Paul Mackerras,
	Michael Ellerman
  Cc: linuxppc-dev, linux-kernel

Christophe Leroy <christophe.leroy@c-s.fr> writes:

> This patch replaces a couple of #ifdef CONFIG_PPC_64K_PAGES blocks
> with IS_ENABLED(CONFIG_PPC_64K_PAGES) to improve code maintainability.
>
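
The maintainability gain of IS_ENABLED() is that, unlike an #ifdef block, the
disabled branch is still parsed and type-checked in every configuration and
then eliminated as dead code. A stand-alone sketch (the CONFIG symbol is
invented for the demo; the macro mimics the kernel's include/linux/kconfig.h
trick):

```c
#include <assert.h>

/* Minimal stand-in for the kernel's IS_ENABLED(): expands to 1 when the
 * CONFIG symbol is defined as 1, to 0 otherwise. */
#define __ARG_PLACEHOLDER_1 0,
#define __take_second_arg(__ignored, val, ...) val
#define ____is_defined(arg1_or_junk) __take_second_arg(arg1_or_junk 1, 0)
#define ___is_defined(val) ____is_defined(__ARG_PLACEHOLDER_##val)
#define __is_defined(x) ___is_defined(x)
#define IS_ENABLED(option) __is_defined(option)

#define CONFIG_DEMO_64K_PAGES 1		/* invented symbol for the demo */

static int pick_psize(int requested)
{
	/* The disabled branch would still compile here, then fold away. */
	if (IS_ENABLED(CONFIG_DEMO_64K_PAGES) && requested == 64)
		return 64;
	return 4;
}
```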

Reviewed-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>

> Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
> ---
>  arch/powerpc/mm/slice.c | 10 ++++------
>  1 file changed, 4 insertions(+), 6 deletions(-)
>
> diff --git a/arch/powerpc/mm/slice.c b/arch/powerpc/mm/slice.c
> index 840c4118a185..ace97d953040 100644
> --- a/arch/powerpc/mm/slice.c
> +++ b/arch/powerpc/mm/slice.c
> @@ -606,14 +606,13 @@ unsigned long slice_get_unmapped_area(unsigned long addr, unsigned long len,
>  	newaddr = slice_find_area(mm, len, &potential_mask,
>  				  psize, topdown, high_limit);
>  
> -#ifdef CONFIG_PPC_64K_PAGES
> -	if (newaddr == -ENOMEM && psize == MMU_PAGE_64K) {
> +	if (IS_ENABLED(CONFIG_PPC_64K_PAGES) && newaddr == -ENOMEM &&
> +	    psize == MMU_PAGE_64K) {
>  		/* retry the search with 4k-page slices included */
>  		slice_or_mask(&potential_mask, &potential_mask, compat_maskp);
>  		newaddr = slice_find_area(mm, len, &potential_mask,
>  					  psize, topdown, high_limit);
>  	}
> -#endif
>  
>  	if (newaddr == -ENOMEM)
>  		return -ENOMEM;
> @@ -784,9 +783,9 @@ int slice_is_hugepage_only_range(struct mm_struct *mm, unsigned long addr,
>  	VM_BUG_ON(radix_enabled());
>  
>  	maskp = slice_mask_for_size(&mm->context, psize);
> -#ifdef CONFIG_PPC_64K_PAGES
> +
>  	/* We need to account for 4k slices too */
> -	if (psize == MMU_PAGE_64K) {
> +	if (IS_ENABLED(CONFIG_PPC_64K_PAGES) && psize == MMU_PAGE_64K) {
>  		const struct slice_mask *compat_maskp;
>  		struct slice_mask available;
>  
> @@ -794,7 +793,6 @@ int slice_is_hugepage_only_range(struct mm_struct *mm, unsigned long addr,
>  		slice_or_mask(&available, maskp, compat_maskp);
>  		return !slice_check_range_fits(mm, &available, addr, len);
>  	}
> -#endif
>  
>  	return !slice_check_range_fits(mm, maskp, addr, len);
>  }
> -- 
> 2.13.3


^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: [PATCH v2 09/11] powerpc/mm: define get_slice_psize() all the time
  2019-04-25 14:29 ` [PATCH v2 09/11] powerpc/mm: define get_slice_psize() all the time Christophe Leroy
@ 2019-04-26  6:42   ` Aneesh Kumar K.V
  0 siblings, 0 replies; 23+ messages in thread
From: Aneesh Kumar K.V @ 2019-04-26  6:42 UTC (permalink / raw)
  To: Christophe Leroy, Benjamin Herrenschmidt, Paul Mackerras,
	Michael Ellerman
  Cc: linuxppc-dev, linux-kernel

Christophe Leroy <christophe.leroy@c-s.fr> writes:

> get_slice_psize() can be defined regardless of CONFIG_PPC_MM_SLICES
> to avoid ifdefs.
>
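
Providing a trivial stub when the feature is compiled out is a common way to
drop #ifdef at the call sites; a sketch with invented names (the stub mirrors
the `return 0;` version this patch adds to slice.h):

```c
#include <assert.h>

#define CONFIG_DEMO_MM_SLICES 1	/* invented symbol; comment out for stub */

struct mm_demo { unsigned int psize; };

#ifdef CONFIG_DEMO_MM_SLICES
static inline unsigned int demo_get_slice_psize(struct mm_demo *mm,
						unsigned long addr)
{
	(void)addr;
	return mm->psize;	/* the real lookup in the slices case */
}
#else
/* Always-defined stub: callers need no #ifdef, and combined with an
 * IS_ENABLED() test the whole call folds away. */
static inline unsigned int demo_get_slice_psize(struct mm_demo *mm,
						unsigned long addr)
{
	(void)mm; (void)addr;
	return 0;
}
#endif
```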

Reviewed-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>

> Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
> ---
>  arch/powerpc/include/asm/slice.h | 5 +++++
>  arch/powerpc/mm/hugetlbpage.c    | 4 +---
>  2 files changed, 6 insertions(+), 3 deletions(-)
>
> diff --git a/arch/powerpc/include/asm/slice.h b/arch/powerpc/include/asm/slice.h
> index be8af667098f..c6f466f4c241 100644
> --- a/arch/powerpc/include/asm/slice.h
> +++ b/arch/powerpc/include/asm/slice.h
> @@ -36,6 +36,11 @@ void slice_setup_new_exec(void);
>  
>  static inline void slice_init_new_context_exec(struct mm_struct *mm) {}
>  
> +static inline unsigned int get_slice_psize(struct mm_struct *mm, unsigned long addr)
> +{
> +	return 0;
> +}
> +
>  #endif /* CONFIG_PPC_MM_SLICES */
>  
>  #endif /* __ASSEMBLY__ */
> diff --git a/arch/powerpc/mm/hugetlbpage.c b/arch/powerpc/mm/hugetlbpage.c
> index 9e732bb2c84a..5f67e7a4d1cc 100644
> --- a/arch/powerpc/mm/hugetlbpage.c
> +++ b/arch/powerpc/mm/hugetlbpage.c
> @@ -578,14 +578,12 @@ unsigned long hugetlb_get_unmapped_area(struct file *file, unsigned long addr,
>  
>  unsigned long vma_mmu_pagesize(struct vm_area_struct *vma)
>  {
> -#ifdef CONFIG_PPC_MM_SLICES
>  	/* With radix we don't use slice, so derive it from vma*/
> -	if (!radix_enabled()) {
> +	if (IS_ENABLED(CONFIG_PPC_MM_SLICES) && !radix_enabled()) {
>  		unsigned int psize = get_slice_psize(vma->vm_mm, vma->vm_start);
>  
>  		return 1UL << mmu_psize_to_shift(psize);
>  	}
> -#endif
>  	return vma_kernel_pagesize(vma);
>  }
>  
> -- 
> 2.13.3



* Re: [PATCH v2 10/11] powerpc/mm: define subarch SLB_ADDR_LIMIT_DEFAULT
  2019-04-25 14:29 ` [PATCH v2 10/11] powerpc/mm: define subarch SLB_ADDR_LIMIT_DEFAULT Christophe Leroy
@ 2019-04-26  6:43   ` Aneesh Kumar K.V
  0 siblings, 0 replies; 23+ messages in thread
From: Aneesh Kumar K.V @ 2019-04-26  6:43 UTC (permalink / raw)
  To: Christophe Leroy, Benjamin Herrenschmidt, Paul Mackerras,
	Michael Ellerman
  Cc: linuxppc-dev, linux-kernel

Christophe Leroy <christophe.leroy@c-s.fr> writes:

> This patch defines a subarch specific SLB_ADDR_LIMIT_DEFAULT
> to remove the #ifdefs around the setup of mm->context.slb_addr_limit.
>
> It also generalises the use of mm_ctx_set_slb_addr_limit() helper.
>

Reviewed-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>

> Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
> ---
>  arch/powerpc/include/asm/book3s/64/slice.h | 2 ++
>  arch/powerpc/include/asm/nohash/32/slice.h | 2 ++
>  arch/powerpc/mm/hash_utils_64.c            | 2 +-
>  arch/powerpc/mm/slice.c                    | 6 +-----
>  arch/powerpc/mm/tlb_nohash.c               | 4 +---
>  5 files changed, 7 insertions(+), 9 deletions(-)
>
> diff --git a/arch/powerpc/include/asm/book3s/64/slice.h b/arch/powerpc/include/asm/book3s/64/slice.h
> index 062e11136e9c..f0d3194ba41b 100644
> --- a/arch/powerpc/include/asm/book3s/64/slice.h
> +++ b/arch/powerpc/include/asm/book3s/64/slice.h
> @@ -11,4 +11,6 @@
>  #define SLICE_NUM_HIGH		(H_PGTABLE_RANGE >> SLICE_HIGH_SHIFT)
>  #define GET_HIGH_SLICE_INDEX(addr)	((addr) >> SLICE_HIGH_SHIFT)
>  
> +#define SLB_ADDR_LIMIT_DEFAULT	DEFAULT_MAP_WINDOW_USER64
> +
>  #endif /* _ASM_POWERPC_BOOK3S_64_SLICE_H */
> diff --git a/arch/powerpc/include/asm/nohash/32/slice.h b/arch/powerpc/include/asm/nohash/32/slice.h
> index 777d62e40ac0..39eb0154ae2d 100644
> --- a/arch/powerpc/include/asm/nohash/32/slice.h
> +++ b/arch/powerpc/include/asm/nohash/32/slice.h
> @@ -13,6 +13,8 @@
>  #define SLICE_NUM_HIGH		0ul
>  #define GET_HIGH_SLICE_INDEX(addr)	(addr & 0)
>  
> +#define SLB_ADDR_LIMIT_DEFAULT	DEFAULT_MAP_WINDOW
> +
>  #endif /* CONFIG_PPC_MM_SLICES */
>  
>  #endif /* _ASM_POWERPC_NOHASH_32_SLICE_H */
> diff --git a/arch/powerpc/mm/hash_utils_64.c b/arch/powerpc/mm/hash_utils_64.c
> index f727197de713..884246e3bf0b 100644
> --- a/arch/powerpc/mm/hash_utils_64.c
> +++ b/arch/powerpc/mm/hash_utils_64.c
> @@ -1050,7 +1050,7 @@ void __init hash__early_init_mmu(void)
>  	htab_initialize();
>  
>  	init_mm.context.hash_context = &init_hash_mm_context;
> -	init_mm.context.hash_context->slb_addr_limit = DEFAULT_MAP_WINDOW_USER64;
> +	mm_ctx_set_slb_addr_limit(&init_mm.context, SLB_ADDR_LIMIT_DEFAULT);
>  
>  	pr_info("Initializing hash mmu with SLB\n");
>  	/* Initialize SLB management */
> diff --git a/arch/powerpc/mm/slice.c b/arch/powerpc/mm/slice.c
> index ace97d953040..97fbf7b54422 100644
> --- a/arch/powerpc/mm/slice.c
> +++ b/arch/powerpc/mm/slice.c
> @@ -704,11 +704,7 @@ void slice_init_new_context_exec(struct mm_struct *mm)
>  	 * case of fork it is just inherited from the mm being
>  	 * duplicated.
>  	 */
> -#ifdef CONFIG_PPC64
> -	mm_ctx_set_slb_addr_limit(&mm->context, DEFAULT_MAP_WINDOW_USER64);
> -#else
> -	mm->context.slb_addr_limit = DEFAULT_MAP_WINDOW;
> -#endif
> +	mm_ctx_set_slb_addr_limit(&mm->context, SLB_ADDR_LIMIT_DEFAULT);
>  	mm_ctx_set_user_psize(&mm->context, psize);
>  
>  	/*
> diff --git a/arch/powerpc/mm/tlb_nohash.c b/arch/powerpc/mm/tlb_nohash.c
> index 088e0a6b5ade..ba4bff11191f 100644
> --- a/arch/powerpc/mm/tlb_nohash.c
> +++ b/arch/powerpc/mm/tlb_nohash.c
> @@ -802,9 +802,7 @@ void __init early_init_mmu(void)
>  #endif
>  
>  #ifdef CONFIG_PPC_MM_SLICES
> -#if defined(CONFIG_PPC_8xx)
> -	init_mm.context.slb_addr_limit = DEFAULT_MAP_WINDOW;
> -#endif
> +	mm_ctx_set_slb_addr_limit(&init_mm.context, SLB_ADDR_LIMIT_DEFAULT);
>  #endif
>  }
>  #endif /* CONFIG_PPC64 */
> -- 
> 2.13.3



* Re: [PATCH v2 11/11] powerpc/mm: drop slice DEBUG
  2019-04-25 14:29 ` [PATCH v2 11/11] powerpc/mm: drop slice DEBUG Christophe Leroy
@ 2019-04-26  6:44   ` Aneesh Kumar K.V
  2019-04-26  6:49     ` Christophe Leroy
  0 siblings, 1 reply; 23+ messages in thread
From: Aneesh Kumar K.V @ 2019-04-26  6:44 UTC (permalink / raw)
  To: Christophe Leroy, Benjamin Herrenschmidt, Paul Mackerras,
	Michael Ellerman
  Cc: linuxppc-dev, linux-kernel

Christophe Leroy <christophe.leroy@c-s.fr> writes:

> slice is now a mature functionality. Drop the DEBUG stuff.
>

I would like to keep that. It helped a lot when moving address ranges and
it should not have any runtime impact.


> Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
> ---
>  arch/powerpc/mm/slice.c | 62 ++++---------------------------------------------
>  1 file changed, 4 insertions(+), 58 deletions(-)
>
> diff --git a/arch/powerpc/mm/slice.c b/arch/powerpc/mm/slice.c
> index 97fbf7b54422..a9d803738b65 100644
> --- a/arch/powerpc/mm/slice.c
> +++ b/arch/powerpc/mm/slice.c
> @@ -41,28 +41,6 @@
>  
>  static DEFINE_SPINLOCK(slice_convert_lock);
>  
> -#ifdef DEBUG
> -int _slice_debug = 1;
> -
> -static void slice_print_mask(const char *label, const struct slice_mask *mask)
> -{
> -	if (!_slice_debug)
> -		return;
> -	pr_devel("%s low_slice: %*pbl\n", label,
> -			(int)SLICE_NUM_LOW, &mask->low_slices);
> -	pr_devel("%s high_slice: %*pbl\n", label,
> -			(int)SLICE_NUM_HIGH, mask->high_slices);
> -}
> -
> -#define slice_dbg(fmt...) do { if (_slice_debug) pr_devel(fmt); } while (0)
> -
> -#else
> -
> -static void slice_print_mask(const char *label, const struct slice_mask *mask) {}
> -#define slice_dbg(fmt...)
> -
> -#endif
> -
>  static inline bool slice_addr_is_low(unsigned long addr)
>  {
>  	u64 tmp = (u64)addr;
> @@ -207,9 +185,6 @@ static void slice_convert(struct mm_struct *mm,
>  	unsigned long i, flags;
>  	int old_psize;
>  
> -	slice_dbg("slice_convert(mm=%p, psize=%d)\n", mm, psize);
> -	slice_print_mask(" mask", mask);
> -
>  	psize_mask = slice_mask_for_size(&mm->context, psize);
>  
>  	/* We need to use a spinlock here to protect against
> @@ -255,10 +230,6 @@ static void slice_convert(struct mm_struct *mm,
>  				(((unsigned long)psize) << (mask_index * 4));
>  	}
>  
> -	slice_dbg(" lsps=%lx, hsps=%lx\n",
> -		  (unsigned long)mm_ctx_low_slices(&mm->context),
> -		  (unsigned long)mm_ctx_high_slices(&mm->context));
> -
>  	spin_unlock_irqrestore(&slice_convert_lock, flags);
>  
>  	copro_flush_all_slbs(mm);
> @@ -485,14 +456,9 @@ unsigned long slice_get_unmapped_area(unsigned long addr, unsigned long len,
>  	BUG_ON(mm_ctx_slb_addr_limit(&mm->context) == 0);
>  	VM_BUG_ON(radix_enabled());
>  
> -	slice_dbg("slice_get_unmapped_area(mm=%p, psize=%d...\n", mm, psize);
> -	slice_dbg(" addr=%lx, len=%lx, flags=%lx, topdown=%d\n",
> -		  addr, len, flags, topdown);
> -
>  	/* If hint, make sure it matches our alignment restrictions */
>  	if (!fixed && addr) {
>  		addr = _ALIGN_UP(addr, page_size);
> -		slice_dbg(" aligned addr=%lx\n", addr);
>  		/* Ignore hint if it's too large or overlaps a VMA */
>  		if (addr > high_limit - len || addr < mmap_min_addr ||
>  		    !slice_area_is_free(mm, addr, len))
> @@ -538,17 +504,12 @@ unsigned long slice_get_unmapped_area(unsigned long addr, unsigned long len,
>  		slice_copy_mask(&good_mask, maskp);
>  	}
>  
> -	slice_print_mask(" good_mask", &good_mask);
> -	if (compat_maskp)
> -		slice_print_mask(" compat_mask", compat_maskp);
> -
>  	/* First check hint if it's valid or if we have MAP_FIXED */
>  	if (addr != 0 || fixed) {
>  		/* Check if we fit in the good mask. If we do, we just return,
>  		 * nothing else to do
>  		 */
>  		if (slice_check_range_fits(mm, &good_mask, addr, len)) {
> -			slice_dbg(" fits good !\n");
>  			newaddr = addr;
>  			goto return_addr;
>  		}
> @@ -558,13 +519,10 @@ unsigned long slice_get_unmapped_area(unsigned long addr, unsigned long len,
>  		 */
>  		newaddr = slice_find_area(mm, len, &good_mask,
>  					  psize, topdown, high_limit);
> -		if (newaddr != -ENOMEM) {
> -			/* Found within the good mask, we don't have to setup,
> -			 * we thus return directly
> -			 */
> -			slice_dbg(" found area at 0x%lx\n", newaddr);
> +
> +		/* Found within good mask, don't have to setup, thus return directly */
> +		if (newaddr != -ENOMEM)
>  			goto return_addr;
> -		}
>  	}
>  	/*
>  	 * We don't fit in the good mask, check what other slices are
> @@ -572,11 +530,9 @@ unsigned long slice_get_unmapped_area(unsigned long addr, unsigned long len,
>  	 */
>  	slice_mask_for_free(mm, &potential_mask, high_limit);
>  	slice_or_mask(&potential_mask, &potential_mask, &good_mask);
> -	slice_print_mask(" potential", &potential_mask);
>  
>  	if (addr != 0 || fixed) {
>  		if (slice_check_range_fits(mm, &potential_mask, addr, len)) {
> -			slice_dbg(" fits potential !\n");
>  			newaddr = addr;
>  			goto convert;
>  		}
> @@ -586,18 +542,14 @@ unsigned long slice_get_unmapped_area(unsigned long addr, unsigned long len,
>  	if (fixed)
>  		return -EBUSY;
>  
> -	slice_dbg(" search...\n");
> -
>  	/* If we had a hint that didn't work out, see if we can fit
>  	 * anywhere in the good area.
>  	 */
>  	if (addr) {
>  		newaddr = slice_find_area(mm, len, &good_mask,
>  					  psize, topdown, high_limit);
> -		if (newaddr != -ENOMEM) {
> -			slice_dbg(" found area at 0x%lx\n", newaddr);
> +		if (newaddr != -ENOMEM)
>  			goto return_addr;
> -		}
>  	}
>  
>  	/* Now let's see if we can find something in the existing slices
> @@ -618,8 +570,6 @@ unsigned long slice_get_unmapped_area(unsigned long addr, unsigned long len,
>  		return -ENOMEM;
>  
>  	slice_range_to_mask(newaddr, len, &potential_mask);
> -	slice_dbg(" found potential area at 0x%lx\n", newaddr);
> -	slice_print_mask(" mask", &potential_mask);
>  
>   convert:
>  	/*
> @@ -697,8 +647,6 @@ void slice_init_new_context_exec(struct mm_struct *mm)
>  	struct slice_mask *mask;
>  	unsigned int psize = mmu_virtual_psize;
>  
> -	slice_dbg("slice_init_new_context_exec(mm=%p)\n", mm);
> -
>  	/*
>  	 * In the case of exec, use the default limit. In the
>  	 * case of fork it is just inherited from the mm being
> @@ -730,8 +678,6 @@ void slice_setup_new_exec(void)
>  {
>  	struct mm_struct *mm = current->mm;
>  
> -	slice_dbg("slice_setup_new_exec(mm=%p)\n", mm);
> -
>  	if (!is_32bit_task())
>  		return;
>  
> -- 
> 2.13.3



* Re: [PATCH v2 11/11] powerpc/mm: drop slice DEBUG
  2019-04-26  6:44   ` Aneesh Kumar K.V
@ 2019-04-26  6:49     ` Christophe Leroy
  0 siblings, 0 replies; 23+ messages in thread
From: Christophe Leroy @ 2019-04-26  6:49 UTC (permalink / raw)
  To: Aneesh Kumar K.V, Benjamin Herrenschmidt, Paul Mackerras,
	Michael Ellerman
  Cc: linuxppc-dev, linux-kernel



Le 26/04/2019 à 08:44, Aneesh Kumar K.V a écrit :
> Christophe Leroy <christophe.leroy@c-s.fr> writes:
> 
>> slice is now a mature functionality. Drop the DEBUG stuff.
>>
> 
> I would like to keep that. It helped a lot when moving address ranges and
> it should not have any runtime impact.

Ok for me.

Christophe

>> [...]


* Re: [PATCH v2 01/11] powerpc/mm: fix erroneous duplicate slb_addr_limit init
  2019-04-25 14:29 ` [PATCH v2 01/11] powerpc/mm: fix erroneous duplicate slb_addr_limit init Christophe Leroy
  2019-04-26  6:32   ` Aneesh Kumar K.V
@ 2019-05-03  6:59   ` Michael Ellerman
  1 sibling, 0 replies; 23+ messages in thread
From: Michael Ellerman @ 2019-05-03  6:59 UTC (permalink / raw)
  To: Christophe Leroy, Benjamin Herrenschmidt, Paul Mackerras, aneesh.kumar
  Cc: linuxppc-dev, linux-kernel

On Thu, 2019-04-25 at 14:29:27 UTC, Christophe Leroy wrote:
> Commit 67fda38f0d68 ("powerpc/mm: Move slb_addr_linit to
> early_init_mmu") moved slb_addr_limit init out of setup_arch().
> 
> Commit 701101865f5d ("powerpc/mm: Reduce memory usage for mm_context_t
> for radix") brought it back into setup_arch() by error.
> 
> This patch reverts that erroneous change.
> 
> Fixes: 701101865f5d ("powerpc/mm: Reduce memory usage for mm_context_t for radix")
> Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
> Reviewed-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>

Patches 1-10 applied to powerpc next, thanks.

https://git.kernel.org/powerpc/c/5ba666d56c4ff9b011c1b029dcc689cf

cheers


end of thread, other threads:[~2019-05-03  6:59 UTC | newest]

Thread overview: 23+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2019-04-25 14:29 [PATCH v2 00/11] Reduce ifdef mess in slice.c Christophe Leroy
2019-04-25 14:29 ` [PATCH v2 01/11] powerpc/mm: fix erroneous duplicate slb_addr_limit init Christophe Leroy
2019-04-26  6:32   ` Aneesh Kumar K.V
2019-05-03  6:59   ` Michael Ellerman
2019-04-25 14:29 ` [PATCH v2 02/11] powerpc/mm: no slice for nohash/64 Christophe Leroy
2019-04-26  6:33   ` Aneesh Kumar K.V
2019-04-25 14:29 ` [PATCH v2 03/11] powerpc/mm: hand a context_t over to slice_mask_for_size() instead of mm_struct Christophe Leroy
2019-04-26  6:34   ` Aneesh Kumar K.V
2019-04-25 14:29 ` [PATCH v2 04/11] powerpc/mm: move slice_mask_for_size() into mmu.h Christophe Leroy
2019-04-26  6:36   ` Aneesh Kumar K.V
2019-04-25 14:29 ` [PATCH v2 05/11] powerpc/mm: get rid of mm_ctx_slice_mask_xxx() Christophe Leroy
2019-04-26  6:37   ` Aneesh Kumar K.V
2019-04-25 14:29 ` [PATCH v2 06/11] powerpc/mm: remove unnecessary #ifdef CONFIG_PPC64 Christophe Leroy
2019-04-25 14:29 ` [PATCH v2 07/11] powerpc/mm: remove a couple of #ifdef CONFIG_PPC_64K_PAGES in mm/slice.c Christophe Leroy
2019-04-26  6:40   ` Aneesh Kumar K.V
2019-04-25 14:29 ` [PATCH v2 08/11] powerpc/8xx: get rid of #ifdef CONFIG_HUGETLB_PAGE for slices Christophe Leroy
2019-04-25 14:29 ` [PATCH v2 09/11] powerpc/mm: define get_slice_psize() all the time Christophe Leroy
2019-04-26  6:42   ` Aneesh Kumar K.V
2019-04-25 14:29 ` [PATCH v2 10/11] powerpc/mm: define subarch SLB_ADDR_LIMIT_DEFAULT Christophe Leroy
2019-04-26  6:43   ` Aneesh Kumar K.V
2019-04-25 14:29 ` [PATCH v2 11/11] powerpc/mm: drop slice DEBUG Christophe Leroy
2019-04-26  6:44   ` Aneesh Kumar K.V
2019-04-26  6:49     ` Christophe Leroy
