* [PATCH v6 00/18] powerpc: Make hash MMU code build configurable
From: Nicholas Piggin @ 2021-12-01 14:41 UTC (permalink / raw)
  To: linuxppc-dev; +Cc: Nicholas Piggin

Now that there's a platform that can make good use of it, here's a
series that prevents the hash MMU code from being built for 64s
platforms that don't need it.
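
For background, the central compile-time mechanism this builds on is the
existing mmu feature test constant folding (see "Since RFC" below). A
minimal sketch of the pattern, reconstructed from the asm/mmu.h style of
this era rather than copied from the patches:

/*
 * MMU_FTRS_POSSIBLE is a compile-time constant mask. When a feature
 * cannot be enabled by the current Kconfig, the first test folds to
 * "return false" and the compiler discards any code guarded by it.
 */
static __always_inline bool early_mmu_has_feature(unsigned long feature)
{
	if (!(MMU_FTRS_POSSIBLE & feature))
		return false;
	return !!(cur_cpu_spec->mmu_features & feature);
}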

Since v5:
- Make cxl select hash.
- Add new patch (15) to prevent radix using different get_unmapped_area
  code when hash support is disabled. This is an intermediate step for
  now, ideally we will end up with radix always going via the generic
  code.

Since v4:
- Fix 32s allnoconfig compile failure (found by the kernel test robot).
- Fix 64s platforms like cell that require hash but allowed POWER9 CPU
  and !HASH to be selected (found by Christophe).

Since v3:
- Merged microwatt patches into 1.
- Fix some changelogs, titles, comments.
- Keep MMU_FTR_HPTE_TABLE in the features when booting radix if hash
  support is configured because it will be needed for KVM radix host
  hash guest (when we extend this option to KVM).
- Accounted for hopefully all review comments (thanks Christophe)

Since v2:
- Split MMU_FTR_HPTE_TABLE clearing for radix boot into its own patch.
- Remove memremap_compat_align from other sub archs entirely.
- Flip patch order of the 2 main patches to put Kconfig change first.
- Fixed Book3S/32 xmon segment dumping bug.
- Removed a few more ifdefs, changed numbers to use SZ_ definitions,
  etc.
- Fixed microwatt defconfig so it should actually disable hash MMU now.

Since v1:
- Split out most of the Kconfig change from the conditional compilation
  changes.
- Split out several more changes into preparatory patches.
- Reduced some ifdefs.
- Caught a few missing hash bits: pgtable dump, lkdtm,
  memremap_compat_align.

Since RFC:
- Split out large code movement from other changes.
- Used mmu ftr test constant folding rather than adding new constant
  true/false for radix_enabled().
- Restore tlbie trace point that had to be commented out in the
  previous version.
- Avoid minor (probably unreachable) behaviour change in machine check
  handler when hash was not compiled.
- Fix microwatt updates so !HASH is not enforced.
- Rebase, build fixes.

Thanks,
Nick


Nicholas Piggin (18):
  powerpc: Remove unused FW_FEATURE_NATIVE references
  powerpc: Rename PPC_NATIVE to PPC_HASH_MMU_NATIVE
  powerpc/pseries: Stop selecting PPC_HASH_MMU_NATIVE
  powerpc/64s: Move and rename do_bad_slb_fault as it is not hash
    specific
  powerpc/pseries: move process table registration away from
    hash-specific code
  powerpc/pseries: lparcfg don't include slb_size line in radix mode
  powerpc/64s: move THP trace point creation out of hash specific file
  powerpc/64s: Make flush_and_reload_slb a no-op when radix is enabled
  powerpc/64s: move page size definitions from hash specific file
  powerpc/64s: Rename hash_hugetlbpage.c to hugetlbpage.c
  powerpc/64: pcpu setup avoid reading mmu_linear_psize on 64e or radix
  powerpc: make memremap_compat_align 64s-only
  powerpc/64e: remove mmu_linear_psize
  powerpc/64s: Fix radix MMU when MMU_FTR_HPTE_TABLE is clear
  powerpc/64s: Always define arch unmapped area calls
  powerpc/64s: Make hash MMU support configurable
  powerpc/64s: Move hash MMU support code under CONFIG_PPC_64S_HASH_MMU
  powerpc/microwatt: add POWER9_CPU, clear PPC_64S_HASH_MMU

 arch/powerpc/Kconfig                          |   5 +-
 arch/powerpc/configs/microwatt_defconfig      |   3 +-
 arch/powerpc/include/asm/book3s/64/hash.h     |   4 -
 arch/powerpc/include/asm/book3s/64/mmu.h      |  27 ++++-
 .../include/asm/book3s/64/tlbflush-hash.h     |   6 +
 arch/powerpc/include/asm/book3s/64/tlbflush.h |   4 -
 arch/powerpc/include/asm/book3s/pgtable.h     |   4 +
 arch/powerpc/include/asm/firmware.h           |   8 --
 arch/powerpc/include/asm/interrupt.h          |   2 +-
 arch/powerpc/include/asm/mmu.h                |  16 ++-
 arch/powerpc/include/asm/mmu_context.h        |   2 +
 arch/powerpc/include/asm/nohash/mmu-book3e.h  |   1 -
 arch/powerpc/include/asm/paca.h               |   8 ++
 arch/powerpc/kernel/asm-offsets.c             |   2 +
 arch/powerpc/kernel/dt_cpu_ftrs.c             |  14 ++-
 arch/powerpc/kernel/entry_64.S                |   4 +-
 arch/powerpc/kernel/exceptions-64s.S          |  20 +++-
 arch/powerpc/kernel/mce.c                     |   2 +-
 arch/powerpc/kernel/mce_power.c               |  16 ++-
 arch/powerpc/kernel/paca.c                    |  18 ++-
 arch/powerpc/kernel/process.c                 |  13 +-
 arch/powerpc/kernel/prom.c                    |   2 +
 arch/powerpc/kernel/setup_64.c                |  26 +++-
 arch/powerpc/kexec/core_64.c                  |   4 +-
 arch/powerpc/kexec/ranges.c                   |   4 +
 arch/powerpc/kvm/Kconfig                      |   1 +
 arch/powerpc/mm/book3s64/Makefile             |  19 +--
 arch/powerpc/mm/book3s64/hash_native.c        | 104 ----------------
 arch/powerpc/mm/book3s64/hash_pgtable.c       |   1 -
 arch/powerpc/mm/book3s64/hash_utils.c         | 111 +++++++++++++++++-
 .../{hash_hugetlbpage.c => hugetlbpage.c}     |   2 +
 arch/powerpc/mm/book3s64/mmu_context.c        |  32 ++++-
 arch/powerpc/mm/book3s64/pgtable.c            |  27 +++++
 arch/powerpc/mm/book3s64/radix_pgtable.c      |   4 +
 arch/powerpc/mm/book3s64/slb.c                |  16 ---
 arch/powerpc/mm/book3s64/trace.c              |   8 ++
 arch/powerpc/mm/copro_fault.c                 |   2 +
 arch/powerpc/mm/fault.c                       |  24 ++++
 arch/powerpc/mm/hugetlbpage.c                 |  16 ++-
 arch/powerpc/mm/init_64.c                     |  13 +-
 arch/powerpc/mm/ioremap.c                     |  20 ----
 arch/powerpc/mm/mmap.c                        |  40 ++++++-
 arch/powerpc/mm/nohash/tlb.c                  |   9 --
 arch/powerpc/mm/pgtable.c                     |   9 +-
 arch/powerpc/mm/ptdump/Makefile               |   2 +-
 arch/powerpc/mm/slice.c                       |  20 ----
 arch/powerpc/platforms/52xx/Kconfig           |   2 +-
 arch/powerpc/platforms/Kconfig                |   4 +-
 arch/powerpc/platforms/Kconfig.cputype        |  28 ++++-
 arch/powerpc/platforms/cell/Kconfig           |   3 +-
 arch/powerpc/platforms/chrp/Kconfig           |   2 +-
 arch/powerpc/platforms/embedded6xx/Kconfig    |   2 +-
 arch/powerpc/platforms/maple/Kconfig          |   3 +-
 arch/powerpc/platforms/microwatt/Kconfig      |   1 -
 arch/powerpc/platforms/pasemi/Kconfig         |   3 +-
 arch/powerpc/platforms/powermac/Kconfig       |   3 +-
 arch/powerpc/platforms/powernv/Kconfig        |   2 +-
 arch/powerpc/platforms/powernv/idle.c         |   2 +
 arch/powerpc/platforms/powernv/setup.c        |   2 +
 arch/powerpc/platforms/pseries/Kconfig        |   1 -
 arch/powerpc/platforms/pseries/lpar.c         |  67 ++++++-----
 arch/powerpc/platforms/pseries/lparcfg.c      |   5 +-
 arch/powerpc/platforms/pseries/mobility.c     |   6 +
 arch/powerpc/platforms/pseries/ras.c          |   2 +
 arch/powerpc/platforms/pseries/reconfig.c     |   2 +
 arch/powerpc/platforms/pseries/setup.c        |   6 +-
 arch/powerpc/xmon/xmon.c                      |   8 +-
 drivers/misc/cxl/Kconfig                      |   1 +
 drivers/misc/lkdtm/Makefile                   |   2 +-
 drivers/misc/lkdtm/core.c                     |   2 +-
 70 files changed, 527 insertions(+), 327 deletions(-)
 rename arch/powerpc/mm/book3s64/{hash_hugetlbpage.c => hugetlbpage.c} (99%)
 create mode 100644 arch/powerpc/mm/book3s64/trace.c

-- 
2.23.0



* [PATCH v6 01/18] powerpc: Remove unused FW_FEATURE_NATIVE references
From: Nicholas Piggin @ 2021-12-01 14:41 UTC (permalink / raw)
  To: linuxppc-dev; +Cc: Nicholas Piggin

FW_FEATURE_NATIVE_ALWAYS and FW_FEATURE_NATIVE_POSSIBLE are always
zero and never do anything. Remove them.
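
For context, a sketch of the consumer, roughly the shape of
firmware_has_feature() in asm/firmware.h (written from memory, so treat
as illustrative). Since both removed constants are zero, they contribute
nothing to either mask and can never make this test true:

static inline unsigned long firmware_has_feature(unsigned long feature)
{
	return (FW_FEATURE_ALWAYS & feature) ||
	       (FW_FEATURE_POSSIBLE & feature &&
		powerpc_firmware_features & feature);
}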

Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
---
 arch/powerpc/include/asm/firmware.h | 8 --------
 1 file changed, 8 deletions(-)

diff --git a/arch/powerpc/include/asm/firmware.h b/arch/powerpc/include/asm/firmware.h
index 97a3bd9ffeb9..9b702d2b80fb 100644
--- a/arch/powerpc/include/asm/firmware.h
+++ b/arch/powerpc/include/asm/firmware.h
@@ -80,8 +80,6 @@ enum {
 	FW_FEATURE_POWERNV_ALWAYS = 0,
 	FW_FEATURE_PS3_POSSIBLE = FW_FEATURE_LPAR | FW_FEATURE_PS3_LV1,
 	FW_FEATURE_PS3_ALWAYS = FW_FEATURE_LPAR | FW_FEATURE_PS3_LV1,
-	FW_FEATURE_NATIVE_POSSIBLE = 0,
-	FW_FEATURE_NATIVE_ALWAYS = 0,
 	FW_FEATURE_POSSIBLE =
 #ifdef CONFIG_PPC_PSERIES
 		FW_FEATURE_PSERIES_POSSIBLE |
@@ -91,9 +89,6 @@ enum {
 #endif
 #ifdef CONFIG_PPC_PS3
 		FW_FEATURE_PS3_POSSIBLE |
-#endif
-#ifdef CONFIG_PPC_NATIVE
-		FW_FEATURE_NATIVE_ALWAYS |
 #endif
 		0,
 	FW_FEATURE_ALWAYS =
@@ -105,9 +100,6 @@ enum {
 #endif
 #ifdef CONFIG_PPC_PS3
 		FW_FEATURE_PS3_ALWAYS &
-#endif
-#ifdef CONFIG_PPC_NATIVE
-		FW_FEATURE_NATIVE_ALWAYS &
 #endif
 		FW_FEATURE_POSSIBLE,
 
-- 
2.23.0



* [PATCH v6 02/18] powerpc: Rename PPC_NATIVE to PPC_HASH_MMU_NATIVE
From: Nicholas Piggin @ 2021-12-01 14:41 UTC (permalink / raw)
  To: linuxppc-dev; +Cc: Nicholas Piggin

PPC_NATIVE now only controls the native HPT code, so rename it to be
more descriptive. Restrict it to Book3S only.
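
The hash_utils.c hunk below leans on a standard trick worth spelling
out; a self-contained sketch, with CONFIG_FOO and foo() as hypothetical
stand-ins rather than names from this series:

#include <linux/kconfig.h>

void foo(void);	/* defined in an object file built only when CONFIG_FOO=y */

void init(void)
{
	/* IS_ENABLED() folds to constant 0 or 1, so when CONFIG_FOO=n
	 * the call is dead code and the symbol is never referenced,
	 * even though foo()'s object file is no longer built. */
	if (IS_ENABLED(CONFIG_FOO))
		foo();
}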

Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
---
 arch/powerpc/mm/book3s64/Makefile          | 2 +-
 arch/powerpc/mm/book3s64/hash_utils.c      | 2 +-
 arch/powerpc/platforms/52xx/Kconfig        | 2 +-
 arch/powerpc/platforms/Kconfig             | 4 ++--
 arch/powerpc/platforms/cell/Kconfig        | 2 +-
 arch/powerpc/platforms/chrp/Kconfig        | 2 +-
 arch/powerpc/platforms/embedded6xx/Kconfig | 2 +-
 arch/powerpc/platforms/maple/Kconfig       | 2 +-
 arch/powerpc/platforms/microwatt/Kconfig   | 2 +-
 arch/powerpc/platforms/pasemi/Kconfig      | 2 +-
 arch/powerpc/platforms/powermac/Kconfig    | 2 +-
 arch/powerpc/platforms/powernv/Kconfig     | 2 +-
 arch/powerpc/platforms/pseries/Kconfig     | 2 +-
 13 files changed, 14 insertions(+), 14 deletions(-)

diff --git a/arch/powerpc/mm/book3s64/Makefile b/arch/powerpc/mm/book3s64/Makefile
index 1b56d3af47d4..319f4b7f3357 100644
--- a/arch/powerpc/mm/book3s64/Makefile
+++ b/arch/powerpc/mm/book3s64/Makefile
@@ -6,7 +6,7 @@ CFLAGS_REMOVE_slb.o = $(CC_FLAGS_FTRACE)
 
 obj-y				+= hash_pgtable.o hash_utils.o slb.o \
 				   mmu_context.o pgtable.o hash_tlb.o
-obj-$(CONFIG_PPC_NATIVE)	+= hash_native.o
+obj-$(CONFIG_PPC_HASH_MMU_NATIVE)	+= hash_native.o
 obj-$(CONFIG_PPC_RADIX_MMU)	+= radix_pgtable.o radix_tlb.o
 obj-$(CONFIG_PPC_4K_PAGES)	+= hash_4k.o
 obj-$(CONFIG_PPC_64K_PAGES)	+= hash_64k.o
diff --git a/arch/powerpc/mm/book3s64/hash_utils.c b/arch/powerpc/mm/book3s64/hash_utils.c
index cfd45245d009..92680da5229a 100644
--- a/arch/powerpc/mm/book3s64/hash_utils.c
+++ b/arch/powerpc/mm/book3s64/hash_utils.c
@@ -1091,7 +1091,7 @@ void __init hash__early_init_mmu(void)
 		ps3_early_mm_init();
 	else if (firmware_has_feature(FW_FEATURE_LPAR))
 		hpte_init_pseries();
-	else if (IS_ENABLED(CONFIG_PPC_NATIVE))
+	else if (IS_ENABLED(CONFIG_PPC_HASH_MMU_NATIVE))
 		hpte_init_native();
 
 	if (!mmu_hash_ops.hpte_insert)
diff --git a/arch/powerpc/platforms/52xx/Kconfig b/arch/powerpc/platforms/52xx/Kconfig
index 99d60acc20c8..b72ed2950ca8 100644
--- a/arch/powerpc/platforms/52xx/Kconfig
+++ b/arch/powerpc/platforms/52xx/Kconfig
@@ -34,7 +34,7 @@ config PPC_EFIKA
 	bool "bPlan Efika 5k2. MPC5200B based computer"
 	depends on PPC_MPC52xx
 	select PPC_RTAS
-	select PPC_NATIVE
+	select PPC_HASH_MMU_NATIVE
 
 config PPC_LITE5200
 	bool "Freescale Lite5200 Eval Board"
diff --git a/arch/powerpc/platforms/Kconfig b/arch/powerpc/platforms/Kconfig
index e02d29a9d12f..d41dad227de8 100644
--- a/arch/powerpc/platforms/Kconfig
+++ b/arch/powerpc/platforms/Kconfig
@@ -40,9 +40,9 @@ config EPAPR_PARAVIRT
 
 	  In case of doubt, say Y
 
-config PPC_NATIVE
+config PPC_HASH_MMU_NATIVE
 	bool
-	depends on PPC_BOOK3S_32 || PPC64
+	depends on PPC_BOOK3S
 	help
 	  Support for running natively on the hardware, i.e. without
 	  a hypervisor. This option is not user-selectable but should
diff --git a/arch/powerpc/platforms/cell/Kconfig b/arch/powerpc/platforms/cell/Kconfig
index cb70c5f25bc6..db4465c51b56 100644
--- a/arch/powerpc/platforms/cell/Kconfig
+++ b/arch/powerpc/platforms/cell/Kconfig
@@ -8,7 +8,7 @@ config PPC_CELL_COMMON
 	select PPC_DCR_MMIO
 	select PPC_INDIRECT_PIO
 	select PPC_INDIRECT_MMIO
-	select PPC_NATIVE
+	select PPC_HASH_MMU_NATIVE
 	select PPC_RTAS
 	select IRQ_EDGE_EOI_HANDLER
 
diff --git a/arch/powerpc/platforms/chrp/Kconfig b/arch/powerpc/platforms/chrp/Kconfig
index 9b5c5505718a..ff30ed579a39 100644
--- a/arch/powerpc/platforms/chrp/Kconfig
+++ b/arch/powerpc/platforms/chrp/Kconfig
@@ -11,6 +11,6 @@ config PPC_CHRP
 	select RTAS_ERROR_LOGGING
 	select PPC_MPC106
 	select PPC_UDBG_16550
-	select PPC_NATIVE
+	select PPC_HASH_MMU_NATIVE
 	select FORCE_PCI
 	default y
diff --git a/arch/powerpc/platforms/embedded6xx/Kconfig b/arch/powerpc/platforms/embedded6xx/Kconfig
index 4c6d703a4284..c54786f8461e 100644
--- a/arch/powerpc/platforms/embedded6xx/Kconfig
+++ b/arch/powerpc/platforms/embedded6xx/Kconfig
@@ -55,7 +55,7 @@ config MVME5100
 	select FORCE_PCI
 	select PPC_INDIRECT_PCI
 	select PPC_I8259
-	select PPC_NATIVE
+	select PPC_HASH_MMU_NATIVE
 	select PPC_UDBG_16550
 	help
 	  This option enables support for the Motorola (now Emerson) MVME5100
diff --git a/arch/powerpc/platforms/maple/Kconfig b/arch/powerpc/platforms/maple/Kconfig
index 86ae210bee9a..7fd84311ade5 100644
--- a/arch/powerpc/platforms/maple/Kconfig
+++ b/arch/powerpc/platforms/maple/Kconfig
@@ -9,7 +9,7 @@ config PPC_MAPLE
 	select GENERIC_TBSYNC
 	select PPC_UDBG_16550
 	select PPC_970_NAP
-	select PPC_NATIVE
+	select PPC_HASH_MMU_NATIVE
 	select PPC_RTAS
 	select MMIO_NVRAM
 	select ATA_NONSTANDARD if ATA
diff --git a/arch/powerpc/platforms/microwatt/Kconfig b/arch/powerpc/platforms/microwatt/Kconfig
index 8f6a81978461..62b51e37fc05 100644
--- a/arch/powerpc/platforms/microwatt/Kconfig
+++ b/arch/powerpc/platforms/microwatt/Kconfig
@@ -5,7 +5,7 @@ config PPC_MICROWATT
 	select PPC_XICS
 	select PPC_ICS_NATIVE
 	select PPC_ICP_NATIVE
-	select PPC_NATIVE
+	select PPC_HASH_MMU_NATIVE
 	select PPC_UDBG_16550
 	select ARCH_RANDOM
 	help
diff --git a/arch/powerpc/platforms/pasemi/Kconfig b/arch/powerpc/platforms/pasemi/Kconfig
index c52731a7773f..bc7137353a7f 100644
--- a/arch/powerpc/platforms/pasemi/Kconfig
+++ b/arch/powerpc/platforms/pasemi/Kconfig
@@ -5,7 +5,7 @@ config PPC_PASEMI
 	select MPIC
 	select FORCE_PCI
 	select PPC_UDBG_16550
-	select PPC_NATIVE
+	select PPC_HASH_MMU_NATIVE
 	select MPIC_BROKEN_REGREAD
 	help
 	  This option enables support for PA Semi's PWRficient line
diff --git a/arch/powerpc/platforms/powermac/Kconfig b/arch/powerpc/platforms/powermac/Kconfig
index b97bf12801eb..2b56df145b82 100644
--- a/arch/powerpc/platforms/powermac/Kconfig
+++ b/arch/powerpc/platforms/powermac/Kconfig
@@ -6,7 +6,7 @@ config PPC_PMAC
 	select FORCE_PCI
 	select PPC_INDIRECT_PCI if PPC32
 	select PPC_MPC106 if PPC32
-	select PPC_NATIVE
+	select PPC_HASH_MMU_NATIVE
 	select ZONE_DMA if PPC32
 	default y
 
diff --git a/arch/powerpc/platforms/powernv/Kconfig b/arch/powerpc/platforms/powernv/Kconfig
index 043eefbbdd28..cd754e116184 100644
--- a/arch/powerpc/platforms/powernv/Kconfig
+++ b/arch/powerpc/platforms/powernv/Kconfig
@@ -2,7 +2,7 @@
 config PPC_POWERNV
 	depends on PPC64 && PPC_BOOK3S
 	bool "IBM PowerNV (Non-Virtualized) platform support"
-	select PPC_NATIVE
+	select PPC_HASH_MMU_NATIVE
 	select PPC_XICS
 	select PPC_ICP_NATIVE
 	select PPC_XIVE_NATIVE
diff --git a/arch/powerpc/platforms/pseries/Kconfig b/arch/powerpc/platforms/pseries/Kconfig
index 9bd542164128..30618750bd98 100644
--- a/arch/powerpc/platforms/pseries/Kconfig
+++ b/arch/powerpc/platforms/pseries/Kconfig
@@ -17,7 +17,7 @@ config PPC_PSERIES
 	select PPC_RTAS_DAEMON
 	select RTAS_ERROR_LOGGING
 	select PPC_UDBG_16550
-	select PPC_NATIVE
+	select PPC_HASH_MMU_NATIVE
 	select PPC_DOORBELL
 	select HOTPLUG_CPU
 	select ARCH_RANDOM
-- 
2.23.0



* [PATCH v6 03/18] powerpc/pseries: Stop selecting PPC_HASH_MMU_NATIVE
From: Nicholas Piggin @ 2021-12-01 14:41 UTC (permalink / raw)
  To: linuxppc-dev; +Cc: Nicholas Piggin

The pseries platform does not use the native hash code but the PAPR
virtualised hash interfaces, so stop selecting PPC_HASH_MMU_NATIVE.

This requires moving the tlbiel code from hash_native.c to hash_utils.c:
tlbiel_all() is used on pseries too, so hash__tlbiel_all() must be
available even when the native HPT code is not built.

Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
---
 arch/powerpc/include/asm/book3s/64/tlbflush.h |   4 -
 arch/powerpc/mm/book3s64/hash_native.c        | 104 ------------------
 arch/powerpc/mm/book3s64/hash_utils.c         | 104 ++++++++++++++++++
 arch/powerpc/platforms/pseries/Kconfig        |   1 -
 4 files changed, 104 insertions(+), 109 deletions(-)

diff --git a/arch/powerpc/include/asm/book3s/64/tlbflush.h b/arch/powerpc/include/asm/book3s/64/tlbflush.h
index 215973b4cb26..d2e80f178b6d 100644
--- a/arch/powerpc/include/asm/book3s/64/tlbflush.h
+++ b/arch/powerpc/include/asm/book3s/64/tlbflush.h
@@ -14,7 +14,6 @@ enum {
 	TLB_INVAL_SCOPE_LPID = 1,	/* invalidate TLBs for current LPID */
 };
 
-#ifdef CONFIG_PPC_NATIVE
 static inline void tlbiel_all(void)
 {
 	/*
@@ -30,9 +29,6 @@ static inline void tlbiel_all(void)
 	else
 		hash__tlbiel_all(TLB_INVAL_SCOPE_GLOBAL);
 }
-#else
-static inline void tlbiel_all(void) { BUG(); }
-#endif
 
 static inline void tlbiel_all_lpid(bool radix)
 {
diff --git a/arch/powerpc/mm/book3s64/hash_native.c b/arch/powerpc/mm/book3s64/hash_native.c
index d8279bfe68ea..d2a320828c0b 100644
--- a/arch/powerpc/mm/book3s64/hash_native.c
+++ b/arch/powerpc/mm/book3s64/hash_native.c
@@ -43,110 +43,6 @@
 
 static DEFINE_RAW_SPINLOCK(native_tlbie_lock);
 
-static inline void tlbiel_hash_set_isa206(unsigned int set, unsigned int is)
-{
-	unsigned long rb;
-
-	rb = (set << PPC_BITLSHIFT(51)) | (is << PPC_BITLSHIFT(53));
-
-	asm volatile("tlbiel %0" : : "r" (rb));
-}
-
-/*
- * tlbiel instruction for hash, set invalidation
- * i.e., r=1 and is=01 or is=10 or is=11
- */
-static __always_inline void tlbiel_hash_set_isa300(unsigned int set, unsigned int is,
-					unsigned int pid,
-					unsigned int ric, unsigned int prs)
-{
-	unsigned long rb;
-	unsigned long rs;
-	unsigned int r = 0; /* hash format */
-
-	rb = (set << PPC_BITLSHIFT(51)) | (is << PPC_BITLSHIFT(53));
-	rs = ((unsigned long)pid << PPC_BITLSHIFT(31));
-
-	asm volatile(PPC_TLBIEL(%0, %1, %2, %3, %4)
-		     : : "r"(rb), "r"(rs), "i"(ric), "i"(prs), "i"(r)
-		     : "memory");
-}
-
-
-static void tlbiel_all_isa206(unsigned int num_sets, unsigned int is)
-{
-	unsigned int set;
-
-	asm volatile("ptesync": : :"memory");
-
-	for (set = 0; set < num_sets; set++)
-		tlbiel_hash_set_isa206(set, is);
-
-	ppc_after_tlbiel_barrier();
-}
-
-static void tlbiel_all_isa300(unsigned int num_sets, unsigned int is)
-{
-	unsigned int set;
-
-	asm volatile("ptesync": : :"memory");
-
-	/*
-	 * Flush the partition table cache if this is HV mode.
-	 */
-	if (early_cpu_has_feature(CPU_FTR_HVMODE))
-		tlbiel_hash_set_isa300(0, is, 0, 2, 0);
-
-	/*
-	 * Now invalidate the process table cache. UPRT=0 HPT modes (what
-	 * current hardware implements) do not use the process table, but
-	 * add the flushes anyway.
-	 *
-	 * From ISA v3.0B p. 1078:
-	 *     The following forms are invalid.
-	 *      * PRS=1, R=0, and RIC!=2 (The only process-scoped
-	 *        HPT caching is of the Process Table.)
-	 */
-	tlbiel_hash_set_isa300(0, is, 0, 2, 1);
-
-	/*
-	 * Then flush the sets of the TLB proper. Hash mode uses
-	 * partition scoped TLB translations, which may be flushed
-	 * in !HV mode.
-	 */
-	for (set = 0; set < num_sets; set++)
-		tlbiel_hash_set_isa300(set, is, 0, 0, 0);
-
-	ppc_after_tlbiel_barrier();
-
-	asm volatile(PPC_ISA_3_0_INVALIDATE_ERAT "; isync" : : :"memory");
-}
-
-void hash__tlbiel_all(unsigned int action)
-{
-	unsigned int is;
-
-	switch (action) {
-	case TLB_INVAL_SCOPE_GLOBAL:
-		is = 3;
-		break;
-	case TLB_INVAL_SCOPE_LPID:
-		is = 2;
-		break;
-	default:
-		BUG();
-	}
-
-	if (early_cpu_has_feature(CPU_FTR_ARCH_300))
-		tlbiel_all_isa300(POWER9_TLB_SETS_HASH, is);
-	else if (early_cpu_has_feature(CPU_FTR_ARCH_207S))
-		tlbiel_all_isa206(POWER8_TLB_SETS, is);
-	else if (early_cpu_has_feature(CPU_FTR_ARCH_206))
-		tlbiel_all_isa206(POWER7_TLB_SETS, is);
-	else
-		WARN(1, "%s called on pre-POWER7 CPU\n", __func__);
-}
-
 static inline unsigned long  ___tlbie(unsigned long vpn, int psize,
 						int apsize, int ssize)
 {
diff --git a/arch/powerpc/mm/book3s64/hash_utils.c b/arch/powerpc/mm/book3s64/hash_utils.c
index 92680da5229a..97a36fa3940e 100644
--- a/arch/powerpc/mm/book3s64/hash_utils.c
+++ b/arch/powerpc/mm/book3s64/hash_utils.c
@@ -175,6 +175,110 @@ static struct mmu_psize_def mmu_psize_defaults_gp[] = {
 	},
 };
 
+static inline void tlbiel_hash_set_isa206(unsigned int set, unsigned int is)
+{
+	unsigned long rb;
+
+	rb = (set << PPC_BITLSHIFT(51)) | (is << PPC_BITLSHIFT(53));
+
+	asm volatile("tlbiel %0" : : "r" (rb));
+}
+
+/*
+ * tlbiel instruction for hash, set invalidation
+ * i.e., r=1 and is=01 or is=10 or is=11
+ */
+static __always_inline void tlbiel_hash_set_isa300(unsigned int set, unsigned int is,
+					unsigned int pid,
+					unsigned int ric, unsigned int prs)
+{
+	unsigned long rb;
+	unsigned long rs;
+	unsigned int r = 0; /* hash format */
+
+	rb = (set << PPC_BITLSHIFT(51)) | (is << PPC_BITLSHIFT(53));
+	rs = ((unsigned long)pid << PPC_BITLSHIFT(31));
+
+	asm volatile(PPC_TLBIEL(%0, %1, %2, %3, %4)
+		     : : "r"(rb), "r"(rs), "i"(ric), "i"(prs), "i"(r)
+		     : "memory");
+}
+
+
+static void tlbiel_all_isa206(unsigned int num_sets, unsigned int is)
+{
+	unsigned int set;
+
+	asm volatile("ptesync": : :"memory");
+
+	for (set = 0; set < num_sets; set++)
+		tlbiel_hash_set_isa206(set, is);
+
+	ppc_after_tlbiel_barrier();
+}
+
+static void tlbiel_all_isa300(unsigned int num_sets, unsigned int is)
+{
+	unsigned int set;
+
+	asm volatile("ptesync": : :"memory");
+
+	/*
+	 * Flush the partition table cache if this is HV mode.
+	 */
+	if (early_cpu_has_feature(CPU_FTR_HVMODE))
+		tlbiel_hash_set_isa300(0, is, 0, 2, 0);
+
+	/*
+	 * Now invalidate the process table cache. UPRT=0 HPT modes (what
+	 * current hardware implements) do not use the process table, but
+	 * add the flushes anyway.
+	 *
+	 * From ISA v3.0B p. 1078:
+	 *     The following forms are invalid.
+	 *      * PRS=1, R=0, and RIC!=2 (The only process-scoped
+	 *        HPT caching is of the Process Table.)
+	 */
+	tlbiel_hash_set_isa300(0, is, 0, 2, 1);
+
+	/*
+	 * Then flush the sets of the TLB proper. Hash mode uses
+	 * partition scoped TLB translations, which may be flushed
+	 * in !HV mode.
+	 */
+	for (set = 0; set < num_sets; set++)
+		tlbiel_hash_set_isa300(set, is, 0, 0, 0);
+
+	ppc_after_tlbiel_barrier();
+
+	asm volatile(PPC_ISA_3_0_INVALIDATE_ERAT "; isync" : : :"memory");
+}
+
+void hash__tlbiel_all(unsigned int action)
+{
+	unsigned int is;
+
+	switch (action) {
+	case TLB_INVAL_SCOPE_GLOBAL:
+		is = 3;
+		break;
+	case TLB_INVAL_SCOPE_LPID:
+		is = 2;
+		break;
+	default:
+		BUG();
+	}
+
+	if (early_cpu_has_feature(CPU_FTR_ARCH_300))
+		tlbiel_all_isa300(POWER9_TLB_SETS_HASH, is);
+	else if (early_cpu_has_feature(CPU_FTR_ARCH_207S))
+		tlbiel_all_isa206(POWER8_TLB_SETS, is);
+	else if (early_cpu_has_feature(CPU_FTR_ARCH_206))
+		tlbiel_all_isa206(POWER7_TLB_SETS, is);
+	else
+		WARN(1, "%s called on pre-POWER7 CPU\n", __func__);
+}
+
 /*
  * 'R' and 'C' update notes:
  *  - Under pHyp or KVM, the updatepp path will not set C, thus it *will*
diff --git a/arch/powerpc/platforms/pseries/Kconfig b/arch/powerpc/platforms/pseries/Kconfig
index 30618750bd98..f7fd91d153a4 100644
--- a/arch/powerpc/platforms/pseries/Kconfig
+++ b/arch/powerpc/platforms/pseries/Kconfig
@@ -17,7 +17,6 @@ config PPC_PSERIES
 	select PPC_RTAS_DAEMON
 	select RTAS_ERROR_LOGGING
 	select PPC_UDBG_16550
-	select PPC_HASH_MMU_NATIVE
 	select PPC_DOORBELL
 	select HOTPLUG_CPU
 	select ARCH_RANDOM
-- 
2.23.0



* [PATCH v6 04/18] powerpc/64s: Move and rename do_bad_slb_fault as it is not hash specific
From: Nicholas Piggin @ 2021-12-01 14:41 UTC (permalink / raw)
  To: linuxppc-dev; +Cc: Nicholas Piggin

slb.c is hash-specific SLB management, but do_bad_slb_fault deals with
segment interrupts that occur with radix MMU as well.

Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
---
 arch/powerpc/include/asm/interrupt.h |  2 +-
 arch/powerpc/kernel/exceptions-64s.S |  4 ++--
 arch/powerpc/mm/book3s64/slb.c       | 16 ----------------
 arch/powerpc/mm/fault.c              | 24 ++++++++++++++++++++++++
 4 files changed, 27 insertions(+), 19 deletions(-)

diff --git a/arch/powerpc/include/asm/interrupt.h b/arch/powerpc/include/asm/interrupt.h
index a1d238255f07..3487aab12229 100644
--- a/arch/powerpc/include/asm/interrupt.h
+++ b/arch/powerpc/include/asm/interrupt.h
@@ -564,7 +564,7 @@ DECLARE_INTERRUPT_HANDLER(kernel_bad_stack);
 
 /* slb.c */
 DECLARE_INTERRUPT_HANDLER_RAW(do_slb_fault);
-DECLARE_INTERRUPT_HANDLER(do_bad_slb_fault);
+DECLARE_INTERRUPT_HANDLER(do_bad_segment_interrupt);
 
 /* hash_utils.c */
 DECLARE_INTERRUPT_HANDLER_RAW(do_hash_fault);
diff --git a/arch/powerpc/kernel/exceptions-64s.S b/arch/powerpc/kernel/exceptions-64s.S
index eaf1f72131a1..046c99e31d01 100644
--- a/arch/powerpc/kernel/exceptions-64s.S
+++ b/arch/powerpc/kernel/exceptions-64s.S
@@ -1430,7 +1430,7 @@ MMU_FTR_SECTION_ELSE
 ALT_MMU_FTR_SECTION_END_IFCLR(MMU_FTR_TYPE_RADIX)
 	std	r3,RESULT(r1)
 	addi	r3,r1,STACK_FRAME_OVERHEAD
-	bl	do_bad_slb_fault
+	bl	do_bad_segment_interrupt
 	b	interrupt_return_srr
 
 
@@ -1510,7 +1510,7 @@ MMU_FTR_SECTION_ELSE
 ALT_MMU_FTR_SECTION_END_IFCLR(MMU_FTR_TYPE_RADIX)
 	std	r3,RESULT(r1)
 	addi	r3,r1,STACK_FRAME_OVERHEAD
-	bl	do_bad_slb_fault
+	bl	do_bad_segment_interrupt
 	b	interrupt_return_srr
 
 
diff --git a/arch/powerpc/mm/book3s64/slb.c b/arch/powerpc/mm/book3s64/slb.c
index f0037bcc47a0..31f4cef3adac 100644
--- a/arch/powerpc/mm/book3s64/slb.c
+++ b/arch/powerpc/mm/book3s64/slb.c
@@ -868,19 +868,3 @@ DEFINE_INTERRUPT_HANDLER_RAW(do_slb_fault)
 		return err;
 	}
 }
-
-DEFINE_INTERRUPT_HANDLER(do_bad_slb_fault)
-{
-	int err = regs->result;
-
-	if (err == -EFAULT) {
-		if (user_mode(regs))
-			_exception(SIGSEGV, regs, SEGV_BNDERR, regs->dar);
-		else
-			bad_page_fault(regs, SIGSEGV);
-	} else if (err == -EINVAL) {
-		unrecoverable_exception(regs);
-	} else {
-		BUG();
-	}
-}
diff --git a/arch/powerpc/mm/fault.c b/arch/powerpc/mm/fault.c
index a8d0ce85d39a..2d4a411c7c85 100644
--- a/arch/powerpc/mm/fault.c
+++ b/arch/powerpc/mm/fault.c
@@ -35,6 +35,7 @@
 #include <linux/kfence.h>
 #include <linux/pkeys.h>
 
+#include <asm/asm-prototypes.h>
 #include <asm/firmware.h>
 #include <asm/interrupt.h>
 #include <asm/page.h>
@@ -620,4 +621,27 @@ DEFINE_INTERRUPT_HANDLER(do_bad_page_fault_segv)
 {
 	bad_page_fault(regs, SIGSEGV);
 }
+
+/*
+ * In radix, segment interrupts indicate the EA is not addressable by the
+ * page table geometry, so they are always sent here.
+ *
+ * In hash, this is called if do_slb_fault returns error. Typically it is
+ * because the EA was outside the region allowed by software.
+ */
+DEFINE_INTERRUPT_HANDLER(do_bad_segment_interrupt)
+{
+	int err = regs->result;
+
+	if (err == -EFAULT) {
+		if (user_mode(regs))
+			_exception(SIGSEGV, regs, SEGV_BNDERR, regs->dar);
+		else
+			bad_page_fault(regs, SIGSEGV);
+	} else if (err == -EINVAL) {
+		unrecoverable_exception(regs);
+	} else {
+		BUG();
+	}
+}
 #endif
-- 
2.23.0



* [PATCH v6 05/18] powerpc/pseries: move process table registration away from hash-specific code
From: Nicholas Piggin @ 2021-12-01 14:41 UTC (permalink / raw)
  To: linuxppc-dev; +Cc: Nicholas Piggin

The process table registration hcall is not hash-specific: it registers
either a radix or an HPT process table (PROC_TABLE_RADIX vs
PROC_TABLE_HPT_SLB). Move it out of the hash-only region of lpar.c,
which reduces ifdefs in a later change that makes hash support
configurable.

Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
---
 arch/powerpc/platforms/pseries/lpar.c | 56 +++++++++++++--------------
 1 file changed, 28 insertions(+), 28 deletions(-)

diff --git a/arch/powerpc/platforms/pseries/lpar.c b/arch/powerpc/platforms/pseries/lpar.c
index 3df6bdfea475..06d6a824c0dc 100644
--- a/arch/powerpc/platforms/pseries/lpar.c
+++ b/arch/powerpc/platforms/pseries/lpar.c
@@ -712,6 +712,34 @@ void vpa_init(int cpu)
 
 #ifdef CONFIG_PPC_BOOK3S_64
 
+static int pseries_lpar_register_process_table(unsigned long base,
+			unsigned long page_size, unsigned long table_size)
+{
+	long rc;
+	unsigned long flags = 0;
+
+	if (table_size)
+		flags |= PROC_TABLE_NEW;
+	if (radix_enabled()) {
+		flags |= PROC_TABLE_RADIX;
+		if (mmu_has_feature(MMU_FTR_GTSE))
+			flags |= PROC_TABLE_GTSE;
+	} else
+		flags |= PROC_TABLE_HPT_SLB;
+	for (;;) {
+		rc = plpar_hcall_norets(H_REGISTER_PROC_TBL, flags, base,
+					page_size, table_size);
+		if (!H_IS_LONG_BUSY(rc))
+			break;
+		mdelay(get_longbusy_msecs(rc));
+	}
+	if (rc != H_SUCCESS) {
+		pr_err("Failed to register process table (rc=%ld)\n", rc);
+		BUG();
+	}
+	return rc;
+}
+
 static long pSeries_lpar_hpte_insert(unsigned long hpte_group,
 				     unsigned long vpn, unsigned long pa,
 				     unsigned long rflags, unsigned long vflags,
@@ -1680,34 +1708,6 @@ static int pseries_lpar_resize_hpt(unsigned long shift)
 	return 0;
 }
 
-static int pseries_lpar_register_process_table(unsigned long base,
-			unsigned long page_size, unsigned long table_size)
-{
-	long rc;
-	unsigned long flags = 0;
-
-	if (table_size)
-		flags |= PROC_TABLE_NEW;
-	if (radix_enabled()) {
-		flags |= PROC_TABLE_RADIX;
-		if (mmu_has_feature(MMU_FTR_GTSE))
-			flags |= PROC_TABLE_GTSE;
-	} else
-		flags |= PROC_TABLE_HPT_SLB;
-	for (;;) {
-		rc = plpar_hcall_norets(H_REGISTER_PROC_TBL, flags, base,
-					page_size, table_size);
-		if (!H_IS_LONG_BUSY(rc))
-			break;
-		mdelay(get_longbusy_msecs(rc));
-	}
-	if (rc != H_SUCCESS) {
-		pr_err("Failed to register process table (rc=%ld)\n", rc);
-		BUG();
-	}
-	return rc;
-}
-
 void __init hpte_init_pseries(void)
 {
 	mmu_hash_ops.hpte_invalidate	 = pSeries_lpar_hpte_invalidate;
-- 
2.23.0



* [PATCH v6 06/18] powerpc/pseries: lparcfg don't include slb_size line in radix mode
From: Nicholas Piggin @ 2021-12-01 14:41 UTC (permalink / raw)
  To: linuxppc-dev; +Cc: Nicholas Piggin

This avoids a change in behaviour in the later patch that makes hash
support configurable. This is possibly a user interface change, so the
alternative would be to hard-code slb_size=0 here.

Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
---
 arch/powerpc/platforms/pseries/lparcfg.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/arch/powerpc/platforms/pseries/lparcfg.c b/arch/powerpc/platforms/pseries/lparcfg.c
index f71eac74ea92..3354c00914fa 100644
--- a/arch/powerpc/platforms/pseries/lparcfg.c
+++ b/arch/powerpc/platforms/pseries/lparcfg.c
@@ -532,7 +532,8 @@ static int pseries_lparcfg_data(struct seq_file *m, void *v)
 		   lppaca_shared_proc(get_lppaca()));
 
 #ifdef CONFIG_PPC_BOOK3S_64
-	seq_printf(m, "slb_size=%d\n", mmu_slb_size);
+	if (!radix_enabled())
+		seq_printf(m, "slb_size=%d\n", mmu_slb_size);
 #endif
 	parse_em_data(m);
 	maxmem_data(m);
-- 
2.23.0



* [PATCH v6 07/18] powerpc/64s: move THP trace point creation out of hash specific file
From: Nicholas Piggin @ 2021-12-01 14:41 UTC (permalink / raw)
  To: linuxppc-dev; +Cc: Nicholas Piggin

In preparation for making hash MMU support configurable, move THP
trace point function definitions out of an otherwise hash-specific
file.
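
For background, the tracepoint convention relied on here (standard
kernel usage, not code from this patch): a trace header may be included
by many files, but exactly one object file must define
CREATE_TRACE_POINTS before the include, which emits the tracepoint
definitions once; every other includer gets declarations only.

/* In exactly one .c file -- after this patch, the new trace.c: */
#define CREATE_TRACE_POINTS
#include <trace/events/thp.h>	/* emits the trace_*() definitions */

/* In every other user: */
#include <trace/events/thp.h>	/* declarations only */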

Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
---
 arch/powerpc/mm/book3s64/Makefile       | 2 +-
 arch/powerpc/mm/book3s64/hash_pgtable.c | 1 -
 arch/powerpc/mm/book3s64/trace.c        | 8 ++++++++
 3 files changed, 9 insertions(+), 2 deletions(-)
 create mode 100644 arch/powerpc/mm/book3s64/trace.c

diff --git a/arch/powerpc/mm/book3s64/Makefile b/arch/powerpc/mm/book3s64/Makefile
index 319f4b7f3357..1579e18e098d 100644
--- a/arch/powerpc/mm/book3s64/Makefile
+++ b/arch/powerpc/mm/book3s64/Makefile
@@ -5,7 +5,7 @@ ccflags-y	:= $(NO_MINIMAL_TOC)
 CFLAGS_REMOVE_slb.o = $(CC_FLAGS_FTRACE)
 
 obj-y				+= hash_pgtable.o hash_utils.o slb.o \
-				   mmu_context.o pgtable.o hash_tlb.o
+				   mmu_context.o pgtable.o hash_tlb.o trace.o
 obj-$(CONFIG_PPC_HASH_MMU_NATIVE)	+= hash_native.o
 obj-$(CONFIG_PPC_RADIX_MMU)	+= radix_pgtable.o radix_tlb.o
 obj-$(CONFIG_PPC_4K_PAGES)	+= hash_4k.o
diff --git a/arch/powerpc/mm/book3s64/hash_pgtable.c b/arch/powerpc/mm/book3s64/hash_pgtable.c
index ad5eff097d31..7ce8914992e3 100644
--- a/arch/powerpc/mm/book3s64/hash_pgtable.c
+++ b/arch/powerpc/mm/book3s64/hash_pgtable.c
@@ -16,7 +16,6 @@
 
 #include <mm/mmu_decl.h>
 
-#define CREATE_TRACE_POINTS
 #include <trace/events/thp.h>
 
 #if H_PGTABLE_RANGE > (USER_VSID_RANGE * (TASK_SIZE_USER64 / TASK_CONTEXT_SIZE))
diff --git a/arch/powerpc/mm/book3s64/trace.c b/arch/powerpc/mm/book3s64/trace.c
new file mode 100644
index 000000000000..b86e7b906257
--- /dev/null
+++ b/arch/powerpc/mm/book3s64/trace.c
@@ -0,0 +1,8 @@
+// SPDX-License-Identifier: GPL-2.0-or-later
+/*
+ * This file is for defining trace points and trace related helpers.
+ */
+#ifdef CONFIG_TRANSPARENT_HUGEPAGE
+#define CREATE_TRACE_POINTS
+#include <trace/events/thp.h>
+#endif
-- 
2.23.0



* [PATCH v6 08/18] powerpc/64s: Make flush_and_reload_slb a no-op when radix is enabled
From: Nicholas Piggin @ 2021-12-01 14:41 UTC (permalink / raw)
  To: linuxppc-dev; +Cc: Nicholas Piggin

The radix test can exclude slb_flush_all_realmode() from being called
because flush_and_reload_slb() is only expected to flush the ERAT when
called by flush_erat(), which only happens on pre-ISA v3.0 CPUs that do
not support radix.

This helps the later change that makes hash support configurable avoid
introducing runtime changes to radix mode behaviour.
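
For reference, the guard used in the diff is the early boot variant of
radix_enabled(), safe to call before feature sections are patched;
roughly this shape (sketched from asm/mmu.h, details may differ):

static inline bool early_radix_enabled(void)
{
	return early_mmu_has_feature(MMU_FTR_TYPE_RADIX);
}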

Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
---
 arch/powerpc/kernel/mce_power.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/arch/powerpc/kernel/mce_power.c b/arch/powerpc/kernel/mce_power.c
index c2f55fe7092d..cf5263b648fc 100644
--- a/arch/powerpc/kernel/mce_power.c
+++ b/arch/powerpc/kernel/mce_power.c
@@ -80,12 +80,12 @@ static bool mce_in_guest(void)
 #ifdef CONFIG_PPC_BOOK3S_64
 void flush_and_reload_slb(void)
 {
-	/* Invalidate all SLBs */
-	slb_flush_all_realmode();
-
 	if (early_radix_enabled())
 		return;
 
+	/* Invalidate all SLBs */
+	slb_flush_all_realmode();
+
 	/*
 	 * This probably shouldn't happen, but it may be possible it's
 	 * called in early boot before SLB shadows are allocated.
-- 
2.23.0



* [PATCH v6 09/18] powerpc/64s: move page size definitions from hash specific file
From: Nicholas Piggin @ 2021-12-01 14:41 UTC (permalink / raw)
  To: linuxppc-dev; +Cc: Nicholas Piggin

The radix code uses some of the psize variables. Move the common
ones from hash_utils.c to pgtable.c.

Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
---
 arch/powerpc/mm/book3s64/hash_utils.c | 5 -----
 arch/powerpc/mm/book3s64/pgtable.c    | 7 +++++++
 2 files changed, 7 insertions(+), 5 deletions(-)

diff --git a/arch/powerpc/mm/book3s64/hash_utils.c b/arch/powerpc/mm/book3s64/hash_utils.c
index 97a36fa3940e..eced266dc5e9 100644
--- a/arch/powerpc/mm/book3s64/hash_utils.c
+++ b/arch/powerpc/mm/book3s64/hash_utils.c
@@ -99,8 +99,6 @@
  */
 
 static unsigned long _SDR1;
-struct mmu_psize_def mmu_psize_defs[MMU_PAGE_COUNT];
-EXPORT_SYMBOL_GPL(mmu_psize_defs);
 
 u8 hpte_page_sizes[1 << LP_BITS];
 EXPORT_SYMBOL_GPL(hpte_page_sizes);
@@ -114,9 +112,6 @@ EXPORT_SYMBOL_GPL(mmu_linear_psize);
 int mmu_virtual_psize = MMU_PAGE_4K;
 int mmu_vmalloc_psize = MMU_PAGE_4K;
 EXPORT_SYMBOL_GPL(mmu_vmalloc_psize);
-#ifdef CONFIG_SPARSEMEM_VMEMMAP
-int mmu_vmemmap_psize = MMU_PAGE_4K;
-#endif
 int mmu_io_psize = MMU_PAGE_4K;
 int mmu_kernel_ssize = MMU_SEGSIZE_256M;
 EXPORT_SYMBOL_GPL(mmu_kernel_ssize);
diff --git a/arch/powerpc/mm/book3s64/pgtable.c b/arch/powerpc/mm/book3s64/pgtable.c
index 9e16c7b1a6c5..81f2e5b670e2 100644
--- a/arch/powerpc/mm/book3s64/pgtable.c
+++ b/arch/powerpc/mm/book3s64/pgtable.c
@@ -22,6 +22,13 @@
 
 #include "internal.h"
 
+struct mmu_psize_def mmu_psize_defs[MMU_PAGE_COUNT];
+EXPORT_SYMBOL_GPL(mmu_psize_defs);
+
+#ifdef CONFIG_SPARSEMEM_VMEMMAP
+int mmu_vmemmap_psize = MMU_PAGE_4K;
+#endif
+
 unsigned long __pmd_frag_nr;
 EXPORT_SYMBOL(__pmd_frag_nr);
 unsigned long __pmd_frag_size_shift;
-- 
2.23.0



* [PATCH v6 10/18] powerpc/64s: Rename hash_hugetlbpage.c to hugetlbpage.c
From: Nicholas Piggin @ 2021-12-01 14:41 UTC (permalink / raw)
  To: linuxppc-dev; +Cc: Nicholas Piggin

This file contains functions and data common to radix, so rename it to
remove the hash_ prefix.

Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
---
 arch/powerpc/mm/book3s64/Makefile                              | 2 +-
 arch/powerpc/mm/book3s64/{hash_hugetlbpage.c => hugetlbpage.c} | 0
 2 files changed, 1 insertion(+), 1 deletion(-)
 rename arch/powerpc/mm/book3s64/{hash_hugetlbpage.c => hugetlbpage.c} (100%)

diff --git a/arch/powerpc/mm/book3s64/Makefile b/arch/powerpc/mm/book3s64/Makefile
index 1579e18e098d..501efadb287f 100644
--- a/arch/powerpc/mm/book3s64/Makefile
+++ b/arch/powerpc/mm/book3s64/Makefile
@@ -10,7 +10,7 @@ obj-$(CONFIG_PPC_HASH_MMU_NATIVE)	+= hash_native.o
 obj-$(CONFIG_PPC_RADIX_MMU)	+= radix_pgtable.o radix_tlb.o
 obj-$(CONFIG_PPC_4K_PAGES)	+= hash_4k.o
 obj-$(CONFIG_PPC_64K_PAGES)	+= hash_64k.o
-obj-$(CONFIG_HUGETLB_PAGE)	+= hash_hugetlbpage.o
+obj-$(CONFIG_HUGETLB_PAGE)	+= hugetlbpage.o
 ifdef CONFIG_HUGETLB_PAGE
 obj-$(CONFIG_PPC_RADIX_MMU)	+= radix_hugetlbpage.o
 endif
diff --git a/arch/powerpc/mm/book3s64/hash_hugetlbpage.c b/arch/powerpc/mm/book3s64/hugetlbpage.c
similarity index 100%
rename from arch/powerpc/mm/book3s64/hash_hugetlbpage.c
rename to arch/powerpc/mm/book3s64/hugetlbpage.c
-- 
2.23.0



* [PATCH v6 11/18] powerpc/64: pcpu setup avoid reading mmu_linear_psize on 64e or radix
From: Nicholas Piggin @ 2021-12-01 14:41 UTC (permalink / raw)
  To: linuxppc-dev; +Cc: Nicholas Piggin

Radix never sets mmu_linear_psize so it's always 4K, which causes pcpu
atom_size to always be PAGE_SIZE. 64e sets it to 1GB always.

Make paths for these platforms to be explicit about what value they set
atom_size to.

Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
---
 arch/powerpc/kernel/setup_64.c | 21 +++++++++++++++------
 1 file changed, 15 insertions(+), 6 deletions(-)

diff --git a/arch/powerpc/kernel/setup_64.c b/arch/powerpc/kernel/setup_64.c
index 6052f5d5ded3..9a493796ce66 100644
--- a/arch/powerpc/kernel/setup_64.c
+++ b/arch/powerpc/kernel/setup_64.c
@@ -880,14 +880,23 @@ void __init setup_per_cpu_areas(void)
 	int rc = -EINVAL;
 
 	/*
-	 * Linear mapping is one of 4K, 1M and 16M.  For 4K, no need
-	 * to group units.  For larger mappings, use 1M atom which
-	 * should be large enough to contain a number of units.
+	 * BookE and BookS radix are historical values and should be revisited.
 	 */
-	if (mmu_linear_psize == MMU_PAGE_4K)
+	if (IS_ENABLED(CONFIG_PPC_BOOK3E)) {
+		atom_size = SZ_1M;
+	} else if (radix_enabled()) {
 		atom_size = PAGE_SIZE;
-	else
-		atom_size = 1 << 20;
+	} else {
+		/*
+		 * Linear mapping is one of 4K, 1M and 16M.  For 4K, no need
+		 * to group units.  For larger mappings, use 1M atom which
+		 * should be large enough to contain a number of units.
+		 */
+		if (mmu_linear_psize == MMU_PAGE_4K)
+			atom_size = PAGE_SIZE;
+		else
+			atom_size = SZ_1M;
+	}
 
 	if (pcpu_chosen_fc != PCPU_FC_PAGE) {
 		rc = pcpu_embed_first_chunk(0, dyn_size, atom_size, pcpu_cpu_distance,
-- 
2.23.0



* [PATCH v6 12/18] powerpc: make memremap_compat_align 64s-only
From: Nicholas Piggin @ 2021-12-01 14:41 UTC (permalink / raw)
  To: linuxppc-dev; +Cc: Nicholas Piggin

memremap_compat_align is only relevant when ZONE_DEVICE is selected.
ZONE_DEVICE depends on ARCH_HAS_PTE_DEVMAP, which is only selected
by PPC_BOOK3S_64.
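
For context, when the arch does not select the option, a generic
version is compiled instead; roughly this shape in mm/memremap.c
(sketched from memory, treat as illustrative):

#ifndef CONFIG_ARCH_HAS_MEMREMAP_COMPAT_ALIGN
unsigned long memremap_compat_align(void)
{
	return SUBSECTION_SIZE;
}
EXPORT_SYMBOL_GPL(memremap_compat_align);
#endif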

Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
---
 arch/powerpc/Kconfig               |  2 +-
 arch/powerpc/mm/book3s64/pgtable.c | 20 ++++++++++++++++++++
 arch/powerpc/mm/ioremap.c          | 20 --------------------
 3 files changed, 21 insertions(+), 21 deletions(-)

diff --git a/arch/powerpc/Kconfig b/arch/powerpc/Kconfig
index dea74d7717c0..27afb64d027c 100644
--- a/arch/powerpc/Kconfig
+++ b/arch/powerpc/Kconfig
@@ -129,7 +129,7 @@ config PPC
 	select ARCH_HAS_KCOV
 	select ARCH_HAS_MEMBARRIER_CALLBACKS
 	select ARCH_HAS_MEMBARRIER_SYNC_CORE
-	select ARCH_HAS_MEMREMAP_COMPAT_ALIGN
+	select ARCH_HAS_MEMREMAP_COMPAT_ALIGN	if PPC_BOOK3S_64
 	select ARCH_HAS_MMIOWB			if PPC64
 	select ARCH_HAS_NON_OVERLAPPING_ADDRESS_SPACE
 	select ARCH_HAS_PHYS_TO_DMA
diff --git a/arch/powerpc/mm/book3s64/pgtable.c b/arch/powerpc/mm/book3s64/pgtable.c
index 81f2e5b670e2..4d97d1525d49 100644
--- a/arch/powerpc/mm/book3s64/pgtable.c
+++ b/arch/powerpc/mm/book3s64/pgtable.c
@@ -533,3 +533,23 @@ static int __init pgtable_debugfs_setup(void)
 	return 0;
 }
 arch_initcall(pgtable_debugfs_setup);
+
+#ifdef CONFIG_ZONE_DEVICE
+/*
+ * Override the generic version in mm/memremap.c.
+ *
+ * With hash translation, the direct-map range is mapped with just one
+ * page size selected by htab_init_page_sizes(). Consult
+ * mmu_psize_defs[] to determine the minimum page size alignment.
+*/
+unsigned long memremap_compat_align(void)
+{
+	if (!radix_enabled()) {
+		unsigned int shift = mmu_psize_defs[mmu_linear_psize].shift;
+		return max(SUBSECTION_SIZE, 1UL << shift);
+	}
+
+	return SUBSECTION_SIZE;
+}
+EXPORT_SYMBOL_GPL(memremap_compat_align);
+#endif
diff --git a/arch/powerpc/mm/ioremap.c b/arch/powerpc/mm/ioremap.c
index 57342154d2b0..4f12504fb405 100644
--- a/arch/powerpc/mm/ioremap.c
+++ b/arch/powerpc/mm/ioremap.c
@@ -98,23 +98,3 @@ void __iomem *do_ioremap(phys_addr_t pa, phys_addr_t offset, unsigned long size,
 
 	return NULL;
 }
-
-#ifdef CONFIG_ZONE_DEVICE
-/*
- * Override the generic version in mm/memremap.c.
- *
- * With hash translation, the direct-map range is mapped with just one
- * page size selected by htab_init_page_sizes(). Consult
- * mmu_psize_defs[] to determine the minimum page size alignment.
-*/
-unsigned long memremap_compat_align(void)
-{
-	unsigned int shift = mmu_psize_defs[mmu_linear_psize].shift;
-
-	if (radix_enabled())
-		return SUBSECTION_SIZE;
-	return max(SUBSECTION_SIZE, 1UL << shift);
-
-}
-EXPORT_SYMBOL_GPL(memremap_compat_align);
-#endif
-- 
2.23.0



* [PATCH v6 13/18] powerpc/64e: remove mmu_linear_psize
From: Nicholas Piggin @ 2021-12-01 14:41 UTC (permalink / raw)
  To: linuxppc-dev; +Cc: Nicholas Piggin

On 64e, mmu_linear_psize is only set once at boot, is not necessarily
the correct size of the linear map pages, and is never used anywhere.
Remove it.

Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
---
 arch/powerpc/include/asm/nohash/mmu-book3e.h | 1 -
 arch/powerpc/mm/nohash/tlb.c                 | 9 ---------
 2 files changed, 10 deletions(-)

diff --git a/arch/powerpc/include/asm/nohash/mmu-book3e.h b/arch/powerpc/include/asm/nohash/mmu-book3e.h
index e43a418d3ccd..787e6482e299 100644
--- a/arch/powerpc/include/asm/nohash/mmu-book3e.h
+++ b/arch/powerpc/include/asm/nohash/mmu-book3e.h
@@ -284,7 +284,6 @@ static inline unsigned int mmu_psize_to_shift(unsigned int mmu_psize)
 #error Unsupported page size
 #endif
 
-extern int mmu_linear_psize;
 extern int mmu_vmemmap_psize;
 
 struct tlb_core_data {
diff --git a/arch/powerpc/mm/nohash/tlb.c b/arch/powerpc/mm/nohash/tlb.c
index 647bf454a0fa..311281063d48 100644
--- a/arch/powerpc/mm/nohash/tlb.c
+++ b/arch/powerpc/mm/nohash/tlb.c
@@ -150,7 +150,6 @@ static inline int mmu_get_tsize(int psize)
  */
 #ifdef CONFIG_PPC64
 
-int mmu_linear_psize;		/* Page size used for the linear mapping */
 int mmu_pte_psize;		/* Page size used for PTE pages */
 int mmu_vmemmap_psize;		/* Page size used for the virtual mem map */
 int book3e_htw_mode;		/* HW tablewalk?  Value is PPC_HTW_* */
@@ -657,14 +656,6 @@ static void early_init_this_mmu(void)
 
 static void __init early_init_mmu_global(void)
 {
-	/* XXX This will have to be decided at runtime, but right
-	 * now our boot and TLB miss code hard wires it. Ideally
-	 * we should find out a suitable page size and patch the
-	 * TLB miss code (either that or use the PACA to store
-	 * the value we want)
-	 */
-	mmu_linear_psize = MMU_PAGE_1G;
-
 	/* XXX This should be decided at runtime based on supported
 	 * page sizes in the TLB, but for now let's assume 16M is
 	 * always there and a good fit (which it probably is)
-- 
2.23.0



* [PATCH v6 14/18] powerpc/64s: Fix radix MMU when MMU_FTR_HPTE_TABLE is clear
From: Nicholas Piggin @ 2021-12-01 14:41 UTC (permalink / raw)
  To: linuxppc-dev; +Cc: Nicholas Piggin

There are a few places that require MMU_FTR_HPTE_TABLE to be set even
when running in radix mode, because the feature bit doubles as a "hash,
not radix" test on the way into hash-specific filtering. Once radix can
boot with MMU_FTR_HPTE_TABLE clear (when hash support is not built in),
those paths must test radix_enabled() explicitly first. Fix those up.

Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
---
 arch/powerpc/mm/pgtable.c | 9 ++++++---
 1 file changed, 6 insertions(+), 3 deletions(-)

diff --git a/arch/powerpc/mm/pgtable.c b/arch/powerpc/mm/pgtable.c
index ce9482383144..abb3198bd277 100644
--- a/arch/powerpc/mm/pgtable.c
+++ b/arch/powerpc/mm/pgtable.c
@@ -81,9 +81,6 @@ static struct page *maybe_pte_to_page(pte_t pte)
 
 static pte_t set_pte_filter_hash(pte_t pte)
 {
-	if (radix_enabled())
-		return pte;
-
 	pte = __pte(pte_val(pte) & ~_PAGE_HPTEFLAGS);
 	if (pte_looks_normal(pte) && !(cpu_has_feature(CPU_FTR_COHERENT_ICACHE) ||
 				       cpu_has_feature(CPU_FTR_NOEXECUTE))) {
@@ -112,6 +109,9 @@ static inline pte_t set_pte_filter(pte_t pte)
 {
 	struct page *pg;
 
+	if (radix_enabled())
+		return pte;
+
 	if (mmu_has_feature(MMU_FTR_HPTE_TABLE))
 		return set_pte_filter_hash(pte);
 
@@ -144,6 +144,9 @@ static pte_t set_access_flags_filter(pte_t pte, struct vm_area_struct *vma,
 {
 	struct page *pg;
 
+	if (IS_ENABLED(CONFIG_PPC_BOOK3S_64))
+		return pte;
+
 	if (mmu_has_feature(MMU_FTR_HPTE_TABLE))
 		return pte;
 
-- 
2.23.0



* [PATCH v6 15/18] powerpc/64s: Always define arch unmapped area calls
From: Nicholas Piggin @ 2021-12-01 14:41 UTC (permalink / raw)
  To: linuxppc-dev; +Cc: Nicholas Piggin

To avoid any functional changes to radix paths when building with hash
MMU support disabled (and CONFIG_PPC_MM_SLICES=n), always define the
arch get_unmapped_area calls on 64s platforms.
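
For context, the mechanism being used (a standard kernel pattern,
sketched from memory, not part of the diff): the generic implementations
in mm/mmap.c are compiled out whenever the arch header defines the
corresponding HAVE_ARCH_* macro, so defining the macros unconditionally
on 64s means the arch functions below are always the ones used:

/* mm/mmap.c, generic fallback (sketch; body elided): */
#ifndef HAVE_ARCH_UNMAPPED_AREA
unsigned long arch_get_unmapped_area(struct file *filp, unsigned long addr,
				     unsigned long len, unsigned long pgoff,
				     unsigned long flags)
{
	/* ... generic bottom-up VMA search elided ... */
	return addr;
}
#endif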

Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
---
 arch/powerpc/include/asm/book3s/64/hash.h |  4 ---
 arch/powerpc/include/asm/book3s/64/mmu.h  |  6 ++++
 arch/powerpc/mm/hugetlbpage.c             | 16 ++++++---
 arch/powerpc/mm/mmap.c                    | 40 +++++++++++++++++++----
 arch/powerpc/mm/slice.c                   | 20 ------------
 5 files changed, 51 insertions(+), 35 deletions(-)

diff --git a/arch/powerpc/include/asm/book3s/64/hash.h b/arch/powerpc/include/asm/book3s/64/hash.h
index 674fe0e890dc..a7a0572f3846 100644
--- a/arch/powerpc/include/asm/book3s/64/hash.h
+++ b/arch/powerpc/include/asm/book3s/64/hash.h
@@ -99,10 +99,6 @@
  * Defines the address of the vmemap area, in its own region on
  * hash table CPUs.
  */
-#ifdef CONFIG_PPC_MM_SLICES
-#define HAVE_ARCH_UNMAPPED_AREA
-#define HAVE_ARCH_UNMAPPED_AREA_TOPDOWN
-#endif /* CONFIG_PPC_MM_SLICES */
 
 /* PTEIDX nibble */
 #define _PTEIDX_SECONDARY	0x8
diff --git a/arch/powerpc/include/asm/book3s/64/mmu.h b/arch/powerpc/include/asm/book3s/64/mmu.h
index c02f42d1031e..015d7d972d16 100644
--- a/arch/powerpc/include/asm/book3s/64/mmu.h
+++ b/arch/powerpc/include/asm/book3s/64/mmu.h
@@ -4,6 +4,12 @@
 
 #include <asm/page.h>
 
+#ifdef CONFIG_HUGETLB_PAGE
+#define HAVE_ARCH_HUGETLB_UNMAPPED_AREA
+#endif
+#define HAVE_ARCH_UNMAPPED_AREA
+#define HAVE_ARCH_UNMAPPED_AREA_TOPDOWN
+
 #ifndef __ASSEMBLY__
 /*
  * Page size definition
diff --git a/arch/powerpc/mm/hugetlbpage.c b/arch/powerpc/mm/hugetlbpage.c
index 82d8b368ca6d..ddead41e2194 100644
--- a/arch/powerpc/mm/hugetlbpage.c
+++ b/arch/powerpc/mm/hugetlbpage.c
@@ -542,20 +542,26 @@ struct page *follow_huge_pd(struct vm_area_struct *vma,
 	return page;
 }
 
-#ifdef CONFIG_PPC_MM_SLICES
+#ifdef HAVE_ARCH_HUGETLB_UNMAPPED_AREA
+static inline int file_to_psize(struct file *file)
+{
+	struct hstate *hstate = hstate_file(file);
+	return shift_to_mmu_psize(huge_page_shift(hstate));
+}
+
 unsigned long hugetlb_get_unmapped_area(struct file *file, unsigned long addr,
 					unsigned long len, unsigned long pgoff,
 					unsigned long flags)
 {
-	struct hstate *hstate = hstate_file(file);
-	int mmu_psize = shift_to_mmu_psize(huge_page_shift(hstate));
-
 #ifdef CONFIG_PPC_RADIX_MMU
 	if (radix_enabled())
 		return radix__hugetlb_get_unmapped_area(file, addr, len,
 						       pgoff, flags);
 #endif
-	return slice_get_unmapped_area(addr, len, flags, mmu_psize, 1);
+#ifdef CONFIG_PPC_MM_SLICES
+	return slice_get_unmapped_area(addr, len, flags, file_to_psize(file), 1);
+#endif
+	BUG();
 }
 #endif
 
diff --git a/arch/powerpc/mm/mmap.c b/arch/powerpc/mm/mmap.c
index ae683fdc716c..c475cf810aa8 100644
--- a/arch/powerpc/mm/mmap.c
+++ b/arch/powerpc/mm/mmap.c
@@ -80,6 +80,7 @@ static inline unsigned long mmap_base(unsigned long rnd,
 	return PAGE_ALIGN(DEFAULT_MAP_WINDOW - gap - rnd);
 }
 
+#ifdef HAVE_ARCH_UNMAPPED_AREA
 #ifdef CONFIG_PPC_RADIX_MMU
 /*
  * Same function as generic code used only for radix, because we don't need to overload
@@ -181,11 +182,42 @@ radix__arch_get_unmapped_area_topdown(struct file *filp,
 	 */
 	return radix__arch_get_unmapped_area(filp, addr0, len, pgoff, flags);
 }
+#endif
+
+unsigned long arch_get_unmapped_area(struct file *filp,
+				     unsigned long addr,
+				     unsigned long len,
+				     unsigned long pgoff,
+				     unsigned long flags)
+{
+#ifdef CONFIG_PPC_MM_SLICES
+	return slice_get_unmapped_area(addr, len, flags,
+				       mm_ctx_user_psize(&current->mm->context), 0);
+#else
+	BUG();
+#endif
+}
+
+unsigned long arch_get_unmapped_area_topdown(struct file *filp,
+					     const unsigned long addr0,
+					     const unsigned long len,
+					     const unsigned long pgoff,
+					     const unsigned long flags)
+{
+#ifdef CONFIG_PPC_MM_SLICES
+	return slice_get_unmapped_area(addr0, len, flags,
+				       mm_ctx_user_psize(&current->mm->context), 1);
+#else
+	BUG();
+#endif
+}
+#endif /* HAVE_ARCH_UNMAPPED_AREA */
 
 static void radix__arch_pick_mmap_layout(struct mm_struct *mm,
 					unsigned long random_factor,
 					struct rlimit *rlim_stack)
 {
+#ifdef CONFIG_PPC_RADIX_MMU
 	if (mmap_is_legacy(rlim_stack)) {
 		mm->mmap_base = TASK_UNMAPPED_BASE;
 		mm->get_unmapped_area = radix__arch_get_unmapped_area;
@@ -193,13 +225,9 @@ static void radix__arch_pick_mmap_layout(struct mm_struct *mm,
 		mm->mmap_base = mmap_base(random_factor, rlim_stack);
 		mm->get_unmapped_area = radix__arch_get_unmapped_area_topdown;
 	}
-}
-#else
-/* dummy */
-extern void radix__arch_pick_mmap_layout(struct mm_struct *mm,
-					unsigned long random_factor,
-					struct rlimit *rlim_stack);
 #endif
+}
+
 /*
  * This function, called very early during the creation of a new
  * process VM image, sets up which VM layout function to use:
diff --git a/arch/powerpc/mm/slice.c b/arch/powerpc/mm/slice.c
index 82b45b1cb973..f42711f865f3 100644
--- a/arch/powerpc/mm/slice.c
+++ b/arch/powerpc/mm/slice.c
@@ -639,26 +639,6 @@ unsigned long slice_get_unmapped_area(unsigned long addr, unsigned long len,
 }
 EXPORT_SYMBOL_GPL(slice_get_unmapped_area);
 
-unsigned long arch_get_unmapped_area(struct file *filp,
-				     unsigned long addr,
-				     unsigned long len,
-				     unsigned long pgoff,
-				     unsigned long flags)
-{
-	return slice_get_unmapped_area(addr, len, flags,
-				       mm_ctx_user_psize(&current->mm->context), 0);
-}
-
-unsigned long arch_get_unmapped_area_topdown(struct file *filp,
-					     const unsigned long addr0,
-					     const unsigned long len,
-					     const unsigned long pgoff,
-					     const unsigned long flags)
-{
-	return slice_get_unmapped_area(addr0, len, flags,
-				       mm_ctx_user_psize(&current->mm->context), 1);
-}
-
 unsigned int notrace get_slice_psize(struct mm_struct *mm, unsigned long addr)
 {
 	unsigned char *psizes;
-- 
2.23.0



* [PATCH v6 16/18] powerpc/64s: Make hash MMU support configurable
  2021-12-01 14:41 [PATCH v6 00/18] powerpc: Make hash MMU code build configurable Nicholas Piggin
                   ` (14 preceding siblings ...)
  2021-12-01 14:41 ` [PATCH v6 15/18] powerpc/64s: Always define arch unmapped area calls Nicholas Piggin
@ 2021-12-01 14:41 ` Nicholas Piggin
  2021-12-01 14:41 ` [PATCH v6 17/18] powerpc/64s: Move hash MMU support code under CONFIG_PPC_64S_HASH_MMU Nicholas Piggin
                   ` (2 subsequent siblings)
  18 siblings, 0 replies; 31+ messages in thread
From: Nicholas Piggin @ 2021-12-01 14:41 UTC (permalink / raw)
  To: linuxppc-dev; +Cc: Nicholas Piggin

This adds a Kconfig option that allows 64s hash MMU support to be
disabled. It can only be disabled if radix support is enabled, the
minimum supported CPU type is POWER9 (or higher), and KVM is not
selected.
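
As an illustrative example (a hand-assembled fragment, not copied from
any defconfig in this series), a radix-only configuration would look
roughly like:

  CONFIG_PPC64=y
  CONFIG_POWER9_CPU=y
  CONFIG_PPC_RADIX_MMU=y
  # CONFIG_PPC_64S_HASH_MMU is not set
  # KVM must remain off: KVM_BOOK3S_64 selects PPC_64S_HASH_MMU

The microwatt defconfig change in the final patch uses this shape.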

Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
---
 arch/powerpc/Kconfig                     |  3 ++-
 arch/powerpc/include/asm/mmu.h           | 16 +++++++++++---
 arch/powerpc/kernel/dt_cpu_ftrs.c        | 14 ++++++++----
 arch/powerpc/kvm/Kconfig                 |  1 +
 arch/powerpc/mm/init_64.c                | 13 +++++++++--
 arch/powerpc/platforms/Kconfig.cputype   | 28 ++++++++++++++++++++++--
 arch/powerpc/platforms/cell/Kconfig      |  1 +
 arch/powerpc/platforms/maple/Kconfig     |  1 +
 arch/powerpc/platforms/microwatt/Kconfig |  2 +-
 arch/powerpc/platforms/pasemi/Kconfig    |  1 +
 arch/powerpc/platforms/powermac/Kconfig  |  1 +
 arch/powerpc/platforms/powernv/Kconfig   |  2 +-
 drivers/misc/cxl/Kconfig                 |  1 +
 drivers/misc/lkdtm/Makefile              |  2 +-
 14 files changed, 71 insertions(+), 15 deletions(-)

diff --git a/arch/powerpc/Kconfig b/arch/powerpc/Kconfig
index 27afb64d027c..1fa336ec8faf 100644
--- a/arch/powerpc/Kconfig
+++ b/arch/powerpc/Kconfig
@@ -845,7 +845,7 @@ config FORCE_MAX_ZONEORDER
 config PPC_SUBPAGE_PROT
 	bool "Support setting protections for 4k subpages (subpage_prot syscall)"
 	default n
-	depends on PPC_BOOK3S_64 && PPC_64K_PAGES
+	depends on PPC_64S_HASH_MMU && PPC_64K_PAGES
 	help
 	  This option adds support for system call to allow user programs
 	  to set access permissions (read/write, readonly, or no access)
@@ -943,6 +943,7 @@ config PPC_MEM_KEYS
 	prompt "PowerPC Memory Protection Keys"
 	def_bool y
 	depends on PPC_BOOK3S_64
+	depends on PPC_64S_HASH_MMU
 	select ARCH_USES_HIGH_VMA_FLAGS
 	select ARCH_HAS_PKEYS
 	help
diff --git a/arch/powerpc/include/asm/mmu.h b/arch/powerpc/include/asm/mmu.h
index 8abe8e42e045..5f41565a1e5d 100644
--- a/arch/powerpc/include/asm/mmu.h
+++ b/arch/powerpc/include/asm/mmu.h
@@ -157,7 +157,7 @@ DECLARE_PER_CPU(int, next_tlbcam_idx);
 
 enum {
 	MMU_FTRS_POSSIBLE =
-#if defined(CONFIG_PPC_BOOK3S_64) || defined(CONFIG_PPC_BOOK3S_604)
+#if defined(CONFIG_PPC_BOOK3S_604)
 		MMU_FTR_HPTE_TABLE |
 #endif
 #ifdef CONFIG_PPC_8xx
@@ -184,15 +184,18 @@ enum {
 		MMU_FTR_USE_TLBRSRV | MMU_FTR_USE_PAIRED_MAS |
 #endif
 #ifdef CONFIG_PPC_BOOK3S_64
+		MMU_FTR_KERNEL_RO |
+#ifdef CONFIG_PPC_64S_HASH_MMU
 		MMU_FTR_NO_SLBIE_B | MMU_FTR_16M_PAGE | MMU_FTR_TLBIEL |
 		MMU_FTR_LOCKLESS_TLBIE | MMU_FTR_CI_LARGE_PAGE |
 		MMU_FTR_1T_SEGMENT | MMU_FTR_TLBIE_CROP_VA |
-		MMU_FTR_KERNEL_RO | MMU_FTR_68_BIT_VA |
+		MMU_FTR_68_BIT_VA | MMU_FTR_HPTE_TABLE |
 #endif
 #ifdef CONFIG_PPC_RADIX_MMU
 		MMU_FTR_TYPE_RADIX |
 		MMU_FTR_GTSE |
 #endif /* CONFIG_PPC_RADIX_MMU */
+#endif
 #ifdef CONFIG_PPC_KUAP
 	MMU_FTR_BOOK3S_KUAP |
 #endif /* CONFIG_PPC_KUAP */
@@ -224,6 +227,13 @@ enum {
 #define MMU_FTRS_ALWAYS		MMU_FTR_TYPE_FSL_E
 #endif
 
+/* BOOK3S_64 options */
+#if defined(CONFIG_PPC_RADIX_MMU) && !defined(CONFIG_PPC_64S_HASH_MMU)
+#define MMU_FTRS_ALWAYS		MMU_FTR_TYPE_RADIX
+#elif !defined(CONFIG_PPC_RADIX_MMU) && defined(CONFIG_PPC_64S_HASH_MMU)
+#define MMU_FTRS_ALWAYS		MMU_FTR_HPTE_TABLE
+#endif
+
 #ifndef MMU_FTRS_ALWAYS
 #define MMU_FTRS_ALWAYS		0
 #endif
@@ -329,7 +339,7 @@ static __always_inline bool radix_enabled(void)
 	return mmu_has_feature(MMU_FTR_TYPE_RADIX);
 }
 
-static inline bool early_radix_enabled(void)
+static __always_inline bool early_radix_enabled(void)
 {
 	return early_mmu_has_feature(MMU_FTR_TYPE_RADIX);
 }
diff --git a/arch/powerpc/kernel/dt_cpu_ftrs.c b/arch/powerpc/kernel/dt_cpu_ftrs.c
index d2b35fb9181d..1ac8d7357195 100644
--- a/arch/powerpc/kernel/dt_cpu_ftrs.c
+++ b/arch/powerpc/kernel/dt_cpu_ftrs.c
@@ -273,6 +273,9 @@ static int __init feat_enable_mmu_hash(struct dt_cpu_feature *f)
 {
 	u64 lpcr;
 
+	if (!IS_ENABLED(CONFIG_PPC_64S_HASH_MMU))
+		return 0;
+
 	lpcr = mfspr(SPRN_LPCR);
 	lpcr &= ~LPCR_ISL;
 
@@ -292,6 +295,9 @@ static int __init feat_enable_mmu_hash_v3(struct dt_cpu_feature *f)
 {
 	u64 lpcr;
 
+	if (!IS_ENABLED(CONFIG_PPC_64S_HASH_MMU))
+		return 0;
+
 	lpcr = mfspr(SPRN_LPCR);
 	lpcr &= ~(LPCR_ISL | LPCR_UPRT | LPCR_HR);
 	mtspr(SPRN_LPCR, lpcr);
@@ -305,15 +311,15 @@ static int __init feat_enable_mmu_hash_v3(struct dt_cpu_feature *f)
 
 static int __init feat_enable_mmu_radix(struct dt_cpu_feature *f)
 {
-#ifdef CONFIG_PPC_RADIX_MMU
+	if (!IS_ENABLED(CONFIG_PPC_RADIX_MMU))
+		return 0;
+
+	cur_cpu_spec->mmu_features |= MMU_FTR_KERNEL_RO;
 	cur_cpu_spec->mmu_features |= MMU_FTR_TYPE_RADIX;
-	cur_cpu_spec->mmu_features |= MMU_FTRS_HASH_BASE;
 	cur_cpu_spec->mmu_features |= MMU_FTR_GTSE;
 	cur_cpu_spec->cpu_user_features |= PPC_FEATURE_HAS_MMU;
 
 	return 1;
-#endif
-	return 0;
 }
 
 static int __init feat_enable_dscr(struct dt_cpu_feature *f)
diff --git a/arch/powerpc/kvm/Kconfig b/arch/powerpc/kvm/Kconfig
index 6a58532300c5..f947b77386a9 100644
--- a/arch/powerpc/kvm/Kconfig
+++ b/arch/powerpc/kvm/Kconfig
@@ -69,6 +69,7 @@ config KVM_BOOK3S_64
 	select KVM_BOOK3S_64_HANDLER
 	select KVM
 	select KVM_BOOK3S_PR_POSSIBLE if !KVM_BOOK3S_HV_POSSIBLE
+	select PPC_64S_HASH_MMU
 	select SPAPR_TCE_IOMMU if IOMMU_SUPPORT && (PPC_PSERIES || PPC_POWERNV)
 	help
 	  Support running unmodified book3s_64 and book3s_32 guest kernels
diff --git a/arch/powerpc/mm/init_64.c b/arch/powerpc/mm/init_64.c
index 386be136026e..e6876e702d8f 100644
--- a/arch/powerpc/mm/init_64.c
+++ b/arch/powerpc/mm/init_64.c
@@ -440,8 +440,12 @@ static void __init early_check_vec5(void)
 void __init mmu_early_init_devtree(void)
 {
 	/* Disable radix mode based on kernel command line. */
-	if (disable_radix)
-		cur_cpu_spec->mmu_features &= ~MMU_FTR_TYPE_RADIX;
+	if (disable_radix) {
+		if (IS_ENABLED(CONFIG_PPC_64S_HASH_MMU))
+			cur_cpu_spec->mmu_features &= ~MMU_FTR_TYPE_RADIX;
+		else
+			pr_warn("WARNING: Ignoring cmdline option disable_radix\n");
+	}
 
 	/*
 	 * Check /chosen/ibm,architecture-vec-5 if running as a guest.
@@ -454,6 +458,7 @@ void __init mmu_early_init_devtree(void)
 
 	if (early_radix_enabled()) {
 		radix__early_init_devtree();
+
 		/*
 		 * We have finalized the translation we are going to use by now.
 		 * Radix mode is not limited by RMA / VRMA addressing.
@@ -463,5 +468,9 @@ void __init mmu_early_init_devtree(void)
 		memblock_set_current_limit(MEMBLOCK_ALLOC_ANYWHERE);
 	} else
 		hash__early_init_devtree();
+
+	if (!(cur_cpu_spec->mmu_features & MMU_FTR_HPTE_TABLE) &&
+	    !(cur_cpu_spec->mmu_features & MMU_FTR_TYPE_RADIX))
+		panic("kernel does not support any MMU type offered by platform");
 }
 #endif /* CONFIG_PPC_BOOK3S_64 */
diff --git a/arch/powerpc/platforms/Kconfig.cputype b/arch/powerpc/platforms/Kconfig.cputype
index a208997ade88..7ca07df1c374 100644
--- a/arch/powerpc/platforms/Kconfig.cputype
+++ b/arch/powerpc/platforms/Kconfig.cputype
@@ -105,9 +105,9 @@ config PPC_BOOK3S_64
 	select HAVE_MOVE_PMD
 	select HAVE_MOVE_PUD
 	select IRQ_WORK
-	select PPC_MM_SLICES
 	select PPC_HAVE_KUEP
 	select PPC_HAVE_KUAP
+	select PPC_64S_HASH_MMU if !PPC_RADIX_MMU
 
 config PPC_BOOK3E_64
 	bool "Embedded processors"
@@ -130,11 +130,13 @@ choice
 config GENERIC_CPU
 	bool "Generic (POWER4 and above)"
 	depends on PPC64 && !CPU_LITTLE_ENDIAN
+	select PPC_64S_HASH_MMU if PPC_BOOK3S_64
 
 config GENERIC_CPU
 	bool "Generic (POWER8 and above)"
 	depends on PPC64 && CPU_LITTLE_ENDIAN
 	select ARCH_HAS_FAST_MULTIPLIER
+	select PPC_64S_HASH_MMU
 
 config GENERIC_CPU
 	bool "Generic 32 bits powerpc"
@@ -143,24 +145,29 @@ config GENERIC_CPU
 config CELL_CPU
 	bool "Cell Broadband Engine"
 	depends on PPC_BOOK3S_64 && !CPU_LITTLE_ENDIAN
+	select PPC_64S_HASH_MMU
 
 config POWER5_CPU
 	bool "POWER5"
 	depends on PPC_BOOK3S_64 && !CPU_LITTLE_ENDIAN
+	select PPC_64S_HASH_MMU
 
 config POWER6_CPU
 	bool "POWER6"
 	depends on PPC_BOOK3S_64 && !CPU_LITTLE_ENDIAN
+	select PPC_64S_HASH_MMU
 
 config POWER7_CPU
 	bool "POWER7"
 	depends on PPC_BOOK3S_64
 	select ARCH_HAS_FAST_MULTIPLIER
+	select PPC_64S_HASH_MMU
 
 config POWER8_CPU
 	bool "POWER8"
 	depends on PPC_BOOK3S_64
 	select ARCH_HAS_FAST_MULTIPLIER
+	select PPC_64S_HASH_MMU
 
 config POWER9_CPU
 	bool "POWER9"
@@ -364,6 +371,22 @@ config SPE
 
 	  If in doubt, say Y here.
 
+config PPC_64S_HASH_MMU
+	bool "Hash MMU Support"
+	depends on PPC_BOOK3S_64
+	select PPC_MM_SLICES
+	default y
+	help
+	  Enable support for the Power ISA Hash style MMU. This is implemented
+	  by all IBM Power and other 64-bit Book3S CPUs before ISA v3.0. The
+	  OpenPOWER ISA does not mandate the hash MMU and some CPUs do not
+	  implement it (e.g., Microwatt).
+
+	  Note that POWER9 PowerVM platforms only support the hash
+	  MMU. From POWER10 radix is also supported by PowerVM.
+
+	  If you're unsure, say Y.
+
 config PPC_RADIX_MMU
 	bool "Radix MMU Support"
 	depends on PPC_BOOK3S_64
@@ -375,7 +398,8 @@ config PPC_RADIX_MMU
 	  you can probably disable this.
 
 config PPC_RADIX_MMU_DEFAULT
-	bool "Default to using the Radix MMU when possible"
+	bool "Default to using the Radix MMU when possible" if PPC_64S_HASH_MMU
+	depends on PPC_BOOK3S_64
 	depends on PPC_RADIX_MMU
 	default y
 	help
diff --git a/arch/powerpc/platforms/cell/Kconfig b/arch/powerpc/platforms/cell/Kconfig
index db4465c51b56..34669b060f36 100644
--- a/arch/powerpc/platforms/cell/Kconfig
+++ b/arch/powerpc/platforms/cell/Kconfig
@@ -1,5 +1,6 @@
 # SPDX-License-Identifier: GPL-2.0
 config PPC_CELL
+	select PPC_64S_HASH_MMU if PPC64
 	bool
 
 config PPC_CELL_COMMON
diff --git a/arch/powerpc/platforms/maple/Kconfig b/arch/powerpc/platforms/maple/Kconfig
index 7fd84311ade5..4c058cc57c90 100644
--- a/arch/powerpc/platforms/maple/Kconfig
+++ b/arch/powerpc/platforms/maple/Kconfig
@@ -9,6 +9,7 @@ config PPC_MAPLE
 	select GENERIC_TBSYNC
 	select PPC_UDBG_16550
 	select PPC_970_NAP
+	select PPC_64S_HASH_MMU
 	select PPC_HASH_MMU_NATIVE
 	select PPC_RTAS
 	select MMIO_NVRAM
diff --git a/arch/powerpc/platforms/microwatt/Kconfig b/arch/powerpc/platforms/microwatt/Kconfig
index 62b51e37fc05..823192e9d38a 100644
--- a/arch/powerpc/platforms/microwatt/Kconfig
+++ b/arch/powerpc/platforms/microwatt/Kconfig
@@ -5,7 +5,7 @@ config PPC_MICROWATT
 	select PPC_XICS
 	select PPC_ICS_NATIVE
 	select PPC_ICP_NATIVE
-	select PPC_HASH_MMU_NATIVE
+	select PPC_HASH_MMU_NATIVE if PPC_64S_HASH_MMU
 	select PPC_UDBG_16550
 	select ARCH_RANDOM
 	help
diff --git a/arch/powerpc/platforms/pasemi/Kconfig b/arch/powerpc/platforms/pasemi/Kconfig
index bc7137353a7f..85ae18ddd911 100644
--- a/arch/powerpc/platforms/pasemi/Kconfig
+++ b/arch/powerpc/platforms/pasemi/Kconfig
@@ -5,6 +5,7 @@ config PPC_PASEMI
 	select MPIC
 	select FORCE_PCI
 	select PPC_UDBG_16550
+	select PPC_64S_HASH_MMU
 	select PPC_HASH_MMU_NATIVE
 	select MPIC_BROKEN_REGREAD
 	help
diff --git a/arch/powerpc/platforms/powermac/Kconfig b/arch/powerpc/platforms/powermac/Kconfig
index 2b56df145b82..130707ec9f99 100644
--- a/arch/powerpc/platforms/powermac/Kconfig
+++ b/arch/powerpc/platforms/powermac/Kconfig
@@ -6,6 +6,7 @@ config PPC_PMAC
 	select FORCE_PCI
 	select PPC_INDIRECT_PCI if PPC32
 	select PPC_MPC106 if PPC32
+	select PPC_64S_HASH_MMU if PPC64
 	select PPC_HASH_MMU_NATIVE
 	select ZONE_DMA if PPC32
 	default y
diff --git a/arch/powerpc/platforms/powernv/Kconfig b/arch/powerpc/platforms/powernv/Kconfig
index cd754e116184..161dfe024085 100644
--- a/arch/powerpc/platforms/powernv/Kconfig
+++ b/arch/powerpc/platforms/powernv/Kconfig
@@ -2,7 +2,7 @@
 config PPC_POWERNV
 	depends on PPC64 && PPC_BOOK3S
 	bool "IBM PowerNV (Non-Virtualized) platform support"
-	select PPC_HASH_MMU_NATIVE
+	select PPC_HASH_MMU_NATIVE if PPC_64S_HASH_MMU
 	select PPC_XICS
 	select PPC_ICP_NATIVE
 	select PPC_XIVE_NATIVE
diff --git a/drivers/misc/cxl/Kconfig b/drivers/misc/cxl/Kconfig
index 51aecafdcbdf..5efc4151bf58 100644
--- a/drivers/misc/cxl/Kconfig
+++ b/drivers/misc/cxl/Kconfig
@@ -6,6 +6,7 @@
 config CXL_BASE
 	bool
 	select PPC_COPRO_BASE
+	select PPC_64S_HASH_MMU
 
 config CXL
 	tristate "Support for IBM Coherent Accelerators (CXL)"
diff --git a/drivers/misc/lkdtm/Makefile b/drivers/misc/lkdtm/Makefile
index aa12097668d3..83a7baf5df82 100644
--- a/drivers/misc/lkdtm/Makefile
+++ b/drivers/misc/lkdtm/Makefile
@@ -11,7 +11,7 @@ lkdtm-$(CONFIG_LKDTM)		+= usercopy.o
 lkdtm-$(CONFIG_LKDTM)		+= stackleak.o
 lkdtm-$(CONFIG_LKDTM)		+= cfi.o
 lkdtm-$(CONFIG_LKDTM)		+= fortify.o
-lkdtm-$(CONFIG_PPC_BOOK3S_64)	+= powerpc.o
+lkdtm-$(CONFIG_PPC_64S_HASH_MMU)	+= powerpc.o
 
 KASAN_SANITIZE_rodata.o		:= n
 KASAN_SANITIZE_stackleak.o	:= n
-- 
2.23.0



* [PATCH v6 17/18] powerpc/64s: Move hash MMU support code under CONFIG_PPC_64S_HASH_MMU
  2021-12-01 14:41 [PATCH v6 00/18] powerpc: Make hash MMU code build configurable Nicholas Piggin
                   ` (15 preceding siblings ...)
  2021-12-01 14:41 ` [PATCH v6 16/18] powerpc/64s: Make hash MMU support configurable Nicholas Piggin
@ 2021-12-01 14:41 ` Nicholas Piggin
  2021-12-02 13:30   ` Fabiano Rosas
                     ` (3 more replies)
  2021-12-01 14:41 ` [PATCH v6 18/18] powerpc/microwatt: add POWER9_CPU, clear PPC_64S_HASH_MMU Nicholas Piggin
  2021-12-15  0:24 ` [PATCH v6 00/18] powerpc: Make hash MMU code build configurable Michael Ellerman
  18 siblings, 4 replies; 31+ messages in thread
From: Nicholas Piggin @ 2021-12-01 14:41 UTC (permalink / raw)
  To: linuxppc-dev; +Cc: Nicholas Piggin

Compiling out hash support code when CONFIG_PPC_64S_HASH_MMU=n saves
128kB kernel image size (90kB text) on powernv_defconfig minus KVM,
350kB on pseries_defconfig minus KVM, 40kB on a tiny config.
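
Much of this works by MMU feature constant folding rather than #ifdef
alone: with CONFIG_PPC_64S_HASH_MMU=n, MMU_FTR_TYPE_RADIX is part of
MMU_FTRS_ALWAYS (see the previous patch), so the feature test folds to
constant true at compile time and hash-only branches become provably
dead. A simplified sketch of the test (the real one in asm/mmu.h uses a
static key for the runtime case):

  static __always_inline bool mmu_has_feature(unsigned long feature)
  {
          if (!(MMU_FTRS_POSSIBLE & feature))
                  return false;   /* constant false: code compiled out */
          if (MMU_FTRS_ALWAYS & feature)
                  return true;    /* constant true: !radix paths are dead */
          return early_mmu_has_feature(feature); /* simplified fallback */
  }

This is why the !hash branches below can contain BUILD_BUG(): the build
only succeeds if the compiler proves radix_enabled() constant true and
eliminates them.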

Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
---
 arch/powerpc/Kconfig                          |  2 +-
 arch/powerpc/include/asm/book3s/64/mmu.h      | 21 ++++++++++--
 .../include/asm/book3s/64/tlbflush-hash.h     |  6 ++++
 arch/powerpc/include/asm/book3s/pgtable.h     |  4 +++
 arch/powerpc/include/asm/mmu_context.h        |  2 ++
 arch/powerpc/include/asm/paca.h               |  8 +++++
 arch/powerpc/kernel/asm-offsets.c             |  2 ++
 arch/powerpc/kernel/entry_64.S                |  4 +--
 arch/powerpc/kernel/exceptions-64s.S          | 16 ++++++++++
 arch/powerpc/kernel/mce.c                     |  2 +-
 arch/powerpc/kernel/mce_power.c               | 10 ++++--
 arch/powerpc/kernel/paca.c                    | 18 ++++-------
 arch/powerpc/kernel/process.c                 | 13 ++++----
 arch/powerpc/kernel/prom.c                    |  2 ++
 arch/powerpc/kernel/setup_64.c                |  5 +++
 arch/powerpc/kexec/core_64.c                  |  4 +--
 arch/powerpc/kexec/ranges.c                   |  4 +++
 arch/powerpc/mm/book3s64/Makefile             | 15 +++++----
 arch/powerpc/mm/book3s64/hugetlbpage.c        |  2 ++
 arch/powerpc/mm/book3s64/mmu_context.c        | 32 +++++++++++++++----
 arch/powerpc/mm/book3s64/pgtable.c            |  2 +-
 arch/powerpc/mm/book3s64/radix_pgtable.c      |  4 +++
 arch/powerpc/mm/copro_fault.c                 |  2 ++
 arch/powerpc/mm/ptdump/Makefile               |  2 +-
 arch/powerpc/platforms/powernv/idle.c         |  2 ++
 arch/powerpc/platforms/powernv/setup.c        |  2 ++
 arch/powerpc/platforms/pseries/lpar.c         | 11 +++++--
 arch/powerpc/platforms/pseries/lparcfg.c      |  2 +-
 arch/powerpc/platforms/pseries/mobility.c     |  6 ++++
 arch/powerpc/platforms/pseries/ras.c          |  2 ++
 arch/powerpc/platforms/pseries/reconfig.c     |  2 ++
 arch/powerpc/platforms/pseries/setup.c        |  6 ++--
 arch/powerpc/xmon/xmon.c                      |  8 +++--
 drivers/misc/lkdtm/core.c                     |  2 +-
 34 files changed, 173 insertions(+), 52 deletions(-)

diff --git a/arch/powerpc/Kconfig b/arch/powerpc/Kconfig
index 1fa336ec8faf..fb48823ccd62 100644
--- a/arch/powerpc/Kconfig
+++ b/arch/powerpc/Kconfig
@@ -129,7 +129,7 @@ config PPC
 	select ARCH_HAS_KCOV
 	select ARCH_HAS_MEMBARRIER_CALLBACKS
 	select ARCH_HAS_MEMBARRIER_SYNC_CORE
-	select ARCH_HAS_MEMREMAP_COMPAT_ALIGN	if PPC_BOOK3S_64
+	select ARCH_HAS_MEMREMAP_COMPAT_ALIGN	if PPC_64S_HASH_MMU
 	select ARCH_HAS_MMIOWB			if PPC64
 	select ARCH_HAS_NON_OVERLAPPING_ADDRESS_SPACE
 	select ARCH_HAS_PHYS_TO_DMA
diff --git a/arch/powerpc/include/asm/book3s/64/mmu.h b/arch/powerpc/include/asm/book3s/64/mmu.h
index 015d7d972d16..c480d21a146c 100644
--- a/arch/powerpc/include/asm/book3s/64/mmu.h
+++ b/arch/powerpc/include/asm/book3s/64/mmu.h
@@ -104,7 +104,9 @@ typedef struct {
 		 * from EA and new context ids to build the new VAs.
 		 */
 		mm_context_id_t id;
+#ifdef CONFIG_PPC_64S_HASH_MMU
 		mm_context_id_t extended_id[TASK_SIZE_USER64/TASK_CONTEXT_SIZE];
+#endif
 	};
 
 	/* Number of bits in the mm_cpumask */
@@ -116,7 +118,9 @@ typedef struct {
 	/* Number of user space windows opened in process mm_context */
 	atomic_t vas_windows;
 
+#ifdef CONFIG_PPC_64S_HASH_MMU
 	struct hash_mm_context *hash_context;
+#endif
 
 	void __user *vdso;
 	/*
@@ -139,6 +143,7 @@ typedef struct {
 #endif
 } mm_context_t;
 
+#ifdef CONFIG_PPC_64S_HASH_MMU
 static inline u16 mm_ctx_user_psize(mm_context_t *ctx)
 {
 	return ctx->hash_context->user_psize;
@@ -199,8 +204,15 @@ static inline struct subpage_prot_table *mm_ctx_subpage_prot(mm_context_t *ctx)
 extern int mmu_linear_psize;
 extern int mmu_virtual_psize;
 extern int mmu_vmalloc_psize;
-extern int mmu_vmemmap_psize;
 extern int mmu_io_psize;
+#else /* CONFIG_PPC_64S_HASH_MMU */
+#ifdef CONFIG_PPC_64K_PAGES
+#define mmu_virtual_psize MMU_PAGE_64K
+#else
+#define mmu_virtual_psize MMU_PAGE_4K
+#endif
+#endif
+extern int mmu_vmemmap_psize;
 
 /* MMU initialization */
 void mmu_early_init_devtree(void);
@@ -239,8 +251,9 @@ static inline void setup_initial_memory_limit(phys_addr_t first_memblock_base,
 	 * know which translations we will pick. Hence go with hash
 	 * restrictions.
 	 */
-	return hash__setup_initial_memory_limit(first_memblock_base,
-					   first_memblock_size);
+	if (!radix_enabled())
+		hash__setup_initial_memory_limit(first_memblock_base,
+						 first_memblock_size);
 }
 
 #ifdef CONFIG_PPC_PSERIES
@@ -261,6 +274,7 @@ static inline void radix_init_pseries(void) { }
 void cleanup_cpu_mmu_context(void);
 #endif
 
+#ifdef CONFIG_PPC_64S_HASH_MMU
 static inline int get_user_context(mm_context_t *ctx, unsigned long ea)
 {
 	int index = ea >> MAX_EA_BITS_PER_CONTEXT;
@@ -280,6 +294,7 @@ static inline unsigned long get_user_vsid(mm_context_t *ctx,
 
 	return get_vsid(context, ea, ssize);
 }
+#endif
 
 #endif /* __ASSEMBLY__ */
 #endif /* _ASM_POWERPC_BOOK3S_64_MMU_H_ */
diff --git a/arch/powerpc/include/asm/book3s/64/tlbflush-hash.h b/arch/powerpc/include/asm/book3s/64/tlbflush-hash.h
index 3b95769739c7..8b762f282190 100644
--- a/arch/powerpc/include/asm/book3s/64/tlbflush-hash.h
+++ b/arch/powerpc/include/asm/book3s/64/tlbflush-hash.h
@@ -112,8 +112,14 @@ static inline void hash__flush_tlb_kernel_range(unsigned long start,
 
 struct mmu_gather;
 extern void hash__tlb_flush(struct mmu_gather *tlb);
+void flush_tlb_pmd_range(struct mm_struct *mm, pmd_t *pmd, unsigned long addr);
+
+#ifdef CONFIG_PPC_64S_HASH_MMU
 /* Private function for use by PCI IO mapping code */
 extern void __flush_hash_table_range(unsigned long start, unsigned long end);
 extern void flush_tlb_pmd_range(struct mm_struct *mm, pmd_t *pmd,
 				unsigned long addr);
+#else
+static inline void __flush_hash_table_range(unsigned long start, unsigned long end) { }
+#endif
 #endif /*  _ASM_POWERPC_BOOK3S_64_TLBFLUSH_HASH_H */
diff --git a/arch/powerpc/include/asm/book3s/pgtable.h b/arch/powerpc/include/asm/book3s/pgtable.h
index ad130e15a126..e8269434ecbe 100644
--- a/arch/powerpc/include/asm/book3s/pgtable.h
+++ b/arch/powerpc/include/asm/book3s/pgtable.h
@@ -25,6 +25,7 @@ extern pgprot_t phys_mem_access_prot(struct file *file, unsigned long pfn,
 				     unsigned long size, pgprot_t vma_prot);
 #define __HAVE_PHYS_MEM_ACCESS_PROT
 
+#if defined(CONFIG_PPC32) || defined(CONFIG_PPC_64S_HASH_MMU)
 /*
  * This gets called at the end of handling a page fault, when
  * the kernel has put a new PTE into the page table for the process.
@@ -35,6 +36,9 @@ extern pgprot_t phys_mem_access_prot(struct file *file, unsigned long pfn,
  * waiting for the inevitable extra hash-table miss exception.
  */
 void update_mmu_cache(struct vm_area_struct *vma, unsigned long address, pte_t *ptep);
+#else
+static inline void update_mmu_cache(struct vm_area_struct *vma, unsigned long address, pte_t *ptep) {}
+#endif
 
 #endif /* __ASSEMBLY__ */
 #endif
diff --git a/arch/powerpc/include/asm/mmu_context.h b/arch/powerpc/include/asm/mmu_context.h
index 9ba6b585337f..e46394d27785 100644
--- a/arch/powerpc/include/asm/mmu_context.h
+++ b/arch/powerpc/include/asm/mmu_context.h
@@ -75,6 +75,7 @@ extern void hash__reserve_context_id(int id);
 extern void __destroy_context(int context_id);
 static inline void mmu_context_init(void) { }
 
+#ifdef CONFIG_PPC_64S_HASH_MMU
 static inline int alloc_extended_context(struct mm_struct *mm,
 					 unsigned long ea)
 {
@@ -100,6 +101,7 @@ static inline bool need_extra_context(struct mm_struct *mm, unsigned long ea)
 		return true;
 	return false;
 }
+#endif
 
 #else
 extern void switch_mmu_context(struct mm_struct *prev, struct mm_struct *next,
diff --git a/arch/powerpc/include/asm/paca.h b/arch/powerpc/include/asm/paca.h
index dc05a862e72a..295573a82c66 100644
--- a/arch/powerpc/include/asm/paca.h
+++ b/arch/powerpc/include/asm/paca.h
@@ -97,7 +97,9 @@ struct paca_struct {
 					/* this becomes non-zero. */
 	u8 kexec_state;		/* set when kexec down has irqs off */
 #ifdef CONFIG_PPC_BOOK3S_64
+#ifdef CONFIG_PPC_64S_HASH_MMU
 	struct slb_shadow *slb_shadow_ptr;
+#endif
 	struct dtl_entry *dispatch_log;
 	struct dtl_entry *dispatch_log_end;
 #endif
@@ -110,6 +112,7 @@ struct paca_struct {
 	/* used for most interrupts/exceptions */
 	u64 exgen[EX_SIZE] __attribute__((aligned(0x80)));
 
+#ifdef CONFIG_PPC_64S_HASH_MMU
 	/* SLB related definitions */
 	u16 vmalloc_sllp;
 	u8 slb_cache_ptr;
@@ -120,6 +123,7 @@ struct paca_struct {
 	u32 slb_used_bitmap;		/* Bitmaps for first 32 SLB entries. */
 	u32 slb_kern_bitmap;
 	u32 slb_cache[SLB_CACHE_ENTRIES];
+#endif
 #endif /* CONFIG_PPC_BOOK3S_64 */
 
 #ifdef CONFIG_PPC_BOOK3E
@@ -149,6 +153,7 @@ struct paca_struct {
 #endif /* CONFIG_PPC_BOOK3E */
 
 #ifdef CONFIG_PPC_BOOK3S
+#ifdef CONFIG_PPC_64S_HASH_MMU
 #ifdef CONFIG_PPC_MM_SLICES
 	unsigned char mm_ctx_low_slices_psize[BITS_PER_LONG / BITS_PER_BYTE];
 	unsigned char mm_ctx_high_slices_psize[SLICE_ARRAY_SIZE];
@@ -156,6 +161,7 @@ struct paca_struct {
 	u16 mm_ctx_user_psize;
 	u16 mm_ctx_sllp;
 #endif
+#endif
 #endif
 
 	/*
@@ -268,9 +274,11 @@ struct paca_struct {
 #endif /* CONFIG_PPC_PSERIES */
 
 #ifdef CONFIG_PPC_BOOK3S_64
+#ifdef CONFIG_PPC_64S_HASH_MMU
 	/* Capture SLB related old contents in MCE handler. */
 	struct slb_entry *mce_faulty_slbs;
 	u16 slb_save_cache_ptr;
+#endif
 #endif /* CONFIG_PPC_BOOK3S_64 */
 #ifdef CONFIG_STACKPROTECTOR
 	unsigned long canary;
diff --git a/arch/powerpc/kernel/asm-offsets.c b/arch/powerpc/kernel/asm-offsets.c
index cc05522f50bf..b823f484c640 100644
--- a/arch/powerpc/kernel/asm-offsets.c
+++ b/arch/powerpc/kernel/asm-offsets.c
@@ -218,10 +218,12 @@ int main(void)
 	OFFSET(PACA_EXGEN, paca_struct, exgen);
 	OFFSET(PACA_EXMC, paca_struct, exmc);
 	OFFSET(PACA_EXNMI, paca_struct, exnmi);
+#ifdef CONFIG_PPC_64S_HASH_MMU
 	OFFSET(PACA_SLBSHADOWPTR, paca_struct, slb_shadow_ptr);
 	OFFSET(SLBSHADOW_STACKVSID, slb_shadow, save_area[SLB_NUM_BOLTED - 1].vsid);
 	OFFSET(SLBSHADOW_STACKESID, slb_shadow, save_area[SLB_NUM_BOLTED - 1].esid);
 	OFFSET(SLBSHADOW_SAVEAREA, slb_shadow, save_area);
+#endif
 	OFFSET(LPPACA_PMCINUSE, lppaca, pmcregs_in_use);
 #ifdef CONFIG_KVM_BOOK3S_HV_POSSIBLE
 	OFFSET(PACA_PMCINUSE, paca_struct, pmcregs_in_use);
diff --git a/arch/powerpc/kernel/entry_64.S b/arch/powerpc/kernel/entry_64.S
index 70cff7b49e17..9581906b5ee9 100644
--- a/arch/powerpc/kernel/entry_64.S
+++ b/arch/powerpc/kernel/entry_64.S
@@ -180,7 +180,7 @@ _GLOBAL(_switch)
 #endif
 
 	ld	r8,KSP(r4)	/* new stack pointer */
-#ifdef CONFIG_PPC_BOOK3S_64
+#ifdef CONFIG_PPC_64S_HASH_MMU
 BEGIN_MMU_FTR_SECTION
 	b	2f
 END_MMU_FTR_SECTION_IFSET(MMU_FTR_TYPE_RADIX)
@@ -232,7 +232,7 @@ END_FTR_SECTION_IFCLR(CPU_FTR_ARCH_207S)
 	slbmte	r7,r0
 	isync
 2:
-#endif /* CONFIG_PPC_BOOK3S_64 */
+#endif /* CONFIG_PPC_64S_HASH_MMU */
 
 	clrrdi	r7, r8, THREAD_SHIFT	/* base of new stack */
 	/* Note: this uses SWITCH_FRAME_SIZE rather than INT_FRAME_SIZE
diff --git a/arch/powerpc/kernel/exceptions-64s.S b/arch/powerpc/kernel/exceptions-64s.S
index 046c99e31d01..65b695e9401e 100644
--- a/arch/powerpc/kernel/exceptions-64s.S
+++ b/arch/powerpc/kernel/exceptions-64s.S
@@ -1369,11 +1369,15 @@ EXC_COMMON_BEGIN(data_access_common)
 	addi	r3,r1,STACK_FRAME_OVERHEAD
 	andis.	r0,r4,DSISR_DABRMATCH@h
 	bne-	1f
+#ifdef CONFIG_PPC_64S_HASH_MMU
 BEGIN_MMU_FTR_SECTION
 	bl	do_hash_fault
 MMU_FTR_SECTION_ELSE
 	bl	do_page_fault
 ALT_MMU_FTR_SECTION_END_IFCLR(MMU_FTR_TYPE_RADIX)
+#else
+	bl	do_page_fault
+#endif
 	b	interrupt_return_srr
 
 1:	bl	do_break
@@ -1416,6 +1420,7 @@ EXC_VIRT_BEGIN(data_access_slb, 0x4380, 0x80)
 EXC_VIRT_END(data_access_slb, 0x4380, 0x80)
 EXC_COMMON_BEGIN(data_access_slb_common)
 	GEN_COMMON data_access_slb
+#ifdef CONFIG_PPC_64S_HASH_MMU
 BEGIN_MMU_FTR_SECTION
 	/* HPT case, do SLB fault */
 	addi	r3,r1,STACK_FRAME_OVERHEAD
@@ -1428,6 +1433,9 @@ MMU_FTR_SECTION_ELSE
 	/* Radix case, access is outside page table range */
 	li	r3,-EFAULT
 ALT_MMU_FTR_SECTION_END_IFCLR(MMU_FTR_TYPE_RADIX)
+#else
+	li	r3,-EFAULT
+#endif
 	std	r3,RESULT(r1)
 	addi	r3,r1,STACK_FRAME_OVERHEAD
 	bl	do_bad_segment_interrupt
@@ -1462,11 +1470,15 @@ EXC_VIRT_END(instruction_access, 0x4400, 0x80)
 EXC_COMMON_BEGIN(instruction_access_common)
 	GEN_COMMON instruction_access
 	addi	r3,r1,STACK_FRAME_OVERHEAD
+#ifdef CONFIG_PPC_64S_HASH_MMU
 BEGIN_MMU_FTR_SECTION
 	bl	do_hash_fault
 MMU_FTR_SECTION_ELSE
 	bl	do_page_fault
 ALT_MMU_FTR_SECTION_END_IFCLR(MMU_FTR_TYPE_RADIX)
+#else
+	bl	do_page_fault
+#endif
 	b	interrupt_return_srr
 
 
@@ -1496,6 +1508,7 @@ EXC_VIRT_BEGIN(instruction_access_slb, 0x4480, 0x80)
 EXC_VIRT_END(instruction_access_slb, 0x4480, 0x80)
 EXC_COMMON_BEGIN(instruction_access_slb_common)
 	GEN_COMMON instruction_access_slb
+#ifdef CONFIG_PPC_64S_HASH_MMU
 BEGIN_MMU_FTR_SECTION
 	/* HPT case, do SLB fault */
 	addi	r3,r1,STACK_FRAME_OVERHEAD
@@ -1508,6 +1521,9 @@ MMU_FTR_SECTION_ELSE
 	/* Radix case, access is outside page table range */
 	li	r3,-EFAULT
 ALT_MMU_FTR_SECTION_END_IFCLR(MMU_FTR_TYPE_RADIX)
+#else
+	li	r3,-EFAULT
+#endif
 	std	r3,RESULT(r1)
 	addi	r3,r1,STACK_FRAME_OVERHEAD
 	bl	do_bad_segment_interrupt
diff --git a/arch/powerpc/kernel/mce.c b/arch/powerpc/kernel/mce.c
index fd829f7f25a4..2503dd4713b9 100644
--- a/arch/powerpc/kernel/mce.c
+++ b/arch/powerpc/kernel/mce.c
@@ -586,7 +586,7 @@ void machine_check_print_event_info(struct machine_check_event *evt,
 		mc_error_class[evt->error_class] : "Unknown";
 	printk("%sMCE: CPU%d: %s\n", level, evt->cpu, subtype);
 
-#ifdef CONFIG_PPC_BOOK3S_64
+#ifdef CONFIG_PPC_64S_HASH_MMU
 	/* Display faulty slb contents for SLB errors. */
 	if (evt->error_type == MCE_ERROR_TYPE_SLB && !in_guest)
 		slb_dump_contents(local_paca->mce_faulty_slbs);
diff --git a/arch/powerpc/kernel/mce_power.c b/arch/powerpc/kernel/mce_power.c
index cf5263b648fc..a48ff18d6d65 100644
--- a/arch/powerpc/kernel/mce_power.c
+++ b/arch/powerpc/kernel/mce_power.c
@@ -77,7 +77,7 @@ static bool mce_in_guest(void)
 }
 
 /* flush SLBs and reload */
-#ifdef CONFIG_PPC_BOOK3S_64
+#ifdef CONFIG_PPC_64S_HASH_MMU
 void flush_and_reload_slb(void)
 {
 	if (early_radix_enabled())
@@ -99,7 +99,7 @@ void flush_and_reload_slb(void)
 
 void flush_erat(void)
 {
-#ifdef CONFIG_PPC_BOOK3S_64
+#ifdef CONFIG_PPC_64S_HASH_MMU
 	if (!early_cpu_has_feature(CPU_FTR_ARCH_300)) {
 		flush_and_reload_slb();
 		return;
@@ -114,7 +114,7 @@ void flush_erat(void)
 
 static int mce_flush(int what)
 {
-#ifdef CONFIG_PPC_BOOK3S_64
+#ifdef CONFIG_PPC_64S_HASH_MMU
 	if (what == MCE_FLUSH_SLB) {
 		flush_and_reload_slb();
 		return 1;
@@ -499,8 +499,10 @@ static int mce_handle_ierror(struct pt_regs *regs, unsigned long srr1,
 			/* attempt to correct the error */
 			switch (table[i].error_type) {
 			case MCE_ERROR_TYPE_SLB:
+#ifdef CONFIG_PPC_64S_HASH_MMU
 				if (local_paca->in_mce == 1)
 					slb_save_contents(local_paca->mce_faulty_slbs);
+#endif
 				handled = mce_flush(MCE_FLUSH_SLB);
 				break;
 			case MCE_ERROR_TYPE_ERAT:
@@ -588,8 +590,10 @@ static int mce_handle_derror(struct pt_regs *regs,
 			/* attempt to correct the error */
 			switch (table[i].error_type) {
 			case MCE_ERROR_TYPE_SLB:
+#ifdef CONFIG_PPC_64S_HASH_MMU
 				if (local_paca->in_mce == 1)
 					slb_save_contents(local_paca->mce_faulty_slbs);
+#endif
 				if (mce_flush(MCE_FLUSH_SLB))
 					handled = 1;
 				break;
diff --git a/arch/powerpc/kernel/paca.c b/arch/powerpc/kernel/paca.c
index 4208b4044d12..39da688a9455 100644
--- a/arch/powerpc/kernel/paca.c
+++ b/arch/powerpc/kernel/paca.c
@@ -139,8 +139,7 @@ static struct lppaca * __init new_lppaca(int cpu, unsigned long limit)
 }
 #endif /* CONFIG_PPC_PSERIES */
 
-#ifdef CONFIG_PPC_BOOK3S_64
-
+#ifdef CONFIG_PPC_64S_HASH_MMU
 /*
  * 3 persistent SLBs are allocated here.  The buffer will be zero
 * initially, hence will all be invalid until we actually write them.
@@ -169,8 +168,7 @@ static struct slb_shadow * __init new_slb_shadow(int cpu, unsigned long limit)
 
 	return s;
 }
-
-#endif /* CONFIG_PPC_BOOK3S_64 */
+#endif /* CONFIG_PPC_64S_HASH_MMU */
 
 #ifdef CONFIG_PPC_PSERIES
 /**
@@ -226,7 +224,7 @@ void __init initialise_paca(struct paca_struct *new_paca, int cpu)
 	new_paca->kexec_state = KEXEC_STATE_NONE;
 	new_paca->__current = &init_task;
 	new_paca->data_offset = 0xfeeeeeeeeeeeeeeeULL;
-#ifdef CONFIG_PPC_BOOK3S_64
+#ifdef CONFIG_PPC_64S_HASH_MMU
 	new_paca->slb_shadow_ptr = NULL;
 #endif
 
@@ -307,7 +305,7 @@ void __init allocate_paca(int cpu)
 #ifdef CONFIG_PPC_PSERIES
 	paca->lppaca_ptr = new_lppaca(cpu, limit);
 #endif
-#ifdef CONFIG_PPC_BOOK3S_64
+#ifdef CONFIG_PPC_64S_HASH_MMU
 	paca->slb_shadow_ptr = new_slb_shadow(cpu, limit);
 #endif
 #ifdef CONFIG_PPC_PSERIES
@@ -328,7 +326,7 @@ void __init free_unused_pacas(void)
 	paca_nr_cpu_ids = nr_cpu_ids;
 	paca_ptrs_size = new_ptrs_size;
 
-#ifdef CONFIG_PPC_BOOK3S_64
+#ifdef CONFIG_PPC_64S_HASH_MMU
 	if (early_radix_enabled()) {
 		/* Ugly fixup, see new_slb_shadow() */
 		memblock_phys_free(__pa(paca_ptrs[boot_cpuid]->slb_shadow_ptr),
@@ -341,9 +339,9 @@ void __init free_unused_pacas(void)
 			paca_ptrs_size + paca_struct_size, nr_cpu_ids);
 }
 
+#ifdef CONFIG_PPC_64S_HASH_MMU
 void copy_mm_to_paca(struct mm_struct *mm)
 {
-#ifdef CONFIG_PPC_BOOK3S
 	mm_context_t *context = &mm->context;
 
 #ifdef CONFIG_PPC_MM_SLICES
@@ -356,7 +354,5 @@ void copy_mm_to_paca(struct mm_struct *mm)
 	get_paca()->mm_ctx_user_psize = context->user_psize;
 	get_paca()->mm_ctx_sllp = context->sllp;
 #endif
-#else /* !CONFIG_PPC_BOOK3S */
-	return;
-#endif
 }
+#endif /* CONFIG_PPC_64S_HASH_MMU */
diff --git a/arch/powerpc/kernel/process.c b/arch/powerpc/kernel/process.c
index 5d2333d2a283..a64cfbb85ca2 100644
--- a/arch/powerpc/kernel/process.c
+++ b/arch/powerpc/kernel/process.c
@@ -1240,7 +1240,7 @@ struct task_struct *__switch_to(struct task_struct *prev,
 {
 	struct thread_struct *new_thread, *old_thread;
 	struct task_struct *last;
-#ifdef CONFIG_PPC_BOOK3S_64
+#ifdef CONFIG_PPC_64S_HASH_MMU
 	struct ppc64_tlb_batch *batch;
 #endif
 
@@ -1249,7 +1249,7 @@ struct task_struct *__switch_to(struct task_struct *prev,
 
 	WARN_ON(!irqs_disabled());
 
-#ifdef CONFIG_PPC_BOOK3S_64
+#ifdef CONFIG_PPC_64S_HASH_MMU
 	batch = this_cpu_ptr(&ppc64_tlb_batch);
 	if (batch->active) {
 		current_thread_info()->local_flags |= _TLF_LAZY_MMU;
@@ -1328,6 +1328,7 @@ struct task_struct *__switch_to(struct task_struct *prev,
 	 */
 
 #ifdef CONFIG_PPC_BOOK3S_64
+#ifdef CONFIG_PPC_64S_HASH_MMU
 	/*
 	 * This applies to a process that was context switched while inside
 	 * arch_enter_lazy_mmu_mode(), to re-activate the batch that was
@@ -1339,6 +1340,7 @@ struct task_struct *__switch_to(struct task_struct *prev,
 		batch = this_cpu_ptr(&ppc64_tlb_batch);
 		batch->active = 1;
 	}
+#endif
 
 	/*
 	 * Math facilities are masked out of the child MSR in copy_thread.
@@ -1689,7 +1691,7 @@ int arch_dup_task_struct(struct task_struct *dst, struct task_struct *src)
 
 static void setup_ksp_vsid(struct task_struct *p, unsigned long sp)
 {
-#ifdef CONFIG_PPC_BOOK3S_64
+#ifdef CONFIG_PPC_64S_HASH_MMU
 	unsigned long sp_vsid;
 	unsigned long llp = mmu_psize_defs[mmu_linear_psize].sllp;
 
@@ -2333,10 +2335,9 @@ unsigned long arch_randomize_brk(struct mm_struct *mm)
 	 * the heap, we can put it above 1TB so it is backed by a 1TB
 	 * segment. Otherwise the heap will be in the bottom 1TB
 	 * which always uses 256MB segments and this may result in a
-	 * performance penalty. We don't need to worry about radix. For
-	 * radix, mmu_highuser_ssize remains unchanged from 256MB.
+	 * performance penalty.
 	 */
-	if (!is_32bit_task() && (mmu_highuser_ssize == MMU_SEGSIZE_1T))
+	if (!radix_enabled() && !is_32bit_task() && (mmu_highuser_ssize == MMU_SEGSIZE_1T))
 		base = max_t(unsigned long, mm->brk, 1UL << SID_SHIFT_1T);
 #endif
 
diff --git a/arch/powerpc/kernel/prom.c b/arch/powerpc/kernel/prom.c
index 2e67588f6f6e..2197404cdcc4 100644
--- a/arch/powerpc/kernel/prom.c
+++ b/arch/powerpc/kernel/prom.c
@@ -234,6 +234,7 @@ static void __init check_cpu_pa_features(unsigned long node)
 #ifdef CONFIG_PPC_BOOK3S_64
 static void __init init_mmu_slb_size(unsigned long node)
 {
+#ifdef CONFIG_PPC_64S_HASH_MMU
 	const __be32 *slb_size_ptr;
 
 	slb_size_ptr = of_get_flat_dt_prop(node, "slb-size", NULL) ? :
@@ -241,6 +242,7 @@ static void __init init_mmu_slb_size(unsigned long node)
 
 	if (slb_size_ptr)
 		mmu_slb_size = be32_to_cpup(slb_size_ptr);
+#endif
 }
 #else
 #define init_mmu_slb_size(node) do { } while(0)
diff --git a/arch/powerpc/kernel/setup_64.c b/arch/powerpc/kernel/setup_64.c
index 9a493796ce66..22647bb82198 100644
--- a/arch/powerpc/kernel/setup_64.c
+++ b/arch/powerpc/kernel/setup_64.c
@@ -887,6 +887,8 @@ void __init setup_per_cpu_areas(void)
 	} else if (radix_enabled()) {
 		atom_size = PAGE_SIZE;
 	} else {
+#ifdef CONFIG_PPC_64S_HASH_MMU
+
 		/*
 		 * Linear mapping is one of 4K, 1M and 16M.  For 4K, no need
 		 * to group units.  For larger mappings, use 1M atom which
@@ -896,6 +898,9 @@ void __init setup_per_cpu_areas(void)
 			atom_size = PAGE_SIZE;
 		else
 			atom_size = SZ_1M;
+#else
+		BUILD_BUG(); // radix_enabled() should be constant true
+#endif
 	}
 
 	if (pcpu_chosen_fc != PCPU_FC_PAGE) {
diff --git a/arch/powerpc/kexec/core_64.c b/arch/powerpc/kexec/core_64.c
index 66678518b938..635b5fc30b53 100644
--- a/arch/powerpc/kexec/core_64.c
+++ b/arch/powerpc/kexec/core_64.c
@@ -378,7 +378,7 @@ void default_machine_kexec(struct kimage *image)
 	/* NOTREACHED */
 }
 
-#ifdef CONFIG_PPC_BOOK3S_64
+#ifdef CONFIG_PPC_64S_HASH_MMU
 /* Values we need to export to the second kernel via the device tree. */
 static unsigned long htab_base;
 static unsigned long htab_size;
@@ -420,4 +420,4 @@ static int __init export_htab_values(void)
 	return 0;
 }
 late_initcall(export_htab_values);
-#endif /* CONFIG_PPC_BOOK3S_64 */
+#endif /* CONFIG_PPC_64S_HASH_MMU */
diff --git a/arch/powerpc/kexec/ranges.c b/arch/powerpc/kexec/ranges.c
index 6b81c852feab..92d831621fa0 100644
--- a/arch/powerpc/kexec/ranges.c
+++ b/arch/powerpc/kexec/ranges.c
@@ -306,10 +306,14 @@ int add_initrd_mem_range(struct crash_mem **mem_ranges)
  */
 int add_htab_mem_range(struct crash_mem **mem_ranges)
 {
+#ifdef CONFIG_PPC_64S_HASH_MMU
 	if (!htab_address)
 		return 0;
 
 	return add_mem_range(mem_ranges, __pa(htab_address), htab_size_bytes);
+#else
+	return 0;
+#endif
 }
 #endif
 
diff --git a/arch/powerpc/mm/book3s64/Makefile b/arch/powerpc/mm/book3s64/Makefile
index 501efadb287f..2d50cac499c5 100644
--- a/arch/powerpc/mm/book3s64/Makefile
+++ b/arch/powerpc/mm/book3s64/Makefile
@@ -2,20 +2,23 @@
 
 ccflags-y	:= $(NO_MINIMAL_TOC)
 
+obj-y				+= mmu_context.o pgtable.o trace.o
+ifdef CONFIG_PPC_64S_HASH_MMU
 CFLAGS_REMOVE_slb.o = $(CC_FLAGS_FTRACE)
-
-obj-y				+= hash_pgtable.o hash_utils.o slb.o \
-				   mmu_context.o pgtable.o hash_tlb.o trace.o
+obj-y				+= hash_pgtable.o hash_utils.o hash_tlb.o slb.o
 obj-$(CONFIG_PPC_HASH_MMU_NATIVE)	+= hash_native.o
-obj-$(CONFIG_PPC_RADIX_MMU)	+= radix_pgtable.o radix_tlb.o
 obj-$(CONFIG_PPC_4K_PAGES)	+= hash_4k.o
 obj-$(CONFIG_PPC_64K_PAGES)	+= hash_64k.o
+obj-$(CONFIG_TRANSPARENT_HUGEPAGE) += hash_hugepage.o
+obj-$(CONFIG_PPC_SUBPAGE_PROT)	+= subpage_prot.o
+endif
+
 obj-$(CONFIG_HUGETLB_PAGE)	+= hugetlbpage.o
+
+obj-$(CONFIG_PPC_RADIX_MMU)	+= radix_pgtable.o radix_tlb.o
 ifdef CONFIG_HUGETLB_PAGE
 obj-$(CONFIG_PPC_RADIX_MMU)	+= radix_hugetlbpage.o
 endif
-obj-$(CONFIG_TRANSPARENT_HUGEPAGE) += hash_hugepage.o
-obj-$(CONFIG_PPC_SUBPAGE_PROT)	+= subpage_prot.o
 obj-$(CONFIG_SPAPR_TCE_IOMMU)	+= iommu_api.o
 obj-$(CONFIG_PPC_PKEY)	+= pkeys.o
 
diff --git a/arch/powerpc/mm/book3s64/hugetlbpage.c b/arch/powerpc/mm/book3s64/hugetlbpage.c
index a688e1324ae5..95b2a283fd6e 100644
--- a/arch/powerpc/mm/book3s64/hugetlbpage.c
+++ b/arch/powerpc/mm/book3s64/hugetlbpage.c
@@ -16,6 +16,7 @@
 unsigned int hpage_shift;
 EXPORT_SYMBOL(hpage_shift);
 
+#ifdef CONFIG_PPC_64S_HASH_MMU
 int __hash_page_huge(unsigned long ea, unsigned long access, unsigned long vsid,
 		     pte_t *ptep, unsigned long trap, unsigned long flags,
 		     int ssize, unsigned int shift, unsigned int mmu_psize)
@@ -122,6 +123,7 @@ int __hash_page_huge(unsigned long ea, unsigned long access, unsigned long vsid,
 	*ptep = __pte(new_pte & ~H_PAGE_BUSY);
 	return 0;
 }
+#endif
 
 pte_t huge_ptep_modify_prot_start(struct vm_area_struct *vma,
 				  unsigned long addr, pte_t *ptep)
diff --git a/arch/powerpc/mm/book3s64/mmu_context.c b/arch/powerpc/mm/book3s64/mmu_context.c
index c10fc8a72fb3..24aa953c9311 100644
--- a/arch/powerpc/mm/book3s64/mmu_context.c
+++ b/arch/powerpc/mm/book3s64/mmu_context.c
@@ -31,6 +31,7 @@ static int alloc_context_id(int min_id, int max_id)
 	return ida_alloc_range(&mmu_context_ida, min_id, max_id, GFP_KERNEL);
 }
 
+#ifdef CONFIG_PPC_64S_HASH_MMU
 void hash__reserve_context_id(int id)
 {
 	int result = ida_alloc_range(&mmu_context_ida, id, id, GFP_KERNEL);
@@ -50,7 +51,9 @@ int hash__alloc_context_id(void)
 	return alloc_context_id(MIN_USER_CONTEXT, max);
 }
 EXPORT_SYMBOL_GPL(hash__alloc_context_id);
+#endif
 
+#ifdef CONFIG_PPC_64S_HASH_MMU
 static int realloc_context_ids(mm_context_t *ctx)
 {
 	int i, id;
@@ -150,6 +153,13 @@ void hash__setup_new_exec(void)
 
 	slb_setup_new_exec();
 }
+#else
+static inline int hash__init_new_context(struct mm_struct *mm)
+{
+	BUILD_BUG();
+	return 0;
+}
+#endif
 
 static int radix__init_new_context(struct mm_struct *mm)
 {
@@ -175,7 +185,9 @@ static int radix__init_new_context(struct mm_struct *mm)
 	 */
 	asm volatile("ptesync;isync" : : : "memory");
 
+#ifdef CONFIG_PPC_64S_HASH_MMU
 	mm->context.hash_context = NULL;
+#endif
 
 	return index;
 }
@@ -213,14 +225,22 @@ EXPORT_SYMBOL_GPL(__destroy_context);
 
 static void destroy_contexts(mm_context_t *ctx)
 {
-	int index, context_id;
+	if (radix_enabled()) {
+		ida_free(&mmu_context_ida, ctx->id);
+	} else {
+#ifdef CONFIG_PPC_64S_HASH_MMU
+		int index, context_id;
 
-	for (index = 0; index < ARRAY_SIZE(ctx->extended_id); index++) {
-		context_id = ctx->extended_id[index];
-		if (context_id)
-			ida_free(&mmu_context_ida, context_id);
+		for (index = 0; index < ARRAY_SIZE(ctx->extended_id); index++) {
+			context_id = ctx->extended_id[index];
+			if (context_id)
+				ida_free(&mmu_context_ida, context_id);
+		}
+		kfree(ctx->hash_context);
+#else
+		BUILD_BUG(); // radix_enabled() should be constant true
+#endif
 	}
-	kfree(ctx->hash_context);
 }
 
 static void pmd_frag_destroy(void *pmd_frag)
diff --git a/arch/powerpc/mm/book3s64/pgtable.c b/arch/powerpc/mm/book3s64/pgtable.c
index 4d97d1525d49..d765d972566b 100644
--- a/arch/powerpc/mm/book3s64/pgtable.c
+++ b/arch/powerpc/mm/book3s64/pgtable.c
@@ -534,7 +534,7 @@ static int __init pgtable_debugfs_setup(void)
 }
 arch_initcall(pgtable_debugfs_setup);
 
-#ifdef CONFIG_ZONE_DEVICE
+#if defined(CONFIG_ZONE_DEVICE) && defined(ARCH_HAS_MEMREMAP_COMPAT_ALIGN)
 /*
  * Override the generic version in mm/memremap.c.
  *
diff --git a/arch/powerpc/mm/book3s64/radix_pgtable.c b/arch/powerpc/mm/book3s64/radix_pgtable.c
index 77820036c722..99dbee114539 100644
--- a/arch/powerpc/mm/book3s64/radix_pgtable.c
+++ b/arch/powerpc/mm/book3s64/radix_pgtable.c
@@ -334,8 +334,10 @@ static void __init radix_init_pgtable(void)
 	phys_addr_t start, end;
 	u64 i;
 
+#ifdef CONFIG_PPC_64S_HASH_MMU
 	/* We don't support slb for radix */
 	mmu_slb_size = 0;
+#endif
 
 	/*
 	 * Create the linear mapping
@@ -576,6 +578,7 @@ void __init radix__early_init_mmu(void)
 {
 	unsigned long lpcr;
 
+#ifdef CONFIG_PPC_64S_HASH_MMU
 #ifdef CONFIG_PPC_64K_PAGES
 	/* PAGE_SIZE mappings */
 	mmu_virtual_psize = MMU_PAGE_64K;
@@ -592,6 +595,7 @@ void __init radix__early_init_mmu(void)
 		mmu_vmemmap_psize = MMU_PAGE_2M;
 	} else
 		mmu_vmemmap_psize = mmu_virtual_psize;
+#endif
 #endif
 	/*
 	 * initialize page table size
diff --git a/arch/powerpc/mm/copro_fault.c b/arch/powerpc/mm/copro_fault.c
index 8acd00178956..c1cb21a00884 100644
--- a/arch/powerpc/mm/copro_fault.c
+++ b/arch/powerpc/mm/copro_fault.c
@@ -82,6 +82,7 @@ int copro_handle_mm_fault(struct mm_struct *mm, unsigned long ea,
 }
 EXPORT_SYMBOL_GPL(copro_handle_mm_fault);
 
+#ifdef CONFIG_PPC_64S_HASH_MMU
 int copro_calculate_slb(struct mm_struct *mm, u64 ea, struct copro_slb *slb)
 {
 	u64 vsid, vsidkey;
@@ -146,3 +147,4 @@ void copro_flush_all_slbs(struct mm_struct *mm)
 	cxl_slbia(mm);
 }
 EXPORT_SYMBOL_GPL(copro_flush_all_slbs);
+#endif
diff --git a/arch/powerpc/mm/ptdump/Makefile b/arch/powerpc/mm/ptdump/Makefile
index 4050cbb55acf..b533caaf0910 100644
--- a/arch/powerpc/mm/ptdump/Makefile
+++ b/arch/powerpc/mm/ptdump/Makefile
@@ -10,5 +10,5 @@ obj-$(CONFIG_PPC_BOOK3S_64)	+= book3s64.o
 
 ifdef CONFIG_PTDUMP_DEBUGFS
 obj-$(CONFIG_PPC_BOOK3S_32)	+= bats.o segment_regs.o
-obj-$(CONFIG_PPC_BOOK3S_64)	+= hashpagetable.o
+obj-$(CONFIG_PPC_64S_HASH_MMU)	+= hashpagetable.o
 endif
diff --git a/arch/powerpc/platforms/powernv/idle.c b/arch/powerpc/platforms/powernv/idle.c
index 3bc84e2fe064..f0edfe6c1598 100644
--- a/arch/powerpc/platforms/powernv/idle.c
+++ b/arch/powerpc/platforms/powernv/idle.c
@@ -491,12 +491,14 @@ static unsigned long power7_idle_insn(unsigned long type)
 
 	mtspr(SPRN_SPRG3,	local_paca->sprg_vdso);
 
+#ifdef CONFIG_PPC_64S_HASH_MMU
 	/*
 	 * The SLB has to be restored here, but it sometimes still
 	 * contains entries, so the __ variant must be used to prevent
 	 * multi hits.
 	 */
 	__slb_restore_bolted_realmode();
+#endif
 
 	return srr1;
 }
diff --git a/arch/powerpc/platforms/powernv/setup.c b/arch/powerpc/platforms/powernv/setup.c
index 5ef6b8afb3d0..f37d6524a24d 100644
--- a/arch/powerpc/platforms/powernv/setup.c
+++ b/arch/powerpc/platforms/powernv/setup.c
@@ -211,6 +211,7 @@ static void __init pnv_init(void)
 #endif
 		add_preferred_console("hvc", 0, NULL);
 
+#ifdef CONFIG_PPC_64S_HASH_MMU
 	if (!radix_enabled()) {
 		size_t size = sizeof(struct slb_entry) * mmu_slb_size;
 		int i;
@@ -223,6 +224,7 @@ static void __init pnv_init(void)
 						cpu_to_node(i));
 		}
 	}
+#endif
 }
 
 static void __init pnv_init_IRQ(void)
diff --git a/arch/powerpc/platforms/pseries/lpar.c b/arch/powerpc/platforms/pseries/lpar.c
index 06d6a824c0dc..fac5d86777db 100644
--- a/arch/powerpc/platforms/pseries/lpar.c
+++ b/arch/powerpc/platforms/pseries/lpar.c
@@ -58,6 +58,7 @@ EXPORT_SYMBOL(plpar_hcall);
 EXPORT_SYMBOL(plpar_hcall9);
 EXPORT_SYMBOL(plpar_hcall_norets);
 
+#ifdef CONFIG_PPC_64S_HASH_MMU
 /*
 * H_BLOCK_REMOVE supported block size for this page size in segment whose base
  * page size is that page size.
@@ -66,6 +67,7 @@ EXPORT_SYMBOL(plpar_hcall_norets);
  * page size.
  */
 static int hblkrm_size[MMU_PAGE_COUNT][MMU_PAGE_COUNT] __ro_after_init;
+#endif
 
 /*
  * Due to the involved complexity, and that the current hypervisor is only
@@ -689,7 +691,7 @@ void vpa_init(int cpu)
 		return;
 	}
 
-#ifdef CONFIG_PPC_BOOK3S_64
+#ifdef CONFIG_PPC_64S_HASH_MMU
 	/*
 	 * PAPR says this feature is SLB-Buffer but firmware never
 	 * reports that.  All SPLPAR support SLB shadow buffer.
@@ -702,7 +704,7 @@ void vpa_init(int cpu)
 			       "cpu %d (hw %d) of area %lx failed with %ld\n",
 			       cpu, hwcpu, addr, ret);
 	}
-#endif /* CONFIG_PPC_BOOK3S_64 */
+#endif /* CONFIG_PPC_64S_HASH_MMU */
 
 	/*
 	 * Register dispatch trace log, if one has been allocated.
@@ -740,6 +742,8 @@ static int pseries_lpar_register_process_table(unsigned long base,
 	return rc;
 }
 
+#ifdef CONFIG_PPC_64S_HASH_MMU
+
 static long pSeries_lpar_hpte_insert(unsigned long hpte_group,
 				     unsigned long vpn, unsigned long pa,
 				     unsigned long rflags, unsigned long vflags,
@@ -1730,6 +1734,7 @@ void __init hpte_init_pseries(void)
 	if (cpu_has_feature(CPU_FTR_ARCH_300))
 		pseries_lpar_register_process_table(0, 0, 0);
 }
+#endif /* CONFIG_PPC_64S_HASH_MMU */
 
 #ifdef CONFIG_PPC_RADIX_MMU
 void radix_init_pseries(void)
@@ -1932,6 +1937,7 @@ int h_get_mpp_x(struct hvcall_mpp_x_data *mpp_x_data)
 	return rc;
 }
 
+#ifdef CONFIG_PPC_64S_HASH_MMU
 static unsigned long vsid_unscramble(unsigned long vsid, int ssize)
 {
 	unsigned long protovsid;
@@ -1992,6 +1998,7 @@ static int __init reserve_vrma_context_id(void)
 	return 0;
 }
 machine_device_initcall(pseries, reserve_vrma_context_id);
+#endif
 
 #ifdef CONFIG_DEBUG_FS
 /* debugfs file interface for vpa data */
diff --git a/arch/powerpc/platforms/pseries/lparcfg.c b/arch/powerpc/platforms/pseries/lparcfg.c
index 3354c00914fa..c7940fcfc911 100644
--- a/arch/powerpc/platforms/pseries/lparcfg.c
+++ b/arch/powerpc/platforms/pseries/lparcfg.c
@@ -531,7 +531,7 @@ static int pseries_lparcfg_data(struct seq_file *m, void *v)
 	seq_printf(m, "shared_processor_mode=%d\n",
 		   lppaca_shared_proc(get_lppaca()));
 
-#ifdef CONFIG_PPC_BOOK3S_64
+#ifdef CONFIG_PPC_64S_HASH_MMU
 	if (!radix_enabled())
 		seq_printf(m, "slb_size=%d\n", mmu_slb_size);
 #endif
diff --git a/arch/powerpc/platforms/pseries/mobility.c b/arch/powerpc/platforms/pseries/mobility.c
index 210a37a065fb..21b706bcea76 100644
--- a/arch/powerpc/platforms/pseries/mobility.c
+++ b/arch/powerpc/platforms/pseries/mobility.c
@@ -451,11 +451,15 @@ static void prod_others(void)
 
 static u16 clamp_slb_size(void)
 {
+#ifdef CONFIG_PPC_64S_HASH_MMU
 	u16 prev = mmu_slb_size;
 
 	slb_set_size(SLB_MIN_SIZE);
 
 	return prev;
+#else
+	return 0;
+#endif
 }
 
 static int do_suspend(void)
@@ -480,7 +484,9 @@ static int do_suspend(void)
 	ret = rtas_ibm_suspend_me(&status);
 	if (ret != 0) {
 		pr_err("ibm,suspend-me error: %d\n", status);
+#ifdef CONFIG_PPC_64S_HASH_MMU
 		slb_set_size(saved_slb_size);
+#endif
 	}
 
 	return ret;
diff --git a/arch/powerpc/platforms/pseries/ras.c b/arch/powerpc/platforms/pseries/ras.c
index 56092dccfdb8..74c9b1b5bc66 100644
--- a/arch/powerpc/platforms/pseries/ras.c
+++ b/arch/powerpc/platforms/pseries/ras.c
@@ -526,6 +526,7 @@ static int mce_handle_err_realmode(int disposition, u8 error_type)
 			disposition = RTAS_DISP_FULLY_RECOVERED;
 			break;
 		case	MC_ERROR_TYPE_SLB:
+#ifdef CONFIG_PPC_64S_HASH_MMU
 			/*
 			 * Store the old slb content in paca before flushing.
 			 * Print this when we go to virtual mode.
@@ -538,6 +539,7 @@ static int mce_handle_err_realmode(int disposition, u8 error_type)
 				slb_save_contents(local_paca->mce_faulty_slbs);
 			flush_and_reload_slb();
 			disposition = RTAS_DISP_FULLY_RECOVERED;
+#endif
 			break;
 		default:
 			break;
diff --git a/arch/powerpc/platforms/pseries/reconfig.c b/arch/powerpc/platforms/pseries/reconfig.c
index 7f7369fec46b..80dae18d6621 100644
--- a/arch/powerpc/platforms/pseries/reconfig.c
+++ b/arch/powerpc/platforms/pseries/reconfig.c
@@ -337,8 +337,10 @@ static int do_update_property(char *buf, size_t bufsize)
 	if (!newprop)
 		return -ENOMEM;
 
+#ifdef CONFIG_PPC_64S_HASH_MMU
 	if (!strcmp(name, "slb-size") || !strcmp(name, "ibm,slb-size"))
 		slb_set_size(*(int *)value);
+#endif
 
 	return of_update_property(np, newprop);
 }
diff --git a/arch/powerpc/platforms/pseries/setup.c b/arch/powerpc/platforms/pseries/setup.c
index 8a62af5b9c24..7f69237d4fa4 100644
--- a/arch/powerpc/platforms/pseries/setup.c
+++ b/arch/powerpc/platforms/pseries/setup.c
@@ -112,7 +112,7 @@ static void __init fwnmi_init(void)
 	u8 *mce_data_buf;
 	unsigned int i;
 	int nr_cpus = num_possible_cpus();
-#ifdef CONFIG_PPC_BOOK3S_64
+#ifdef CONFIG_PPC_64S_HASH_MMU
 	struct slb_entry *slb_ptr;
 	size_t size;
 #endif
@@ -152,7 +152,7 @@ static void __init fwnmi_init(void)
 						(RTAS_ERROR_LOG_MAX * i);
 	}
 
-#ifdef CONFIG_PPC_BOOK3S_64
+#ifdef CONFIG_PPC_64S_HASH_MMU
 	if (!radix_enabled()) {
 		/* Allocate per cpu area to save old slb contents during MCE */
 		size = sizeof(struct slb_entry) * mmu_slb_size * nr_cpus;
@@ -801,7 +801,9 @@ static void __init pSeries_setup_arch(void)
 	fwnmi_init();
 
 	pseries_setup_security_mitigations();
+#ifdef CONFIG_PPC_64S_HASH_MMU
 	pseries_lpar_read_hblkrm_characteristics();
+#endif
 
 	/* By default, only probe PCI (can be overridden by rtas_pci) */
 	pci_add_flags(PCI_PROBE_ONLY);
diff --git a/arch/powerpc/xmon/xmon.c b/arch/powerpc/xmon/xmon.c
index 3b2be65a32f2..46dd292a4694 100644
--- a/arch/powerpc/xmon/xmon.c
+++ b/arch/powerpc/xmon/xmon.c
@@ -1159,7 +1159,7 @@ cmds(struct pt_regs *excp)
 		case 'P':
 			show_tasks();
 			break;
-#ifdef CONFIG_PPC_BOOK3S
+#if defined(CONFIG_PPC_BOOK3S_32) || defined(CONFIG_PPC_64S_HASH_MMU)
 		case 'u':
 			dump_segments();
 			break;
@@ -2614,7 +2614,7 @@ static void dump_tracing(void)
 static void dump_one_paca(int cpu)
 {
 	struct paca_struct *p;
-#ifdef CONFIG_PPC_BOOK3S_64
+#ifdef CONFIG_PPC_64S_HASH_MMU
 	int i = 0;
 #endif
 
@@ -2656,6 +2656,7 @@ static void dump_one_paca(int cpu)
 	DUMP(p, cpu_start, "%#-*x");
 	DUMP(p, kexec_state, "%#-*x");
 #ifdef CONFIG_PPC_BOOK3S_64
+#ifdef CONFIG_PPC_64S_HASH_MMU
 	if (!early_radix_enabled()) {
 		for (i = 0; i < SLB_NUM_BOLTED; i++) {
 			u64 esid, vsid;
@@ -2683,6 +2684,7 @@ static void dump_one_paca(int cpu)
 				       22, "slb_cache", i, p->slb_cache[i]);
 		}
 	}
+#endif
 
 	DUMP(p, rfi_flush_fallback_area, "%-*px");
 #endif
@@ -3746,7 +3748,7 @@ static void xmon_print_symbol(unsigned long address, const char *mid,
 	printf("%s", after);
 }
 
-#ifdef CONFIG_PPC_BOOK3S_64
+#ifdef CONFIG_PPC_64S_HASH_MMU
 void dump_segments(void)
 {
 	int i;
diff --git a/drivers/misc/lkdtm/core.c b/drivers/misc/lkdtm/core.c
index 609d9ee2acc0..82fb276f7e09 100644
--- a/drivers/misc/lkdtm/core.c
+++ b/drivers/misc/lkdtm/core.c
@@ -182,7 +182,7 @@ static const struct crashtype crashtypes[] = {
 	CRASHTYPE(FORTIFIED_SUBOBJECT),
 	CRASHTYPE(FORTIFIED_STRSCPY),
 	CRASHTYPE(DOUBLE_FAULT),
-#ifdef CONFIG_PPC_BOOK3S_64
+#ifdef CONFIG_PPC_64S_HASH_MMU
 	CRASHTYPE(PPC_SLB_MULTIHIT),
 #endif
 };
-- 
2.23.0



* [PATCH v6 18/18] powerpc/microwatt: add POWER9_CPU, clear PPC_64S_HASH_MMU
  2021-12-01 14:41 [PATCH v6 00/18] powerpc: Make hash MMU code build configurable Nicholas Piggin
                   ` (16 preceding siblings ...)
  2021-12-01 14:41 ` [PATCH v6 17/18] powerpc/64s: Move hash MMU support code under CONFIG_PPC_64S_HASH_MMU Nicholas Piggin
@ 2021-12-01 14:41 ` Nicholas Piggin
  2021-12-15  0:24 ` [PATCH v6 00/18] powerpc: Make hash MMU code build configurable Michael Ellerman
  18 siblings, 0 replies; 31+ messages in thread
From: Nicholas Piggin @ 2021-12-01 14:41 UTC (permalink / raw)
  To: linuxppc-dev; +Cc: Nicholas Piggin

Microwatt implements a subset of ISA v3.0 (which is equivalent to
the POWER9_CPU option). It is radix-only, so it does not require hash
MMU support.

This saves 20kB compressed dtbImage and 56kB vmlinux size.

Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
---
 arch/powerpc/configs/microwatt_defconfig | 3 ++-
 arch/powerpc/platforms/microwatt/Kconfig | 1 -
 2 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/arch/powerpc/configs/microwatt_defconfig b/arch/powerpc/configs/microwatt_defconfig
index 07d87a4044b2..eff933ebbb9e 100644
--- a/arch/powerpc/configs/microwatt_defconfig
+++ b/arch/powerpc/configs/microwatt_defconfig
@@ -15,6 +15,8 @@ CONFIG_EMBEDDED=y
 # CONFIG_COMPAT_BRK is not set
 # CONFIG_SLAB_MERGE_DEFAULT is not set
 CONFIG_PPC64=y
+CONFIG_POWER9_CPU=y
+# CONFIG_PPC_64S_HASH_MMU is not set
 # CONFIG_PPC_KUEP is not set
 # CONFIG_PPC_KUAP is not set
 CONFIG_CPU_LITTLE_ENDIAN=y
@@ -27,7 +29,6 @@ CONFIG_PPC_MICROWATT=y
 CONFIG_CPU_FREQ=y
 CONFIG_HZ_100=y
 CONFIG_PPC_4K_PAGES=y
-# CONFIG_PPC_MEM_KEYS is not set
 # CONFIG_SECCOMP is not set
 # CONFIG_MQ_IOSCHED_KYBER is not set
 # CONFIG_COREDUMP is not set
diff --git a/arch/powerpc/platforms/microwatt/Kconfig b/arch/powerpc/platforms/microwatt/Kconfig
index 823192e9d38a..5e320f49583a 100644
--- a/arch/powerpc/platforms/microwatt/Kconfig
+++ b/arch/powerpc/platforms/microwatt/Kconfig
@@ -5,7 +5,6 @@ config PPC_MICROWATT
 	select PPC_XICS
 	select PPC_ICS_NATIVE
 	select PPC_ICP_NATIVE
-	select PPC_HASH_MMU_NATIVE if PPC_64S_HASH_MMU
 	select PPC_UDBG_16550
 	select ARCH_RANDOM
 	help
-- 
2.23.0


^ permalink raw reply related	[flat|nested] 31+ messages in thread

* Re: [PATCH v6 17/18] powerpc/64s: Move hash MMU support code under CONFIG_PPC_64S_HASH_MMU
  2021-12-01 14:41 ` [PATCH v6 17/18] powerpc/64s: Move hash MMU support code under CONFIG_PPC_64S_HASH_MMU Nicholas Piggin
@ 2021-12-02 13:30   ` Fabiano Rosas
  2021-12-03 12:34   ` Michael Ellerman
                     ` (2 subsequent siblings)
  3 siblings, 0 replies; 31+ messages in thread
From: Fabiano Rosas @ 2021-12-02 13:30 UTC (permalink / raw)
  To: Nicholas Piggin, linuxppc-dev; +Cc: Nicholas Piggin

Nicholas Piggin <npiggin@gmail.com> writes:

> Compiling out hash support code when CONFIG_PPC_64S_HASH_MMU=n saves
> 128kB kernel image size (90kB text) on powernv_defconfig minus KVM,
> 350kB on pseries_defconfig minus KVM, 40kB on a tiny config.
>
> Signed-off-by: Nicholas Piggin <npiggin@gmail.com>

Tested this series on a P9. Tried to force some invalid configs with KVM
and it held up. Also built all defconfigs from make help.

Tested-by: Fabiano Rosas <farosas@linux.ibm.com>

> ---
>  arch/powerpc/Kconfig                          |  2 +-
>  arch/powerpc/include/asm/book3s/64/mmu.h      | 21 ++++++++++--
>  .../include/asm/book3s/64/tlbflush-hash.h     |  6 ++++
>  arch/powerpc/include/asm/book3s/pgtable.h     |  4 +++
>  arch/powerpc/include/asm/mmu_context.h        |  2 ++
>  arch/powerpc/include/asm/paca.h               |  8 +++++
>  arch/powerpc/kernel/asm-offsets.c             |  2 ++
>  arch/powerpc/kernel/entry_64.S                |  4 +--
>  arch/powerpc/kernel/exceptions-64s.S          | 16 ++++++++++
>  arch/powerpc/kernel/mce.c                     |  2 +-
>  arch/powerpc/kernel/mce_power.c               | 10 ++++--
>  arch/powerpc/kernel/paca.c                    | 18 ++++-------
>  arch/powerpc/kernel/process.c                 | 13 ++++----
>  arch/powerpc/kernel/prom.c                    |  2 ++
>  arch/powerpc/kernel/setup_64.c                |  5 +++
>  arch/powerpc/kexec/core_64.c                  |  4 +--
>  arch/powerpc/kexec/ranges.c                   |  4 +++
>  arch/powerpc/mm/book3s64/Makefile             | 15 +++++----
>  arch/powerpc/mm/book3s64/hugetlbpage.c        |  2 ++
>  arch/powerpc/mm/book3s64/mmu_context.c        | 32 +++++++++++++++----
>  arch/powerpc/mm/book3s64/pgtable.c            |  2 +-
>  arch/powerpc/mm/book3s64/radix_pgtable.c      |  4 +++
>  arch/powerpc/mm/copro_fault.c                 |  2 ++
>  arch/powerpc/mm/ptdump/Makefile               |  2 +-
>  arch/powerpc/platforms/powernv/idle.c         |  2 ++
>  arch/powerpc/platforms/powernv/setup.c        |  2 ++
>  arch/powerpc/platforms/pseries/lpar.c         | 11 +++++--
>  arch/powerpc/platforms/pseries/lparcfg.c      |  2 +-
>  arch/powerpc/platforms/pseries/mobility.c     |  6 ++++
>  arch/powerpc/platforms/pseries/ras.c          |  2 ++
>  arch/powerpc/platforms/pseries/reconfig.c     |  2 ++
>  arch/powerpc/platforms/pseries/setup.c        |  6 ++--
>  arch/powerpc/xmon/xmon.c                      |  8 +++--
>  drivers/misc/lkdtm/core.c                     |  2 +-
>  34 files changed, 173 insertions(+), 52 deletions(-)
>
> diff --git a/arch/powerpc/Kconfig b/arch/powerpc/Kconfig
> index 1fa336ec8faf..fb48823ccd62 100644
> --- a/arch/powerpc/Kconfig
> +++ b/arch/powerpc/Kconfig
> @@ -129,7 +129,7 @@ config PPC
>  	select ARCH_HAS_KCOV
>  	select ARCH_HAS_MEMBARRIER_CALLBACKS
>  	select ARCH_HAS_MEMBARRIER_SYNC_CORE
> -	select ARCH_HAS_MEMREMAP_COMPAT_ALIGN	if PPC_BOOK3S_64
> +	select ARCH_HAS_MEMREMAP_COMPAT_ALIGN	if PPC_64S_HASH_MMU
>  	select ARCH_HAS_MMIOWB			if PPC64
>  	select ARCH_HAS_NON_OVERLAPPING_ADDRESS_SPACE
>  	select ARCH_HAS_PHYS_TO_DMA
> diff --git a/arch/powerpc/include/asm/book3s/64/mmu.h b/arch/powerpc/include/asm/book3s/64/mmu.h
> index 015d7d972d16..c480d21a146c 100644
> --- a/arch/powerpc/include/asm/book3s/64/mmu.h
> +++ b/arch/powerpc/include/asm/book3s/64/mmu.h
> @@ -104,7 +104,9 @@ typedef struct {
>  		 * from EA and new context ids to build the new VAs.
>  		 */
>  		mm_context_id_t id;
> +#ifdef CONFIG_PPC_64S_HASH_MMU
>  		mm_context_id_t extended_id[TASK_SIZE_USER64/TASK_CONTEXT_SIZE];
> +#endif
>  	};
>
>  	/* Number of bits in the mm_cpumask */
> @@ -116,7 +118,9 @@ typedef struct {
>  	/* Number of user space windows opened in process mm_context */
>  	atomic_t vas_windows;
>
> +#ifdef CONFIG_PPC_64S_HASH_MMU
>  	struct hash_mm_context *hash_context;
> +#endif
>
>  	void __user *vdso;
>  	/*
> @@ -139,6 +143,7 @@ typedef struct {
>  #endif
>  } mm_context_t;
>
> +#ifdef CONFIG_PPC_64S_HASH_MMU
>  static inline u16 mm_ctx_user_psize(mm_context_t *ctx)
>  {
>  	return ctx->hash_context->user_psize;
> @@ -199,8 +204,15 @@ static inline struct subpage_prot_table *mm_ctx_subpage_prot(mm_context_t *ctx)
>  extern int mmu_linear_psize;
>  extern int mmu_virtual_psize;
>  extern int mmu_vmalloc_psize;
> -extern int mmu_vmemmap_psize;
>  extern int mmu_io_psize;
> +#else /* CONFIG_PPC_64S_HASH_MMU */
> +#ifdef CONFIG_PPC_64K_PAGES
> +#define mmu_virtual_psize MMU_PAGE_64K
> +#else
> +#define mmu_virtual_psize MMU_PAGE_4K
> +#endif
> +#endif
> +extern int mmu_vmemmap_psize;
>
>  /* MMU initialization */
>  void mmu_early_init_devtree(void);
> @@ -239,8 +251,9 @@ static inline void setup_initial_memory_limit(phys_addr_t first_memblock_base,
>  	 * know which translations we will pick. Hence go with hash
>  	 * restrictions.
>  	 */
> -	return hash__setup_initial_memory_limit(first_memblock_base,
> -					   first_memblock_size);
> +	if (!radix_enabled())
> +		hash__setup_initial_memory_limit(first_memblock_base,
> +						 first_memblock_size);
>  }
>
>  #ifdef CONFIG_PPC_PSERIES
> @@ -261,6 +274,7 @@ static inline void radix_init_pseries(void) { }
>  void cleanup_cpu_mmu_context(void);
>  #endif
>
> +#ifdef CONFIG_PPC_64S_HASH_MMU
>  static inline int get_user_context(mm_context_t *ctx, unsigned long ea)
>  {
>  	int index = ea >> MAX_EA_BITS_PER_CONTEXT;
> @@ -280,6 +294,7 @@ static inline unsigned long get_user_vsid(mm_context_t *ctx,
>
>  	return get_vsid(context, ea, ssize);
>  }
> +#endif
>
>  #endif /* __ASSEMBLY__ */
>  #endif /* _ASM_POWERPC_BOOK3S_64_MMU_H_ */
> diff --git a/arch/powerpc/include/asm/book3s/64/tlbflush-hash.h b/arch/powerpc/include/asm/book3s/64/tlbflush-hash.h
> index 3b95769739c7..8b762f282190 100644
> --- a/arch/powerpc/include/asm/book3s/64/tlbflush-hash.h
> +++ b/arch/powerpc/include/asm/book3s/64/tlbflush-hash.h
> @@ -112,8 +112,14 @@ static inline void hash__flush_tlb_kernel_range(unsigned long start,
>
>  struct mmu_gather;
>  extern void hash__tlb_flush(struct mmu_gather *tlb);
> +void flush_tlb_pmd_range(struct mm_struct *mm, pmd_t *pmd, unsigned long addr);
> +
> +#ifdef CONFIG_PPC_64S_HASH_MMU
>  /* Private function for use by PCI IO mapping code */
>  extern void __flush_hash_table_range(unsigned long start, unsigned long end);
>  extern void flush_tlb_pmd_range(struct mm_struct *mm, pmd_t *pmd,
>  				unsigned long addr);
> +#else
> +static inline void __flush_hash_table_range(unsigned long start, unsigned long end) { }
> +#endif
>  #endif /*  _ASM_POWERPC_BOOK3S_64_TLBFLUSH_HASH_H */
> diff --git a/arch/powerpc/include/asm/book3s/pgtable.h b/arch/powerpc/include/asm/book3s/pgtable.h
> index ad130e15a126..e8269434ecbe 100644
> --- a/arch/powerpc/include/asm/book3s/pgtable.h
> +++ b/arch/powerpc/include/asm/book3s/pgtable.h
> @@ -25,6 +25,7 @@ extern pgprot_t phys_mem_access_prot(struct file *file, unsigned long pfn,
>  				     unsigned long size, pgprot_t vma_prot);
>  #define __HAVE_PHYS_MEM_ACCESS_PROT
>
> +#if defined(CONFIG_PPC32) || defined(CONFIG_PPC_64S_HASH_MMU)
>  /*
>   * This gets called at the end of handling a page fault, when
>   * the kernel has put a new PTE into the page table for the process.
> @@ -35,6 +36,9 @@ extern pgprot_t phys_mem_access_prot(struct file *file, unsigned long pfn,
>   * waiting for the inevitable extra hash-table miss exception.
>   */
>  void update_mmu_cache(struct vm_area_struct *vma, unsigned long address, pte_t *ptep);
> +#else
> +static inline void update_mmu_cache(struct vm_area_struct *vma, unsigned long address, pte_t *ptep) {}
> +#endif
>
>  #endif /* __ASSEMBLY__ */
>  #endif
> diff --git a/arch/powerpc/include/asm/mmu_context.h b/arch/powerpc/include/asm/mmu_context.h
> index 9ba6b585337f..e46394d27785 100644
> --- a/arch/powerpc/include/asm/mmu_context.h
> +++ b/arch/powerpc/include/asm/mmu_context.h
> @@ -75,6 +75,7 @@ extern void hash__reserve_context_id(int id);
>  extern void __destroy_context(int context_id);
>  static inline void mmu_context_init(void) { }
>
> +#ifdef CONFIG_PPC_64S_HASH_MMU
>  static inline int alloc_extended_context(struct mm_struct *mm,
>  					 unsigned long ea)
>  {
> @@ -100,6 +101,7 @@ static inline bool need_extra_context(struct mm_struct *mm, unsigned long ea)
>  		return true;
>  	return false;
>  }
> +#endif
>
>  #else
>  extern void switch_mmu_context(struct mm_struct *prev, struct mm_struct *next,
> diff --git a/arch/powerpc/include/asm/paca.h b/arch/powerpc/include/asm/paca.h
> index dc05a862e72a..295573a82c66 100644
> --- a/arch/powerpc/include/asm/paca.h
> +++ b/arch/powerpc/include/asm/paca.h
> @@ -97,7 +97,9 @@ struct paca_struct {
>  					/* this becomes non-zero. */
>  	u8 kexec_state;		/* set when kexec down has irqs off */
>  #ifdef CONFIG_PPC_BOOK3S_64
> +#ifdef CONFIG_PPC_64S_HASH_MMU
>  	struct slb_shadow *slb_shadow_ptr;
> +#endif
>  	struct dtl_entry *dispatch_log;
>  	struct dtl_entry *dispatch_log_end;
>  #endif
> @@ -110,6 +112,7 @@ struct paca_struct {
>  	/* used for most interrupts/exceptions */
>  	u64 exgen[EX_SIZE] __attribute__((aligned(0x80)));
>
> +#ifdef CONFIG_PPC_64S_HASH_MMU
>  	/* SLB related definitions */
>  	u16 vmalloc_sllp;
>  	u8 slb_cache_ptr;
> @@ -120,6 +123,7 @@ struct paca_struct {
>  	u32 slb_used_bitmap;		/* Bitmaps for first 32 SLB entries. */
>  	u32 slb_kern_bitmap;
>  	u32 slb_cache[SLB_CACHE_ENTRIES];
> +#endif
>  #endif /* CONFIG_PPC_BOOK3S_64 */
>
>  #ifdef CONFIG_PPC_BOOK3E
> @@ -149,6 +153,7 @@ struct paca_struct {
>  #endif /* CONFIG_PPC_BOOK3E */
>
>  #ifdef CONFIG_PPC_BOOK3S
> +#ifdef CONFIG_PPC_64S_HASH_MMU
>  #ifdef CONFIG_PPC_MM_SLICES
>  	unsigned char mm_ctx_low_slices_psize[BITS_PER_LONG / BITS_PER_BYTE];
>  	unsigned char mm_ctx_high_slices_psize[SLICE_ARRAY_SIZE];
> @@ -156,6 +161,7 @@ struct paca_struct {
>  	u16 mm_ctx_user_psize;
>  	u16 mm_ctx_sllp;
>  #endif
> +#endif
>  #endif
>
>  	/*
> @@ -268,9 +274,11 @@ struct paca_struct {
>  #endif /* CONFIG_PPC_PSERIES */
>
>  #ifdef CONFIG_PPC_BOOK3S_64
> +#ifdef CONFIG_PPC_64S_HASH_MMU
>  	/* Capture SLB related old contents in MCE handler. */
>  	struct slb_entry *mce_faulty_slbs;
>  	u16 slb_save_cache_ptr;
> +#endif
>  #endif /* CONFIG_PPC_BOOK3S_64 */
>  #ifdef CONFIG_STACKPROTECTOR
>  	unsigned long canary;
> diff --git a/arch/powerpc/kernel/asm-offsets.c b/arch/powerpc/kernel/asm-offsets.c
> index cc05522f50bf..b823f484c640 100644
> --- a/arch/powerpc/kernel/asm-offsets.c
> +++ b/arch/powerpc/kernel/asm-offsets.c
> @@ -218,10 +218,12 @@ int main(void)
>  	OFFSET(PACA_EXGEN, paca_struct, exgen);
>  	OFFSET(PACA_EXMC, paca_struct, exmc);
>  	OFFSET(PACA_EXNMI, paca_struct, exnmi);
> +#ifdef CONFIG_PPC_64S_HASH_MMU
>  	OFFSET(PACA_SLBSHADOWPTR, paca_struct, slb_shadow_ptr);
>  	OFFSET(SLBSHADOW_STACKVSID, slb_shadow, save_area[SLB_NUM_BOLTED - 1].vsid);
>  	OFFSET(SLBSHADOW_STACKESID, slb_shadow, save_area[SLB_NUM_BOLTED - 1].esid);
>  	OFFSET(SLBSHADOW_SAVEAREA, slb_shadow, save_area);
> +#endif
>  	OFFSET(LPPACA_PMCINUSE, lppaca, pmcregs_in_use);
>  #ifdef CONFIG_KVM_BOOK3S_HV_POSSIBLE
>  	OFFSET(PACA_PMCINUSE, paca_struct, pmcregs_in_use);
> diff --git a/arch/powerpc/kernel/entry_64.S b/arch/powerpc/kernel/entry_64.S
> index 70cff7b49e17..9581906b5ee9 100644
> --- a/arch/powerpc/kernel/entry_64.S
> +++ b/arch/powerpc/kernel/entry_64.S
> @@ -180,7 +180,7 @@ _GLOBAL(_switch)
>  #endif
>
>  	ld	r8,KSP(r4)	/* new stack pointer */
> -#ifdef CONFIG_PPC_BOOK3S_64
> +#ifdef CONFIG_PPC_64S_HASH_MMU
>  BEGIN_MMU_FTR_SECTION
>  	b	2f
>  END_MMU_FTR_SECTION_IFSET(MMU_FTR_TYPE_RADIX)
> @@ -232,7 +232,7 @@ END_FTR_SECTION_IFCLR(CPU_FTR_ARCH_207S)
>  	slbmte	r7,r0
>  	isync
>  2:
> -#endif /* CONFIG_PPC_BOOK3S_64 */
> +#endif /* CONFIG_PPC_64S_HASH_MMU */
>
>  	clrrdi	r7, r8, THREAD_SHIFT	/* base of new stack */
>  	/* Note: this uses SWITCH_FRAME_SIZE rather than INT_FRAME_SIZE
> diff --git a/arch/powerpc/kernel/exceptions-64s.S b/arch/powerpc/kernel/exceptions-64s.S
> index 046c99e31d01..65b695e9401e 100644
> --- a/arch/powerpc/kernel/exceptions-64s.S
> +++ b/arch/powerpc/kernel/exceptions-64s.S
> @@ -1369,11 +1369,15 @@ EXC_COMMON_BEGIN(data_access_common)
>  	addi	r3,r1,STACK_FRAME_OVERHEAD
>  	andis.	r0,r4,DSISR_DABRMATCH@h
>  	bne-	1f
> +#ifdef CONFIG_PPC_64S_HASH_MMU
>  BEGIN_MMU_FTR_SECTION
>  	bl	do_hash_fault
>  MMU_FTR_SECTION_ELSE
>  	bl	do_page_fault
>  ALT_MMU_FTR_SECTION_END_IFCLR(MMU_FTR_TYPE_RADIX)
> +#else
> +	bl	do_page_fault
> +#endif
>  	b	interrupt_return_srr
>
>  1:	bl	do_break
> @@ -1416,6 +1420,7 @@ EXC_VIRT_BEGIN(data_access_slb, 0x4380, 0x80)
>  EXC_VIRT_END(data_access_slb, 0x4380, 0x80)
>  EXC_COMMON_BEGIN(data_access_slb_common)
>  	GEN_COMMON data_access_slb
> +#ifdef CONFIG_PPC_64S_HASH_MMU
>  BEGIN_MMU_FTR_SECTION
>  	/* HPT case, do SLB fault */
>  	addi	r3,r1,STACK_FRAME_OVERHEAD
> @@ -1428,6 +1433,9 @@ MMU_FTR_SECTION_ELSE
>  	/* Radix case, access is outside page table range */
>  	li	r3,-EFAULT
>  ALT_MMU_FTR_SECTION_END_IFCLR(MMU_FTR_TYPE_RADIX)
> +#else
> +	li	r3,-EFAULT
> +#endif
>  	std	r3,RESULT(r1)
>  	addi	r3,r1,STACK_FRAME_OVERHEAD
>  	bl	do_bad_segment_interrupt
> @@ -1462,11 +1470,15 @@ EXC_VIRT_END(instruction_access, 0x4400, 0x80)
>  EXC_COMMON_BEGIN(instruction_access_common)
>  	GEN_COMMON instruction_access
>  	addi	r3,r1,STACK_FRAME_OVERHEAD
> +#ifdef CONFIG_PPC_64S_HASH_MMU
>  BEGIN_MMU_FTR_SECTION
>  	bl	do_hash_fault
>  MMU_FTR_SECTION_ELSE
>  	bl	do_page_fault
>  ALT_MMU_FTR_SECTION_END_IFCLR(MMU_FTR_TYPE_RADIX)
> +#else
> +	bl	do_page_fault
> +#endif
>  	b	interrupt_return_srr
>
>
> @@ -1496,6 +1508,7 @@ EXC_VIRT_BEGIN(instruction_access_slb, 0x4480, 0x80)
>  EXC_VIRT_END(instruction_access_slb, 0x4480, 0x80)
>  EXC_COMMON_BEGIN(instruction_access_slb_common)
>  	GEN_COMMON instruction_access_slb
> +#ifdef CONFIG_PPC_64S_HASH_MMU
>  BEGIN_MMU_FTR_SECTION
>  	/* HPT case, do SLB fault */
>  	addi	r3,r1,STACK_FRAME_OVERHEAD
> @@ -1508,6 +1521,9 @@ MMU_FTR_SECTION_ELSE
>  	/* Radix case, access is outside page table range */
>  	li	r3,-EFAULT
>  ALT_MMU_FTR_SECTION_END_IFCLR(MMU_FTR_TYPE_RADIX)
> +#else
> +	li	r3,-EFAULT
> +#endif
>  	std	r3,RESULT(r1)
>  	addi	r3,r1,STACK_FRAME_OVERHEAD
>  	bl	do_bad_segment_interrupt
> diff --git a/arch/powerpc/kernel/mce.c b/arch/powerpc/kernel/mce.c
> index fd829f7f25a4..2503dd4713b9 100644
> --- a/arch/powerpc/kernel/mce.c
> +++ b/arch/powerpc/kernel/mce.c
> @@ -586,7 +586,7 @@ void machine_check_print_event_info(struct machine_check_event *evt,
>  		mc_error_class[evt->error_class] : "Unknown";
>  	printk("%sMCE: CPU%d: %s\n", level, evt->cpu, subtype);
>
> -#ifdef CONFIG_PPC_BOOK3S_64
> +#ifdef CONFIG_PPC_64S_HASH_MMU
>  	/* Display faulty slb contents for SLB errors. */
>  	if (evt->error_type == MCE_ERROR_TYPE_SLB && !in_guest)
>  		slb_dump_contents(local_paca->mce_faulty_slbs);
> diff --git a/arch/powerpc/kernel/mce_power.c b/arch/powerpc/kernel/mce_power.c
> index cf5263b648fc..a48ff18d6d65 100644
> --- a/arch/powerpc/kernel/mce_power.c
> +++ b/arch/powerpc/kernel/mce_power.c
> @@ -77,7 +77,7 @@ static bool mce_in_guest(void)
>  }
>
>  /* flush SLBs and reload */
> -#ifdef CONFIG_PPC_BOOK3S_64
> +#ifdef CONFIG_PPC_64S_HASH_MMU
>  void flush_and_reload_slb(void)
>  {
>  	if (early_radix_enabled())
> @@ -99,7 +99,7 @@ void flush_and_reload_slb(void)
>
>  void flush_erat(void)
>  {
> -#ifdef CONFIG_PPC_BOOK3S_64
> +#ifdef CONFIG_PPC_64S_HASH_MMU
>  	if (!early_cpu_has_feature(CPU_FTR_ARCH_300)) {
>  		flush_and_reload_slb();
>  		return;
> @@ -114,7 +114,7 @@ void flush_erat(void)
>
>  static int mce_flush(int what)
>  {
> -#ifdef CONFIG_PPC_BOOK3S_64
> +#ifdef CONFIG_PPC_64S_HASH_MMU
>  	if (what == MCE_FLUSH_SLB) {
>  		flush_and_reload_slb();
>  		return 1;
> @@ -499,8 +499,10 @@ static int mce_handle_ierror(struct pt_regs *regs, unsigned long srr1,
>  			/* attempt to correct the error */
>  			switch (table[i].error_type) {
>  			case MCE_ERROR_TYPE_SLB:
> +#ifdef CONFIG_PPC_64S_HASH_MMU
>  				if (local_paca->in_mce == 1)
>  					slb_save_contents(local_paca->mce_faulty_slbs);
> +#endif
>  				handled = mce_flush(MCE_FLUSH_SLB);
>  				break;
>  			case MCE_ERROR_TYPE_ERAT:
> @@ -588,8 +590,10 @@ static int mce_handle_derror(struct pt_regs *regs,
>  			/* attempt to correct the error */
>  			switch (table[i].error_type) {
>  			case MCE_ERROR_TYPE_SLB:
> +#ifdef CONFIG_PPC_64S_HASH_MMU
>  				if (local_paca->in_mce == 1)
>  					slb_save_contents(local_paca->mce_faulty_slbs);
> +#endif
>  				if (mce_flush(MCE_FLUSH_SLB))
>  					handled = 1;
>  				break;
> diff --git a/arch/powerpc/kernel/paca.c b/arch/powerpc/kernel/paca.c
> index 4208b4044d12..39da688a9455 100644
> --- a/arch/powerpc/kernel/paca.c
> +++ b/arch/powerpc/kernel/paca.c
> @@ -139,8 +139,7 @@ static struct lppaca * __init new_lppaca(int cpu, unsigned long limit)
>  }
>  #endif /* CONFIG_PPC_PSERIES */
>
> -#ifdef CONFIG_PPC_BOOK3S_64
> -
> +#ifdef CONFIG_PPC_64S_HASH_MMU
>  /*
>   * 3 persistent SLBs are allocated here.  The buffer will be zero
>   * initially, hence will all be invaild until we actually write them.
> @@ -169,8 +168,7 @@ static struct slb_shadow * __init new_slb_shadow(int cpu, unsigned long limit)
>
>  	return s;
>  }
> -
> -#endif /* CONFIG_PPC_BOOK3S_64 */
> +#endif /* CONFIG_PPC_64S_HASH_MMU */
>
>  #ifdef CONFIG_PPC_PSERIES
>  /**
> @@ -226,7 +224,7 @@ void __init initialise_paca(struct paca_struct *new_paca, int cpu)
>  	new_paca->kexec_state = KEXEC_STATE_NONE;
>  	new_paca->__current = &init_task;
>  	new_paca->data_offset = 0xfeeeeeeeeeeeeeeeULL;
> -#ifdef CONFIG_PPC_BOOK3S_64
> +#ifdef CONFIG_PPC_64S_HASH_MMU
>  	new_paca->slb_shadow_ptr = NULL;
>  #endif
>
> @@ -307,7 +305,7 @@ void __init allocate_paca(int cpu)
>  #ifdef CONFIG_PPC_PSERIES
>  	paca->lppaca_ptr = new_lppaca(cpu, limit);
>  #endif
> -#ifdef CONFIG_PPC_BOOK3S_64
> +#ifdef CONFIG_PPC_64S_HASH_MMU
>  	paca->slb_shadow_ptr = new_slb_shadow(cpu, limit);
>  #endif
>  #ifdef CONFIG_PPC_PSERIES
> @@ -328,7 +326,7 @@ void __init free_unused_pacas(void)
>  	paca_nr_cpu_ids = nr_cpu_ids;
>  	paca_ptrs_size = new_ptrs_size;
>
> -#ifdef CONFIG_PPC_BOOK3S_64
> +#ifdef CONFIG_PPC_64S_HASH_MMU
>  	if (early_radix_enabled()) {
>  		/* Ugly fixup, see new_slb_shadow() */
>  		memblock_phys_free(__pa(paca_ptrs[boot_cpuid]->slb_shadow_ptr),
> @@ -341,9 +339,9 @@ void __init free_unused_pacas(void)
>  			paca_ptrs_size + paca_struct_size, nr_cpu_ids);
>  }
>
> +#ifdef CONFIG_PPC_64S_HASH_MMU
>  void copy_mm_to_paca(struct mm_struct *mm)
>  {
> -#ifdef CONFIG_PPC_BOOK3S
>  	mm_context_t *context = &mm->context;
>
>  #ifdef CONFIG_PPC_MM_SLICES
> @@ -356,7 +354,5 @@ void copy_mm_to_paca(struct mm_struct *mm)
>  	get_paca()->mm_ctx_user_psize = context->user_psize;
>  	get_paca()->mm_ctx_sllp = context->sllp;
>  #endif
> -#else /* !CONFIG_PPC_BOOK3S */
> -	return;
> -#endif
>  }
> +#endif /* CONFIG_PPC_64S_HASH_MMU */
> diff --git a/arch/powerpc/kernel/process.c b/arch/powerpc/kernel/process.c
> index 5d2333d2a283..a64cfbb85ca2 100644
> --- a/arch/powerpc/kernel/process.c
> +++ b/arch/powerpc/kernel/process.c
> @@ -1240,7 +1240,7 @@ struct task_struct *__switch_to(struct task_struct *prev,
>  {
>  	struct thread_struct *new_thread, *old_thread;
>  	struct task_struct *last;
> -#ifdef CONFIG_PPC_BOOK3S_64
> +#ifdef CONFIG_PPC_64S_HASH_MMU
>  	struct ppc64_tlb_batch *batch;
>  #endif
>
> @@ -1249,7 +1249,7 @@ struct task_struct *__switch_to(struct task_struct *prev,
>
>  	WARN_ON(!irqs_disabled());
>
> -#ifdef CONFIG_PPC_BOOK3S_64
> +#ifdef CONFIG_PPC_64S_HASH_MMU
>  	batch = this_cpu_ptr(&ppc64_tlb_batch);
>  	if (batch->active) {
>  		current_thread_info()->local_flags |= _TLF_LAZY_MMU;
> @@ -1328,6 +1328,7 @@ struct task_struct *__switch_to(struct task_struct *prev,
>  	 */
>
>  #ifdef CONFIG_PPC_BOOK3S_64
> +#ifdef CONFIG_PPC_64S_HASH_MMU
>  	/*
>  	 * This applies to a process that was context switched while inside
>  	 * arch_enter_lazy_mmu_mode(), to re-activate the batch that was
> @@ -1339,6 +1340,7 @@ struct task_struct *__switch_to(struct task_struct *prev,
>  		batch = this_cpu_ptr(&ppc64_tlb_batch);
>  		batch->active = 1;
>  	}
> +#endif
>
>  	/*
>  	 * Math facilities are masked out of the child MSR in copy_thread.
> @@ -1689,7 +1691,7 @@ int arch_dup_task_struct(struct task_struct *dst, struct task_struct *src)
>
>  static void setup_ksp_vsid(struct task_struct *p, unsigned long sp)
>  {
> -#ifdef CONFIG_PPC_BOOK3S_64
> +#ifdef CONFIG_PPC_64S_HASH_MMU
>  	unsigned long sp_vsid;
>  	unsigned long llp = mmu_psize_defs[mmu_linear_psize].sllp;
>
> @@ -2333,10 +2335,9 @@ unsigned long arch_randomize_brk(struct mm_struct *mm)
>  	 * the heap, we can put it above 1TB so it is backed by a 1TB
>  	 * segment. Otherwise the heap will be in the bottom 1TB
>  	 * which always uses 256MB segments and this may result in a
> -	 * performance penalty. We don't need to worry about radix. For
> -	 * radix, mmu_highuser_ssize remains unchanged from 256MB.
> +	 * performance penalty.
>  	 */
> -	if (!is_32bit_task() && (mmu_highuser_ssize == MMU_SEGSIZE_1T))
> +	if (!radix_enabled() && !is_32bit_task() && (mmu_highuser_ssize == MMU_SEGSIZE_1T))
>  		base = max_t(unsigned long, mm->brk, 1UL << SID_SHIFT_1T);
>  #endif
>
> diff --git a/arch/powerpc/kernel/prom.c b/arch/powerpc/kernel/prom.c
> index 2e67588f6f6e..2197404cdcc4 100644
> --- a/arch/powerpc/kernel/prom.c
> +++ b/arch/powerpc/kernel/prom.c
> @@ -234,6 +234,7 @@ static void __init check_cpu_pa_features(unsigned long node)
>  #ifdef CONFIG_PPC_BOOK3S_64
>  static void __init init_mmu_slb_size(unsigned long node)
>  {
> +#ifdef CONFIG_PPC_64S_HASH_MMU
>  	const __be32 *slb_size_ptr;
>
>  	slb_size_ptr = of_get_flat_dt_prop(node, "slb-size", NULL) ? :
> @@ -241,6 +242,7 @@ static void __init init_mmu_slb_size(unsigned long node)
>
>  	if (slb_size_ptr)
>  		mmu_slb_size = be32_to_cpup(slb_size_ptr);
> +#endif
>  }
>  #else
>  #define init_mmu_slb_size(node) do { } while(0)
> diff --git a/arch/powerpc/kernel/setup_64.c b/arch/powerpc/kernel/setup_64.c
> index 9a493796ce66..22647bb82198 100644
> --- a/arch/powerpc/kernel/setup_64.c
> +++ b/arch/powerpc/kernel/setup_64.c
> @@ -887,6 +887,8 @@ void __init setup_per_cpu_areas(void)
>  	} else if (radix_enabled()) {
>  		atom_size = PAGE_SIZE;
>  	} else {
> +#ifdef CONFIG_PPC_64S_HASH_MMU
> +
>  		/*
>  		 * Linear mapping is one of 4K, 1M and 16M.  For 4K, no need
>  		 * to group units.  For larger mappings, use 1M atom which
> @@ -896,6 +898,9 @@ void __init setup_per_cpu_areas(void)
>  			atom_size = PAGE_SIZE;
>  		else
>  			atom_size = SZ_1M;
> +#else
> +		BUILD_BUG(); // radix_enabled() should be constant true
> +#endif
>  	}
>
>  	if (pcpu_chosen_fc != PCPU_FC_PAGE) {
> diff --git a/arch/powerpc/kexec/core_64.c b/arch/powerpc/kexec/core_64.c
> index 66678518b938..635b5fc30b53 100644
> --- a/arch/powerpc/kexec/core_64.c
> +++ b/arch/powerpc/kexec/core_64.c
> @@ -378,7 +378,7 @@ void default_machine_kexec(struct kimage *image)
>  	/* NOTREACHED */
>  }
>
> -#ifdef CONFIG_PPC_BOOK3S_64
> +#ifdef CONFIG_PPC_64S_HASH_MMU
>  /* Values we need to export to the second kernel via the device tree. */
>  static unsigned long htab_base;
>  static unsigned long htab_size;
> @@ -420,4 +420,4 @@ static int __init export_htab_values(void)
>  	return 0;
>  }
>  late_initcall(export_htab_values);
> -#endif /* CONFIG_PPC_BOOK3S_64 */
> +#endif /* CONFIG_PPC_64S_HASH_MMU */
> diff --git a/arch/powerpc/kexec/ranges.c b/arch/powerpc/kexec/ranges.c
> index 6b81c852feab..92d831621fa0 100644
> --- a/arch/powerpc/kexec/ranges.c
> +++ b/arch/powerpc/kexec/ranges.c
> @@ -306,10 +306,14 @@ int add_initrd_mem_range(struct crash_mem **mem_ranges)
>   */
>  int add_htab_mem_range(struct crash_mem **mem_ranges)
>  {
> +#ifdef CONFIG_PPC_64S_HASH_MMU
>  	if (!htab_address)
>  		return 0;
>
>  	return add_mem_range(mem_ranges, __pa(htab_address), htab_size_bytes);
> +#else
> +	return 0;
> +#endif
>  }
>  #endif
>
> diff --git a/arch/powerpc/mm/book3s64/Makefile b/arch/powerpc/mm/book3s64/Makefile
> index 501efadb287f..2d50cac499c5 100644
> --- a/arch/powerpc/mm/book3s64/Makefile
> +++ b/arch/powerpc/mm/book3s64/Makefile
> @@ -2,20 +2,23 @@
>
>  ccflags-y	:= $(NO_MINIMAL_TOC)
>
> +obj-y				+= mmu_context.o pgtable.o trace.o
> +ifdef CONFIG_PPC_64S_HASH_MMU
>  CFLAGS_REMOVE_slb.o = $(CC_FLAGS_FTRACE)
> -
> -obj-y				+= hash_pgtable.o hash_utils.o slb.o \
> -				   mmu_context.o pgtable.o hash_tlb.o trace.o
> +obj-y				+= hash_pgtable.o hash_utils.o hash_tlb.o slb.o
>  obj-$(CONFIG_PPC_HASH_MMU_NATIVE)	+= hash_native.o
> -obj-$(CONFIG_PPC_RADIX_MMU)	+= radix_pgtable.o radix_tlb.o
>  obj-$(CONFIG_PPC_4K_PAGES)	+= hash_4k.o
>  obj-$(CONFIG_PPC_64K_PAGES)	+= hash_64k.o
> +obj-$(CONFIG_TRANSPARENT_HUGEPAGE) += hash_hugepage.o
> +obj-$(CONFIG_PPC_SUBPAGE_PROT)	+= subpage_prot.o
> +endif
> +
>  obj-$(CONFIG_HUGETLB_PAGE)	+= hugetlbpage.o
> +
> +obj-$(CONFIG_PPC_RADIX_MMU)	+= radix_pgtable.o radix_tlb.o
>  ifdef CONFIG_HUGETLB_PAGE
>  obj-$(CONFIG_PPC_RADIX_MMU)	+= radix_hugetlbpage.o
>  endif
> -obj-$(CONFIG_TRANSPARENT_HUGEPAGE) += hash_hugepage.o
> -obj-$(CONFIG_PPC_SUBPAGE_PROT)	+= subpage_prot.o
>  obj-$(CONFIG_SPAPR_TCE_IOMMU)	+= iommu_api.o
>  obj-$(CONFIG_PPC_PKEY)	+= pkeys.o
>
> diff --git a/arch/powerpc/mm/book3s64/hugetlbpage.c b/arch/powerpc/mm/book3s64/hugetlbpage.c
> index a688e1324ae5..95b2a283fd6e 100644
> --- a/arch/powerpc/mm/book3s64/hugetlbpage.c
> +++ b/arch/powerpc/mm/book3s64/hugetlbpage.c
> @@ -16,6 +16,7 @@
>  unsigned int hpage_shift;
>  EXPORT_SYMBOL(hpage_shift);
>
> +#ifdef CONFIG_PPC_64S_HASH_MMU
>  int __hash_page_huge(unsigned long ea, unsigned long access, unsigned long vsid,
>  		     pte_t *ptep, unsigned long trap, unsigned long flags,
>  		     int ssize, unsigned int shift, unsigned int mmu_psize)
> @@ -122,6 +123,7 @@ int __hash_page_huge(unsigned long ea, unsigned long access, unsigned long vsid,
>  	*ptep = __pte(new_pte & ~H_PAGE_BUSY);
>  	return 0;
>  }
> +#endif
>
>  pte_t huge_ptep_modify_prot_start(struct vm_area_struct *vma,
>  				  unsigned long addr, pte_t *ptep)
> diff --git a/arch/powerpc/mm/book3s64/mmu_context.c b/arch/powerpc/mm/book3s64/mmu_context.c
> index c10fc8a72fb3..24aa953c9311 100644
> --- a/arch/powerpc/mm/book3s64/mmu_context.c
> +++ b/arch/powerpc/mm/book3s64/mmu_context.c
> @@ -31,6 +31,7 @@ static int alloc_context_id(int min_id, int max_id)
>  	return ida_alloc_range(&mmu_context_ida, min_id, max_id, GFP_KERNEL);
>  }
>
> +#ifdef CONFIG_PPC_64S_HASH_MMU
>  void hash__reserve_context_id(int id)
>  {
>  	int result = ida_alloc_range(&mmu_context_ida, id, id, GFP_KERNEL);
> @@ -50,7 +51,9 @@ int hash__alloc_context_id(void)
>  	return alloc_context_id(MIN_USER_CONTEXT, max);
>  }
>  EXPORT_SYMBOL_GPL(hash__alloc_context_id);
> +#endif
>
> +#ifdef CONFIG_PPC_64S_HASH_MMU
>  static int realloc_context_ids(mm_context_t *ctx)
>  {
>  	int i, id;
> @@ -150,6 +153,13 @@ void hash__setup_new_exec(void)
>
>  	slb_setup_new_exec();
>  }
> +#else
> +static inline int hash__init_new_context(struct mm_struct *mm)
> +{
> +	BUILD_BUG();
> +	return 0;
> +}
> +#endif
>
>  static int radix__init_new_context(struct mm_struct *mm)
>  {
> @@ -175,7 +185,9 @@ static int radix__init_new_context(struct mm_struct *mm)
>  	 */
>  	asm volatile("ptesync;isync" : : : "memory");
>
> +#ifdef CONFIG_PPC_64S_HASH_MMU
>  	mm->context.hash_context = NULL;
> +#endif
>
>  	return index;
>  }
> @@ -213,14 +225,22 @@ EXPORT_SYMBOL_GPL(__destroy_context);
>
>  static void destroy_contexts(mm_context_t *ctx)
>  {
> -	int index, context_id;
> +	if (radix_enabled()) {
> +		ida_free(&mmu_context_ida, ctx->id);
> +	} else {
> +#ifdef CONFIG_PPC_64S_HASH_MMU
> +		int index, context_id;
>
> -	for (index = 0; index < ARRAY_SIZE(ctx->extended_id); index++) {
> -		context_id = ctx->extended_id[index];
> -		if (context_id)
> -			ida_free(&mmu_context_ida, context_id);
> +		for (index = 0; index < ARRAY_SIZE(ctx->extended_id); index++) {
> +			context_id = ctx->extended_id[index];
> +			if (context_id)
> +				ida_free(&mmu_context_ida, context_id);
> +		}
> +		kfree(ctx->hash_context);
> +#else
> +		BUILD_BUG(); // radix_enabled() should be constant true
> +#endif
>  	}
> -	kfree(ctx->hash_context);
>  }
>
>  static void pmd_frag_destroy(void *pmd_frag)
> diff --git a/arch/powerpc/mm/book3s64/pgtable.c b/arch/powerpc/mm/book3s64/pgtable.c
> index 4d97d1525d49..d765d972566b 100644
> --- a/arch/powerpc/mm/book3s64/pgtable.c
> +++ b/arch/powerpc/mm/book3s64/pgtable.c
> @@ -534,7 +534,7 @@ static int __init pgtable_debugfs_setup(void)
>  }
>  arch_initcall(pgtable_debugfs_setup);
>
> -#ifdef CONFIG_ZONE_DEVICE
> +#if defined(CONFIG_ZONE_DEVICE) && defined(ARCH_HAS_MEMREMAP_COMPAT_ALIGN)
>  /*
>   * Override the generic version in mm/memremap.c.
>   *
> diff --git a/arch/powerpc/mm/book3s64/radix_pgtable.c b/arch/powerpc/mm/book3s64/radix_pgtable.c
> index 77820036c722..99dbee114539 100644
> --- a/arch/powerpc/mm/book3s64/radix_pgtable.c
> +++ b/arch/powerpc/mm/book3s64/radix_pgtable.c
> @@ -334,8 +334,10 @@ static void __init radix_init_pgtable(void)
>  	phys_addr_t start, end;
>  	u64 i;
>
> +#ifdef CONFIG_PPC_64S_HASH_MMU
>  	/* We don't support slb for radix */
>  	mmu_slb_size = 0;
> +#endif
>
>  	/*
>  	 * Create the linear mapping
> @@ -576,6 +578,7 @@ void __init radix__early_init_mmu(void)
>  {
>  	unsigned long lpcr;
>
> +#ifdef CONFIG_PPC_64S_HASH_MMU
>  #ifdef CONFIG_PPC_64K_PAGES
>  	/* PAGE_SIZE mappings */
>  	mmu_virtual_psize = MMU_PAGE_64K;
> @@ -592,6 +595,7 @@ void __init radix__early_init_mmu(void)
>  		mmu_vmemmap_psize = MMU_PAGE_2M;
>  	} else
>  		mmu_vmemmap_psize = mmu_virtual_psize;
> +#endif
>  #endif
>  	/*
>  	 * initialize page table size
> diff --git a/arch/powerpc/mm/copro_fault.c b/arch/powerpc/mm/copro_fault.c
> index 8acd00178956..c1cb21a00884 100644
> --- a/arch/powerpc/mm/copro_fault.c
> +++ b/arch/powerpc/mm/copro_fault.c
> @@ -82,6 +82,7 @@ int copro_handle_mm_fault(struct mm_struct *mm, unsigned long ea,
>  }
>  EXPORT_SYMBOL_GPL(copro_handle_mm_fault);
>
> +#ifdef CONFIG_PPC_64S_HASH_MMU
>  int copro_calculate_slb(struct mm_struct *mm, u64 ea, struct copro_slb *slb)
>  {
>  	u64 vsid, vsidkey;
> @@ -146,3 +147,4 @@ void copro_flush_all_slbs(struct mm_struct *mm)
>  	cxl_slbia(mm);
>  }
>  EXPORT_SYMBOL_GPL(copro_flush_all_slbs);
> +#endif
> diff --git a/arch/powerpc/mm/ptdump/Makefile b/arch/powerpc/mm/ptdump/Makefile
> index 4050cbb55acf..b533caaf0910 100644
> --- a/arch/powerpc/mm/ptdump/Makefile
> +++ b/arch/powerpc/mm/ptdump/Makefile
> @@ -10,5 +10,5 @@ obj-$(CONFIG_PPC_BOOK3S_64)	+= book3s64.o
>
>  ifdef CONFIG_PTDUMP_DEBUGFS
>  obj-$(CONFIG_PPC_BOOK3S_32)	+= bats.o segment_regs.o
> -obj-$(CONFIG_PPC_BOOK3S_64)	+= hashpagetable.o
> +obj-$(CONFIG_PPC_64S_HASH_MMU)	+= hashpagetable.o
>  endif
> diff --git a/arch/powerpc/platforms/powernv/idle.c b/arch/powerpc/platforms/powernv/idle.c
> index 3bc84e2fe064..f0edfe6c1598 100644
> --- a/arch/powerpc/platforms/powernv/idle.c
> +++ b/arch/powerpc/platforms/powernv/idle.c
> @@ -491,12 +491,14 @@ static unsigned long power7_idle_insn(unsigned long type)
>
>  	mtspr(SPRN_SPRG3,	local_paca->sprg_vdso);
>
> +#ifdef CONFIG_PPC_64S_HASH_MMU
>  	/*
>  	 * The SLB has to be restored here, but it sometimes still
>  	 * contains entries, so the __ variant must be used to prevent
>  	 * multi hits.
>  	 */
>  	__slb_restore_bolted_realmode();
> +#endif
>
>  	return srr1;
>  }
> diff --git a/arch/powerpc/platforms/powernv/setup.c b/arch/powerpc/platforms/powernv/setup.c
> index 5ef6b8afb3d0..f37d6524a24d 100644
> --- a/arch/powerpc/platforms/powernv/setup.c
> +++ b/arch/powerpc/platforms/powernv/setup.c
> @@ -211,6 +211,7 @@ static void __init pnv_init(void)
>  #endif
>  		add_preferred_console("hvc", 0, NULL);
>
> +#ifdef CONFIG_PPC_64S_HASH_MMU
>  	if (!radix_enabled()) {
>  		size_t size = sizeof(struct slb_entry) * mmu_slb_size;
>  		int i;
> @@ -223,6 +224,7 @@ static void __init pnv_init(void)
>  						cpu_to_node(i));
>  		}
>  	}
> +#endif
>  }
>
>  static void __init pnv_init_IRQ(void)
> diff --git a/arch/powerpc/platforms/pseries/lpar.c b/arch/powerpc/platforms/pseries/lpar.c
> index 06d6a824c0dc..fac5d86777db 100644
> --- a/arch/powerpc/platforms/pseries/lpar.c
> +++ b/arch/powerpc/platforms/pseries/lpar.c
> @@ -58,6 +58,7 @@ EXPORT_SYMBOL(plpar_hcall);
>  EXPORT_SYMBOL(plpar_hcall9);
>  EXPORT_SYMBOL(plpar_hcall_norets);
>
> +#ifdef CONFIG_PPC_64S_HASH_MMU
>  /*
>   * H_BLOCK_REMOVE supported block size for this page size in segment who's base
>   * page size is that page size.
> @@ -66,6 +67,7 @@ EXPORT_SYMBOL(plpar_hcall_norets);
>   * page size.
>   */
>  static int hblkrm_size[MMU_PAGE_COUNT][MMU_PAGE_COUNT] __ro_after_init;
> +#endif
>
>  /*
>   * Due to the involved complexity, and that the current hypervisor is only
> @@ -689,7 +691,7 @@ void vpa_init(int cpu)
>  		return;
>  	}
>
> -#ifdef CONFIG_PPC_BOOK3S_64
> +#ifdef CONFIG_PPC_64S_HASH_MMU
>  	/*
>  	 * PAPR says this feature is SLB-Buffer but firmware never
>  	 * reports that.  All SPLPAR support SLB shadow buffer.
> @@ -702,7 +704,7 @@ void vpa_init(int cpu)
>  			       "cpu %d (hw %d) of area %lx failed with %ld\n",
>  			       cpu, hwcpu, addr, ret);
>  	}
> -#endif /* CONFIG_PPC_BOOK3S_64 */
> +#endif /* CONFIG_PPC_64S_HASH_MMU */
>
>  	/*
>  	 * Register dispatch trace log, if one has been allocated.
> @@ -740,6 +742,8 @@ static int pseries_lpar_register_process_table(unsigned long base,
>  	return rc;
>  }
>
> +#ifdef CONFIG_PPC_64S_HASH_MMU
> +
>  static long pSeries_lpar_hpte_insert(unsigned long hpte_group,
>  				     unsigned long vpn, unsigned long pa,
>  				     unsigned long rflags, unsigned long vflags,
> @@ -1730,6 +1734,7 @@ void __init hpte_init_pseries(void)
>  	if (cpu_has_feature(CPU_FTR_ARCH_300))
>  		pseries_lpar_register_process_table(0, 0, 0);
>  }
> +#endif /* CONFIG_PPC_64S_HASH_MMU */
>
>  #ifdef CONFIG_PPC_RADIX_MMU
>  void radix_init_pseries(void)
> @@ -1932,6 +1937,7 @@ int h_get_mpp_x(struct hvcall_mpp_x_data *mpp_x_data)
>  	return rc;
>  }
>
> +#ifdef CONFIG_PPC_64S_HASH_MMU
>  static unsigned long vsid_unscramble(unsigned long vsid, int ssize)
>  {
>  	unsigned long protovsid;
> @@ -1992,6 +1998,7 @@ static int __init reserve_vrma_context_id(void)
>  	return 0;
>  }
>  machine_device_initcall(pseries, reserve_vrma_context_id);
> +#endif
>
>  #ifdef CONFIG_DEBUG_FS
>  /* debugfs file interface for vpa data */
> diff --git a/arch/powerpc/platforms/pseries/lparcfg.c b/arch/powerpc/platforms/pseries/lparcfg.c
> index 3354c00914fa..c7940fcfc911 100644
> --- a/arch/powerpc/platforms/pseries/lparcfg.c
> +++ b/arch/powerpc/platforms/pseries/lparcfg.c
> @@ -531,7 +531,7 @@ static int pseries_lparcfg_data(struct seq_file *m, void *v)
>  	seq_printf(m, "shared_processor_mode=%d\n",
>  		   lppaca_shared_proc(get_lppaca()));
>
> -#ifdef CONFIG_PPC_BOOK3S_64
> +#ifdef CONFIG_PPC_64S_HASH_MMU
>  	if (!radix_enabled())
>  		seq_printf(m, "slb_size=%d\n", mmu_slb_size);
>  #endif
> diff --git a/arch/powerpc/platforms/pseries/mobility.c b/arch/powerpc/platforms/pseries/mobility.c
> index 210a37a065fb..21b706bcea76 100644
> --- a/arch/powerpc/platforms/pseries/mobility.c
> +++ b/arch/powerpc/platforms/pseries/mobility.c
> @@ -451,11 +451,15 @@ static void prod_others(void)
>
>  static u16 clamp_slb_size(void)
>  {
> +#ifdef CONFIG_PPC_64S_HASH_MMU
>  	u16 prev = mmu_slb_size;
>
>  	slb_set_size(SLB_MIN_SIZE);
>
>  	return prev;
> +#else
> +	return 0;
> +#endif
>  }
>
>  static int do_suspend(void)
> @@ -480,7 +484,9 @@ static int do_suspend(void)
>  	ret = rtas_ibm_suspend_me(&status);
>  	if (ret != 0) {
>  		pr_err("ibm,suspend-me error: %d\n", status);
> +#ifdef CONFIG_PPC_64S_HASH_MMU
>  		slb_set_size(saved_slb_size);
> +#endif
>  	}
>
>  	return ret;
> diff --git a/arch/powerpc/platforms/pseries/ras.c b/arch/powerpc/platforms/pseries/ras.c
> index 56092dccfdb8..74c9b1b5bc66 100644
> --- a/arch/powerpc/platforms/pseries/ras.c
> +++ b/arch/powerpc/platforms/pseries/ras.c
> @@ -526,6 +526,7 @@ static int mce_handle_err_realmode(int disposition, u8 error_type)
>  			disposition = RTAS_DISP_FULLY_RECOVERED;
>  			break;
>  		case	MC_ERROR_TYPE_SLB:
> +#ifdef CONFIG_PPC_64S_HASH_MMU
>  			/*
>  			 * Store the old slb content in paca before flushing.
>  			 * Print this when we go to virtual mode.
> @@ -538,6 +539,7 @@ static int mce_handle_err_realmode(int disposition, u8 error_type)
>  				slb_save_contents(local_paca->mce_faulty_slbs);
>  			flush_and_reload_slb();
>  			disposition = RTAS_DISP_FULLY_RECOVERED;
> +#endif
>  			break;
>  		default:
>  			break;
> diff --git a/arch/powerpc/platforms/pseries/reconfig.c b/arch/powerpc/platforms/pseries/reconfig.c
> index 7f7369fec46b..80dae18d6621 100644
> --- a/arch/powerpc/platforms/pseries/reconfig.c
> +++ b/arch/powerpc/platforms/pseries/reconfig.c
> @@ -337,8 +337,10 @@ static int do_update_property(char *buf, size_t bufsize)
>  	if (!newprop)
>  		return -ENOMEM;
>
> +#ifdef CONFIG_PPC_64S_HASH_MMU
>  	if (!strcmp(name, "slb-size") || !strcmp(name, "ibm,slb-size"))
>  		slb_set_size(*(int *)value);
> +#endif
>
>  	return of_update_property(np, newprop);
>  }
> diff --git a/arch/powerpc/platforms/pseries/setup.c b/arch/powerpc/platforms/pseries/setup.c
> index 8a62af5b9c24..7f69237d4fa4 100644
> --- a/arch/powerpc/platforms/pseries/setup.c
> +++ b/arch/powerpc/platforms/pseries/setup.c
> @@ -112,7 +112,7 @@ static void __init fwnmi_init(void)
>  	u8 *mce_data_buf;
>  	unsigned int i;
>  	int nr_cpus = num_possible_cpus();
> -#ifdef CONFIG_PPC_BOOK3S_64
> +#ifdef CONFIG_PPC_64S_HASH_MMU
>  	struct slb_entry *slb_ptr;
>  	size_t size;
>  #endif
> @@ -152,7 +152,7 @@ static void __init fwnmi_init(void)
>  						(RTAS_ERROR_LOG_MAX * i);
>  	}
>
> -#ifdef CONFIG_PPC_BOOK3S_64
> +#ifdef CONFIG_PPC_64S_HASH_MMU
>  	if (!radix_enabled()) {
>  		/* Allocate per cpu area to save old slb contents during MCE */
>  		size = sizeof(struct slb_entry) * mmu_slb_size * nr_cpus;
> @@ -801,7 +801,9 @@ static void __init pSeries_setup_arch(void)
>  	fwnmi_init();
>
>  	pseries_setup_security_mitigations();
> +#ifdef CONFIG_PPC_64S_HASH_MMU
>  	pseries_lpar_read_hblkrm_characteristics();
> +#endif
>
>  	/* By default, only probe PCI (can be overridden by rtas_pci) */
>  	pci_add_flags(PCI_PROBE_ONLY);
> diff --git a/arch/powerpc/xmon/xmon.c b/arch/powerpc/xmon/xmon.c
> index 3b2be65a32f2..46dd292a4694 100644
> --- a/arch/powerpc/xmon/xmon.c
> +++ b/arch/powerpc/xmon/xmon.c
> @@ -1159,7 +1159,7 @@ cmds(struct pt_regs *excp)
>  		case 'P':
>  			show_tasks();
>  			break;
> -#ifdef CONFIG_PPC_BOOK3S
> +#if defined(CONFIG_PPC_BOOK3S_32) || defined(CONFIG_PPC_64S_HASH_MMU)
>  		case 'u':
>  			dump_segments();
>  			break;
> @@ -2614,7 +2614,7 @@ static void dump_tracing(void)
>  static void dump_one_paca(int cpu)
>  {
>  	struct paca_struct *p;
> -#ifdef CONFIG_PPC_BOOK3S_64
> +#ifdef CONFIG_PPC_64S_HASH_MMU
>  	int i = 0;
>  #endif
>
> @@ -2656,6 +2656,7 @@ static void dump_one_paca(int cpu)
>  	DUMP(p, cpu_start, "%#-*x");
>  	DUMP(p, kexec_state, "%#-*x");
>  #ifdef CONFIG_PPC_BOOK3S_64
> +#ifdef CONFIG_PPC_64S_HASH_MMU
>  	if (!early_radix_enabled()) {
>  		for (i = 0; i < SLB_NUM_BOLTED; i++) {
>  			u64 esid, vsid;
> @@ -2683,6 +2684,7 @@ static void dump_one_paca(int cpu)
>  				       22, "slb_cache", i, p->slb_cache[i]);
>  		}
>  	}
> +#endif
>
>  	DUMP(p, rfi_flush_fallback_area, "%-*px");
>  #endif
> @@ -3746,7 +3748,7 @@ static void xmon_print_symbol(unsigned long address, const char *mid,
>  	printf("%s", after);
>  }
>
> -#ifdef CONFIG_PPC_BOOK3S_64
> +#ifdef CONFIG_PPC_64S_HASH_MMU
>  void dump_segments(void)
>  {
>  	int i;
> diff --git a/drivers/misc/lkdtm/core.c b/drivers/misc/lkdtm/core.c
> index 609d9ee2acc0..82fb276f7e09 100644
> --- a/drivers/misc/lkdtm/core.c
> +++ b/drivers/misc/lkdtm/core.c
> @@ -182,7 +182,7 @@ static const struct crashtype crashtypes[] = {
>  	CRASHTYPE(FORTIFIED_SUBOBJECT),
>  	CRASHTYPE(FORTIFIED_STRSCPY),
>  	CRASHTYPE(DOUBLE_FAULT),
> -#ifdef CONFIG_PPC_BOOK3S_64
> +#ifdef CONFIG_PPC_64S_HASH_MMU
>  	CRASHTYPE(PPC_SLB_MULTIHIT),
>  #endif
>  };

^ permalink raw reply	[flat|nested] 31+ messages in thread

* Re: [PATCH v6 17/18] powerpc/64s: Move hash MMU support code under CONFIG_PPC_64S_HASH_MMU
  2021-12-01 14:41 ` [PATCH v6 17/18] powerpc/64s: Move hash MMU support code under CONFIG_PPC_64S_HASH_MMU Nicholas Piggin
  2021-12-02 13:30   ` Fabiano Rosas
@ 2021-12-03 12:34   ` Michael Ellerman
  2021-12-07 12:58   ` Michael Ellerman
  2021-12-07 13:00   ` Michael Ellerman
  3 siblings, 0 replies; 31+ messages in thread
From: Michael Ellerman @ 2021-12-03 12:34 UTC (permalink / raw)
  To: Nicholas Piggin, linuxppc-dev; +Cc: Nicholas Piggin

Nicholas Piggin <npiggin@gmail.com> writes:
> diff --git a/arch/powerpc/mm/book3s64/pgtable.c b/arch/powerpc/mm/book3s64/pgtable.c
> index 4d97d1525d49..d765d972566b 100644
> --- a/arch/powerpc/mm/book3s64/pgtable.c
> +++ b/arch/powerpc/mm/book3s64/pgtable.c
> @@ -534,7 +534,7 @@ static int __init pgtable_debugfs_setup(void)
>  }
>  arch_initcall(pgtable_debugfs_setup);
>  
> -#ifdef CONFIG_ZONE_DEVICE
> +#if defined(CONFIG_ZONE_DEVICE) && defined(ARCH_HAS_MEMREMAP_COMPAT_ALIGN)
                                              ^
                                              This needs "CONFIG_"

I fixed it up when applying.
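
For reference, the guard as applied would then read (a one-line sketch
of just the fixed-up line, with nothing else changed):

#if defined(CONFIG_ZONE_DEVICE) && defined(CONFIG_ARCH_HAS_MEMREMAP_COMPAT_ALIGN)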

cheers

^ permalink raw reply	[flat|nested] 31+ messages in thread

* Re: [PATCH v6 17/18] powerpc/64s: Move hash MMU support code under CONFIG_PPC_64S_HASH_MMU
  2021-12-01 14:41 ` [PATCH v6 17/18] powerpc/64s: Move hash MMU support code under CONFIG_PPC_64S_HASH_MMU Nicholas Piggin
  2021-12-02 13:30   ` Fabiano Rosas
  2021-12-03 12:34   ` Michael Ellerman
@ 2021-12-07 12:58   ` Michael Ellerman
  2021-12-07 13:00   ` Michael Ellerman
  3 siblings, 0 replies; 31+ messages in thread
From: Michael Ellerman @ 2021-12-07 12:58 UTC (permalink / raw)
  To: Nicholas Piggin, linuxppc-dev; +Cc: Nicholas Piggin

Nicholas Piggin <npiggin@gmail.com> writes:
> diff --git a/arch/powerpc/include/asm/book3s/64/mmu.h b/arch/powerpc/include/asm/book3s/64/mmu.h
> index 015d7d972d16..c480d21a146c 100644
> --- a/arch/powerpc/include/asm/book3s/64/mmu.h
> +++ b/arch/powerpc/include/asm/book3s/64/mmu.h
> @@ -239,8 +251,9 @@ static inline void setup_initial_memory_limit(phys_addr_t first_memblock_base,
>  	 * know which translations we will pick. Hence go with hash
>  	 * restrictions.
>  	 */
> -	return hash__setup_initial_memory_limit(first_memblock_base,
> -					   first_memblock_size);
> +	if (!radix_enabled())
> +		hash__setup_initial_memory_limit(first_memblock_base,
> +						 first_memblock_size);
>  }

This needs to use early_radix_enabled(); it's called before jump label
patching.

With the jump label feature check debugging on, you get a warning:

  Booting Linux via __start() @ 0x0000000000400000 ...
  [    0.000000][    T0] Warning! mmu_has_feature() used prior to jump label init!
  [    0.000000][    T0] CPU: 0 PID: 0 Comm: swapper Not tainted 5.16.0-rc2-00167-ga2397104dbef #149
  [    0.000000][    T0] Call Trace:
  [    0.000000][    T0] [c000000002843e20] [c000000000894d40] dump_stack_lvl+0x74/0xa8 (unreliable)
  [    0.000000][    T0] [c000000002843e60] [c000000002009a28] early_init_devtree+0x164/0x554
  [    0.000000][    T0] [c000000002843f10] [c00000000200b3d4] early_setup+0xc8/0x280
  [    0.000000][    T0] [c000000002843f90] [000000000000d368] 0xd368


Or otherwise a really obscure crash :D
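
For reference, a minimal sketch of that fixup (illustrative only; the
squashed commit may differ in detail):

static inline void setup_initial_memory_limit(phys_addr_t first_memblock_base,
					      phys_addr_t first_memblock_size)
{
	/*
	 * Runs from early_init_devtree(), before jump labels are
	 * patched, so use the early_ MMU feature test rather than
	 * radix_enabled().
	 */
	if (!early_radix_enabled())
		hash__setup_initial_memory_limit(first_memblock_base,
						 first_memblock_size);
}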

cheers

^ permalink raw reply	[flat|nested] 31+ messages in thread

* Re: [PATCH v6 17/18] powerpc/64s: Move hash MMU support code under CONFIG_PPC_64S_HASH_MMU
  2021-12-01 14:41 ` [PATCH v6 17/18] powerpc/64s: Move hash MMU support code under CONFIG_PPC_64S_HASH_MMU Nicholas Piggin
                     ` (2 preceding siblings ...)
  2021-12-07 12:58   ` Michael Ellerman
@ 2021-12-07 13:00   ` Michael Ellerman
  2021-12-09  8:30     ` Nicholas Piggin
  3 siblings, 1 reply; 31+ messages in thread
From: Michael Ellerman @ 2021-12-07 13:00 UTC (permalink / raw)
  To: Nicholas Piggin, linuxppc-dev; +Cc: Nicholas Piggin

Nicholas Piggin <npiggin@gmail.com> writes:
> Compiling out hash support code when CONFIG_PPC_64S_HASH_MMU=n saves
> 128kB kernel image size (90kB text) on powernv_defconfig minus KVM,
> 350kB on pseries_defconfig minus KVM, 40kB on a tiny config.
>
> Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
> ---
>  arch/powerpc/Kconfig                          |  2 +-
>  arch/powerpc/include/asm/book3s/64/mmu.h      | 21 ++++++++++--
>  .../include/asm/book3s/64/tlbflush-hash.h     |  6 ++++
>  arch/powerpc/include/asm/book3s/pgtable.h     |  4 +++
>  arch/powerpc/include/asm/mmu_context.h        |  2 ++
>  arch/powerpc/include/asm/paca.h               |  8 +++++
>  arch/powerpc/kernel/asm-offsets.c             |  2 ++
>  arch/powerpc/kernel/entry_64.S                |  4 +--
>  arch/powerpc/kernel/exceptions-64s.S          | 16 ++++++++++
>  arch/powerpc/kernel/mce.c                     |  2 +-
>  arch/powerpc/kernel/mce_power.c               | 10 ++++--
>  arch/powerpc/kernel/paca.c                    | 18 ++++-------
>  arch/powerpc/kernel/process.c                 | 13 ++++----
>  arch/powerpc/kernel/prom.c                    |  2 ++
>  arch/powerpc/kernel/setup_64.c                |  5 +++
>  arch/powerpc/kexec/core_64.c                  |  4 +--
>  arch/powerpc/kexec/ranges.c                   |  4 +++
>  arch/powerpc/mm/book3s64/Makefile             | 15 +++++----
>  arch/powerpc/mm/book3s64/hugetlbpage.c        |  2 ++
>  arch/powerpc/mm/book3s64/mmu_context.c        | 32 +++++++++++++++----
>  arch/powerpc/mm/book3s64/pgtable.c            |  2 +-
>  arch/powerpc/mm/book3s64/radix_pgtable.c      |  4 +++
>  arch/powerpc/mm/copro_fault.c                 |  2 ++
>  arch/powerpc/mm/ptdump/Makefile               |  2 +-
>  arch/powerpc/platforms/powernv/idle.c         |  2 ++
>  arch/powerpc/platforms/powernv/setup.c        |  2 ++
>  arch/powerpc/platforms/pseries/lpar.c         | 11 +++++--
>  arch/powerpc/platforms/pseries/lparcfg.c      |  2 +-
>  arch/powerpc/platforms/pseries/mobility.c     |  6 ++++
>  arch/powerpc/platforms/pseries/ras.c          |  2 ++
>  arch/powerpc/platforms/pseries/reconfig.c     |  2 ++
>  arch/powerpc/platforms/pseries/setup.c        |  6 ++--
>  arch/powerpc/xmon/xmon.c                      |  8 +++--
>  drivers/misc/lkdtm/core.c                     |  2 +-
>  34 files changed, 173 insertions(+), 52 deletions(-)


I was able to clean up some of the ifdefs a little with the changes
below. I'll run these through some test builds and then squash them in.

cheers


diff --git a/arch/powerpc/include/asm/book3s/64/mmu-hash.h b/arch/powerpc/include/asm/book3s/64/mmu-hash.h
index 3004f3323144..21f780942911 100644
--- a/arch/powerpc/include/asm/book3s/64/mmu-hash.h
+++ b/arch/powerpc/include/asm/book3s/64/mmu-hash.h
@@ -523,8 +523,14 @@ void slb_save_contents(struct slb_entry *slb_ptr);
 void slb_dump_contents(struct slb_entry *slb_ptr);
 
 extern void slb_vmalloc_update(void);
-extern void slb_set_size(u16 size);
 void preload_new_slb_context(unsigned long start, unsigned long sp);
+
+#ifdef CONFIG_PPC_64S_HASH_MMU
+void slb_set_size(u16 size);
+#else
+static inline void slb_set_size(u16 size) { }
+#endif
+
 #endif /* __ASSEMBLY__ */
 
 /*
diff --git a/arch/powerpc/kernel/prom.c b/arch/powerpc/kernel/prom.c
index 2197404cdcc4..75678ff04dd7 100644
--- a/arch/powerpc/kernel/prom.c
+++ b/arch/powerpc/kernel/prom.c
@@ -231,10 +231,9 @@ static void __init check_cpu_pa_features(unsigned long node)
 		      ibm_pa_features, ARRAY_SIZE(ibm_pa_features));
 }
 
-#ifdef CONFIG_PPC_BOOK3S_64
+#ifdef CONFIG_PPC_64S_HASH_MMU
 static void __init init_mmu_slb_size(unsigned long node)
 {
-#ifdef CONFIG_PPC_64S_HASH_MMU
 	const __be32 *slb_size_ptr;
 
 	slb_size_ptr = of_get_flat_dt_prop(node, "slb-size", NULL) ? :
@@ -242,7 +241,6 @@ static void __init init_mmu_slb_size(unsigned long node)
 
 	if (slb_size_ptr)
 		mmu_slb_size = be32_to_cpup(slb_size_ptr);
-#endif
 }
 #else
 #define init_mmu_slb_size(node) do { } while(0)
diff --git a/arch/powerpc/kernel/setup_64.c b/arch/powerpc/kernel/setup_64.c
index 22647bb82198..703a2e6ab08d 100644
--- a/arch/powerpc/kernel/setup_64.c
+++ b/arch/powerpc/kernel/setup_64.c
@@ -886,9 +886,7 @@ void __init setup_per_cpu_areas(void)
 		atom_size = SZ_1M;
 	} else if (radix_enabled()) {
 		atom_size = PAGE_SIZE;
-	} else {
-#ifdef CONFIG_PPC_64S_HASH_MMU
-
+	} else if (IS_ENABLED(CONFIG_PPC_64S_HASH_MMU)) {
 		/*
 		 * Linear mapping is one of 4K, 1M and 16M.  For 4K, no need
 		 * to group units.  For larger mappings, use 1M atom which
@@ -898,9 +896,6 @@ void __init setup_per_cpu_areas(void)
 			atom_size = PAGE_SIZE;
 		else
 			atom_size = SZ_1M;
-#else
-		BUILD_BUG(); // radix_enabled() should be constant true
-#endif
 	}
 
 	if (pcpu_chosen_fc != PCPU_FC_PAGE) {
diff --git a/arch/powerpc/kexec/ranges.c b/arch/powerpc/kexec/ranges.c
index 92d831621fa0..563e9989a5bf 100644
--- a/arch/powerpc/kexec/ranges.c
+++ b/arch/powerpc/kexec/ranges.c
@@ -296,7 +296,7 @@ int add_initrd_mem_range(struct crash_mem **mem_ranges)
 	return ret;
 }
 
-#ifdef CONFIG_PPC_BOOK3S_64
+#ifdef CONFIG_PPC_64S_HASH_MMU
 /**
  * add_htab_mem_range - Adds htab range to the given memory ranges list,
  *                      if it exists
@@ -306,14 +306,10 @@ int add_initrd_mem_range(struct crash_mem **mem_ranges)
  */
 int add_htab_mem_range(struct crash_mem **mem_ranges)
 {
-#ifdef CONFIG_PPC_64S_HASH_MMU
 	if (!htab_address)
 		return 0;
 
 	return add_mem_range(mem_ranges, __pa(htab_address), htab_size_bytes);
-#else
-	return 0;
-#endif
 }
 #endif
 
diff --git a/arch/powerpc/mm/book3s64/radix_pgtable.c b/arch/powerpc/mm/book3s64/radix_pgtable.c
index 5f8cbeca8080..3c4f0ebe5df8 100644
--- a/arch/powerpc/mm/book3s64/radix_pgtable.c
+++ b/arch/powerpc/mm/book3s64/radix_pgtable.c
@@ -333,10 +333,8 @@ static void __init radix_init_pgtable(void)
 	phys_addr_t start, end;
 	u64 i;
 
-#ifdef CONFIG_PPC_64S_HASH_MMU
 	/* We don't support slb for radix */
-	mmu_slb_size = 0;
-#endif
+	slb_set_size(0);
 
 	/*
 	 * Create the linear mapping
diff --git a/arch/powerpc/platforms/pseries/mobility.c b/arch/powerpc/platforms/pseries/mobility.c
index 21b706bcea76..85033f392c78 100644
--- a/arch/powerpc/platforms/pseries/mobility.c
+++ b/arch/powerpc/platforms/pseries/mobility.c
@@ -484,9 +484,7 @@ static int do_suspend(void)
 	ret = rtas_ibm_suspend_me(&status);
 	if (ret != 0) {
 		pr_err("ibm,suspend-me error: %d\n", status);
-#ifdef CONFIG_PPC_64S_HASH_MMU
 		slb_set_size(saved_slb_size);
-#endif
 	}
 
 	return ret;
diff --git a/arch/powerpc/platforms/pseries/pseries.h b/arch/powerpc/platforms/pseries/pseries.h
index 3544778e06d0..b4c63c481f33 100644
--- a/arch/powerpc/platforms/pseries/pseries.h
+++ b/arch/powerpc/platforms/pseries/pseries.h
@@ -113,6 +113,11 @@ int dlpar_workqueue_init(void);
 
 extern u32 pseries_security_flavor;
 void pseries_setup_security_mitigations(void);
+
+#ifdef CONFIG_PPC_64S_HASH_MMU
 void pseries_lpar_read_hblkrm_characteristics(void);
+#else
+static inline void pseries_lpar_read_hblkrm_characteristics(void) { }
+#endif
 
 #endif /* _PSERIES_PSERIES_H */
diff --git a/arch/powerpc/platforms/pseries/reconfig.c b/arch/powerpc/platforms/pseries/reconfig.c
index 80dae18d6621..7f7369fec46b 100644
--- a/arch/powerpc/platforms/pseries/reconfig.c
+++ b/arch/powerpc/platforms/pseries/reconfig.c
@@ -337,10 +337,8 @@ static int do_update_property(char *buf, size_t bufsize)
 	if (!newprop)
 		return -ENOMEM;
 
-#ifdef CONFIG_PPC_64S_HASH_MMU
 	if (!strcmp(name, "slb-size") || !strcmp(name, "ibm,slb-size"))
 		slb_set_size(*(int *)value);
-#endif
 
 	return of_update_property(np, newprop);
 }
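
Two idioms carry most of the cleanup above, both taken from the diff
itself: a static inline stub in the header for the =n case, so callers
need no #ifdef, and IS_ENABLED() in place of #ifdef/#else, so the
compiler still type-checks the hash-only branch before dead-code
eliminating it. A condensed sketch (not a literal quote of the applied
patch):

/* Header: stub out slb_set_size() when hash support is compiled out. */
#ifdef CONFIG_PPC_64S_HASH_MMU
void slb_set_size(u16 size);
#else
static inline void slb_set_size(u16 size) { }
#endif

/* C code: IS_ENABLED() keeps the branch compiled but eliminated. */
	} else if (IS_ENABLED(CONFIG_PPC_64S_HASH_MMU)) {
		/* Hash linear map is 4K, 1M or 16M; pick the pcpu atom. */
		if (mmu_linear_psize == MMU_PAGE_4K)
			atom_size = PAGE_SIZE;
		else
			atom_size = SZ_1M;
	}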

^ permalink raw reply related	[flat|nested] 31+ messages in thread

* Re: [PATCH v6 15/18] powerpc/64s: Always define arch unmapped area calls
  2021-12-01 14:41 ` [PATCH v6 15/18] powerpc/64s: Always define arch unmapped area calls Nicholas Piggin
@ 2021-12-08  9:38   ` Christophe Leroy
  2021-12-09  8:25     ` Nicholas Piggin
  2021-12-08 10:00   ` Christophe Leroy
  1 sibling, 1 reply; 31+ messages in thread
From: Christophe Leroy @ 2021-12-08  9:38 UTC (permalink / raw)
  To: Nicholas Piggin, linuxppc-dev



On 01/12/2021 at 15:41, Nicholas Piggin wrote:
> To avoid any functional changes to radix paths when building with hash
> MMU support disabled (and CONFIG_PPC_MM_SLICES=n), always define the
> arch get_unmapped_area calls on 64s platforms.
> 
> Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
> ---
>   arch/powerpc/include/asm/book3s/64/hash.h |  4 ---
>   arch/powerpc/include/asm/book3s/64/mmu.h  |  6 ++++
>   arch/powerpc/mm/hugetlbpage.c             | 16 ++++++---
>   arch/powerpc/mm/mmap.c                    | 40 +++++++++++++++++++----
>   arch/powerpc/mm/slice.c                   | 20 ------------
>   5 files changed, 51 insertions(+), 35 deletions(-)
> 
> diff --git a/arch/powerpc/include/asm/book3s/64/hash.h b/arch/powerpc/include/asm/book3s/64/hash.h
> index 674fe0e890dc..a7a0572f3846 100644
> --- a/arch/powerpc/include/asm/book3s/64/hash.h
> +++ b/arch/powerpc/include/asm/book3s/64/hash.h
> @@ -99,10 +99,6 @@
>    * Defines the address of the vmemap area, in its own region on
>    * hash table CPUs.
>    */
> -#ifdef CONFIG_PPC_MM_SLICES
> -#define HAVE_ARCH_UNMAPPED_AREA
> -#define HAVE_ARCH_UNMAPPED_AREA_TOPDOWN
> -#endif /* CONFIG_PPC_MM_SLICES */
>   
>   /* PTEIDX nibble */
>   #define _PTEIDX_SECONDARY	0x8
> diff --git a/arch/powerpc/include/asm/book3s/64/mmu.h b/arch/powerpc/include/asm/book3s/64/mmu.h
> index c02f42d1031e..015d7d972d16 100644
> --- a/arch/powerpc/include/asm/book3s/64/mmu.h
> +++ b/arch/powerpc/include/asm/book3s/64/mmu.h
> @@ -4,6 +4,12 @@
>   
>   #include <asm/page.h>
>   
> +#ifdef CONFIG_HUGETLB_PAGE
> +#define HAVE_ARCH_HUGETLB_UNMAPPED_AREA
> +#endif
> +#define HAVE_ARCH_UNMAPPED_AREA
> +#define HAVE_ARCH_UNMAPPED_AREA_TOPDOWN
> +
>   #ifndef __ASSEMBLY__
>   /*
>    * Page size definition
> diff --git a/arch/powerpc/mm/hugetlbpage.c b/arch/powerpc/mm/hugetlbpage.c
> index 82d8b368ca6d..ddead41e2194 100644
> --- a/arch/powerpc/mm/hugetlbpage.c
> +++ b/arch/powerpc/mm/hugetlbpage.c
> @@ -542,20 +542,26 @@ struct page *follow_huge_pd(struct vm_area_struct *vma,
>   	return page;
>   }
>   
> -#ifdef CONFIG_PPC_MM_SLICES
> +#ifdef HAVE_ARCH_HUGETLB_UNMAPPED_AREA
> +static inline int file_to_psize(struct file *file)
> +{
> +	struct hstate *hstate = hstate_file(file);
> +	return shift_to_mmu_psize(huge_page_shift(hstate));
> +}
> +
>   unsigned long hugetlb_get_unmapped_area(struct file *file, unsigned long addr,
>   					unsigned long len, unsigned long pgoff,
>   					unsigned long flags)
>   {
> -	struct hstate *hstate = hstate_file(file);
> -	int mmu_psize = shift_to_mmu_psize(huge_page_shift(hstate));
> -
>   #ifdef CONFIG_PPC_RADIX_MMU
>   	if (radix_enabled())
>   		return radix__hugetlb_get_unmapped_area(file, addr, len,
>   						       pgoff, flags);
>   #endif
> -	return slice_get_unmapped_area(addr, len, flags, mmu_psize, 1);
> +#ifdef CONFIG_PPC_MM_SLICES
> +	return slice_get_unmapped_area(addr, len, flags, file_to_psize(file), 1);
> +#endif
> +	BUG();

We shouldn't add new instances of BUG().

BUILD_BUG() should do the trick here.
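
For reference, the two macros differ in when they fire: BUG() compiles to
a runtime trap that only oopses if the path actually executes, while
BUILD_BUG() fails the build whenever the compiler emits the call, i.e.
whenever the surrounding code is not eliminated as dead. A minimal
sketch, with CONFIG_FOO and foo_supported() as hypothetical names:

	if (!foo_supported())
		BUG();			/* traps at runtime, if ever reached */

	if (!IS_ENABLED(CONFIG_FOO))
		BUILD_BUG();		/* branch is dead code when CONFIG_FOO=y,
					   so this only breaks CONFIG_FOO=n builds */

Why that doesn't quite work for these particular functions comes up
later in the thread.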

>   }
>   #endif
>   
> diff --git a/arch/powerpc/mm/mmap.c b/arch/powerpc/mm/mmap.c
> index ae683fdc716c..c475cf810aa8 100644
> --- a/arch/powerpc/mm/mmap.c
> +++ b/arch/powerpc/mm/mmap.c
> @@ -80,6 +80,7 @@ static inline unsigned long mmap_base(unsigned long rnd,
>   	return PAGE_ALIGN(DEFAULT_MAP_WINDOW - gap - rnd);
>   }
>   
> +#ifdef HAVE_ARCH_UNMAPPED_AREA
>   #ifdef CONFIG_PPC_RADIX_MMU
>   /*
>    * Same function as generic code used only for radix, because we don't need to overload
> @@ -181,11 +182,42 @@ radix__arch_get_unmapped_area_topdown(struct file *filp,
>   	 */
>   	return radix__arch_get_unmapped_area(filp, addr0, len, pgoff, flags);
>   }
> +#endif
> +
> +unsigned long arch_get_unmapped_area(struct file *filp,
> +				     unsigned long addr,
> +				     unsigned long len,
> +				     unsigned long pgoff,
> +				     unsigned long flags)
> +{
> +#ifdef CONFIG_PPC_MM_SLICES
> +	return slice_get_unmapped_area(addr, len, flags,
> +				       mm_ctx_user_psize(&current->mm->context), 0);
> +#else
> +	BUG();

Same.

And the #else isn't needed

> +#endif
> +}
> +
> +unsigned long arch_get_unmapped_area_topdown(struct file *filp,
> +					     const unsigned long addr0,
> +					     const unsigned long len,
> +					     const unsigned long pgoff,
> +					     const unsigned long flags)
> +{
> +#ifdef CONFIG_PPC_MM_SLICES
> +	return slice_get_unmapped_area(addr0, len, flags,
> +				       mm_ctx_user_psize(&current->mm->context), 1);
> +#else
> +	BUG();

Same

And the #else isn't needed

> +#endif
> +}
> +#endif /* HAVE_ARCH_UNMAPPED_AREA */
>   
>   static void radix__arch_pick_mmap_layout(struct mm_struct *mm,
>   					unsigned long random_factor,
>   					struct rlimit *rlim_stack)
>   {
> +#ifdef CONFIG_PPC_RADIX_MMU
>   	if (mmap_is_legacy(rlim_stack)) {
>   		mm->mmap_base = TASK_UNMAPPED_BASE;
>   		mm->get_unmapped_area = radix__arch_get_unmapped_area;
> @@ -193,13 +225,9 @@ static void radix__arch_pick_mmap_layout(struct mm_struct *mm,
>   		mm->mmap_base = mmap_base(random_factor, rlim_stack);
>   		mm->get_unmapped_area = radix__arch_get_unmapped_area_topdown;
>   	}
> -}
> -#else
> -/* dummy */
> -extern void radix__arch_pick_mmap_layout(struct mm_struct *mm,
> -					unsigned long random_factor,
> -					struct rlimit *rlim_stack);
>   #endif
> +}
> +
>   /*
>    * This function, called very early during the creation of a new
>    * process VM image, sets up which VM layout function to use:
> diff --git a/arch/powerpc/mm/slice.c b/arch/powerpc/mm/slice.c
> index 82b45b1cb973..f42711f865f3 100644
> --- a/arch/powerpc/mm/slice.c
> +++ b/arch/powerpc/mm/slice.c
> @@ -639,26 +639,6 @@ unsigned long slice_get_unmapped_area(unsigned long addr, unsigned long len,
>   }
>   EXPORT_SYMBOL_GPL(slice_get_unmapped_area);
>   
> -unsigned long arch_get_unmapped_area(struct file *filp,
> -				     unsigned long addr,
> -				     unsigned long len,
> -				     unsigned long pgoff,
> -				     unsigned long flags)
> -{
> -	return slice_get_unmapped_area(addr, len, flags,
> -				       mm_ctx_user_psize(&current->mm->context), 0);
> -}
> -
> -unsigned long arch_get_unmapped_area_topdown(struct file *filp,
> -					     const unsigned long addr0,
> -					     const unsigned long len,
> -					     const unsigned long pgoff,
> -					     const unsigned long flags)
> -{
> -	return slice_get_unmapped_area(addr0, len, flags,
> -				       mm_ctx_user_psize(&current->mm->context), 1);
> -}
> -
>   unsigned int notrace get_slice_psize(struct mm_struct *mm, unsigned long addr)
>   {
>   	unsigned char *psizes;
> 


* Re: [PATCH v6 15/18] powerpc/64s: Always define arch unmapped area calls
  2021-12-01 14:41 ` [PATCH v6 15/18] powerpc/64s: Always define arch unmapped area calls Nicholas Piggin
  2021-12-08  9:38   ` Christophe Leroy
@ 2021-12-08 10:00   ` Christophe Leroy
  2021-12-09  8:29     ` Nicholas Piggin
  1 sibling, 1 reply; 31+ messages in thread
From: Christophe Leroy @ 2021-12-08 10:00 UTC (permalink / raw)
  To: Nicholas Piggin, linuxppc-dev



On 01/12/2021 at 15:41, Nicholas Piggin wrote:
> To avoid any functional changes to radix paths when building with hash
> MMU support disabled (and CONFIG_PPC_MM_SLICES=n), always define the
> arch get_unmapped_area calls on 64s platforms.
> 
> Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
> ---
>   arch/powerpc/include/asm/book3s/64/hash.h |  4 ---
>   arch/powerpc/include/asm/book3s/64/mmu.h  |  6 ++++
>   arch/powerpc/mm/hugetlbpage.c             | 16 ++++++---
>   arch/powerpc/mm/mmap.c                    | 40 +++++++++++++++++++----
>   arch/powerpc/mm/slice.c                   | 20 ------------
>   5 files changed, 51 insertions(+), 35 deletions(-)
> 
> diff --git a/arch/powerpc/include/asm/book3s/64/hash.h b/arch/powerpc/include/asm/book3s/64/hash.h
> index 674fe0e890dc..a7a0572f3846 100644
> --- a/arch/powerpc/include/asm/book3s/64/hash.h
> +++ b/arch/powerpc/include/asm/book3s/64/hash.h
> @@ -99,10 +99,6 @@
>    * Defines the address of the vmemap area, in its own region on
>    * hash table CPUs.
>    */
> -#ifdef CONFIG_PPC_MM_SLICES
> -#define HAVE_ARCH_UNMAPPED_AREA
> -#define HAVE_ARCH_UNMAPPED_AREA_TOPDOWN
> -#endif /* CONFIG_PPC_MM_SLICES */
>   
>   /* PTEIDX nibble */
>   #define _PTEIDX_SECONDARY	0x8
> diff --git a/arch/powerpc/include/asm/book3s/64/mmu.h b/arch/powerpc/include/asm/book3s/64/mmu.h
> index c02f42d1031e..015d7d972d16 100644
> --- a/arch/powerpc/include/asm/book3s/64/mmu.h
> +++ b/arch/powerpc/include/asm/book3s/64/mmu.h
> @@ -4,6 +4,12 @@
>   
>   #include <asm/page.h>
>   
> +#ifdef CONFIG_HUGETLB_PAGE
> +#define HAVE_ARCH_HUGETLB_UNMAPPED_AREA
> +#endif
> +#define HAVE_ARCH_UNMAPPED_AREA
> +#define HAVE_ARCH_UNMAPPED_AREA_TOPDOWN
> +
>   #ifndef __ASSEMBLY__
>   /*
>    * Page size definition
> diff --git a/arch/powerpc/mm/hugetlbpage.c b/arch/powerpc/mm/hugetlbpage.c
> index 82d8b368ca6d..ddead41e2194 100644
> --- a/arch/powerpc/mm/hugetlbpage.c
> +++ b/arch/powerpc/mm/hugetlbpage.c
> @@ -542,20 +542,26 @@ struct page *follow_huge_pd(struct vm_area_struct *vma,
>   	return page;
>   }
>   
> -#ifdef CONFIG_PPC_MM_SLICES
> +#ifdef HAVE_ARCH_HUGETLB_UNMAPPED_AREA

Could use CONFIG_PPC_BOOK3S_64 instead

> +static inline int file_to_psize(struct file *file)

'inline' is not needed.

> +{
> +	struct hstate *hstate = hstate_file(file);
> +	return shift_to_mmu_psize(huge_page_shift(hstate));
> +}
> +
>   unsigned long hugetlb_get_unmapped_area(struct file *file, unsigned long addr,
>   					unsigned long len, unsigned long pgoff,
>   					unsigned long flags)
>   {
> -	struct hstate *hstate = hstate_file(file);
> -	int mmu_psize = shift_to_mmu_psize(huge_page_shift(hstate));
> -
>   #ifdef CONFIG_PPC_RADIX_MMU
>   	if (radix_enabled())
>   		return radix__hugetlb_get_unmapped_area(file, addr, len,
>   						       pgoff, flags);
>   #endif
> -	return slice_get_unmapped_area(addr, len, flags, mmu_psize, 1);
> +#ifdef CONFIG_PPC_MM_SLICES
> +	return slice_get_unmapped_area(addr, len, flags, file_to_psize(file), 1);
> +#endif
> +	BUG();
>   }
>   #endif
>   
> diff --git a/arch/powerpc/mm/mmap.c b/arch/powerpc/mm/mmap.c
> index ae683fdc716c..c475cf810aa8 100644
> --- a/arch/powerpc/mm/mmap.c
> +++ b/arch/powerpc/mm/mmap.c
> @@ -80,6 +80,7 @@ static inline unsigned long mmap_base(unsigned long rnd,
>   	return PAGE_ALIGN(DEFAULT_MAP_WINDOW - gap - rnd);
>   }
>   
> +#ifdef HAVE_ARCH_UNMAPPED_AREA

Could use CONFIG_PPC_BOOK3S_64 instead. Or better, put all that stuff in 
a file in the mm/book3s64/ directory.

>   #ifdef CONFIG_PPC_RADIX_MMU
>   /*
>    * Same function as generic code used only for radix, because we don't need to overload
> @@ -181,11 +182,42 @@ radix__arch_get_unmapped_area_topdown(struct file *filp,
>   	 */
>   	return radix__arch_get_unmapped_area(filp, addr0, len, pgoff, flags);
>   }
> +#endif
> +
> +unsigned long arch_get_unmapped_area(struct file *filp,
> +				     unsigned long addr,
> +				     unsigned long len,
> +				     unsigned long pgoff,
> +				     unsigned long flags)
> +{
> +#ifdef CONFIG_PPC_MM_SLICES
> +	return slice_get_unmapped_area(addr, len, flags,
> +				       mm_ctx_user_psize(&current->mm->context), 0);
> +#else
> +	BUG();
> +#endif
> +}
> +
> +unsigned long arch_get_unmapped_area_topdown(struct file *filp,
> +					     const unsigned long addr0,
> +					     const unsigned long len,
> +					     const unsigned long pgoff,
> +					     const unsigned long flags)
> +{
> +#ifdef CONFIG_PPC_MM_SLICES
> +	return slice_get_unmapped_area(addr0, len, flags,
> +				       mm_ctx_user_psize(&current->mm->context), 1);
> +#else
> +	BUG();
> +#endif
> +}
> +#endif /* HAVE_ARCH_UNMAPPED_AREA */
>   
>   static void radix__arch_pick_mmap_layout(struct mm_struct *mm,
>   					unsigned long random_factor,
>   					struct rlimit *rlim_stack)
>   {
> +#ifdef CONFIG_PPC_RADIX_MMU
>   	if (mmap_is_legacy(rlim_stack)) {
>   		mm->mmap_base = TASK_UNMAPPED_BASE;
>   		mm->get_unmapped_area = radix__arch_get_unmapped_area;
> @@ -193,13 +225,9 @@ static void radix__arch_pick_mmap_layout(struct mm_struct *mm,
>   		mm->mmap_base = mmap_base(random_factor, rlim_stack);
>   		mm->get_unmapped_area = radix__arch_get_unmapped_area_topdown;
>   	}
> -}
> -#else
> -/* dummy */
> -extern void radix__arch_pick_mmap_layout(struct mm_struct *mm,
> -					unsigned long random_factor,
> -					struct rlimit *rlim_stack);
>   #endif
> +}
> +
>   /*
>    * This function, called very early during the creation of a new
>    * process VM image, sets up which VM layout function to use:
> diff --git a/arch/powerpc/mm/slice.c b/arch/powerpc/mm/slice.c
> index 82b45b1cb973..f42711f865f3 100644
> --- a/arch/powerpc/mm/slice.c
> +++ b/arch/powerpc/mm/slice.c
> @@ -639,26 +639,6 @@ unsigned long slice_get_unmapped_area(unsigned long addr, unsigned long len,
>   }
>   EXPORT_SYMBOL_GPL(slice_get_unmapped_area);
>   
> -unsigned long arch_get_unmapped_area(struct file *filp,
> -				     unsigned long addr,
> -				     unsigned long len,
> -				     unsigned long pgoff,
> -				     unsigned long flags)
> -{
> -	return slice_get_unmapped_area(addr, len, flags,
> -				       mm_ctx_user_psize(&current->mm->context), 0);
> -}
> -
> -unsigned long arch_get_unmapped_area_topdown(struct file *filp,
> -					     const unsigned long addr0,
> -					     const unsigned long len,
> -					     const unsigned long pgoff,
> -					     const unsigned long flags)
> -{
> -	return slice_get_unmapped_area(addr0, len, flags,
> -				       mm_ctx_user_psize(&current->mm->context), 1);
> -}
> -
>   unsigned int notrace get_slice_psize(struct mm_struct *mm, unsigned long addr)
>   {
>   	unsigned char *psizes;
> 


* Re: [PATCH v6 15/18] powerpc/64s: Always define arch unmapped area calls
  2021-12-08  9:38   ` Christophe Leroy
@ 2021-12-09  8:25     ` Nicholas Piggin
  2021-12-09  8:28       ` Christophe Leroy
  2021-12-09  9:30       ` Nicholas Piggin
  0 siblings, 2 replies; 31+ messages in thread
From: Nicholas Piggin @ 2021-12-09  8:25 UTC (permalink / raw)
  To: Christophe Leroy, linuxppc-dev

Excerpts from Christophe Leroy's message of December 8, 2021 7:38 pm:
> 
> 
> On 01/12/2021 at 15:41, Nicholas Piggin wrote:
>> To avoid any functional changes to radix paths when building with hash
>> MMU support disabled (and CONFIG_PPC_MM_SLICES=n), always define the
>> arch get_unmapped_area calls on 64s platforms.
>> 
>> Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
>> ---
>>   arch/powerpc/include/asm/book3s/64/hash.h |  4 ---
>>   arch/powerpc/include/asm/book3s/64/mmu.h  |  6 ++++
>>   arch/powerpc/mm/hugetlbpage.c             | 16 ++++++---
>>   arch/powerpc/mm/mmap.c                    | 40 +++++++++++++++++++----
>>   arch/powerpc/mm/slice.c                   | 20 ------------
>>   5 files changed, 51 insertions(+), 35 deletions(-)
>> 
>> diff --git a/arch/powerpc/include/asm/book3s/64/hash.h b/arch/powerpc/include/asm/book3s/64/hash.h
>> index 674fe0e890dc..a7a0572f3846 100644
>> --- a/arch/powerpc/include/asm/book3s/64/hash.h
>> +++ b/arch/powerpc/include/asm/book3s/64/hash.h
>> @@ -99,10 +99,6 @@
>>    * Defines the address of the vmemap area, in its own region on
>>    * hash table CPUs.
>>    */
>> -#ifdef CONFIG_PPC_MM_SLICES
>> -#define HAVE_ARCH_UNMAPPED_AREA
>> -#define HAVE_ARCH_UNMAPPED_AREA_TOPDOWN
>> -#endif /* CONFIG_PPC_MM_SLICES */
>>   
>>   /* PTEIDX nibble */
>>   #define _PTEIDX_SECONDARY	0x8
>> diff --git a/arch/powerpc/include/asm/book3s/64/mmu.h b/arch/powerpc/include/asm/book3s/64/mmu.h
>> index c02f42d1031e..015d7d972d16 100644
>> --- a/arch/powerpc/include/asm/book3s/64/mmu.h
>> +++ b/arch/powerpc/include/asm/book3s/64/mmu.h
>> @@ -4,6 +4,12 @@
>>   
>>   #include <asm/page.h>
>>   
>> +#ifdef CONFIG_HUGETLB_PAGE
>> +#define HAVE_ARCH_HUGETLB_UNMAPPED_AREA
>> +#endif
>> +#define HAVE_ARCH_UNMAPPED_AREA
>> +#define HAVE_ARCH_UNMAPPED_AREA_TOPDOWN
>> +
>>   #ifndef __ASSEMBLY__
>>   /*
>>    * Page size definition
>> diff --git a/arch/powerpc/mm/hugetlbpage.c b/arch/powerpc/mm/hugetlbpage.c
>> index 82d8b368ca6d..ddead41e2194 100644
>> --- a/arch/powerpc/mm/hugetlbpage.c
>> +++ b/arch/powerpc/mm/hugetlbpage.c
>> @@ -542,20 +542,26 @@ struct page *follow_huge_pd(struct vm_area_struct *vma,
>>   	return page;
>>   }
>>   
>> -#ifdef CONFIG_PPC_MM_SLICES
>> +#ifdef HAVE_ARCH_HUGETLB_UNMAPPED_AREA
>> +static inline int file_to_psize(struct file *file)
>> +{
>> +	struct hstate *hstate = hstate_file(file);
>> +	return shift_to_mmu_psize(huge_page_shift(hstate));
>> +}
>> +
>>   unsigned long hugetlb_get_unmapped_area(struct file *file, unsigned long addr,
>>   					unsigned long len, unsigned long pgoff,
>>   					unsigned long flags)
>>   {
>> -	struct hstate *hstate = hstate_file(file);
>> -	int mmu_psize = shift_to_mmu_psize(huge_page_shift(hstate));
>> -
>>   #ifdef CONFIG_PPC_RADIX_MMU
>>   	if (radix_enabled())
>>   		return radix__hugetlb_get_unmapped_area(file, addr, len,
>>   						       pgoff, flags);
>>   #endif
>> -	return slice_get_unmapped_area(addr, len, flags, mmu_psize, 1);
>> +#ifdef CONFIG_PPC_MM_SLICES
>> +	return slice_get_unmapped_area(addr, len, flags, file_to_psize(file), 1);
>> +#endif
>> +	BUG();
> 
> We shouldn't add new instances of BUG().
> 
> BUILD_BUG() should do the trick here.
> 
>>   }
>>   #endif
>>   
>> diff --git a/arch/powerpc/mm/mmap.c b/arch/powerpc/mm/mmap.c
>> index ae683fdc716c..c475cf810aa8 100644
>> --- a/arch/powerpc/mm/mmap.c
>> +++ b/arch/powerpc/mm/mmap.c
>> @@ -80,6 +80,7 @@ static inline unsigned long mmap_base(unsigned long rnd,
>>   	return PAGE_ALIGN(DEFAULT_MAP_WINDOW - gap - rnd);
>>   }
>>   
>> +#ifdef HAVE_ARCH_UNMAPPED_AREA
>>   #ifdef CONFIG_PPC_RADIX_MMU
>>   /*
>>    * Same function as generic code used only for radix, because we don't need to overload
>> @@ -181,11 +182,42 @@ radix__arch_get_unmapped_area_topdown(struct file *filp,
>>   	 */
>>   	return radix__arch_get_unmapped_area(filp, addr0, len, pgoff, flags);
>>   }
>> +#endif
>> +
>> +unsigned long arch_get_unmapped_area(struct file *filp,
>> +				     unsigned long addr,
>> +				     unsigned long len,
>> +				     unsigned long pgoff,
>> +				     unsigned long flags)
>> +{
>> +#ifdef CONFIG_PPC_MM_SLICES
>> +	return slice_get_unmapped_area(addr, len, flags,
>> +				       mm_ctx_user_psize(&current->mm->context), 0);
>> +#else
>> +	BUG();
> 
> Same.
> 
> And the #else isn't needed
> 
>> +#endif
>> +}
>> +
>> +unsigned long arch_get_unmapped_area_topdown(struct file *filp,
>> +					     const unsigned long addr0,
>> +					     const unsigned long len,
>> +					     const unsigned long pgoff,
>> +					     const unsigned long flags)
>> +{
>> +#ifdef CONFIG_PPC_MM_SLICES
>> +	return slice_get_unmapped_area(addr0, len, flags,
>> +				       mm_ctx_user_psize(&current->mm->context), 1);
>> +#else
>> +	BUG();
> 
> Same
> 
> And the #else isn't needed

Fair enough. I'll see if mpe can squash in an incremental patch.

Thanks,
Nick


* Re: [PATCH v6 15/18] powerpc/64s: Always define arch unmapped area calls
  2021-12-09  8:25     ` Nicholas Piggin
@ 2021-12-09  8:28       ` Christophe Leroy
  2021-12-09  9:30       ` Nicholas Piggin
  1 sibling, 0 replies; 31+ messages in thread
From: Christophe Leroy @ 2021-12-09  8:28 UTC (permalink / raw)
  To: Nicholas Piggin, linuxppc-dev



On 09/12/2021 at 09:25, Nicholas Piggin wrote:
> Excerpts from Christophe Leroy's message of December 8, 2021 7:38 pm:
>>
>>
>> On 01/12/2021 at 15:41, Nicholas Piggin wrote:
>>> To avoid any functional changes to radix paths when building with hash
>>> MMU support disabled (and CONFIG_PPC_MM_SLICES=n), always define the
>>> arch get_unmapped_area calls on 64s platforms.
>>>
>>> Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
>>> ---
>>>    arch/powerpc/include/asm/book3s/64/hash.h |  4 ---
>>>    arch/powerpc/include/asm/book3s/64/mmu.h  |  6 ++++
>>>    arch/powerpc/mm/hugetlbpage.c             | 16 ++++++---
>>>    arch/powerpc/mm/mmap.c                    | 40 +++++++++++++++++++----
>>>    arch/powerpc/mm/slice.c                   | 20 ------------
>>>    5 files changed, 51 insertions(+), 35 deletions(-)
>>>
>>> diff --git a/arch/powerpc/include/asm/book3s/64/hash.h b/arch/powerpc/include/asm/book3s/64/hash.h
>>> index 674fe0e890dc..a7a0572f3846 100644
>>> --- a/arch/powerpc/include/asm/book3s/64/hash.h
>>> +++ b/arch/powerpc/include/asm/book3s/64/hash.h
>>> @@ -99,10 +99,6 @@
>>>     * Defines the address of the vmemap area, in its own region on
>>>     * hash table CPUs.
>>>     */
>>> -#ifdef CONFIG_PPC_MM_SLICES
>>> -#define HAVE_ARCH_UNMAPPED_AREA
>>> -#define HAVE_ARCH_UNMAPPED_AREA_TOPDOWN
>>> -#endif /* CONFIG_PPC_MM_SLICES */
>>>    
>>>    /* PTEIDX nibble */
>>>    #define _PTEIDX_SECONDARY	0x8
>>> diff --git a/arch/powerpc/include/asm/book3s/64/mmu.h b/arch/powerpc/include/asm/book3s/64/mmu.h
>>> index c02f42d1031e..015d7d972d16 100644
>>> --- a/arch/powerpc/include/asm/book3s/64/mmu.h
>>> +++ b/arch/powerpc/include/asm/book3s/64/mmu.h
>>> @@ -4,6 +4,12 @@
>>>    
>>>    #include <asm/page.h>
>>>    
>>> +#ifdef CONFIG_HUGETLB_PAGE
>>> +#define HAVE_ARCH_HUGETLB_UNMAPPED_AREA
>>> +#endif
>>> +#define HAVE_ARCH_UNMAPPED_AREA
>>> +#define HAVE_ARCH_UNMAPPED_AREA_TOPDOWN
>>> +
>>>    #ifndef __ASSEMBLY__
>>>    /*
>>>     * Page size definition
>>> diff --git a/arch/powerpc/mm/hugetlbpage.c b/arch/powerpc/mm/hugetlbpage.c
>>> index 82d8b368ca6d..ddead41e2194 100644
>>> --- a/arch/powerpc/mm/hugetlbpage.c
>>> +++ b/arch/powerpc/mm/hugetlbpage.c
>>> @@ -542,20 +542,26 @@ struct page *follow_huge_pd(struct vm_area_struct *vma,
>>>    	return page;
>>>    }
>>>    
>>> -#ifdef CONFIG_PPC_MM_SLICES
>>> +#ifdef HAVE_ARCH_HUGETLB_UNMAPPED_AREA
>>> +static inline int file_to_psize(struct file *file)
>>> +{
>>> +	struct hstate *hstate = hstate_file(file);
>>> +	return shift_to_mmu_psize(huge_page_shift(hstate));
>>> +}
>>> +
>>>    unsigned long hugetlb_get_unmapped_area(struct file *file, unsigned long addr,
>>>    					unsigned long len, unsigned long pgoff,
>>>    					unsigned long flags)
>>>    {
>>> -	struct hstate *hstate = hstate_file(file);
>>> -	int mmu_psize = shift_to_mmu_psize(huge_page_shift(hstate));
>>> -
>>>    #ifdef CONFIG_PPC_RADIX_MMU
>>>    	if (radix_enabled())
>>>    		return radix__hugetlb_get_unmapped_area(file, addr, len,
>>>    						       pgoff, flags);
>>>    #endif
>>> -	return slice_get_unmapped_area(addr, len, flags, mmu_psize, 1);
>>> +#ifdef CONFIG_PPC_MM_SLICES
>>> +	return slice_get_unmapped_area(addr, len, flags, file_to_psize(file), 1);
>>> +#endif
>>> +	BUG();
>>
>> We shouldn't add new instances of BUG().
>>
>> BUILD_BUG() should do the trick here.
>>
>>>    }
>>>    #endif
>>>    
>>> diff --git a/arch/powerpc/mm/mmap.c b/arch/powerpc/mm/mmap.c
>>> index ae683fdc716c..c475cf810aa8 100644
>>> --- a/arch/powerpc/mm/mmap.c
>>> +++ b/arch/powerpc/mm/mmap.c
>>> @@ -80,6 +80,7 @@ static inline unsigned long mmap_base(unsigned long rnd,
>>>    	return PAGE_ALIGN(DEFAULT_MAP_WINDOW - gap - rnd);
>>>    }
>>>    
>>> +#ifdef HAVE_ARCH_UNMAPPED_AREA
>>>    #ifdef CONFIG_PPC_RADIX_MMU
>>>    /*
>>>     * Same function as generic code used only for radix, because we don't need to overload
>>> @@ -181,11 +182,42 @@ radix__arch_get_unmapped_area_topdown(struct file *filp,
>>>    	 */
>>>    	return radix__arch_get_unmapped_area(filp, addr0, len, pgoff, flags);
>>>    }
>>> +#endif
>>> +
>>> +unsigned long arch_get_unmapped_area(struct file *filp,
>>> +				     unsigned long addr,
>>> +				     unsigned long len,
>>> +				     unsigned long pgoff,
>>> +				     unsigned long flags)
>>> +{
>>> +#ifdef CONFIG_PPC_MM_SLICES
>>> +	return slice_get_unmapped_area(addr, len, flags,
>>> +				       mm_ctx_user_psize(&current->mm->context), 0);
>>> +#else
>>> +	BUG();
>>
>> Same.
>>
>> And the #else isn't needed
>>
>>> +#endif
>>> +}
>>> +
>>> +unsigned long arch_get_unmapped_area_topdown(struct file *filp,
>>> +					     const unsigned long addr0,
>>> +					     const unsigned long len,
>>> +					     const unsigned long pgoff,
>>> +					     const unsigned long flags)
>>> +{
>>> +#ifdef CONFIG_PPC_MM_SLICES
>>> +	return slice_get_unmapped_area(addr0, len, flags,
>>> +				       mm_ctx_user_psize(&current->mm->context), 1);
>>> +#else
>>> +	BUG();
>>
>> Same
>>
>> And the #else isn't needed
> 
> Fair enough. I'll see if mpe can squash in an incremental patch.
> 

Anyway, my follow-up series "Convert powerpc to default topdown mmap 
layout" cleans it up.

Christophe


* Re: [PATCH v6 15/18] powerpc/64s: Always define arch unmapped area calls
  2021-12-08 10:00   ` Christophe Leroy
@ 2021-12-09  8:29     ` Nicholas Piggin
  0 siblings, 0 replies; 31+ messages in thread
From: Nicholas Piggin @ 2021-12-09  8:29 UTC (permalink / raw)
  To: Christophe Leroy, linuxppc-dev

Excerpts from Christophe Leroy's message of December 8, 2021 8:00 pm:
> 
> 
> On 01/12/2021 at 15:41, Nicholas Piggin wrote:
>> To avoid any functional changes to radix paths when building with hash
>> MMU support disabled (and CONFIG_PPC_MM_SLICES=n), always define the
>> arch get_unmapped_area calls on 64s platforms.
>> 
>> Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
>> ---
>>   arch/powerpc/include/asm/book3s/64/hash.h |  4 ---
>>   arch/powerpc/include/asm/book3s/64/mmu.h  |  6 ++++
>>   arch/powerpc/mm/hugetlbpage.c             | 16 ++++++---
>>   arch/powerpc/mm/mmap.c                    | 40 +++++++++++++++++++----
>>   arch/powerpc/mm/slice.c                   | 20 ------------
>>   5 files changed, 51 insertions(+), 35 deletions(-)
>> 
>> diff --git a/arch/powerpc/include/asm/book3s/64/hash.h b/arch/powerpc/include/asm/book3s/64/hash.h
>> index 674fe0e890dc..a7a0572f3846 100644
>> --- a/arch/powerpc/include/asm/book3s/64/hash.h
>> +++ b/arch/powerpc/include/asm/book3s/64/hash.h
>> @@ -99,10 +99,6 @@
>>    * Defines the address of the vmemap area, in its own region on
>>    * hash table CPUs.
>>    */
>> -#ifdef CONFIG_PPC_MM_SLICES
>> -#define HAVE_ARCH_UNMAPPED_AREA
>> -#define HAVE_ARCH_UNMAPPED_AREA_TOPDOWN
>> -#endif /* CONFIG_PPC_MM_SLICES */
>>   
>>   /* PTEIDX nibble */
>>   #define _PTEIDX_SECONDARY	0x8
>> diff --git a/arch/powerpc/include/asm/book3s/64/mmu.h b/arch/powerpc/include/asm/book3s/64/mmu.h
>> index c02f42d1031e..015d7d972d16 100644
>> --- a/arch/powerpc/include/asm/book3s/64/mmu.h
>> +++ b/arch/powerpc/include/asm/book3s/64/mmu.h
>> @@ -4,6 +4,12 @@
>>   
>>   #include <asm/page.h>
>>   
>> +#ifdef CONFIG_HUGETLB_PAGE
>> +#define HAVE_ARCH_HUGETLB_UNMAPPED_AREA
>> +#endif
>> +#define HAVE_ARCH_UNMAPPED_AREA
>> +#define HAVE_ARCH_UNMAPPED_AREA_TOPDOWN
>> +
>>   #ifndef __ASSEMBLY__
>>   /*
>>    * Page size definition
>> diff --git a/arch/powerpc/mm/hugetlbpage.c b/arch/powerpc/mm/hugetlbpage.c
>> index 82d8b368ca6d..ddead41e2194 100644
>> --- a/arch/powerpc/mm/hugetlbpage.c
>> +++ b/arch/powerpc/mm/hugetlbpage.c
>> @@ -542,20 +542,26 @@ struct page *follow_huge_pd(struct vm_area_struct *vma,
>>   	return page;
>>   }
>>   
>> -#ifdef CONFIG_PPC_MM_SLICES
>> +#ifdef HAVE_ARCH_HUGETLB_UNMAPPED_AREA
> 
> Could use CONFIG_PPC_BOOK3S_64 instead

I was going to make it de-selectable with !HASH. I think your series 
cleans this stuff up, so I don't think it's a big deal.

> 
>> +static inline int file_to_psize(struct file *file)
> 
> 'inline' is not needed.

It is needed, otherwise a !SLICES config gives a defined-but-not-used 
error.
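
The underlying rule: gcc's -Wunused-function (part of -Wall) warns about
a static function with no remaining callers, but deliberately stays quiet
for static inline, since unused inline helpers are normal in headers.
A minimal sketch; file_to_psize_plain is a hypothetical name:

	static int file_to_psize_plain(struct file *file)
	{
		return shift_to_mmu_psize(huge_page_shift(hstate_file(file)));
	}
	/* warning: 'file_to_psize_plain' defined but not used
	 * [-Wunused-function], once no caller survives the preprocessor */

	static inline int file_to_psize(struct file *file)
	{
		return shift_to_mmu_psize(huge_page_shift(hstate_file(file)));
	}
	/* no warning: an unused static inline is silently discarded */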

> 
>> +{
>> +	struct hstate *hstate = hstate_file(file);
>> +	return shift_to_mmu_psize(huge_page_shift(hstate));
>> +}
>> +
>>   unsigned long hugetlb_get_unmapped_area(struct file *file, unsigned long addr,
>>   					unsigned long len, unsigned long pgoff,
>>   					unsigned long flags)
>>   {
>> -	struct hstate *hstate = hstate_file(file);
>> -	int mmu_psize = shift_to_mmu_psize(huge_page_shift(hstate));
>> -
>>   #ifdef CONFIG_PPC_RADIX_MMU
>>   	if (radix_enabled())
>>   		return radix__hugetlb_get_unmapped_area(file, addr, len,
>>   						       pgoff, flags);
>>   #endif
>> -	return slice_get_unmapped_area(addr, len, flags, mmu_psize, 1);
>> +#ifdef CONFIG_PPC_MM_SLICES
>> +	return slice_get_unmapped_area(addr, len, flags, file_to_psize(file), 1);
>> +#endif
>> +	BUG();
>>   }
>>   #endif
>>   
>> diff --git a/arch/powerpc/mm/mmap.c b/arch/powerpc/mm/mmap.c
>> index ae683fdc716c..c475cf810aa8 100644
>> --- a/arch/powerpc/mm/mmap.c
>> +++ b/arch/powerpc/mm/mmap.c
>> @@ -80,6 +80,7 @@ static inline unsigned long mmap_base(unsigned long rnd,
>>   	return PAGE_ALIGN(DEFAULT_MAP_WINDOW - gap - rnd);
>>   }
>>   
>> +#ifdef HAVE_ARCH_UNMAPPED_AREA
> 
> Could use CONFIG_PPC_BOOK3S_64 instead. Or better, put all that stuff in 
> a file in the mm/book3s64/ directory.

Seeing as you cleaned it up with your series, probably doesn't matter
much.

Thanks,
Nick


* Re: [PATCH v6 17/18] powerpc/64s: Move hash MMU support code under CONFIG_PPC_64S_HASH_MMU
  2021-12-07 13:00   ` Michael Ellerman
@ 2021-12-09  8:30     ` Nicholas Piggin
  0 siblings, 0 replies; 31+ messages in thread
From: Nicholas Piggin @ 2021-12-09  8:30 UTC (permalink / raw)
  To: linuxppc-dev, Michael Ellerman

Excerpts from Michael Ellerman's message of December 7, 2021 11:00 pm:
> Nicholas Piggin <npiggin@gmail.com> writes:
>>  34 files changed, 173 insertions(+), 52 deletions(-)
> 
> 
> I was able to clean up some of the ifdefs a little with the changes
> below. I'll run these through some test builds and then squash them in.

Looks good to me.

Thanks,
Nick

> 
> cheers
> 
> 
> diff --git a/arch/powerpc/include/asm/book3s/64/mmu-hash.h b/arch/powerpc/include/asm/book3s/64/mmu-hash.h
> index 3004f3323144..21f780942911 100644
> --- a/arch/powerpc/include/asm/book3s/64/mmu-hash.h
> +++ b/arch/powerpc/include/asm/book3s/64/mmu-hash.h
> @@ -523,8 +523,14 @@ void slb_save_contents(struct slb_entry *slb_ptr);
>  void slb_dump_contents(struct slb_entry *slb_ptr);
>  
>  extern void slb_vmalloc_update(void);
> -extern void slb_set_size(u16 size);
>  void preload_new_slb_context(unsigned long start, unsigned long sp);
> +
> +#ifdef CONFIG_PPC_64S_HASH_MMU
> +void slb_set_size(u16 size);
> +#else
> +static inline void slb_set_size(u16 size) { }
> +#endif
> +
>  #endif /* __ASSEMBLY__ */
>  
>  /*
> diff --git a/arch/powerpc/kernel/prom.c b/arch/powerpc/kernel/prom.c
> index 2197404cdcc4..75678ff04dd7 100644
> --- a/arch/powerpc/kernel/prom.c
> +++ b/arch/powerpc/kernel/prom.c
> @@ -231,10 +231,9 @@ static void __init check_cpu_pa_features(unsigned long node)
>  		      ibm_pa_features, ARRAY_SIZE(ibm_pa_features));
>  }
>  
> -#ifdef CONFIG_PPC_BOOK3S_64
> +#ifdef CONFIG_PPC_64S_HASH_MMU
>  static void __init init_mmu_slb_size(unsigned long node)
>  {
> -#ifdef CONFIG_PPC_64S_HASH_MMU
>  	const __be32 *slb_size_ptr;
>  
>  	slb_size_ptr = of_get_flat_dt_prop(node, "slb-size", NULL) ? :
> @@ -242,7 +241,6 @@ static void __init init_mmu_slb_size(unsigned long node)
>  
>  	if (slb_size_ptr)
>  		mmu_slb_size = be32_to_cpup(slb_size_ptr);
> -#endif
>  }
>  #else
>  #define init_mmu_slb_size(node) do { } while(0)
> diff --git a/arch/powerpc/kernel/setup_64.c b/arch/powerpc/kernel/setup_64.c
> index 22647bb82198..703a2e6ab08d 100644
> --- a/arch/powerpc/kernel/setup_64.c
> +++ b/arch/powerpc/kernel/setup_64.c
> @@ -886,9 +886,7 @@ void __init setup_per_cpu_areas(void)
>  		atom_size = SZ_1M;
>  	} else if (radix_enabled()) {
>  		atom_size = PAGE_SIZE;
> -	} else {
> -#ifdef CONFIG_PPC_64S_HASH_MMU
> -
> +	} else if (IS_ENABLED(CONFIG_PPC_64S_HASH_MMU)) {
>  		/*
>  		 * Linear mapping is one of 4K, 1M and 16M.  For 4K, no need
>  		 * to group units.  For larger mappings, use 1M atom which
> @@ -898,9 +896,6 @@ void __init setup_per_cpu_areas(void)
>  			atom_size = PAGE_SIZE;
>  		else
>  			atom_size = SZ_1M;
> -#else
> -		BUILD_BUG(); // radix_enabled() should be constant true
> -#endif
>  	}
>  
>  	if (pcpu_chosen_fc != PCPU_FC_PAGE) {
> diff --git a/arch/powerpc/kexec/ranges.c b/arch/powerpc/kexec/ranges.c
> index 92d831621fa0..563e9989a5bf 100644
> --- a/arch/powerpc/kexec/ranges.c
> +++ b/arch/powerpc/kexec/ranges.c
> @@ -296,7 +296,7 @@ int add_initrd_mem_range(struct crash_mem **mem_ranges)
>  	return ret;
>  }
>  
> -#ifdef CONFIG_PPC_BOOK3S_64
> +#ifdef CONFIG_PPC_64S_HASH_MMU
>  /**
>   * add_htab_mem_range - Adds htab range to the given memory ranges list,
>   *                      if it exists
> @@ -306,14 +306,10 @@ int add_initrd_mem_range(struct crash_mem **mem_ranges)
>   */
>  int add_htab_mem_range(struct crash_mem **mem_ranges)
>  {
> -#ifdef CONFIG_PPC_64S_HASH_MMU
>  	if (!htab_address)
>  		return 0;
>  
>  	return add_mem_range(mem_ranges, __pa(htab_address), htab_size_bytes);
> -#else
> -	return 0;
> -#endif
>  }
>  #endif
>  
> diff --git a/arch/powerpc/mm/book3s64/radix_pgtable.c b/arch/powerpc/mm/book3s64/radix_pgtable.c
> index 5f8cbeca8080..3c4f0ebe5df8 100644
> --- a/arch/powerpc/mm/book3s64/radix_pgtable.c
> +++ b/arch/powerpc/mm/book3s64/radix_pgtable.c
> @@ -333,10 +333,8 @@ static void __init radix_init_pgtable(void)
>  	phys_addr_t start, end;
>  	u64 i;
>  
> -#ifdef CONFIG_PPC_64S_HASH_MMU
>  	/* We don't support slb for radix */
> -	mmu_slb_size = 0;
> -#endif
> +	slb_set_size(0);
>  
>  	/*
>  	 * Create the linear mapping
> diff --git a/arch/powerpc/platforms/pseries/mobility.c b/arch/powerpc/platforms/pseries/mobility.c
> index 21b706bcea76..85033f392c78 100644
> --- a/arch/powerpc/platforms/pseries/mobility.c
> +++ b/arch/powerpc/platforms/pseries/mobility.c
> @@ -484,9 +484,7 @@ static int do_suspend(void)
>  	ret = rtas_ibm_suspend_me(&status);
>  	if (ret != 0) {
>  		pr_err("ibm,suspend-me error: %d\n", status);
> -#ifdef CONFIG_PPC_64S_HASH_MMU
>  		slb_set_size(saved_slb_size);
> -#endif
>  	}
>  
>  	return ret;
> diff --git a/arch/powerpc/platforms/pseries/pseries.h b/arch/powerpc/platforms/pseries/pseries.h
> index 3544778e06d0..b4c63c481f33 100644
> --- a/arch/powerpc/platforms/pseries/pseries.h
> +++ b/arch/powerpc/platforms/pseries/pseries.h
> @@ -113,6 +113,11 @@ int dlpar_workqueue_init(void);
>  
>  extern u32 pseries_security_flavor;
>  void pseries_setup_security_mitigations(void);
> +
> +#ifdef CONFIG_PPC_64S_HASH_MMU
>  void pseries_lpar_read_hblkrm_characteristics(void);
> +#else
> +static inline void pseries_lpar_read_hblkrm_characteristics(void) { }
> +#endif
>  
>  #endif /* _PSERIES_PSERIES_H */
> diff --git a/arch/powerpc/platforms/pseries/reconfig.c b/arch/powerpc/platforms/pseries/reconfig.c
> index 80dae18d6621..7f7369fec46b 100644
> --- a/arch/powerpc/platforms/pseries/reconfig.c
> +++ b/arch/powerpc/platforms/pseries/reconfig.c
> @@ -337,10 +337,8 @@ static int do_update_property(char *buf, size_t bufsize)
>  	if (!newprop)
>  		return -ENOMEM;
>  
> -#ifdef CONFIG_PPC_64S_HASH_MMU
>  	if (!strcmp(name, "slb-size") || !strcmp(name, "ibm,slb-size"))
>  		slb_set_size(*(int *)value);
> -#endif
>  
>  	return of_update_property(np, newprop);
>  }
> 
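
Besides the slb_set_size() stub, the other idiom in the cleanup above is
IS_ENABLED() in setup_per_cpu_areas(): it expands to a constant 1 or 0,
so the branch is ordinary C that is parsed and type-checked in every
config and then discarded as dead code when the option is off. Unlike an
#ifdef block, any identifier used inside must still be declared in both
configurations. A minimal sketch, with CONFIG_FOO and do_foo() as
hypothetical names:

	if (IS_ENABLED(CONFIG_FOO)) {
		/* compiled either way, eliminated when CONFIG_FOO=n;
		 * do_foo() must at least be declared in both configs */
		do_foo();
	}

The BUILD_BUG() deleted in that hunk becomes unnecessary for the same
reason: the hash-only branch is now explicitly guarded rather than
asserted unreachable.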


* Re: [PATCH v6 15/18] powerpc/64s: Always define arch unmapped area calls
  2021-12-09  8:25     ` Nicholas Piggin
  2021-12-09  8:28       ` Christophe Leroy
@ 2021-12-09  9:30       ` Nicholas Piggin
  1 sibling, 0 replies; 31+ messages in thread
From: Nicholas Piggin @ 2021-12-09  9:30 UTC (permalink / raw)
  To: Christophe Leroy, linuxppc-dev

Excerpts from Nicholas Piggin's message of December 9, 2021 6:25 pm:
> Excerpts from Christophe Leroy's message of December 8, 2021 7:38 pm:
>> 
>> 
>> On 01/12/2021 at 15:41, Nicholas Piggin wrote:
>>> To avoid any functional changes to radix paths when building with hash
>>> MMU support disabled (and CONFIG_PPC_MM_SLICES=n), always define the
>>> arch get_unmapped_area calls on 64s platforms.
>>> 
>>> Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
>>> ---
>>>   arch/powerpc/include/asm/book3s/64/hash.h |  4 ---
>>>   arch/powerpc/include/asm/book3s/64/mmu.h  |  6 ++++
>>>   arch/powerpc/mm/hugetlbpage.c             | 16 ++++++---
>>>   arch/powerpc/mm/mmap.c                    | 40 +++++++++++++++++++----
>>>   arch/powerpc/mm/slice.c                   | 20 ------------
>>>   5 files changed, 51 insertions(+), 35 deletions(-)
>>> 
>>> diff --git a/arch/powerpc/include/asm/book3s/64/hash.h b/arch/powerpc/include/asm/book3s/64/hash.h
>>> index 674fe0e890dc..a7a0572f3846 100644
>>> --- a/arch/powerpc/include/asm/book3s/64/hash.h
>>> +++ b/arch/powerpc/include/asm/book3s/64/hash.h
>>> @@ -99,10 +99,6 @@
>>>    * Defines the address of the vmemap area, in its own region on
>>>    * hash table CPUs.
>>>    */
>>> -#ifdef CONFIG_PPC_MM_SLICES
>>> -#define HAVE_ARCH_UNMAPPED_AREA
>>> -#define HAVE_ARCH_UNMAPPED_AREA_TOPDOWN
>>> -#endif /* CONFIG_PPC_MM_SLICES */
>>>   
>>>   /* PTEIDX nibble */
>>>   #define _PTEIDX_SECONDARY	0x8
>>> diff --git a/arch/powerpc/include/asm/book3s/64/mmu.h b/arch/powerpc/include/asm/book3s/64/mmu.h
>>> index c02f42d1031e..015d7d972d16 100644
>>> --- a/arch/powerpc/include/asm/book3s/64/mmu.h
>>> +++ b/arch/powerpc/include/asm/book3s/64/mmu.h
>>> @@ -4,6 +4,12 @@
>>>   
>>>   #include <asm/page.h>
>>>   
>>> +#ifdef CONFIG_HUGETLB_PAGE
>>> +#define HAVE_ARCH_HUGETLB_UNMAPPED_AREA
>>> +#endif
>>> +#define HAVE_ARCH_UNMAPPED_AREA
>>> +#define HAVE_ARCH_UNMAPPED_AREA_TOPDOWN
>>> +
>>>   #ifndef __ASSEMBLY__
>>>   /*
>>>    * Page size definition
>>> diff --git a/arch/powerpc/mm/hugetlbpage.c b/arch/powerpc/mm/hugetlbpage.c
>>> index 82d8b368ca6d..ddead41e2194 100644
>>> --- a/arch/powerpc/mm/hugetlbpage.c
>>> +++ b/arch/powerpc/mm/hugetlbpage.c
>>> @@ -542,20 +542,26 @@ struct page *follow_huge_pd(struct vm_area_struct *vma,
>>>   	return page;
>>>   }
>>>   
>>> -#ifdef CONFIG_PPC_MM_SLICES
>>> +#ifdef HAVE_ARCH_HUGETLB_UNMAPPED_AREA
>>> +static inline int file_to_psize(struct file *file)
>>> +{
>>> +	struct hstate *hstate = hstate_file(file);
>>> +	return shift_to_mmu_psize(huge_page_shift(hstate));
>>> +}
>>> +
>>>   unsigned long hugetlb_get_unmapped_area(struct file *file, unsigned long addr,
>>>   					unsigned long len, unsigned long pgoff,
>>>   					unsigned long flags)
>>>   {
>>> -	struct hstate *hstate = hstate_file(file);
>>> -	int mmu_psize = shift_to_mmu_psize(huge_page_shift(hstate));
>>> -
>>>   #ifdef CONFIG_PPC_RADIX_MMU
>>>   	if (radix_enabled())
>>>   		return radix__hugetlb_get_unmapped_area(file, addr, len,
>>>   						       pgoff, flags);
>>>   #endif
>>> -	return slice_get_unmapped_area(addr, len, flags, mmu_psize, 1);
>>> +#ifdef CONFIG_PPC_MM_SLICES
>>> +	return slice_get_unmapped_area(addr, len, flags, file_to_psize(file), 1);
>>> +#endif
>>> +	BUG();
>> 
>> We shouldn't add new instances of BUG().
>> 
>> BUILD_BUG() should do the trick here.
>> 
>>>   }
>>>   #endif
>>>   
>>> diff --git a/arch/powerpc/mm/mmap.c b/arch/powerpc/mm/mmap.c
>>> index ae683fdc716c..c475cf810aa8 100644
>>> --- a/arch/powerpc/mm/mmap.c
>>> +++ b/arch/powerpc/mm/mmap.c
>>> @@ -80,6 +80,7 @@ static inline unsigned long mmap_base(unsigned long rnd,
>>>   	return PAGE_ALIGN(DEFAULT_MAP_WINDOW - gap - rnd);
>>>   }
>>>   
>>> +#ifdef HAVE_ARCH_UNMAPPED_AREA
>>>   #ifdef CONFIG_PPC_RADIX_MMU
>>>   /*
>>>    * Same function as generic code used only for radix, because we don't need to overload
>>> @@ -181,11 +182,42 @@ radix__arch_get_unmapped_area_topdown(struct file *filp,
>>>   	 */
>>>   	return radix__arch_get_unmapped_area(filp, addr0, len, pgoff, flags);
>>>   }
>>> +#endif
>>> +
>>> +unsigned long arch_get_unmapped_area(struct file *filp,
>>> +				     unsigned long addr,
>>> +				     unsigned long len,
>>> +				     unsigned long pgoff,
>>> +				     unsigned long flags)
>>> +{
>>> +#ifdef CONFIG_PPC_MM_SLICES
>>> +	return slice_get_unmapped_area(addr, len, flags,
>>> +				       mm_ctx_user_psize(&current->mm->context), 0);
>>> +#else
>>> +	BUG();
>> 
>> Same.
>> 
>> And the #else isn't needed
>> 
>>> +#endif
>>> +}
>>> +
>>> +unsigned long arch_get_unmapped_area_topdown(struct file *filp,
>>> +					     const unsigned long addr0,
>>> +					     const unsigned long len,
>>> +					     const unsigned long pgoff,
>>> +					     const unsigned long flags)
>>> +{
>>> +#ifdef CONFIG_PPC_MM_SLICES
>>> +	return slice_get_unmapped_area(addr0, len, flags,
>>> +				       mm_ctx_user_psize(&current->mm->context), 1);
>>> +#else
>>> +	BUG();
>> 
>> Same
>> 
>> And the #else isn't needed
> 
> Fair enough. I'll see if mpe can squash in an incremental patch.

Ah, no, we can't do that here because arch_get_unmapped_area* is not 
static, so BUILD_BUG() triggers.

I think we can just look at how it could be improved in future patches.

Thanks,
Nick
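
To spell out the constraint: BUILD_BUG() fails the build whenever the
compiler emits the call, and a function with external linkage is emitted
whether or not any caller remains in the final kernel. So in a
!PPC_MM_SLICES build the function body exists regardless, and an
unconditional BUILD_BUG() in it would break every such build instead of
just trapping an impossible runtime path. A sketch of the shape of the
problem (not the final patch):

	unsigned long arch_get_unmapped_area(struct file *filp,
					     unsigned long addr,
					     unsigned long len,
					     unsigned long pgoff,
					     unsigned long flags)
	{
	#ifdef CONFIG_PPC_MM_SLICES
		return slice_get_unmapped_area(addr, len, flags,
				mm_ctx_user_psize(&current->mm->context), 0);
	#else
		/* External linkage: this body is emitted in every
		 * !SLICES build, so BUILD_BUG() here would fire at
		 * compile time unconditionally. BUG() only fires if
		 * the supposedly dead path is ever reached. */
		BUG();
	#endif
	}

A static helper, by contrast, can hold a BUILD_BUG(), because the
compiler may drop it entirely once all callers are compiled out.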


* Re: [PATCH v6 00/18] powerpc: Make hash MMU code build configurable
  2021-12-01 14:41 [PATCH v6 00/18] powerpc: Make hash MMU code build configurable Nicholas Piggin
                   ` (17 preceding siblings ...)
  2021-12-01 14:41 ` [PATCH v6 18/18] powerpc/microwatt: add POWER9_CPU, clear PPC_64S_HASH_MMU Nicholas Piggin
@ 2021-12-15  0:24 ` Michael Ellerman
  18 siblings, 0 replies; 31+ messages in thread
From: Michael Ellerman @ 2021-12-15  0:24 UTC (permalink / raw)
  To: linuxppc-dev, Nicholas Piggin

On Thu, 2 Dec 2021 00:41:35 +1000, Nicholas Piggin wrote:
> Now that there's a platform that can make good use of it, here's
> a series that can prevent the hash MMU code being built for 64s
> platforms that don't need it.
> 
> Since v5:
> - Make cxl select hash.
> - Add new patch (15) to prevent radix using different get_unmapped_area
>   code when hash support is disabled. This is an intermediate step for
>   now, ideally we will end up with radix always going via the generic
>   code.
> 
> [...]

Applied to powerpc/next.

[01/18] powerpc: Remove unused FW_FEATURE_NATIVE references
        https://git.kernel.org/powerpc/c/79b74a68486765a4fe685ac4069bc71366c538f5
[02/18] powerpc: Rename PPC_NATIVE to PPC_HASH_MMU_NATIVE
        https://git.kernel.org/powerpc/c/7ebc49031d0418dc9ca8475b8133a3a161221ef5
[03/18] powerpc/pseries: Stop selecting PPC_HASH_MMU_NATIVE
        https://git.kernel.org/powerpc/c/a4135cbebde8375e2a9d91261b4546ce3f3b9b0f
[04/18] powerpc/64s: Move and rename do_bad_slb_fault as it is not hash specific
        https://git.kernel.org/powerpc/c/935b534c24f014325b72a3619bbbdc18191f9c3d
[05/18] powerpc/pseries: move process table registration away from hash-specific code
        https://git.kernel.org/powerpc/c/0c7cc15e92157c8886c8df3151eac2c43c3dfa2b
[06/18] powerpc/pseries: lparcfg don't include slb_size line in radix mode
        https://git.kernel.org/powerpc/c/3d3282fd34d82caac5005d9c4d4525054eb3cac1
[07/18] powerpc/64s: move THP trace point creation out of hash specific file
        https://git.kernel.org/powerpc/c/162b0889bba6e721c33d12e15971618785ca778e
[08/18] powerpc/64s: Make flush_and_reload_slb a no-op when radix is enabled
        https://git.kernel.org/powerpc/c/310dce6201fd27fda484e34bf543fb55c33d80b1
[09/18] powerpc/64s: move page size definitions from hash specific file
        https://git.kernel.org/powerpc/c/bdad5d57dfcc6d2b2f8d0bc9d7e85ee794d1d50e
[10/18] powerpc/64s: Rename hash_hugetlbpage.c to hugetlbpage.c
        https://git.kernel.org/powerpc/c/f43d2ffb47c9e86f5ec24e1de6ce6da6808634a2
[11/18] powerpc/64: pcpu setup avoid reading mmu_linear_psize on 64e or radix
        https://git.kernel.org/powerpc/c/ffbe5d21d10f9c7890c07fca17db772f941385bf
[12/18] powerpc: make memremap_compat_align 64s-only
        https://git.kernel.org/powerpc/c/20626177c9de726c48802c15e8635cc154645588
[13/18] powerpc/64e: remove mmu_linear_psize
        https://git.kernel.org/powerpc/c/8dbfc0092b5c8c50f011509893bf0396253cd2ab
[14/18] powerpc/64s: Fix radix MMU when MMU_FTR_HPTE_TABLE is clear
        https://git.kernel.org/powerpc/c/af3a0ea41cbf38e967611e262126357d2fd23955
[15/18] powerpc/64s: Always define arch unmapped area calls
        https://git.kernel.org/powerpc/c/debeda017189e40bff23d1c3d2e4567ca8541aed
[16/18] powerpc/64s: Make hash MMU support configurable
        https://git.kernel.org/powerpc/c/c28573744b74eb6de19add503d6a986795c4c137
[17/18] powerpc/64s: Move hash MMU support code under CONFIG_PPC_64S_HASH_MMU
        https://git.kernel.org/powerpc/c/387e220a2e5e630794e1f5219ed6f11e56271c21
[18/18] powerpc/microwatt: add POWER9_CPU, clear PPC_64S_HASH_MMU
        https://git.kernel.org/powerpc/c/31284f703db2f1605b2dbc6bb0632b04d7be13e7

cheers


end of thread

Thread overview: 31+ messages
-- links below jump to the message on this page --
2021-12-01 14:41 [PATCH v6 00/18] powerpc: Make hash MMU code build configurable Nicholas Piggin
2021-12-01 14:41 ` [PATCH v6 01/18] powerpc: Remove unused FW_FEATURE_NATIVE references Nicholas Piggin
2021-12-01 14:41 ` [PATCH v6 02/18] powerpc: Rename PPC_NATIVE to PPC_HASH_MMU_NATIVE Nicholas Piggin
2021-12-01 14:41 ` [PATCH v6 03/18] powerpc/pseries: Stop selecting PPC_HASH_MMU_NATIVE Nicholas Piggin
2021-12-01 14:41 ` [PATCH v6 04/18] powerpc/64s: Move and rename do_bad_slb_fault as it is not hash specific Nicholas Piggin
2021-12-01 14:41 ` [PATCH v6 05/18] powerpc/pseries: move process table registration away from hash-specific code Nicholas Piggin
2021-12-01 14:41 ` [PATCH v6 06/18] powerpc/pseries: lparcfg don't include slb_size line in radix mode Nicholas Piggin
2021-12-01 14:41 ` [PATCH v6 07/18] powerpc/64s: move THP trace point creation out of hash specific file Nicholas Piggin
2021-12-01 14:41 ` [PATCH v6 08/18] powerpc/64s: Make flush_and_reload_slb a no-op when radix is enabled Nicholas Piggin
2021-12-01 14:41 ` [PATCH v6 09/18] powerpc/64s: move page size definitions from hash specific file Nicholas Piggin
2021-12-01 14:41 ` [PATCH v6 10/18] powerpc/64s: Rename hash_hugetlbpage.c to hugetlbpage.c Nicholas Piggin
2021-12-01 14:41 ` [PATCH v6 11/18] powerpc/64: pcpu setup avoid reading mmu_linear_psize on 64e or radix Nicholas Piggin
2021-12-01 14:41 ` [PATCH v6 12/18] powerpc: make memremap_compat_align 64s-only Nicholas Piggin
2021-12-01 14:41 ` [PATCH v6 13/18] powerpc/64e: remove mmu_linear_psize Nicholas Piggin
2021-12-01 14:41 ` [PATCH v6 14/18] powerpc/64s: Fix radix MMU when MMU_FTR_HPTE_TABLE is clear Nicholas Piggin
2021-12-01 14:41 ` [PATCH v6 15/18] powerpc/64s: Always define arch unmapped area calls Nicholas Piggin
2021-12-08  9:38   ` Christophe Leroy
2021-12-09  8:25     ` Nicholas Piggin
2021-12-09  8:28       ` Christophe Leroy
2021-12-09  9:30       ` Nicholas Piggin
2021-12-08 10:00   ` Christophe Leroy
2021-12-09  8:29     ` Nicholas Piggin
2021-12-01 14:41 ` [PATCH v6 16/18] powerpc/64s: Make hash MMU support configurable Nicholas Piggin
2021-12-01 14:41 ` [PATCH v6 17/18] powerpc/64s: Move hash MMU support code under CONFIG_PPC_64S_HASH_MMU Nicholas Piggin
2021-12-02 13:30   ` Fabiano Rosas
2021-12-03 12:34   ` Michael Ellerman
2021-12-07 12:58   ` Michael Ellerman
2021-12-07 13:00   ` Michael Ellerman
2021-12-09  8:30     ` Nicholas Piggin
2021-12-01 14:41 ` [PATCH v6 18/18] powerpc/microwatt: add POWER9_CPU, clear PPC_64S_HASH_MMU Nicholas Piggin
2021-12-15  0:24 ` [PATCH v6 00/18] powerpc: Make hash MMU code build configurable Michael Ellerman
