* [PATCH 0/9] powerpc: MMU cleanup and prepare for new Book3-E
@ 2009-03-11  3:53 Benjamin Herrenschmidt
  2009-03-11  3:53 ` [PATCH 1/9] powerpc/kconfig: Kill PPC_MULTIPLATFORM Benjamin Herrenschmidt
                   ` (8 more replies)
  0 siblings, 9 replies; 14+ messages in thread
From: Benjamin Herrenschmidt @ 2009-03-11  3:53 UTC (permalink / raw)
  To: linuxppc-dev

This series of patches does some groundwork to make the 32 and 64-bit
page table and PTE access code slightly more common, and to facilitate
the addition of a new MMU type (Book3-E version 2.06 and later) for
which I'd like to be able to use the same definitions for both
32 and 64-bit implementations.

There should be little actual change to the compiled code; where there
is, I should have mentioned it or separated it into its own patch.
Unless I missed something, it's mostly a matter of moving things around.

Note: some of the patches were already posted separately; those
shouldn't have changed since they were last sent to the list, but I'm
resending the whole lot here for consistency.


* [PATCH 1/9] powerpc/kconfig: Kill PPC_MULTIPLATFORM
  2009-03-11  3:53 [PATCH 0/9] powerpc: MMU cleanup and prepare for new Book3-E Benjamin Herrenschmidt
@ 2009-03-11  3:53 ` Benjamin Herrenschmidt
  2009-03-11 12:04   ` Kumar Gala
  2009-03-11  3:53 ` [PATCH 2/9] powerpc/mm: Split the various pgtable-* headers based on MMU type (v3) Benjamin Herrenschmidt
                   ` (7 subsequent siblings)
  8 siblings, 1 reply; 14+ messages in thread
From: Benjamin Herrenschmidt @ 2009-03-11  3:53 UTC (permalink / raw)
  To: linuxppc-dev

CONFIG_PPC_MULTIPLATFORM is a remnant of the pre-powerpc days and isn't
really meaningful anymore. It was basically equivalent to PPC64 || 6xx.

This removes it along with the following changes:

 - 32-bit platforms that relied on PPC32 && PPC_MULTIPLATFORM now rely
   on 6xx which is what they want anyway.

 - A new symbol, PPC_BOOK3S, is defined that represents compliance with
   the "Server" variant of the architecture. This is set when either 6xx
   or PPC64 is set, and opens the door for a future 64-bit BOOK3E.

 - 64-bit platforms that relied on PPC64 && PPC_MULTIPLATFORM now use
   PPC64 && PPC_BOOK3S.

 - A separate and selectable CONFIG_PPC_OF_BOOT_TRAMPOLINE option is now
   used to control whether prom_init.c is built, as sketched below.
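
For illustration, here is a hedged before/after sketch of how a typical
64-bit platform Kconfig entry changes under the new scheme (PPC_FOO is
a made-up placeholder, not a real platform; the actual conversions are
in the diff below):

 config PPC_FOO
 	bool "Hypothetical 64-bit platform"
-	depends on PPC_MULTIPLATFORM && PPC64
+	depends on PPC64 && PPC_BOOK3S

32-bit platforms that depended on PPC_MULTIPLATFORM && PPC32 now simply
depend on 6xx instead.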

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
---

 arch/powerpc/Kconfig                       |    2 +-
 arch/powerpc/Kconfig.debug                 |    2 +-
 arch/powerpc/kernel/Makefile               |    2 +-
 arch/powerpc/kernel/head_32.S              |    7 +++++--
 arch/powerpc/kernel/head_64.S              |    6 +++++-
 arch/powerpc/platforms/512x/Kconfig        |    4 ++--
 arch/powerpc/platforms/52xx/Kconfig        |    2 +-
 arch/powerpc/platforms/82xx/Kconfig        |    2 +-
 arch/powerpc/platforms/83xx/Kconfig        |    2 +-
 arch/powerpc/platforms/86xx/Kconfig        |    2 +-
 arch/powerpc/platforms/Kconfig             |   27 +++++++++++++++------------
 arch/powerpc/platforms/Kconfig.cputype     |   18 +++++++++++++-----
 arch/powerpc/platforms/amigaone/Kconfig    |    2 +-
 arch/powerpc/platforms/cell/Kconfig        |    6 +++---
 arch/powerpc/platforms/chrp/Kconfig        |    2 +-
 arch/powerpc/platforms/embedded6xx/Kconfig |    2 +-
 arch/powerpc/platforms/iseries/Kconfig     |    2 +-
 arch/powerpc/platforms/maple/Kconfig       |    2 +-
 arch/powerpc/platforms/pasemi/Kconfig      |    2 +-
 arch/powerpc/platforms/powermac/Kconfig    |    2 +-
 arch/powerpc/platforms/prep/Kconfig        |    2 +-
 arch/powerpc/platforms/ps3/Kconfig         |    2 +-
 arch/powerpc/platforms/pseries/Kconfig     |    2 +-
 23 files changed, 60 insertions(+), 42 deletions(-)

--- linux-work.orig/arch/powerpc/Kconfig	2009-02-24 15:04:13.000000000 +1100
+++ linux-work/arch/powerpc/Kconfig	2009-02-24 15:31:51.000000000 +1100
@@ -313,7 +313,7 @@ config ARCH_ENABLE_MEMORY_HOTREMOVE
 
 config KEXEC
 	bool "kexec system call (EXPERIMENTAL)"
-	depends on (PPC_PRPMC2800 || PPC_MULTIPLATFORM) && EXPERIMENTAL
+	depends on BOOK3S && EXPERIMENTAL
 	help
 	  kexec is a system call that implements the ability to shutdown your
 	  current kernel, and to start another kernel.  It is like a reboot
Index: linux-work/arch/powerpc/kernel/Makefile
===================================================================
--- linux-work.orig/arch/powerpc/kernel/Makefile	2009-02-24 15:05:49.000000000 +1100
+++ linux-work/arch/powerpc/kernel/Makefile	2009-02-24 15:12:20.000000000 +1100
@@ -75,7 +75,7 @@ obj-y				+= time.o prom.o traps.o setup-
 obj-$(CONFIG_PPC32)		+= entry_32.o setup_32.o
 obj-$(CONFIG_PPC64)		+= dma-iommu.o iommu.o
 obj-$(CONFIG_KGDB)		+= kgdb.o
-obj-$(CONFIG_PPC_MULTIPLATFORM)	+= prom_init.o
+obj-$(CONFIG_PPC_OF_BOOT_TRAMPOLINE)	+= prom_init.o
 obj-$(CONFIG_MODULES)		+= ppc_ksyms.o
 obj-$(CONFIG_BOOTX_TEXT)	+= btext.o
 obj-$(CONFIG_SMP)		+= smp.o
Index: linux-work/arch/powerpc/kernel/head_32.S
===================================================================
--- linux-work.orig/arch/powerpc/kernel/head_32.S	2009-02-24 15:12:55.000000000 +1100
+++ linux-work/arch/powerpc/kernel/head_32.S	2009-02-24 16:24:44.000000000 +1100
@@ -108,18 +108,21 @@ __start:
  * because OF may have I/O devices mapped into that area
  * (particularly on CHRP).
  */
-#ifdef CONFIG_PPC_MULTIPLATFORM
 	cmpwi	0,r5,0
 	beq	1f
 
+#ifdef CONFIG_PPC_OF_BOOT_TRAMPOLINE
 	/* find out where we are now */
 	bcl	20,31,$+4
 0:	mflr	r8			/* r8 = runtime addr here */
 	addis	r8,r8,(_stext - 0b)@ha
 	addi	r8,r8,(_stext - 0b)@l	/* current runtime base addr */
 	bl	prom_init
+#endif /* CONFIG_PPC_OF_BOOT_TRAMPOLINE */
+
+	/* We never return. We also hit that trap if trying to boot
+	 * from OF while CONFIG_PPC_OF_BOOT_TRAMPOLINE isn't selected */
 	trap
-#endif
 
 /*
  * Check for BootX signature when supporting PowerMac and branch to
Index: linux-work/arch/powerpc/platforms/Kconfig
===================================================================
--- linux-work.orig/arch/powerpc/platforms/Kconfig	2009-02-24 15:02:47.000000000 +1100
+++ linux-work/arch/powerpc/platforms/Kconfig	2009-02-24 15:35:02.000000000 +1100
@@ -1,14 +1,5 @@
 menu "Platform support"
 
-config PPC_MULTIPLATFORM
-	bool
-	depends on PPC64 || 6xx
-	default y
-
-config CLASSIC32
-	def_bool y
-	depends on 6xx && PPC_MULTIPLATFORM
-
 source "arch/powerpc/platforms/pseries/Kconfig"
 source "arch/powerpc/platforms/iseries/Kconfig"
 source "arch/powerpc/platforms/chrp/Kconfig"
@@ -32,12 +23,24 @@ source "arch/powerpc/platforms/amigaone/
 
 config PPC_NATIVE
 	bool
-	depends on PPC_MULTIPLATFORM
+	depends on 6xx || PPC64
 	help
 	  Support for running natively on the hardware, i.e. without
 	  a hypervisor. This option is not user-selectable but should
 	  be selected by all platforms that need it.
 
+config PPC_OF_BOOT_TRAMPOLINE
+	bool "Support booting from Open Firmware or yaboot"
+	depends on 6xx || PPC64
+	default y
+	help
+	  Support for booting from Open Firmware or yaboot using an
+	  Open Firmware client interface. This enables the kernel to
+	  communicate with Open Firmware to retrieve system information
+	  such as the device tree.
+
+	  In case of doubt, say Y.
+
 config UDBG_RTAS_CONSOLE
 	bool "RTAS based debug console"
 	depends on PPC_RTAS
@@ -71,7 +74,7 @@ config PPC_I8259
 
 config U3_DART
 	bool
-	depends on PPC_MULTIPLATFORM && PPC64
+	depends on PPC64
 	default n
 
 config PPC_RTAS
@@ -188,7 +191,7 @@ config PPC601_SYNC_FIX
 
 config TAU
 	bool "On-chip CPU temperature sensor support"
-	depends on CLASSIC32
+	depends on 6xx
 	help
 	  G3 and G4 processors have an on-chip temperature sensor called the
 	  'Thermal Assist Unit (TAU)', which, in theory, can measure the on-die
Index: linux-work/arch/powerpc/platforms/512x/Kconfig
===================================================================
--- linux-work.orig/arch/powerpc/platforms/512x/Kconfig	2009-02-24 15:13:42.000000000 +1100
+++ linux-work/arch/powerpc/platforms/512x/Kconfig	2009-02-24 15:14:02.000000000 +1100
@@ -12,7 +12,7 @@ config PPC_MPC5121
 
 config MPC5121_ADS
 	bool "Freescale MPC5121E ADS"
-	depends on PPC_MULTIPLATFORM && PPC32
+	depends on 6xx
 	select DEFAULT_UIMAGE
 	select PPC_MPC5121
 	select MPC5121_ADS_CPLD
@@ -21,7 +21,7 @@ config MPC5121_ADS
 
 config MPC5121_GENERIC
 	bool "Generic support for simple MPC5121 based boards"
-	depends on PPC_MULTIPLATFORM && PPC32
+	depends on 6xx
 	select DEFAULT_UIMAGE
 	select PPC_MPC5121
 	help
Index: linux-work/arch/powerpc/platforms/52xx/Kconfig
===================================================================
--- linux-work.orig/arch/powerpc/platforms/52xx/Kconfig	2009-02-24 15:14:15.000000000 +1100
+++ linux-work/arch/powerpc/platforms/52xx/Kconfig	2009-02-24 15:14:18.000000000 +1100
@@ -1,6 +1,6 @@
 config PPC_MPC52xx
 	bool "52xx-based boards"
-	depends on PPC_MULTIPLATFORM && PPC32
+	depends on 6xx
 	select PPC_CLOCK
 	select PPC_PCI_CHOICE
 
Index: linux-work/arch/powerpc/platforms/82xx/Kconfig
===================================================================
--- linux-work.orig/arch/powerpc/platforms/82xx/Kconfig	2009-02-24 15:14:27.000000000 +1100
+++ linux-work/arch/powerpc/platforms/82xx/Kconfig	2009-02-24 15:14:32.000000000 +1100
@@ -1,6 +1,6 @@
 menuconfig PPC_82xx
 	bool "82xx-based boards (PQ II)"
-	depends on 6xx && PPC_MULTIPLATFORM
+	depends on 6xx
 
 if PPC_82xx
 
Index: linux-work/arch/powerpc/platforms/83xx/Kconfig
===================================================================
--- linux-work.orig/arch/powerpc/platforms/83xx/Kconfig	2009-02-24 15:14:38.000000000 +1100
+++ linux-work/arch/powerpc/platforms/83xx/Kconfig	2009-02-24 15:14:43.000000000 +1100
@@ -1,6 +1,6 @@
 menuconfig PPC_83xx
 	bool "83xx-based boards"
-	depends on 6xx && PPC_MULTIPLATFORM
+	depends on 6xx
 	select PPC_UDBG_16550
 	select PPC_PCI_CHOICE
 	select FSL_PCI if PCI
Index: linux-work/arch/powerpc/platforms/86xx/Kconfig
===================================================================
--- linux-work.orig/arch/powerpc/platforms/86xx/Kconfig	2009-02-24 15:14:52.000000000 +1100
+++ linux-work/arch/powerpc/platforms/86xx/Kconfig	2009-02-24 15:14:58.000000000 +1100
@@ -1,7 +1,7 @@
 config PPC_86xx
 menuconfig PPC_86xx
 	bool "86xx-based boards"
-	depends on 6xx && PPC_MULTIPLATFORM
+	depends on 6xx
 	select FSL_SOC
 	select ALTIVEC
 	help
Index: linux-work/arch/powerpc/platforms/Kconfig.cputype
===================================================================
--- linux-work.orig/arch/powerpc/platforms/Kconfig.cputype	2009-02-24 15:22:46.000000000 +1100
+++ linux-work/arch/powerpc/platforms/Kconfig.cputype	2009-02-24 15:35:09.000000000 +1100
@@ -57,9 +57,17 @@ config E200
 
 endchoice
 
+# Until we have a choice of exclusive CPU types on 64-bit, we always
+# use PPC_BOOK3S. On 32-bit, this is equivalent to 6xx, which is the
+# "classic" MMU
+
+config PPC_BOOK3S
+       def_bool y
+       depends on PPC64 || 6xx
+
 config POWER4_ONLY
 	bool "Optimize for POWER4"
-	depends on PPC64
+	depends on PPC64 && PPC_BOOK3S
 	default n
 	---help---
 	  Cause the compiler to optimize for POWER4/POWER5/PPC970 processors.
@@ -68,16 +76,16 @@ config POWER4_ONLY
 
 config POWER3
 	bool
-	depends on PPC64
+	depends on PPC64 && PPC_BOOK3S
 	default y if !POWER4_ONLY
 
 config POWER4
-	depends on PPC64
+	depends on PPC64 && PPC_BOOK3S
 	def_bool y
 
 config TUNE_CELL
 	bool "Optimize for Cell Broadband Engine"
-	depends on PPC64
+	depends on PPC64 && PPC_BOOK3S
 	help
 	  Cause the compiler to optimize for the PPE of the Cell Broadband
 	  Engine. This will make the code run considerably faster on Cell
@@ -147,7 +155,7 @@ config PHYS_64BIT
 
 config ALTIVEC
 	bool "AltiVec Support"
-	depends on CLASSIC32 || POWER4
+	depends on 6xx || POWER4
 	---help---
 	  This option enables kernel support for the Altivec extensions to the
 	  PowerPC processor. The kernel currently supports saving and restoring
Index: linux-work/arch/powerpc/platforms/amigaone/Kconfig
===================================================================
--- linux-work.orig/arch/powerpc/platforms/amigaone/Kconfig	2009-02-24 15:27:56.000000000 +1100
+++ linux-work/arch/powerpc/platforms/amigaone/Kconfig	2009-02-24 15:28:08.000000000 +1100
@@ -1,6 +1,6 @@
 config AMIGAONE
 	bool "Eyetech AmigaOne/MAI Teron"
-	depends on PPC32 && BROKEN_ON_SMP && PPC_MULTIPLATFORM
+	depends on 6xx && BROKEN_ON_SMP
 	select PPC_I8259
 	select PPC_INDIRECT_PCI
 	select PPC_UDBG_16550
Index: linux-work/arch/powerpc/platforms/cell/Kconfig
===================================================================
--- linux-work.orig/arch/powerpc/platforms/cell/Kconfig	2009-02-24 15:15:08.000000000 +1100
+++ linux-work/arch/powerpc/platforms/cell/Kconfig	2009-02-24 15:25:15.000000000 +1100
@@ -23,7 +23,7 @@ config PPC_CELL_NATIVE
 
 config PPC_IBM_CELL_BLADE
 	bool "IBM Cell Blade"
-	depends on PPC_MULTIPLATFORM && PPC64
+	depends on PPC64 && PPC_BOOK3S
 	select PPC_CELL_NATIVE
 	select MMIO_NVRAM
 	select PPC_UDBG_16550
@@ -31,7 +31,7 @@ config PPC_IBM_CELL_BLADE
 
 config PPC_CELLEB
 	bool "Toshiba's Cell Reference Set 'Celleb' Architecture"
-	depends on PPC_MULTIPLATFORM && PPC64
+	depends on PPC64 && PPC_BOOK3S
 	select PPC_CELL_NATIVE
 	select HAS_TXX9_SERIAL
 	select PPC_UDBG_BEAT
@@ -40,7 +40,7 @@ config PPC_CELLEB
 
 config PPC_CELL_QPACE
 	bool "IBM Cell - QPACE"
-	depends on PPC_MULTIPLATFORM && PPC64
+	depends on PPC64 && PPC_BOOK3S
 	select PPC_CELL_COMMON
 
 menu "Cell Broadband Engine options"
Index: linux-work/arch/powerpc/platforms/chrp/Kconfig
===================================================================
--- linux-work.orig/arch/powerpc/platforms/chrp/Kconfig	2009-02-24 15:15:31.000000000 +1100
+++ linux-work/arch/powerpc/platforms/chrp/Kconfig	2009-02-24 15:15:44.000000000 +1100
@@ -1,6 +1,6 @@
 config PPC_CHRP
 	bool "Common Hardware Reference Platform (CHRP) based machines"
-	depends on PPC_MULTIPLATFORM && PPC32
+	depends on 6xx
 	select MPIC
 	select PPC_I8259
 	select PPC_INDIRECT_PCI
Index: linux-work/arch/powerpc/platforms/embedded6xx/Kconfig
===================================================================
--- linux-work.orig/arch/powerpc/platforms/embedded6xx/Kconfig	2009-02-24 15:15:58.000000000 +1100
+++ linux-work/arch/powerpc/platforms/embedded6xx/Kconfig	2009-02-24 15:16:21.000000000 +1100
@@ -1,6 +1,6 @@
 config EMBEDDED6xx
 	bool "Embedded 6xx/7xx/7xxx-based boards"
-	depends on PPC32 && BROKEN_ON_SMP && PPC_MULTIPLATFORM
+	depends on 6xx && BROKEN_ON_SMP
 
 config LINKSTATION
 	bool "Linkstation / Kurobox(HG) from Buffalo"
Index: linux-work/arch/powerpc/platforms/iseries/Kconfig
===================================================================
--- linux-work.orig/arch/powerpc/platforms/iseries/Kconfig	2009-02-24 15:21:42.000000000 +1100
+++ linux-work/arch/powerpc/platforms/iseries/Kconfig	2009-02-24 15:25:35.000000000 +1100
@@ -1,6 +1,6 @@
 config PPC_ISERIES
 	bool "IBM Legacy iSeries"
-	depends on PPC_MULTIPLATFORM && PPC64
+	depends on PPC64 && PPC_BOOK3S
 	select PPC_INDIRECT_IO
 	select PPC_PCI_CHOICE if EMBEDDED
 
Index: linux-work/arch/powerpc/platforms/maple/Kconfig
===================================================================
--- linux-work.orig/arch/powerpc/platforms/maple/Kconfig	2009-02-24 15:22:00.000000000 +1100
+++ linux-work/arch/powerpc/platforms/maple/Kconfig	2009-02-24 15:25:41.000000000 +1100
@@ -1,5 +1,5 @@
 config PPC_MAPLE
-	depends on PPC_MULTIPLATFORM && PPC64
+	depends on PPC64 && PPC_BOOK3S
 	bool "Maple 970FX Evaluation Board"
 	select PCI
 	select MPIC
Index: linux-work/arch/powerpc/platforms/pasemi/Kconfig
===================================================================
--- linux-work.orig/arch/powerpc/platforms/pasemi/Kconfig	2009-02-24 15:22:18.000000000 +1100
+++ linux-work/arch/powerpc/platforms/pasemi/Kconfig	2009-02-24 15:25:50.000000000 +1100
@@ -1,5 +1,5 @@
 config PPC_PASEMI
-	depends on PPC_MULTIPLATFORM && PPC64
+	depends on PPC64 && PPC_BOOK3S
 	bool "PA Semi SoC-based platforms"
 	default n
 	select MPIC
Index: linux-work/arch/powerpc/platforms/powermac/Kconfig
===================================================================
--- linux-work.orig/arch/powerpc/platforms/powermac/Kconfig	2009-02-24 15:26:00.000000000 +1100
+++ linux-work/arch/powerpc/platforms/powermac/Kconfig	2009-02-24 15:26:11.000000000 +1100
@@ -1,6 +1,6 @@
 config PPC_PMAC
 	bool "Apple PowerMac based machines"
-	depends on PPC_MULTIPLATFORM
+	depends on PPC_BOOK3S
 	select MPIC
 	select PCI
 	select PPC_INDIRECT_PCI if PPC32
Index: linux-work/arch/powerpc/platforms/prep/Kconfig
===================================================================
--- linux-work.orig/arch/powerpc/platforms/prep/Kconfig	2009-02-24 15:26:55.000000000 +1100
+++ linux-work/arch/powerpc/platforms/prep/Kconfig	2009-02-24 15:27:04.000000000 +1100
@@ -1,6 +1,6 @@
 config PPC_PREP
 	bool "PowerPC Reference Platform (PReP) based machines"
-	depends on PPC_MULTIPLATFORM && PPC32 && BROKEN
+	depends on 6xx && BROKEN
 	select MPIC
 	select PPC_I8259
 	select PPC_INDIRECT_PCI
Index: linux-work/arch/powerpc/platforms/ps3/Kconfig
===================================================================
--- linux-work.orig/arch/powerpc/platforms/ps3/Kconfig	2009-02-24 15:27:20.000000000 +1100
+++ linux-work/arch/powerpc/platforms/ps3/Kconfig	2009-02-24 15:27:31.000000000 +1100
@@ -1,6 +1,6 @@
 config PPC_PS3
 	bool "Sony PS3"
-	depends on PPC_MULTIPLATFORM && PPC64
+	depends on PPC64 && PPC_BOOK3S
 	select PPC_CELL
 	select USB_ARCH_HAS_OHCI
 	select USB_OHCI_LITTLE_ENDIAN
Index: linux-work/arch/powerpc/platforms/pseries/Kconfig
===================================================================
--- linux-work.orig/arch/powerpc/platforms/pseries/Kconfig	2009-02-24 15:27:36.000000000 +1100
+++ linux-work/arch/powerpc/platforms/pseries/Kconfig	2009-02-24 15:27:47.000000000 +1100
@@ -1,5 +1,5 @@
 config PPC_PSERIES
-	depends on PPC_MULTIPLATFORM && PPC64
+	depends on PPC64 && PPC_BOOK3S
 	bool "IBM pSeries & new (POWER5-based) iSeries"
 	select MPIC
 	select PPC_I8259
Index: linux-work/arch/powerpc/Kconfig.debug
===================================================================
--- linux-work.orig/arch/powerpc/Kconfig.debug	2009-02-24 15:30:26.000000000 +1100
+++ linux-work/arch/powerpc/Kconfig.debug	2009-02-24 15:31:01.000000000 +1100
@@ -129,7 +129,7 @@ config BDI_SWITCH
 
 config BOOTX_TEXT
 	bool "Support for early boot text console (BootX or OpenFirmware only)"
-	depends on PPC_OF && PPC_MULTIPLATFORM
+	depends on PPC_OF && PPC_BOOK3S
 	help
 	  Say Y here to see progress messages from the boot firmware in text
 	  mode. Requires either BootX or Open Firmware.
Index: linux-work/arch/powerpc/kernel/head_64.S
===================================================================
--- linux-work.orig/arch/powerpc/kernel/head_64.S	2009-02-24 16:23:30.000000000 +1100
+++ linux-work/arch/powerpc/kernel/head_64.S	2009-02-24 16:25:07.000000000 +1100
@@ -1360,6 +1360,7 @@ _GLOBAL(__start_initialization_multiplat
 	b	.__after_prom_start
 
 _INIT_STATIC(__boot_from_prom)
+#ifdef CONFIG_PPC_OF_BOOT_TRAMPOLINE
 	/* Save parameters */
 	mr	r31,r3
 	mr	r30,r4
@@ -1390,7 +1391,10 @@ _INIT_STATIC(__boot_from_prom)
 	/* Do all of the interaction with OF client interface */
 	mr	r8,r26
 	bl	.prom_init
-	/* We never return */
+#endif /* CONFIG_PPC_OF_BOOT_TRAMPOLINE */
+
+	/* We never return. We also hit that trap if trying to boot
+	 * from OF while CONFIG_PPC_OF_BOOT_TRAMPOLINE isn't selected */
 	trap
 
 _STATIC(__after_prom_start)


* [PATCH 2/9] powerpc/mm: Split the various pgtable-* headers based on MMU type (v3)
  2009-03-11  3:53 [PATCH 0/9] powerpc: MMU cleanup and prepare for new Book3-E Benjamin Herrenschmidt
  2009-03-11  3:53 ` [PATCH 1/9] powerpc/kconfig: Kill PPC_MULTIPLATFORM Benjamin Herrenschmidt
@ 2009-03-11  3:53 ` Benjamin Herrenschmidt
  2009-03-11  3:53 ` [PATCH 3/9] powerpc/mm: Unify PTE_RPN_SHIFT and _PAGE_CHG_MASK definitions Benjamin Herrenschmidt
                   ` (6 subsequent siblings)
  8 siblings, 0 replies; 14+ messages in thread
From: Benjamin Herrenschmidt @ 2009-03-11  3:53 UTC (permalink / raw)
  To: linuxppc-dev

This patch moves the definition of the PTE format for each MMU type
to separate files instead of all in one file. This improves overall
maintainability and will make it easier to add new types.

Additionally, on 64-bit, I've separated the headers describing the
format of the page table tree (3 vs. 4 levels for 64K vs. 4K pages)
from the headers specific to the PTE format of hash-based processors;
this will make it easier to add support for Book3 "E" 64-bit
implementations.

There are still some type-related #ifdefs in the generic headers; we
might remove them in the long run, but this patch shouldn't result in
any code change, -hopefully- just definitions being moved around, as
sketched below.
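
To make the new layout concrete, here is a minimal sketch of the
dispatch this patch introduces in pgtable-ppc32.h, with one extra
hypothetical branch showing how a future MMU type would slot in
(CONFIG_PPC_BOOK3E and pte-book3e.h are made-up names, not part of
this series; the FSL Book-E and 8xx branches are elided, and on
64-bit, pgtable-ppc64.h similarly pulls in pte-hash64.h):

#if defined(CONFIG_40x)
#include <asm/pte-40x.h>
#elif defined(CONFIG_44x)
#include <asm/pte-44x.h>
/* ... CONFIG_FSL_BOOKE and CONFIG_8xx branches elided ... */
#elif defined(CONFIG_PPC_BOOK3E)	/* hypothetical future MMU type */
#include <asm/pte-book3e.h>		/* hypothetical new header */
#else /* CONFIG_6xx */
#include <asm/pte-hash32.h>
#endif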

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
---

v2. rebase and use pte-* files instead of pgtable-* files
v3. fix #ifndef/#define include guard names

 arch/powerpc/include/asm/pgtable-4k.h        |  117 ---------
 arch/powerpc/include/asm/pgtable-64k.h       |  155 -------------
 arch/powerpc/include/asm/pgtable-ppc32.h     |  321 +--------------------------
 arch/powerpc/include/asm/pgtable-ppc64-4k.h  |   74 ++++++
 arch/powerpc/include/asm/pgtable-ppc64-64k.h |   42 +++
 arch/powerpc/include/asm/pgtable-ppc64.h     |   87 ++++---
 arch/powerpc/include/asm/pte-40x.h           |   64 +++++
 arch/powerpc/include/asm/pte-44x.h           |  102 ++++++++
 arch/powerpc/include/asm/pte-8xx.h           |   64 +++++
 arch/powerpc/include/asm/pte-fsl-booke.h     |   46 +++
 arch/powerpc/include/asm/pte-hash32.h        |   49 ++++
 arch/powerpc/include/asm/pte-hash64-4k.h     |   20 +
 arch/powerpc/include/asm/pte-hash64-64k.h    |  115 +++++++++
 arch/powerpc/include/asm/pte-hash64.h        |   47 +++
 14 files changed, 700 insertions(+), 603 deletions(-)

--- linux-work.orig/arch/powerpc/include/asm/pgtable-ppc32.h	2009-03-03 13:25:30.000000000 +1100
+++ linux-work/arch/powerpc/include/asm/pgtable-ppc32.h	2009-03-10 15:26:51.000000000 +1100
@@ -19,55 +19,6 @@ extern int icache_44x_need_flush;
 #endif /* __ASSEMBLY__ */
 
 /*
- * The PowerPC MMU uses a hash table containing PTEs, together with
- * a set of 16 segment registers (on 32-bit implementations), to define
- * the virtual to physical address mapping.
- *
- * We use the hash table as an extended TLB, i.e. a cache of currently
- * active mappings.  We maintain a two-level page table tree, much
- * like that used by the i386, for the sake of the Linux memory
- * management code.  Low-level assembler code in hashtable.S
- * (procedure hash_page) is responsible for extracting ptes from the
- * tree and putting them into the hash table when necessary, and
- * updating the accessed and modified bits in the page table tree.
- */
-
-/*
- * The PowerPC MPC8xx uses a TLB with hardware assisted, software tablewalk.
- * We also use the two level tables, but we can put the real bits in them
- * needed for the TLB and tablewalk.  These definitions require Mx_CTR.PPM = 0,
- * Mx_CTR.PPCS = 0, and MD_CTR.TWAM = 1.  The level 2 descriptor has
- * additional page protection (when Mx_CTR.PPCS = 1) that allows TLB hit
- * based upon user/super access.  The TLB does not have accessed nor write
- * protect.  We assume that if the TLB get loaded with an entry it is
- * accessed, and overload the changed bit for write protect.  We use
- * two bits in the software pte that are supposed to be set to zero in
- * the TLB entry (24 and 25) for these indicators.  Although the level 1
- * descriptor contains the guarded and writethrough/copyback bits, we can
- * set these at the page level since they get copied from the Mx_TWC
- * register when the TLB entry is loaded.  We will use bit 27 for guard, since
- * that is where it exists in the MD_TWC, and bit 26 for writethrough.
- * These will get masked from the level 2 descriptor at TLB load time, and
- * copied to the MD_TWC before it gets loaded.
- * Large page sizes added.  We currently support two sizes, 4K and 8M.
- * This also allows a TLB hander optimization because we can directly
- * load the PMD into MD_TWC.  The 8M pages are only used for kernel
- * mapping of well known areas.  The PMD (PGD) entries contain control
- * flags in addition to the address, so care must be taken that the
- * software no longer assumes these are only pointers.
- */
-
-/*
- * At present, all PowerPC 400-class processors share a similar TLB
- * architecture. The instruction and data sides share a unified,
- * 64-entry, fully-associative TLB which is maintained totally under
- * software control. In addition, the instruction side has a
- * hardware-managed, 4-entry, fully-associative TLB which serves as a
- * first level to the shared TLB. These two TLBs are known as the UTLB
- * and ITLB, respectively (see "mmu.h" for definitions).
- */
-
-/*
  * The normal case is that PTEs are 32-bits and we have a 1-page
  * 1024-entry pgdir pointing to 1-page 1024-entry PTE pages.  -- paulus
  *
@@ -135,261 +86,25 @@ extern int icache_44x_need_flush;
  */
 
 #if defined(CONFIG_40x)
-
-/* There are several potential gotchas here.  The 40x hardware TLBLO
-   field looks like this:
-
-   0  1  2  3  4  ... 18 19 20 21 22 23 24 25 26 27 28 29 30 31
-   RPN.....................  0  0 EX WR ZSEL.......  W  I  M  G
-
-   Where possible we make the Linux PTE bits match up with this
-
-   - bits 20 and 21 must be cleared, because we use 4k pages (40x can
-     support down to 1k pages), this is done in the TLBMiss exception
-     handler.
-   - We use only zones 0 (for kernel pages) and 1 (for user pages)
-     of the 16 available.  Bit 24-26 of the TLB are cleared in the TLB
-     miss handler.  Bit 27 is PAGE_USER, thus selecting the correct
-     zone.
-   - PRESENT *must* be in the bottom two bits because swap cache
-     entries use the top 30 bits.  Because 40x doesn't support SMP
-     anyway, M is irrelevant so we borrow it for PAGE_PRESENT.  Bit 30
-     is cleared in the TLB miss handler before the TLB entry is loaded.
-   - All other bits of the PTE are loaded into TLBLO without
-     modification, leaving us only the bits 20, 21, 24, 25, 26, 30 for
-     software PTE bits.  We actually use use bits 21, 24, 25, and
-     30 respectively for the software bits: ACCESSED, DIRTY, RW, and
-     PRESENT.
-*/
-
-/* Definitions for 40x embedded chips. */
-#define	_PAGE_GUARDED	0x001	/* G: page is guarded from prefetch */
-#define _PAGE_FILE	0x001	/* when !present: nonlinear file mapping */
-#define _PAGE_PRESENT	0x002	/* software: PTE contains a translation */
-#define	_PAGE_NO_CACHE	0x004	/* I: caching is inhibited */
-#define	_PAGE_WRITETHRU	0x008	/* W: caching is write-through */
-#define	_PAGE_USER	0x010	/* matches one of the zone permission bits */
-#define	_PAGE_RW	0x040	/* software: Writes permitted */
-#define	_PAGE_DIRTY	0x080	/* software: dirty page */
-#define _PAGE_HWWRITE	0x100	/* hardware: Dirty & RW, set in exception */
-#define _PAGE_HWEXEC	0x200	/* hardware: EX permission */
-#define _PAGE_ACCESSED	0x400	/* software: R: page referenced */
-
-#define _PMD_PRESENT	0x400	/* PMD points to page of PTEs */
-#define _PMD_BAD	0x802
-#define _PMD_SIZE	0x0e0	/* size field, != 0 for large-page PMD entry */
-#define _PMD_SIZE_4M	0x0c0
-#define _PMD_SIZE_16M	0x0e0
-#define PMD_PAGE_SIZE(pmdval)	(1024 << (((pmdval) & _PMD_SIZE) >> 4))
-
-/* Until my rework is finished, 40x still needs atomic PTE updates */
-#define PTE_ATOMIC_UPDATES	1
-
+#include <asm/pte-40x.h>
 #elif defined(CONFIG_44x)
-/*
- * Definitions for PPC440
- *
- * Because of the 3 word TLB entries to support 36-bit addressing,
- * the attribute are difficult to map in such a fashion that they
- * are easily loaded during exception processing.  I decided to
- * organize the entry so the ERPN is the only portion in the
- * upper word of the PTE and the attribute bits below are packed
- * in as sensibly as they can be in the area below a 4KB page size
- * oriented RPN.  This at least makes it easy to load the RPN and
- * ERPN fields in the TLB. -Matt
- *
- * Note that these bits preclude future use of a page size
- * less than 4KB.
- *
- *
- * PPC 440 core has following TLB attribute fields;
- *
- *   TLB1:
- *   0  1  2  3  4  ... 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31
- *   RPN.................................  -  -  -  -  -  - ERPN.......
- *
- *   TLB2:
- *   0  1  2  3  4  ... 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31
- *   -  -  -  -  -    - U0 U1 U2 U3 W  I  M  G  E   - UX UW UR SX SW SR
- *
- * Newer 440 cores (440x6 as used on AMCC 460EX/460GT) have additional
- * TLB2 storage attibute fields. Those are:
- *
- *   TLB2:
- *   0...10    11   12   13   14   15   16...31
- *   no change WL1  IL1I IL1D IL2I IL2D no change
- *
- * There are some constrains and options, to decide mapping software bits
- * into TLB entry.
- *
- *   - PRESENT *must* be in the bottom three bits because swap cache
- *     entries use the top 29 bits for TLB2.
- *
- *   - FILE *must* be in the bottom three bits because swap cache
- *     entries use the top 29 bits for TLB2.
- *
- *   - CACHE COHERENT bit (M) has no effect on original PPC440 cores,
- *     because it doesn't support SMP. However, some later 460 variants
- *     have -some- form of SMP support and so I keep the bit there for
- *     future use
- *
- * With the PPC 44x Linux implementation, the 0-11th LSBs of the PTE are used
- * for memory protection related functions (see PTE structure in
- * include/asm-ppc/mmu.h).  The _PAGE_XXX definitions in this file map to the
- * above bits.  Note that the bit values are CPU specific, not architecture
- * specific.
- *
- * The kernel PTE entry holds an arch-dependent swp_entry structure under
- * certain situations. In other words, in such situations some portion of
- * the PTE bits are used as a swp_entry. In the PPC implementation, the
- * 3-24th LSB are shared with swp_entry, however the 0-2nd three LSB still
- * hold protection values. That means the three protection bits are
- * reserved for both PTE and SWAP entry at the most significant three
- * LSBs.
- *
- * There are three protection bits available for SWAP entry:
- *	_PAGE_PRESENT
- *	_PAGE_FILE
- *	_PAGE_HASHPTE (if HW has)
- *
- * So those three bits have to be inside of 0-2nd LSB of PTE.
- *
- */
-
-#define _PAGE_PRESENT	0x00000001		/* S: PTE valid */
-#define _PAGE_RW	0x00000002		/* S: Write permission */
-#define _PAGE_FILE	0x00000004		/* S: nonlinear file mapping */
-#define _PAGE_HWEXEC	0x00000004		/* H: Execute permission */
-#define _PAGE_ACCESSED	0x00000008		/* S: Page referenced */
-#define _PAGE_DIRTY	0x00000010		/* S: Page dirty */
-#define _PAGE_SPECIAL	0x00000020		/* S: Special page */
-#define _PAGE_USER	0x00000040		/* S: User page */
-#define _PAGE_ENDIAN	0x00000080		/* H: E bit */
-#define _PAGE_GUARDED	0x00000100		/* H: G bit */
-#define _PAGE_COHERENT	0x00000200		/* H: M bit */
-#define _PAGE_NO_CACHE	0x00000400		/* H: I bit */
-#define _PAGE_WRITETHRU	0x00000800		/* H: W bit */
-
-/* TODO: Add large page lowmem mapping support */
-#define _PMD_PRESENT	0
-#define _PMD_PRESENT_MASK (PAGE_MASK)
-#define _PMD_BAD	(~PAGE_MASK)
-
-/* ERPN in a PTE never gets cleared, ignore it */
-#define _PTE_NONE_MASK	0xffffffff00000000ULL
-
-#define __HAVE_ARCH_PTE_SPECIAL
-
+#include <asm/pte-44x.h>
 #elif defined(CONFIG_FSL_BOOKE)
-/*
-   MMU Assist Register 3:
-
-   32 33 34 35 36  ... 50 51 52 53 54 55 56 57 58 59 60 61 62 63
-   RPN......................  0  0 U0 U1 U2 U3 UX SX UW SW UR SR
-
-   - PRESENT *must* be in the bottom three bits because swap cache
-     entries use the top 29 bits.
-
-   - FILE *must* be in the bottom three bits because swap cache
-     entries use the top 29 bits.
-*/
-
-/* Definitions for FSL Book-E Cores */
-#define _PAGE_PRESENT	0x00001	/* S: PTE contains a translation */
-#define _PAGE_USER	0x00002	/* S: User page (maps to UR) */
-#define _PAGE_FILE	0x00002	/* S: when !present: nonlinear file mapping */
-#define _PAGE_RW	0x00004	/* S: Write permission (SW) */
-#define _PAGE_DIRTY	0x00008	/* S: Page dirty */
-#define _PAGE_HWEXEC	0x00010	/* H: SX permission */
-#define _PAGE_ACCESSED	0x00020	/* S: Page referenced */
-
-#define _PAGE_ENDIAN	0x00040	/* H: E bit */
-#define _PAGE_GUARDED	0x00080	/* H: G bit */
-#define _PAGE_COHERENT	0x00100	/* H: M bit */
-#define _PAGE_NO_CACHE	0x00200	/* H: I bit */
-#define _PAGE_WRITETHRU	0x00400	/* H: W bit */
-#define _PAGE_SPECIAL	0x00800 /* S: Special page */
-
-#ifdef CONFIG_PTE_64BIT
-/* ERPN in a PTE never gets cleared, ignore it */
-#define _PTE_NONE_MASK	0xffffffffffff0000ULL
-#endif
-
-#define _PMD_PRESENT	0
-#define _PMD_PRESENT_MASK (PAGE_MASK)
-#define _PMD_BAD	(~PAGE_MASK)
-
-#define __HAVE_ARCH_PTE_SPECIAL
-
+#include <asm/pte-fsl-booke.h>
 #elif defined(CONFIG_8xx)
-/* Definitions for 8xx embedded chips. */
-#define _PAGE_PRESENT	0x0001	/* Page is valid */
-#define _PAGE_FILE	0x0002	/* when !present: nonlinear file mapping */
-#define _PAGE_NO_CACHE	0x0002	/* I: cache inhibit */
-#define _PAGE_SHARED	0x0004	/* No ASID (context) compare */
-
-/* These five software bits must be masked out when the entry is loaded
- * into the TLB.
- */
-#define _PAGE_EXEC	0x0008	/* software: i-cache coherency required */
-#define _PAGE_GUARDED	0x0010	/* software: guarded access */
-#define _PAGE_DIRTY	0x0020	/* software: page changed */
-#define _PAGE_RW	0x0040	/* software: user write access allowed */
-#define _PAGE_ACCESSED	0x0080	/* software: page referenced */
-
-/* Setting any bits in the nibble with the follow two controls will
- * require a TLB exception handler change.  It is assumed unused bits
- * are always zero.
- */
-#define _PAGE_HWWRITE	0x0100	/* h/w write enable: never set in Linux PTE */
-#define _PAGE_USER	0x0800	/* One of the PP bits, the other is USER&~RW */
-
-#define _PMD_PRESENT	0x0001
-#define _PMD_BAD	0x0ff0
-#define _PMD_PAGE_MASK	0x000c
-#define _PMD_PAGE_8M	0x000c
-
-#define _PTE_NONE_MASK _PAGE_ACCESSED
-
-/* Until my rework is finished, 8xx still needs atomic PTE updates */
-#define PTE_ATOMIC_UPDATES	1
-
+#include <asm/pte-8xx.h>
 #else /* CONFIG_6xx */
-/* Definitions for 60x, 740/750, etc. */
-#define _PAGE_PRESENT	0x001	/* software: pte contains a translation */
-#define _PAGE_HASHPTE	0x002	/* hash_page has made an HPTE for this pte */
-#define _PAGE_FILE	0x004	/* when !present: nonlinear file mapping */
-#define _PAGE_USER	0x004	/* usermode access allowed */
-#define _PAGE_GUARDED	0x008	/* G: prohibit speculative access */
-#define _PAGE_COHERENT	0x010	/* M: enforce memory coherence (SMP systems) */
-#define _PAGE_NO_CACHE	0x020	/* I: cache inhibit */
-#define _PAGE_WRITETHRU	0x040	/* W: cache write-through */
-#define _PAGE_DIRTY	0x080	/* C: page changed */
-#define _PAGE_ACCESSED	0x100	/* R: page referenced */
-#define _PAGE_EXEC	0x200	/* software: i-cache coherency required */
-#define _PAGE_RW	0x400	/* software: user write access allowed */
-#define _PAGE_SPECIAL	0x800	/* software: Special page */
-
-#ifdef CONFIG_PTE_64BIT
-/* We never clear the high word of the pte */
-#define _PTE_NONE_MASK	(0xffffffff00000000ULL | _PAGE_HASHPTE)
-#else
-#define _PTE_NONE_MASK	_PAGE_HASHPTE
+#include <asm/pte-hash32.h>
 #endif
 
-#define _PMD_PRESENT	0
-#define _PMD_PRESENT_MASK (PAGE_MASK)
-#define _PMD_BAD	(~PAGE_MASK)
-
-/* Hash table based platforms need atomic updates of the linux PTE */
-#define PTE_ATOMIC_UPDATES	1
-
+/* If _PAGE_SPECIAL is defined, then we advertise our support for it */
+#ifdef _PAGE_SPECIAL
 #define __HAVE_ARCH_PTE_SPECIAL
-
 #endif
 
 /*
- * Some bits are only used on some cpu families...
+ * Some bits are only used on some cpu families... Make sure that all
+ * the undefined ones get defined as 0
  */
 #ifndef _PAGE_HASHPTE
 #define _PAGE_HASHPTE	0
@@ -600,11 +315,19 @@ extern void flush_hash_entry(struct mm_s
 			     unsigned long address);
 
 /*
- * Atomic PTE updates.
- *
- * pte_update clears and sets bit atomically, and returns
- * the old pte value.  In the 64-bit PTE case we lock around the
- * low PTE word since we expect ALL flag bits to be there
+ * PTE updates. This function is called whenever an existing
+ * valid PTE is updated. This does -not- include set_pte_at()
+ * which nowadays only sets a new PTE.
+ *
+ * Depending on the type of MMU, we may need to use atomic updates
+ * and the PTE may be either 32 or 64 bit wide. In the latter case,
+ * when using atomic updates, only the low part of the PTE is
+ * accessed atomically.
+ *
+ * In addition, on 44x, we also maintain a global flag indicating
+ * that an executable user mapping was modified, which is needed
+ * to properly flush the virtually tagged instruction cache of
+ * those implementations.
  */
 #ifndef CONFIG_PTE_64BIT
 static inline unsigned long pte_update(pte_t *p,
Index: linux-work/arch/powerpc/include/asm/pgtable-ppc64.h
===================================================================
--- linux-work.orig/arch/powerpc/include/asm/pgtable-ppc64.h	2009-03-03 13:25:30.000000000 +1100
+++ linux-work/arch/powerpc/include/asm/pgtable-ppc64.h	2009-03-10 15:26:49.000000000 +1100
@@ -11,9 +11,9 @@
 #endif /* __ASSEMBLY__ */
 
 #ifdef CONFIG_PPC_64K_PAGES
-#include <asm/pgtable-64k.h>
+#include <asm/pgtable-ppc64-64k.h>
 #else
-#include <asm/pgtable-4k.h>
+#include <asm/pgtable-ppc64-4k.h>
 #endif
 
 #define FIRST_USER_ADDRESS	0
@@ -25,6 +25,8 @@
                 	    PUD_INDEX_SIZE + PGD_INDEX_SIZE + PAGE_SHIFT)
 #define PGTABLE_RANGE (ASM_CONST(1) << PGTABLE_EADDR_SIZE)
 
+
+/* Some sanity checking */
 #if TASK_SIZE_USER64 > PGTABLE_RANGE
 #error TASK_SIZE_USER64 exceeds pagetable range
 #endif
@@ -33,7 +35,6 @@
 #error TASK_SIZE_USER64 exceeds user VSID range
 #endif
 
-
 /*
  * Define the address range of the vmalloc VM area.
  */
@@ -76,29 +77,26 @@
 
 
 /*
- * Common bits in a linux-style PTE.  These match the bits in the
- * (hardware-defined) PowerPC PTE as closely as possible. Additional
- * bits may be defined in pgtable-*.h
- */
-#define _PAGE_PRESENT	0x0001 /* software: pte contains a translation */
-#define _PAGE_USER	0x0002 /* matches one of the PP bits */
-#define _PAGE_FILE	0x0002 /* (!present only) software: pte holds file offset */
-#define _PAGE_EXEC	0x0004 /* No execute on POWER4 and newer (we invert) */
-#define _PAGE_GUARDED	0x0008
-#define _PAGE_COHERENT	0x0010 /* M: enforce memory coherence (SMP systems) */
-#define _PAGE_NO_CACHE	0x0020 /* I: cache inhibit */
-#define _PAGE_WRITETHRU	0x0040 /* W: cache write-through */
-#define _PAGE_DIRTY	0x0080 /* C: page changed */
-#define _PAGE_ACCESSED	0x0100 /* R: page referenced */
-#define _PAGE_RW	0x0200 /* software: user write access allowed */
-#define _PAGE_BUSY	0x0800 /* software: PTE & hash are busy */
+ * Include the PTE bits definitions
+ */
+#include <asm/pte-hash64.h>
 
-/* Strong Access Ordering */
-#define _PAGE_SAO	(_PAGE_WRITETHRU | _PAGE_NO_CACHE | _PAGE_COHERENT)
+/* To make some generic powerpc code happy */
+#ifndef _PAGE_HWEXEC
+#define _PAGE_HWEXEC		0
+#endif
+
+/* Some other useful definitions */
+#define PTE_RPN_MAX	(1UL << (64 - PTE_RPN_SHIFT))
+#define PTE_RPN_MASK	(~((1UL<<PTE_RPN_SHIFT)-1))
+
+/* _PAGE_CHG_MASK masks the bits that are to be preserved across
+ * pgprot changes
+ */
+#define _PAGE_CHG_MASK	(PTE_RPN_MASK | _PAGE_HPTEFLAGS | _PAGE_DIRTY | \
+                         _PAGE_ACCESSED | _PAGE_SPECIAL)
 
-#define _PAGE_BASE	(_PAGE_PRESENT | _PAGE_ACCESSED | _PAGE_COHERENT)
 
-#define _PAGE_WRENABLE	(_PAGE_RW | _PAGE_DIRTY)
 
 /* __pgprot defined in arch/powerpc/include/asm/page.h */
 #define PAGE_NONE	__pgprot(_PAGE_PRESENT | _PAGE_ACCESSED)
@@ -117,16 +115,9 @@
 #define PAGE_AGP	__pgprot(_PAGE_BASE | _PAGE_WRENABLE | _PAGE_NO_CACHE)
 #define HAVE_PAGE_AGP
 
-#define PAGE_PROT_BITS	(_PAGE_GUARDED | _PAGE_COHERENT | \
-			 _PAGE_NO_CACHE | _PAGE_WRITETHRU |		\
-			 _PAGE_4K_PFN | _PAGE_RW | _PAGE_USER |		\
-			 _PAGE_ACCESSED | _PAGE_DIRTY | _PAGE_EXEC)
-/* PTEIDX nibble */
-#define _PTEIDX_SECONDARY	0x8
-#define _PTEIDX_GROUP_IX	0x7
+/* We always have _PAGE_SPECIAL on 64 bit */
+#define __HAVE_ARCH_PTE_SPECIAL
 
-/* To make some generic powerpc code happy */
-#define _PAGE_HWEXEC		0
 
 /*
  * POWER4 and newer have per page execute protection, older chips can only
@@ -163,6 +154,38 @@
 #ifndef __ASSEMBLY__
 
 /*
+ * This is the default implementation of various PTE accessors, it's
+ * used in all cases except Book3S with 64K pages where we have a
+ * concept of sub-pages
+ */
+#ifndef __real_pte
+
+#ifdef STRICT_MM_TYPECHECKS
+#define __real_pte(e,p)		((real_pte_t){(e)})
+#define __rpte_to_pte(r)	((r).pte)
+#else
+#define __real_pte(e,p)		(e)
+#define __rpte_to_pte(r)	(__pte(r))
+#endif
+#define __rpte_to_hidx(r,index)	(pte_val(__rpte_to_pte(r)) >> 12)
+
+#define pte_iterate_hashed_subpages(rpte, psize, va, index, shift)       \
+	do {							         \
+		index = 0;					         \
+		shift = mmu_psize_defs[psize].shift;		         \
+
+#define pte_iterate_hashed_end() } while(0)
+
+#ifdef CONFIG_PPC_HAS_HASH_64K
+#define pte_pagesize_index(mm, addr, pte)	get_slice_psize(mm, addr)
+#else
+#define pte_pagesize_index(mm, addr, pte)	MMU_PAGE_4K
+#endif
+
+#endif /* __real_pte */
+
+
+/*
  * Conversion functions: convert a page and protection to a page entry,
  * and a page entry and page directory to the page they refer to.
  *
Index: linux-work/arch/powerpc/include/asm/pgtable-4k.h
===================================================================
--- linux-work.orig/arch/powerpc/include/asm/pgtable-4k.h	2009-03-02 10:08:13.000000000 +1100
+++ /dev/null	1970-01-01 00:00:00.000000000 +0000
@@ -1,117 +0,0 @@
-#ifndef _ASM_POWERPC_PGTABLE_4K_H
-#define _ASM_POWERPC_PGTABLE_4K_H
-/*
- * Entries per page directory level.  The PTE level must use a 64b record
- * for each page table entry.  The PMD and PGD level use a 32b record for
- * each entry by assuming that each entry is page aligned.
- */
-#define PTE_INDEX_SIZE  9
-#define PMD_INDEX_SIZE  7
-#define PUD_INDEX_SIZE  7
-#define PGD_INDEX_SIZE  9
-
-#ifndef __ASSEMBLY__
-#define PTE_TABLE_SIZE	(sizeof(pte_t) << PTE_INDEX_SIZE)
-#define PMD_TABLE_SIZE	(sizeof(pmd_t) << PMD_INDEX_SIZE)
-#define PUD_TABLE_SIZE	(sizeof(pud_t) << PUD_INDEX_SIZE)
-#define PGD_TABLE_SIZE	(sizeof(pgd_t) << PGD_INDEX_SIZE)
-#endif	/* __ASSEMBLY__ */
-
-#define PTRS_PER_PTE	(1 << PTE_INDEX_SIZE)
-#define PTRS_PER_PMD	(1 << PMD_INDEX_SIZE)
-#define PTRS_PER_PUD	(1 << PMD_INDEX_SIZE)
-#define PTRS_PER_PGD	(1 << PGD_INDEX_SIZE)
-
-/* PMD_SHIFT determines what a second-level page table entry can map */
-#define PMD_SHIFT	(PAGE_SHIFT + PTE_INDEX_SIZE)
-#define PMD_SIZE	(1UL << PMD_SHIFT)
-#define PMD_MASK	(~(PMD_SIZE-1))
-
-/* With 4k base page size, hugepage PTEs go at the PMD level */
-#define MIN_HUGEPTE_SHIFT	PMD_SHIFT
-
-/* PUD_SHIFT determines what a third-level page table entry can map */
-#define PUD_SHIFT	(PMD_SHIFT + PMD_INDEX_SIZE)
-#define PUD_SIZE	(1UL << PUD_SHIFT)
-#define PUD_MASK	(~(PUD_SIZE-1))
-
-/* PGDIR_SHIFT determines what a fourth-level page table entry can map */
-#define PGDIR_SHIFT	(PUD_SHIFT + PUD_INDEX_SIZE)
-#define PGDIR_SIZE	(1UL << PGDIR_SHIFT)
-#define PGDIR_MASK	(~(PGDIR_SIZE-1))
-
-/* PTE bits */
-#define _PAGE_HASHPTE	0x0400 /* software: pte has an associated HPTE */
-#define _PAGE_SECONDARY 0x8000 /* software: HPTE is in secondary group */
-#define _PAGE_GROUP_IX  0x7000 /* software: HPTE index within group */
-#define _PAGE_F_SECOND  _PAGE_SECONDARY
-#define _PAGE_F_GIX     _PAGE_GROUP_IX
-#define _PAGE_SPECIAL	0x10000 /* software: special page */
-#define __HAVE_ARCH_PTE_SPECIAL
-
-/* PTE flags to conserve for HPTE identification */
-#define _PAGE_HPTEFLAGS (_PAGE_BUSY | _PAGE_HASHPTE | \
-			 _PAGE_SECONDARY | _PAGE_GROUP_IX)
-
-/* There is no 4K PFN hack on 4K pages */
-#define _PAGE_4K_PFN	0
-
-/* PAGE_MASK gives the right answer below, but only by accident */
-/* It should be preserving the high 48 bits and then specifically */
-/* preserving _PAGE_SECONDARY | _PAGE_GROUP_IX */
-#define _PAGE_CHG_MASK	(PAGE_MASK | _PAGE_ACCESSED | _PAGE_DIRTY | \
-                         _PAGE_HPTEFLAGS | _PAGE_SPECIAL)
-
-/* Bits to mask out from a PMD to get to the PTE page */
-#define PMD_MASKED_BITS		0
-/* Bits to mask out from a PUD to get to the PMD page */
-#define PUD_MASKED_BITS		0
-/* Bits to mask out from a PGD to get to the PUD page */
-#define PGD_MASKED_BITS		0
-
-/* shift to put page number into pte */
-#define PTE_RPN_SHIFT	(17)
-
-#ifdef STRICT_MM_TYPECHECKS
-#define __real_pte(e,p)		((real_pte_t){(e)})
-#define __rpte_to_pte(r)	((r).pte)
-#else
-#define __real_pte(e,p)		(e)
-#define __rpte_to_pte(r)	(__pte(r))
-#endif
-#define __rpte_to_hidx(r,index)	(pte_val(__rpte_to_pte(r)) >> 12)
-
-#define pte_iterate_hashed_subpages(rpte, psize, va, index, shift)       \
-	do {							         \
-		index = 0;					         \
-		shift = mmu_psize_defs[psize].shift;		         \
-
-#define pte_iterate_hashed_end() } while(0)
-
-#ifdef CONFIG_PPC_HAS_HASH_64K
-#define pte_pagesize_index(mm, addr, pte)	get_slice_psize(mm, addr)
-#else
-#define pte_pagesize_index(mm, addr, pte)	MMU_PAGE_4K
-#endif
-
-/*
- * 4-level page tables related bits
- */
-
-#define pgd_none(pgd)		(!pgd_val(pgd))
-#define pgd_bad(pgd)		(pgd_val(pgd) == 0)
-#define pgd_present(pgd)	(pgd_val(pgd) != 0)
-#define pgd_clear(pgdp)		(pgd_val(*(pgdp)) = 0)
-#define pgd_page_vaddr(pgd)	(pgd_val(pgd) & ~PGD_MASKED_BITS)
-#define pgd_page(pgd)		virt_to_page(pgd_page_vaddr(pgd))
-
-#define pud_offset(pgdp, addr)	\
-  (((pud_t *) pgd_page_vaddr(*(pgdp))) + \
-    (((addr) >> PUD_SHIFT) & (PTRS_PER_PUD - 1)))
-
-#define pud_ERROR(e) \
-	printk("%s:%d: bad pud %08lx.\n", __FILE__, __LINE__, pud_val(e))
-
-#define remap_4k_pfn(vma, addr, pfn, prot)	\
-	remap_pfn_range((vma), (addr), (pfn), PAGE_SIZE, (prot))
-#endif /* _ASM_POWERPC_PGTABLE_4K_H */
Index: linux-work/arch/powerpc/include/asm/pgtable-64k.h
===================================================================
--- linux-work.orig/arch/powerpc/include/asm/pgtable-64k.h	2009-03-02 10:08:13.000000000 +1100
+++ /dev/null	1970-01-01 00:00:00.000000000 +0000
@@ -1,155 +0,0 @@
-#ifndef _ASM_POWERPC_PGTABLE_64K_H
-#define _ASM_POWERPC_PGTABLE_64K_H
-
-#include <asm-generic/pgtable-nopud.h>
-
-
-#define PTE_INDEX_SIZE  12
-#define PMD_INDEX_SIZE  12
-#define PUD_INDEX_SIZE	0
-#define PGD_INDEX_SIZE  4
-
-#ifndef __ASSEMBLY__
-#define PTE_TABLE_SIZE	(sizeof(real_pte_t) << PTE_INDEX_SIZE)
-#define PMD_TABLE_SIZE	(sizeof(pmd_t) << PMD_INDEX_SIZE)
-#define PGD_TABLE_SIZE	(sizeof(pgd_t) << PGD_INDEX_SIZE)
-
-#define PTRS_PER_PTE	(1 << PTE_INDEX_SIZE)
-#define PTRS_PER_PMD	(1 << PMD_INDEX_SIZE)
-#define PTRS_PER_PGD	(1 << PGD_INDEX_SIZE)
-
-#ifdef CONFIG_PPC_SUBPAGE_PROT
-/*
- * For the sub-page protection option, we extend the PGD with one of
- * these.  Basically we have a 3-level tree, with the top level being
- * the protptrs array.  To optimize speed and memory consumption when
- * only addresses < 4GB are being protected, pointers to the first
- * four pages of sub-page protection words are stored in the low_prot
- * array.
- * Each page of sub-page protection words protects 1GB (4 bytes
- * protects 64k).  For the 3-level tree, each page of pointers then
- * protects 8TB.
- */
-struct subpage_prot_table {
-	unsigned long maxaddr;	/* only addresses < this are protected */
-	unsigned int **protptrs[2];
-	unsigned int *low_prot[4];
-};
-
-#undef PGD_TABLE_SIZE
-#define PGD_TABLE_SIZE		((sizeof(pgd_t) << PGD_INDEX_SIZE) + \
-				 sizeof(struct subpage_prot_table))
-
-#define SBP_L1_BITS		(PAGE_SHIFT - 2)
-#define SBP_L2_BITS		(PAGE_SHIFT - 3)
-#define SBP_L1_COUNT		(1 << SBP_L1_BITS)
-#define SBP_L2_COUNT		(1 << SBP_L2_BITS)
-#define SBP_L2_SHIFT		(PAGE_SHIFT + SBP_L1_BITS)
-#define SBP_L3_SHIFT		(SBP_L2_SHIFT + SBP_L2_BITS)
-
-extern void subpage_prot_free(pgd_t *pgd);
-
-static inline struct subpage_prot_table *pgd_subpage_prot(pgd_t *pgd)
-{
-	return (struct subpage_prot_table *)(pgd + PTRS_PER_PGD);
-}
-#endif /* CONFIG_PPC_SUBPAGE_PROT */
-#endif	/* __ASSEMBLY__ */
-
-/* With 4k base page size, hugepage PTEs go at the PMD level */
-#define MIN_HUGEPTE_SHIFT	PAGE_SHIFT
-
-/* PMD_SHIFT determines what a second-level page table entry can map */
-#define PMD_SHIFT	(PAGE_SHIFT + PTE_INDEX_SIZE)
-#define PMD_SIZE	(1UL << PMD_SHIFT)
-#define PMD_MASK	(~(PMD_SIZE-1))
-
-/* PGDIR_SHIFT determines what a third-level page table entry can map */
-#define PGDIR_SHIFT	(PMD_SHIFT + PMD_INDEX_SIZE)
-#define PGDIR_SIZE	(1UL << PGDIR_SHIFT)
-#define PGDIR_MASK	(~(PGDIR_SIZE-1))
-
-/* Additional PTE bits (don't change without checking asm in hash_low.S) */
-#define __HAVE_ARCH_PTE_SPECIAL
-#define _PAGE_SPECIAL	0x00000400 /* software: special page */
-#define _PAGE_HPTE_SUB	0x0ffff000 /* combo only: sub pages HPTE bits */
-#define _PAGE_HPTE_SUB0	0x08000000 /* combo only: first sub page */
-#define _PAGE_COMBO	0x10000000 /* this is a combo 4k page */
-#define _PAGE_4K_PFN	0x20000000 /* PFN is for a single 4k page */
-
-/* For 64K page, we don't have a separate _PAGE_HASHPTE bit. Instead,
- * we set that to be the whole sub-bits mask. The C code will only
- * test this, so a multi-bit mask will work. For combo pages, this
- * is equivalent as effectively, the old _PAGE_HASHPTE was an OR of
- * all the sub bits. For real 64k pages, we now have the assembly set
- * _PAGE_HPTE_SUB0 in addition to setting the HIDX bits which overlap
- * that mask. This is fine as long as the HIDX bits are never set on
- * a PTE that isn't hashed, which is the case today.
- *
- * A little nit is for the huge page C code, which does the hashing
- * in C, we need to provide which bit to use.
- */
-#define _PAGE_HASHPTE	_PAGE_HPTE_SUB
-
-/* Note the full page bits must be in the same location as for normal
- * 4k pages as the same asssembly will be used to insert 64K pages
- * wether the kernel has CONFIG_PPC_64K_PAGES or not
- */
-#define _PAGE_F_SECOND  0x00008000 /* full page: hidx bits */
-#define _PAGE_F_GIX     0x00007000 /* full page: hidx bits */
-
-/* PTE flags to conserve for HPTE identification */
-#define _PAGE_HPTEFLAGS (_PAGE_BUSY | _PAGE_HASHPTE | _PAGE_COMBO)
-
-/* Shift to put page number into pte.
- *
- * That gives us a max RPN of 34 bits, which means a max of 50 bits
- * of addressable physical space, or 46 bits for the special 4k PFNs.
- */
-#define PTE_RPN_SHIFT	(30)
-#define PTE_RPN_MAX	(1UL << (64 - PTE_RPN_SHIFT))
-#define PTE_RPN_MASK	(~((1UL<<PTE_RPN_SHIFT)-1))
-
-/* _PAGE_CHG_MASK masks of bits that are to be preserved accross
- * pgprot changes
- */
-#define _PAGE_CHG_MASK	(PTE_RPN_MASK | _PAGE_HPTEFLAGS | _PAGE_DIRTY | \
-                         _PAGE_ACCESSED | _PAGE_SPECIAL)
-
-/* Bits to mask out from a PMD to get to the PTE page */
-#define PMD_MASKED_BITS		0x1ff
-/* Bits to mask out from a PGD/PUD to get to the PMD page */
-#define PUD_MASKED_BITS		0x1ff
-
-/* Manipulate "rpte" values */
-#define __real_pte(e,p) 	((real_pte_t) { \
-	(e), pte_val(*((p) + PTRS_PER_PTE)) })
-#define __rpte_to_hidx(r,index)	((pte_val((r).pte) & _PAGE_COMBO) ? \
-        (((r).hidx >> ((index)<<2)) & 0xf) : ((pte_val((r).pte) >> 12) & 0xf))
-#define __rpte_to_pte(r)	((r).pte)
-#define __rpte_sub_valid(rpte, index) \
-	(pte_val(rpte.pte) & (_PAGE_HPTE_SUB0 >> (index)))
-
-
-/* Trick: we set __end to va + 64k, which happens works for
- * a 16M page as well as we want only one iteration
- */
-#define pte_iterate_hashed_subpages(rpte, psize, va, index, shift)	    \
-        do {                                                                \
-                unsigned long __end = va + PAGE_SIZE;                       \
-                unsigned __split = (psize == MMU_PAGE_4K ||                 \
-				    psize == MMU_PAGE_64K_AP);              \
-                shift = mmu_psize_defs[psize].shift;                        \
-		for (index = 0; va < __end; index++, va += (1L << shift)) { \
-		        if (!__split || __rpte_sub_valid(rpte, index)) do { \
-
-#define pte_iterate_hashed_end() } while(0); } } while(0)
-
-#define pte_pagesize_index(mm, addr, pte)	\
-	(((pte) & _PAGE_COMBO)? MMU_PAGE_4K: MMU_PAGE_64K)
-
-#define remap_4k_pfn(vma, addr, pfn, prot)				\
-	remap_pfn_range((vma), (addr), (pfn), PAGE_SIZE,		\
-			__pgprot(pgprot_val((prot)) | _PAGE_4K_PFN))
-
-#endif /* _ASM_POWERPC_PGTABLE_64K_H */
Index: linux-work/arch/powerpc/include/asm/pgtable-ppc64-4k.h
===================================================================
--- /dev/null	1970-01-01 00:00:00.000000000 +0000
+++ linux-work/arch/powerpc/include/asm/pgtable-ppc64-4k.h	2009-03-10 15:27:23.000000000 +1100
@@ -0,0 +1,74 @@
+#ifndef _ASM_POWERPC_PGTABLE_PPC64_4K_H
+#define _ASM_POWERPC_PGTABLE_PPC64_4K_H
+/*
+ * Entries per page directory level.  The PTE level must use a 64b record
+ * for each page table entry.  The PMD and PGD level use a 32b record for
+ * each entry by assuming that each entry is page aligned.
+ */
+#define PTE_INDEX_SIZE  9
+#define PMD_INDEX_SIZE  7
+#define PUD_INDEX_SIZE  7
+#define PGD_INDEX_SIZE  9
+
+#ifndef __ASSEMBLY__
+#define PTE_TABLE_SIZE	(sizeof(pte_t) << PTE_INDEX_SIZE)
+#define PMD_TABLE_SIZE	(sizeof(pmd_t) << PMD_INDEX_SIZE)
+#define PUD_TABLE_SIZE	(sizeof(pud_t) << PUD_INDEX_SIZE)
+#define PGD_TABLE_SIZE	(sizeof(pgd_t) << PGD_INDEX_SIZE)
+#endif	/* __ASSEMBLY__ */
+
+#define PTRS_PER_PTE	(1 << PTE_INDEX_SIZE)
+#define PTRS_PER_PMD	(1 << PMD_INDEX_SIZE)
+#define PTRS_PER_PUD	(1 << PMD_INDEX_SIZE)
+#define PTRS_PER_PGD	(1 << PGD_INDEX_SIZE)
+
+/* PMD_SHIFT determines what a second-level page table entry can map */
+#define PMD_SHIFT	(PAGE_SHIFT + PTE_INDEX_SIZE)
+#define PMD_SIZE	(1UL << PMD_SHIFT)
+#define PMD_MASK	(~(PMD_SIZE-1))
+
+/* With 4k base page size, hugepage PTEs go at the PMD level */
+#define MIN_HUGEPTE_SHIFT	PMD_SHIFT
+
+/* PUD_SHIFT determines what a third-level page table entry can map */
+#define PUD_SHIFT	(PMD_SHIFT + PMD_INDEX_SIZE)
+#define PUD_SIZE	(1UL << PUD_SHIFT)
+#define PUD_MASK	(~(PUD_SIZE-1))
+
+/* PGDIR_SHIFT determines what a fourth-level page table entry can map */
+#define PGDIR_SHIFT	(PUD_SHIFT + PUD_INDEX_SIZE)
+#define PGDIR_SIZE	(1UL << PGDIR_SHIFT)
+#define PGDIR_MASK	(~(PGDIR_SIZE-1))
+
+/* Bits to mask out from a PMD to get to the PTE page */
+#define PMD_MASKED_BITS		0
+/* Bits to mask out from a PUD to get to the PMD page */
+#define PUD_MASKED_BITS		0
+/* Bits to mask out from a PGD to get to the PUD page */
+#define PGD_MASKED_BITS		0
+
+
+/*
+ * 4-level page tables related bits
+ */
+
+#define pgd_none(pgd)		(!pgd_val(pgd))
+#define pgd_bad(pgd)		(pgd_val(pgd) == 0)
+#define pgd_present(pgd)	(pgd_val(pgd) != 0)
+#define pgd_clear(pgdp)		(pgd_val(*(pgdp)) = 0)
+#define pgd_page_vaddr(pgd)	(pgd_val(pgd) & ~PGD_MASKED_BITS)
+#define pgd_page(pgd)		virt_to_page(pgd_page_vaddr(pgd))
+
+#define pud_offset(pgdp, addr)	\
+  (((pud_t *) pgd_page_vaddr(*(pgdp))) + \
+    (((addr) >> PUD_SHIFT) & (PTRS_PER_PUD - 1)))
+
+#define pud_ERROR(e) \
+	printk("%s:%d: bad pud %08lx.\n", __FILE__, __LINE__, pud_val(e))
+
+/*
+ * On all 4K setups, remap_4k_pfn() equates to remap_pfn_range() */
+#define remap_4k_pfn(vma, addr, pfn, prot)	\
+	remap_pfn_range((vma), (addr), (pfn), PAGE_SIZE, (prot))
+
+#endif /* _ASM_POWERPC_PGTABLE_PPC64_4K_H */
Index: linux-work/arch/powerpc/include/asm/pgtable-ppc64-64k.h
===================================================================
--- /dev/null	1970-01-01 00:00:00.000000000 +0000
+++ linux-work/arch/powerpc/include/asm/pgtable-ppc64-64k.h	2009-03-10 15:27:41.000000000 +1100
@@ -0,0 +1,42 @@
+#ifndef _ASM_POWERPC_PGTABLE_PPC64_64K_H
+#define _ASM_POWERPC_PGTABLE_PPC64_64K_H
+
+#include <asm-generic/pgtable-nopud.h>
+
+
+#define PTE_INDEX_SIZE  12
+#define PMD_INDEX_SIZE  12
+#define PUD_INDEX_SIZE	0
+#define PGD_INDEX_SIZE  4
+
+#ifndef __ASSEMBLY__
+
+#define PTE_TABLE_SIZE	(sizeof(real_pte_t) << PTE_INDEX_SIZE)
+#define PMD_TABLE_SIZE	(sizeof(pmd_t) << PMD_INDEX_SIZE)
+#define PGD_TABLE_SIZE	(sizeof(pgd_t) << PGD_INDEX_SIZE)
+
+#define PTRS_PER_PTE	(1 << PTE_INDEX_SIZE)
+#define PTRS_PER_PMD	(1 << PMD_INDEX_SIZE)
+#define PTRS_PER_PGD	(1 << PGD_INDEX_SIZE)
+
+/* With 4k base page size, hugepage PTEs go at the PMD level */
+#define MIN_HUGEPTE_SHIFT	PAGE_SHIFT
+
+/* PMD_SHIFT determines what a second-level page table entry can map */
+#define PMD_SHIFT	(PAGE_SHIFT + PTE_INDEX_SIZE)
+#define PMD_SIZE	(1UL << PMD_SHIFT)
+#define PMD_MASK	(~(PMD_SIZE-1))
+
+/* PGDIR_SHIFT determines what a third-level page table entry can map */
+#define PGDIR_SHIFT	(PMD_SHIFT + PMD_INDEX_SIZE)
+#define PGDIR_SIZE	(1UL << PGDIR_SHIFT)
+#define PGDIR_MASK	(~(PGDIR_SIZE-1))
+
+#endif	/* __ASSEMBLY__ */
+
+/* Bits to mask out from a PMD to get to the PTE page */
+#define PMD_MASKED_BITS		0x1ff
+/* Bits to mask out from a PGD/PUD to get to the PMD page */
+#define PUD_MASKED_BITS		0x1ff
+
+#endif /* _ASM_POWERPC_PGTABLE_PPC64_64K_H */
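
The same arithmetic with the 64K values gives a shallower, 3-level tree.
A quick userspace check, assuming only the constants in this hunk plus the
64K base page PAGE_SHIFT of 16:

#include <stdio.h>

#define PAGE_SHIFT	16	/* assumed: 64K base pages */
#define PTE_INDEX_SIZE	12
#define PMD_INDEX_SIZE	12
#define PGD_INDEX_SIZE	4

int main(void)
{
	unsigned long long pmd_shift = PAGE_SHIFT + PTE_INDEX_SIZE;
	unsigned long long pgdir_shift = pmd_shift + PMD_INDEX_SIZE;

	printf("PMD entry maps %llu MB\n", (1ULL << pmd_shift) >> 20);   /* 256 */
	printf("PGD entry maps %llu GB\n", (1ULL << pgdir_shift) >> 30); /* 1024 */
	printf("total VA bits:  %llu\n", pgdir_shift + PGD_INDEX_SIZE);  /* 44 */
	return 0;
}
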
Index: linux-work/arch/powerpc/include/asm/pte-40x.h
===================================================================
--- /dev/null	1970-01-01 00:00:00.000000000 +0000
+++ linux-work/arch/powerpc/include/asm/pte-40x.h	2009-03-10 15:28:02.000000000 +1100
@@ -0,0 +1,64 @@
+#ifndef _ASM_POWERPC_PTE_40x_H
+#define _ASM_POWERPC_PTE_40x_H
+#ifdef __KERNEL__
+
+/*
+ * At present, all PowerPC 400-class processors share a similar TLB
+ * architecture. The instruction and data sides share a unified,
+ * 64-entry, fully-associative TLB which is maintained totally under
+ * software control. In addition, the instruction side has a
+ * hardware-managed, 4-entry, fully-associative TLB which serves as a
+ * first level to the shared TLB. These two TLBs are known as the UTLB
+ * and ITLB, respectively (see "mmu.h" for definitions).
+ *
+ * There are several potential gotchas here.  The 40x hardware TLBLO
+ * field looks like this:
+ *
+ * 0  1  2  3  4  ... 18 19 20 21 22 23 24 25 26 27 28 29 30 31
+ * RPN.....................  0  0 EX WR ZSEL.......  W  I  M  G
+ *
+ * Where possible we make the Linux PTE bits match up with this:
+ *
+ * - Bits 20 and 21 must be cleared, because we use 4k pages (40x can
+ *   support down to 1k pages); this is done in the TLBMiss exception
+ *   handler.
+ * - We use only zones 0 (for kernel pages) and 1 (for user pages)
+ *   of the 16 available.  Bits 24-26 of the TLB are cleared in the TLB
+ *   miss handler.  Bit 27 is PAGE_USER, thus selecting the correct
+ *   zone.
+ * - PRESENT *must* be in the bottom two bits because swap cache
+ *   entries use the top 30 bits.  Because 40x doesn't support SMP
+ *   anyway, M is irrelevant so we borrow it for PAGE_PRESENT.  Bit 30
+ *   is cleared in the TLB miss handler before the TLB entry is loaded.
+ * - All other bits of the PTE are loaded into TLBLO without
+ *   modification, leaving us only bits 20, 21, 24, 25, 26, and 30 for
+ *   software PTE bits.  We actually use bits 21, 24, 25, and 30
+ *   respectively for the software bits ACCESSED, DIRTY, RW, and
+ *   PRESENT.
+ */
+
+#define	_PAGE_GUARDED	0x001	/* G: page is guarded from prefetch */
+#define _PAGE_FILE	0x001	/* when !present: nonlinear file mapping */
+#define _PAGE_PRESENT	0x002	/* software: PTE contains a translation */
+#define	_PAGE_NO_CACHE	0x004	/* I: caching is inhibited */
+#define	_PAGE_WRITETHRU	0x008	/* W: caching is write-through */
+#define	_PAGE_USER	0x010	/* matches one of the zone permission bits */
+#define	_PAGE_RW	0x040	/* software: Writes permitted */
+#define	_PAGE_DIRTY	0x080	/* software: dirty page */
+#define _PAGE_HWWRITE	0x100	/* hardware: Dirty & RW, set in exception */
+#define _PAGE_HWEXEC	0x200	/* hardware: EX permission */
+#define _PAGE_ACCESSED	0x400	/* software: R: page referenced */
+
+#define _PMD_PRESENT	0x400	/* PMD points to page of PTEs */
+#define _PMD_BAD	0x802
+#define _PMD_SIZE	0x0e0	/* size field, != 0 for large-page PMD entry */
+#define _PMD_SIZE_4M	0x0c0
+#define _PMD_SIZE_16M	0x0e0
+
+#define PMD_PAGE_SIZE(pmdval)	(1024 << (((pmdval) & _PMD_SIZE) >> 4))
+
+/* Until my rework is finished, 40x still needs atomic PTE updates */
+#define PTE_ATOMIC_UPDATES	1
+
+#endif /* __KERNEL__ */
+#endif /*  _ASM_POWERPC_PTE_40x_H */
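
The PMD_PAGE_SIZE() encoding above is compact enough to be worth checking
by hand. A standalone userspace sketch, using only constants from this
file; it is illustrative, not kernel code:

#include <stdio.h>

#define _PMD_SIZE	0x0e0	/* size field, != 0 for large-page PMD entry */
#define _PMD_SIZE_4M	0x0c0
#define _PMD_SIZE_16M	0x0e0

#define PMD_PAGE_SIZE(pmdval)	(1024 << (((pmdval) & _PMD_SIZE) >> 4))

int main(void)
{
	/* 0x0c0 >> 4 == 12, so 1024 << 12 == 4MB */
	printf("4M encoding  -> %d bytes\n", PMD_PAGE_SIZE(_PMD_SIZE_4M));
	/* 0x0e0 >> 4 == 14, so 1024 << 14 == 16MB */
	printf("16M encoding -> %d bytes\n", PMD_PAGE_SIZE(_PMD_SIZE_16M));
	return 0;
}
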
Index: linux-work/arch/powerpc/include/asm/pte-44x.h
===================================================================
--- /dev/null	1970-01-01 00:00:00.000000000 +0000
+++ linux-work/arch/powerpc/include/asm/pte-44x.h	2009-03-10 15:28:16.000000000 +1100
@@ -0,0 +1,102 @@
+#ifndef _ASM_POWERPC_PTE_44x_H
+#define _ASM_POWERPC_PTE_44x_H
+#ifdef __KERNEL__
+
+/*
+ * Definitions for PPC440
+ *
+ * Because of the 3 word TLB entries to support 36-bit addressing,
+ * the attributes are difficult to map in such a fashion that they
+ * are easily loaded during exception processing.  I decided to
+ * organize the entry so the ERPN is the only portion in the
+ * upper word of the PTE and the attribute bits below are packed
+ * in as sensibly as they can be in the area below a 4KB page size
+ * oriented RPN.  This at least makes it easy to load the RPN and
+ * ERPN fields in the TLB. -Matt
+ *
+ * This isn't entirely true anymore; at least some bits are now
+ * easier to move into the TLB from the PTE. -BenH.
+ *
+ * Note that these bits preclude future use of a page size
+ * less than 4KB.
+ *
+ *
+ * The PPC 440 core has the following TLB attribute fields:
+ *
+ *   TLB1:
+ *   0  1  2  3  4  ... 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31
+ *   RPN.................................  -  -  -  -  -  - ERPN.......
+ *
+ *   TLB2:
+ *   0  1  2  3  4  ... 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31
+ *   -  -  -  -  -    - U0 U1 U2 U3 W  I  M  G  E   - UX UW UR SX SW SR
+ *
+ * Newer 440 cores (440x6 as used on AMCC 460EX/460GT) have additional
+ * TLB2 storage attribute fields. Those are:
+ *
+ *   TLB2:
+ *   0...10    11   12   13   14   15   16...31
+ *   no change WL1  IL1I IL1D IL2I IL2D no change
+ *
+ * There are some constraints and options in deciding how to map
+ * software bits into the TLB entry.
+ *
+ *   - PRESENT *must* be in the bottom three bits because swap cache
+ *     entries use the top 29 bits for TLB2.
+ *
+ *   - FILE *must* be in the bottom three bits because swap cache
+ *     entries use the top 29 bits for TLB2.
+ *
+ *   - The CACHE COHERENT bit (M) has no effect on original PPC440 cores,
+ *     because they don't support SMP. However, some later 460 variants
+ *     have -some- form of SMP support and so I keep the bit there for
+ *     future use.
+ *
+ * With the PPC 44x Linux implementation, bits 0-11 (LSBs) of the PTE are used
+ * for memory protection related functions (see PTE structure in
+ * include/asm-ppc/mmu.h).  The _PAGE_XXX definitions in this file map to the
+ * above bits.  Note that the bit values are CPU specific, not architecture
+ * specific.
+ *
+ * The kernel PTE entry can hold an arch-dependent swp_entry structure under
+ * certain situations. In other words, in such situations some portion of
+ * the PTE bits is used as a swp_entry. In the PPC implementation, bits
+ * 3-24 (LSBs) are shared with the swp_entry, while the three least
+ * significant bits still hold protection values. That means those three
+ * protection bits must keep the same meaning in both the PTE and the
+ * swap entry encodings.
+ *
+ * There are three protection bits available for SWAP entry:
+ *	_PAGE_PRESENT
+ *	_PAGE_FILE
+ *	_PAGE_HASHPTE (if the HW has it)
+ *
+ * So those three bits must live in the three LSBs of the PTE.
+ *
+ */
+
+#define _PAGE_PRESENT	0x00000001		/* S: PTE valid */
+#define _PAGE_RW	0x00000002		/* S: Write permission */
+#define _PAGE_FILE	0x00000004		/* S: nonlinear file mapping */
+#define _PAGE_HWEXEC	0x00000004		/* H: Execute permission */
+#define _PAGE_ACCESSED	0x00000008		/* S: Page referenced */
+#define _PAGE_DIRTY	0x00000010		/* S: Page dirty */
+#define _PAGE_SPECIAL	0x00000020		/* S: Special page */
+#define _PAGE_USER	0x00000040		/* S: User page */
+#define _PAGE_ENDIAN	0x00000080		/* H: E bit */
+#define _PAGE_GUARDED	0x00000100		/* H: G bit */
+#define _PAGE_COHERENT	0x00000200		/* H: M bit */
+#define _PAGE_NO_CACHE	0x00000400		/* H: I bit */
+#define _PAGE_WRITETHRU	0x00000800		/* H: W bit */
+
+/* TODO: Add large page lowmem mapping support */
+#define _PMD_PRESENT	0
+#define _PMD_PRESENT_MASK (PAGE_MASK)
+#define _PMD_BAD	(~PAGE_MASK)
+
+/* ERPN in a PTE never gets cleared, ignore it */
+#define _PTE_NONE_MASK	0xffffffff00000000ULL
+
+
+#endif /* __KERNEL__ */
+#endif /*  _ASM_POWERPC_PTE_44x_H */
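
To make the "bottom three bits" constraint concrete, here is a userspace
sketch of a non-present swap PTE coexisting with the low protection bits.
The field layout (payload shifted up by 3) is purely illustrative and is
not the kernel's actual swp_entry encoding:

#include <stdio.h>

#define _PAGE_PRESENT	0x00000001
#define _PAGE_FILE	0x00000004

/* Illustrative only: stash a swap offset above the three reserved LSBs */
static unsigned long mk_swap_pte(unsigned long offset)
{
	return offset << 3;	/* low 3 bits stay free for the flags */
}

static unsigned long swap_pte_offset(unsigned long pte)
{
	return pte >> 3;
}

int main(void)
{
	unsigned long pte = mk_swap_pte(0x12345);

	/* a swapped-out PTE must not look present or file-mapped */
	printf("present? %d  file? %d\n",
	       !!(pte & _PAGE_PRESENT), !!(pte & _PAGE_FILE));
	printf("offset = %#lx\n", swap_pte_offset(pte));
	return 0;
}
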
Index: linux-work/arch/powerpc/include/asm/pte-8xx.h
===================================================================
--- /dev/null	1970-01-01 00:00:00.000000000 +0000
+++ linux-work/arch/powerpc/include/asm/pte-8xx.h	2009-03-10 15:28:29.000000000 +1100
@@ -0,0 +1,64 @@
+#ifndef _ASM_POWERPC_PTE_8xx_H
+#define _ASM_POWERPC_PTE_8xx_H
+#ifdef __KERNEL__
+
+/*
+ * The PowerPC MPC8xx uses a TLB with hardware-assisted, software tablewalk.
+ * We also use the two-level tables, but we can put the real bits needed
+ * for the TLB and tablewalk in them.  These definitions require Mx_CTR.PPM = 0,
+ * Mx_CTR.PPCS = 0, and MD_CTR.TWAM = 1.  The level 2 descriptor has
+ * additional page protection (when Mx_CTR.PPCS = 1) that allows a TLB hit
+ * based upon user/super access.  The TLB does not have accessed or write
+ * protect bits.  We assume that if the TLB gets loaded with an entry it is
+ * accessed, and we overload the changed bit for write protect.  We use
+ * two bits in the software pte that are supposed to be set to zero in
+ * the TLB entry (24 and 25) for these indicators.  Although the level 1
+ * descriptor contains the guarded and writethrough/copyback bits, we can
+ * set these at the page level since they get copied from the Mx_TWC
+ * register when the TLB entry is loaded.  We will use bit 27 for guard, since
+ * that is where it exists in the MD_TWC, and bit 26 for writethrough.
+ * These will get masked from the level 2 descriptor at TLB load time, and
+ * copied to the MD_TWC before it gets loaded.
+ * Large page sizes added.  We currently support two sizes, 4K and 8M.
+ * This also allows a TLB handler optimization because we can directly
+ * load the PMD into MD_TWC.  The 8M pages are only used for kernel
+ * mapping of well known areas.  The PMD (PGD) entries contain control
+ * flags in addition to the address, so care must be taken that the
+ * software no longer assumes these are only pointers.
+ */
+
+/* Definitions for 8xx embedded chips. */
+#define _PAGE_PRESENT	0x0001	/* Page is valid */
+#define _PAGE_FILE	0x0002	/* when !present: nonlinear file mapping */
+#define _PAGE_NO_CACHE	0x0002	/* I: cache inhibit */
+#define _PAGE_SHARED	0x0004	/* No ASID (context) compare */
+
+/* These five software bits must be masked out when the entry is loaded
+ * into the TLB.
+ */
+#define _PAGE_EXEC	0x0008	/* software: i-cache coherency required */
+#define _PAGE_GUARDED	0x0010	/* software: guarded access */
+#define _PAGE_DIRTY	0x0020	/* software: page changed */
+#define _PAGE_RW	0x0040	/* software: user write access allowed */
+#define _PAGE_ACCESSED	0x0080	/* software: page referenced */
+
+/* Setting any bits in the nibble with the following two controls will
+ * require a TLB exception handler change.  It is assumed unused bits
+ * are always zero.
+ */
+#define _PAGE_HWWRITE	0x0100	/* h/w write enable: never set in Linux PTE */
+#define _PAGE_USER	0x0800	/* One of the PP bits, the other is USER&~RW */
+
+#define _PMD_PRESENT	0x0001
+#define _PMD_BAD	0x0ff0
+#define _PMD_PAGE_MASK	0x000c
+#define _PMD_PAGE_8M	0x000c
+
+#define _PTE_NONE_MASK _PAGE_ACCESSED
+
+/* Until my rework is finished, 8xx still needs atomic PTE updates */
+#define PTE_ATOMIC_UPDATES	1
+
+
+#endif /* __KERNEL__ */
+#endif /*  _ASM_POWERPC_PTE_8xx_H */
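
As the comment above says, the five software bits have to be stripped
before the PTE value reaches the hardware (the real masking happens in the
TLB exception handler, in assembly). A hedged C sketch of just that
masking step, using the definitions from this file; _PAGE_SW_BITS is a
made-up helper name for the illustration:

#include <stdio.h>

#define _PAGE_PRESENT	0x0001
#define _PAGE_EXEC	0x0008
#define _PAGE_GUARDED	0x0010
#define _PAGE_DIRTY	0x0020
#define _PAGE_RW	0x0040
#define _PAGE_ACCESSED	0x0080

/* the five software bits that must not reach the TLB */
#define _PAGE_SW_BITS	(_PAGE_EXEC | _PAGE_GUARDED | _PAGE_DIRTY | \
			 _PAGE_RW | _PAGE_ACCESSED)

int main(void)
{
	unsigned int pte = _PAGE_PRESENT | _PAGE_RW | _PAGE_ACCESSED;

	/* what would actually be handed to the TLB entry */
	printf("hw value: %#x\n", pte & ~_PAGE_SW_BITS);	/* 0x1 */
	return 0;
}
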
Index: linux-work/arch/powerpc/include/asm/pte-fsl-booke.h
===================================================================
--- /dev/null	1970-01-01 00:00:00.000000000 +0000
+++ linux-work/arch/powerpc/include/asm/pte-fsl-booke.h	2009-03-10 15:28:38.000000000 +1100
@@ -0,0 +1,46 @@
+#ifndef _ASM_POWERPC_PTE_FSL_BOOKE_H
+#define _ASM_POWERPC_PTE_FSL_BOOKE_H
+#ifdef __KERNEL__
+
+/* PTE bit definitions for Freescale BookE SW loaded TLB MMU based
+ * processors
+ *
+ *   MMU Assist Register 3:
+ *
+ *   32 33 34 35 36  ... 50 51 52 53 54 55 56 57 58 59 60 61 62 63
+ *   RPN......................  0  0 U0 U1 U2 U3 UX SX UW SW UR SR
+ *
+ *   - PRESENT *must* be in the bottom three bits because swap cache
+ *     entries use the top 29 bits.
+ *
+ *   - FILE *must* be in the bottom three bits because swap cache
+ *     entries use the top 29 bits.
+ */
+
+/* Definitions for FSL Book-E Cores */
+#define _PAGE_PRESENT	0x00001	/* S: PTE contains a translation */
+#define _PAGE_USER	0x00002	/* S: User page (maps to UR) */
+#define _PAGE_FILE	0x00002	/* S: when !present: nonlinear file mapping */
+#define _PAGE_RW	0x00004	/* S: Write permission (SW) */
+#define _PAGE_DIRTY	0x00008	/* S: Page dirty */
+#define _PAGE_HWEXEC	0x00010	/* H: SX permission */
+#define _PAGE_ACCESSED	0x00020	/* S: Page referenced */
+
+#define _PAGE_ENDIAN	0x00040	/* H: E bit */
+#define _PAGE_GUARDED	0x00080	/* H: G bit */
+#define _PAGE_COHERENT	0x00100	/* H: M bit */
+#define _PAGE_NO_CACHE	0x00200	/* H: I bit */
+#define _PAGE_WRITETHRU	0x00400	/* H: W bit */
+#define _PAGE_SPECIAL	0x00800 /* S: Special page */
+
+#ifdef CONFIG_PTE_64BIT
+/* ERPN in a PTE never gets cleared, ignore it */
+#define _PTE_NONE_MASK	0xffffffffffff0000ULL
+#endif
+
+#define _PMD_PRESENT	0
+#define _PMD_PRESENT_MASK (PAGE_MASK)
+#define _PMD_BAD	(~PAGE_MASK)
+
+#endif /* __KERNEL__ */
+#endif /*  _ASM_POWERPC_PTE_FSL_BOOKE_H */
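
A small illustration of what _PTE_NONE_MASK buys here: with 64-bit PTEs
the ERPN half is never cleared, so "empty" means all bits zero outside
that mask. A sketch of the pte_none() test from pgtable-ppc32.h, reduced
to plain integers:

#include <stdio.h>

typedef unsigned long long pte_basic_t;	/* the CONFIG_PTE_64BIT case */

#define _PTE_NONE_MASK	0xffffffffffff0000ULL
#define _PAGE_PRESENT	0x00001

static int pte_none(pte_basic_t pte)
{
	return (pte & ~_PTE_NONE_MASK) == 0;
}

int main(void)
{
	/* stale ERPN bits left in the top half don't make the PTE live */
	pte_basic_t pte = 1ULL << 48;

	printf("pte_none: %d\n", pte_none(pte));		  /* 1 */
	printf("pte_none: %d\n", pte_none(pte | _PAGE_PRESENT)); /* 0 */
	return 0;
}
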
Index: linux-work/arch/powerpc/include/asm/pte-hash32.h
===================================================================
--- /dev/null	1970-01-01 00:00:00.000000000 +0000
+++ linux-work/arch/powerpc/include/asm/pte-hash32.h	2009-03-10 15:28:47.000000000 +1100
@@ -0,0 +1,49 @@
+#ifndef _ASM_POWERPC_PTE_HASH32_H
+#define _ASM_POWERPC_PTE_HASH32_H
+#ifdef __KERNEL__
+
+/*
+ * The "classic" 32-bit implementation of the PowerPC MMU uses a hash
+ * table containing PTEs, together with a set of 16 segment registers,
+ * to define the virtual to physical address mapping.
+ *
+ * We use the hash table as an extended TLB, i.e. a cache of currently
+ * active mappings.  We maintain a two-level page table tree, much
+ * like that used by the i386, for the sake of the Linux memory
+ * management code.  Low-level assembler code in hash_low_32.S
+ * (procedure hash_page) is responsible for extracting ptes from the
+ * tree and putting them into the hash table when necessary, and
+ * updating the accessed and modified bits in the page table tree.
+ */
+
+#define _PAGE_PRESENT	0x001	/* software: pte contains a translation */
+#define _PAGE_HASHPTE	0x002	/* hash_page has made an HPTE for this pte */
+#define _PAGE_FILE	0x004	/* when !present: nonlinear file mapping */
+#define _PAGE_USER	0x004	/* usermode access allowed */
+#define _PAGE_GUARDED	0x008	/* G: prohibit speculative access */
+#define _PAGE_COHERENT	0x010	/* M: enforce memory coherence (SMP systems) */
+#define _PAGE_NO_CACHE	0x020	/* I: cache inhibit */
+#define _PAGE_WRITETHRU	0x040	/* W: cache write-through */
+#define _PAGE_DIRTY	0x080	/* C: page changed */
+#define _PAGE_ACCESSED	0x100	/* R: page referenced */
+#define _PAGE_EXEC	0x200	/* software: i-cache coherency required */
+#define _PAGE_RW	0x400	/* software: user write access allowed */
+#define _PAGE_SPECIAL	0x800	/* software: Special page */
+
+#ifdef CONFIG_PTE_64BIT
+/* We never clear the high word of the pte */
+#define _PTE_NONE_MASK	(0xffffffff00000000ULL | _PAGE_HASHPTE)
+#else
+#define _PTE_NONE_MASK	_PAGE_HASHPTE
+#endif
+
+#define _PMD_PRESENT	0
+#define _PMD_PRESENT_MASK (PAGE_MASK)
+#define _PMD_BAD	(~PAGE_MASK)
+
+/* Hash table based platforms need atomic updates of the linux PTE */
+#define PTE_ATOMIC_UPDATES	1
+
+
+#endif /* __KERNEL__ */
+#endif /*  _ASM_POWERPC_PTE_HASH32_H */
Index: linux-work/arch/powerpc/include/asm/pte-hash64-4k.h
===================================================================
--- /dev/null	1970-01-01 00:00:00.000000000 +0000
+++ linux-work/arch/powerpc/include/asm/pte-hash64-4k.h	2009-03-10 15:26:49.000000000 +1100
@@ -0,0 +1,20 @@
+/* To be included by pte-hash64.h only */
+
+/* PTE bits */
+#define _PAGE_HASHPTE	0x0400 /* software: pte has an associated HPTE */
+#define _PAGE_SECONDARY 0x8000 /* software: HPTE is in secondary group */
+#define _PAGE_GROUP_IX  0x7000 /* software: HPTE index within group */
+#define _PAGE_F_SECOND  _PAGE_SECONDARY
+#define _PAGE_F_GIX     _PAGE_GROUP_IX
+#define _PAGE_SPECIAL	0x10000 /* software: special page */
+
+/* There is no 4K PFN hack on 4K pages */
+#define _PAGE_4K_PFN	0
+
+/* PTE flags to conserve for HPTE identification */
+#define _PAGE_HPTEFLAGS (_PAGE_BUSY | _PAGE_HASHPTE | \
+			 _PAGE_SECONDARY | _PAGE_GROUP_IX)
+
+/* shift to put page number into pte */
+#define PTE_RPN_SHIFT	(17)
+
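
The SECONDARY and GROUP_IX bits record which slot of which hash group
holds the HPTE, so the fault paths can find it again without re-hashing.
Extracting them is a plain shift-and-mask; a userspace sketch, assuming
nothing beyond the two definitions above:

#include <stdio.h>

#define _PAGE_SECONDARY	0x8000	/* software: HPTE is in secondary group */
#define _PAGE_GROUP_IX	0x7000	/* software: HPTE index within group */

int main(void)
{
	unsigned long pte = _PAGE_SECONDARY | 0x5000;	/* slot 5, 2nd hash */

	printf("secondary hash: %lu\n", (pte & _PAGE_SECONDARY) >> 15);
	printf("slot in group:  %lu\n", (pte & _PAGE_GROUP_IX) >> 12);
	return 0;
}
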
Index: linux-work/arch/powerpc/include/asm/pte-hash64-64k.h
===================================================================
--- /dev/null	1970-01-01 00:00:00.000000000 +0000
+++ linux-work/arch/powerpc/include/asm/pte-hash64-64k.h	2009-03-03 16:20:45.000000000 +1100
@@ -0,0 +1,115 @@
+/* To be included by pte-hash64.h only */
+
+/* Additional PTE bits (don't change without checking asm in hash_low.S) */
+#define _PAGE_SPECIAL	0x00000400 /* software: special page */
+#define _PAGE_HPTE_SUB	0x0ffff000 /* combo only: sub pages HPTE bits */
+#define _PAGE_HPTE_SUB0	0x08000000 /* combo only: first sub page */
+#define _PAGE_COMBO	0x10000000 /* this is a combo 4k page */
+#define _PAGE_4K_PFN	0x20000000 /* PFN is for a single 4k page */
+
+/* For 64K pages, we don't have a separate _PAGE_HASHPTE bit. Instead,
+ * we set that to be the whole sub-bits mask. The C code will only
+ * test this, so a multi-bit mask will work. For combo pages, this
+ * is equivalent because, effectively, the old _PAGE_HASHPTE was an
+ * OR of all the sub bits. For real 64k pages, we now have the assembly
+ * set _PAGE_HPTE_SUB0 in addition to setting the HIDX bits which overlap
+ * that mask. This is fine as long as the HIDX bits are never set on
+ * a PTE that isn't hashed, which is the case today.
+ *
+ * One small nit is the huge page C code: since it does the hashing
+ * in C, we need to tell it which bit to use.
+ */
+#define _PAGE_HASHPTE	_PAGE_HPTE_SUB
+
+/* Note that the full page bits must be in the same location as for
+ * normal 4k pages, as the same assembly will be used to insert 64K
+ * pages whether or not the kernel has CONFIG_PPC_64K_PAGES
+ */
+#define _PAGE_F_SECOND  0x00008000 /* full page: hidx bits */
+#define _PAGE_F_GIX     0x00007000 /* full page: hidx bits */
+
+/* PTE flags to conserve for HPTE identification */
+#define _PAGE_HPTEFLAGS (_PAGE_BUSY | _PAGE_HASHPTE | _PAGE_COMBO)
+
+/* Shift to put page number into pte.
+ *
+ * That gives us a max RPN of 34 bits, which means a max of 50 bits
+ * of addressable physical space, or 46 bits for the special 4k PFNs.
+ */
+#define PTE_RPN_SHIFT	(30)
+
+#ifndef __ASSEMBLY__
+
+/*
+ * With 64K pages on the hash table, we have a special PTE format that
+ * uses a second "half" of the page table to encode sub-page information
+ * in order to deal with 64K made of 4K HW pages. Thus we override the
+ * generic accessors and iterators here
+ */
+#define __real_pte(e,p) 	((real_pte_t) { \
+	(e), pte_val(*((p) + PTRS_PER_PTE)) })
+#define __rpte_to_hidx(r,index)	((pte_val((r).pte) & _PAGE_COMBO) ? \
+        (((r).hidx >> ((index)<<2)) & 0xf) : ((pte_val((r).pte) >> 12) & 0xf))
+#define __rpte_to_pte(r)	((r).pte)
+#define __rpte_sub_valid(rpte, index) \
+	(pte_val(rpte.pte) & (_PAGE_HPTE_SUB0 >> (index)))
+
+/* Trick: we set __end to va + 64k, which happens to work for
+ * a 16M page as well, since we want only one iteration
+ */
+#define pte_iterate_hashed_subpages(rpte, psize, va, index, shift)	    \
+        do {                                                                \
+                unsigned long __end = va + PAGE_SIZE;                       \
+                unsigned __split = (psize == MMU_PAGE_4K ||                 \
+				    psize == MMU_PAGE_64K_AP);              \
+                shift = mmu_psize_defs[psize].shift;                        \
+		for (index = 0; va < __end; index++, va += (1L << shift)) { \
+		        if (!__split || __rpte_sub_valid(rpte, index)) do { \
+
+#define pte_iterate_hashed_end() } while(0); } } while(0)
+
+#define pte_pagesize_index(mm, addr, pte)	\
+	(((pte) & _PAGE_COMBO)? MMU_PAGE_4K: MMU_PAGE_64K)
+
+#define remap_4k_pfn(vma, addr, pfn, prot)				\
+	remap_pfn_range((vma), (addr), (pfn), PAGE_SIZE,		\
+			__pgprot(pgprot_val((prot)) | _PAGE_4K_PFN))
+
+
+#ifdef CONFIG_PPC_SUBPAGE_PROT
+/*
+ * For the sub-page protection option, we extend the PGD with one of
+ * these.  Basically we have a 3-level tree, with the top level being
+ * the protptrs array.  To optimize speed and memory consumption when
+ * only addresses < 4GB are being protected, pointers to the first
+ * four pages of sub-page protection words are stored in the low_prot
+ * array.
+ * Each page of sub-page protection words protects 1GB (4 bytes
+ * protects 64k).  For the 3-level tree, each page of pointers then
+ * protects 8TB.
+ */
+struct subpage_prot_table {
+	unsigned long maxaddr;	/* only addresses < this are protected */
+	unsigned int **protptrs[2];
+	unsigned int *low_prot[4];
+};
+
+#undef PGD_TABLE_SIZE
+#define PGD_TABLE_SIZE		((sizeof(pgd_t) << PGD_INDEX_SIZE) + \
+				 sizeof(struct subpage_prot_table))
+
+#define SBP_L1_BITS		(PAGE_SHIFT - 2)
+#define SBP_L2_BITS		(PAGE_SHIFT - 3)
+#define SBP_L1_COUNT		(1 << SBP_L1_BITS)
+#define SBP_L2_COUNT		(1 << SBP_L2_BITS)
+#define SBP_L2_SHIFT		(PAGE_SHIFT + SBP_L1_BITS)
+#define SBP_L3_SHIFT		(SBP_L2_SHIFT + SBP_L2_BITS)
+
+extern void subpage_prot_free(pgd_t *pgd);
+
+static inline struct subpage_prot_table *pgd_subpage_prot(pgd_t *pgd)
+{
+	return (struct subpage_prot_table *)(pgd + PTRS_PER_PGD);
+}
+#endif /* CONFIG_PPC_SUBPAGE_PROT */
+#endif	/* __ASSEMBLY__ */
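
The coverage numbers quoted in the sub-page protection comment follow
directly from the SBP_* definitions with 64K pages (PAGE_SHIFT of 16,
the only configuration this file serves). A quick standalone check:

#include <stdio.h>

#define PAGE_SHIFT	16			/* 64K pages */
#define SBP_L1_BITS	(PAGE_SHIFT - 2)	/* 4-byte words per page */
#define SBP_L2_BITS	(PAGE_SHIFT - 3)	/* 8-byte pointers per page */

int main(void)
{
	/* each 4-byte word protects one 64K page */
	unsigned long long l1 = (1ULL << SBP_L1_BITS) << PAGE_SHIFT;
	/* each pointer covers one full page of protection words */
	unsigned long long l2 = (1ULL << SBP_L2_BITS) * l1;

	printf("page of words protects:    %llu GB\n", l1 >> 30); /* 1 */
	printf("page of pointers protects: %llu TB\n", l2 >> 40); /* 8 */
	return 0;
}
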
Index: linux-work/arch/powerpc/include/asm/pte-hash64.h
===================================================================
--- /dev/null	1970-01-01 00:00:00.000000000 +0000
+++ linux-work/arch/powerpc/include/asm/pte-hash64.h	2009-03-10 15:29:01.000000000 +1100
@@ -0,0 +1,47 @@
+#ifndef _ASM_POWERPC_PTE_HASH64_H
+#define _ASM_POWERPC_PTE_HASH64_H
+#ifdef __KERNEL__
+
+/*
+ * Common bits between 4K and 64K pages in a linux-style PTE.
+ * These match the bits in the (hardware-defined) PowerPC PTE as closely
+ * as possible. Additional bits may be defined in pgtable-hash64-*.h
+ */
+#define _PAGE_PRESENT	0x0001 /* software: pte contains a translation */
+#define _PAGE_USER	0x0002 /* matches one of the PP bits */
+#define _PAGE_FILE	0x0002 /* (!present only) software: pte holds file offset */
+#define _PAGE_EXEC	0x0004 /* No execute on POWER4 and newer (we invert) */
+#define _PAGE_GUARDED	0x0008
+#define _PAGE_COHERENT	0x0010 /* M: enforce memory coherence (SMP systems) */
+#define _PAGE_NO_CACHE	0x0020 /* I: cache inhibit */
+#define _PAGE_WRITETHRU	0x0040 /* W: cache write-through */
+#define _PAGE_DIRTY	0x0080 /* C: page changed */
+#define _PAGE_ACCESSED	0x0100 /* R: page referenced */
+#define _PAGE_RW	0x0200 /* software: user write access allowed */
+#define _PAGE_BUSY	0x0800 /* software: PTE & hash are busy */
+
+/* Strong Access Ordering */
+#define _PAGE_SAO	(_PAGE_WRITETHRU | _PAGE_NO_CACHE | _PAGE_COHERENT)
+
+#define _PAGE_BASE	(_PAGE_PRESENT | _PAGE_ACCESSED | _PAGE_COHERENT)
+
+#define _PAGE_WRENABLE	(_PAGE_RW | _PAGE_DIRTY)
+
+/* PTEIDX nibble */
+#define _PTEIDX_SECONDARY	0x8
+#define _PTEIDX_GROUP_IX	0x7
+
+#define PAGE_PROT_BITS	(_PAGE_GUARDED | _PAGE_COHERENT | \
+			 _PAGE_NO_CACHE | _PAGE_WRITETHRU |		\
+			 _PAGE_4K_PFN | _PAGE_RW | _PAGE_USER |		\
+			 _PAGE_ACCESSED | _PAGE_DIRTY | _PAGE_EXEC)
+
+
+#ifdef CONFIG_PPC_64K_PAGES
+#include <asm/pte-hash64-64k.h>
+#else
+#include <asm/pte-hash64-4k.h>
+#endif
+
+#endif /* __KERNEL__ */
+#endif /*  _ASM_POWERPC_PTE_HASH64_H */

^ permalink raw reply	[flat|nested] 14+ messages in thread

* [PATCH 3/9] powerpc/mm: Unify PTE_RPN_SHIFT and _PAGE_CHG_MASK definitions
  2009-03-11  3:53 [PATCH 0/9] powerpc: MMU cleanup and propare for new Book3-E Benjamin Herrenschmidt
  2009-03-11  3:53 ` [PATCH 1/9] powerpc/kconfig: Kill PPC_MULTIPLATFORM Benjamin Herrenschmidt
  2009-03-11  3:53 ` [PATCH 2/9] powerpc/mm: Split the various pgtable-* headers based on MMU type (v3) Benjamin Herrenschmidt
@ 2009-03-11  3:53 ` Benjamin Herrenschmidt
  2009-03-11  3:53 ` [PATCH 4/9] powerpc/mm: Tweak PTE bit combination definitions (v2) Benjamin Herrenschmidt
                   ` (5 subsequent siblings)
  8 siblings, 0 replies; 14+ messages in thread
From: Benjamin Herrenschmidt @ 2009-03-11  3:53 UTC (permalink / raw)
  To: linuxppc-dev

This updates the 32-bit headers to use the same definitions for the RPN
shift inside the PTE as 64-bit, and thus makes _PAGE_CHG_MASK identical
as well.

This does introduce a runtime-visible difference: _PAGE_HASHPTE will now
be part of _PAGE_CHG_MASK and thus preserved. However, this should have
no practical effect, as it should have been preserved in the first place;
we only got away with not having it there because our PTE access
functions preserve it anyway.
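
As a quick sanity check of the unified definitions, the pfn <-> pte
conversion round-trips even when the RPN sits 8 bits above the page
offset, as it does on FSL BookE with 64-bit PTEs. A userspace sketch
modeled on the pte_pfn()/pfn_pte() macros below (simplified, no pgprot
type):

#include <stdio.h>

typedef unsigned long long pte_basic_t;	/* CONFIG_PTE_64BIT */

#define PAGE_SHIFT	12
#define PTE_RPN_SHIFT	(PAGE_SHIFT + 8)	/* FSL BookE 64-bit case */

#define pfn_pte(pfn, prot)	(((pte_basic_t)(pfn) << PTE_RPN_SHIFT) | (prot))
#define pte_pfn(x)		((x) >> PTE_RPN_SHIFT)

int main(void)
{
	pte_basic_t pte = pfn_pte(0x123456UL, 0x3 /* some flag bits */);

	printf("pfn back: %#llx\n", pte_pfn(pte));	/* 0x123456 */
	return 0;
}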

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
---

 arch/powerpc/include/asm/pgtable-ppc32.h |   36 ++++++++++++++++++++-----------
 arch/powerpc/include/asm/pte-fsl-booke.h |    2 +
 2 files changed, 26 insertions(+), 12 deletions(-)

--- linux-work.orig/arch/powerpc/include/asm/pgtable-ppc32.h	2009-03-02 13:41:49.000000000 +1100
+++ linux-work/arch/powerpc/include/asm/pgtable-ppc32.h	2009-03-02 13:41:50.000000000 +1100
@@ -146,9 +146,29 @@ extern int icache_44x_need_flush;
 
 #define _PAGE_HPTEFLAGS _PAGE_HASHPTE
 
-#define _PAGE_CHG_MASK	(PAGE_MASK | _PAGE_ACCESSED | _PAGE_DIRTY | \
-			 _PAGE_SPECIAL)
+/* Location of the PFN in the PTE. Most platforms use the same as PAGE_SHIFT
+ * here (ie, naturally aligned). Platforms that don't simply pre-define the
+ * value, so we don't override it here
+ */
+#ifndef PTE_RPN_SHIFT
+#define PTE_RPN_SHIFT	(PAGE_SHIFT)
+#endif
+
+#ifdef CONFIG_PTE_64BIT
+#define PTE_RPN_MAX	(1ULL << (64 - PTE_RPN_SHIFT))
+#define PTE_RPN_MASK	(~((1ULL<<PTE_RPN_SHIFT)-1))
+#else
+#define PTE_RPN_MAX	(1UL << (32 - PTE_RPN_SHIFT))
+#define PTE_RPN_MASK	(~((1UL<<PTE_RPN_SHIFT)-1))
+#endif
 
+/* _PAGE_CHG_MASK masks the bits that are to be preserved across
+ * pgprot changes
+ */
+#define _PAGE_CHG_MASK	(PTE_RPN_MASK | _PAGE_HPTEFLAGS | _PAGE_DIRTY | \
+                         _PAGE_ACCESSED | _PAGE_SPECIAL)
+
+/* Mask of bits returned by pte_pgprot() */
 #define PAGE_PROT_BITS	(_PAGE_GUARDED | _PAGE_COHERENT | _PAGE_NO_CACHE | \
 			 _PAGE_WRITETHRU | _PAGE_ENDIAN | \
 			 _PAGE_USER | _PAGE_ACCESSED | \
@@ -236,18 +256,10 @@ extern unsigned long bad_call_to_PMD_PAG
  * Conversions between PTE values and page frame numbers.
  */
 
-/* in some case we want to additionaly adjust where the pfn is in the pte to
- * allow room for more flags */
-#if defined(CONFIG_FSL_BOOKE) && defined(CONFIG_PTE_64BIT)
-#define PFN_SHIFT_OFFSET	(PAGE_SHIFT + 8)
-#else
-#define PFN_SHIFT_OFFSET	(PAGE_SHIFT)
-#endif
-
-#define pte_pfn(x)		(pte_val(x) >> PFN_SHIFT_OFFSET)
+#define pte_pfn(x)		(pte_val(x) >> PTE_RPN_SHIFT)
 #define pte_page(x)		pfn_to_page(pte_pfn(x))
 
-#define pfn_pte(pfn, prot)	__pte(((pte_basic_t)(pfn) << PFN_SHIFT_OFFSET) |\
+#define pfn_pte(pfn, prot)	__pte(((pte_basic_t)(pfn) << PTE_RPN_SHIFT) |\
 					pgprot_val(prot))
 #define mk_pte(page, prot)	pfn_pte(page_to_pfn(page), prot)
 #endif /* __ASSEMBLY__ */
Index: linux-work/arch/powerpc/include/asm/pte-fsl-booke.h
===================================================================
--- linux-work.orig/arch/powerpc/include/asm/pte-fsl-booke.h	2009-03-02 13:41:49.000000000 +1100
+++ linux-work/arch/powerpc/include/asm/pte-fsl-booke.h	2009-03-02 13:41:50.000000000 +1100
@@ -36,6 +36,8 @@
 #ifdef CONFIG_PTE_64BIT
 /* ERPN in a PTE never gets cleared, ignore it */
 #define _PTE_NONE_MASK	0xffffffffffff0000ULL
+/* We extend the size of the PTE flags area when using 64-bit PTEs */
+#define PTE_RPN_SHIFT	(PAGE_SHIFT + 8)
 #endif
 
 #define _PMD_PRESENT	0

^ permalink raw reply	[flat|nested] 14+ messages in thread

* [PATCH 4/9] powerpc/mm: Tweak PTE bit combination definitions (v2)
  2009-03-11  3:53 [PATCH 0/9] powerpc: MMU cleanup and propare for new Book3-E Benjamin Herrenschmidt
                   ` (2 preceding siblings ...)
  2009-03-11  3:53 ` [PATCH 3/9] powerpc/mm: Unify PTE_RPN_SHIFT and _PAGE_CHG_MASK definitions Benjamin Herrenschmidt
@ 2009-03-11  3:53 ` Benjamin Herrenschmidt
  2009-03-13  3:44   ` Michael Ellerman
  2009-03-11  3:53 ` [PATCH 5/9] powerpc/mm: Merge various PTE bits and accessors " Benjamin Herrenschmidt
                   ` (4 subsequent siblings)
  8 siblings, 1 reply; 14+ messages in thread
From: Benjamin Herrenschmidt @ 2009-03-11  3:53 UTC (permalink / raw)
  To: linuxppc-dev

This patch tweaks the way some PTE bit combinations are defined, in such a
way that the 32 and 64-bit variants become almost identical. That will
make it easier to bring in a new common pte-* file for the new variant
of the Book3-E support.

The combinations of bits defining access to kernel pages are now clearly
separated from the combinations used by userspace and the core VM. The
resulting generated code should remain identical unless I made a mistake.
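
To see the separation concretely, here is a reduced sketch of how the
kernel-side masks compose after this patch, using the hash64 bit values
from this series. It is illustrative only, not a drop-in replacement for
the headers:

#include <stdio.h>

#define _PAGE_PRESENT	0x0001
#define _PAGE_USER	0x0002
#define _PAGE_EXEC	0x0004
#define _PAGE_COHERENT	0x0010
#define _PAGE_DIRTY	0x0080
#define _PAGE_ACCESSED	0x0100
#define _PAGE_RW	0x0200

#define _PAGE_KERNEL_RW	(_PAGE_RW | _PAGE_DIRTY)
#define _PAGE_BASE	(_PAGE_PRESENT | _PAGE_ACCESSED | _PAGE_COHERENT)

#define PAGE_KERNEL	(_PAGE_BASE | _PAGE_KERNEL_RW)
#define PAGE_KERNEL_X	(_PAGE_BASE | _PAGE_KERNEL_RW | _PAGE_EXEC)
#define PAGE_SHARED	(_PAGE_BASE | _PAGE_USER | _PAGE_RW)

int main(void)
{
	/* kernel mappings never carry the USER bit, so the kernel and
	 * user combinations can no longer be confused */
	printf("PAGE_KERNEL & _PAGE_USER: %d\n", !!(PAGE_KERNEL & _PAGE_USER));
	printf("PAGE_KERNEL_X = %#x, PAGE_SHARED = %#x\n",
	       PAGE_KERNEL_X, PAGE_SHARED);
	return 0;
}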

Note: While at it, I removed a nonsensical statement related to CONFIG_KGDB
in ppc_mmu_32.c which could cause kernel mappings to be user accessible when
that option is enabled. Probably something that had bitrotted.

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
---

v2: Fix mixup with next patch

 arch/powerpc/include/asm/fixmap.h        |    2 -
 arch/powerpc/include/asm/pgtable-ppc32.h |   39 ++++++++++++-------------
 arch/powerpc/include/asm/pgtable-ppc64.h |   44 +++++++++++++++++------------
 arch/powerpc/include/asm/pgtable.h       |    4 ++
 arch/powerpc/include/asm/pte-8xx.h       |    3 +
 arch/powerpc/include/asm/pte-hash32.h    |    1 
 arch/powerpc/include/asm/pte-hash64-4k.h |    3 -
 arch/powerpc/include/asm/pte-hash64.h    |   47 +++++++++++++++++--------------
 arch/powerpc/mm/fsl_booke_mmu.c          |    2 -
 arch/powerpc/mm/pgtable_32.c             |    4 +-
 arch/powerpc/mm/ppc_mmu_32.c             |   10 +-----
 arch/powerpc/sysdev/cpm_common.c         |    2 -
 12 files changed, 86 insertions(+), 75 deletions(-)

--- linux-work.orig/arch/powerpc/include/asm/pgtable-ppc64.h	2009-03-10 15:26:49.000000000 +1100
+++ linux-work/arch/powerpc/include/asm/pgtable-ppc64.h	2009-03-10 15:30:02.000000000 +1100
@@ -81,11 +81,6 @@
  */
 #include <asm/pte-hash64.h>
 
-/* To make some generic powerpc code happy */
-#ifndef _PAGE_HWEXEC
-#define _PAGE_HWEXEC		0
-#endif
-
 /* Some other useful definitions */
 #define PTE_RPN_MAX	(1UL << (64 - PTE_RPN_SHIFT))
 #define PTE_RPN_MASK	(~((1UL<<PTE_RPN_SHIFT)-1))
@@ -96,24 +91,38 @@
 #define _PAGE_CHG_MASK	(PTE_RPN_MASK | _PAGE_HPTEFLAGS | _PAGE_DIRTY | \
                          _PAGE_ACCESSED | _PAGE_SPECIAL)
 
+#define _PAGE_BASE_NC	(_PAGE_PRESENT | _PAGE_ACCESSED | _PAGE_PSIZE)
+#define _PAGE_BASE	(_PAGE_BASE_NC | _PAGE_COHERENT)
 
 
-/* __pgprot defined in arch/powerpc/include/asm/page.h */
-#define PAGE_NONE	__pgprot(_PAGE_PRESENT | _PAGE_ACCESSED)
-
-#define PAGE_SHARED	__pgprot(_PAGE_BASE | _PAGE_RW | _PAGE_USER)
-#define PAGE_SHARED_X	__pgprot(_PAGE_BASE | _PAGE_RW | _PAGE_USER | _PAGE_EXEC)
+/* Permission masks used to generate the __P and __S tables,
+ *
+ * Note: __pgprot is defined in arch/powerpc/include/asm/page.h
+ */
+#define PAGE_NONE	__pgprot(_PAGE_BASE)
+#define PAGE_SHARED	__pgprot(_PAGE_BASE | _PAGE_USER | _PAGE_RW)
+#define PAGE_SHARED_X	__pgprot(_PAGE_BASE | _PAGE_USER | _PAGE_RW | _PAGE_EXEC)
 #define PAGE_COPY	__pgprot(_PAGE_BASE | _PAGE_USER)
 #define PAGE_COPY_X	__pgprot(_PAGE_BASE | _PAGE_USER | _PAGE_EXEC)
 #define PAGE_READONLY	__pgprot(_PAGE_BASE | _PAGE_USER)
 #define PAGE_READONLY_X	__pgprot(_PAGE_BASE | _PAGE_USER | _PAGE_EXEC)
-#define PAGE_KERNEL	__pgprot(_PAGE_BASE | _PAGE_WRENABLE)
-#define PAGE_KERNEL_CI	__pgprot(_PAGE_PRESENT | _PAGE_ACCESSED | \
-			       _PAGE_WRENABLE | _PAGE_NO_CACHE | _PAGE_GUARDED)
-#define PAGE_KERNEL_EXEC __pgprot(_PAGE_BASE | _PAGE_WRENABLE | _PAGE_EXEC)
 
-#define PAGE_AGP	__pgprot(_PAGE_BASE | _PAGE_WRENABLE | _PAGE_NO_CACHE)
-#define HAVE_PAGE_AGP
+/* Permission masks used for kernel mappings */
+#define PAGE_KERNEL	__pgprot(_PAGE_BASE | _PAGE_KERNEL_RW)
+#define PAGE_KERNEL_NC	__pgprot(_PAGE_BASE_NC | _PAGE_KERNEL_RW | \
+				 _PAGE_NO_CACHE)
+#define PAGE_KERNEL_NCG	__pgprot(_PAGE_BASE_NC | _PAGE_KERNEL_RW | \
+				 _PAGE_NO_CACHE | _PAGE_GUARDED)
+#define PAGE_KERNEL_X __pgprot(_PAGE_BASE | _PAGE_KERNEL_RW | _PAGE_EXEC)
+#define PAGE_KERNEL_RO __pgprot(_PAGE_BASE | _PAGE_KERNEL_RO)
+#define PAGE_KERNEL_ROX __pgprot(_PAGE_BASE | _PAGE_KERNEL_RO | _PAGE_EXEC)
+
+/* Protection bits for use by pte_pgprot() */
+#define PAGE_PROT_BITS	(_PAGE_GUARDED | _PAGE_COHERENT | \
+			 _PAGE_NO_CACHE | _PAGE_WRITETHRU |		\
+			 _PAGE_4K_PFN | _PAGE_USER | _PAGE_RW |		\
+			 _PAGE_ACCESSED | _PAGE_DIRTY | _PAGE_EXEC)
+
 
 /* We always have _PAGE_SPECIAL on 64 bit */
 #define __HAVE_ARCH_PTE_SPECIAL
@@ -395,7 +404,8 @@ static inline void pte_clear(struct mm_s
 static inline void __ptep_set_access_flags(pte_t *ptep, pte_t entry)
 {
 	unsigned long bits = pte_val(entry) &
-		(_PAGE_DIRTY | _PAGE_ACCESSED | _PAGE_RW | _PAGE_EXEC);
+		(_PAGE_DIRTY | _PAGE_ACCESSED | _PAGE_RW |
+		 _PAGE_EXEC | _PAGE_HWEXEC);
 	unsigned long old, tmp;
 
 	__asm__ __volatile__(
Index: linux-work/arch/powerpc/include/asm/pte-hash64-4k.h
===================================================================
--- linux-work.orig/arch/powerpc/include/asm/pte-hash64-4k.h	2009-03-10 15:26:49.000000000 +1100
+++ linux-work/arch/powerpc/include/asm/pte-hash64-4k.h	2009-03-10 15:30:02.000000000 +1100
@@ -8,9 +8,6 @@
 #define _PAGE_F_GIX     _PAGE_GROUP_IX
 #define _PAGE_SPECIAL	0x10000 /* software: special page */
 
-/* There is no 4K PFN hack on 4K pages */
-#define _PAGE_4K_PFN	0
-
 /* PTE flags to conserve for HPTE identification */
 #define _PAGE_HPTEFLAGS (_PAGE_BUSY | _PAGE_HASHPTE | \
 			 _PAGE_SECONDARY | _PAGE_GROUP_IX)
Index: linux-work/arch/powerpc/include/asm/pte-hash64.h
===================================================================
--- linux-work.orig/arch/powerpc/include/asm/pte-hash64.h	2009-03-10 15:29:01.000000000 +1100
+++ linux-work/arch/powerpc/include/asm/pte-hash64.h	2009-03-10 15:30:02.000000000 +1100
@@ -6,36 +6,41 @@
  * Common bits between 4K and 64K pages in a linux-style PTE.
  * These match the bits in the (hardware-defined) PowerPC PTE as closely
  * as possible. Additional bits may be defined in pgtable-hash64-*.h
+ *
+ * Note: We only support user read/write permissions. The supervisor always
+ * has full read/write access to pages above PAGE_OFFSET (pages below that
+ * always use the user access permissions).
+ *
+ * We could create a separate kernel read-only mode if we used the 3 PP bit
+ * combinations that newer processors provide, but we currently don't.
  */
-#define _PAGE_PRESENT	0x0001 /* software: pte contains a translation */
-#define _PAGE_USER	0x0002 /* matches one of the PP bits */
-#define _PAGE_FILE	0x0002 /* (!present only) software: pte holds file offset */
-#define _PAGE_EXEC	0x0004 /* No execute on POWER4 and newer (we invert) */
-#define _PAGE_GUARDED	0x0008
-#define _PAGE_COHERENT	0x0010 /* M: enforce memory coherence (SMP systems) */
-#define _PAGE_NO_CACHE	0x0020 /* I: cache inhibit */
-#define _PAGE_WRITETHRU	0x0040 /* W: cache write-through */
-#define _PAGE_DIRTY	0x0080 /* C: page changed */
-#define _PAGE_ACCESSED	0x0100 /* R: page referenced */
-#define _PAGE_RW	0x0200 /* software: user write access allowed */
-#define _PAGE_BUSY	0x0800 /* software: PTE & hash are busy */
+#define _PAGE_PRESENT		0x0001 /* software: pte contains a translation */
+#define _PAGE_USER		0x0002 /* matches one of the PP bits */
+#define _PAGE_FILE		0x0002 /* (!present only) software: pte holds file offset */
+#define _PAGE_EXEC		0x0004 /* No execute on POWER4 and newer (we invert) */
+#define _PAGE_GUARDED		0x0008
+#define _PAGE_COHERENT		0x0010 /* M: enforce memory coherence (SMP systems) */
+#define _PAGE_NO_CACHE		0x0020 /* I: cache inhibit */
+#define _PAGE_WRITETHRU		0x0040 /* W: cache write-through */
+#define _PAGE_DIRTY		0x0080 /* C: page changed */
+#define _PAGE_ACCESSED		0x0100 /* R: page referenced */
+#define _PAGE_RW		0x0200 /* software: user write access allowed */
+#define _PAGE_BUSY		0x0800 /* software: PTE & hash are busy */
+
+/* No separate kernel read-only */
+#define _PAGE_KERNEL_RW		(_PAGE_RW | _PAGE_DIRTY) /* user access blocked by key */
+#define _PAGE_KERNEL_RO		 _PAGE_KERNEL_RW
 
 /* Strong Access Ordering */
-#define _PAGE_SAO	(_PAGE_WRITETHRU | _PAGE_NO_CACHE | _PAGE_COHERENT)
+#define _PAGE_SAO		(_PAGE_WRITETHRU | _PAGE_NO_CACHE | _PAGE_COHERENT)
 
-#define _PAGE_BASE	(_PAGE_PRESENT | _PAGE_ACCESSED | _PAGE_COHERENT)
-
-#define _PAGE_WRENABLE	(_PAGE_RW | _PAGE_DIRTY)
+/* No page size encoding in the linux PTE */
+#define _PAGE_PSIZE		0
 
 /* PTEIDX nibble */
 #define _PTEIDX_SECONDARY	0x8
 #define _PTEIDX_GROUP_IX	0x7
 
-#define PAGE_PROT_BITS	(_PAGE_GUARDED | _PAGE_COHERENT | \
-			 _PAGE_NO_CACHE | _PAGE_WRITETHRU |		\
-			 _PAGE_4K_PFN | _PAGE_RW | _PAGE_USER |		\
-			 _PAGE_ACCESSED | _PAGE_DIRTY | _PAGE_EXEC)
-
 
 #ifdef CONFIG_PPC_64K_PAGES
 #include <asm/pte-hash64-64k.h>
Index: linux-work/arch/powerpc/include/asm/pgtable-ppc32.h
===================================================================
--- linux-work.orig/arch/powerpc/include/asm/pgtable-ppc32.h	2009-03-10 15:30:00.000000000 +1100
+++ linux-work/arch/powerpc/include/asm/pgtable-ppc32.h	2009-03-10 15:30:02.000000000 +1100
@@ -144,6 +144,13 @@ extern int icache_44x_need_flush;
 #define PMD_PAGE_SIZE(pmd)	bad_call_to_PMD_PAGE_SIZE()
 #endif
 
+#ifndef _PAGE_KERNEL_RO
+#define _PAGE_KERNEL_RO	0
+#endif
+#ifndef _PAGE_KERNEL_RW
+#define _PAGE_KERNEL_RW	(_PAGE_DIRTY | _PAGE_RW | _PAGE_HWWRITE)
+#endif
+
 #define _PAGE_HPTEFLAGS _PAGE_HASHPTE
 
 /* Location of the PFN in the PTE. Most platforms use the same as _PAGE_SHIFT
@@ -186,30 +193,25 @@ extern int icache_44x_need_flush;
 #else
 #define _PAGE_BASE	(_PAGE_PRESENT | _PAGE_ACCESSED)
 #endif
-#define _PAGE_BASE_NC	(_PAGE_PRESENT | _PAGE_ACCESSED | _PAGE_NO_CACHE)
+#define _PAGE_BASE_NC	(_PAGE_PRESENT | _PAGE_ACCESSED)
 
-#define _PAGE_WRENABLE	(_PAGE_RW | _PAGE_DIRTY | _PAGE_HWWRITE)
-#define _PAGE_KERNEL	(_PAGE_BASE | _PAGE_SHARED | _PAGE_WRENABLE)
-#define _PAGE_KERNEL_NC	(_PAGE_BASE_NC | _PAGE_SHARED | _PAGE_WRENABLE)
-
-#ifdef CONFIG_PPC_STD_MMU
-/* On standard PPC MMU, no user access implies kernel read/write access,
- * so to write-protect kernel memory we must turn on user access */
-#define _PAGE_KERNEL_RO	(_PAGE_BASE | _PAGE_SHARED | _PAGE_USER)
-#else
-#define _PAGE_KERNEL_RO	(_PAGE_BASE | _PAGE_SHARED)
-#endif
-
-#define _PAGE_IO	(_PAGE_KERNEL_NC | _PAGE_GUARDED)
-#define _PAGE_RAM	(_PAGE_KERNEL | _PAGE_HWEXEC)
+/* Permission masks used for kernel mappings */
+#define PAGE_KERNEL	__pgprot(_PAGE_BASE | _PAGE_KERNEL_RW)
+#define PAGE_KERNEL_NC	__pgprot(_PAGE_BASE_NC | _PAGE_KERNEL_RW | \
+				 _PAGE_NO_CACHE)
+#define PAGE_KERNEL_NCG	__pgprot(_PAGE_BASE_NC | _PAGE_KERNEL_RW | \
+				 _PAGE_NO_CACHE | _PAGE_GUARDED)
+#define PAGE_KERNEL_X __pgprot(_PAGE_BASE | _PAGE_KERNEL_RW | _PAGE_EXEC)
+#define PAGE_KERNEL_RO __pgprot(_PAGE_BASE | _PAGE_KERNEL_RO)
+#define PAGE_KERNEL_ROX __pgprot(_PAGE_BASE | _PAGE_KERNEL_RO | _PAGE_EXEC)
 
 #if defined(CONFIG_KGDB) || defined(CONFIG_XMON) || defined(CONFIG_BDI_SWITCH) ||\
 	defined(CONFIG_KPROBES)
 /* We want the debuggers to be able to set breakpoints anywhere, so
  * don't write protect the kernel text */
-#define _PAGE_RAM_TEXT	_PAGE_RAM
+#define PAGE_KERNEL_TEXT	PAGE_KERNEL_X
 #else
-#define _PAGE_RAM_TEXT	(_PAGE_KERNEL_RO | _PAGE_HWEXEC)
+#define PAGE_KERNEL_TEXT	PAGE_KERNEL_ROX
 #endif
 
 #define PAGE_NONE	__pgprot(_PAGE_BASE)
@@ -220,9 +222,6 @@ extern int icache_44x_need_flush;
 #define PAGE_COPY	__pgprot(_PAGE_BASE | _PAGE_USER)
 #define PAGE_COPY_X	__pgprot(_PAGE_BASE | _PAGE_USER | _PAGE_EXEC)
 
-#define PAGE_KERNEL		__pgprot(_PAGE_RAM)
-#define PAGE_KERNEL_NOCACHE	__pgprot(_PAGE_IO)
-
 /*
  * The PowerPC can only do execute protection on a segment (256MB) basis,
  * not on a page basis.  So we consider execute permission the same as read.
Index: linux-work/arch/powerpc/include/asm/pte-8xx.h
===================================================================
--- linux-work.orig/arch/powerpc/include/asm/pte-8xx.h	2009-03-10 15:28:29.000000000 +1100
+++ linux-work/arch/powerpc/include/asm/pte-8xx.h	2009-03-10 15:30:02.000000000 +1100
@@ -59,6 +59,9 @@
 /* Until my rework is finished, 8xx still needs atomic PTE updates */
 #define PTE_ATOMIC_UPDATES	1
 
+/* We need to add _PAGE_SHARED to kernel pages */
+#define _PAGE_KERNEL_RO	(_PAGE_SHARED)
+#define _PAGE_KERNEL_RW	(_PAGE_DIRTY | _PAGE_RW | _PAGE_HWWRITE)
 
 #endif /* __KERNEL__ */
 #endif /*  _ASM_POWERPC_PTE_8xx_H */
Index: linux-work/arch/powerpc/include/asm/pte-hash32.h
===================================================================
--- linux-work.orig/arch/powerpc/include/asm/pte-hash32.h	2009-03-10 15:28:47.000000000 +1100
+++ linux-work/arch/powerpc/include/asm/pte-hash32.h	2009-03-10 15:30:02.000000000 +1100
@@ -44,6 +44,5 @@
 /* Hash table based platforms need atomic updates of the linux PTE */
 #define PTE_ATOMIC_UPDATES	1
 
-
 #endif /* __KERNEL__ */
 #endif /*  _ASM_POWERPC_PTE_HASH32_H */
Index: linux-work/arch/powerpc/mm/pgtable_32.c
===================================================================
--- linux-work.orig/arch/powerpc/mm/pgtable_32.c	2009-03-10 15:26:49.000000000 +1100
+++ linux-work/arch/powerpc/mm/pgtable_32.c	2009-03-10 15:30:02.000000000 +1100
@@ -164,7 +164,7 @@ __ioremap_caller(phys_addr_t addr, unsig
 
 	/* Make sure we have the base flags */
 	if ((flags & _PAGE_PRESENT) == 0)
-		flags |= _PAGE_KERNEL;
+		flags |= PAGE_KERNEL;
 
 	/* Non-cacheable page cannot be coherent */
 	if (flags & _PAGE_NO_CACHE)
@@ -296,7 +296,7 @@ void __init mapin_ram(void)
 	p = memstart_addr + s;
 	for (; s < total_lowmem; s += PAGE_SIZE) {
 		ktext = ((char *) v >= _stext && (char *) v < etext);
-		f = ktext ?_PAGE_RAM_TEXT : _PAGE_RAM;
+		f = ktext ? PAGE_KERNEL_TEXT : PAGE_KERNEL;
 		map_page(v, p, f);
 #ifdef CONFIG_PPC_STD_MMU_32
 		if (ktext)
Index: linux-work/arch/powerpc/mm/ppc_mmu_32.c
===================================================================
--- linux-work.orig/arch/powerpc/mm/ppc_mmu_32.c	2009-03-10 15:26:49.000000000 +1100
+++ linux-work/arch/powerpc/mm/ppc_mmu_32.c	2009-03-10 15:30:02.000000000 +1100
@@ -74,9 +74,6 @@ unsigned long p_mapped_by_bats(phys_addr
 
 unsigned long __init mmu_mapin_ram(void)
 {
-#ifdef CONFIG_POWER4
-	return 0;
-#else
 	unsigned long tot, bl, done;
 	unsigned long max_size = (256<<20);
 
@@ -95,7 +92,7 @@ unsigned long __init mmu_mapin_ram(void)
 			break;
 	}
 
-	setbat(2, PAGE_OFFSET, 0, bl, _PAGE_RAM);
+	setbat(2, PAGE_OFFSET, 0, bl, PAGE_KERNEL_X);
 	done = (unsigned long)bat_addrs[2].limit - PAGE_OFFSET + 1;
 	if ((done < tot) && !bat_addrs[3].limit) {
 		/* use BAT3 to cover a bit more */
@@ -103,12 +100,11 @@ unsigned long __init mmu_mapin_ram(void)
 		for (bl = 128<<10; bl < max_size; bl <<= 1)
 			if (bl * 2 > tot)
 				break;
-		setbat(3, PAGE_OFFSET+done, done, bl, _PAGE_RAM);
+		setbat(3, PAGE_OFFSET+done, done, bl, PAGE_KERNEL_X);
 		done = (unsigned long)bat_addrs[3].limit - PAGE_OFFSET + 1;
 	}
 
 	return done;
-#endif
 }
 
 /*
@@ -136,9 +132,7 @@ void __init setbat(int index, unsigned l
 		wimgxpp |= (flags & _PAGE_RW)? BPP_RW: BPP_RX;
 		bat[1].batu = virt | (bl << 2) | 2; /* Vs=1, Vp=0 */
 		bat[1].batl = BAT_PHYS_ADDR(phys) | wimgxpp;
-#ifndef CONFIG_KGDB /* want user access for breakpoints */
 		if (flags & _PAGE_USER)
-#endif
 			bat[1].batu |= 1; 	/* Vp = 1 */
 		if (flags & _PAGE_GUARDED) {
 			/* G bit must be zero in IBATs */
Index: linux-work/arch/powerpc/sysdev/cpm_common.c
===================================================================
--- linux-work.orig/arch/powerpc/sysdev/cpm_common.c	2009-03-10 15:26:49.000000000 +1100
+++ linux-work/arch/powerpc/sysdev/cpm_common.c	2009-03-10 15:30:02.000000000 +1100
@@ -56,7 +56,7 @@ void __init udbg_init_cpm(void)
 {
 	if (cpm_udbg_txdesc) {
 #ifdef CONFIG_CPM2
-		setbat(1, 0xf0000000, 0xf0000000, 1024*1024, _PAGE_IO);
+		setbat(1, 0xf0000000, 0xf0000000, 1024*1024, PAGE_KERNEL_NCG);
 #endif
 		udbg_putc = udbg_putc_cpm;
 	}
Index: linux-work/arch/powerpc/include/asm/fixmap.h
===================================================================
--- linux-work.orig/arch/powerpc/include/asm/fixmap.h	2009-03-10 15:26:49.000000000 +1100
+++ linux-work/arch/powerpc/include/asm/fixmap.h	2009-03-10 15:30:02.000000000 +1100
@@ -61,7 +61,7 @@ extern void __set_fixmap (enum fixed_add
  * Some hardware wants to get fixmapped without caching.
  */
 #define set_fixmap_nocache(idx, phys) \
-		__set_fixmap(idx, phys, PAGE_KERNEL_NOCACHE)
+		__set_fixmap(idx, phys, PAGE_KERNEL_NCG)
 
 #define clear_fixmap(idx) \
 		__set_fixmap(idx, 0, __pgprot(0))
Index: linux-work/arch/powerpc/include/asm/pgtable.h
===================================================================
--- linux-work.orig/arch/powerpc/include/asm/pgtable.h	2009-03-10 15:26:49.000000000 +1100
+++ linux-work/arch/powerpc/include/asm/pgtable.h	2009-03-10 15:30:02.000000000 +1100
@@ -25,6 +25,10 @@ static inline void assert_pte_locked(str
 #  include <asm/pgtable-ppc32.h>
 #endif
 
+/* Special mapping for AGP */
+#define PAGE_AGP	(PAGE_KERNEL_NC)
+#define HAVE_PAGE_AGP
+
 #ifndef __ASSEMBLY__
 
 /* Insert a PTE, top-level function is out of line. It uses an inline
Index: linux-work/arch/powerpc/mm/fsl_booke_mmu.c
===================================================================
--- linux-work.orig/arch/powerpc/mm/fsl_booke_mmu.c	2009-03-10 15:26:49.000000000 +1100
+++ linux-work/arch/powerpc/mm/fsl_booke_mmu.c	2009-03-10 15:30:02.000000000 +1100
@@ -162,7 +162,7 @@ unsigned long __init mmu_mapin_ram(void)
 	phys_addr_t phys = memstart_addr;
 
 	while (cam[tlbcam_index] && tlbcam_index < ARRAY_SIZE(cam)) {
-		settlbcam(tlbcam_index, virt, phys, cam[tlbcam_index], _PAGE_KERNEL, 0);
+		settlbcam(tlbcam_index, virt, phys, cam[tlbcam_index], PAGE_KERNEL_X, 0);
 		virt += cam[tlbcam_index];
 		phys += cam[tlbcam_index];
 		tlbcam_index++;

^ permalink raw reply	[flat|nested] 14+ messages in thread

* [PATCH 5/9] powerpc/mm: Merge various PTE bits and accessors definitions (v2)
  2009-03-11  3:53 [PATCH 0/9] powerpc: MMU cleanup and propare for new Book3-E Benjamin Herrenschmidt
                   ` (3 preceding siblings ...)
  2009-03-11  3:53 ` [PATCH 4/9] powerpc/mm: Tweak PTE bit combination definitions (v2) Benjamin Herrenschmidt
@ 2009-03-11  3:53 ` Benjamin Herrenschmidt
  2009-03-19 16:16   ` Kumar Gala
  2009-03-11  3:53 ` [PATCH 6/9] powerpc/mm: Rename arch/powerpc/kernel/mmap.c to mmap_64.c Benjamin Herrenschmidt
                   ` (3 subsequent siblings)
  8 siblings, 1 reply; 14+ messages in thread
From: Benjamin Herrenschmidt @ 2009-03-11  3:53 UTC (permalink / raw)
  To: linuxppc-dev

Now that they are almost identical, we can merge some of the definitions
related to the PTE format into common files.

This creates a new pte-common.h which is included by both 32 and 64-bit
right after the CPU-specific pte-*.h file, and which defines some
bits to "default" values if they haven't been defined already. It then
provides a generic definition of most of the bit combinations
based on these, which is exposed to the rest of the kernel.

I also moved most of the "small" PTE bit accessors and modification
helpers (pte_mk*) to the common pgtable.h. The actual accessors remain
in their separate files.
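
The defaulting scheme in pte-common.h is the classic define-to-zero
trick: any feature bit a platform doesn't have simply vanishes from the
combined masks. A standalone miniature of the idea (the real file
obviously carries many more bits):

#include <stdio.h>

/* what a platform pte-*.h would provide */
#define _PAGE_PRESENT	0x001
#define _PAGE_RW	0x002

/* what the common header then defaults, so that generic combinations
 * can be written unconditionally */
#ifndef _PAGE_HWWRITE
#define _PAGE_HWWRITE	0
#endif
#ifndef _PAGE_DIRTY
#define _PAGE_DIRTY	0
#endif

#define _PAGE_KERNEL_RW	(_PAGE_DIRTY | _PAGE_RW | _PAGE_HWWRITE)

int main(void)
{
	/* on a platform without HWWRITE/DIRTY the combo quietly degrades */
	printf("_PAGE_KERNEL_RW = %#x\n", _PAGE_KERNEL_RW);	/* 0x2 */
	return 0;
}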

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
---

v2: Fix mixup with previous patch

 arch/powerpc/include/asm/pgtable-ppc32.h |  204 -------------------------------
 arch/powerpc/include/asm/pgtable-ppc64.h |  130 -------------------
 arch/powerpc/include/asm/pgtable.h       |   53 +++++++-
 arch/powerpc/include/asm/pte-common.h    |  177 ++++++++++++++++++++++++++
 4 files changed, 229 insertions(+), 335 deletions(-)

--- linux-work.orig/arch/powerpc/include/asm/pgtable-ppc32.h	2009-03-03 16:20:48.000000000 +1100
+++ linux-work/arch/powerpc/include/asm/pgtable-ppc32.h	2009-03-03 16:20:48.000000000 +1100
@@ -97,174 +97,11 @@ extern int icache_44x_need_flush;
 #include <asm/pte-hash32.h>
 #endif
 
-/* If _PAGE_SPECIAL is defined, then we advertise our support for it */
-#ifdef _PAGE_SPECIAL
-#define __HAVE_ARCH_PTE_SPECIAL
-#endif
-
-/*
- * Some bits are only used on some cpu families... Make sure that all
- * the undefined gets defined as 0
- */
-#ifndef _PAGE_HASHPTE
-#define _PAGE_HASHPTE	0
-#endif
-#ifndef _PTE_NONE_MASK
-#define _PTE_NONE_MASK 0
-#endif
-#ifndef _PAGE_SHARED
-#define _PAGE_SHARED	0
-#endif
-#ifndef _PAGE_HWWRITE
-#define _PAGE_HWWRITE	0
-#endif
-#ifndef _PAGE_HWEXEC
-#define _PAGE_HWEXEC	0
-#endif
-#ifndef _PAGE_EXEC
-#define _PAGE_EXEC	0
-#endif
-#ifndef _PAGE_ENDIAN
-#define _PAGE_ENDIAN	0
-#endif
-#ifndef _PAGE_COHERENT
-#define _PAGE_COHERENT	0
-#endif
-#ifndef _PAGE_WRITETHRU
-#define _PAGE_WRITETHRU	0
-#endif
-#ifndef _PAGE_SPECIAL
-#define _PAGE_SPECIAL	0
-#endif
-#ifndef _PMD_PRESENT_MASK
-#define _PMD_PRESENT_MASK	_PMD_PRESENT
-#endif
-#ifndef _PMD_SIZE
-#define _PMD_SIZE	0
-#define PMD_PAGE_SIZE(pmd)	bad_call_to_PMD_PAGE_SIZE()
-#endif
-
-#ifndef _PAGE_KERNEL_RO
-#define _PAGE_KERNEL_RO	0
-#endif
-#ifndef _PAGE_KERNEL_RW
-#define _PAGE_KERNEL_RW	(_PAGE_DIRTY | _PAGE_RW | _PAGE_HWWRITE)
-#endif
-
-#define _PAGE_HPTEFLAGS _PAGE_HASHPTE
-
-/* Location of the PFN in the PTE. Most platforms use the same as PAGE_SHIFT
- * here (ie, naturally aligned). Platforms that don't simply pre-define the
- * value, so we don't override it here
- */
-#ifndef PTE_RPN_SHIFT
-#define PTE_RPN_SHIFT	(PAGE_SHIFT)
-#endif
-
-#ifdef CONFIG_PTE_64BIT
-#define PTE_RPN_MAX	(1ULL << (64 - PTE_RPN_SHIFT))
-#define PTE_RPN_MASK	(~((1ULL<<PTE_RPN_SHIFT)-1))
-#else
-#define PTE_RPN_MAX	(1UL << (32 - PTE_RPN_SHIFT))
-#define PTE_RPN_MASK	(~((1UL<<PTE_RPN_SHIFT)-1))
-#endif
-
-/* _PAGE_CHG_MASK masks the bits that are to be preserved across
- * pgprot changes
- */
-#define _PAGE_CHG_MASK	(PTE_RPN_MASK | _PAGE_HPTEFLAGS | _PAGE_DIRTY | \
-                         _PAGE_ACCESSED | _PAGE_SPECIAL)
-
-/* Mask of bits returned by pte_pgprot() */
-#define PAGE_PROT_BITS	(_PAGE_GUARDED | _PAGE_COHERENT | _PAGE_NO_CACHE | \
-			 _PAGE_WRITETHRU | _PAGE_ENDIAN | \
-			 _PAGE_USER | _PAGE_ACCESSED | \
-			 _PAGE_RW | _PAGE_HWWRITE | _PAGE_DIRTY | \
-			 _PAGE_EXEC | _PAGE_HWEXEC)
-
-/*
- * We define 2 sets of base prot bits, one for basic pages (ie,
- * cacheable kernel and user pages) and one for non cacheable
- * pages. We always set _PAGE_COHERENT when SMP is enabled or
- * the processor might need it for DMA coherency.
- */
-#if defined(CONFIG_SMP) || defined(CONFIG_PPC_STD_MMU)
-#define _PAGE_BASE	(_PAGE_PRESENT | _PAGE_ACCESSED | _PAGE_COHERENT)
-#else
-#define _PAGE_BASE	(_PAGE_PRESENT | _PAGE_ACCESSED)
-#endif
-#define _PAGE_BASE_NC	(_PAGE_PRESENT | _PAGE_ACCESSED)
-
-/* Permission masks used for kernel mappings */
-#define PAGE_KERNEL	__pgprot(_PAGE_BASE | _PAGE_KERNEL_RW)
-#define PAGE_KERNEL_NC	__pgprot(_PAGE_BASE_NC | _PAGE_KERNEL_RW | \
-				 _PAGE_NO_CACHE)
-#define PAGE_KERNEL_NCG	__pgprot(_PAGE_BASE_NC | _PAGE_KERNEL_RW | \
-				 _PAGE_NO_CACHE | _PAGE_GUARDED)
-#define PAGE_KERNEL_X __pgprot(_PAGE_BASE | _PAGE_KERNEL_RW | _PAGE_EXEC)
-#define PAGE_KERNEL_RO __pgprot(_PAGE_BASE | _PAGE_KERNEL_RO)
-#define PAGE_KERNEL_ROX __pgprot(_PAGE_BASE | _PAGE_KERNEL_RO | _PAGE_EXEC)
-
-#if defined(CONFIG_KGDB) || defined(CONFIG_XMON) || defined(CONFIG_BDI_SWITCH) ||\
-	defined(CONFIG_KPROBES)
-/* We want the debuggers to be able to set breakpoints anywhere, so
- * don't write protect the kernel text */
-#define PAGE_KERNEL_TEXT	PAGE_KERNEL_X
-#else
-#define PAGE_KERNEL_TEXT	PAGE_KERNEL_ROX
-#endif
-
-#define PAGE_NONE	__pgprot(_PAGE_BASE)
-#define PAGE_READONLY	__pgprot(_PAGE_BASE | _PAGE_USER)
-#define PAGE_READONLY_X	__pgprot(_PAGE_BASE | _PAGE_USER | _PAGE_EXEC)
-#define PAGE_SHARED	__pgprot(_PAGE_BASE | _PAGE_USER | _PAGE_RW)
-#define PAGE_SHARED_X	__pgprot(_PAGE_BASE | _PAGE_USER | _PAGE_RW | _PAGE_EXEC)
-#define PAGE_COPY	__pgprot(_PAGE_BASE | _PAGE_USER)
-#define PAGE_COPY_X	__pgprot(_PAGE_BASE | _PAGE_USER | _PAGE_EXEC)
-
-/*
- * The PowerPC can only do execute protection on a segment (256MB) basis,
- * not on a page basis.  So we consider execute permission the same as read.
- * Also, write permissions imply read permissions.
- * This is the closest we can get..
- */
-#define __P000	PAGE_NONE
-#define __P001	PAGE_READONLY_X
-#define __P010	PAGE_COPY
-#define __P011	PAGE_COPY_X
-#define __P100	PAGE_READONLY
-#define __P101	PAGE_READONLY_X
-#define __P110	PAGE_COPY
-#define __P111	PAGE_COPY_X
-
-#define __S000	PAGE_NONE
-#define __S001	PAGE_READONLY_X
-#define __S010	PAGE_SHARED
-#define __S011	PAGE_SHARED_X
-#define __S100	PAGE_READONLY
-#define __S101	PAGE_READONLY_X
-#define __S110	PAGE_SHARED
-#define __S111	PAGE_SHARED_X
+/* And here we include common definitions */
+#include <asm/pte-common.h>
 
 #ifndef __ASSEMBLY__
-/* Make sure we get a link error if PMD_PAGE_SIZE is ever called on a
- * kernel without large page PMD support */
-extern unsigned long bad_call_to_PMD_PAGE_SIZE(void);
 
-/*
- * Conversions between PTE values and page frame numbers.
- */
-
-#define pte_pfn(x)		(pte_val(x) >> PTE_RPN_SHIFT)
-#define pte_page(x)		pfn_to_page(pte_pfn(x))
-
-#define pfn_pte(pfn, prot)	__pte(((pte_basic_t)(pfn) << PTE_RPN_SHIFT) |\
-					pgprot_val(prot))
-#define mk_pte(page, prot)	pfn_pte(page_to_pfn(page), prot)
-#endif /* __ASSEMBLY__ */
-
-#define pte_none(pte)		((pte_val(pte) & ~_PTE_NONE_MASK) == 0)
-#define pte_present(pte)	(pte_val(pte) & _PAGE_PRESENT)
 #define pte_clear(mm, addr, ptep) \
 	do { pte_update(ptep, ~_PAGE_HASHPTE, 0); } while (0)
 
@@ -273,43 +110,6 @@ extern unsigned long bad_call_to_PMD_PAG
 #define	pmd_present(pmd)	(pmd_val(pmd) & _PMD_PRESENT_MASK)
 #define	pmd_clear(pmdp)		do { pmd_val(*(pmdp)) = 0; } while (0)
 
-#ifndef __ASSEMBLY__
-/*
- * The following only work if pte_present() is true.
- * Undefined behaviour if not..
- */
-static inline int pte_write(pte_t pte)		{ return pte_val(pte) & _PAGE_RW; }
-static inline int pte_dirty(pte_t pte)		{ return pte_val(pte) & _PAGE_DIRTY; }
-static inline int pte_young(pte_t pte)		{ return pte_val(pte) & _PAGE_ACCESSED; }
-static inline int pte_file(pte_t pte)		{ return pte_val(pte) & _PAGE_FILE; }
-static inline int pte_special(pte_t pte)	{ return pte_val(pte) & _PAGE_SPECIAL; }
-
-static inline pte_t pte_wrprotect(pte_t pte) {
-	pte_val(pte) &= ~(_PAGE_RW | _PAGE_HWWRITE); return pte; }
-static inline pte_t pte_mkclean(pte_t pte) {
-	pte_val(pte) &= ~(_PAGE_DIRTY | _PAGE_HWWRITE); return pte; }
-static inline pte_t pte_mkold(pte_t pte) {
-	pte_val(pte) &= ~_PAGE_ACCESSED; return pte; }
-
-static inline pte_t pte_mkwrite(pte_t pte) {
-	pte_val(pte) |= _PAGE_RW; return pte; }
-static inline pte_t pte_mkdirty(pte_t pte) {
-	pte_val(pte) |= _PAGE_DIRTY; return pte; }
-static inline pte_t pte_mkyoung(pte_t pte) {
-	pte_val(pte) |= _PAGE_ACCESSED; return pte; }
-static inline pte_t pte_mkspecial(pte_t pte) {
-	pte_val(pte) |= _PAGE_SPECIAL; return pte; }
-static inline pgprot_t pte_pgprot(pte_t pte)
-{
-	return __pgprot(pte_val(pte) & PAGE_PROT_BITS);
-}
-
-static inline pte_t pte_modify(pte_t pte, pgprot_t newprot)
-{
-	pte_val(pte) = (pte_val(pte) & _PAGE_CHG_MASK) | pgprot_val(newprot);
-	return pte;
-}
-
 /*
  * When flushing the tlb entry for a page, we also need to flush the hash
  * table entry.  flush_hash_pages is assembler (for speed) in hashtable.S.
Index: linux-work/arch/powerpc/include/asm/pgtable-ppc64.h
===================================================================
--- linux-work.orig/arch/powerpc/include/asm/pgtable-ppc64.h	2009-03-03 16:20:48.000000000 +1100
+++ linux-work/arch/powerpc/include/asm/pgtable-ppc64.h	2009-03-03 16:20:48.000000000 +1100
@@ -80,80 +80,8 @@
  * Include the PTE bits definitions
  */
 #include <asm/pte-hash64.h>
+#include <asm/pte-common.h>
 
-/* Some other useful definitions */
-#define PTE_RPN_MAX	(1UL << (64 - PTE_RPN_SHIFT))
-#define PTE_RPN_MASK	(~((1UL<<PTE_RPN_SHIFT)-1))
-
-/* _PAGE_CHG_MASK masks the bits that are to be preserved across
- * pgprot changes
- */
-#define _PAGE_CHG_MASK	(PTE_RPN_MASK | _PAGE_HPTEFLAGS | _PAGE_DIRTY | \
-                         _PAGE_ACCESSED | _PAGE_SPECIAL)
-
-#define _PAGE_BASE_NC	(_PAGE_PRESENT | _PAGE_ACCESSED | _PAGE_PSIZE)
-#define _PAGE_BASE	(_PAGE_BASE_NC | _PAGE_COHERENT)
-
-
-/* Permission masks used to generate the __P and __S tables,
- *
- * Note: __pgprot is defined in arch/powerpc/include/asm/page.h
- */
-#define PAGE_NONE	__pgprot(_PAGE_BASE)
-#define PAGE_SHARED	__pgprot(_PAGE_BASE | _PAGE_USER | _PAGE_RW)
-#define PAGE_SHARED_X	__pgprot(_PAGE_BASE | _PAGE_USER | _PAGE_RW | _PAGE_EXEC)
-#define PAGE_COPY	__pgprot(_PAGE_BASE | _PAGE_USER)
-#define PAGE_COPY_X	__pgprot(_PAGE_BASE | _PAGE_USER | _PAGE_EXEC)
-#define PAGE_READONLY	__pgprot(_PAGE_BASE | _PAGE_USER)
-#define PAGE_READONLY_X	__pgprot(_PAGE_BASE | _PAGE_USER | _PAGE_EXEC)
-
-/* Permission masks used for kernel mappings */
-#define PAGE_KERNEL	__pgprot(_PAGE_BASE | _PAGE_KERNEL_RW)
-#define PAGE_KERNEL_NC	__pgprot(_PAGE_BASE_NC | _PAGE_KERNEL_RW | \
-				 _PAGE_NO_CACHE)
-#define PAGE_KERNEL_NCG	__pgprot(_PAGE_BASE_NC | _PAGE_KERNEL_RW | \
-				 _PAGE_NO_CACHE | _PAGE_GUARDED)
-#define PAGE_KERNEL_X __pgprot(_PAGE_BASE | _PAGE_KERNEL_RW | _PAGE_EXEC)
-#define PAGE_KERNEL_RO __pgprot(_PAGE_BASE | _PAGE_KERNEL_RO)
-#define PAGE_KERNEL_ROX __pgprot(_PAGE_BASE | _PAGE_KERNEL_RO | _PAGE_EXEC)
-
-/* Protection bits for use by pte_pgprot() */
-#define PAGE_PROT_BITS	(_PAGE_GUARDED | _PAGE_COHERENT | \
-			 _PAGE_NO_CACHE | _PAGE_WRITETHRU |		\
-			 _PAGE_4K_PFN | _PAGE_USER | _PAGE_RW |		\
-			 _PAGE_ACCESSED | _PAGE_DIRTY | _PAGE_EXEC)
-
-
-/* We always have _PAGE_SPECIAL on 64 bit */
-#define __HAVE_ARCH_PTE_SPECIAL
-
-
-/*
- * POWER4 and newer have per page execute protection, older chips can only
- * do this on a segment (256MB) basis.
- *
- * Also, write permissions imply read permissions.
- * This is the closest we can get..
- *
- * Note due to the way vm flags are laid out, the bits are XWR
- */
-#define __P000	PAGE_NONE
-#define __P001	PAGE_READONLY
-#define __P010	PAGE_COPY
-#define __P011	PAGE_COPY
-#define __P100	PAGE_READONLY_X
-#define __P101	PAGE_READONLY_X
-#define __P110	PAGE_COPY_X
-#define __P111	PAGE_COPY_X
-
-#define __S000	PAGE_NONE
-#define __S001	PAGE_READONLY
-#define __S010	PAGE_SHARED
-#define __S011	PAGE_SHARED
-#define __S100	PAGE_READONLY_X
-#define __S101	PAGE_READONLY_X
-#define __S110	PAGE_SHARED_X
-#define __S111	PAGE_SHARED_X
 
 #ifdef CONFIG_PPC_MM_SLICES
 #define HAVE_ARCH_UNMAPPED_AREA
@@ -194,34 +122,8 @@
 #endif /* __real_pte */
 
 
-/*
- * Conversion functions: convert a page and protection to a page entry,
- * and a page entry and page directory to the page they refer to.
- *
- * mk_pte takes a (struct page *) as input
- */
-#define mk_pte(page, pgprot)	pfn_pte(page_to_pfn(page), (pgprot))
-
-static inline pte_t pfn_pte(unsigned long pfn, pgprot_t pgprot)
-{
-	pte_t pte;
-
-
-	pte_val(pte) = (pfn << PTE_RPN_SHIFT) | pgprot_val(pgprot);
-	return pte;
-}
-
-#define pte_modify(_pte, newprot) \
-  (__pte((pte_val(_pte) & _PAGE_CHG_MASK) | pgprot_val(newprot)))
-
-#define pte_none(pte)		((pte_val(pte) & ~_PAGE_HPTEFLAGS) == 0)
-#define pte_present(pte)	(pte_val(pte) & _PAGE_PRESENT)
-
 /* pte_clear moved to later in this file */
 
-#define pte_pfn(x)		((unsigned long)((pte_val(x)>>PTE_RPN_SHIFT)))
-#define pte_page(x)		pfn_to_page(pte_pfn(x))
-
 #define PMD_BAD_BITS		(PTE_TABLE_SIZE-1)
 #define PUD_BAD_BITS		(PMD_TABLE_SIZE-1)
 
@@ -269,36 +171,6 @@ static inline pte_t pfn_pte(unsigned lon
 /* This now only contains the vmalloc pages */
 #define pgd_offset_k(address) pgd_offset(&init_mm, address)
 
-/*
- * The following only work if pte_present() is true.
- * Undefined behaviour if not..
- */
-static inline int pte_write(pte_t pte) { return pte_val(pte) & _PAGE_RW;}
-static inline int pte_dirty(pte_t pte) { return pte_val(pte) & _PAGE_DIRTY;}
-static inline int pte_young(pte_t pte) { return pte_val(pte) & _PAGE_ACCESSED;}
-static inline int pte_file(pte_t pte) { return pte_val(pte) & _PAGE_FILE;}
-static inline int pte_special(pte_t pte) { return pte_val(pte) & _PAGE_SPECIAL; }
-
-static inline pte_t pte_wrprotect(pte_t pte) {
-	pte_val(pte) &= ~(_PAGE_RW); return pte; }
-static inline pte_t pte_mkclean(pte_t pte) {
-	pte_val(pte) &= ~(_PAGE_DIRTY); return pte; }
-static inline pte_t pte_mkold(pte_t pte) {
-	pte_val(pte) &= ~_PAGE_ACCESSED; return pte; }
-static inline pte_t pte_mkwrite(pte_t pte) {
-	pte_val(pte) |= _PAGE_RW; return pte; }
-static inline pte_t pte_mkdirty(pte_t pte) {
-	pte_val(pte) |= _PAGE_DIRTY; return pte; }
-static inline pte_t pte_mkyoung(pte_t pte) {
-	pte_val(pte) |= _PAGE_ACCESSED; return pte; }
-static inline pte_t pte_mkhuge(pte_t pte) {
-	return pte; }
-static inline pte_t pte_mkspecial(pte_t pte) {
-	pte_val(pte) |= _PAGE_SPECIAL; return pte; }
-static inline pgprot_t pte_pgprot(pte_t pte)
-{
-	return __pgprot(pte_val(pte) & PAGE_PROT_BITS);
-}
 
 /* Atomic PTE updates */
 static inline unsigned long pte_update(struct mm_struct *mm,
Index: linux-work/arch/powerpc/include/asm/pgtable.h
===================================================================
--- linux-work.orig/arch/powerpc/include/asm/pgtable.h	2009-03-03 16:20:48.000000000 +1100
+++ linux-work/arch/powerpc/include/asm/pgtable.h	2009-03-03 16:20:48.000000000 +1100
@@ -25,12 +25,57 @@ static inline void assert_pte_locked(str
 #  include <asm/pgtable-ppc32.h>
 #endif
 
-/* Special mapping for AGP */
-#define PAGE_AGP	(PAGE_KERNEL_NC)
-#define HAVE_PAGE_AGP
-
 #ifndef __ASSEMBLY__
 
+/* Generic accessors to PTE bits */
+static inline int pte_write(pte_t pte)		{ return pte_val(pte) & _PAGE_RW; }
+static inline int pte_dirty(pte_t pte)		{ return pte_val(pte) & _PAGE_DIRTY; }
+static inline int pte_young(pte_t pte)		{ return pte_val(pte) & _PAGE_ACCESSED; }
+static inline int pte_file(pte_t pte)		{ return pte_val(pte) & _PAGE_FILE; }
+static inline int pte_special(pte_t pte)	{ return pte_val(pte) & _PAGE_SPECIAL; }
+static inline int pte_present(pte_t pte)	{ return pte_val(pte) & _PAGE_PRESENT; }
+static inline int pte_none(pte_t pte)		{ return (pte_val(pte) & ~_PTE_NONE_MASK) == 0; }
+static inline pgprot_t pte_pgprot(pte_t pte)	{ return __pgprot(pte_val(pte) & PAGE_PROT_BITS); }
+
+/* Conversion functions: convert a page and protection to a page entry,
+ * and a page entry and page directory to the page they refer to.
+ *
+ * Even if PTEs can be unsigned long long, a PFN is always an unsigned
+ * long for now.
+ */
+static inline pte_t pfn_pte(unsigned long pfn, pgprot_t pgprot) {
+	return __pte((pfn << PTE_RPN_SHIFT) | pgprot_val(pgprot)); }
+static inline unsigned long pte_pfn(pte_t pte)	{
+	return pte_val(pte) >> PTE_RPN_SHIFT; }
+
+/* Keep these as macros to avoid an include dependency mess */
+#define pte_page(x)		pfn_to_page(pte_pfn(x))
+#define mk_pte(page, pgprot)	pfn_pte(page_to_pfn(page), (pgprot))
+
+/* Generic modifiers for PTE bits */
+static inline pte_t pte_wrprotect(pte_t pte) {
+	pte_val(pte) &= ~(_PAGE_RW | _PAGE_HWWRITE); return pte; }
+static inline pte_t pte_mkclean(pte_t pte) {
+	pte_val(pte) &= ~(_PAGE_DIRTY | _PAGE_HWWRITE); return pte; }
+static inline pte_t pte_mkold(pte_t pte) {
+	pte_val(pte) &= ~_PAGE_ACCESSED; return pte; }
+static inline pte_t pte_mkwrite(pte_t pte) {
+	pte_val(pte) |= _PAGE_RW; return pte; }
+static inline pte_t pte_mkdirty(pte_t pte) {
+	pte_val(pte) |= _PAGE_DIRTY; return pte; }
+static inline pte_t pte_mkyoung(pte_t pte) {
+	pte_val(pte) |= _PAGE_ACCESSED; return pte; }
+static inline pte_t pte_mkspecial(pte_t pte) {
+	pte_val(pte) |= _PAGE_SPECIAL; return pte; }
+static inline pte_t pte_mkhuge(pte_t pte) {
+	return pte; }
+static inline pte_t pte_modify(pte_t pte, pgprot_t newprot)
+{
+	pte_val(pte) = (pte_val(pte) & _PAGE_CHG_MASK) | pgprot_val(newprot);
+	return pte;
+}
+
+
 /* Insert a PTE, top-level function is out of line. It uses an inline
  * low level function in the respective pgtable-* files
  */
Index: linux-work/arch/powerpc/include/asm/pte-common.h
===================================================================
--- /dev/null	1970-01-01 00:00:00.000000000 +0000
+++ linux-work/arch/powerpc/include/asm/pte-common.h	2009-03-03 16:20:48.000000000 +1100
@@ -0,0 +1,177 @@
+/* Included from asm/pgtable-*.h only! */
+
+/*
+ * Some bits are only used on some cpu families... Make sure that all
+ * the undefined ones get a sensible default.
+ */
+#ifndef _PAGE_HASHPTE
+#define _PAGE_HASHPTE	0
+#endif
+#ifndef _PAGE_SHARED
+#define _PAGE_SHARED	0
+#endif
+#ifndef _PAGE_HWWRITE
+#define _PAGE_HWWRITE	0
+#endif
+#ifndef _PAGE_HWEXEC
+#define _PAGE_HWEXEC	0
+#endif
+#ifndef _PAGE_EXEC
+#define _PAGE_EXEC	0
+#endif
+#ifndef _PAGE_ENDIAN
+#define _PAGE_ENDIAN	0
+#endif
+#ifndef _PAGE_COHERENT
+#define _PAGE_COHERENT	0
+#endif
+#ifndef _PAGE_WRITETHRU
+#define _PAGE_WRITETHRU	0
+#endif
+#ifndef _PAGE_SPECIAL
+#define _PAGE_SPECIAL	0
+#endif
+#ifndef _PAGE_4K_PFN
+#define _PAGE_4K_PFN		0
+#endif
+#ifndef _PAGE_PSIZE
+#define _PAGE_PSIZE		0
+#endif
+#ifndef _PMD_PRESENT_MASK
+#define _PMD_PRESENT_MASK	_PMD_PRESENT
+#endif
+#ifndef _PMD_SIZE
+#define _PMD_SIZE	0
+#define PMD_PAGE_SIZE(pmd)	bad_call_to_PMD_PAGE_SIZE()
+#endif
+#ifndef _PAGE_KERNEL_RO
+#define _PAGE_KERNEL_RO	0
+#endif
+#ifndef _PAGE_KERNEL_RW
+#define _PAGE_KERNEL_RW	(_PAGE_DIRTY | _PAGE_RW | _PAGE_HWWRITE)
+#endif
+#ifndef _PAGE_HPTEFLAGS
+#define _PAGE_HPTEFLAGS _PAGE_HASHPTE
+#endif
+#ifndef _PTE_NONE_MASK
+#define _PTE_NONE_MASK	_PAGE_HPTEFLAGS
+#endif
+
+/* Make sure we get a link error if PMD_PAGE_SIZE is ever called on a
+ * kernel without large page PMD support
+ */
+#ifndef __ASSEMBLY__
+extern unsigned long bad_call_to_PMD_PAGE_SIZE(void);
+#endif /* __ASSEMBLY__ */
+
+/* Location of the PFN in the PTE. Most 32-bit platforms use the same
+ * value as PAGE_SHIFT here (ie, naturally aligned). Platforms that
+ * differ pre-define the value themselves, so we don't override it here.
+ */
+#ifndef PTE_RPN_SHIFT
+#define PTE_RPN_SHIFT	(PAGE_SHIFT)
+#endif
+
+/* The mask covered by the RPN must be a ULL on 32-bit platforms with
+ * 64-bit PTEs
+ */
+#if defined(CONFIG_PPC32) && defined(CONFIG_PTE_64BIT)
+#define PTE_RPN_MAX	(1ULL << (64 - PTE_RPN_SHIFT))
+#define PTE_RPN_MASK	(~((1ULL<<PTE_RPN_SHIFT)-1))
+#else
+#define PTE_RPN_MAX	(1UL << (32 - PTE_RPN_SHIFT))
+#define PTE_RPN_MASK	(~((1UL<<PTE_RPN_SHIFT)-1))
+#endif
+
+/* _PAGE_CHG_MASK masks the bits that are to be preserved across
+ * pgprot changes
+ */
+#define _PAGE_CHG_MASK	(PTE_RPN_MASK | _PAGE_HPTEFLAGS | _PAGE_DIRTY | \
+                         _PAGE_ACCESSED | _PAGE_SPECIAL)
+
+/* Mask of bits returned by pte_pgprot() */
+#define PAGE_PROT_BITS	(_PAGE_GUARDED | _PAGE_COHERENT | _PAGE_NO_CACHE | \
+			 _PAGE_WRITETHRU | _PAGE_ENDIAN | _PAGE_4K_PFN | \
+			 _PAGE_USER | _PAGE_ACCESSED | \
+			 _PAGE_RW | _PAGE_HWWRITE | _PAGE_DIRTY | \
+			 _PAGE_EXEC | _PAGE_HWEXEC)
+
+/*
+ * We define 2 sets of base prot bits, one for basic pages (ie,
+ * cacheable kernel and user pages) and one for non-cacheable
+ * pages. We always set _PAGE_COHERENT when SMP is enabled or when
+ * the processor might need it for DMA coherency.
+ */
+#define _PAGE_BASE_NC	(_PAGE_PRESENT | _PAGE_ACCESSED | _PAGE_PSIZE)
+#if defined(CONFIG_SMP) || defined(CONFIG_PPC_STD_MMU)
+#define _PAGE_BASE	(_PAGE_BASE_NC | _PAGE_COHERENT)
+#else
+#define _PAGE_BASE	(_PAGE_BASE_NC)
+#endif
+
+/* Permission masks used to generate the __P and __S table,
+ *
+ * Note:__pgprot is defined in arch/powerpc/include/asm/page.h
+ *
+ * Write permissions imply read permissions (we could make write-only
+ * pages on BookE but we don't bother for now). Execute permission
+ * control is possible on platforms that define _PAGE_EXEC.
+ *
+ * Note due to the way vm flags are laid out, the bits are XWR
+ */
+#define PAGE_NONE	__pgprot(_PAGE_BASE)
+#define PAGE_SHARED	__pgprot(_PAGE_BASE | _PAGE_USER | _PAGE_RW)
+#define PAGE_SHARED_X	__pgprot(_PAGE_BASE | _PAGE_USER | _PAGE_RW | _PAGE_EXEC)
+#define PAGE_COPY	__pgprot(_PAGE_BASE | _PAGE_USER)
+#define PAGE_COPY_X	__pgprot(_PAGE_BASE | _PAGE_USER | _PAGE_EXEC)
+#define PAGE_READONLY	__pgprot(_PAGE_BASE | _PAGE_USER)
+#define PAGE_READONLY_X	__pgprot(_PAGE_BASE | _PAGE_USER | _PAGE_EXEC)
+
+#define __P000	PAGE_NONE
+#define __P001	PAGE_READONLY
+#define __P010	PAGE_COPY
+#define __P011	PAGE_COPY
+#define __P100	PAGE_READONLY_X
+#define __P101	PAGE_READONLY_X
+#define __P110	PAGE_COPY_X
+#define __P111	PAGE_COPY_X
+
+#define __S000	PAGE_NONE
+#define __S001	PAGE_READONLY
+#define __S010	PAGE_SHARED
+#define __S011	PAGE_SHARED
+#define __S100	PAGE_READONLY_X
+#define __S101	PAGE_READONLY_X
+#define __S110	PAGE_SHARED_X
+#define __S111	PAGE_SHARED_X
+
+/* Permission masks used for kernel mappings */
+#define PAGE_KERNEL	__pgprot(_PAGE_BASE | _PAGE_KERNEL_RW)
+#define PAGE_KERNEL_NC	__pgprot(_PAGE_BASE_NC | _PAGE_KERNEL_RW | \
+				 _PAGE_NO_CACHE)
+#define PAGE_KERNEL_NCG	__pgprot(_PAGE_BASE_NC | _PAGE_KERNEL_RW | \
+				 _PAGE_NO_CACHE | _PAGE_GUARDED)
+#define PAGE_KERNEL_X	__pgprot(_PAGE_BASE | _PAGE_KERNEL_RW | _PAGE_EXEC)
+#define PAGE_KERNEL_RO	__pgprot(_PAGE_BASE | _PAGE_KERNEL_RO)
+#define PAGE_KERNEL_ROX	__pgprot(_PAGE_BASE | _PAGE_KERNEL_RO | _PAGE_EXEC)
+
+/* Protection used for kernel text. We want debuggers to be able to
+ * set breakpoints anywhere, so don't write-protect the kernel text
+ * on platforms where such control is possible.
+ */
+#if defined(CONFIG_KGDB) || defined(CONFIG_XMON) || defined(CONFIG_BDI_SWITCH) ||\
+	defined(CONFIG_KPROBES)
+#define PAGE_KERNEL_TEXT	PAGE_KERNEL_X
+#else
+#define PAGE_KERNEL_TEXT	PAGE_KERNEL_ROX
+#endif
+
+/* Advertise special mapping type for AGP */
+#define PAGE_AGP	(PAGE_KERNEL_NC)
+#define HAVE_PAGE_AGP
+
+/* Advertise support for _PAGE_SPECIAL (it defaults to 0 above when absent) */
+#if _PAGE_SPECIAL != 0
+#define __HAVE_ARCH_PTE_SPECIAL
+#endif
+

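To illustrate the fallback-definition pattern above: a platform's pte-*.h
defines only the PTE bits its MMU really implements, and pte-common.h
supplies zeroes for the rest so that the generic accessors still compile.
A minimal sketch (the platform header below is hypothetical, not part of
this series):

	/* hypothetical asm/pte-myplat.h: only the bits this MMU has */
	#define _PAGE_PRESENT	0x001
	#define _PAGE_RW	0x002
	#define _PAGE_DIRTY	0x004
	#define _PAGE_ACCESSED	0x008
	#define _PAGE_USER	0x010
	/* no _PAGE_SPECIAL, no _PAGE_HWWRITE, ... */

	/* the matching pgtable-*.h then does: */
	#include <asm/pte-myplat.h>
	#include <asm/pte-common.h>	/* defines _PAGE_SPECIAL as 0, etc. */

The generic pte_special() from pgtable.h then reduces to
(pte_val(pte) & 0), a constant 0 the compiler can optimize away, and
__HAVE_ARCH_PTE_SPECIAL is correctly left unset.
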
^ permalink raw reply	[flat|nested] 14+ messages in thread

* [PATCH 6/9] powerpc/mm: Rename arch/powerpc/mm/mmap.c to mmap_64.c
  2009-03-11  3:53 [PATCH 0/9] powerpc: MMU cleanup and propare for new Book3-E Benjamin Herrenschmidt
                   ` (4 preceding siblings ...)
  2009-03-11  3:53 ` [PATCH 5/9] powerpc/mm: Merge various PTE bits and accessors " Benjamin Herrenschmidt
@ 2009-03-11  3:53 ` Benjamin Herrenschmidt
  2009-03-11  3:53 ` [PATCH 7/9] powerpc/mm: Fix printk type warning in mmu_context_nohash Benjamin Herrenschmidt
                   ` (2 subsequent siblings)
  8 siblings, 0 replies; 14+ messages in thread
From: Benjamin Herrenschmidt @ 2009-03-11  3:53 UTC (permalink / raw)
  To: linuxppc-dev

This file is only useful on 64-bit, so we name it accordingly.

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
---

 arch/powerpc/mm/Makefile  |    2 
 arch/powerpc/mm/mmap.c    |  109 ----------------------------------------------
 arch/powerpc/mm/mmap_64.c |  109 ++++++++++++++++++++++++++++++++++++++++++++++
 3 files changed, 110 insertions(+), 110 deletions(-)

--- linux-work.orig/arch/powerpc/mm/Makefile	2009-03-10 16:08:34.000000000 +1100
+++ linux-work/arch/powerpc/mm/Makefile	2009-03-10 16:23:44.000000000 +1100
@@ -14,7 +14,7 @@ obj-$(CONFIG_PPC_MMU_NOHASH)	+= mmu_cont
 hash-$(CONFIG_PPC_NATIVE)	:= hash_native_64.o
 obj-$(CONFIG_PPC64)		+= hash_utils_64.o \
 				   slb_low.o slb.o stab.o \
-				   mmap.o $(hash-y)
+				   mmap_64.o $(hash-y)
 obj-$(CONFIG_PPC_STD_MMU_32)	+= ppc_mmu_32.o
 obj-$(CONFIG_PPC_STD_MMU)	+= hash_low_$(CONFIG_WORD_SIZE).o \
 				   tlb_hash$(CONFIG_WORD_SIZE).o \
Index: linux-work/arch/powerpc/mm/mmap.c
===================================================================
--- linux-work.orig/arch/powerpc/mm/mmap.c	2009-03-10 16:08:18.000000000 +1100
+++ /dev/null	1970-01-01 00:00:00.000000000 +0000
@@ -1,109 +0,0 @@
-/*
- *  flexible mmap layout support
- *
- * Copyright 2003-2004 Red Hat Inc., Durham, North Carolina.
- * All Rights Reserved.
- *
- * This program is free software; you can redistribute it and/or modify
- * it under the terms of the GNU General Public License as published by
- * the Free Software Foundation; either version 2 of the License, or
- * (at your option) any later version.
- *
- * This program is distributed in the hope that it will be useful,
- * but WITHOUT ANY WARRANTY; without even the implied warranty of
- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
- * GNU General Public License for more details.
- *
- * You should have received a copy of the GNU General Public License
- * along with this program; if not, write to the Free Software
- * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA  02111-1307  USA
- *
- *
- * Started by Ingo Molnar <mingo@elte.hu>
- */
-
-#include <linux/personality.h>
-#include <linux/mm.h>
-#include <linux/random.h>
-#include <linux/sched.h>
-
-/*
- * Top of mmap area (just below the process stack).
- *
- * Leave at least a ~128 MB hole on 32bit applications.
- *
- * On 64bit applications we randomise the stack by 1GB so we need to
- * space our mmap start address by a further 1GB, otherwise there is a
- * chance the mmap area will end up closer to the stack than our ulimit
- * requires.
- */
-#define MIN_GAP32 (128*1024*1024)
-#define MIN_GAP64 ((128 + 1024)*1024*1024UL)
-#define MIN_GAP ((is_32bit_task()) ? MIN_GAP32 : MIN_GAP64)
-#define MAX_GAP (TASK_SIZE/6*5)
-
-static inline int mmap_is_legacy(void)
-{
-	if (current->personality & ADDR_COMPAT_LAYOUT)
-		return 1;
-
-	if (current->signal->rlim[RLIMIT_STACK].rlim_cur == RLIM_INFINITY)
-		return 1;
-
-	return sysctl_legacy_va_layout;
-}
-
-/*
- * Since get_random_int() returns the same value within a 1 jiffy window,
- * we will almost always get the same randomisation for the stack and mmap
- * region. This will mean the relative distance between stack and mmap will
- * be the same.
- *
- * To avoid this we can shift the randomness by 1 bit.
- */
-static unsigned long mmap_rnd(void)
-{
-	unsigned long rnd = 0;
-
-	if (current->flags & PF_RANDOMIZE) {
-		/* 8MB for 32bit, 1GB for 64bit */
-		if (is_32bit_task())
-			rnd = (long)(get_random_int() % (1<<(22-PAGE_SHIFT)));
-		else
-			rnd = (long)(get_random_int() % (1<<(29-PAGE_SHIFT)));
-	}
-	return (rnd << PAGE_SHIFT) * 2;
-}
-
-static inline unsigned long mmap_base(void)
-{
-	unsigned long gap = current->signal->rlim[RLIMIT_STACK].rlim_cur;
-
-	if (gap < MIN_GAP)
-		gap = MIN_GAP;
-	else if (gap > MAX_GAP)
-		gap = MAX_GAP;
-
-	return PAGE_ALIGN(TASK_SIZE - gap - mmap_rnd());
-}
-
-/*
- * This function, called very early during the creation of a new
- * process VM image, sets up which VM layout function to use:
- */
-void arch_pick_mmap_layout(struct mm_struct *mm)
-{
-	/*
-	 * Fall back to the standard layout if the personality
-	 * bit is set, or if the expected stack growth is unlimited:
-	 */
-	if (mmap_is_legacy()) {
-		mm->mmap_base = TASK_UNMAPPED_BASE;
-		mm->get_unmapped_area = arch_get_unmapped_area;
-		mm->unmap_area = arch_unmap_area;
-	} else {
-		mm->mmap_base = mmap_base();
-		mm->get_unmapped_area = arch_get_unmapped_area_topdown;
-		mm->unmap_area = arch_unmap_area_topdown;
-	}
-}
Index: linux-work/arch/powerpc/mm/mmap_64.c
===================================================================
--- /dev/null	1970-01-01 00:00:00.000000000 +0000
+++ linux-work/arch/powerpc/mm/mmap_64.c	2009-03-10 16:08:34.000000000 +1100
@@ -0,0 +1,109 @@
+/*
+ *  flexible mmap layout support
+ *
+ * Copyright 2003-2004 Red Hat Inc., Durham, North Carolina.
+ * All Rights Reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, write to the Free Software
+ * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA  02111-1307  USA
+ *
+ *
+ * Started by Ingo Molnar <mingo@elte.hu>
+ */
+
+#include <linux/personality.h>
+#include <linux/mm.h>
+#include <linux/random.h>
+#include <linux/sched.h>
+
+/*
+ * Top of mmap area (just below the process stack).
+ *
+ * Leave at least a ~128 MB hole on 32bit applications.
+ *
+ * On 64bit applications we randomise the stack by 1GB so we need to
+ * space our mmap start address by a further 1GB, otherwise there is a
+ * chance the mmap area will end up closer to the stack than our ulimit
+ * requires.
+ */
+#define MIN_GAP32 (128*1024*1024)
+#define MIN_GAP64 ((128 + 1024)*1024*1024UL)
+#define MIN_GAP ((is_32bit_task()) ? MIN_GAP32 : MIN_GAP64)
+#define MAX_GAP (TASK_SIZE/6*5)
+
+static inline int mmap_is_legacy(void)
+{
+	if (current->personality & ADDR_COMPAT_LAYOUT)
+		return 1;
+
+	if (current->signal->rlim[RLIMIT_STACK].rlim_cur == RLIM_INFINITY)
+		return 1;
+
+	return sysctl_legacy_va_layout;
+}
+
+/*
+ * Since get_random_int() returns the same value within a 1 jiffy window,
+ * we will almost always get the same randomisation for the stack and mmap
+ * region. This will mean the relative distance between stack and mmap will
+ * be the same.
+ *
+ * To avoid this we can shift the randomness by 1 bit.
+ */
+static unsigned long mmap_rnd(void)
+{
+	unsigned long rnd = 0;
+
+	if (current->flags & PF_RANDOMIZE) {
+		/* 8MB for 32bit, 1GB for 64bit */
+		if (is_32bit_task())
+			rnd = (long)(get_random_int() % (1<<(22-PAGE_SHIFT)));
+		else
+			rnd = (long)(get_random_int() % (1<<(29-PAGE_SHIFT)));
+	}
+	return (rnd << PAGE_SHIFT) * 2;
+}
+
+static inline unsigned long mmap_base(void)
+{
+	unsigned long gap = current->signal->rlim[RLIMIT_STACK].rlim_cur;
+
+	if (gap < MIN_GAP)
+		gap = MIN_GAP;
+	else if (gap > MAX_GAP)
+		gap = MAX_GAP;
+
+	return PAGE_ALIGN(TASK_SIZE - gap - mmap_rnd());
+}
+
+/*
+ * This function, called very early during the creation of a new
+ * process VM image, sets up which VM layout function to use:
+ */
+void arch_pick_mmap_layout(struct mm_struct *mm)
+{
+	/*
+	 * Fall back to the standard layout if the personality
+	 * bit is set, or if the expected stack growth is unlimited:
+	 */
+	if (mmap_is_legacy()) {
+		mm->mmap_base = TASK_UNMAPPED_BASE;
+		mm->get_unmapped_area = arch_get_unmapped_area;
+		mm->unmap_area = arch_unmap_area;
+	} else {
+		mm->mmap_base = mmap_base();
+		mm->get_unmapped_area = arch_get_unmapped_area_topdown;
+		mm->unmap_area = arch_unmap_area_topdown;
+	}
+}

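As a worked example of the numbers above (editor's illustration, assuming
4K pages, ie PAGE_SHIFT = 12): for a 64-bit task with PF_RANDOMIZE set,

	rnd = get_random_int() % (1 << (29 - 12));	/* 0 .. 2^17 - 1 */
	mmap_rnd();	/* == (rnd << 12) * 2: 0 .. ~1GB, in 8K steps */

while a 32-bit task uses a modulus of (1 << (22 - 12)), giving 0 .. ~8MB.
With a typical 8MB stack rlimit the gap clamps up to MIN_GAP64 (1152MB)
for 64-bit tasks, so mmap_base() returns
PAGE_ALIGN(TASK_SIZE - 1152MB - mmap_rnd()).
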
^ permalink raw reply	[flat|nested] 14+ messages in thread

* [PATCH 7/9] powerpc/mm: Fix printk type warning in mmu_context_nohash
  2009-03-11  3:53 [PATCH 0/9] powerpc: MMU cleanup and propare for new Book3-E Benjamin Herrenschmidt
                   ` (5 preceding siblings ...)
  2009-03-11  3:53 ` [PATCH 6/9] powerpc/mm: Rename arch/powerpc/mm/mmap.c to mmap_64.c Benjamin Herrenschmidt
@ 2009-03-11  3:53 ` Benjamin Herrenschmidt
  2009-03-11  3:53 ` [PATCH 8/9] powerpc/mm: Add option for non-atomic PTE updates to ppc64 Benjamin Herrenschmidt
  2009-03-11  3:53 ` [PATCH 9/9] powerpc/mm: Introduce early_init_mmu() on 64-bit Benjamin Herrenschmidt
  8 siblings, 0 replies; 14+ messages in thread
From: Benjamin Herrenschmidt @ 2009-03-11  3:53 UTC (permalink / raw)
  To: linuxppc-dev

We need to use %zu instead of %d when printing a sizeof(), since the result has type size_t

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
---

 arch/powerpc/mm/mmu_context_nohash.c |    2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

--- linux-work.orig/arch/powerpc/mm/mmu_context_nohash.c	2009-03-11 11:51:05.000000000 +1100
+++ linux-work/arch/powerpc/mm/mmu_context_nohash.c	2009-03-11 11:52:55.000000000 +1100
@@ -380,7 +380,7 @@ void __init mmu_context_init(void)
 #endif
 
 	printk(KERN_INFO
-	       "MMU: Allocated %d bytes of context maps for %d contexts\n",
+	       "MMU: Allocated %zu bytes of context maps for %d contexts\n",
 	       2 * CTX_MAP_SIZE + (sizeof(void *) * (last_context + 1)),
 	       last_context - first_context + 1);
 

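For reference, a minimal illustration of the format rule (editor's sketch,
not from the patch; nr_contexts stands in for last_context + 1): sizeof()
yields a size_t, which is a 64-bit unsigned long on ppc64, so %d is a
type mismatch there.

	size_t n = 2 * CTX_MAP_SIZE + sizeof(void *) * nr_contexts;
	printk("%d\n",  n);	/* wrong: %d expects an int */
	printk("%zu\n", n);	/* right: %zu matches size_t on 32 and 64-bit */
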
^ permalink raw reply	[flat|nested] 14+ messages in thread

* [PATCH 8/9] powerpc/mm: Add option for non-atomic PTE updates to ppc64
  2009-03-11  3:53 [PATCH 0/9] powerpc: MMU cleanup and propare for new Book3-E Benjamin Herrenschmidt
                   ` (6 preceding siblings ...)
  2009-03-11  3:53 ` [PATCH 7/9] powerpc/mm: Fix printk type warning in mmu_context_nohash Benjamin Herrenschmidt
@ 2009-03-11  3:53 ` Benjamin Herrenschmidt
  2009-03-11  3:53 ` [PATCH 9/9] powerpc/mm: Introduce early_init_mmu() on 64-bit Benjamin Herrenschmidt
  8 siblings, 0 replies; 14+ messages in thread
From: Benjamin Herrenschmidt @ 2009-03-11  3:53 UTC (permalink / raw)
  To: linuxppc-dev

ppc32 already has this option; add it to ppc64 as a preliminary step
toward Book3E 64-bit support

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
---

 arch/powerpc/include/asm/pgtable-ppc64.h |   12 +++++++++++-
 arch/powerpc/include/asm/pte-hash64.h    |    2 ++
 2 files changed, 13 insertions(+), 1 deletion(-)

--- linux-work.orig/arch/powerpc/include/asm/pgtable-ppc64.h	2009-03-10 15:43:26.000000000 +1100
+++ linux-work/arch/powerpc/include/asm/pgtable-ppc64.h	2009-03-10 16:08:30.000000000 +1100
@@ -178,6 +178,7 @@ static inline unsigned long pte_update(s
 				       pte_t *ptep, unsigned long clr,
 				       int huge)
 {
+#ifdef PTE_ATOMIC_UPDATES
 	unsigned long old, tmp;
 
 	__asm__ __volatile__(
@@ -190,7 +191,10 @@ static inline unsigned long pte_update(s
 	: "=&r" (old), "=&r" (tmp), "=m" (*ptep)
 	: "r" (ptep), "r" (clr), "m" (*ptep), "i" (_PAGE_BUSY)
 	: "cc" );
-
+#else
+	unsigned long old = pte_val(*ptep);
+	*ptep = __pte(old & ~clr);
+#endif
 	/* huge pages use the old page table lock */
 	if (!huge)
 		assert_pte_locked(mm, addr);
@@ -278,6 +282,8 @@ static inline void __ptep_set_access_fla
 	unsigned long bits = pte_val(entry) &
 		(_PAGE_DIRTY | _PAGE_ACCESSED | _PAGE_RW |
 		 _PAGE_EXEC | _PAGE_HWEXEC);
+
+#ifdef PTE_ATOMIC_UPDATES
 	unsigned long old, tmp;
 
 	__asm__ __volatile__(
@@ -290,6 +296,10 @@ static inline void __ptep_set_access_fla
 	:"=&r" (old), "=&r" (tmp), "=m" (*ptep)
 	:"r" (bits), "r" (ptep), "m" (*ptep), "i" (_PAGE_BUSY)
 	:"cc");
+#else
+	unsigned long old = pte_val(*ptep);
+	*ptep = __pte(old | bits);
+#endif
 }
 
 #define __HAVE_ARCH_PTE_SAME
Index: linux-work/arch/powerpc/include/asm/pte-hash64.h
===================================================================
--- linux-work.orig/arch/powerpc/include/asm/pte-hash64.h	2009-03-10 15:49:50.000000000 +1100
+++ linux-work/arch/powerpc/include/asm/pte-hash64.h	2009-03-10 15:49:55.000000000 +1100
@@ -41,6 +41,8 @@
 #define _PTEIDX_SECONDARY	0x8
 #define _PTEIDX_GROUP_IX	0x7
 
+/* Hash table based platforms need atomic updates of the linux PTE */
+#define PTE_ATOMIC_UPDATES	1
 
 #ifdef CONFIG_PPC_64K_PAGES
 #include <asm/pte-hash64-64k.h>

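The reason hash-table platforms need the atomic flavour is that the
hashing code can set bits such as _PAGE_BUSY and _PAGE_HASHPTE in the
PTE from another CPU, outside the page-table lock, so a plain
read-modify-write could lose those updates. A rough C-level sketch of
the two paths (load_reserved/store_conditional are pseudo-helpers
standing in for the ldarx/stdcx. pair, not real kernel functions):

	/* PTE_ATOMIC_UPDATES: retry while the PTE is busy or the
	 * conditional store fails */
	do {
		old = load_reserved(ptep);			/* ldarx */
	} while ((old & _PAGE_BUSY) ||
		 !store_conditional(ptep, old & ~clr));		/* stdcx. */

	/* non-atomic: plain load/store, safe only when nothing else
	 * writes the PTE behind our back */
	old = pte_val(*ptep);
	*ptep = __pte(old & ~clr);
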
^ permalink raw reply	[flat|nested] 14+ messages in thread

* [PATCH 9/9] powerpc/mm: Introduce early_init_mmu() on 64-bit
  2009-03-11  3:53 [PATCH 0/9] powerpc: MMU cleanup and propare for new Book3-E Benjamin Herrenschmidt
                   ` (7 preceding siblings ...)
  2009-03-11  3:53 ` [PATCH 8/9] powerpc/mm: Add option for non-atomic PTE updates to ppc64 Benjamin Herrenschmidt
@ 2009-03-11  3:53 ` Benjamin Herrenschmidt
  8 siblings, 0 replies; 14+ messages in thread
From: Benjamin Herrenschmidt @ 2009-03-11  3:53 UTC (permalink / raw)
  To: linuxppc-dev

This moves some MMU-related init code out of setup_64.c into hash_utils_64.c
as early_init_mmu() and early_init_mmu_secondary(). This will
make it easier to plug in a new MMU type.

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
---

 arch/powerpc/include/asm/mmu-hash64.h |    2 -
 arch/powerpc/include/asm/mmu.h        |    4 +++
 arch/powerpc/kernel/setup_64.c        |   35 ++++-----------------------------
 arch/powerpc/mm/hash_utils_64.c       |   36 ++++++++++++++++++++++++++++++++--
 4 files changed, 43 insertions(+), 34 deletions(-)

--- linux-work.orig/arch/powerpc/include/asm/mmu-hash64.h	2009-03-11 13:43:26.000000000 +1100
+++ linux-work/arch/powerpc/include/asm/mmu-hash64.h	2009-03-11 13:43:29.000000000 +1100
@@ -284,8 +284,6 @@ extern void add_gpage(unsigned long addr
 			  unsigned long number_of_pages);
 extern void demote_segment_4k(struct mm_struct *mm, unsigned long addr);
 
-extern void htab_initialize(void);
-extern void htab_initialize_secondary(void);
 extern void hpte_init_native(void);
 extern void hpte_init_lpar(void);
 extern void hpte_init_iSeries(void);
Index: linux-work/arch/powerpc/include/asm/mmu.h
===================================================================
--- linux-work.orig/arch/powerpc/include/asm/mmu.h	2009-03-11 13:43:39.000000000 +1100
+++ linux-work/arch/powerpc/include/asm/mmu.h	2009-03-11 13:48:40.000000000 +1100
@@ -56,6 +56,10 @@ static inline int mmu_has_feature(unsign
 
 extern unsigned int __start___mmu_ftr_fixup, __stop___mmu_ftr_fixup;
 
+/* MMU initialization (64-bit only for now) */
+extern void early_init_mmu(void);
+extern void early_init_mmu_secondary(void);
+
 #endif /* !__ASSEMBLY__ */
 
 
Index: linux-work/arch/powerpc/kernel/setup_64.c
===================================================================
--- linux-work.orig/arch/powerpc/kernel/setup_64.c	2009-03-11 13:42:09.000000000 +1100
+++ linux-work/arch/powerpc/kernel/setup_64.c	2009-03-11 13:55:15.000000000 +1100
@@ -202,8 +202,6 @@ void __init early_setup(unsigned long dt
 
 	/* Fix up paca fields required for the boot cpu */
 	get_paca()->cpu_start = 1;
-	get_paca()->stab_real = __pa((u64)&initial_stab);
-	get_paca()->stab_addr = (u64)&initial_stab;
 
 	/* Probe the machine type */
 	probe_machine();
@@ -212,20 +210,8 @@ void __init early_setup(unsigned long dt
 
 	DBG("Found, Initializing memory management...\n");
 
-	/*
-	 * Initialize the MMU Hash table and create the linear mapping
-	 * of memory. Has to be done before stab/slb initialization as
-	 * this is currently where the page size encoding is obtained
-	 */
-	htab_initialize();
-
-	/*
-	 * Initialize stab / SLB management except on iSeries
-	 */
-	if (cpu_has_feature(CPU_FTR_SLB))
-		slb_initialize();
-	else if (!firmware_has_feature(FW_FEATURE_ISERIES))
-		stab_initialize(get_paca()->stab_real);
+	/* Initialize the hash table or TLB handling */
+	early_init_mmu();
 
 	DBG(" <- early_setup()\n");
 }
@@ -233,22 +219,11 @@ void __init early_setup(unsigned long dt
 #ifdef CONFIG_SMP
 void early_setup_secondary(void)
 {
-	struct paca_struct *lpaca = get_paca();
-
 	/* Mark interrupts enabled in PACA */
-	lpaca->soft_enabled = 0;
-
-	/* Initialize hash table for that CPU */
-	htab_initialize_secondary();
+	get_paca()->soft_enabled = 0;
 
-	/* Initialize STAB/SLB. We use a virtual address as it works
-	 * in real mode on pSeries and we want a virutal address on
-	 * iSeries anyway
-	 */
-	if (cpu_has_feature(CPU_FTR_SLB))
-		slb_initialize();
-	else
-		stab_initialize(lpaca->stab_addr);
+	/* Initialize the hash table or TLB handling */
+	early_init_mmu_secondary();
 }
 
 #endif /* CONFIG_SMP */
Index: linux-work/arch/powerpc/mm/hash_utils_64.c
===================================================================
--- linux-work.orig/arch/powerpc/mm/hash_utils_64.c	2009-03-11 13:41:36.000000000 +1100
+++ linux-work/arch/powerpc/mm/hash_utils_64.c	2009-03-11 13:54:50.000000000 +1100
@@ -590,7 +590,7 @@ static void __init htab_finish_init(void
 	make_bl(htab_call_hpte_updatepp, ppc_md.hpte_updatepp);
 }
 
-void __init htab_initialize(void)
+static void __init htab_initialize(void)
 {
 	unsigned long table;
 	unsigned long pteg_count;
@@ -732,11 +732,43 @@ void __init htab_initialize(void)
 #undef KB
 #undef MB
 
-void htab_initialize_secondary(void)
+void __init early_init_mmu(void)
 {
+	/* Setup initial STAB address in the PACA */
+	get_paca()->stab_real = __pa((u64)&initial_stab);
+	get_paca()->stab_addr = (u64)&initial_stab;
+
+	/* Initialize the MMU Hash table and create the linear mapping
+	 * of memory. Has to be done before stab/slb initialization as
+	 * this is currently where the page size encoding is obtained
+	 */
+	htab_initialize();
+
+	/* Initialize stab / SLB management except on iSeries
+	 */
+	if (cpu_has_feature(CPU_FTR_SLB))
+		slb_initialize();
+	else if (!firmware_has_feature(FW_FEATURE_ISERIES))
+		stab_initialize(get_paca()->stab_real);
+}
+
+#ifdef CONFIG_SMP
+void __init early_init_mmu_secondary(void)
+{
+	/* Initialize hash table for that CPU */
 	if (!firmware_has_feature(FW_FEATURE_LPAR))
 		mtspr(SPRN_SDR1, _SDR1);
+
+	/* Initialize STAB/SLB. We use a virtual address as it works
+	 * in real mode on pSeries and we want a virtual address on
+	 * iSeries anyway
+	 */
+	if (cpu_has_feature(CPU_FTR_SLB))
+		slb_initialize();
+	else
+		stab_initialize(get_paca()->stab_addr);
 }
+#endif /* CONFIG_SMP */
 
 /*
  * Called by asm hashtable.S for doing lazy icache flush

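With early_init_mmu() as the single entry point, a future MMU backend
only has to supply its own pair of functions. A hypothetical Book3E
variant might look like this (file and helper names are illustrative,
not part of this series):

	/* e.g. a future arch/powerpc/mm/tlb_nohash_64.c */
	void __init early_init_mmu(void)
	{
		/* no hash table: set up software TLB miss handling and
		 * the initial linear mapping instead */
		setup_initial_tlb();		/* hypothetical helper */
	}

	#ifdef CONFIG_SMP
	void __init early_init_mmu_secondary(void)
	{
		setup_cpu_tlb();		/* hypothetical per-CPU setup */
	}
	#endif
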
^ permalink raw reply	[flat|nested] 14+ messages in thread

* Re: [PATCH 1/9] powerpc/kconfig: Kill PPC_MULTIPLATFORM
  2009-03-11  3:53 ` [PATCH 1/9] powerpc/kconfig: Kill PPC_MULTIPLATFORM Benjamin Herrenschmidt
@ 2009-03-11 12:04   ` Kumar Gala
  2009-03-11 21:38     ` Benjamin Herrenschmidt
  0 siblings, 1 reply; 14+ messages in thread
From: Kumar Gala @ 2009-03-11 12:04 UTC (permalink / raw)
  To: Benjamin Herrenschmidt; +Cc: linuxppc-dev


On Mar 10, 2009, at 10:53 PM, Benjamin Herrenschmidt wrote:

>
> config PPC_NATIVE
> 	bool
> -	depends on PPC_MULTIPLATFORM
> +	depends on 6xx || PPC64
> 	help
> 	  Support for running natively on the hardware, i.e. without
> 	  a hypervisor. This option is not user-selectable but should
> 	  be selected by all platforms that need it.

Should this really just be PPC64 && BOOK3S? It doesn't look to be
used for anything beyond using hash_native_64.S

- k

^ permalink raw reply	[flat|nested] 14+ messages in thread

* Re: [PATCH 1/9] powerpc/kconfig: Kill PPC_MULTIPLATFORM
  2009-03-11 12:04   ` Kumar Gala
@ 2009-03-11 21:38     ` Benjamin Herrenschmidt
  0 siblings, 0 replies; 14+ messages in thread
From: Benjamin Herrenschmidt @ 2009-03-11 21:38 UTC (permalink / raw)
  To: Kumar Gala; +Cc: linuxppc-dev

On Wed, 2009-03-11 at 07:04 -0500, Kumar Gala wrote:
> On Mar 10, 2009, at 10:53 PM, Benjamin Herrenschmidt wrote:
> 
> >
> > config PPC_NATIVE
> > 	bool
> > -	depends on PPC_MULTIPLATFORM
> > +	depends on 6xx || PPC64
> > 	help
> > 	  Support for running natively on the hardware, i.e. without
> > 	  a hypervisor. This option is not user-selectable but should
> > 	  be selected by all platforms that need it.
> 
> Should this really just be PPC64 && BOOK3S? It doesn't look to be
> used for anything beyond using hash_native_64.S

Maybe... In this case I didn't want to change it from what it was but
you're right, it probably is hash64 only.

Ben

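Kumar's suggested dependency would amount to something like the
following sketch (not applied in this series):

	config PPC_NATIVE
		bool
		depends on PPC_BOOK3S && PPC64
		help
		  Support for running natively on the hardware, i.e. without
		  a hypervisor.
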
^ permalink raw reply	[flat|nested] 14+ messages in thread

* Re: [PATCH 4/9] powerpc/mm: Tweak PTE bit combination definitions (v2)
  2009-03-11  3:53 ` [PATCH 4/9] powerpc/mm: Tweak PTE bit combination definitions (v2) Benjamin Herrenschmidt
@ 2009-03-13  3:44   ` Michael Ellerman
  0 siblings, 0 replies; 14+ messages in thread
From: Michael Ellerman @ 2009-03-13  3:44 UTC (permalink / raw)
  To: Benjamin Herrenschmidt; +Cc: linuxppc-dev

On Wed, 2009-03-11 at 14:53 +1100, Benjamin Herrenschmidt wrote:
> This patch tweaks the way some PTE bit combinations are defined, in such a
> way that the 32 and 64-bit variants become almost identical and that will
> make it easier to bring in a new common pte-* file for the new variant
> of the Book3-E support.
> 
<snip>
> +/* Permission masks used to generate the __P and __S table,
> + *
> + * Note:__pgprot is defined in arch/powerpc/include/asm/page.h
> + */
> +#define PAGE_NONE	__pgprot(_PAGE_BASE)
> +#define PAGE_SHARED	__pgprot(_PAGE_BASE | _PAGE_USER | _PAGE_RW)
> +#define PAGE_SHARED_X	__pgprot(_PAGE_BASE | _PAGE_USER | _PAGE_RW | _PAGE_EXEC)
>  #define PAGE_COPY	__pgprot(_PAGE_BASE | _PAGE_USER)
>  #define PAGE_COPY_X	__pgprot(_PAGE_BASE | _PAGE_USER | _PAGE_EXEC)
>  #define PAGE_READONLY	__pgprot(_PAGE_BASE | _PAGE_USER)
>  #define PAGE_READONLY_X	__pgprot(_PAGE_BASE | _PAGE_USER | _PAGE_EXEC)
> -#define PAGE_KERNEL	__pgprot(_PAGE_BASE | _PAGE_WRENABLE)
> -#define PAGE_KERNEL_CI	__pgprot(_PAGE_PRESENT | _PAGE_ACCESSED | \
> -			       _PAGE_WRENABLE | _PAGE_NO_CACHE | _PAGE_GUARDED)
> -#define PAGE_KERNEL_EXEC __pgprot(_PAGE_BASE | _PAGE_WRENABLE | _PAGE_EXEC)

Generic code needs PAGE_KERNEL_EXEC:

mm/vmalloc.c:

#ifndef PAGE_KERNEL_EXEC
# define PAGE_KERNEL_EXEC PAGE_KERNEL
#endif


Not having it breaks modules because we don't map the text executable.

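(Module text is allocated in vmalloc space, so it needs an executable
kernel page protection; roughly, with the 2.6-era __vmalloc() signature:

	void *text = __vmalloc(size, GFP_KERNEL, PAGE_KERNEL_EXEC);

so with the fallback above, PAGE_KERNEL_EXEC silently degrades to the
non-executable PAGE_KERNEL.)
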
cheers

-- 
Michael Ellerman
OzLabs, IBM Australia Development Lab

wwweb: http://michael.ellerman.id.au
phone: +61 2 6212 1183 (tie line 70 21183)

We do not inherit the earth from our ancestors,
we borrow it from our children. - S.M.A.R.T Person

^ permalink raw reply	[flat|nested] 14+ messages in thread

* Re: [PATCH 5/9] powerpc/mm: Merge various PTE bits and accessors definitions (v2)
  2009-03-11  3:53 ` [PATCH 5/9] powerpc/mm: Merge various PTE bits and accessors " Benjamin Herrenschmidt
@ 2009-03-19 16:16   ` Kumar Gala
  0 siblings, 0 replies; 14+ messages in thread
From: Kumar Gala @ 2009-03-19 16:16 UTC (permalink / raw)
  To: Benjamin Herrenschmidt; +Cc: linuxppc-dev


On Mar 10, 2009, at 10:53 PM, Benjamin Herrenschmidt wrote:

> +
> +/* Conversion functions: convert a page and protection to a page entry,
> + * and a page entry and page directory to the page they refer to.
> + *
> + * Even if PTEs can be unsigned long long, a PFN is always an unsigned
> + * long for now.
> + */
> +static inline pte_t pfn_pte(unsigned long pfn, pgprot_t pgprot) {
> +	return __pte((pfn << PTE_RPN_SHIFT) | pgprot_val(pgprot)); }

This needs to be:

	return __pte(((pte_basic_t)(pfn) << PTE_RPN_SHIFT) |
		     pgprot_val(pgprot)); }

Otherwise on ppc32 w/36-bit physical we lose the upper bits of the pfn.

- k

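A worked example of the truncation (assuming PTE_RPN_SHIFT == 12): on
ppc32 with CONFIG_PTE_64BIT, unsigned long is 32 bits but pte_basic_t
is 64 bits, so for a physical page at 0x800000000:

	unsigned long pfn = 0x800000;	/* 0x800000000 >> 12 */
	pfn << 12			/* 32-bit shift: wraps to 0 */
	(pte_basic_t)pfn << 12		/* 64-bit shift: 0x800000000 */
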
^ permalink raw reply	[flat|nested] 14+ messages in thread

end of thread, other threads:[~2009-03-19 16:16 UTC | newest]

Thread overview: 14+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2009-03-11  3:53 [PATCH 0/9] powerpc: MMU cleanup and propare for new Book3-E Benjamin Herrenschmidt
2009-03-11  3:53 ` [PATCH 1/9] powerpc/kconfig: Kill PPC_MULTIPLATFORM Benjamin Herrenschmidt
2009-03-11 12:04   ` Kumar Gala
2009-03-11 21:38     ` Benjamin Herrenschmidt
2009-03-11  3:53 ` [PATCH 2/9] powerpc/mm: Split the various pgtable-* headers based on MMU type (v3) Benjamin Herrenschmidt
2009-03-11  3:53 ` [PATCH 3/9] powerpc/mm: Unify PTE_RPN_SHIFT and _PAGE_CHG_MASK definitions Benjamin Herrenschmidt
2009-03-11  3:53 ` [PATCH 4/9] powerpc/mm: Tweak PTE bit combination definitions (v2) Benjamin Herrenschmidt
2009-03-13  3:44   ` Michael Ellerman
2009-03-11  3:53 ` [PATCH 5/9] powerpc/mm: Merge various PTE bits and accessors " Benjamin Herrenschmidt
2009-03-19 16:16   ` Kumar Gala
2009-03-11  3:53 ` [PATCH 6/9] powerpc/mm: Rename arch/powerpc/mm/mmap.c to mmap_64.c Benjamin Herrenschmidt
2009-03-11  3:53 ` [PATCH 7/9] powerpc/mm: Fix printk type warning in mmu_context_nohash Benjamin Herrenschmidt
2009-03-11  3:53 ` [PATCH 8/9] powerpc/mm: Add option for non-atomic PTE updates to ppc64 Benjamin Herrenschmidt
2009-03-11  3:53 ` [PATCH 9/9] powerpc/mm: Introduce early_init_mmu() on 64-bit Benjamin Herrenschmidt
