* [PATCH 00/63] 64bit PowerPC little endian support
@ 2013-08-06 16:01 Anton Blanchard
  2013-08-06 16:01 ` [PATCH 01/63] powerpc: Align p_toc Anton Blanchard
                   ` (62 more replies)
  0 siblings, 63 replies; 76+ messages in thread
From: Anton Blanchard @ 2013-08-06 16:01 UTC (permalink / raw)
  To: Benjamin Herrenschmidt, Paul Mackerras; +Cc: linuxppc-dev

This patchset adds support for building a 64bit PowerPC little
endian kernel.

binutils and gcc support for powerpcle/powerpc64le is already
upstream. For gcc you can use gcc tip, or, for the less adventurous,
the gcc 4.8 branch works too.

QEMU patches to boot a little endian kernel will be posted within
the next day.

Anton
--

Alistair Popple (4):
  powerpc: More little endian fixes for prom.c
  powerpc: More little endian fixes for setup-common.c
  powerpc: Little endian fixes for legacy_serial.c
  powerpc: Make NUMA device node code endian safe

Anton Blanchard (53):
  powerpc: Align p_toc
  powerpc: handle unaligned ldbrx/stdbrx
  powerpc: Wrap MSR macros with parentheses
  powerpc: Remove SAVE_VSRU and REST_VSRU macros
  powerpc: Simplify logic in include/uapi/asm/elf.h
  powerpc/pseries: Simplify H_GET_TERM_CHAR
  powerpc: Fix a number of sparse warnings
  powerpc/pci: Don't use bitfield for force_32bit_msi
  powerpc: Stop using non-architected shared_proc field in lppaca
  powerpc: Make RTAS device tree accesses endian safe
  powerpc: Make cache info device tree accesses endian safe
  powerpc: Make RTAS calls endian safe
  powerpc: Make logical to real cpu mapping code endian safe
  powerpc: Add some endian annotations to time and xics code
  powerpc: Fix some endian issues in xics code
  powerpc: of_parse_dma_window should take a __be32 *dma_window
  powerpc: Make device tree accesses in cache info code endian safe
  powerpc: Make device tree accesses in HVC VIO console endian safe
  powerpc: Make device tree accesses in VIO subsystem endian safe
  powerpc: Make OF PCI device tree accesses endian safe
  powerpc: Make PCI device node device tree accesses endian safe
  powerpc: Add endian annotations to lppaca, slb_shadow and dtl_entry
  powerpc: Fix little endian lppaca, slb_shadow and dtl_entry
  powerpc: Emulate instructions in little endian mode
  powerpc: Little endian SMP IPI demux
  powerpc/pseries: Fix endian issues in H_GET_TERM_CHAR/H_PUT_TERM_CHAR
  powerpc: Fix little endian coredumps
  powerpc: Make rwlocks endian safe
  powerpc: Fix endian issues in VMX copy loops
  powerpc: Book 3S MMU little endian support
  powerpc: Fix offset of FPRs in VSX registers in little endian builds
  powerpc: PTRACE_PEEKUSR/PTRACE_POKEUSER of FPR registers in little
    endian builds
  powerpc: Little endian builds double word swap VSX state during
    context save/restore
  powerpc: Add little endian support for word-at-a-time functions
  powerpc: Set MSR_LE bit on little endian builds
  powerpc: Reset MSR_LE on signal entry
  powerpc: Add endian safe trampoline to pseries secondary thread entry
  pseries: Add H_SET_MODE to change exception endianness
  powerpc/kvm/book3s_hv: Add little endian guest support
  powerpc: Remove open coded byte swap macro in alignment handler
  powerpc: Remove hard coded FP offsets in alignment handler
  powerpc: Alignment handler shouldn't access VSX registers with TS_FPR
  powerpc: Add little endian support to alignment handler
  powerpc: Handle VSX alignment faults in little endian mode
  ibmveth: Fix little endian issues
  ibmvscsi: Fix little endian issues
  [SCSI] lpfc: Don't force CONFIG_GENERIC_CSUM on
  powerpc: Use generic checksum code in little endian
  powerpc: Use generic memcpy code in little endian
  powerpc: uname should return ppc64le/ppcle on little endian builds
  powerpc: Don't set HAVE_EFFICIENT_UNALIGNED_ACCESS on little endian
    builds
  powerpc: Work around little endian gcc bug
  powerpc: Add pseries_le_defconfig

Benjamin Herrenschmidt (2):
  powerpc: Make prom_init.c endian safe
  powerpc: endian safe trampoline

Ian Munsie (4):
  powerpc: Make prom.c device tree accesses endian safe
  powerpc: Support endian agnostic MMIO
  powerpc: Include the appropriate endianness header
  powerpc: Add ability to build little endian kernels

 arch/powerpc/Kconfig                            |   5 +-
 arch/powerpc/Makefile                           |  37 ++-
 arch/powerpc/boot/Makefile                      |   3 +-
 arch/powerpc/configs/pseries_le_defconfig       | 347 ++++++++++++++++++++++++
 arch/powerpc/include/asm/asm-compat.h           |   9 +
 arch/powerpc/include/asm/checksum.h             |   5 +
 arch/powerpc/include/asm/hvcall.h               |   2 +
 arch/powerpc/include/asm/io.h                   |  67 +++--
 arch/powerpc/include/asm/kvm_host.h             |   1 +
 arch/powerpc/include/asm/lppaca.h               |  68 +++--
 arch/powerpc/include/asm/mmu-hash64.h           |   4 +-
 arch/powerpc/include/asm/paca.h                 |   5 +
 arch/powerpc/include/asm/pci-bridge.h           |   2 +-
 arch/powerpc/include/asm/ppc-opcode.h           |   3 +
 arch/powerpc/include/asm/ppc_asm.h              |  68 +++--
 arch/powerpc/include/asm/processor.h            |  12 +-
 arch/powerpc/include/asm/prom.h                 |   5 +-
 arch/powerpc/include/asm/reg.h                  |  13 +-
 arch/powerpc/include/asm/reg_booke.h            |   8 +-
 arch/powerpc/include/asm/rtas.h                 |   8 +-
 arch/powerpc/include/asm/spinlock.h             |   6 +-
 arch/powerpc/include/asm/string.h               |   4 +
 arch/powerpc/include/asm/word-at-a-time.h       |  71 +++++
 arch/powerpc/include/uapi/asm/byteorder.h       |   4 +
 arch/powerpc/include/uapi/asm/elf.h             |  21 +-
 arch/powerpc/kernel/align.c                     | 172 ++++++++----
 arch/powerpc/kernel/asm-offsets.c               |   1 +
 arch/powerpc/kernel/cacheinfo.c                 |  12 +-
 arch/powerpc/kernel/entry_64.S                  |  47 ++--
 arch/powerpc/kernel/head_64.S                   |   4 +
 arch/powerpc/kernel/legacy_serial.c             |   8 +-
 arch/powerpc/kernel/lparcfg.c                   |  14 +-
 arch/powerpc/kernel/paca.c                      |  10 +-
 arch/powerpc/kernel/pci-common.c                |  10 +-
 arch/powerpc/kernel/pci_64.c                    |   4 +-
 arch/powerpc/kernel/pci_dn.c                    |  20 +-
 arch/powerpc/kernel/pci_of_scan.c               |  23 +-
 arch/powerpc/kernel/ppc_ksyms.c                 |   4 +
 arch/powerpc/kernel/prom.c                      |  64 ++---
 arch/powerpc/kernel/prom_init.c                 | 253 +++++++++--------
 arch/powerpc/kernel/prom_parse.c                |  17 +-
 arch/powerpc/kernel/ptrace.c                    |   8 +-
 arch/powerpc/kernel/rtas.c                      |  66 ++---
 arch/powerpc/kernel/setup-common.c              |  13 +-
 arch/powerpc/kernel/setup_64.c                  |  14 +-
 arch/powerpc/kernel/signal_32.c                 |   3 +-
 arch/powerpc/kernel/signal_64.c                 |  11 +-
 arch/powerpc/kernel/smp.c                       |  21 +-
 arch/powerpc/kernel/time.c                      |  18 +-
 arch/powerpc/kernel/traps.c                     |   2 +-
 arch/powerpc/kernel/vdso32/vdso32.lds.S         |   4 +
 arch/powerpc/kernel/vdso64/vdso64.lds.S         |   4 +
 arch/powerpc/kernel/vio.c                       |  33 ++-
 arch/powerpc/kvm/book3s_64_mmu_hv.c             |   2 +-
 arch/powerpc/kvm/book3s_64_slb.S                |   4 +
 arch/powerpc/kvm/book3s_hv.c                    |  46 +++-
 arch/powerpc/kvm/book3s_hv_rm_mmu.c             |   4 +
 arch/powerpc/kvm/book3s_hv_rmhandlers.S         |  27 +-
 arch/powerpc/lib/Makefile                       |  18 +-
 arch/powerpc/lib/copyuser_power7.S              |  54 ++--
 arch/powerpc/lib/locks.c                        |   4 +-
 arch/powerpc/lib/memcpy_power7.S                |  55 ++--
 arch/powerpc/mm/fault.c                         |   6 +-
 arch/powerpc/mm/hash_native_64.c                |  46 ++--
 arch/powerpc/mm/hash_utils_64.c                 |  40 ++-
 arch/powerpc/mm/numa.c                          | 102 +++----
 arch/powerpc/mm/slb.c                           |   9 +-
 arch/powerpc/mm/subpage-prot.c                  |   4 +-
 arch/powerpc/perf/core-book3s.c                 |   2 +-
 arch/powerpc/platforms/Kconfig.cputype          |  11 +
 arch/powerpc/platforms/cell/iommu.c             |   2 +-
 arch/powerpc/platforms/powernv/opal.c           |   2 +-
 arch/powerpc/platforms/pseries/dtl.c            |   2 +-
 arch/powerpc/platforms/pseries/hotplug-cpu.c    |   4 +-
 arch/powerpc/platforms/pseries/hvconsole.c      |  17 +-
 arch/powerpc/platforms/pseries/iommu.c          |   8 +-
 arch/powerpc/platforms/pseries/lpar.c           |  21 +-
 arch/powerpc/platforms/pseries/plpar_wrappers.h |  50 ++--
 arch/powerpc/platforms/pseries/processor_idle.c |   8 +-
 arch/powerpc/platforms/pseries/pseries_energy.c |   4 +-
 arch/powerpc/platforms/pseries/setup.c          |  46 +++-
 arch/powerpc/sysdev/xics/icp-native.c           |   2 +-
 arch/powerpc/sysdev/xics/xics-common.c          |  10 +-
 drivers/net/ethernet/ibm/ibmveth.c              |   4 +-
 drivers/net/ethernet/ibm/ibmveth.h              |  19 +-
 drivers/scsi/Kconfig                            |   1 -
 drivers/scsi/ibmvscsi/ibmvscsi.c                | 153 ++++++-----
 drivers/scsi/ibmvscsi/viosrp.h                  |  46 ++--
 drivers/tty/hvc/hvc_vio.c                       |   4 +-
 89 files changed, 1697 insertions(+), 778 deletions(-)
 create mode 100644 arch/powerpc/configs/pseries_le_defconfig

-- 
1.8.1.2


* [PATCH 01/63] powerpc: Align p_toc
  2013-08-06 16:01 [PATCH 00/63] 64bit PowerPC little endian support Anton Blanchard
@ 2013-08-06 16:01 ` Anton Blanchard
  2013-08-06 16:01 ` [PATCH 02/63] powerpc: handle unaligned ldbrx/stdbrx Anton Blanchard
                   ` (61 subsequent siblings)
  62 siblings, 0 replies; 76+ messages in thread
From: Anton Blanchard @ 2013-08-06 16:01 UTC (permalink / raw)
  To: Benjamin Herrenschmidt, Paul Mackerras; +Cc: linuxppc-dev

p_toc is an 8 byte relative offset to the TOC that we place in the
text section. This means it is only 4 byte aligned, whereas it should
be 8 byte aligned. Add an explicit alignment directive.

Signed-off-by: Anton Blanchard <anton@samba.org>
---
 arch/powerpc/kernel/head_64.S | 1 +
 1 file changed, 1 insertion(+)

diff --git a/arch/powerpc/kernel/head_64.S b/arch/powerpc/kernel/head_64.S
index b61363d..3d11d80 100644
--- a/arch/powerpc/kernel/head_64.S
+++ b/arch/powerpc/kernel/head_64.S
@@ -703,6 +703,7 @@ _GLOBAL(relative_toc)
 	mtlr	r0
 	blr
 
+.balign 8
 p_toc:	.llong	__toc_start + 0x8000 - 0b
 
 /*
-- 
1.8.1.2


* [PATCH 02/63] powerpc: handle unaligned ldbrx/stdbrx
  2013-08-06 16:01 [PATCH 00/63] 64bit PowerPC little endian support Anton Blanchard
  2013-08-06 16:01 ` [PATCH 01/63] powerpc: Align p_toc Anton Blanchard
@ 2013-08-06 16:01 ` Anton Blanchard
  2013-08-06 16:01 ` [PATCH 03/63] powerpc: Wrap MSR macros with parentheses Anton Blanchard
                   ` (60 subsequent siblings)
  62 siblings, 0 replies; 76+ messages in thread
From: Anton Blanchard @ 2013-08-06 16:01 UTC (permalink / raw)
  To: Benjamin Herrenschmidt, Paul Mackerras; +Cc: linuxppc-dev

Normally when we haven't implemented an alignment handler for
a load or store instruction the process will be terminated.

The alignment handler uses the DSISR (or a pseudo one) to locate
the right handler. Unfortunately ldbrx and stdbrx overlap lfs and
stfs so we incorrectly think ldbrx is an lfs and stdbrx is an
stfs.

This bug is particularly nasty: instead of terminating the
process we apply an incorrect fixup and continue on.

With more and more overlapping instructions we should stop
creating a pseudo DSISR and index using the instruction directly,
but for now add a special case to catch ldbrx/stdbrx.
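
A minimal sketch of the decode this relies on (the helper is
illustrative and not part of the patch): in an X-form instruction the
extended opcode sits in bits 21-30 with Rc in bit 31, so shifting the
32-bit instruction word right by one and masking with 0x3ff recovers
it.

	/* Illustrative helper: extract the X-form extended opcode. The
	 * shift drops the Rc bit and the 0x3ff mask keeps the 10-bit
	 * extended opcode field.
	 */
	static inline unsigned int xform_xop(unsigned int instruction)
	{
		return (instruction >> 1) & 0x3ff; /* 532 = ldbrx, 660 = stdbrx */
	}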

Signed-off-by: Anton Blanchard <anton@samba.org>
Cc: <stable@vger.kernel.org>
---
 arch/powerpc/kernel/align.c | 10 ++++++++++
 1 file changed, 10 insertions(+)

diff --git a/arch/powerpc/kernel/align.c b/arch/powerpc/kernel/align.c
index ee5b690..52e5758 100644
--- a/arch/powerpc/kernel/align.c
+++ b/arch/powerpc/kernel/align.c
@@ -764,6 +764,16 @@ int fix_alignment(struct pt_regs *regs)
 	nb = aligninfo[instr].len;
 	flags = aligninfo[instr].flags;
 
+	/* ldbrx/stdbrx overlap lfs/stfs in the DSISR unfortunately */
+	if (IS_XFORM(instruction) && ((instruction >> 1) & 0x3ff) == 532) {
+		nb = 8;
+		flags = LD+SW;
+	} else if (IS_XFORM(instruction) &&
+		   ((instruction >> 1) & 0x3ff) == 660) {
+		nb = 8;
+		flags = ST+SW;
+	}
+
 	/* Byteswap little endian loads and stores */
 	swiz = 0;
 	if (regs->msr & MSR_LE) {
-- 
1.8.1.2


* [PATCH 03/63] powerpc: Wrap MSR macros with parentheses
  2013-08-06 16:01 [PATCH 00/63] 64bit PowerPC little endian support Anton Blanchard
  2013-08-06 16:01 ` [PATCH 01/63] powerpc: Align p_toc Anton Blanchard
  2013-08-06 16:01 ` [PATCH 02/63] powerpc: handle unaligned ldbrx/stdbrx Anton Blanchard
@ 2013-08-06 16:01 ` Anton Blanchard
  2013-08-06 16:01 ` [PATCH 04/63] powerpc: Remove SAVE_VSRU and REST_VSRU macros Anton Blanchard
                   ` (59 subsequent siblings)
  62 siblings, 0 replies; 76+ messages in thread
From: Anton Blanchard @ 2013-08-06 16:01 UTC (permalink / raw)
  To: Benjamin Herrenschmidt, Paul Mackerras; +Cc: linuxppc-dev

Not having parentheses around a macro's expansion is asking for trouble.
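
As a minimal sketch of the failure mode, using illustrative bit values
rather than the real MSR definitions:

	#define MSR_ME		0x1000UL		/* illustrative values only */
	#define MSR_64BIT	0x8000000000000000UL

	#define BAD_MSR_KERNEL	MSR_ME | MSR_64BIT	/* unparenthesised */
	#define MSR_KERNEL	(MSR_ME | MSR_64BIT)	/* parenthesised */

	/* ~BAD_MSR_KERNEL expands to ~MSR_ME | MSR_64BIT, so the complement
	 * applies to MSR_ME only and the result still has MSR_64BIT set.
	 * ~MSR_KERNEL clears both bits as intended.
	 */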

Signed-off-by: Anton Blanchard <anton@samba.org>
---
 arch/powerpc/include/asm/reg.h       | 8 ++++----
 arch/powerpc/include/asm/reg_booke.h | 8 ++++----
 2 files changed, 8 insertions(+), 8 deletions(-)

diff --git a/arch/powerpc/include/asm/reg.h b/arch/powerpc/include/asm/reg.h
index a6840e4..a312e0c 100644
--- a/arch/powerpc/include/asm/reg.h
+++ b/arch/powerpc/include/asm/reg.h
@@ -115,10 +115,10 @@
 #define MSR_64BIT	MSR_SF
 
 /* Server variant */
-#define MSR_		MSR_ME | MSR_RI | MSR_IR | MSR_DR | MSR_ISF |MSR_HV
-#define MSR_KERNEL	MSR_ | MSR_64BIT
-#define MSR_USER32	MSR_ | MSR_PR | MSR_EE
-#define MSR_USER64	MSR_USER32 | MSR_64BIT
+#define MSR_		(MSR_ME | MSR_RI | MSR_IR | MSR_DR | MSR_ISF |MSR_HV)
+#define MSR_KERNEL	(MSR_ | MSR_64BIT)
+#define MSR_USER32	(MSR_ | MSR_PR | MSR_EE)
+#define MSR_USER64	(MSR_USER32 | MSR_64BIT)
 #elif defined(CONFIG_PPC_BOOK3S_32) || defined(CONFIG_8xx)
 /* Default MSR for kernel mode. */
 #define MSR_KERNEL	(MSR_ME|MSR_RI|MSR_IR|MSR_DR)
diff --git a/arch/powerpc/include/asm/reg_booke.h b/arch/powerpc/include/asm/reg_booke.h
index b417de3..ed8f836 100644
--- a/arch/powerpc/include/asm/reg_booke.h
+++ b/arch/powerpc/include/asm/reg_booke.h
@@ -29,10 +29,10 @@
 #if defined(CONFIG_PPC_BOOK3E_64)
 #define MSR_64BIT	MSR_CM
 
-#define MSR_		MSR_ME | MSR_CE
-#define MSR_KERNEL	MSR_ | MSR_64BIT
-#define MSR_USER32	MSR_ | MSR_PR | MSR_EE
-#define MSR_USER64	MSR_USER32 | MSR_64BIT
+#define MSR_		(MSR_ME | MSR_CE)
+#define MSR_KERNEL	(MSR_ | MSR_64BIT)
+#define MSR_USER32	(MSR_ | MSR_PR | MSR_EE)
+#define MSR_USER64	(MSR_USER32 | MSR_64BIT)
 #elif defined (CONFIG_40x)
 #define MSR_KERNEL	(MSR_ME|MSR_RI|MSR_IR|MSR_DR|MSR_CE)
 #define MSR_USER	(MSR_KERNEL|MSR_PR|MSR_EE)
-- 
1.8.1.2


* [PATCH 04/63] powerpc: Remove SAVE_VSRU and REST_VSRU macros
  2013-08-06 16:01 [PATCH 00/63] 64bit PowerPC little endian support Anton Blanchard
                   ` (2 preceding siblings ...)
  2013-08-06 16:01 ` [PATCH 03/63] powerpc: Wrap MSR macros with parentheses Anton Blanchard
@ 2013-08-06 16:01 ` Anton Blanchard
  2013-08-06 16:01 ` [PATCH 05/63] powerpc: Simplify logic in include/uapi/asm/elf.h Anton Blanchard
                   ` (58 subsequent siblings)
  62 siblings, 0 replies; 76+ messages in thread
From: Anton Blanchard @ 2013-08-06 16:01 UTC (permalink / raw)
  To: Benjamin Herrenschmidt, Paul Mackerras; +Cc: linuxppc-dev

We always use VMX loads and stores to manage the high 32
VSRs. Remove these unused macros.

Signed-off-by: Anton Blanchard <anton@samba.org>
---
 arch/powerpc/include/asm/ppc_asm.h | 13 -------------
 1 file changed, 13 deletions(-)

diff --git a/arch/powerpc/include/asm/ppc_asm.h b/arch/powerpc/include/asm/ppc_asm.h
index 2f1b6c5..b5c85f1 100644
--- a/arch/powerpc/include/asm/ppc_asm.h
+++ b/arch/powerpc/include/asm/ppc_asm.h
@@ -219,19 +219,6 @@ END_FW_FTR_SECTION_IFSET(FW_FEATURE_SPLPAR)
 #define REST_8VSRS(n,b,base)	REST_4VSRS(n,b,base); REST_4VSRS(n+4,b,base)
 #define REST_16VSRS(n,b,base)	REST_8VSRS(n,b,base); REST_8VSRS(n+8,b,base)
 #define REST_32VSRS(n,b,base)	REST_16VSRS(n,b,base); REST_16VSRS(n+16,b,base)
-/* Save the upper 32 VSRs (32-63) in the thread VSX region (0-31) */
-#define SAVE_VSRU(n,b,base)	li b,THREAD_VR0+(16*(n));  STXVD2X(n+32,R##base,R##b)
-#define SAVE_2VSRSU(n,b,base)	SAVE_VSRU(n,b,base); SAVE_VSRU(n+1,b,base)
-#define SAVE_4VSRSU(n,b,base)	SAVE_2VSRSU(n,b,base); SAVE_2VSRSU(n+2,b,base)
-#define SAVE_8VSRSU(n,b,base)	SAVE_4VSRSU(n,b,base); SAVE_4VSRSU(n+4,b,base)
-#define SAVE_16VSRSU(n,b,base)	SAVE_8VSRSU(n,b,base); SAVE_8VSRSU(n+8,b,base)
-#define SAVE_32VSRSU(n,b,base)	SAVE_16VSRSU(n,b,base); SAVE_16VSRSU(n+16,b,base)
-#define REST_VSRU(n,b,base)	li b,THREAD_VR0+(16*(n)); LXVD2X(n+32,R##base,R##b)
-#define REST_2VSRSU(n,b,base)	REST_VSRU(n,b,base); REST_VSRU(n+1,b,base)
-#define REST_4VSRSU(n,b,base)	REST_2VSRSU(n,b,base); REST_2VSRSU(n+2,b,base)
-#define REST_8VSRSU(n,b,base)	REST_4VSRSU(n,b,base); REST_4VSRSU(n+4,b,base)
-#define REST_16VSRSU(n,b,base)	REST_8VSRSU(n,b,base); REST_8VSRSU(n+8,b,base)
-#define REST_32VSRSU(n,b,base)	REST_16VSRSU(n,b,base); REST_16VSRSU(n+16,b,base)
 
 /*
  * b = base register for addressing, o = base offset from register of 1st EVR
-- 
1.8.1.2


* [PATCH 05/63] powerpc: Simplify logic in include/uapi/asm/elf.h
  2013-08-06 16:01 [PATCH 00/63] 64bit PowerPC little endian support Anton Blanchard
                   ` (3 preceding siblings ...)
  2013-08-06 16:01 ` [PATCH 04/63] powerpc: Remove SAVE_VSRU and REST_VSRU macros Anton Blanchard
@ 2013-08-06 16:01 ` Anton Blanchard
  2013-08-06 16:01 ` [PATCH 06/63] powerpc/pseries: Simplify H_GET_TERM_CHAR Anton Blanchard
                   ` (57 subsequent siblings)
  62 siblings, 0 replies; 76+ messages in thread
From: Anton Blanchard @ 2013-08-06 16:01 UTC (permalink / raw)
  To: Benjamin Herrenschmidt, Paul Mackerras; +Cc: linuxppc-dev

Simplify things by putting all the 32bit and 64bit defines
together instead of in two spots.

Signed-off-by: Anton Blanchard <anton@samba.org>
---
 arch/powerpc/include/uapi/asm/elf.h | 19 +++++++------------
 1 file changed, 7 insertions(+), 12 deletions(-)

diff --git a/arch/powerpc/include/uapi/asm/elf.h b/arch/powerpc/include/uapi/asm/elf.h
index 05b8d56..89fa042 100644
--- a/arch/powerpc/include/uapi/asm/elf.h
+++ b/arch/powerpc/include/uapi/asm/elf.h
@@ -107,6 +107,11 @@ typedef elf_gregset_t32 compat_elf_gregset_t;
 # define ELF_NVRREG	34	/* includes vscr & vrsave in split vectors */
 # define ELF_NVSRHALFREG 32	/* Half the vsx registers */
 # define ELF_GREG_TYPE	elf_greg_t64
+# define ELF_ARCH	EM_PPC64
+# define ELF_CLASS	ELFCLASS64
+# define ELF_DATA	ELFDATA2MSB
+typedef elf_greg_t64 elf_greg_t;
+typedef elf_gregset_t64 elf_gregset_t;
 #else
 # define ELF_NEVRREG	34	/* includes acc (as 2) */
 # define ELF_NVRREG	33	/* includes vscr */
@@ -114,20 +119,10 @@ typedef elf_gregset_t32 compat_elf_gregset_t;
 # define ELF_ARCH	EM_PPC
 # define ELF_CLASS	ELFCLASS32
 # define ELF_DATA	ELFDATA2MSB
+typedef elf_greg_t32 elf_greg_t;
+typedef elf_gregset_t32 elf_gregset_t;
 #endif /* __powerpc64__ */
 
-#ifndef ELF_ARCH
-# define ELF_ARCH	EM_PPC64
-# define ELF_CLASS	ELFCLASS64
-# define ELF_DATA	ELFDATA2MSB
-  typedef elf_greg_t64 elf_greg_t;
-  typedef elf_gregset_t64 elf_gregset_t;
-#else
-  /* Assumption: ELF_ARCH == EM_PPC and ELF_CLASS == ELFCLASS32 */
-  typedef elf_greg_t32 elf_greg_t;
-  typedef elf_gregset_t32 elf_gregset_t;
-#endif /* ELF_ARCH */
-
 /* Floating point registers */
 typedef double elf_fpreg_t;
 typedef elf_fpreg_t elf_fpregset_t[ELF_NFPREG];
-- 
1.8.1.2


* [PATCH 06/63] powerpc/pseries: Simplify H_GET_TERM_CHAR
  2013-08-06 16:01 [PATCH 00/63] 64bit PowerPC little endian support Anton Blanchard
                   ` (4 preceding siblings ...)
  2013-08-06 16:01 ` [PATCH 05/63] powerpc: Simplify logic in include/uapi/asm/elf.h Anton Blanchard
@ 2013-08-06 16:01 ` Anton Blanchard
  2013-08-06 16:01 ` [PATCH 07/63] powerpc: Fix a number of sparse warnings Anton Blanchard
                   ` (56 subsequent siblings)
  62 siblings, 0 replies; 76+ messages in thread
From: Anton Blanchard @ 2013-08-06 16:01 UTC (permalink / raw)
  To: Benjamin Herrenschmidt, Paul Mackerras; +Cc: linuxppc-dev

plpar_get_term_char is only used once and just adds a layer
of complexity to H_GET_TERM_CHAR. plpar_put_term_char isn't
used at all so we can remove it.

Signed-off-by: Anton Blanchard <anton@samba.org>
---
 arch/powerpc/platforms/pseries/hvconsole.c      | 12 +++++++++---
 arch/powerpc/platforms/pseries/plpar_wrappers.h | 24 ------------------------
 2 files changed, 9 insertions(+), 27 deletions(-)

diff --git a/arch/powerpc/platforms/pseries/hvconsole.c b/arch/powerpc/platforms/pseries/hvconsole.c
index b344f94..aa0aa37 100644
--- a/arch/powerpc/platforms/pseries/hvconsole.c
+++ b/arch/powerpc/platforms/pseries/hvconsole.c
@@ -40,10 +40,16 @@
  */
 int hvc_get_chars(uint32_t vtermno, char *buf, int count)
 {
-	unsigned long got;
+	long ret;
+	unsigned long retbuf[PLPAR_HCALL_BUFSIZE];
+	unsigned long *lbuf = (unsigned long *)buf;
+
+	ret = plpar_hcall(H_GET_TERM_CHAR, retbuf, vtermno);
+	lbuf[0] = retbuf[1];
+	lbuf[1] = retbuf[2];
 
-	if (plpar_get_term_char(vtermno, &got, buf) == H_SUCCESS)
-		return got;
+	if (ret == H_SUCCESS)
+		return retbuf[0];
 
 	return 0;
 }
diff --git a/arch/powerpc/platforms/pseries/plpar_wrappers.h b/arch/powerpc/platforms/pseries/plpar_wrappers.h
index f35787b..417d0bf 100644
--- a/arch/powerpc/platforms/pseries/plpar_wrappers.h
+++ b/arch/powerpc/platforms/pseries/plpar_wrappers.h
@@ -256,30 +256,6 @@ static inline long plpar_tce_stuff(unsigned long liobn, unsigned long ioba,
 	return plpar_hcall_norets(H_STUFF_TCE, liobn, ioba, tceval, count);
 }
 
-static inline long plpar_get_term_char(unsigned long termno,
-		unsigned long *len_ret, char *buf_ret)
-{
-	long rc;
-	unsigned long retbuf[PLPAR_HCALL_BUFSIZE];
-	unsigned long *lbuf = (unsigned long *)buf_ret;	/* TODO: alignment? */
-
-	rc = plpar_hcall(H_GET_TERM_CHAR, retbuf, termno);
-
-	*len_ret = retbuf[0];
-	lbuf[0] = retbuf[1];
-	lbuf[1] = retbuf[2];
-
-	return rc;
-}
-
-static inline long plpar_put_term_char(unsigned long termno, unsigned long len,
-		const char *buffer)
-{
-	unsigned long *lbuf = (unsigned long *)buffer;	/* TODO: alignment? */
-	return plpar_hcall_norets(H_PUT_TERM_CHAR, termno, len, lbuf[0],
-			lbuf[1]);
-}
-
 /* Set various resource mode parameters */
 static inline long plpar_set_mode(unsigned long mflags, unsigned long resource,
 		unsigned long value1, unsigned long value2)
-- 
1.8.1.2


* [PATCH 07/63] powerpc: Fix a number of sparse warnings
  2013-08-06 16:01 [PATCH 00/63] 64bit PowerPC little endian support Anton Blanchard
                   ` (5 preceding siblings ...)
  2013-08-06 16:01 ` [PATCH 06/63] powerpc/pseries: Simplify H_GET_TERM_CHAR Anton Blanchard
@ 2013-08-06 16:01 ` Anton Blanchard
  2013-08-06 16:01 ` [PATCH 08/63] powerpc/pci: Don't use bitfield for force_32bit_msi Anton Blanchard
                   ` (55 subsequent siblings)
  62 siblings, 0 replies; 76+ messages in thread
From: Anton Blanchard @ 2013-08-06 16:01 UTC (permalink / raw)
  To: Benjamin Herrenschmidt, Paul Mackerras; +Cc: linuxppc-dev

Address some of the trivial sparse warnings in arch/powerpc.

Signed-off-by: Anton Blanchard <anton@samba.org>
---
 arch/powerpc/kernel/legacy_serial.c             | 2 +-
 arch/powerpc/kernel/pci-common.c                | 4 ++--
 arch/powerpc/kernel/pci_64.c                    | 2 +-
 arch/powerpc/kernel/setup_64.c                  | 4 ++--
 arch/powerpc/kernel/signal_64.c                 | 8 ++++----
 arch/powerpc/kernel/smp.c                       | 2 +-
 arch/powerpc/mm/hash_utils_64.c                 | 2 +-
 arch/powerpc/mm/subpage-prot.c                  | 4 ++--
 arch/powerpc/perf/core-book3s.c                 | 2 +-
 arch/powerpc/platforms/powernv/opal.c           | 2 +-
 arch/powerpc/platforms/pseries/lpar.c           | 2 +-
 arch/powerpc/platforms/pseries/pseries_energy.c | 4 ++--
 arch/powerpc/platforms/pseries/setup.c          | 2 +-
 13 files changed, 20 insertions(+), 20 deletions(-)

diff --git a/arch/powerpc/kernel/legacy_serial.c b/arch/powerpc/kernel/legacy_serial.c
index 0733b05..af1c63f 100644
--- a/arch/powerpc/kernel/legacy_serial.c
+++ b/arch/powerpc/kernel/legacy_serial.c
@@ -99,7 +99,7 @@ static int __init add_legacy_port(struct device_node *np, int want_index,
 		legacy_serial_count = index + 1;
 
 	/* Check if there is a port who already claimed our slot */
-	if (legacy_serial_infos[index].np != 0) {
+	if (legacy_serial_infos[index].np != NULL) {
 		/* if we still have some room, move it, else override */
 		if (legacy_serial_count < MAX_LEGACY_SERIAL_PORTS) {
 			printk(KERN_DEBUG "Moved legacy port %d -> %d\n",
diff --git a/arch/powerpc/kernel/pci-common.c b/arch/powerpc/kernel/pci-common.c
index 7d22a67..22fe401 100644
--- a/arch/powerpc/kernel/pci-common.c
+++ b/arch/powerpc/kernel/pci-common.c
@@ -306,7 +306,7 @@ static struct resource *__pci_mmap_make_offset(struct pci_dev *dev,
 	unsigned long io_offset = 0;
 	int i, res_bit;
 
-	if (hose == 0)
+	if (hose == NULL)
 		return NULL;		/* should never happen */
 
 	/* If memory, add on the PCI bridge address offset */
@@ -1578,7 +1578,7 @@ fake_pci_bus(struct pci_controller *hose, int busnr)
 {
 	static struct pci_bus bus;
 
-	if (hose == 0) {
+	if (hose == NULL) {
 		printk(KERN_ERR "Can't find hose for PCI bus %d!\n", busnr);
 	}
 	bus.number = busnr;
diff --git a/arch/powerpc/kernel/pci_64.c b/arch/powerpc/kernel/pci_64.c
index 2e86296..cdf5aa1 100644
--- a/arch/powerpc/kernel/pci_64.c
+++ b/arch/powerpc/kernel/pci_64.c
@@ -109,7 +109,7 @@ int pcibios_unmap_io_space(struct pci_bus *bus)
 	hose = pci_bus_to_host(bus);
 
 	/* Check if we have IOs allocated */
-	if (hose->io_base_alloc == 0)
+	if (hose->io_base_alloc == NULL)
 		return 0;
 
 	pr_debug("IO unmapping for PHB %s\n", hose->dn->full_name);
diff --git a/arch/powerpc/kernel/setup_64.c b/arch/powerpc/kernel/setup_64.c
index 389fb807..b3b5fd3 100644
--- a/arch/powerpc/kernel/setup_64.c
+++ b/arch/powerpc/kernel/setup_64.c
@@ -322,7 +322,7 @@ static void __init initialize_cache_info(void)
 							 NULL);
 			if (lsizep != NULL)
 				lsize = *lsizep;
-			if (sizep == 0 || lsizep == 0)
+			if (sizep == NULL || lsizep == NULL)
 				DBG("Argh, can't find dcache properties ! "
 				    "sizep: %p, lsizep: %p\n", sizep, lsizep);
 
@@ -344,7 +344,7 @@ static void __init initialize_cache_info(void)
 							 NULL);
 			if (lsizep != NULL)
 				lsize = *lsizep;
-			if (sizep == 0 || lsizep == 0)
+			if (sizep == NULL || lsizep == NULL)
 				DBG("Argh, can't find icache properties ! "
 				    "sizep: %p, lsizep: %p\n", sizep, lsizep);
 
diff --git a/arch/powerpc/kernel/signal_64.c b/arch/powerpc/kernel/signal_64.c
index 887e99d..cbd2692 100644
--- a/arch/powerpc/kernel/signal_64.c
+++ b/arch/powerpc/kernel/signal_64.c
@@ -346,13 +346,13 @@ static long restore_sigcontext(struct pt_regs *regs, sigset_t *set, int sig,
 	if (v_regs && !access_ok(VERIFY_READ, v_regs, 34 * sizeof(vector128)))
 		return -EFAULT;
 	/* Copy 33 vec registers (vr0..31 and vscr) from the stack */
-	if (v_regs != 0 && (msr & MSR_VEC) != 0)
+	if (v_regs != NULL && (msr & MSR_VEC) != 0)
 		err |= __copy_from_user(current->thread.vr, v_regs,
 					33 * sizeof(vector128));
 	else if (current->thread.used_vr)
 		memset(current->thread.vr, 0, 33 * sizeof(vector128));
 	/* Always get VRSAVE back */
-	if (v_regs != 0)
+	if (v_regs != NULL)
 		err |= __get_user(current->thread.vrsave, (u32 __user *)&v_regs[33]);
 	else
 		current->thread.vrsave = 0;
@@ -463,7 +463,7 @@ static long restore_tm_sigcontexts(struct pt_regs *regs,
 				    tm_v_regs, 34 * sizeof(vector128)))
 		return -EFAULT;
 	/* Copy 33 vec registers (vr0..31 and vscr) from the stack */
-	if (v_regs != 0 && tm_v_regs != 0 && (msr & MSR_VEC) != 0) {
+	if (v_regs != NULL && tm_v_regs != NULL && (msr & MSR_VEC) != 0) {
 		err |= __copy_from_user(current->thread.vr, v_regs,
 					33 * sizeof(vector128));
 		err |= __copy_from_user(current->thread.transact_vr, tm_v_regs,
@@ -474,7 +474,7 @@ static long restore_tm_sigcontexts(struct pt_regs *regs,
 		memset(current->thread.transact_vr, 0, 33 * sizeof(vector128));
 	}
 	/* Always get VRSAVE back */
-	if (v_regs != 0 && tm_v_regs != 0) {
+	if (v_regs != NULL && tm_v_regs != NULL) {
 		err |= __get_user(current->thread.vrsave,
 				  (u32 __user *)&v_regs[33]);
 		err |= __get_user(current->thread.transact_vrsave,
diff --git a/arch/powerpc/kernel/smp.c b/arch/powerpc/kernel/smp.c
index 38b0ba6..98822400 100644
--- a/arch/powerpc/kernel/smp.c
+++ b/arch/powerpc/kernel/smp.c
@@ -172,7 +172,7 @@ int smp_request_message_ipi(int virq, int msg)
 #endif
 	err = request_irq(virq, smp_ipi_action[msg],
 			  IRQF_PERCPU | IRQF_NO_THREAD | IRQF_NO_SUSPEND,
-			  smp_ipi_name[msg], 0);
+			  smp_ipi_name[msg], NULL);
 	WARN(err < 0, "unable to request_irq %d for %s (rc %d)\n",
 		virq, smp_ipi_name[msg], err);
 
diff --git a/arch/powerpc/mm/hash_utils_64.c b/arch/powerpc/mm/hash_utils_64.c
index 6ecc38b..bde8b55 100644
--- a/arch/powerpc/mm/hash_utils_64.c
+++ b/arch/powerpc/mm/hash_utils_64.c
@@ -907,7 +907,7 @@ static int subpage_protection(struct mm_struct *mm, unsigned long ea)
 
 	if (ea >= spt->maxaddr)
 		return 0;
-	if (ea < 0x100000000) {
+	if (ea < 0x100000000UL) {
 		/* addresses below 4GB use spt->low_prot */
 		sbpm = spt->low_prot;
 	} else {
diff --git a/arch/powerpc/mm/subpage-prot.c b/arch/powerpc/mm/subpage-prot.c
index aa74acb..a770df2d 100644
--- a/arch/powerpc/mm/subpage-prot.c
+++ b/arch/powerpc/mm/subpage-prot.c
@@ -105,7 +105,7 @@ static void subpage_prot_clear(unsigned long addr, unsigned long len)
 		limit = spt->maxaddr;
 	for (; addr < limit; addr = next) {
 		next = pmd_addr_end(addr, limit);
-		if (addr < 0x100000000) {
+		if (addr < 0x100000000UL) {
 			spm = spt->low_prot;
 		} else {
 			spm = spt->protptrs[addr >> SBP_L3_SHIFT];
@@ -219,7 +219,7 @@ long sys_subpage_prot(unsigned long addr, unsigned long len, u32 __user *map)
 	for (limit = addr + len; addr < limit; addr = next) {
 		next = pmd_addr_end(addr, limit);
 		err = -ENOMEM;
-		if (addr < 0x100000000) {
+		if (addr < 0x100000000UL) {
 			spm = spt->low_prot;
 		} else {
 			spm = spt->protptrs[addr >> SBP_L3_SHIFT];
diff --git a/arch/powerpc/perf/core-book3s.c b/arch/powerpc/perf/core-book3s.c
index eeae308..29b89e8 100644
--- a/arch/powerpc/perf/core-book3s.c
+++ b/arch/powerpc/perf/core-book3s.c
@@ -24,7 +24,7 @@
 #define BHRB_MAX_ENTRIES	32
 #define BHRB_TARGET		0x0000000000000002
 #define BHRB_PREDICTION		0x0000000000000001
-#define BHRB_EA			0xFFFFFFFFFFFFFFFC
+#define BHRB_EA			0xFFFFFFFFFFFFFFFCUL
 
 struct cpu_hw_events {
 	int n_events;
diff --git a/arch/powerpc/platforms/powernv/opal.c b/arch/powerpc/platforms/powernv/opal.c
index 106301f..7c25346 100644
--- a/arch/powerpc/platforms/powernv/opal.c
+++ b/arch/powerpc/platforms/powernv/opal.c
@@ -422,7 +422,7 @@ void opal_shutdown(void)
 
 	for (i = 0; i < opal_irq_count; i++) {
 		if (opal_irqs[i])
-			free_irq(opal_irqs[i], 0);
+			free_irq(opal_irqs[i], NULL);
 		opal_irqs[i] = 0;
 	}
 }
diff --git a/arch/powerpc/platforms/pseries/lpar.c b/arch/powerpc/platforms/pseries/lpar.c
index 8bad880..60b6f4e 100644
--- a/arch/powerpc/platforms/pseries/lpar.c
+++ b/arch/powerpc/platforms/pseries/lpar.c
@@ -724,7 +724,7 @@ int h_get_mpp(struct hvcall_mpp_data *mpp_data)
 
 	mpp_data->mem_weight = (retbuf[3] >> 7 * 8) & 0xff;
 	mpp_data->unallocated_mem_weight = (retbuf[3] >> 6 * 8) & 0xff;
-	mpp_data->unallocated_entitlement = retbuf[3] & 0xffffffffffff;
+	mpp_data->unallocated_entitlement = retbuf[3] & 0xffffffffffffUL;
 
 	mpp_data->pool_size = retbuf[4];
 	mpp_data->loan_request = retbuf[5];
diff --git a/arch/powerpc/platforms/pseries/pseries_energy.c b/arch/powerpc/platforms/pseries/pseries_energy.c
index a91e6da..9276779 100644
--- a/arch/powerpc/platforms/pseries/pseries_energy.c
+++ b/arch/powerpc/platforms/pseries/pseries_energy.c
@@ -108,8 +108,8 @@ err:
  * energy consumption.
  */
 
-#define FLAGS_MODE1	0x004E200000080E01
-#define FLAGS_MODE2	0x004E200000080401
+#define FLAGS_MODE1	0x004E200000080E01UL
+#define FLAGS_MODE2	0x004E200000080401UL
 #define FLAGS_ACTIVATE  0x100
 
 static ssize_t get_best_energy_list(char *page, int activate)
diff --git a/arch/powerpc/platforms/pseries/setup.c b/arch/powerpc/platforms/pseries/setup.c
index c11c823..b19cd83 100644
--- a/arch/powerpc/platforms/pseries/setup.c
+++ b/arch/powerpc/platforms/pseries/setup.c
@@ -183,7 +183,7 @@ static void __init pseries_mpic_init_IRQ(void)
 	np = of_find_node_by_path("/");
 	naddr = of_n_addr_cells(np);
 	opprop = of_get_property(np, "platform-open-pic", &opplen);
-	if (opprop != 0) {
+	if (opprop != NULL) {
 		openpic_addr = of_read_number(opprop, naddr);
 		printk(KERN_DEBUG "OpenPIC addr: %lx\n", openpic_addr);
 	}
-- 
1.8.1.2


* [PATCH 08/63] powerpc/pci: Don't use bitfield for force_32bit_msi
  2013-08-06 16:01 [PATCH 00/63] 64bit PowerPC little endian support Anton Blanchard
                   ` (6 preceding siblings ...)
  2013-08-06 16:01 ` [PATCH 07/63] powerpc: Fix a number of sparse warnings Anton Blanchard
@ 2013-08-06 16:01 ` Anton Blanchard
  2013-08-06 16:01 ` [PATCH 09/63] powerpc: Stop using non-architected shared_proc field in lppaca Anton Blanchard
                   ` (54 subsequent siblings)
  62 siblings, 0 replies; 76+ messages in thread
From: Anton Blanchard @ 2013-08-06 16:01 UTC (permalink / raw)
  To: Benjamin Herrenschmidt, Paul Mackerras; +Cc: linuxppc-dev

Fix a sparse warning about force_32bit_msi being a one bit bitfield.

Signed-off-by: Anton Blanchard <anton@samba.org>
---
 arch/powerpc/include/asm/pci-bridge.h | 2 +-
 arch/powerpc/kernel/pci_64.c          | 2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/arch/powerpc/include/asm/pci-bridge.h b/arch/powerpc/include/asm/pci-bridge.h
index 32d0d20..4ca90a3 100644
--- a/arch/powerpc/include/asm/pci-bridge.h
+++ b/arch/powerpc/include/asm/pci-bridge.h
@@ -159,7 +159,7 @@ struct pci_dn {
 
 	int	pci_ext_config_space;	/* for pci devices */
 
-	int	force_32bit_msi:1;
+	bool	force_32bit_msi;
 
 	struct	pci_dev *pcidev;	/* back-pointer to the pci device */
 #ifdef CONFIG_EEH
diff --git a/arch/powerpc/kernel/pci_64.c b/arch/powerpc/kernel/pci_64.c
index cdf5aa1..a9e311f 100644
--- a/arch/powerpc/kernel/pci_64.c
+++ b/arch/powerpc/kernel/pci_64.c
@@ -272,7 +272,7 @@ static void quirk_radeon_32bit_msi(struct pci_dev *dev)
 	struct pci_dn *pdn = pci_get_pdn(dev);
 
 	if (pdn)
-		pdn->force_32bit_msi = 1;
+		pdn->force_32bit_msi = true;
 }
 DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_ATI, 0x68f2, quirk_radeon_32bit_msi);
 DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_ATI, 0xaa68, quirk_radeon_32bit_msi);
-- 
1.8.1.2


* [PATCH 09/63] powerpc: Stop using non-architected shared_proc field in lppaca
  2013-08-06 16:01 [PATCH 00/63] 64bit PowerPC little endian support Anton Blanchard
                   ` (7 preceding siblings ...)
  2013-08-06 16:01 ` [PATCH 08/63] powerpc/pci: Don't use bitfield for force_32bit_msi Anton Blanchard
@ 2013-08-06 16:01 ` Anton Blanchard
  2013-08-06 16:01 ` [PATCH 10/63] powerpc: Make prom.c device tree accesses endian safe Anton Blanchard
                   ` (53 subsequent siblings)
  62 siblings, 0 replies; 76+ messages in thread
From: Anton Blanchard @ 2013-08-06 16:01 UTC (permalink / raw)
  To: Benjamin Herrenschmidt, Paul Mackerras; +Cc: linuxppc-dev

Although the shared_proc field in the lppaca works today, it is
not architected. A shared processor partition will always have a
non-zero yield_count, so use that instead. Create a wrapper so users
don't have to know about the details.

In order for older kernels to continue to work on KVM we need
to set the shared_proc bit. While here, remove the ugly bitfield.

Signed-off-by: Anton Blanchard <anton@samba.org>
---
 arch/powerpc/include/asm/lppaca.h               | 18 ++++++++++++++----
 arch/powerpc/include/asm/spinlock.h             |  2 +-
 arch/powerpc/kernel/lparcfg.c                   |  5 +++--
 arch/powerpc/kvm/book3s_hv.c                    |  2 +-
 arch/powerpc/mm/numa.c                          |  2 +-
 arch/powerpc/platforms/pseries/hotplug-cpu.c    |  4 ++--
 arch/powerpc/platforms/pseries/processor_idle.c |  2 +-
 7 files changed, 23 insertions(+), 12 deletions(-)

diff --git a/arch/powerpc/include/asm/lppaca.h b/arch/powerpc/include/asm/lppaca.h
index 9b12f88..bc8def0 100644
--- a/arch/powerpc/include/asm/lppaca.h
+++ b/arch/powerpc/include/asm/lppaca.h
@@ -50,10 +50,8 @@ struct lppaca {
 
 	u32	desc;			/* Eye catcher 0xD397D781 */
 	u16	size;			/* Size of this struct */
-	u16	reserved1;
-	u16	reserved2:14;
-	u8	shared_proc:1;		/* Shared processor indicator */
-	u8	secondary_thread:1;	/* Secondary thread indicator */
+	u8	reserved1[3];
+	u8	__old_status;		/* Old status, including shared proc */
 	u8	reserved3[14];
 	volatile u32 dyn_hw_node_id;	/* Dynamic hardware node id */
 	volatile u32 dyn_hw_proc_id;	/* Dynamic hardware proc id */
@@ -108,6 +106,18 @@ extern struct lppaca lppaca[];
 #define lppaca_of(cpu)	(*paca[cpu].lppaca_ptr)
 
 /*
+ * Old kernels used a reserved bit in the VPA to determine if it was running
+ * in shared processor mode. New kernels look for a non zero yield count
+ * but KVM still needs to set the bit to keep the old stuff happy.
+ */
+#define LPPACA_OLD_SHARED_PROC		2
+
+static inline bool lppaca_shared_proc(struct lppaca *l)
+{
+	return l->yield_count != 0;
+}
+
+/*
  * SLB shadow buffer structure as defined in the PAPR.  The save_area
  * contains adjacent ESID and VSID pairs for each shadowed SLB.  The
  * ESID is stored in the lower 64bits, then the VSID.
diff --git a/arch/powerpc/include/asm/spinlock.h b/arch/powerpc/include/asm/spinlock.h
index 5b23f91..7c345b6 100644
--- a/arch/powerpc/include/asm/spinlock.h
+++ b/arch/powerpc/include/asm/spinlock.h
@@ -96,7 +96,7 @@ static inline int arch_spin_trylock(arch_spinlock_t *lock)
 
 #if defined(CONFIG_PPC_SPLPAR)
 /* We only yield to the hypervisor if we are in shared processor mode */
-#define SHARED_PROCESSOR (local_paca->lppaca_ptr->shared_proc)
+#define SHARED_PROCESSOR (lppaca_shared_proc(local_paca->lppaca_ptr))
 extern void __spin_yield(arch_spinlock_t *lock);
 extern void __rw_yield(arch_rwlock_t *lock);
 #else /* SPLPAR */
diff --git a/arch/powerpc/kernel/lparcfg.c b/arch/powerpc/kernel/lparcfg.c
index d92f387..e6024c2 100644
--- a/arch/powerpc/kernel/lparcfg.c
+++ b/arch/powerpc/kernel/lparcfg.c
@@ -165,7 +165,7 @@ static void parse_ppp_data(struct seq_file *m)
 	           ppp_data.active_system_procs);
 
 	/* pool related entries are appropriate for shared configs */
-	if (lppaca_of(0).shared_proc) {
+	if (lppaca_shared_proc(get_lppaca())) {
 		unsigned long pool_idle_time, pool_procs;
 
 		seq_printf(m, "pool=%d\n", ppp_data.pool_num);
@@ -473,7 +473,8 @@ static int pseries_lparcfg_data(struct seq_file *m, void *v)
 	seq_printf(m, "partition_potential_processors=%d\n",
 		   partition_potential_processors);
 
-	seq_printf(m, "shared_processor_mode=%d\n", lppaca_of(0).shared_proc);
+	seq_printf(m, "shared_processor_mode=%d\n",
+		   lppaca_shared_proc(get_lppaca()));
 
 	seq_printf(m, "slb_size=%d\n", mmu_slb_size);
 
diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c
index 2efa9dd..cf39bf4 100644
--- a/arch/powerpc/kvm/book3s_hv.c
+++ b/arch/powerpc/kvm/book3s_hv.c
@@ -217,7 +217,7 @@ struct kvm_vcpu *kvmppc_find_vcpu(struct kvm *kvm, int id)
 
 static void init_vpa(struct kvm_vcpu *vcpu, struct lppaca *vpa)
 {
-	vpa->shared_proc = 1;
+	vpa->__old_status |= LPPACA_OLD_SHARED_PROC;
 	vpa->yield_count = 1;
 }
 
diff --git a/arch/powerpc/mm/numa.c b/arch/powerpc/mm/numa.c
index 5850798..501e32c 100644
--- a/arch/powerpc/mm/numa.c
+++ b/arch/powerpc/mm/numa.c
@@ -1609,7 +1609,7 @@ int start_topology_update(void)
 #endif
 		}
 	} else if (firmware_has_feature(FW_FEATURE_VPHN) &&
-		   get_lppaca()->shared_proc) {
+		   lppaca_shared_proc(get_lppaca())) {
 		if (!vphn_enabled) {
 			prrn_enabled = 0;
 			vphn_enabled = 1;
diff --git a/arch/powerpc/platforms/pseries/hotplug-cpu.c b/arch/powerpc/platforms/pseries/hotplug-cpu.c
index 217ca5c..1e490cf 100644
--- a/arch/powerpc/platforms/pseries/hotplug-cpu.c
+++ b/arch/powerpc/platforms/pseries/hotplug-cpu.c
@@ -123,7 +123,7 @@ static void pseries_mach_cpu_die(void)
 		cede_latency_hint = 2;
 
 		get_lppaca()->idle = 1;
-		if (!get_lppaca()->shared_proc)
+		if (!lppaca_shared_proc(get_lppaca()))
 			get_lppaca()->donate_dedicated_cpu = 1;
 
 		while (get_preferred_offline_state(cpu) == CPU_STATE_INACTIVE) {
@@ -137,7 +137,7 @@ static void pseries_mach_cpu_die(void)
 
 		local_irq_disable();
 
-		if (!get_lppaca()->shared_proc)
+		if (!lppaca_shared_proc(get_lppaca()))
 			get_lppaca()->donate_dedicated_cpu = 0;
 		get_lppaca()->idle = 0;
 
diff --git a/arch/powerpc/platforms/pseries/processor_idle.c b/arch/powerpc/platforms/pseries/processor_idle.c
index 4644efa0..92db881 100644
--- a/arch/powerpc/platforms/pseries/processor_idle.c
+++ b/arch/powerpc/platforms/pseries/processor_idle.c
@@ -308,7 +308,7 @@ static int pseries_idle_probe(void)
 		return -EPERM;
 	}
 
-	if (get_lppaca()->shared_proc)
+	if (lppaca_shared_proc(get_lppaca()))
 		cpuidle_state_table = shared_states;
 	else
 		cpuidle_state_table = dedicated_states;
-- 
1.8.1.2


* [PATCH 10/63] powerpc: Make prom.c device tree accesses endian safe
  2013-08-06 16:01 [PATCH 00/63] 64bit PowerPC little endian support Anton Blanchard
                   ` (8 preceding siblings ...)
  2013-08-06 16:01 ` [PATCH 09/63] powerpc: Stop using non-architected shared_proc field in lppaca Anton Blanchard
@ 2013-08-06 16:01 ` Anton Blanchard
  2013-08-06 16:01 ` [PATCH 11/63] powerpc: More little endian fixes for prom.c Anton Blanchard
                   ` (52 subsequent siblings)
  62 siblings, 0 replies; 76+ messages in thread
From: Anton Blanchard @ 2013-08-06 16:01 UTC (permalink / raw)
  To: Benjamin Herrenschmidt, Paul Mackerras; +Cc: linuxppc-dev

From: Ian Munsie <imunsie@au1.ibm.com>

On PowerPC the device tree is always big endian, but the CPU could be
either, so add be32_to_cpu where appropriate and change the types of
device tree data to __be32 etc. to allow sparse to locate endian issues.
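
A minimal sketch of the resulting pattern (the property name and helper
are made up for illustration):

	/* Flat device tree cells are big endian, so convert on read to get
	 * a value that is correct on both big and little endian kernels.
	 */
	static u32 read_example_cell(unsigned long node)
	{
		const __be32 *prop;

		prop = of_get_flat_dt_prop(node, "example-cells", NULL);
		return prop ? be32_to_cpup(prop) : 0;
	}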

Signed-off-by: Ian Munsie <imunsie@au1.ibm.com>
Acked-by: Grant Likely <grant.likely@secretlab.ca>
---
 arch/powerpc/kernel/prom.c | 60 ++++++++++++++++++++++++----------------------
 1 file changed, 31 insertions(+), 29 deletions(-)

diff --git a/arch/powerpc/kernel/prom.c b/arch/powerpc/kernel/prom.c
index eb23ac9..d072f67 100644
--- a/arch/powerpc/kernel/prom.c
+++ b/arch/powerpc/kernel/prom.c
@@ -215,16 +215,16 @@ static void __init check_cpu_pa_features(unsigned long node)
 #ifdef CONFIG_PPC_STD_MMU_64
 static void __init check_cpu_slb_size(unsigned long node)
 {
-	u32 *slb_size_ptr;
+	__be32 *slb_size_ptr;
 
 	slb_size_ptr = of_get_flat_dt_prop(node, "slb-size", NULL);
 	if (slb_size_ptr != NULL) {
-		mmu_slb_size = *slb_size_ptr;
+		mmu_slb_size = be32_to_cpup(slb_size_ptr);
 		return;
 	}
 	slb_size_ptr = of_get_flat_dt_prop(node, "ibm,slb-size", NULL);
 	if (slb_size_ptr != NULL) {
-		mmu_slb_size = *slb_size_ptr;
+		mmu_slb_size = be32_to_cpup(slb_size_ptr);
 	}
 }
 #else
@@ -279,11 +279,11 @@ static void __init check_cpu_feature_properties(unsigned long node)
 {
 	unsigned long i;
 	struct feature_property *fp = feature_properties;
-	const u32 *prop;
+	const __be32 *prop;
 
 	for (i = 0; i < ARRAY_SIZE(feature_properties); ++i, ++fp) {
 		prop = of_get_flat_dt_prop(node, fp->name, NULL);
-		if (prop && *prop >= fp->min_value) {
+		if (prop && be32_to_cpup(prop) >= fp->min_value) {
 			cur_cpu_spec->cpu_features |= fp->cpu_feature;
 			cur_cpu_spec->cpu_user_features |= fp->cpu_user_ftr;
 		}
@@ -295,8 +295,8 @@ static int __init early_init_dt_scan_cpus(unsigned long node,
 					  void *data)
 {
 	char *type = of_get_flat_dt_prop(node, "device_type", NULL);
-	const u32 *prop;
-	const u32 *intserv;
+	const __be32 *prop;
+	const __be32 *intserv;
 	int i, nthreads;
 	unsigned long len;
 	int found = -1;
@@ -324,8 +324,9 @@ static int __init early_init_dt_scan_cpus(unsigned long node,
 		 * version 2 of the kexec param format adds the phys cpuid of
 		 * booted proc.
 		 */
-		if (initial_boot_params->version >= 2) {
-			if (intserv[i] == initial_boot_params->boot_cpuid_phys) {
+		if (be32_to_cpu(initial_boot_params->version) >= 2) {
+			if (be32_to_cpu(intserv[i]) ==
+			    be32_to_cpu(initial_boot_params->boot_cpuid_phys)) {
 				found = boot_cpu_count;
 				found_thread = i;
 			}
@@ -347,9 +348,10 @@ static int __init early_init_dt_scan_cpus(unsigned long node,
 
 	if (found >= 0) {
 		DBG("boot cpu: logical %d physical %d\n", found,
-			intserv[found_thread]);
+			be32_to_cpu(intserv[found_thread]));
 		boot_cpuid = found;
-		set_hard_smp_processor_id(found, intserv[found_thread]);
+		set_hard_smp_processor_id(found,
+			be32_to_cpu(intserv[found_thread]));
 
 		/*
 		 * PAPR defines "logical" PVR values for cpus that
@@ -366,8 +368,8 @@ static int __init early_init_dt_scan_cpus(unsigned long node,
 		 * it uses 0x0f000001.
 		 */
 		prop = of_get_flat_dt_prop(node, "cpu-version", NULL);
-		if (prop && (*prop & 0xff000000) == 0x0f000000)
-			identify_cpu(0, *prop);
+		if (prop && (be32_to_cpup(prop) & 0xff000000) == 0x0f000000)
+			identify_cpu(0, be32_to_cpup(prop));
 
 		identical_pvr_fixup(node);
 	}
@@ -389,7 +391,7 @@ static int __init early_init_dt_scan_cpus(unsigned long node,
 int __init early_init_dt_scan_chosen_ppc(unsigned long node, const char *uname,
 					 int depth, void *data)
 {
-	unsigned long *lprop;
+	unsigned long *lprop; /* All these set by kernel, so no need to convert endian */
 
 	/* Use common scan routine to determine if this is the chosen node */
 	if (early_init_dt_scan_chosen(node, uname, depth, data) == 0)
@@ -591,16 +593,16 @@ static void __init early_reserve_mem_dt(void)
 static void __init early_reserve_mem(void)
 {
 	u64 base, size;
-	u64 *reserve_map;
+	__be64 *reserve_map;
 	unsigned long self_base;
 	unsigned long self_size;
 
-	reserve_map = (u64 *)(((unsigned long)initial_boot_params) +
-					initial_boot_params->off_mem_rsvmap);
+	reserve_map = (__be64 *)(((unsigned long)initial_boot_params) +
+			be32_to_cpu(initial_boot_params->off_mem_rsvmap));
 
 	/* before we do anything, lets reserve the dt blob */
 	self_base = __pa((unsigned long)initial_boot_params);
-	self_size = initial_boot_params->totalsize;
+	self_size = be32_to_cpu(initial_boot_params->totalsize);
 	memblock_reserve(self_base, self_size);
 
 	/* Look for the new "reserved-regions" property in the DT */
@@ -620,15 +622,15 @@ static void __init early_reserve_mem(void)
 	 * Handle the case where we might be booting from an old kexec
 	 * image that setup the mem_rsvmap as pairs of 32-bit values
 	 */
-	if (*reserve_map > 0xffffffffull) {
+	if (be64_to_cpup(reserve_map) > 0xffffffffull) {
 		u32 base_32, size_32;
-		u32 *reserve_map_32 = (u32 *)reserve_map;
+		__be32 *reserve_map_32 = (__be32 *)reserve_map;
 
 		DBG("Found old 32-bit reserve map\n");
 
 		while (1) {
-			base_32 = *(reserve_map_32++);
-			size_32 = *(reserve_map_32++);
+			base_32 = be32_to_cpup(reserve_map_32++);
+			size_32 = be32_to_cpup(reserve_map_32++);
 			if (size_32 == 0)
 				break;
 			/* skip if the reservation is for the blob */
@@ -644,8 +646,8 @@ static void __init early_reserve_mem(void)
 
 	/* Handle the reserve map in the fdt blob if it exists */
 	while (1) {
-		base = *(reserve_map++);
-		size = *(reserve_map++);
+		base = be64_to_cpup(reserve_map++);
+		size = be64_to_cpup(reserve_map++);
 		if (size == 0)
 			break;
 		DBG("reserving: %llx -> %llx\n", base, size);
@@ -877,7 +879,7 @@ struct device_node *of_get_cpu_node(int cpu, unsigned int *thread)
 	hardid = get_hard_smp_processor_id(cpu);
 
 	for_each_node_by_type(np, "cpu") {
-		const u32 *intserv;
+		const __be32 *intserv;
 		unsigned int plen, t;
 
 		/* Check for ibm,ppc-interrupt-server#s. If it doesn't exist
@@ -886,10 +888,10 @@ struct device_node *of_get_cpu_node(int cpu, unsigned int *thread)
 		intserv = of_get_property(np, "ibm,ppc-interrupt-server#s",
 				&plen);
 		if (intserv == NULL) {
-			const u32 *reg = of_get_property(np, "reg", NULL);
+			const __be32 *reg = of_get_property(np, "reg", NULL);
 			if (reg == NULL)
 				continue;
-			if (*reg == hardid) {
+			if (be32_to_cpup(reg) == hardid) {
 				if (thread)
 					*thread = 0;
 				return np;
@@ -897,7 +899,7 @@ struct device_node *of_get_cpu_node(int cpu, unsigned int *thread)
 		} else {
 			plen /= sizeof(u32);
 			for (t = 0; t < plen; t++) {
-				if (hardid == intserv[t]) {
+				if (hardid == be32_to_cpu(intserv[t])) {
 					if (thread)
 						*thread = t;
 					return np;
@@ -917,7 +919,7 @@ static int __init export_flat_device_tree(void)
 	struct dentry *d;
 
 	flat_dt_blob.data = initial_boot_params;
-	flat_dt_blob.size = initial_boot_params->totalsize;
+	flat_dt_blob.size = be32_to_cpu(initial_boot_params->totalsize);
 
 	d = debugfs_create_blob("flat-device-tree", S_IFREG | S_IRUSR,
 				powerpc_debugfs_root, &flat_dt_blob);
-- 
1.8.1.2


* [PATCH 11/63] powerpc: More little endian fixes for prom.c
  2013-08-06 16:01 [PATCH 00/63] 64bit PowerPC little endian support Anton Blanchard
                   ` (9 preceding siblings ...)
  2013-08-06 16:01 ` [PATCH 10/63] powerpc: Make prom.c device tree accesses endian safe Anton Blanchard
@ 2013-08-06 16:01 ` Anton Blanchard
  2013-08-06 16:01 ` [PATCH 12/63] powerpc: Make RTAS device tree accesses endian safe Anton Blanchard
                   ` (51 subsequent siblings)
  62 siblings, 0 replies; 76+ messages in thread
From: Anton Blanchard @ 2013-08-06 16:01 UTC (permalink / raw)
  To: Benjamin Herrenschmidt, Paul Mackerras; +Cc: linuxppc-dev

From: Alistair Popple <alistair@popple.id.au>

Signed-off-by: Alistair Popple <alistair@popple.id.au>
---
 arch/powerpc/kernel/prom.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/arch/powerpc/kernel/prom.c b/arch/powerpc/kernel/prom.c
index d072f67..987a4fb 100644
--- a/arch/powerpc/kernel/prom.c
+++ b/arch/powerpc/kernel/prom.c
@@ -456,7 +456,7 @@ static int __init early_init_dt_scan_drconf_memory(unsigned long node)
 	if (dm == NULL || l < sizeof(__be32))
 		return 0;
 
-	n = *dm++;	/* number of entries */
+	n = of_read_number(dm++, 1);	/* number of entries */
 	if (l < (n * (dt_root_addr_cells + 4) + 1) * sizeof(__be32))
 		return 0;
 
@@ -468,7 +468,7 @@ static int __init early_init_dt_scan_drconf_memory(unsigned long node)
 
 	for (; n != 0; --n) {
 		base = dt_mem_next_cell(dt_root_addr_cells, &dm);
-		flags = dm[3];
+		flags = of_read_number(&dm[3], 1);
 		/* skip DRC index, pad, assoc. list index, flags */
 		dm += 4;
 		/* skip this block if the reserved bit is set in flags (0x80)
-- 
1.8.1.2


* [PATCH 12/63] powerpc: Make RTAS device tree accesses endian safe
  2013-08-06 16:01 [PATCH 00/63] 64bit PowerPC little endian support Anton Blanchard
                   ` (10 preceding siblings ...)
  2013-08-06 16:01 ` [PATCH 11/63] powerpc: More little endian fixes for prom.c Anton Blanchard
@ 2013-08-06 16:01 ` Anton Blanchard
  2013-08-06 16:01 ` [PATCH 13/63] powerpc: Make cache info " Anton Blanchard
                   ` (50 subsequent siblings)
  62 siblings, 0 replies; 76+ messages in thread
From: Anton Blanchard @ 2013-08-06 16:01 UTC (permalink / raw)
  To: Benjamin Herrenschmidt, Paul Mackerras; +Cc: linuxppc-dev

Signed-off-by: Anton Blanchard <anton@samba.org>
---
 arch/powerpc/kernel/rtas.c | 28 ++++++++++++++--------------
 1 file changed, 14 insertions(+), 14 deletions(-)

diff --git a/arch/powerpc/kernel/rtas.c b/arch/powerpc/kernel/rtas.c
index 80b5ef4..98b26af 100644
--- a/arch/powerpc/kernel/rtas.c
+++ b/arch/powerpc/kernel/rtas.c
@@ -204,7 +204,7 @@ void rtas_progress(char *s, unsigned short hex)
 {
 	struct device_node *root;
 	int width;
-	const int *p;
+	const __be32 *p;
 	char *os;
 	static int display_character, set_indicator;
 	static int display_width, display_lines, form_feed;
@@ -221,13 +221,13 @@ void rtas_progress(char *s, unsigned short hex)
 		if ((root = of_find_node_by_path("/rtas"))) {
 			if ((p = of_get_property(root,
 					"ibm,display-line-length", NULL)))
-				display_width = *p;
+				display_width = be32_to_cpu(*p);
 			if ((p = of_get_property(root,
 					"ibm,form-feed", NULL)))
-				form_feed = *p;
+				form_feed = be32_to_cpu(*p);
 			if ((p = of_get_property(root,
 					"ibm,display-number-of-lines", NULL)))
-				display_lines = *p;
+				display_lines = be32_to_cpu(*p);
 			row_width = of_get_property(root,
 					"ibm,display-truncation-length", NULL);
 			of_node_put(root);
@@ -322,11 +322,11 @@ EXPORT_SYMBOL(rtas_progress);		/* needed by rtas_flash module */
 
 int rtas_token(const char *service)
 {
-	const int *tokp;
+	const __be32 *tokp;
 	if (rtas.dev == NULL)
 		return RTAS_UNKNOWN_SERVICE;
 	tokp = of_get_property(rtas.dev, service, NULL);
-	return tokp ? *tokp : RTAS_UNKNOWN_SERVICE;
+	return tokp ? be32_to_cpu(*tokp) : RTAS_UNKNOWN_SERVICE;
 }
 EXPORT_SYMBOL(rtas_token);
 
@@ -588,8 +588,8 @@ bool rtas_indicator_present(int token, int *maxindex)
 {
 	int proplen, count, i;
 	const struct indicator_elem {
-		u32 token;
-		u32 maxindex;
+		__be32 token;
+		__be32 maxindex;
 	} *indicators;
 
 	indicators = of_get_property(rtas.dev, "rtas-indicators", &proplen);
@@ -599,10 +599,10 @@ bool rtas_indicator_present(int token, int *maxindex)
 	count = proplen / sizeof(struct indicator_elem);
 
 	for (i = 0; i < count; i++) {
-		if (indicators[i].token != token)
+		if (__be32_to_cpu(indicators[i].token) != token)
 			continue;
 		if (maxindex)
-			*maxindex = indicators[i].maxindex;
+			*maxindex = __be32_to_cpu(indicators[i].maxindex);
 		return true;
 	}
 
@@ -1097,19 +1097,19 @@ void __init rtas_initialize(void)
 	 */
 	rtas.dev = of_find_node_by_name(NULL, "rtas");
 	if (rtas.dev) {
-		const u32 *basep, *entryp, *sizep;
+		const __be32 *basep, *entryp, *sizep;
 
 		basep = of_get_property(rtas.dev, "linux,rtas-base", NULL);
 		sizep = of_get_property(rtas.dev, "rtas-size", NULL);
 		if (basep != NULL && sizep != NULL) {
-			rtas.base = *basep;
-			rtas.size = *sizep;
+			rtas.base = __be32_to_cpu(*basep);
+			rtas.size = __be32_to_cpu(*sizep);
 			entryp = of_get_property(rtas.dev,
 					"linux,rtas-entry", NULL);
 			if (entryp == NULL) /* Ugh */
 				rtas.entry = rtas.base;
 			else
-				rtas.entry = *entryp;
+				rtas.entry = __be32_to_cpu(*entryp);
 		} else
 			rtas.dev = NULL;
 	}
-- 
1.8.1.2


* [PATCH 13/63] powerpc: Make cache info device tree accesses endian safe
  2013-08-06 16:01 [PATCH 00/63] 64bit PowerPC little endian support Anton Blanchard
                   ` (11 preceding siblings ...)
  2013-08-06 16:01 ` [PATCH 12/63] powerpc: Make RTAS device tree accesses endian safe Anton Blanchard
@ 2013-08-06 16:01 ` Anton Blanchard
  2013-08-06 16:01 ` [PATCH 14/63] powerpc: Make RTAS calls " Anton Blanchard
                   ` (49 subsequent siblings)
  62 siblings, 0 replies; 76+ messages in thread
From: Anton Blanchard @ 2013-08-06 16:01 UTC (permalink / raw)
  To: Benjamin Herrenschmidt, Paul Mackerras; +Cc: linuxppc-dev

Signed-off-by: Anton Blanchard <anton@samba.org>
---
 arch/powerpc/kernel/setup_64.c | 10 +++++-----
 1 file changed, 5 insertions(+), 5 deletions(-)

diff --git a/arch/powerpc/kernel/setup_64.c b/arch/powerpc/kernel/setup_64.c
index b3b5fd3..00dfcc5 100644
--- a/arch/powerpc/kernel/setup_64.c
+++ b/arch/powerpc/kernel/setup_64.c
@@ -305,14 +305,14 @@ static void __init initialize_cache_info(void)
 		 * d-cache and i-cache sizes... -Peter
 		 */
 		if (num_cpus == 1) {
-			const u32 *sizep, *lsizep;
+			const __be32 *sizep, *lsizep;
 			u32 size, lsize;
 
 			size = 0;
 			lsize = cur_cpu_spec->dcache_bsize;
 			sizep = of_get_property(np, "d-cache-size", NULL);
 			if (sizep != NULL)
-				size = *sizep;
+				size = be32_to_cpu(*sizep);
 			lsizep = of_get_property(np, "d-cache-block-size",
 						 NULL);
 			/* fallback if block size missing */
@@ -321,7 +321,7 @@ static void __init initialize_cache_info(void)
 							 "d-cache-line-size",
 							 NULL);
 			if (lsizep != NULL)
-				lsize = *lsizep;
+				lsize = be32_to_cpu(*lsizep);
 			if (sizep == NULL || lsizep == NULL)
 				DBG("Argh, can't find dcache properties ! "
 				    "sizep: %p, lsizep: %p\n", sizep, lsizep);
@@ -335,7 +335,7 @@ static void __init initialize_cache_info(void)
 			lsize = cur_cpu_spec->icache_bsize;
 			sizep = of_get_property(np, "i-cache-size", NULL);
 			if (sizep != NULL)
-				size = *sizep;
+				size = be32_to_cpu(*sizep);
 			lsizep = of_get_property(np, "i-cache-block-size",
 						 NULL);
 			if (lsizep == NULL)
@@ -343,7 +343,7 @@ static void __init initialize_cache_info(void)
 							 "i-cache-line-size",
 							 NULL);
 			if (lsizep != NULL)
-				lsize = *lsizep;
+				lsize = be32_to_cpu(*lsizep);
 			if (sizep == NULL || lsizep == NULL)
 				DBG("Argh, can't find icache properties ! "
 				    "sizep: %p, lsizep: %p\n", sizep, lsizep);
-- 
1.8.1.2

^ permalink raw reply related	[flat|nested] 76+ messages in thread

* [PATCH 14/63] powerpc: Make RTAS calls endian safe
  2013-08-06 16:01 [PATCH 00/63] 64bit PowerPC little endian support Anton Blanchard
                   ` (12 preceding siblings ...)
  2013-08-06 16:01 ` [PATCH 13/63] powerpc: Make cache info " Anton Blanchard
@ 2013-08-06 16:01 ` Anton Blanchard
  2013-08-06 16:01 ` [PATCH 15/63] powerpc: Make logical to real cpu mapping code " Anton Blanchard
                   ` (48 subsequent siblings)
  62 siblings, 0 replies; 76+ messages in thread
From: Anton Blanchard @ 2013-08-06 16:01 UTC (permalink / raw)
  To: Benjamin Herrenschmidt, Paul Mackerras; +Cc: linuxppc-dev

RTAS expects the arguments in the call buffer to be big endian, so we
need to byteswap them on little endian builds.

Signed-off-by: Anton Blanchard <anton@samba.org>
---
 arch/powerpc/include/asm/rtas.h |  8 ++++----
 arch/powerpc/kernel/rtas.c      | 38 +++++++++++++++++++-------------------
 2 files changed, 23 insertions(+), 23 deletions(-)

diff --git a/arch/powerpc/include/asm/rtas.h b/arch/powerpc/include/asm/rtas.h
index c7a8bfc..9bd52c6 100644
--- a/arch/powerpc/include/asm/rtas.h
+++ b/arch/powerpc/include/asm/rtas.h
@@ -44,12 +44,12 @@
  *
  */
 
-typedef u32 rtas_arg_t;
+typedef __be32 rtas_arg_t;
 
 struct rtas_args {
-	u32 token;
-	u32 nargs;
-	u32 nret; 
+	__be32 token;
+	__be32 nargs;
+	__be32 nret; 
 	rtas_arg_t args[16];
 	rtas_arg_t *rets;     /* Pointer to return values in args[]. */
 };  
diff --git a/arch/powerpc/kernel/rtas.c b/arch/powerpc/kernel/rtas.c
index 98b26af..4cf674d 100644
--- a/arch/powerpc/kernel/rtas.c
+++ b/arch/powerpc/kernel/rtas.c
@@ -91,7 +91,7 @@ static void unlock_rtas(unsigned long flags)
  * are designed only for very early low-level debugging, which
  * is why the token is hard-coded to 10.
  */
-static void call_rtas_display_status(char c)
+static void call_rtas_display_status(unsigned char c)
 {
 	struct rtas_args *args = &rtas.args;
 	unsigned long s;
@@ -100,11 +100,11 @@ static void call_rtas_display_status(char c)
 		return;
 	s = lock_rtas();
 
-	args->token = 10;
-	args->nargs = 1;
-	args->nret  = 1;
-	args->rets  = (rtas_arg_t *)&(args->args[1]);
-	args->args[0] = (unsigned char)c;
+	args->token = cpu_to_be32(10);
+	args->nargs = cpu_to_be32(1);
+	args->nret  = cpu_to_be32(1);
+	args->rets  = &(args->args[1]);
+	args->args[0] = cpu_to_be32(c);
 
 	enter_rtas(__pa(args));
 
@@ -380,11 +380,11 @@ static char *__fetch_rtas_last_error(char *altbuf)
 
 	bufsz = rtas_get_error_log_max();
 
-	err_args.token = rtas_last_error_token;
-	err_args.nargs = 2;
-	err_args.nret = 1;
-	err_args.args[0] = (rtas_arg_t)__pa(rtas_err_buf);
-	err_args.args[1] = bufsz;
+	err_args.token = cpu_to_be32(rtas_last_error_token);
+	err_args.nargs = cpu_to_be32(2);
+	err_args.nret = cpu_to_be32(1);
+	err_args.args[0] = cpu_to_be32(__pa(rtas_err_buf));
+	err_args.args[1] = cpu_to_be32(bufsz);
 	err_args.args[2] = 0;
 
 	save_args = rtas.args;
@@ -433,13 +433,13 @@ int rtas_call(int token, int nargs, int nret, int *outputs, ...)
 	s = lock_rtas();
 	rtas_args = &rtas.args;
 
-	rtas_args->token = token;
-	rtas_args->nargs = nargs;
-	rtas_args->nret  = nret;
-	rtas_args->rets  = (rtas_arg_t *)&(rtas_args->args[nargs]);
+	rtas_args->token = cpu_to_be32(token);
+	rtas_args->nargs = cpu_to_be32(nargs);
+	rtas_args->nret  = cpu_to_be32(nret);
+	rtas_args->rets  = &(rtas_args->args[nargs]);
 	va_start(list, outputs);
 	for (i = 0; i < nargs; ++i)
-		rtas_args->args[i] = va_arg(list, rtas_arg_t);
+		rtas_args->args[i] = cpu_to_be32(va_arg(list, __u32));
 	va_end(list);
 
 	for (i = 0; i < nret; ++i)
@@ -449,13 +449,13 @@ int rtas_call(int token, int nargs, int nret, int *outputs, ...)
 
 	/* A -1 return code indicates that the last command couldn't
 	   be completed due to a hardware error. */
-	if (rtas_args->rets[0] == -1)
+	if (be32_to_cpu(rtas_args->rets[0]) == -1)
 		buff_copy = __fetch_rtas_last_error(NULL);
 
 	if (nret > 1 && outputs != NULL)
 		for (i = 0; i < nret-1; ++i)
-			outputs[i] = rtas_args->rets[i+1];
-	ret = (nret > 0)? rtas_args->rets[0]: 0;
+			outputs[i] = be32_to_cpu(rtas_args->rets[i+1]);
+	ret = (nret > 0)? be32_to_cpu(rtas_args->rets[0]): 0;
 
 	unlock_rtas(s);
 
-- 
1.8.1.2

^ permalink raw reply related	[flat|nested] 76+ messages in thread
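
With the byteswaps moved inside rtas_call() itself, callers keep
working in native endianness on both sides of the interface. A rough
sketch of a caller, assuming the get-time-of-day service (0 inputs,
8 outputs per PAPR); the helper name and the use of the first output
word are illustrative only:

#include <linux/errno.h>
#include <linux/printk.h>
#include <asm/rtas.h>

/* Sketch: rtas_call() now byteswaps the buffer, callers stay native endian. */
static int example_read_rtc_year(void)
{
	int token = rtas_token("get-time-of-day");
	int ret[8];

	if (token == RTAS_UNKNOWN_SERVICE)
		return -ENODEV;

	/* 0 input args, 8 return words (status plus the date/time fields) */
	if (rtas_call(token, 0, 8, ret))
		return -EIO;

	pr_info("RTAS reports year %d\n", ret[0]);
	return 0;
}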

* [PATCH 15/63] powerpc: Make logical to real cpu mapping code endian safe
  2013-08-06 16:01 [PATCH 00/63] 64bit PowerPC little endian support Anton Blanchard
                   ` (13 preceding siblings ...)
  2013-08-06 16:01 ` [PATCH 14/63] powerpc: Make RTAS calls " Anton Blanchard
@ 2013-08-06 16:01 ` Anton Blanchard
  2013-08-06 16:01 ` [PATCH 16/63] powerpc: More little endian fixes for setup-common.c Anton Blanchard
                   ` (47 subsequent siblings)
  62 siblings, 0 replies; 76+ messages in thread
From: Anton Blanchard @ 2013-08-06 16:01 UTC (permalink / raw)
  To: Benjamin Herrenschmidt, Paul Mackerras; +Cc: linuxppc-dev

Signed-off-by: Anton Blanchard <anton@samba.org>
---
 arch/powerpc/kernel/setup-common.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/arch/powerpc/kernel/setup-common.c b/arch/powerpc/kernel/setup-common.c
index 63d051f..ee0e055 100644
--- a/arch/powerpc/kernel/setup-common.c
+++ b/arch/powerpc/kernel/setup-common.c
@@ -436,7 +436,7 @@ void __init smp_setup_cpu_maps(void)
 	DBG("smp_setup_cpu_maps()\n");
 
 	while ((dn = of_find_node_by_type(dn, "cpu")) && cpu < nr_cpu_ids) {
-		const int *intserv;
+		const __be32 *intserv;
 		int j, len;
 
 		DBG("  * %s...\n", dn->full_name);
@@ -456,9 +456,9 @@ void __init smp_setup_cpu_maps(void)
 
 		for (j = 0; j < nthreads && cpu < nr_cpu_ids; j++) {
 			DBG("    thread %d -> cpu %d (hard id %d)\n",
-			    j, cpu, intserv[j]);
+			    j, cpu, be32_to_cpu(intserv[j]));
 			set_cpu_present(cpu, true);
-			set_hard_smp_processor_id(cpu, intserv[j]);
+			set_hard_smp_processor_id(cpu, be32_to_cpu(intserv[j]));
 			set_cpu_possible(cpu, true);
 			cpu++;
 		}
-- 
1.8.1.2

^ permalink raw reply related	[flat|nested] 76+ messages in thread

* [PATCH 16/63] powerpc: More little endian fixes for setup-common.c
  2013-08-06 16:01 [PATCH 00/63] 64bit PowerPC little endian support Anton Blanchard
                   ` (14 preceding siblings ...)
  2013-08-06 16:01 ` [PATCH 15/63] powerpc: Make logical to real cpu mapping code " Anton Blanchard
@ 2013-08-06 16:01 ` Anton Blanchard
  2013-08-06 16:01 ` [PATCH 17/63] powerpc: Add some endian annotations to time and xics code Anton Blanchard
                   ` (46 subsequent siblings)
  62 siblings, 0 replies; 76+ messages in thread
From: Anton Blanchard @ 2013-08-06 16:01 UTC (permalink / raw)
  To: Benjamin Herrenschmidt, Paul Mackerras; +Cc: linuxppc-dev

From: Alistair Popple <alistair@popple.id.au>

Signed-off-by: Alistair Popple <alistair@popple.id.au>
---
 arch/powerpc/kernel/setup-common.c | 7 +++++--
 1 file changed, 5 insertions(+), 2 deletions(-)

diff --git a/arch/powerpc/kernel/setup-common.c b/arch/powerpc/kernel/setup-common.c
index ee0e055..3d261c0 100644
--- a/arch/powerpc/kernel/setup-common.c
+++ b/arch/powerpc/kernel/setup-common.c
@@ -437,6 +437,7 @@ void __init smp_setup_cpu_maps(void)
 
 	while ((dn = of_find_node_by_type(dn, "cpu")) && cpu < nr_cpu_ids) {
 		const __be32 *intserv;
+		__be32 cpu_be;
 		int j, len;
 
 		DBG("  * %s...\n", dn->full_name);
@@ -450,8 +451,10 @@ void __init smp_setup_cpu_maps(void)
 		} else {
 			DBG("    no ibm,ppc-interrupt-server#s -> 1 thread\n");
 			intserv = of_get_property(dn, "reg", NULL);
-			if (!intserv)
-				intserv = &cpu;	/* assume logical == phys */
+			if (!intserv) {
+				cpu_be = cpu_to_be32(cpu);
+				intserv = &cpu_be;	/* assume logical == phys */
+			}
 		}
 
 		for (j = 0; j < nthreads && cpu < nr_cpu_ids; j++) {
-- 
1.8.1.2

^ permalink raw reply related	[flat|nested] 76+ messages in thread
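
The subtle part of the hunk above is the type of the fallback: the
consumer does be32_to_cpu(intserv[j]), so when "reg" is absent the
logical cpu number has to be stored big endian first, otherwise it
would come back byteswapped on little endian. A condensed sketch
(the helper name is made up):

#include <linux/of.h>
#include <asm/byteorder.h>

/* Why the "assume logical == phys" fallback must itself be big endian. */
static u32 example_hard_id(struct device_node *dn, u32 logical_cpu)
{
	const __be32 *intserv;
	__be32 cpu_be;

	intserv = of_get_property(dn, "reg", NULL);
	if (!intserv) {
		cpu_be = cpu_to_be32(logical_cpu);	/* store big endian... */
		intserv = &cpu_be;
	}

	return be32_to_cpu(intserv[0]);		/* ...so this round-trips */
}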

* [PATCH 17/63] powerpc: Add some endian annotations to time and xics code
  2013-08-06 16:01 [PATCH 00/63] 64bit PowerPC little endian support Anton Blanchard
                   ` (15 preceding siblings ...)
  2013-08-06 16:01 ` [PATCH 16/63] powerpc: More little endian fixes for setup-common.c Anton Blanchard
@ 2013-08-06 16:01 ` Anton Blanchard
  2013-08-06 16:01 ` [PATCH 18/63] powerpc: Fix some endian issues in " Anton Blanchard
                   ` (45 subsequent siblings)
  62 siblings, 0 replies; 76+ messages in thread
From: Anton Blanchard @ 2013-08-06 16:01 UTC (permalink / raw)
  To: Benjamin Herrenschmidt, Paul Mackerras; +Cc: linuxppc-dev

Fix a couple of sparse warnings.

Signed-off-by: Anton Blanchard <anton@samba.org>
---
 arch/powerpc/kernel/time.c            | 2 +-
 arch/powerpc/sysdev/xics/icp-native.c | 2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/arch/powerpc/kernel/time.c b/arch/powerpc/kernel/time.c
index 65ab9e9..c863aa1 100644
--- a/arch/powerpc/kernel/time.c
+++ b/arch/powerpc/kernel/time.c
@@ -612,7 +612,7 @@ unsigned long long sched_clock(void)
 static int __init get_freq(char *name, int cells, unsigned long *val)
 {
 	struct device_node *cpu;
-	const unsigned int *fp;
+	const __be32 *fp;
 	int found = 0;
 
 	/* The cpu node should have timebase and clock frequency properties */
diff --git a/arch/powerpc/sysdev/xics/icp-native.c b/arch/powerpc/sysdev/xics/icp-native.c
index 7cd728b..9dee470 100644
--- a/arch/powerpc/sysdev/xics/icp-native.c
+++ b/arch/powerpc/sysdev/xics/icp-native.c
@@ -216,7 +216,7 @@ static int __init icp_native_init_one_node(struct device_node *np,
 					   unsigned int *indx)
 {
 	unsigned int ilen;
-	const u32 *ireg;
+	const __be32 *ireg;
 	int i;
 	int reg_tuple_size;
 	int num_servers = 0;
-- 
1.8.1.2

^ permalink raw reply related	[flat|nested] 76+ messages in thread
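
The point of annotating these pointers as __be32 is that sparse's
__bitwise checking then flags any use that skips the conversion. A
small illustration, assuming sparse is run with endian checking
enabled (at the time of this series, something like
make C=2 CF="-D__CHECK_ENDIAN__"):

#include <linux/of.h>
#include <asm/byteorder.h>

static u32 example_timebase_freq(struct device_node *cpu)
{
	const __be32 *fp = of_get_property(cpu, "timebase-frequency", NULL);

	if (!fp)
		return 0;

	/*
	 * return *fp;  <- sparse would warn here: incorrect type in return
	 *                 expression (different base types), catching the
	 *                 missing byteswap at build time.
	 */
	return be32_to_cpu(*fp);
}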

* [PATCH 18/63] powerpc: Fix some endian issues in xics code
  2013-08-06 16:01 [PATCH 00/63] 64bit PowerPC little endian support Anton Blanchard
                   ` (16 preceding siblings ...)
  2013-08-06 16:01 ` [PATCH 17/63] powerpc: Add some endian annotations to time and xics code Anton Blanchard
@ 2013-08-06 16:01 ` Anton Blanchard
  2013-08-06 16:01 ` [PATCH 19/63] powerpc: of_parse_dma_window should take a __be32 *dma_window Anton Blanchard
                   ` (44 subsequent siblings)
  62 siblings, 0 replies; 76+ messages in thread
From: Anton Blanchard @ 2013-08-06 16:01 UTC (permalink / raw)
  To: Benjamin Herrenschmidt, Paul Mackerras; +Cc: linuxppc-dev

Signed-off-by: Anton Blanchard <anton@samba.org>
---
 arch/powerpc/sysdev/xics/xics-common.c | 10 +++++-----
 1 file changed, 5 insertions(+), 5 deletions(-)

diff --git a/arch/powerpc/sysdev/xics/xics-common.c b/arch/powerpc/sysdev/xics/xics-common.c
index 9049d9f..fe0cca4 100644
--- a/arch/powerpc/sysdev/xics/xics-common.c
+++ b/arch/powerpc/sysdev/xics/xics-common.c
@@ -49,7 +49,7 @@ void xics_update_irq_servers(void)
 	int i, j;
 	struct device_node *np;
 	u32 ilen;
-	const u32 *ireg;
+	const __be32 *ireg;
 	u32 hcpuid;
 
 	/* Find the server numbers for the boot cpu. */
@@ -75,8 +75,8 @@ void xics_update_irq_servers(void)
 	 * default distribution server
 	 */
 	for (j = 0; j < i; j += 2) {
-		if (ireg[j] == hcpuid) {
-			xics_default_distrib_server = ireg[j+1];
+		if (be32_to_cpu(ireg[j]) == hcpuid) {
+			xics_default_distrib_server = be32_to_cpu(ireg[j+1]);
 			break;
 		}
 	}
@@ -383,7 +383,7 @@ void __init xics_register_ics(struct ics *ics)
 static void __init xics_get_server_size(void)
 {
 	struct device_node *np;
-	const u32 *isize;
+	const __be32 *isize;
 
 	/* We fetch the interrupt server size from the first ICS node
 	 * we find if any
@@ -394,7 +394,7 @@ static void __init xics_get_server_size(void)
 	isize = of_get_property(np, "ibm,interrupt-server#-size", NULL);
 	if (!isize)
 		return;
-	xics_interrupt_server_size = *isize;
+	xics_interrupt_server_size = be32_to_cpu(*isize);
 	of_node_put(np);
 }
 
-- 
1.8.1.2

^ permalink raw reply related	[flat|nested] 76+ messages in thread
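
The loop above walks a property laid out as pairs of 32-bit cells, a
server number followed by its default distribution server, so both
halves of each pair need converting. A condensed sketch with an
illustrative helper name:

#include <linux/types.h>
#include <asm/byteorder.h>

/* Find the second cell of the pair whose first cell matches hcpuid;
 * both cells are big endian in the device tree. */
static u32 example_find_distrib_server(const __be32 *ireg, int ncells,
				       u32 hcpuid)
{
	int j;

	for (j = 0; j + 1 < ncells; j += 2) {
		if (be32_to_cpu(ireg[j]) == hcpuid)
			return be32_to_cpu(ireg[j + 1]);
	}

	return 0;	/* caller keeps its existing default */
}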

* [PATCH 19/63] powerpc: of_parse_dma_window should take a __be32 *dma_window
  2013-08-06 16:01 [PATCH 00/63] 64bit PowerPC little endian support Anton Blanchard
                   ` (17 preceding siblings ...)
  2013-08-06 16:01 ` [PATCH 18/63] powerpc: Fix some endian issues in " Anton Blanchard
@ 2013-08-06 16:01 ` Anton Blanchard
  2013-08-06 16:01 ` [PATCH 20/63] powerpc: Make device tree accesses in cache info code endian safe Anton Blanchard
                   ` (43 subsequent siblings)
  62 siblings, 0 replies; 76+ messages in thread
From: Anton Blanchard @ 2013-08-06 16:01 UTC (permalink / raw)
  To: Benjamin Herrenschmidt, Paul Mackerras; +Cc: linuxppc-dev

We pass dma_window to of_parse_dma_window as a void * and then
jump through hoops to cast it back to a u32 array. In the process
we lose the endian annotation.

Simplify it by just passing a __be32 * down.

Signed-off-by: Anton Blanchard <anton@samba.org>
---
 arch/powerpc/include/asm/prom.h        |  5 +++--
 arch/powerpc/kernel/prom_parse.c       | 17 ++++++++---------
 arch/powerpc/kernel/vio.c              |  2 +-
 arch/powerpc/platforms/cell/iommu.c    |  2 +-
 arch/powerpc/platforms/pseries/iommu.c |  8 ++++----
 5 files changed, 17 insertions(+), 17 deletions(-)

diff --git a/arch/powerpc/include/asm/prom.h b/arch/powerpc/include/asm/prom.h
index bc2da15..fa2f30c 100644
--- a/arch/powerpc/include/asm/prom.h
+++ b/arch/powerpc/include/asm/prom.h
@@ -38,8 +38,9 @@ extern unsigned long pci_address_to_pio(phys_addr_t address);
 /* Parse the ibm,dma-window property of an OF node into the busno, phys and
  * size parameters.
  */
-void of_parse_dma_window(struct device_node *dn, const void *dma_window_prop,
-		unsigned long *busno, unsigned long *phys, unsigned long *size);
+void of_parse_dma_window(struct device_node *dn, const __be32 *dma_window,
+			 unsigned long *busno, unsigned long *phys,
+			 unsigned long *size);
 
 extern void kdump_move_device_tree(void);
 
diff --git a/arch/powerpc/kernel/prom_parse.c b/arch/powerpc/kernel/prom_parse.c
index 4e1331b..6295e64 100644
--- a/arch/powerpc/kernel/prom_parse.c
+++ b/arch/powerpc/kernel/prom_parse.c
@@ -7,28 +7,27 @@
 #include <linux/of_address.h>
 #include <asm/prom.h>
 
-void of_parse_dma_window(struct device_node *dn, const void *dma_window_prop,
-		unsigned long *busno, unsigned long *phys, unsigned long *size)
+void of_parse_dma_window(struct device_node *dn, const __be32 *dma_window,
+			 unsigned long *busno, unsigned long *phys,
+			 unsigned long *size)
 {
-	const u32 *dma_window;
 	u32 cells;
-	const unsigned char *prop;
-
-	dma_window = dma_window_prop;
+	const __be32 *prop;
 
 	/* busno is always one cell */
-	*busno = *(dma_window++);
+	*busno = of_read_number(dma_window, 1);
+	dma_window++;
 
 	prop = of_get_property(dn, "ibm,#dma-address-cells", NULL);
 	if (!prop)
 		prop = of_get_property(dn, "#address-cells", NULL);
 
-	cells = prop ? *(u32 *)prop : of_n_addr_cells(dn);
+	cells = prop ? of_read_number(prop, 1) : of_n_addr_cells(dn);
 	*phys = of_read_number(dma_window, cells);
 
 	dma_window += cells;
 
 	prop = of_get_property(dn, "ibm,#dma-size-cells", NULL);
-	cells = prop ? *(u32 *)prop : of_n_size_cells(dn);
+	cells = prop ? of_read_number(prop, 1) : of_n_size_cells(dn);
 	*size = of_read_number(dma_window, cells);
 }
diff --git a/arch/powerpc/kernel/vio.c b/arch/powerpc/kernel/vio.c
index 536016d..31875a6 100644
--- a/arch/powerpc/kernel/vio.c
+++ b/arch/powerpc/kernel/vio.c
@@ -1153,7 +1153,7 @@ EXPORT_SYMBOL(vio_h_cop_sync);
 
 static struct iommu_table *vio_build_iommu_table(struct vio_dev *dev)
 {
-	const unsigned char *dma_window;
+	const __be32 *dma_window;
 	struct iommu_table *tbl;
 	unsigned long offset, size;
 
diff --git a/arch/powerpc/platforms/cell/iommu.c b/arch/powerpc/platforms/cell/iommu.c
index 946306b..b535606 100644
--- a/arch/powerpc/platforms/cell/iommu.c
+++ b/arch/powerpc/platforms/cell/iommu.c
@@ -697,7 +697,7 @@ static int __init cell_iommu_get_window(struct device_node *np,
 					 unsigned long *base,
 					 unsigned long *size)
 {
-	const void *dma_window;
+	const __be32 *dma_window;
 	unsigned long index;
 
 	/* Use ibm,dma-window if available, else, hard code ! */
diff --git a/arch/powerpc/platforms/pseries/iommu.c b/arch/powerpc/platforms/pseries/iommu.c
index 23fc1dc..9087f97 100644
--- a/arch/powerpc/platforms/pseries/iommu.c
+++ b/arch/powerpc/platforms/pseries/iommu.c
@@ -530,7 +530,7 @@ static void iommu_table_setparms(struct pci_controller *phb,
 static void iommu_table_setparms_lpar(struct pci_controller *phb,
 				      struct device_node *dn,
 				      struct iommu_table *tbl,
-				      const void *dma_window)
+				      const __be32 *dma_window)
 {
 	unsigned long offset, size;
 
@@ -630,7 +630,7 @@ static void pci_dma_bus_setup_pSeriesLP(struct pci_bus *bus)
 	struct iommu_table *tbl;
 	struct device_node *dn, *pdn;
 	struct pci_dn *ppci;
-	const void *dma_window = NULL;
+	const __be32 *dma_window = NULL;
 
 	dn = pci_bus_to_OF_node(bus);
 
@@ -1152,7 +1152,7 @@ static void pci_dma_dev_setup_pSeriesLP(struct pci_dev *dev)
 {
 	struct device_node *pdn, *dn;
 	struct iommu_table *tbl;
-	const void *dma_window = NULL;
+	const __be32 *dma_window = NULL;
 	struct pci_dn *pci;
 
 	pr_debug("pci_dma_dev_setup_pSeriesLP: %s\n", pci_name(dev));
@@ -1201,7 +1201,7 @@ static int dma_set_mask_pSeriesLP(struct device *dev, u64 dma_mask)
 	bool ddw_enabled = false;
 	struct device_node *pdn, *dn;
 	struct pci_dev *pdev;
-	const void *dma_window = NULL;
+	const __be32 *dma_window = NULL;
 	u64 dma_offset;
 
 	if (!dev->dma_mask)
-- 
1.8.1.2

^ permalink raw reply related	[flat|nested] 76+ messages in thread
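
of_read_number() is what keeps the reworked parser endian clean: it
takes a pointer to big endian cells plus a cell count and returns a
native-endian u64, so one-cell and two-cell fields are handled the
same way. Roughly what the generic helper in <linux/of.h> does:

#include <linux/types.h>
#include <asm/byteorder.h>

/* Roughly what of_read_number(cell, size) does: most significant cell first. */
static u64 example_read_cells(const __be32 *cell, int size)
{
	u64 r = 0;

	while (size--)
		r = (r << 32) | be32_to_cpu(*cell++);

	return r;
}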

* [PATCH 20/63] powerpc: Make device tree accesses in cache info code endian safe
  2013-08-06 16:01 [PATCH 00/63] 64bit PowerPC little endian support Anton Blanchard
                   ` (18 preceding siblings ...)
  2013-08-06 16:01 ` [PATCH 19/63] powerpc: of_parse_dma_window should take a __be32 *dma_window Anton Blanchard
@ 2013-08-06 16:01 ` Anton Blanchard
  2013-08-06 16:01 ` [PATCH 21/63] powerpc: Make prom_init.c " Anton Blanchard
                   ` (42 subsequent siblings)
  62 siblings, 0 replies; 76+ messages in thread
From: Anton Blanchard @ 2013-08-06 16:01 UTC (permalink / raw)
  To: Benjamin Herrenschmidt, Paul Mackerras; +Cc: linuxppc-dev

Signed-off-by: Anton Blanchard <anton@samba.org>
---
 arch/powerpc/kernel/cacheinfo.c | 12 ++++++------
 1 file changed, 6 insertions(+), 6 deletions(-)

diff --git a/arch/powerpc/kernel/cacheinfo.c b/arch/powerpc/kernel/cacheinfo.c
index 9262cf2..6549327 100644
--- a/arch/powerpc/kernel/cacheinfo.c
+++ b/arch/powerpc/kernel/cacheinfo.c
@@ -196,7 +196,7 @@ static void cache_cpu_set(struct cache *cache, int cpu)
 static int cache_size(const struct cache *cache, unsigned int *ret)
 {
 	const char *propname;
-	const u32 *cache_size;
+	const __be32 *cache_size;
 
 	propname = cache_type_info[cache->type].size_prop;
 
@@ -204,7 +204,7 @@ static int cache_size(const struct cache *cache, unsigned int *ret)
 	if (!cache_size)
 		return -ENODEV;
 
-	*ret = *cache_size;
+	*ret = of_read_number(cache_size, 1);
 	return 0;
 }
 
@@ -222,7 +222,7 @@ static int cache_size_kb(const struct cache *cache, unsigned int *ret)
 /* not cache_line_size() because that's a macro in include/linux/cache.h */
 static int cache_get_line_size(const struct cache *cache, unsigned int *ret)
 {
-	const u32 *line_size;
+	const __be32 *line_size;
 	int i, lim;
 
 	lim = ARRAY_SIZE(cache_type_info[cache->type].line_size_props);
@@ -239,14 +239,14 @@ static int cache_get_line_size(const struct cache *cache, unsigned int *ret)
 	if (!line_size)
 		return -ENODEV;
 
-	*ret = *line_size;
+	*ret = of_read_number(line_size, 1);
 	return 0;
 }
 
 static int cache_nr_sets(const struct cache *cache, unsigned int *ret)
 {
 	const char *propname;
-	const u32 *nr_sets;
+	const __be32 *nr_sets;
 
 	propname = cache_type_info[cache->type].nr_sets_prop;
 
@@ -254,7 +254,7 @@ static int cache_nr_sets(const struct cache *cache, unsigned int *ret)
 	if (!nr_sets)
 		return -ENODEV;
 
-	*ret = *nr_sets;
+	*ret = of_read_number(nr_sets, 1);
 	return 0;
 }
 
-- 
1.8.1.2

^ permalink raw reply related	[flat|nested] 76+ messages in thread

* [PATCH 21/63] powerpc: Make prom_init.c endian safe
  2013-08-06 16:01 [PATCH 00/63] 64bit PowerPC little endian support Anton Blanchard
                   ` (19 preceding siblings ...)
  2013-08-06 16:01 ` [PATCH 20/63] powerpc: Make device tree accesses in cache info code endian safe Anton Blanchard
@ 2013-08-06 16:01 ` Anton Blanchard
  2013-08-06 16:01 ` [PATCH 22/63] powerpc: Make device tree accesses in HVC VIO console " Anton Blanchard
                   ` (41 subsequent siblings)
  62 siblings, 0 replies; 76+ messages in thread
From: Anton Blanchard @ 2013-08-06 16:01 UTC (permalink / raw)
  To: Benjamin Herrenschmidt, Paul Mackerras; +Cc: linuxppc-dev

From: Benjamin Herrenschmidt <benh@kernel.crashing.org>

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Anton Blanchard <anton@samba.org>
---
 arch/powerpc/kernel/prom_init.c | 253 +++++++++++++++++++++++-----------------
 1 file changed, 147 insertions(+), 106 deletions(-)

diff --git a/arch/powerpc/kernel/prom_init.c b/arch/powerpc/kernel/prom_init.c
index 6079024..5996a2983 100644
--- a/arch/powerpc/kernel/prom_init.c
+++ b/arch/powerpc/kernel/prom_init.c
@@ -107,10 +107,10 @@ int of_workarounds;
 typedef u32 prom_arg_t;
 
 struct prom_args {
-        u32 service;
-        u32 nargs;
-        u32 nret;
-        prom_arg_t args[10];
+        __be32 service;
+        __be32 nargs;
+        __be32 nret;
+        __be32 args[10];
 };
 
 struct prom_t {
@@ -123,11 +123,11 @@ struct prom_t {
 };
 
 struct mem_map_entry {
-	u64	base;
-	u64	size;
+	__be64	base;
+	__be64	size;
 };
 
-typedef u32 cell_t;
+typedef __be32 cell_t;
 
 extern void __start(unsigned long r3, unsigned long r4, unsigned long r5,
 		    unsigned long r6, unsigned long r7, unsigned long r8,
@@ -219,13 +219,13 @@ static int __init call_prom(const char *service, int nargs, int nret, ...)
 	struct prom_args args;
 	va_list list;
 
-	args.service = ADDR(service);
-	args.nargs = nargs;
-	args.nret = nret;
+	args.service = cpu_to_be32(ADDR(service));
+	args.nargs = cpu_to_be32(nargs);
+	args.nret = cpu_to_be32(nret);
 
 	va_start(list, nret);
 	for (i = 0; i < nargs; i++)
-		args.args[i] = va_arg(list, prom_arg_t);
+		args.args[i] = cpu_to_be32(va_arg(list, prom_arg_t));
 	va_end(list);
 
 	for (i = 0; i < nret; i++)
@@ -234,7 +234,7 @@ static int __init call_prom(const char *service, int nargs, int nret, ...)
 	if (enter_prom(&args, prom_entry) < 0)
 		return PROM_ERROR;
 
-	return (nret > 0) ? args.args[nargs] : 0;
+	return (nret > 0) ? be32_to_cpu(args.args[nargs]) : 0;
 }
 
 static int __init call_prom_ret(const char *service, int nargs, int nret,
@@ -244,13 +244,13 @@ static int __init call_prom_ret(const char *service, int nargs, int nret,
 	struct prom_args args;
 	va_list list;
 
-	args.service = ADDR(service);
-	args.nargs = nargs;
-	args.nret = nret;
+	args.service = cpu_to_be32(ADDR(service));
+	args.nargs = cpu_to_be32(nargs);
+	args.nret = cpu_to_be32(nret);
 
 	va_start(list, rets);
 	for (i = 0; i < nargs; i++)
-		args.args[i] = va_arg(list, prom_arg_t);
+		args.args[i] = cpu_to_be32(va_arg(list, prom_arg_t));
 	va_end(list);
 
 	for (i = 0; i < nret; i++)
@@ -261,9 +261,9 @@ static int __init call_prom_ret(const char *service, int nargs, int nret,
 
 	if (rets != NULL)
 		for (i = 1; i < nret; ++i)
-			rets[i-1] = args.args[nargs+i];
+			rets[i-1] = be32_to_cpu(args.args[nargs+i]);
 
-	return (nret > 0) ? args.args[nargs] : 0;
+	return (nret > 0) ? be32_to_cpu(args.args[nargs]) : 0;
 }
 
 
@@ -527,7 +527,7 @@ static int __init prom_setprop(phandle node, const char *nodename,
 #define islower(c)	('a' <= (c) && (c) <= 'z')
 #define toupper(c)	(islower(c) ? ((c) - 'a' + 'A') : (c))
 
-unsigned long prom_strtoul(const char *cp, const char **endp)
+static unsigned long prom_strtoul(const char *cp, const char **endp)
 {
 	unsigned long result = 0, base = 10, value;
 
@@ -552,7 +552,7 @@ unsigned long prom_strtoul(const char *cp, const char **endp)
 	return result;
 }
 
-unsigned long prom_memparse(const char *ptr, const char **retptr)
+static unsigned long prom_memparse(const char *ptr, const char **retptr)
 {
 	unsigned long ret = prom_strtoul(ptr, retptr);
 	int shift = 0;
@@ -724,7 +724,8 @@ unsigned char ibm_architecture_vec[] = {
 
 };
 
-/* Old method - ELF header with PT_NOTE sections */
+/* Old method - ELF header with PT_NOTE sections only works on BE */
+#ifdef __BIG_ENDIAN__
 static struct fake_elf {
 	Elf32_Ehdr	elfhdr;
 	Elf32_Phdr	phdr[2];
@@ -810,6 +811,7 @@ static struct fake_elf {
 		}
 	}
 };
+#endif /* __BIG_ENDIAN__ */
 
 static int __init prom_count_smt_threads(void)
 {
@@ -852,9 +854,9 @@ static int __init prom_count_smt_threads(void)
 
 static void __init prom_send_capabilities(void)
 {
-	ihandle elfloader, root;
+	ihandle root;
 	prom_arg_t ret;
-	u32 *cores;
+	__be32 *cores;
 
 	root = call_prom("open", 1, 1, ADDR("/"));
 	if (root != 0) {
@@ -864,15 +866,15 @@ static void __init prom_send_capabilities(void)
 		 * (we assume this is the same for all cores) and use it to
 		 * divide NR_CPUS.
 		 */
-		cores = (u32 *)&ibm_architecture_vec[IBM_ARCH_VEC_NRCORES_OFFSET];
-		if (*cores != NR_CPUS) {
+		cores = (__be32 *)&ibm_architecture_vec[IBM_ARCH_VEC_NRCORES_OFFSET];
+		if (be32_to_cpup(cores) != NR_CPUS) {
 			prom_printf("WARNING ! "
 				    "ibm_architecture_vec structure inconsistent: %lu!\n",
-				    *cores);
+				    be32_to_cpup(cores));
 		} else {
-			*cores = DIV_ROUND_UP(NR_CPUS, prom_count_smt_threads());
+			*cores = cpu_to_be32(DIV_ROUND_UP(NR_CPUS, prom_count_smt_threads()));
 			prom_printf("Max number of cores passed to firmware: %lu (NR_CPUS = %lu)\n",
-				    *cores, NR_CPUS);
+				    be32_to_cpup(cores), NR_CPUS);
 		}
 
 		/* try calling the ibm,client-architecture-support method */
@@ -893,17 +895,24 @@ static void __init prom_send_capabilities(void)
 		prom_printf(" not implemented\n");
 	}
 
-	/* no ibm,client-architecture-support call, try the old way */
-	elfloader = call_prom("open", 1, 1, ADDR("/packages/elf-loader"));
-	if (elfloader == 0) {
-		prom_printf("couldn't open /packages/elf-loader\n");
-		return;
+#ifdef __BIG_ENDIAN__
+	{
+		ihandle elfloader;
+
+		/* no ibm,client-architecture-support call, try the old way */
+		elfloader = call_prom("open", 1, 1,
+				      ADDR("/packages/elf-loader"));
+		if (elfloader == 0) {
+			prom_printf("couldn't open /packages/elf-loader\n");
+			return;
+		}
+		call_prom("call-method", 3, 1, ADDR("process-elf-header"),
+			  elfloader, ADDR(&fake_elf));
+		call_prom("close", 1, 0, elfloader);
 	}
-	call_prom("call-method", 3, 1, ADDR("process-elf-header"),
-			elfloader, ADDR(&fake_elf));
-	call_prom("close", 1, 0, elfloader);
+#endif /* __BIG_ENDIAN__ */
 }
-#endif
+#endif /* #if defined(CONFIG_PPC_PSERIES) || defined(CONFIG_PPC_POWERNV) */
 
 /*
  * Memory allocation strategy... our layout is normally:
@@ -1050,11 +1059,11 @@ static unsigned long __init prom_next_cell(int s, cell_t **cellp)
 		p++;
 		s--;
 	}
-	r = *p++;
+	r = be32_to_cpu(*p++);
 #ifdef CONFIG_PPC64
 	if (s > 1) {
 		r <<= 32;
-		r |= *(p++);
+		r |= be32_to_cpu(*(p++));
 	}
 #endif
 	*cellp = p;
@@ -1087,8 +1096,8 @@ static void __init reserve_mem(u64 base, u64 size)
 
 	if (cnt >= (MEM_RESERVE_MAP_SIZE - 1))
 		prom_panic("Memory reserve map exhausted !\n");
-	mem_reserve_map[cnt].base = base;
-	mem_reserve_map[cnt].size = size;
+	mem_reserve_map[cnt].base = cpu_to_be64(base);
+	mem_reserve_map[cnt].size = cpu_to_be64(size);
 	mem_reserve_cnt = cnt + 1;
 }
 
@@ -1102,6 +1111,7 @@ static void __init prom_init_mem(void)
 	char *path, type[64];
 	unsigned int plen;
 	cell_t *p, *endp;
+	__be32 val;
 	u32 rac, rsc;
 
 	/*
@@ -1109,12 +1119,14 @@ static void __init prom_init_mem(void)
 	 * 1) top of RMO (first node)
 	 * 2) top of memory
 	 */
-	rac = 2;
-	prom_getprop(prom.root, "#address-cells", &rac, sizeof(rac));
-	rsc = 1;
-	prom_getprop(prom.root, "#size-cells", &rsc, sizeof(rsc));
-	prom_debug("root_addr_cells: %x\n", (unsigned long) rac);
-	prom_debug("root_size_cells: %x\n", (unsigned long) rsc);
+	val = cpu_to_be32(2);
+	prom_getprop(prom.root, "#address-cells", &val, sizeof(val));
+	rac = be32_to_cpu(val);
+	val = cpu_to_be32(1);
+	prom_getprop(prom.root, "#size-cells", &val, sizeof(rsc));
+	rsc = be32_to_cpu(val);
+	prom_debug("root_addr_cells: %x\n", rac);
+	prom_debug("root_size_cells: %x\n", rsc);
 
 	prom_debug("scanning memory:\n");
 	path = prom_scratch;
@@ -1222,25 +1234,23 @@ static void __init prom_init_mem(void)
 
 static void __init prom_close_stdin(void)
 {
-	ihandle val;
+	__be32 val;
+	ihandle stdin;
 
-	if (prom_getprop(prom.chosen, "stdin", &val, sizeof(val)) > 0)
-		call_prom("close", 1, 0, val);
+	if (prom_getprop(prom.chosen, "stdin", &val, sizeof(val)) > 0) {
+		stdin = be32_to_cpu(val);
+		call_prom("close", 1, 0, stdin);
+	}
 }
 
 #ifdef CONFIG_PPC_POWERNV
 
-static u64 __initdata prom_opal_size;
-static u64 __initdata prom_opal_align;
-static int __initdata prom_rtas_start_cpu;
-static u64 __initdata prom_rtas_data;
-static u64 __initdata prom_rtas_entry;
-
 #ifdef CONFIG_PPC_EARLY_DEBUG_OPAL
 static u64 __initdata prom_opal_base;
 static u64 __initdata prom_opal_entry;
 #endif
 
+#ifdef __BIG_ENDIAN__
 /* XXX Don't change this structure without updating opal-takeover.S */
 static struct opal_secondary_data {
 	s64				ack;	/*  0 */
@@ -1248,6 +1258,12 @@ static struct opal_secondary_data {
 	struct opal_takeover_args	args;	/* 16 */
 } opal_secondary_data;
 
+static u64 __initdata prom_opal_align;
+static u64 __initdata prom_opal_size;
+static int __initdata prom_rtas_start_cpu;
+static u64 __initdata prom_rtas_data;
+static u64 __initdata prom_rtas_entry;
+
 extern char opal_secondary_entry;
 
 static void __init prom_query_opal(void)
@@ -1265,6 +1281,7 @@ static void __init prom_query_opal(void)
 	}
 
 	prom_printf("Querying for OPAL presence... ");
+
 	rc = opal_query_takeover(&prom_opal_size,
 				 &prom_opal_align);
 	prom_debug("(rc = %ld) ", rc);
@@ -1425,6 +1442,7 @@ static void __init prom_opal_takeover(void)
 	for (;;)
 		opal_do_takeover(args);
 }
+#endif /* __BIG_ENDIAN__ */
 
 /*
  * Allocate room for and instantiate OPAL
@@ -1435,6 +1453,7 @@ static void __init prom_instantiate_opal(void)
 	ihandle opal_inst;
 	u64 base, entry;
 	u64 size = 0, align = 0x10000;
+	__be64 val64;
 	u32 rets[2];
 
 	prom_debug("prom_instantiate_opal: start...\n");
@@ -1444,11 +1463,14 @@ static void __init prom_instantiate_opal(void)
 	if (!PHANDLE_VALID(opal_node))
 		return;
 
-	prom_getprop(opal_node, "opal-runtime-size", &size, sizeof(size));
+	val64 = 0;
+	prom_getprop(opal_node, "opal-runtime-size", &val64, sizeof(val64));
+	size = be64_to_cpu(val64);
 	if (size == 0)
 		return;
-	prom_getprop(opal_node, "opal-runtime-alignment", &align,
-		     sizeof(align));
+	val64 = 0;
+	prom_getprop(opal_node, "opal-runtime-alignment", &val64,sizeof(val64));
+	align = be64_to_cpu(val64);
 
 	base = alloc_down(size, align, 0);
 	if (base == 0) {
@@ -1505,6 +1527,7 @@ static void __init prom_instantiate_rtas(void)
 	phandle rtas_node;
 	ihandle rtas_inst;
 	u32 base, entry = 0;
+	__be32 val;
 	u32 size = 0;
 
 	prom_debug("prom_instantiate_rtas: start...\n");
@@ -1514,7 +1537,9 @@ static void __init prom_instantiate_rtas(void)
 	if (!PHANDLE_VALID(rtas_node))
 		return;
 
-	prom_getprop(rtas_node, "rtas-size", &size, sizeof(size));
+	val = 0;
+	prom_getprop(rtas_node, "rtas-size", &val, sizeof(size));
+	size = be32_to_cpu(val);
 	if (size == 0)
 		return;
 
@@ -1541,12 +1566,14 @@ static void __init prom_instantiate_rtas(void)
 
 	reserve_mem(base, size);
 
+	val = cpu_to_be32(base);
 	prom_setprop(rtas_node, "/rtas", "linux,rtas-base",
-		     &base, sizeof(base));
+		     &val, sizeof(val));
+	val = cpu_to_be32(entry);
 	prom_setprop(rtas_node, "/rtas", "linux,rtas-entry",
-		     &entry, sizeof(entry));
+		     &val, sizeof(val));
 
-#ifdef CONFIG_PPC_POWERNV
+#if defined(CONFIG_PPC_POWERNV) && defined(__BIG_ENDIAN__)
 	/* PowerVN takeover hack */
 	prom_rtas_data = base;
 	prom_rtas_entry = entry;
@@ -1620,6 +1647,7 @@ static void __init prom_instantiate_sml(void)
 /*
  * Allocate room for and initialize TCE tables
  */
+#ifdef __BIG_ENDIAN__
 static void __init prom_initialize_tce_table(void)
 {
 	phandle node;
@@ -1748,7 +1776,8 @@ static void __init prom_initialize_tce_table(void)
 	/* Flag the first invalid entry */
 	prom_debug("ending prom_initialize_tce_table\n");
 }
-#endif
+#endif /* __BIG_ENDIAN__ */
+#endif /* CONFIG_PPC64 */
 
 /*
  * With CHRP SMP we need to use the OF to start the other processors.
@@ -1777,7 +1806,6 @@ static void __init prom_initialize_tce_table(void)
 static void __init prom_hold_cpus(void)
 {
 	unsigned long i;
-	unsigned int reg;
 	phandle node;
 	char type[64];
 	unsigned long *spinloop
@@ -1803,6 +1831,9 @@ static void __init prom_hold_cpus(void)
 
 	/* look for cpus */
 	for (node = 0; prom_next_node(&node); ) {
+		unsigned int cpu_no;
+		__be32 reg;
+
 		type[0] = 0;
 		prom_getprop(node, "device_type", type, sizeof(type));
 		if (strcmp(type, "cpu") != 0)
@@ -1813,10 +1844,11 @@ static void __init prom_hold_cpus(void)
 			if (strcmp(type, "okay") != 0)
 				continue;
 
-		reg = -1;
+		reg = cpu_to_be32(-1); /* make sparse happy */
 		prom_getprop(node, "reg", &reg, sizeof(reg));
+		cpu_no = be32_to_cpu(reg);
 
-		prom_debug("cpu hw idx   = %lu\n", reg);
+		prom_debug("cpu hw idx   = %lu\n", cpu_no);
 
 		/* Init the acknowledge var which will be reset by
 		 * the secondary cpu when it awakens from its OF
@@ -1824,24 +1856,24 @@ static void __init prom_hold_cpus(void)
 		 */
 		*acknowledge = (unsigned long)-1;
 
-		if (reg != prom.cpu) {
+		if (cpu_no != prom.cpu) {
 			/* Primary Thread of non-boot cpu or any thread */
-			prom_printf("starting cpu hw idx %lu... ", reg);
+			prom_printf("starting cpu hw idx %lu... ", cpu_no);
 			call_prom("start-cpu", 3, 0, node,
-				  secondary_hold, reg);
+				  secondary_hold, cpu_no);
 
 			for (i = 0; (i < 100000000) && 
 			     (*acknowledge == ((unsigned long)-1)); i++ )
 				mb();
 
-			if (*acknowledge == reg)
+			if (*acknowledge == cpu_no)
 				prom_printf("done\n");
 			else
 				prom_printf("failed: %x\n", *acknowledge);
 		}
 #ifdef CONFIG_SMP
 		else
-			prom_printf("boot cpu hw idx %lu\n", reg);
+			prom_printf("boot cpu hw idx %lu\n", cpu_no);
 #endif /* CONFIG_SMP */
 	}
 
@@ -1895,6 +1927,7 @@ static void __init prom_find_mmu(void)
 	prom.memory = call_prom("open", 1, 1, ADDR("/memory"));
 	prom_getprop(prom.chosen, "mmu", &prom.mmumap,
 		     sizeof(prom.mmumap));
+	prom.mmumap = be32_to_cpu(prom.mmumap);
 	if (!IHANDLE_VALID(prom.memory) || !IHANDLE_VALID(prom.mmumap))
 		of_workarounds &= ~OF_WA_CLAIM;		/* hmmm */
 }
@@ -1906,17 +1939,19 @@ static void __init prom_init_stdout(void)
 {
 	char *path = of_stdout_device;
 	char type[16];
-	u32 val;
+	phandle stdout_node;
+	__be32 val;
 
 	if (prom_getprop(prom.chosen, "stdout", &val, sizeof(val)) <= 0)
 		prom_panic("cannot find stdout");
 
-	prom.stdout = val;
+	prom.stdout = be32_to_cpu(val);
 
 	/* Get the full OF pathname of the stdout device */
 	memset(path, 0, 256);
 	call_prom("instance-to-path", 3, 1, prom.stdout, path, 255);
-	val = call_prom("instance-to-package", 1, 1, prom.stdout);
+	stdout_node = call_prom("instance-to-package", 1, 1, prom.stdout);
+	val = cpu_to_be32(stdout_node);
 	prom_setprop(prom.chosen, "/chosen", "linux,stdout-package",
 		     &val, sizeof(val));
 	prom_printf("OF stdout device is: %s\n", of_stdout_device);
@@ -1925,9 +1960,9 @@ static void __init prom_init_stdout(void)
 
 	/* If it's a display, note it */
 	memset(type, 0, sizeof(type));
-	prom_getprop(val, "device_type", type, sizeof(type));
+	prom_getprop(stdout_node, "device_type", type, sizeof(type));
 	if (strcmp(type, "display") == 0)
-		prom_setprop(val, path, "linux,boot-display", NULL, 0);
+		prom_setprop(stdout_node, path, "linux,boot-display", NULL, 0);
 }
 
 static int __init prom_find_machine_type(void)
@@ -2117,8 +2152,10 @@ static void __init *make_room(unsigned long *mem_start, unsigned long *mem_end,
 	return ret;
 }
 
-#define dt_push_token(token, mem_start, mem_end) \
-	do { *((u32 *)make_room(mem_start, mem_end, 4, 4)) = token; } while(0)
+#define dt_push_token(token, mem_start, mem_end) do { 			\
+		void *room = make_room(mem_start, mem_end, 4, 4);	\
+		*(__be32 *)room = cpu_to_be32(token);			\
+	} while(0)
 
 static unsigned long __init dt_find_string(char *str)
 {
@@ -2291,7 +2328,7 @@ static void __init scan_dt_build_struct(phandle node, unsigned long *mem_start,
 			dt_push_token(4, mem_start, mem_end);
 			dt_push_token(soff, mem_start, mem_end);
 			valp = make_room(mem_start, mem_end, 4, 4);
-			*(u32 *)valp = node;
+			*(__be32 *)valp = cpu_to_be32(node);
 		}
 	}
 
@@ -2364,16 +2401,16 @@ static void __init flatten_device_tree(void)
 	dt_struct_end = PAGE_ALIGN(mem_start);
 
 	/* Finish header */
-	hdr->boot_cpuid_phys = prom.cpu;
-	hdr->magic = OF_DT_HEADER;
-	hdr->totalsize = dt_struct_end - dt_header_start;
-	hdr->off_dt_struct = dt_struct_start - dt_header_start;
-	hdr->off_dt_strings = dt_string_start - dt_header_start;
-	hdr->dt_strings_size = dt_string_end - dt_string_start;
-	hdr->off_mem_rsvmap = ((unsigned long)rsvmap) - dt_header_start;
-	hdr->version = OF_DT_VERSION;
+	hdr->boot_cpuid_phys = cpu_to_be32(prom.cpu);
+	hdr->magic = cpu_to_be32(OF_DT_HEADER);
+	hdr->totalsize = cpu_to_be32(dt_struct_end - dt_header_start);
+	hdr->off_dt_struct = cpu_to_be32(dt_struct_start - dt_header_start);
+	hdr->off_dt_strings = cpu_to_be32(dt_string_start - dt_header_start);
+	hdr->dt_strings_size = cpu_to_be32(dt_string_end - dt_string_start);
+	hdr->off_mem_rsvmap = cpu_to_be32(((unsigned long)rsvmap) - dt_header_start);
+	hdr->version = cpu_to_be32(OF_DT_VERSION);
 	/* Version 16 is not backward compatible */
-	hdr->last_comp_version = 0x10;
+	hdr->last_comp_version = cpu_to_be32(0x10);
 
 	/* Copy the reserve map in */
 	memcpy(rsvmap, mem_reserve_map, sizeof(mem_reserve_map));
@@ -2384,8 +2421,8 @@ static void __init flatten_device_tree(void)
 		prom_printf("reserved memory map:\n");
 		for (i = 0; i < mem_reserve_cnt; i++)
 			prom_printf("  %x - %x\n",
-				    mem_reserve_map[i].base,
-				    mem_reserve_map[i].size);
+				    be64_to_cpu(mem_reserve_map[i].base),
+				    be64_to_cpu(mem_reserve_map[i].size));
 	}
 #endif
 	/* Bump mem_reserve_cnt to cause further reservations to fail
@@ -2397,7 +2434,6 @@ static void __init flatten_device_tree(void)
 		    dt_string_start, dt_string_end);
 	prom_printf("Device tree struct  0x%x -> 0x%x\n",
 		    dt_struct_start, dt_struct_end);
-
 }
 
 #ifdef CONFIG_PPC_MAPLE
@@ -2730,18 +2766,19 @@ static void __init fixup_device_tree(void)
 
 static void __init prom_find_boot_cpu(void)
 {
-	u32 getprop_rval;
+	__be32 rval;
 	ihandle prom_cpu;
 	phandle cpu_pkg;
 
-	prom.cpu = 0;
-	if (prom_getprop(prom.chosen, "cpu", &prom_cpu, sizeof(prom_cpu)) <= 0)
+	rval = 0;
+	if (prom_getprop(prom.chosen, "cpu", &rval, sizeof(rval)) <= 0)
 		return;
+	prom_cpu = be32_to_cpu(rval);
 
 	cpu_pkg = call_prom("instance-to-package", 1, 1, prom_cpu);
 
-	prom_getprop(cpu_pkg, "reg", &getprop_rval, sizeof(getprop_rval));
-	prom.cpu = getprop_rval;
+	prom_getprop(cpu_pkg, "reg", &rval, sizeof(rval));
+	prom.cpu = be32_to_cpu(rval);
 
 	prom_debug("Booting CPU hw index = %lu\n", prom.cpu);
 }
@@ -2750,15 +2787,15 @@ static void __init prom_check_initrd(unsigned long r3, unsigned long r4)
 {
 #ifdef CONFIG_BLK_DEV_INITRD
 	if (r3 && r4 && r4 != 0xdeadbeef) {
-		unsigned long val;
+		__be64 val;
 
 		prom_initrd_start = is_kernel_addr(r3) ? __pa(r3) : r3;
 		prom_initrd_end = prom_initrd_start + r4;
 
-		val = prom_initrd_start;
+		val = cpu_to_be64(prom_initrd_start);
 		prom_setprop(prom.chosen, "/chosen", "linux,initrd-start",
 			     &val, sizeof(val));
-		val = prom_initrd_end;
+		val = cpu_to_be64(prom_initrd_end);
 		prom_setprop(prom.chosen, "/chosen", "linux,initrd-end",
 			     &val, sizeof(val));
 
@@ -2915,7 +2952,7 @@ unsigned long __init prom_init(unsigned long r3, unsigned long r4,
 	 */
 	prom_check_displays();
 
-#ifdef CONFIG_PPC64
+#if defined(CONFIG_PPC64) && defined(__BIG_ENDIAN__)
 	/*
 	 * Initialize IOMMU (TCE tables) on pSeries. Do that before anything else
 	 * that uses the allocator, we need to make sure we get the top of memory
@@ -2934,6 +2971,7 @@ unsigned long __init prom_init(unsigned long r3, unsigned long r4,
 		prom_instantiate_rtas();
 
 #ifdef CONFIG_PPC_POWERNV
+#ifdef __BIG_ENDIAN__
 	/* Detect HAL and try instanciating it & doing takeover */
 	if (of_platform == PLATFORM_PSERIES_LPAR) {
 		prom_query_opal();
@@ -2941,9 +2979,11 @@ unsigned long __init prom_init(unsigned long r3, unsigned long r4,
 			prom_opal_hold_cpus();
 			prom_opal_takeover();
 		}
-	} else if (of_platform == PLATFORM_OPAL)
+	} else
+#endif /* __BIG_ENDIAN__ */
+	if (of_platform == PLATFORM_OPAL)
 		prom_instantiate_opal();
-#endif
+#endif /* CONFIG_PPC_POWERNV */
 
 #ifdef CONFIG_PPC64
 	/* instantiate sml */
@@ -2962,10 +3002,11 @@ unsigned long __init prom_init(unsigned long r3, unsigned long r4,
 	/*
 	 * Fill in some infos for use by the kernel later on
 	 */
-	if (prom_memory_limit)
+	if (prom_memory_limit) {
+		__be64 val = cpu_to_be64(prom_memory_limit);
 		prom_setprop(prom.chosen, "/chosen", "linux,memory-limit",
-			     &prom_memory_limit,
-			     sizeof(prom_memory_limit));
+			     &val, sizeof(val));
+	}
 #ifdef CONFIG_PPC64
 	if (prom_iommu_off)
 		prom_setprop(prom.chosen, "/chosen", "linux,iommu-off",
-- 
1.8.1.2

^ permalink raw reply related	[flat|nested] 76+ messages in thread
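
The flatten_device_tree() hunk applies the same principle to the
flattened device tree blob itself: the FDT format is defined as big
endian regardless of kernel endianness, so every header field and
reserve map entry is written with cpu_to_be32()/cpu_to_be64(). A
trimmed sketch of the header part (the struct below is a cut-down
illustration, not the kernel's definition):

#include <linux/types.h>
#include <asm/byteorder.h>

#define EXAMPLE_FDT_MAGIC	0xd00dfeed	/* OF_DT_HEADER */

struct example_fdt_header {
	__be32 magic;
	__be32 totalsize;
	__be32 off_dt_struct;
	__be32 off_dt_strings;
	__be32 off_mem_rsvmap;
	__be32 version;
	__be32 last_comp_version;
	__be32 boot_cpuid_phys;
};

static void example_fill_fdt_header(struct example_fdt_header *hdr,
				    u32 totalsize, u32 boot_cpu)
{
	/* Big endian in memory, whatever the kernel endianness is */
	hdr->magic = cpu_to_be32(EXAMPLE_FDT_MAGIC);
	hdr->totalsize = cpu_to_be32(totalsize);
	hdr->boot_cpuid_phys = cpu_to_be32(boot_cpu);
}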

* [PATCH 22/63] powerpc: Make device tree accesses in HVC VIO console endian safe
  2013-08-06 16:01 [PATCH 00/63] 64bit PowerPC little endian support Anton Blanchard
                   ` (20 preceding siblings ...)
  2013-08-06 16:01 ` [PATCH 21/63] powerpc: Make prom_init.c " Anton Blanchard
@ 2013-08-06 16:01 ` Anton Blanchard
  2013-08-06 16:01 ` [PATCH 23/63] powerpc: Make device tree accesses in VIO subsystem " Anton Blanchard
                   ` (40 subsequent siblings)
  62 siblings, 0 replies; 76+ messages in thread
From: Anton Blanchard @ 2013-08-06 16:01 UTC (permalink / raw)
  To: Benjamin Herrenschmidt, Paul Mackerras; +Cc: linuxppc-dev

Signed-off-by: Anton Blanchard <anton@samba.org>
---
 drivers/tty/hvc/hvc_vio.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/drivers/tty/hvc/hvc_vio.c b/drivers/tty/hvc/hvc_vio.c
index 0c62980..c791b18 100644
--- a/drivers/tty/hvc/hvc_vio.c
+++ b/drivers/tty/hvc/hvc_vio.c
@@ -404,7 +404,7 @@ module_exit(hvc_vio_exit);
 void __init hvc_vio_init_early(void)
 {
 	struct device_node *stdout_node;
-	const u32 *termno;
+	const __be32 *termno;
 	const char *name;
 	const struct hv_ops *ops;
 
@@ -429,7 +429,7 @@ void __init hvc_vio_init_early(void)
 	termno = of_get_property(stdout_node, "reg", NULL);
 	if (termno == NULL)
 		goto out;
-	hvterm_priv0.termno = *termno;
+	hvterm_priv0.termno = of_read_number(termno, 1);
 	spin_lock_init(&hvterm_priv0.buf_lock);
 	hvterm_privs[0] = &hvterm_priv0;
 
-- 
1.8.1.2

^ permalink raw reply related	[flat|nested] 76+ messages in thread

* [PATCH 23/63] powerpc: Make device tree accesses in VIO subsystem endian safe
  2013-08-06 16:01 [PATCH 00/63] 64bit PowerPC little endian support Anton Blanchard
                   ` (21 preceding siblings ...)
  2013-08-06 16:01 ` [PATCH 22/63] powerpc: Make device tree accesses in HVC VIO console " Anton Blanchard
@ 2013-08-06 16:01 ` Anton Blanchard
  2013-08-06 16:01 ` [PATCH 24/63] powerpc: Make OF PCI device tree accesses " Anton Blanchard
                   ` (39 subsequent siblings)
  62 siblings, 0 replies; 76+ messages in thread
From: Anton Blanchard @ 2013-08-06 16:01 UTC (permalink / raw)
  To: Benjamin Herrenschmidt, Paul Mackerras; +Cc: linuxppc-dev

Signed-off-by: Anton Blanchard <anton@samba.org>
---
 arch/powerpc/kernel/vio.c | 31 +++++++++++++++++--------------
 1 file changed, 17 insertions(+), 14 deletions(-)

diff --git a/arch/powerpc/kernel/vio.c b/arch/powerpc/kernel/vio.c
index 31875a6..78a3506 100644
--- a/arch/powerpc/kernel/vio.c
+++ b/arch/powerpc/kernel/vio.c
@@ -1312,8 +1312,7 @@ struct vio_dev *vio_register_device_node(struct device_node *of_node)
 {
 	struct vio_dev *viodev;
 	struct device_node *parent_node;
-	const unsigned int *unit_address;
-	const unsigned int *pfo_resid = NULL;
+	const __be32 *prop;
 	enum vio_dev_family family;
 	const char *of_node_name = of_node->name ? of_node->name : "<unknown>";
 
@@ -1360,6 +1359,8 @@ struct vio_dev *vio_register_device_node(struct device_node *of_node)
 	/* we need the 'device_type' property, in order to match with drivers */
 	viodev->family = family;
 	if (viodev->family == VDEVICE) {
+		unsigned int unit_address;
+
 		if (of_node->type != NULL)
 			viodev->type = of_node->type;
 		else {
@@ -1368,24 +1369,24 @@ struct vio_dev *vio_register_device_node(struct device_node *of_node)
 			goto out;
 		}
 
-		unit_address = of_get_property(of_node, "reg", NULL);
-		if (unit_address == NULL) {
+		prop = of_get_property(of_node, "reg", NULL);
+		if (prop == NULL) {
 			pr_warn("%s: node %s missing 'reg'\n",
 					__func__, of_node_name);
 			goto out;
 		}
-		dev_set_name(&viodev->dev, "%x", *unit_address);
+		unit_address = of_read_number(prop, 1);
+		dev_set_name(&viodev->dev, "%x", unit_address);
 		viodev->irq = irq_of_parse_and_map(of_node, 0);
-		viodev->unit_address = *unit_address;
+		viodev->unit_address = unit_address;
 	} else {
 		/* PFO devices need their resource_id for submitting COP_OPs
 		 * This is an optional field for devices, but is required when
 		 * performing synchronous ops */
-		pfo_resid = of_get_property(of_node, "ibm,resource-id", NULL);
-		if (pfo_resid != NULL)
-			viodev->resource_id = *pfo_resid;
+		prop = of_get_property(of_node, "ibm,resource-id", NULL);
+		if (prop != NULL)
+			viodev->resource_id = of_read_number(prop, 1);
 
-		unit_address = NULL;
 		dev_set_name(&viodev->dev, "%s", of_node_name);
 		viodev->type = of_node_name;
 		viodev->irq = 0;
@@ -1622,7 +1623,6 @@ static struct vio_dev *vio_find_name(const char *name)
  */
 struct vio_dev *vio_find_node(struct device_node *vnode)
 {
-	const uint32_t *unit_address;
 	char kobj_name[20];
 	struct device_node *vnode_parent;
 	const char *dev_type;
@@ -1638,10 +1638,13 @@ struct vio_dev *vio_find_node(struct device_node *vnode)
 
 	/* construct the kobject name from the device node */
 	if (!strcmp(dev_type, "vdevice")) {
-		unit_address = of_get_property(vnode, "reg", NULL);
-		if (!unit_address)
+		const __be32 *prop;
+		
+		prop = of_get_property(vnode, "reg", NULL);
+		if (!prop)
 			return NULL;
-		snprintf(kobj_name, sizeof(kobj_name), "%x", *unit_address);
+		snprintf(kobj_name, sizeof(kobj_name), "%x",
+			 (uint32_t)of_read_number(prop, 1));
 	} else if (!strcmp(dev_type, "ibm,platform-facilities"))
 		snprintf(kobj_name, sizeof(kobj_name), "%s", vnode->name);
 	else
-- 
1.8.1.2

^ permalink raw reply related	[flat|nested] 76+ messages in thread

* [PATCH 24/63] powerpc: Make OF PCI device tree accesses endian safe
  2013-08-06 16:01 [PATCH 00/63] 64bit PowerPC little endian support Anton Blanchard
                   ` (22 preceding siblings ...)
  2013-08-06 16:01 ` [PATCH 23/63] powerpc: Make device tree accesses in VIO subsystem " Anton Blanchard
@ 2013-08-06 16:01 ` Anton Blanchard
  2013-08-06 16:01 ` [PATCH 25/63] powerpc: Make PCI device node " Anton Blanchard
                   ` (38 subsequent siblings)
  62 siblings, 0 replies; 76+ messages in thread
From: Anton Blanchard @ 2013-08-06 16:01 UTC (permalink / raw)
  To: Benjamin Herrenschmidt, Paul Mackerras; +Cc: linuxppc-dev

Signed-off-by: Anton Blanchard <anton@samba.org>
---
 arch/powerpc/kernel/pci-common.c  |  6 +++---
 arch/powerpc/kernel/pci_of_scan.c | 23 +++++++++++++----------
 2 files changed, 16 insertions(+), 13 deletions(-)

diff --git a/arch/powerpc/kernel/pci-common.c b/arch/powerpc/kernel/pci-common.c
index 22fe401..7d72f8f 100644
--- a/arch/powerpc/kernel/pci-common.c
+++ b/arch/powerpc/kernel/pci-common.c
@@ -667,7 +667,7 @@ void pci_resource_to_user(const struct pci_dev *dev, int bar,
 void pci_process_bridge_OF_ranges(struct pci_controller *hose,
 				  struct device_node *dev, int primary)
 {
-	const u32 *ranges;
+	const __be32 *ranges;
 	int rlen;
 	int pna = of_n_addr_cells(dev);
 	int np = pna + 5;
@@ -687,7 +687,7 @@ void pci_process_bridge_OF_ranges(struct pci_controller *hose,
 	/* Parse it */
 	while ((rlen -= np * 4) >= 0) {
 		/* Read next ranges element */
-		pci_space = ranges[0];
+		pci_space = of_read_number(ranges, 1);
 		pci_addr = of_read_number(ranges + 1, 2);
 		cpu_addr = of_translate_address(dev, ranges + 3);
 		size = of_read_number(ranges + pna + 3, 2);
@@ -704,7 +704,7 @@ void pci_process_bridge_OF_ranges(struct pci_controller *hose,
 		/* Now consume following elements while they are contiguous */
 		for (; rlen >= np * sizeof(u32);
 		     ranges += np, rlen -= np * 4) {
-			if (ranges[0] != pci_space)
+			if (of_read_number(ranges, 1) != pci_space)
 				break;
 			pci_next = of_read_number(ranges + 1, 2);
 			cpu_next = of_translate_address(dev, ranges + 3);
diff --git a/arch/powerpc/kernel/pci_of_scan.c b/arch/powerpc/kernel/pci_of_scan.c
index 15d9105..4368ec6 100644
--- a/arch/powerpc/kernel/pci_of_scan.c
+++ b/arch/powerpc/kernel/pci_of_scan.c
@@ -24,12 +24,12 @@
  */
 static u32 get_int_prop(struct device_node *np, const char *name, u32 def)
 {
-	const u32 *prop;
+	const __be32 *prop;
 	int len;
 
 	prop = of_get_property(np, name, &len);
 	if (prop && len >= 4)
-		return *prop;
+		return of_read_number(prop, 1);
 	return def;
 }
 
@@ -77,7 +77,7 @@ static void of_pci_parse_addrs(struct device_node *node, struct pci_dev *dev)
 	unsigned int flags;
 	struct pci_bus_region region;
 	struct resource *res;
-	const u32 *addrs;
+	const __be32 *addrs;
 	u32 i;
 	int proplen;
 
@@ -86,14 +86,14 @@ static void of_pci_parse_addrs(struct device_node *node, struct pci_dev *dev)
 		return;
 	pr_debug("    parse addresses (%d bytes) @ %p\n", proplen, addrs);
 	for (; proplen >= 20; proplen -= 20, addrs += 5) {
-		flags = pci_parse_of_flags(addrs[0], 0);
+		flags = pci_parse_of_flags(of_read_number(addrs, 1), 0);
 		if (!flags)
 			continue;
 		base = of_read_number(&addrs[1], 2);
 		size = of_read_number(&addrs[3], 2);
 		if (!size)
 			continue;
-		i = addrs[0] & 0xff;
+		i = of_read_number(addrs, 1) & 0xff;
 		pr_debug("  base: %llx, size: %llx, i: %x\n",
 			 (unsigned long long)base,
 			 (unsigned long long)size, i);
@@ -207,7 +207,7 @@ void of_scan_pci_bridge(struct pci_dev *dev)
 {
 	struct device_node *node = dev->dev.of_node;
 	struct pci_bus *bus;
-	const u32 *busrange, *ranges;
+	const __be32 *busrange, *ranges;
 	int len, i, mode;
 	struct pci_bus_region region;
 	struct resource *res;
@@ -230,9 +230,11 @@ void of_scan_pci_bridge(struct pci_dev *dev)
 		return;
 	}
 
-	bus = pci_find_bus(pci_domain_nr(dev->bus), busrange[0]);
+	bus = pci_find_bus(pci_domain_nr(dev->bus),
+			   of_read_number(busrange, 1));
 	if (!bus) {
-		bus = pci_add_new_bus(dev->bus, dev, busrange[0]);
+		bus = pci_add_new_bus(dev->bus, dev,
+				      of_read_number(busrange, 1));
 		if (!bus) {
 			printk(KERN_ERR "Failed to create pci bus for %s\n",
 			       node->full_name);
@@ -241,7 +243,8 @@ void of_scan_pci_bridge(struct pci_dev *dev)
 	}
 
 	bus->primary = dev->bus->number;
-	pci_bus_insert_busn_res(bus, busrange[0], busrange[1]);
+	pci_bus_insert_busn_res(bus, of_read_number(busrange, 1),
+				of_read_number(busrange+1, 1));
 	bus->bridge_ctl = 0;
 
 	/* parse ranges property */
@@ -254,7 +257,7 @@ void of_scan_pci_bridge(struct pci_dev *dev)
 	}
 	i = 1;
 	for (; len >= 32; len -= 32, ranges += 8) {
-		flags = pci_parse_of_flags(ranges[0], 1);
+		flags = pci_parse_of_flags(of_read_number(ranges, 1), 1);
 		size = of_read_number(&ranges[6], 2);
 		if (flags == 0 || size == 0)
 			continue;
-- 
1.8.1.2

^ permalink raw reply related	[flat|nested] 76+ messages in thread
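
The ranges walk above relies on the fixed layout of a PCI "ranges"
entry: one cell of address space flags, a two-cell PCI address, a
parent address of #address-cells cells, then a two-cell size. A
condensed sketch of decoding one entry with of_read_number() (the
helper name is illustrative):

#include <linux/of.h>
#include <linux/of_address.h>
#include <linux/printk.h>

/* Decode one PCI "ranges" entry: flags, PCI addr, parent (CPU) addr, size. */
static void example_parse_one_range(struct device_node *dev,
				    const __be32 *ranges)
{
	int pna = of_n_addr_cells(dev);		/* parent #address-cells */
	u32 pci_space = of_read_number(ranges, 1);
	u64 pci_addr = of_read_number(ranges + 1, 2);
	u64 cpu_addr = of_translate_address(dev, ranges + 3);
	u64 size = of_read_number(ranges + pna + 3, 2);

	pr_debug("space %x pci %llx cpu %llx size %llx\n",
		 pci_space, (unsigned long long)pci_addr,
		 (unsigned long long)cpu_addr, (unsigned long long)size);
}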

* [PATCH 25/63] powerpc: Make PCI device node device tree accesses endian safe
  2013-08-06 16:01 [PATCH 00/63] 64bit PowerPC little endian support Anton Blanchard
                   ` (23 preceding siblings ...)
  2013-08-06 16:01 ` [PATCH 24/63] powerpc: Make OF PCI device tree accesses " Anton Blanchard
@ 2013-08-06 16:01 ` Anton Blanchard
  2013-08-06 16:01 ` [PATCH 26/63] powerpc: Little endian fixes for legacy_serial.c Anton Blanchard
                   ` (37 subsequent siblings)
  62 siblings, 0 replies; 76+ messages in thread
From: Anton Blanchard @ 2013-08-06 16:01 UTC (permalink / raw)
  To: Benjamin Herrenschmidt, Paul Mackerras; +Cc: linuxppc-dev

Signed-off-by: Anton Blanchard <anton@samba.org>
---
 arch/powerpc/kernel/pci_dn.c | 20 +++++++++++---------
 1 file changed, 11 insertions(+), 9 deletions(-)

diff --git a/arch/powerpc/kernel/pci_dn.c b/arch/powerpc/kernel/pci_dn.c
index df03844..1f61fab 100644
--- a/arch/powerpc/kernel/pci_dn.c
+++ b/arch/powerpc/kernel/pci_dn.c
@@ -47,9 +47,8 @@ struct pci_dn *pci_get_pdn(struct pci_dev *pdev)
 void *update_dn_pci_info(struct device_node *dn, void *data)
 {
 	struct pci_controller *phb = data;
-	const int *type =
-		of_get_property(dn, "ibm,pci-config-space-type", NULL);
-	const u32 *regs;
+	const __be32 *type = of_get_property(dn, "ibm,pci-config-space-type", NULL);
+	const __be32 *regs;
 	struct pci_dn *pdn;
 
 	pdn = zalloc_maybe_bootmem(sizeof(*pdn), GFP_KERNEL);
@@ -63,12 +62,14 @@ void *update_dn_pci_info(struct device_node *dn, void *data)
 #endif
 	regs = of_get_property(dn, "reg", NULL);
 	if (regs) {
+		u32 addr = of_read_number(regs, 1);
+
 		/* First register entry is addr (00BBSS00)  */
-		pdn->busno = (regs[0] >> 16) & 0xff;
-		pdn->devfn = (regs[0] >> 8) & 0xff;
+		pdn->busno = (addr >> 16) & 0xff;
+		pdn->devfn = (addr >> 8) & 0xff;
 	}
 
-	pdn->pci_ext_config_space = (type && *type == 1);
+	pdn->pci_ext_config_space = (type && of_read_number(type, 1) == 1);
 	return NULL;
 }
 
@@ -98,12 +99,13 @@ void *traverse_pci_devices(struct device_node *start, traverse_func pre,
 
 	/* We started with a phb, iterate all childs */
 	for (dn = start->child; dn; dn = nextdn) {
-		const u32 *classp;
-		u32 class;
+		const __be32 *classp;
+		u32 class = 0;
 
 		nextdn = NULL;
 		classp = of_get_property(dn, "class-code", NULL);
-		class = classp ? *classp : 0;
+		if (classp)
+			class = of_read_number(classp, 1);
 
 		if (pre && ((ret = pre(dn, data)) != NULL))
 			return ret;
-- 
1.8.1.2

^ permalink raw reply related	[flat|nested] 76+ messages in thread
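
The first cell of a PCI node's "reg" property is the config space
address in the 00BBSS00 form noted in the comment above, which is why
the patch converts it to a native u32 once and then shifts. A tiny
sketch of the decode (helper name is made up):

#include <linux/errno.h>
#include <linux/of.h>

/* Extract bus number and devfn from the first cell of a PCI "reg" property. */
static int example_busno_devfn(struct device_node *dn, u8 *busno, u8 *devfn)
{
	const __be32 *regs = of_get_property(dn, "reg", NULL);
	u32 addr;

	if (!regs)
		return -ENODEV;

	addr = of_read_number(regs, 1);		/* first cell: 00BBSS00 */
	*busno = (addr >> 16) & 0xff;
	*devfn = (addr >> 8) & 0xff;
	return 0;
}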

* [PATCH 26/63] powerpc: Little endian fixes for legacy_serial.c
  2013-08-06 16:01 [PATCH 00/63] 64bit PowerPC little endian support Anton Blanchard
                   ` (24 preceding siblings ...)
  2013-08-06 16:01 ` [PATCH 25/63] powerpc: Make PCI device node " Anton Blanchard
@ 2013-08-06 16:01 ` Anton Blanchard
  2013-08-06 16:01 ` [PATCH 27/63] powerpc: Make NUMA device node code endian safe Anton Blanchard
                   ` (36 subsequent siblings)
  62 siblings, 0 replies; 76+ messages in thread
From: Anton Blanchard @ 2013-08-06 16:01 UTC (permalink / raw)
  To: Benjamin Herrenschmidt, Paul Mackerras; +Cc: linuxppc-dev

From: Alistair Popple <alistair@popple.id.au>

Signed-off-by: Alistair Popple <alistair@popple.id.au>
---
 arch/powerpc/kernel/legacy_serial.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/arch/powerpc/kernel/legacy_serial.c b/arch/powerpc/kernel/legacy_serial.c
index af1c63f..179beea 100644
--- a/arch/powerpc/kernel/legacy_serial.c
+++ b/arch/powerpc/kernel/legacy_serial.c
@@ -152,7 +152,7 @@ static int __init add_legacy_soc_port(struct device_node *np,
 				      struct device_node *soc_dev)
 {
 	u64 addr;
-	const u32 *addrp;
+	const __be32 *addrp;
 	upf_t flags = UPF_BOOT_AUTOCONF | UPF_SKIP_TEST | UPF_SHARE_IRQ
 		| UPF_FIXED_PORT;
 	struct device_node *tsi = of_get_parent(np);
@@ -237,7 +237,7 @@ static int __init add_legacy_pci_port(struct device_node *np,
 				      struct device_node *pci_dev)
 {
 	u64 addr, base;
-	const u32 *addrp;
+	const __be32 *addrp;
 	unsigned int flags;
 	int iotype, index = -1, lindex = 0;
 
@@ -270,7 +270,7 @@ static int __init add_legacy_pci_port(struct device_node *np,
 	if (iotype == UPIO_MEM)
 		base = addr;
 	else
-		base = addrp[2];
+		base = of_read_number(&addrp[2], 1);
 
 	/* Try to guess an index... If we have subdevices of the pci dev,
 	 * we get to their "reg" property
-- 
1.8.1.2

^ permalink raw reply related	[flat|nested] 76+ messages in thread

* [PATCH 27/63] powerpc: Make NUMA device node code endian safe
  2013-08-06 16:01 [PATCH 00/63] 64bit PowerPC little endian support Anton Blanchard
                   ` (25 preceding siblings ...)
  2013-08-06 16:01 ` [PATCH 26/63] powerpc: Little endian fixes for legacy_serial.c Anton Blanchard
@ 2013-08-06 16:01 ` Anton Blanchard
  2013-08-06 16:01 ` [PATCH 28/63] powerpc: Add endian annotations to lppaca, slb_shadow and dtl_entry Anton Blanchard
                   ` (35 subsequent siblings)
  62 siblings, 0 replies; 76+ messages in thread
From: Anton Blanchard @ 2013-08-06 16:01 UTC (permalink / raw)
  To: Benjamin Herrenschmidt, Paul Mackerras; +Cc: linuxppc-dev

From: Alistair Popple <alistair@popple.id.au>

The device tree is big endian, so make sure we byteswap on little
endian hosts. We assume any pHyp calls also return big endian results
in memory.
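
As a minimal sketch (not part of this patch) of the pattern used
throughout: of_read_number() reads big endian cells from a property and
byteswaps as needed on a little endian host. The property name below is
only an example.

	/* Illustrative only: read the first cell of a big endian property. */
	static u32 example_first_cell(struct device_node *dn)
	{
		const __be32 *prop = of_get_property(dn, "ibm,associativity", NULL);

		/* of_read_number() converts from device tree (BE) byte order */
		return prop ? of_read_number(prop, 1) : 0;
	}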

Signed-off-by: Alistair Popple <alistair@popple.id.au>
---
 arch/powerpc/mm/numa.c | 100 +++++++++++++++++++++++++------------------------
 1 file changed, 52 insertions(+), 48 deletions(-)

diff --git a/arch/powerpc/mm/numa.c b/arch/powerpc/mm/numa.c
index 501e32c..c916127 100644
--- a/arch/powerpc/mm/numa.c
+++ b/arch/powerpc/mm/numa.c
@@ -58,7 +58,7 @@ static int form1_affinity;
 
 #define MAX_DISTANCE_REF_POINTS 4
 static int distance_ref_points_depth;
-static const unsigned int *distance_ref_points;
+static const __be32 *distance_ref_points;
 static int distance_lookup_table[MAX_NUMNODES][MAX_DISTANCE_REF_POINTS];
 
 /*
@@ -179,7 +179,7 @@ static void unmap_cpu_from_node(unsigned long cpu)
 #endif /* CONFIG_HOTPLUG_CPU || CONFIG_PPC_SPLPAR */
 
 /* must hold reference to node during call */
-static const int *of_get_associativity(struct device_node *dev)
+static const __be32 *of_get_associativity(struct device_node *dev)
 {
 	return of_get_property(dev, "ibm,associativity", NULL);
 }
@@ -189,9 +189,9 @@ static const int *of_get_associativity(struct device_node *dev)
  * it exists (the property exists only in kexec/kdump kernels,
  * added by kexec-tools)
  */
-static const u32 *of_get_usable_memory(struct device_node *memory)
+static const __be32 *of_get_usable_memory(struct device_node *memory)
 {
-	const u32 *prop;
+	const __be32 *prop;
 	u32 len;
 	prop = of_get_property(memory, "linux,drconf-usable-memory", &len);
 	if (!prop || len < sizeof(unsigned int))
@@ -219,7 +219,7 @@ int __node_distance(int a, int b)
 }
 
 static void initialize_distance_lookup_table(int nid,
-		const unsigned int *associativity)
+		const __be32 *associativity)
 {
 	int i;
 
@@ -227,29 +227,32 @@ static void initialize_distance_lookup_table(int nid,
 		return;
 
 	for (i = 0; i < distance_ref_points_depth; i++) {
-		distance_lookup_table[nid][i] =
-			associativity[distance_ref_points[i]];
+		const __be32 *entry;
+
+		entry = &associativity[be32_to_cpu(distance_ref_points[i])];
+		distance_lookup_table[nid][i] = of_read_number(entry, 1);
 	}
 }
 
 /* Returns nid in the range [0..MAX_NUMNODES-1], or -1 if no useful numa
  * info is found.
  */
-static int associativity_to_nid(const unsigned int *associativity)
+static int associativity_to_nid(const __be32 *associativity)
 {
 	int nid = -1;
 
 	if (min_common_depth == -1)
 		goto out;
 
-	if (associativity[0] >= min_common_depth)
-		nid = associativity[min_common_depth];
+	if (of_read_number(associativity, 1) >= min_common_depth)
+		nid = of_read_number(&associativity[min_common_depth], 1);
 
 	/* POWER4 LPAR uses 0xffff as invalid node */
 	if (nid == 0xffff || nid >= MAX_NUMNODES)
 		nid = -1;
 
-	if (nid > 0 && associativity[0] >= distance_ref_points_depth)
+	if (nid > 0 &&
+	    of_read_number(associativity, 1) >= distance_ref_points_depth)
 		initialize_distance_lookup_table(nid, associativity);
 
 out:
@@ -262,7 +265,7 @@ out:
 static int of_node_to_nid_single(struct device_node *device)
 {
 	int nid = -1;
-	const unsigned int *tmp;
+	const __be32 *tmp;
 
 	tmp = of_get_associativity(device);
 	if (tmp)
@@ -334,7 +337,7 @@ static int __init find_min_common_depth(void)
 	}
 
 	if (form1_affinity) {
-		depth = distance_ref_points[0];
+		depth = of_read_number(distance_ref_points, 1);
 	} else {
 		if (distance_ref_points_depth < 2) {
 			printk(KERN_WARNING "NUMA: "
@@ -342,7 +345,7 @@ static int __init find_min_common_depth(void)
 			goto err;
 		}
 
-		depth = distance_ref_points[1];
+		depth = of_read_number(&distance_ref_points[1], 1);
 	}
 
 	/*
@@ -376,12 +379,12 @@ static void __init get_n_mem_cells(int *n_addr_cells, int *n_size_cells)
 	of_node_put(memory);
 }
 
-static unsigned long read_n_cells(int n, const unsigned int **buf)
+static unsigned long read_n_cells(int n, const __be32 **buf)
 {
 	unsigned long result = 0;
 
 	while (n--) {
-		result = (result << 32) | **buf;
+		result = (result << 32) | of_read_number(*buf, 1);
 		(*buf)++;
 	}
 	return result;
@@ -391,17 +394,17 @@ static unsigned long read_n_cells(int n, const unsigned int **buf)
  * Read the next memblock list entry from the ibm,dynamic-memory property
  * and return the information in the provided of_drconf_cell structure.
  */
-static void read_drconf_cell(struct of_drconf_cell *drmem, const u32 **cellp)
+static void read_drconf_cell(struct of_drconf_cell *drmem, const __be32 **cellp)
 {
-	const u32 *cp;
+	const __be32 *cp;
 
 	drmem->base_addr = read_n_cells(n_mem_addr_cells, cellp);
 
 	cp = *cellp;
-	drmem->drc_index = cp[0];
-	drmem->reserved = cp[1];
-	drmem->aa_index = cp[2];
-	drmem->flags = cp[3];
+	drmem->drc_index = of_read_number(cp, 1);
+	drmem->reserved = of_read_number(&cp[1], 1);
+	drmem->aa_index = of_read_number(&cp[2], 1);
+	drmem->flags = of_read_number(&cp[3], 1);
 
 	*cellp = cp + 4;
 }
@@ -413,16 +416,16 @@ static void read_drconf_cell(struct of_drconf_cell *drmem, const u32 **cellp)
  * list entries followed by N memblock list entries.  Each memblock list entry
  * contains information as laid out in the of_drconf_cell struct above.
  */
-static int of_get_drconf_memory(struct device_node *memory, const u32 **dm)
+static int of_get_drconf_memory(struct device_node *memory, const __be32 **dm)
 {
-	const u32 *prop;
+	const __be32 *prop;
 	u32 len, entries;
 
 	prop = of_get_property(memory, "ibm,dynamic-memory", &len);
 	if (!prop || len < sizeof(unsigned int))
 		return 0;
 
-	entries = *prop++;
+	entries = of_read_number(prop++, 1);
 
 	/* Now that we know the number of entries, revalidate the size
 	 * of the property read in to ensure we have everything
@@ -440,7 +443,7 @@ static int of_get_drconf_memory(struct device_node *memory, const u32 **dm)
  */
 static u64 of_get_lmb_size(struct device_node *memory)
 {
-	const u32 *prop;
+	const __be32 *prop;
 	u32 len;
 
 	prop = of_get_property(memory, "ibm,lmb-size", &len);
@@ -453,7 +456,7 @@ static u64 of_get_lmb_size(struct device_node *memory)
 struct assoc_arrays {
 	u32	n_arrays;
 	u32	array_sz;
-	const u32 *arrays;
+	const __be32 *arrays;
 };
 
 /*
@@ -469,15 +472,15 @@ struct assoc_arrays {
 static int of_get_assoc_arrays(struct device_node *memory,
 			       struct assoc_arrays *aa)
 {
-	const u32 *prop;
+	const __be32 *prop;
 	u32 len;
 
 	prop = of_get_property(memory, "ibm,associativity-lookup-arrays", &len);
 	if (!prop || len < 2 * sizeof(unsigned int))
 		return -1;
 
-	aa->n_arrays = *prop++;
-	aa->array_sz = *prop++;
+	aa->n_arrays = of_read_number(prop++, 1);
+	aa->array_sz = of_read_number(prop++, 1);
 
 	/* Now that we know the number of arrays and size of each array,
 	 * revalidate the size of the property read in.
@@ -504,7 +507,7 @@ static int of_drconf_to_nid_single(struct of_drconf_cell *drmem,
 	    !(drmem->flags & DRCONF_MEM_AI_INVALID) &&
 	    drmem->aa_index < aa->n_arrays) {
 		index = drmem->aa_index * aa->array_sz + min_common_depth - 1;
-		nid = aa->arrays[index];
+		nid = of_read_number(&aa->arrays[index], 1);
 
 		if (nid == 0xffff || nid >= MAX_NUMNODES)
 			nid = default_nid;
@@ -595,7 +598,7 @@ static unsigned long __init numa_enforce_memory_limit(unsigned long start,
  * Reads the counter for a given entry in
  * linux,drconf-usable-memory property
  */
-static inline int __init read_usm_ranges(const u32 **usm)
+static inline int __init read_usm_ranges(const __be32 **usm)
 {
 	/*
 	 * For each lmb in ibm,dynamic-memory a corresponding
@@ -612,7 +615,7 @@ static inline int __init read_usm_ranges(const u32 **usm)
  */
 static void __init parse_drconf_memory(struct device_node *memory)
 {
-	const u32 *uninitialized_var(dm), *usm;
+	const __be32 *uninitialized_var(dm), *usm;
 	unsigned int n, rc, ranges, is_kexec_kdump = 0;
 	unsigned long lmb_size, base, size, sz;
 	int nid;
@@ -721,7 +724,7 @@ static int __init parse_numa_properties(void)
 		unsigned long size;
 		int nid;
 		int ranges;
-		const unsigned int *memcell_buf;
+		const __be32 *memcell_buf;
 		unsigned int len;
 
 		memcell_buf = of_get_property(memory,
@@ -1106,7 +1109,7 @@ early_param("numa", early_numa);
 static int hot_add_drconf_scn_to_nid(struct device_node *memory,
 				     unsigned long scn_addr)
 {
-	const u32 *dm;
+	const __be32 *dm;
 	unsigned int drconf_cell_cnt, rc;
 	unsigned long lmb_size;
 	struct assoc_arrays aa;
@@ -1159,7 +1162,7 @@ int hot_add_node_scn_to_nid(unsigned long scn_addr)
 	for_each_node_by_type(memory, "memory") {
 		unsigned long start, size;
 		int ranges;
-		const unsigned int *memcell_buf;
+		const __be32 *memcell_buf;
 		unsigned int len;
 
 		memcell_buf = of_get_property(memory, "reg", &len);
@@ -1232,7 +1235,7 @@ static u64 hot_add_drconf_memory_max(void)
         struct device_node *memory = NULL;
         unsigned int drconf_cell_cnt = 0;
         u64 lmb_size = 0;
-        const u32 *dm = 0;
+	const __be32 *dm = 0;
 
         memory = of_find_node_by_path("/ibm,dynamic-reconfiguration-memory");
         if (memory) {
@@ -1337,40 +1340,41 @@ static int update_cpu_associativity_changes_mask(void)
  * Convert the associativity domain numbers returned from the hypervisor
  * to the sequence they would appear in the ibm,associativity property.
  */
-static int vphn_unpack_associativity(const long *packed, unsigned int *unpacked)
+static int vphn_unpack_associativity(const long *packed, __be32 *unpacked)
 {
 	int i, nr_assoc_doms = 0;
-	const u16 *field = (const u16*) packed;
+	const __be16 *field = (const __be16 *) packed;
 
 #define VPHN_FIELD_UNUSED	(0xffff)
 #define VPHN_FIELD_MSB		(0x8000)
 #define VPHN_FIELD_MASK		(~VPHN_FIELD_MSB)
 
 	for (i = 1; i < VPHN_ASSOC_BUFSIZE; i++) {
-		if (*field == VPHN_FIELD_UNUSED) {
+		if (be16_to_cpup(field) == VPHN_FIELD_UNUSED) {
 			/* All significant fields processed, and remaining
 			 * fields contain the reserved value of all 1's.
 			 * Just store them.
 			 */
-			unpacked[i] = *((u32*)field);
+			unpacked[i] = *((__be32 *)field);
 			field += 2;
-		} else if (*field & VPHN_FIELD_MSB) {
+		} else if (be16_to_cpup(field) & VPHN_FIELD_MSB) {
 			/* Data is in the lower 15 bits of this field */
-			unpacked[i] = *field & VPHN_FIELD_MASK;
+			unpacked[i] = cpu_to_be32(
+				be16_to_cpup(field) & VPHN_FIELD_MASK);
 			field++;
 			nr_assoc_doms++;
 		} else {
 			/* Data is in the lower 15 bits of this field
 			 * concatenated with the next 16 bit field
 			 */
-			unpacked[i] = *((u32*)field);
+			unpacked[i] = *((__be32 *)field);
 			field += 2;
 			nr_assoc_doms++;
 		}
 	}
 
 	/* The first cell contains the length of the property */
-	unpacked[0] = nr_assoc_doms;
+	unpacked[0] = cpu_to_be32(nr_assoc_doms);
 
 	return nr_assoc_doms;
 }
@@ -1379,7 +1383,7 @@ static int vphn_unpack_associativity(const long *packed, unsigned int *unpacked)
  * Retrieve the new associativity information for a virtual processor's
  * home node.
  */
-static long hcall_vphn(unsigned long cpu, unsigned int *associativity)
+static long hcall_vphn(unsigned long cpu, __be32 *associativity)
 {
 	long rc;
 	long retbuf[PLPAR_HCALL9_BUFSIZE] = {0};
@@ -1393,7 +1397,7 @@ static long hcall_vphn(unsigned long cpu, unsigned int *associativity)
 }
 
 static long vphn_get_associativity(unsigned long cpu,
-					unsigned int *associativity)
+					__be32 *associativity)
 {
 	long rc;
 
@@ -1450,7 +1454,7 @@ int arch_update_cpu_topology(void)
 {
 	unsigned int cpu, sibling, changed = 0;
 	struct topology_update_data *updates, *ud;
-	unsigned int associativity[VPHN_ASSOC_BUFSIZE] = {0};
+	__be32 associativity[VPHN_ASSOC_BUFSIZE] = {0};
 	cpumask_t updated_cpus;
 	struct device *dev;
 	int weight, new_nid, i = 0;
-- 
1.8.1.2

^ permalink raw reply related	[flat|nested] 76+ messages in thread

* [PATCH 28/63] powerpc: Add endian annotations to lppaca, slb_shadow and dtl_entry
  2013-08-06 16:01 [PATCH 00/63] 64bit PowerPC little endian support Anton Blanchard
                   ` (26 preceding siblings ...)
  2013-08-06 16:01 ` [PATCH 27/63] powerpc: Make NUMA device node code endian safe Anton Blanchard
@ 2013-08-06 16:01 ` Anton Blanchard
  2013-08-06 16:01 ` [PATCH 29/63] powerpc: Fix little endian " Anton Blanchard
                   ` (34 subsequent siblings)
  62 siblings, 0 replies; 76+ messages in thread
From: Anton Blanchard @ 2013-08-06 16:01 UTC (permalink / raw)
  To: Benjamin Herrenschmidt, Paul Mackerras; +Cc: linuxppc-dev

Add endian annotations to the various hypervisor structures that are
defined as big endian.
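
These annotations are mainly for sparse: with the fields marked
__be16/__be32/__be64, a build with endian checking enabled (e.g.
make C=2 CF=-D__CHECK_ENDIAN__) warns wherever a value is used without
an explicit conversion. A sketch, not taken from this patch:

	u32 yields = be32_to_cpu(lppaca_of(cpu).yield_count);	/* ok */
	u32 oops   = lppaca_of(cpu).yield_count;		/* sparse warns */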

Signed-off-by: Anton Blanchard <anton@samba.org>
---
 arch/powerpc/include/asm/lppaca.h | 50 +++++++++++++++++++--------------------
 1 file changed, 25 insertions(+), 25 deletions(-)

diff --git a/arch/powerpc/include/asm/lppaca.h b/arch/powerpc/include/asm/lppaca.h
index bc8def0..4470d1e 100644
--- a/arch/powerpc/include/asm/lppaca.h
+++ b/arch/powerpc/include/asm/lppaca.h
@@ -48,13 +48,13 @@
 struct lppaca {
 	/* cacheline 1 contains read-only data */
 
-	u32	desc;			/* Eye catcher 0xD397D781 */
-	u16	size;			/* Size of this struct */
+	__be32	desc;			/* Eye catcher 0xD397D781 */
+	__be16	size;			/* Size of this struct */
 	u8	reserved1[3];
 	u8	__old_status;		/* Old status, including shared proc */
 	u8	reserved3[14];
-	volatile u32 dyn_hw_node_id;	/* Dynamic hardware node id */
-	volatile u32 dyn_hw_proc_id;	/* Dynamic hardware proc id */
+	volatile __be32 dyn_hw_node_id;	/* Dynamic hardware node id */
+	volatile __be32 dyn_hw_proc_id;	/* Dynamic hardware proc id */
 	u8	reserved4[56];
 	volatile u8 vphn_assoc_counts[8]; /* Virtual processor home node */
 					  /* associativity change counters */
@@ -71,9 +71,9 @@ struct lppaca {
 	u8	fpregs_in_use;
 	u8	pmcregs_in_use;
 	u8	reserved8[28];
-	u64	wait_state_cycles;	/* Wait cycles for this proc */
+	__be64	wait_state_cycles;	/* Wait cycles for this proc */
 	u8	reserved9[28];
-	u16	slb_count;		/* # of SLBs to maintain */
+	__be16	slb_count;		/* # of SLBs to maintain */
 	u8	idle;			/* Indicate OS is idle */
 	u8	vmxregs_in_use;
 
@@ -87,17 +87,17 @@ struct lppaca {
 	 * NOTE: This value will ALWAYS be zero for dedicated processors and
 	 * will NEVER be zero for shared processors (ie, initialized to a 1).
 	 */
-	volatile u32 yield_count;
-	volatile u32 dispersion_count;	/* dispatch changed physical cpu */
-	volatile u64 cmo_faults;	/* CMO page fault count */
-	volatile u64 cmo_fault_time;	/* CMO page fault time */
+	volatile __be32 yield_count;
+	volatile __be32 dispersion_count; /* dispatch changed physical cpu */
+	volatile __be64 cmo_faults;	/* CMO page fault count */
+	volatile __be64 cmo_fault_time;	/* CMO page fault time */
 	u8	reserved10[104];
 
 	/* cacheline 4-5 */
 
-	u32	page_ins;		/* CMO Hint - # page ins by OS */
+	__be32	page_ins;		/* CMO Hint - # page ins by OS */
 	u8	reserved11[148];
-	volatile u64 dtl_idx;		/* Dispatch Trace Log head index */
+	volatile __be64 dtl_idx;		/* Dispatch Trace Log head index */
 	u8	reserved12[96];
 } __attribute__((__aligned__(0x400)));
 
@@ -123,12 +123,12 @@ static inline bool lppaca_shared_proc(struct lppaca *l)
  * ESID is stored in the lower 64bits, then the VSID.
  */
 struct slb_shadow {
-	u32	persistent;		/* Number of persistent SLBs */
-	u32	buffer_length;		/* Total shadow buffer length */
-	u64	reserved;
+	__be32	persistent;		/* Number of persistent SLBs */
+	__be32	buffer_length;		/* Total shadow buffer length */
+	__be64	reserved;
 	struct	{
-		u64     esid;
-		u64	vsid;
+		__be64     esid;
+		__be64	vsid;
 	} save_area[SLB_NUM_BOLTED];
 } ____cacheline_aligned;
 
@@ -140,14 +140,14 @@ extern struct slb_shadow slb_shadow[];
 struct dtl_entry {
 	u8	dispatch_reason;
 	u8	preempt_reason;
-	u16	processor_id;
-	u32	enqueue_to_dispatch_time;
-	u32	ready_to_enqueue_time;
-	u32	waiting_to_ready_time;
-	u64	timebase;
-	u64	fault_addr;
-	u64	srr0;
-	u64	srr1;
+	__be16	processor_id;
+	__be32	enqueue_to_dispatch_time;
+	__be32	ready_to_enqueue_time;
+	__be32	waiting_to_ready_time;
+	__be64	timebase;
+	__be64	fault_addr;
+	__be64	srr0;
+	__be64	srr1;
 };
 
 #define DISPATCH_LOG_BYTES	4096	/* bytes per cpu */
-- 
1.8.1.2

^ permalink raw reply related	[flat|nested] 76+ messages in thread

* [PATCH 29/63] powerpc: Fix little endian lppaca, slb_shadow and dtl_entry
  2013-08-06 16:01 [PATCH 00/63] 64bit PowerPC little endian support Anton Blanchard
                   ` (27 preceding siblings ...)
  2013-08-06 16:01 ` [PATCH 28/63] powerpc: Add endian annotations to lppaca, slb_shadow and dtl_entry Anton Blanchard
@ 2013-08-06 16:01 ` Anton Blanchard
  2013-08-06 16:01 ` [PATCH 30/63] powerpc: Emulate instructions in little endian mode Anton Blanchard
                   ` (33 subsequent siblings)
  62 siblings, 0 replies; 76+ messages in thread
From: Anton Blanchard @ 2013-08-06 16:01 UTC (permalink / raw)
  To: Benjamin Herrenschmidt, Paul Mackerras; +Cc: linuxppc-dev

The lppaca, slb_shadow and dtl_entry hypervisor structures are
big endian, so we have to byte swap them in little endian builds.

LE KVM hosts will also need to be fixed, but for now add an #error
to remind us.
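
Most of the C changes follow one read-modify-write pattern; a condensed
sketch (delta is just a stand-in for the increment):

	/* Illustrative only: update a big endian lppaca field on LE. */
	u64 cycles = be64_to_cpu(get_lppaca()->wait_state_cycles);
	cycles += delta;		/* work in CPU byte order */
	get_lppaca()->wait_state_cycles = cpu_to_be64(cycles);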

Signed-off-by: Anton Blanchard <anton@samba.org>
---
 arch/powerpc/include/asm/asm-compat.h           |  9 +++++++++
 arch/powerpc/include/asm/ppc_asm.h              |  3 ++-
 arch/powerpc/kernel/entry_64.S                  | 11 +++++++----
 arch/powerpc/kernel/lparcfg.c                   |  9 +++++----
 arch/powerpc/kernel/paca.c                      | 10 +++++-----
 arch/powerpc/kernel/time.c                      | 16 ++++++++--------
 arch/powerpc/kvm/book3s_64_slb.S                |  4 ++++
 arch/powerpc/kvm/book3s_hv_rmhandlers.S         |  4 ++++
 arch/powerpc/lib/locks.c                        |  4 ++--
 arch/powerpc/mm/fault.c                         |  6 +++++-
 arch/powerpc/mm/slb.c                           |  9 ++++++---
 arch/powerpc/platforms/pseries/dtl.c            |  2 +-
 arch/powerpc/platforms/pseries/lpar.c           |  2 +-
 arch/powerpc/platforms/pseries/processor_idle.c |  6 +++++-
 arch/powerpc/platforms/pseries/setup.c          |  2 +-
 15 files changed, 65 insertions(+), 32 deletions(-)

diff --git a/arch/powerpc/include/asm/asm-compat.h b/arch/powerpc/include/asm/asm-compat.h
index 6e82f5f..4b237aa 100644
--- a/arch/powerpc/include/asm/asm-compat.h
+++ b/arch/powerpc/include/asm/asm-compat.h
@@ -32,6 +32,15 @@
 #define PPC_MTOCRF(FXM, RS) MTOCRF((FXM), RS)
 #define PPC_LR_STKOFF	16
 #define PPC_MIN_STKFRM	112
+
+#ifdef __BIG_ENDIAN__
+#define LDX_BE	stringify_in_c(ldx)
+#define STDX_BE	stringify_in_c(stdx)
+#else
+#define LDX_BE	stringify_in_c(ldbrx)
+#define STDX_BE	stringify_in_c(stdbrx)
+#endif
+
 #else /* 32-bit */
 
 /* operations for longs and pointers */
diff --git a/arch/powerpc/include/asm/ppc_asm.h b/arch/powerpc/include/asm/ppc_asm.h
index b5c85f1..4ebb4f8 100644
--- a/arch/powerpc/include/asm/ppc_asm.h
+++ b/arch/powerpc/include/asm/ppc_asm.h
@@ -54,7 +54,8 @@ BEGIN_FW_FTR_SECTION;							\
 	/* from user - see if there are any DTL entries to process */	\
 	ld	r10,PACALPPACAPTR(r13);	/* get ptr to VPA */		\
 	ld	r11,PACA_DTL_RIDX(r13);	/* get log read index */	\
-	ld	r10,LPPACA_DTLIDX(r10);	/* get log write index */	\
+	addi	r10,r10,LPPACA_DTLIDX;					\
+	LDX_BE	r10,0,r10;		/* get log write index */	\
 	cmpd	cr1,r11,r10;						\
 	beq+	cr1,33f;						\
 	bl	.accumulate_stolen_time;				\
diff --git a/arch/powerpc/kernel/entry_64.S b/arch/powerpc/kernel/entry_64.S
index ab15b8d..209e8ce 100644
--- a/arch/powerpc/kernel/entry_64.S
+++ b/arch/powerpc/kernel/entry_64.S
@@ -102,7 +102,8 @@ BEGIN_FW_FTR_SECTION
 	/* if from user, see if there are any DTL entries to process */
 	ld	r10,PACALPPACAPTR(r13)	/* get ptr to VPA */
 	ld	r11,PACA_DTL_RIDX(r13)	/* get log read index */
-	ld	r10,LPPACA_DTLIDX(r10)	/* get log write index */
+	addi	r10,r10,LPPACA_DTLIDX
+	LDX_BE	r10,0,r10		/* get log write index */
 	cmpd	cr1,r11,r10
 	beq+	cr1,33f
 	bl	.accumulate_stolen_time
@@ -531,9 +532,11 @@ END_MMU_FTR_SECTION_IFSET(MMU_FTR_1T_SEGMENT)
 	 */
 	ld	r9,PACA_SLBSHADOWPTR(r13)
 	li	r12,0
-	std	r12,SLBSHADOW_STACKESID(r9) /* Clear ESID */
-	std	r7,SLBSHADOW_STACKVSID(r9)  /* Save VSID */
-	std	r0,SLBSHADOW_STACKESID(r9)  /* Save ESID */
+	std	r12,SLBSHADOW_STACKESID(r9)	/* Clear ESID */
+	li	r12,SLBSHADOW_STACKVSID
+	STDX_BE	r7,r12,r9			/* Save VSID */
+	li	r12,SLBSHADOW_STACKESID
+	STDX_BE	r0,r12,r9			/* Save ESID */
 
 	/* No need to check for MMU_FTR_NO_SLBIE_B here, since when
 	 * we have 1TB segments, the only CPUs known to have the errata
diff --git a/arch/powerpc/kernel/lparcfg.c b/arch/powerpc/kernel/lparcfg.c
index e6024c2..0204089 100644
--- a/arch/powerpc/kernel/lparcfg.c
+++ b/arch/powerpc/kernel/lparcfg.c
@@ -387,8 +387,8 @@ static void pseries_cmo_data(struct seq_file *m)
 		return;
 
 	for_each_possible_cpu(cpu) {
-		cmo_faults += lppaca_of(cpu).cmo_faults;
-		cmo_fault_time += lppaca_of(cpu).cmo_fault_time;
+		cmo_faults += be64_to_cpu(lppaca_of(cpu).cmo_faults);
+		cmo_fault_time += be64_to_cpu(lppaca_of(cpu).cmo_fault_time);
 	}
 
 	seq_printf(m, "cmo_faults=%lu\n", cmo_faults);
@@ -406,8 +406,9 @@ static void splpar_dispatch_data(struct seq_file *m)
 	unsigned long dispatch_dispersions = 0;
 
 	for_each_possible_cpu(cpu) {
-		dispatches += lppaca_of(cpu).yield_count;
-		dispatch_dispersions += lppaca_of(cpu).dispersion_count;
+		dispatches += be32_to_cpu(lppaca_of(cpu).yield_count);
+		dispatch_dispersions +=
+			be32_to_cpu(lppaca_of(cpu).dispersion_count);
 	}
 
 	seq_printf(m, "dispatches=%lu\n", dispatches);
diff --git a/arch/powerpc/kernel/paca.c b/arch/powerpc/kernel/paca.c
index f8f2468..3fc16e3 100644
--- a/arch/powerpc/kernel/paca.c
+++ b/arch/powerpc/kernel/paca.c
@@ -34,10 +34,10 @@ extern unsigned long __toc_start;
  */
 struct lppaca lppaca[] = {
 	[0 ... (NR_LPPACAS-1)] = {
-		.desc = 0xd397d781,	/* "LpPa" */
-		.size = sizeof(struct lppaca),
+		.desc = cpu_to_be32(0xd397d781),	/* "LpPa" */
+		.size = cpu_to_be16(sizeof(struct lppaca)),
 		.fpregs_in_use = 1,
-		.slb_count = 64,
+		.slb_count = cpu_to_be16(64),
 		.vmxregs_in_use = 0,
 		.page_ins = 0,
 	},
@@ -101,8 +101,8 @@ static inline void free_lppacas(void) { }
  */
 struct slb_shadow slb_shadow[] __cacheline_aligned = {
 	[0 ... (NR_CPUS-1)] = {
-		.persistent = SLB_NUM_BOLTED,
-		.buffer_length = sizeof(struct slb_shadow),
+		.persistent = cpu_to_be32(SLB_NUM_BOLTED),
+		.buffer_length = cpu_to_be32(sizeof(struct slb_shadow)),
 	},
 };
 
diff --git a/arch/powerpc/kernel/time.c b/arch/powerpc/kernel/time.c
index c863aa1..b2bcd34 100644
--- a/arch/powerpc/kernel/time.c
+++ b/arch/powerpc/kernel/time.c
@@ -210,18 +210,18 @@ static u64 scan_dispatch_log(u64 stop_tb)
 	if (!dtl)
 		return 0;
 
-	if (i == vpa->dtl_idx)
+	if (i == be64_to_cpu(vpa->dtl_idx))
 		return 0;
-	while (i < vpa->dtl_idx) {
+	while (i < be64_to_cpu(vpa->dtl_idx)) {
 		if (dtl_consumer)
 			dtl_consumer(dtl, i);
-		dtb = dtl->timebase;
-		tb_delta = dtl->enqueue_to_dispatch_time +
-			dtl->ready_to_enqueue_time;
+		dtb = be64_to_cpu(dtl->timebase);
+		tb_delta = be32_to_cpu(dtl->enqueue_to_dispatch_time) +
+			be32_to_cpu(dtl->ready_to_enqueue_time);
 		barrier();
-		if (i + N_DISPATCH_LOG < vpa->dtl_idx) {
+		if (i + N_DISPATCH_LOG < be64_to_cpu(vpa->dtl_idx)) {
 			/* buffer has overflowed */
-			i = vpa->dtl_idx - N_DISPATCH_LOG;
+			i = be64_to_cpu(vpa->dtl_idx) - N_DISPATCH_LOG;
 			dtl = local_paca->dispatch_log + (i % N_DISPATCH_LOG);
 			continue;
 		}
@@ -269,7 +269,7 @@ static inline u64 calculate_stolen_time(u64 stop_tb)
 {
 	u64 stolen = 0;
 
-	if (get_paca()->dtl_ridx != get_paca()->lppaca_ptr->dtl_idx) {
+	if (get_paca()->dtl_ridx != be64_to_cpu(get_lppaca()->dtl_idx)) {
 		stolen = scan_dispatch_log(stop_tb);
 		get_paca()->system_time -= stolen;
 	}
diff --git a/arch/powerpc/kvm/book3s_64_slb.S b/arch/powerpc/kvm/book3s_64_slb.S
index 4f0caec..4f12e8f 100644
--- a/arch/powerpc/kvm/book3s_64_slb.S
+++ b/arch/powerpc/kvm/book3s_64_slb.S
@@ -17,6 +17,10 @@
  * Authors: Alexander Graf <agraf@suse.de>
  */
 
+#ifdef __LITTLE_ENDIAN__
+#error Need to fix SLB shadow accesses in little endian mode
+#endif
+
 #define SHADOW_SLB_ESID(num)	(SLBSHADOW_SAVEAREA + (num * 0x10))
 #define SHADOW_SLB_VSID(num)	(SLBSHADOW_SAVEAREA + (num * 0x10) + 0x8)
 #define UNBOLT_SLB_ENTRY(num) \
diff --git a/arch/powerpc/kvm/book3s_hv_rmhandlers.S b/arch/powerpc/kvm/book3s_hv_rmhandlers.S
index b02f91e..20e7fcd 100644
--- a/arch/powerpc/kvm/book3s_hv_rmhandlers.S
+++ b/arch/powerpc/kvm/book3s_hv_rmhandlers.S
@@ -29,6 +29,10 @@
 #include <asm/kvm_book3s_asm.h>
 #include <asm/mmu-hash64.h>
 
+#ifdef __LITTLE_ENDIAN__
+#error Need to fix lppaca and SLB shadow accesses in little endian mode
+#endif
+
 /*****************************************************************************
  *                                                                           *
  *        Real Mode handlers that need to be in the linear mapping           *
diff --git a/arch/powerpc/lib/locks.c b/arch/powerpc/lib/locks.c
index bb7cfec..0c9c8d7 100644
--- a/arch/powerpc/lib/locks.c
+++ b/arch/powerpc/lib/locks.c
@@ -32,7 +32,7 @@ void __spin_yield(arch_spinlock_t *lock)
 		return;
 	holder_cpu = lock_value & 0xffff;
 	BUG_ON(holder_cpu >= NR_CPUS);
-	yield_count = lppaca_of(holder_cpu).yield_count;
+	yield_count = be32_to_cpu(lppaca_of(holder_cpu).yield_count);
 	if ((yield_count & 1) == 0)
 		return;		/* virtual cpu is currently running */
 	rmb();
@@ -57,7 +57,7 @@ void __rw_yield(arch_rwlock_t *rw)
 		return;		/* no write lock at present */
 	holder_cpu = lock_value & 0xffff;
 	BUG_ON(holder_cpu >= NR_CPUS);
-	yield_count = lppaca_of(holder_cpu).yield_count;
+	yield_count = be32_to_cpu(lppaca_of(holder_cpu).yield_count);
 	if ((yield_count & 1) == 0)
 		return;		/* virtual cpu is currently running */
 	rmb();
diff --git a/arch/powerpc/mm/fault.c b/arch/powerpc/mm/fault.c
index 8726779..76d8e7c 100644
--- a/arch/powerpc/mm/fault.c
+++ b/arch/powerpc/mm/fault.c
@@ -443,8 +443,12 @@ good_area:
 				      regs, address);
 #ifdef CONFIG_PPC_SMLPAR
 			if (firmware_has_feature(FW_FEATURE_CMO)) {
+				u32 page_ins;
+
 				preempt_disable();
-				get_lppaca()->page_ins += (1 << PAGE_FACTOR);
+				page_ins = be32_to_cpu(get_lppaca()->page_ins);
+				page_ins += 1 << PAGE_FACTOR;
+				get_lppaca()->page_ins = cpu_to_be32(page_ins);
 				preempt_enable();
 			}
 #endif /* CONFIG_PPC_SMLPAR */
diff --git a/arch/powerpc/mm/slb.c b/arch/powerpc/mm/slb.c
index a538c80..9d1d33c 100644
--- a/arch/powerpc/mm/slb.c
+++ b/arch/powerpc/mm/slb.c
@@ -66,8 +66,10 @@ static inline void slb_shadow_update(unsigned long ea, int ssize,
 	 * we only update the current CPU's SLB shadow buffer.
 	 */
 	get_slb_shadow()->save_area[entry].esid = 0;
-	get_slb_shadow()->save_area[entry].vsid = mk_vsid_data(ea, ssize, flags);
-	get_slb_shadow()->save_area[entry].esid = mk_esid_data(ea, ssize, entry);
+	get_slb_shadow()->save_area[entry].vsid =
+				cpu_to_be64(mk_vsid_data(ea, ssize, flags));
+	get_slb_shadow()->save_area[entry].esid =
+				cpu_to_be64(mk_esid_data(ea, ssize, entry));
 }
 
 static inline void slb_shadow_clear(unsigned long entry)
@@ -112,7 +114,8 @@ static void __slb_flush_and_rebolt(void)
 	} else {
 		/* Update stack entry; others don't change */
 		slb_shadow_update(get_paca()->kstack, mmu_kernel_ssize, lflags, 2);
-		ksp_vsid_data = get_slb_shadow()->save_area[2].vsid;
+		ksp_vsid_data =
+			be64_to_cpu(get_slb_shadow()->save_area[2].vsid);
 	}
 
 	/* We need to do this all in asm, so we're sure we don't touch
diff --git a/arch/powerpc/platforms/pseries/dtl.c b/arch/powerpc/platforms/pseries/dtl.c
index 0cc0ac0..238240e 100644
--- a/arch/powerpc/platforms/pseries/dtl.c
+++ b/arch/powerpc/platforms/pseries/dtl.c
@@ -87,7 +87,7 @@ static void consume_dtle(struct dtl_entry *dtle, u64 index)
 	barrier();
 
 	/* check for hypervisor ring buffer overflow, ignore this entry if so */
-	if (index + N_DISPATCH_LOG < vpa->dtl_idx)
+	if (index + N_DISPATCH_LOG < be64_to_cpu(vpa->dtl_idx))
 		return;
 
 	++wp;
diff --git a/arch/powerpc/platforms/pseries/lpar.c b/arch/powerpc/platforms/pseries/lpar.c
index 60b6f4e..0b7c86e 100644
--- a/arch/powerpc/platforms/pseries/lpar.c
+++ b/arch/powerpc/platforms/pseries/lpar.c
@@ -106,7 +106,7 @@ void vpa_init(int cpu)
 		lppaca_of(cpu).dtl_idx = 0;
 
 		/* hypervisor reads buffer length from this field */
-		dtl->enqueue_to_dispatch_time = DISPATCH_LOG_BYTES;
+		dtl->enqueue_to_dispatch_time = cpu_to_be32(DISPATCH_LOG_BYTES);
 		ret = register_dtl(hwcpu, __pa(dtl));
 		if (ret)
 			pr_err("WARNING: DTL registration of cpu %d (hw %d) "
diff --git a/arch/powerpc/platforms/pseries/processor_idle.c b/arch/powerpc/platforms/pseries/processor_idle.c
index 92db881..14899b1 100644
--- a/arch/powerpc/platforms/pseries/processor_idle.c
+++ b/arch/powerpc/platforms/pseries/processor_idle.c
@@ -45,7 +45,11 @@ static inline void idle_loop_prolog(unsigned long *in_purr)
 
 static inline void idle_loop_epilog(unsigned long in_purr)
 {
-	get_lppaca()->wait_state_cycles += mfspr(SPRN_PURR) - in_purr;
+	u64 wait_cycles;
+
+	wait_cycles = be64_to_cpu(get_lppaca()->wait_state_cycles);
+	wait_cycles += mfspr(SPRN_PURR) - in_purr;
+	get_lppaca()->wait_state_cycles = cpu_to_be64(wait_cycles);
 	get_lppaca()->idle = 0;
 }
 
diff --git a/arch/powerpc/platforms/pseries/setup.c b/arch/powerpc/platforms/pseries/setup.c
index b19cd83..33d6196 100644
--- a/arch/powerpc/platforms/pseries/setup.c
+++ b/arch/powerpc/platforms/pseries/setup.c
@@ -323,7 +323,7 @@ static int alloc_dispatch_logs(void)
 	get_paca()->lppaca_ptr->dtl_idx = 0;
 
 	/* hypervisor reads buffer length from this field */
-	dtl->enqueue_to_dispatch_time = DISPATCH_LOG_BYTES;
+	dtl->enqueue_to_dispatch_time = cpu_to_be32(DISPATCH_LOG_BYTES);
 	ret = register_dtl(hard_smp_processor_id(), __pa(dtl));
 	if (ret)
 		pr_err("WARNING: DTL registration of cpu %d (hw %d) failed "
-- 
1.8.1.2

^ permalink raw reply related	[flat|nested] 76+ messages in thread

* [PATCH 30/63] powerpc: Emulate instructions in little endian mode
  2013-08-06 16:01 [PATCH 00/63] 64bit PowerPC little endian support Anton Blanchard
                   ` (28 preceding siblings ...)
  2013-08-06 16:01 ` [PATCH 29/63] powerpc: Fix little endian " Anton Blanchard
@ 2013-08-06 16:01 ` Anton Blanchard
  2013-08-06 16:01 ` [PATCH 31/63] powerpc: Little endian SMP IPI demux Anton Blanchard
                   ` (32 subsequent siblings)
  62 siblings, 0 replies; 76+ messages in thread
From: Anton Blanchard @ 2013-08-06 16:01 UTC (permalink / raw)
  To: Benjamin Herrenschmidt, Paul Mackerras; +Cc: linuxppc-dev

Alistair noticed we got a SIGILL on userspace mfpvr instructions.

Remove the little endian check in the emulation code; it is probably
there to protect against the old pseudo little endian implementations
and doesn't make sense for real little endian mode.
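
For reference, a minimal sketch of the userspace access that triggers
the emulation path (the SPR number comes from the architecture, not
from this patch):

	/* Illustrative only: PVR is a privileged SPR, so this traps and
	 * the kernel emulates it on behalf of userspace.
	 */
	unsigned long pvr;
	asm volatile("mfspr %0, 287" : "=r" (pvr));	/* SPR 287 = PVR */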

Signed-off-by: Anton Blanchard <anton@samba.org>
---
 arch/powerpc/kernel/traps.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/powerpc/kernel/traps.c b/arch/powerpc/kernel/traps.c
index bf33c22..e601220 100644
--- a/arch/powerpc/kernel/traps.c
+++ b/arch/powerpc/kernel/traps.c
@@ -964,7 +964,7 @@ static int emulate_instruction(struct pt_regs *regs)
 	u32 instword;
 	u32 rd;
 
-	if (!user_mode(regs) || (regs->msr & MSR_LE))
+	if (!user_mode(regs))
 		return -EINVAL;
 	CHECK_FULL_REGS(regs);
 
-- 
1.8.1.2

^ permalink raw reply related	[flat|nested] 76+ messages in thread

* [PATCH 31/63] powerpc: Little endian SMP IPI demux
  2013-08-06 16:01 [PATCH 00/63] 64bit PowerPC little endian support Anton Blanchard
                   ` (29 preceding siblings ...)
  2013-08-06 16:01 ` [PATCH 30/63] powerpc: Emulate instructions in little endian mode Anton Blanchard
@ 2013-08-06 16:01 ` Anton Blanchard
  2013-08-06 16:01 ` [PATCH 32/63] powerpc/pseries: Fix endian issues in H_GET_TERM_CHAR/H_PUT_TERM_CHAR Anton Blanchard
                   ` (31 subsequent siblings)
  62 siblings, 0 replies; 76+ messages in thread
From: Anton Blanchard @ 2013-08-06 16:01 UTC (permalink / raw)
  To: Benjamin Herrenschmidt, Paul Mackerras; +Cc: linuxppc-dev

Add little endian support for demuxing SMP IPIs.
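
The send side sets one byte per message type and the demux side reads
all four flags back as a single word, so the byte lane a given message
lands in depends on host byte order. A sketch of why the bit position
flips (illustrative only):

	union ipi_word {
		unsigned char byte[4];	/* byte[msg] = 1 on the send side */
		unsigned int all;	/* xchg()'d to zero in the demux */
	};
	/* BE: byte[0] is bits 24-31 of 'all'; LE: byte[0] is bits 0-7,
	 * hence the two IPI_MESSAGE() definitions below.
	 */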

Signed-off-by: Anton Blanchard <anton@samba.org>
---
 arch/powerpc/kernel/smp.c | 19 ++++++++++---------
 1 file changed, 10 insertions(+), 9 deletions(-)

diff --git a/arch/powerpc/kernel/smp.c b/arch/powerpc/kernel/smp.c
index 98822400..bcdb706 100644
--- a/arch/powerpc/kernel/smp.c
+++ b/arch/powerpc/kernel/smp.c
@@ -210,6 +210,12 @@ void smp_muxed_ipi_message_pass(int cpu, int msg)
 	smp_ops->cause_ipi(cpu, info->data);
 }
 
+#ifdef __BIG_ENDIAN__
+#define IPI_MESSAGE(A) (1 << (24 - 8 * (A)))
+#else
+#define IPI_MESSAGE(A) (1 << (8 * (A)))
+#endif
+
 irqreturn_t smp_ipi_demux(void)
 {
 	struct cpu_messages *info = &__get_cpu_var(ipi_message);
@@ -219,19 +225,14 @@ irqreturn_t smp_ipi_demux(void)
 
 	do {
 		all = xchg(&info->messages, 0);
-
-#ifdef __BIG_ENDIAN
-		if (all & (1 << (24 - 8 * PPC_MSG_CALL_FUNCTION)))
+		if (all & IPI_MESSAGE(PPC_MSG_CALL_FUNCTION))
 			generic_smp_call_function_interrupt();
-		if (all & (1 << (24 - 8 * PPC_MSG_RESCHEDULE)))
+		if (all & IPI_MESSAGE(PPC_MSG_RESCHEDULE))
 			scheduler_ipi();
-		if (all & (1 << (24 - 8 * PPC_MSG_CALL_FUNC_SINGLE)))
+		if (all & IPI_MESSAGE(PPC_MSG_CALL_FUNC_SINGLE))
 			generic_smp_call_function_single_interrupt();
-		if (all & (1 << (24 - 8 * PPC_MSG_DEBUGGER_BREAK)))
+		if (all & IPI_MESSAGE(PPC_MSG_DEBUGGER_BREAK))
 			debug_ipi_action(0, NULL);
-#else
-#error Unsupported ENDIAN
-#endif
 	} while (info->messages);
 
 	return IRQ_HANDLED;
-- 
1.8.1.2

^ permalink raw reply related	[flat|nested] 76+ messages in thread

* [PATCH 32/63] powerpc/pseries: Fix endian issues in H_GET_TERM_CHAR/H_PUT_TERM_CHAR
  2013-08-06 16:01 [PATCH 00/63] 64bit PowerPC little endian support Anton Blanchard
                   ` (30 preceding siblings ...)
  2013-08-06 16:01 ` [PATCH 31/63] powerpc: Little endian SMP IPI demux Anton Blanchard
@ 2013-08-06 16:01 ` Anton Blanchard
  2013-08-06 16:01 ` [PATCH 33/63] powerpc: Fix little endian coredumps Anton Blanchard
                   ` (30 subsequent siblings)
  62 siblings, 0 replies; 76+ messages in thread
From: Anton Blanchard @ 2013-08-06 16:01 UTC (permalink / raw)
  To: Benjamin Herrenschmidt, Paul Mackerras; +Cc: linuxppc-dev

Signed-off-by: Anton Blanchard <anton@samba.org>
---
 arch/powerpc/platforms/pseries/hvconsole.c | 9 +++++----
 1 file changed, 5 insertions(+), 4 deletions(-)

diff --git a/arch/powerpc/platforms/pseries/hvconsole.c b/arch/powerpc/platforms/pseries/hvconsole.c
index aa0aa37..ef6d59a 100644
--- a/arch/powerpc/platforms/pseries/hvconsole.c
+++ b/arch/powerpc/platforms/pseries/hvconsole.c
@@ -45,8 +45,8 @@ int hvc_get_chars(uint32_t vtermno, char *buf, int count)
 	unsigned long *lbuf = (unsigned long *)buf;
 
 	ret = plpar_hcall(H_GET_TERM_CHAR, retbuf, vtermno);
-	lbuf[0] = retbuf[1];
-	lbuf[1] = retbuf[2];
+	lbuf[0] = be64_to_cpu(retbuf[1]);
+	lbuf[1] = be64_to_cpu(retbuf[2]);
 
 	if (ret == H_SUCCESS)
 		return retbuf[0];
@@ -75,8 +75,9 @@ int hvc_put_chars(uint32_t vtermno, const char *buf, int count)
 	if (count > MAX_VIO_PUT_CHARS)
 		count = MAX_VIO_PUT_CHARS;
 
-	ret = plpar_hcall_norets(H_PUT_TERM_CHAR, vtermno, count, lbuf[0],
-				 lbuf[1]);
+	ret = plpar_hcall_norets(H_PUT_TERM_CHAR, vtermno, count,
+				 cpu_to_be64(lbuf[0]),
+				 cpu_to_be64(lbuf[1]));
 	if (ret == H_SUCCESS)
 		return count;
 	if (ret == H_BUSY)
-- 
1.8.1.2

^ permalink raw reply related	[flat|nested] 76+ messages in thread

* [PATCH 33/63] powerpc: Fix little endian coredumps
  2013-08-06 16:01 [PATCH 00/63] 64bit PowerPC little endian support Anton Blanchard
                   ` (31 preceding siblings ...)
  2013-08-06 16:01 ` [PATCH 32/63] powerpc/pseries: Fix endian issues in H_GET_TERM_CHAR/H_PUT_TERM_CHAR Anton Blanchard
@ 2013-08-06 16:01 ` Anton Blanchard
  2013-08-06 16:01 ` [PATCH 34/63] powerpc: Make rwlocks endian safe Anton Blanchard
                   ` (29 subsequent siblings)
  62 siblings, 0 replies; 76+ messages in thread
From: Anton Blanchard @ 2013-08-06 16:01 UTC (permalink / raw)
  To: Benjamin Herrenschmidt, Paul Mackerras; +Cc: linuxppc-dev

We need to set ELF_DATA correctly on LE coredumps.
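
The value chosen for ELF_DATA ends up in the e_ident[EI_DATA] byte of
the core file header, so a consumer can tell the two apart. A userspace
sketch (illustrative only):

	#include <elf.h>

	static int core_is_little_endian(const Elf64_Ehdr *ehdr)
	{
		return ehdr->e_ident[EI_DATA] == ELFDATA2LSB;
	}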

Signed-off-by: Anton Blanchard <anton@samba.org>
---
 arch/powerpc/include/uapi/asm/elf.h | 8 ++++++--
 1 file changed, 6 insertions(+), 2 deletions(-)

diff --git a/arch/powerpc/include/uapi/asm/elf.h b/arch/powerpc/include/uapi/asm/elf.h
index 89fa042..7e39c91 100644
--- a/arch/powerpc/include/uapi/asm/elf.h
+++ b/arch/powerpc/include/uapi/asm/elf.h
@@ -109,7 +109,6 @@ typedef elf_gregset_t32 compat_elf_gregset_t;
 # define ELF_GREG_TYPE	elf_greg_t64
 # define ELF_ARCH	EM_PPC64
 # define ELF_CLASS	ELFCLASS64
-# define ELF_DATA	ELFDATA2MSB
 typedef elf_greg_t64 elf_greg_t;
 typedef elf_gregset_t64 elf_gregset_t;
 #else
@@ -118,11 +117,16 @@ typedef elf_gregset_t64 elf_gregset_t;
 # define ELF_GREG_TYPE	elf_greg_t32
 # define ELF_ARCH	EM_PPC
 # define ELF_CLASS	ELFCLASS32
-# define ELF_DATA	ELFDATA2MSB
 typedef elf_greg_t32 elf_greg_t;
 typedef elf_gregset_t32 elf_gregset_t;
 #endif /* __powerpc64__ */
 
+#ifdef __BIG_ENDIAN__
+#define ELF_DATA	ELFDATA2MSB
+#else
+#define ELF_DATA	ELFDATA2LSB
+#endif
+
 /* Floating point registers */
 typedef double elf_fpreg_t;
 typedef elf_fpreg_t elf_fpregset_t[ELF_NFPREG];
-- 
1.8.1.2

^ permalink raw reply related	[flat|nested] 76+ messages in thread

* [PATCH 34/63] powerpc: Make rwlocks endian safe
  2013-08-06 16:01 [PATCH 00/63] 64bit PowerPC little endian support Anton Blanchard
                   ` (32 preceding siblings ...)
  2013-08-06 16:01 ` [PATCH 33/63] powerpc: Fix little endian coredumps Anton Blanchard
@ 2013-08-06 16:01 ` Anton Blanchard
  2013-08-06 16:01 ` [PATCH 35/63] powerpc: Fix endian issues in VMX copy loops Anton Blanchard
                   ` (28 subsequent siblings)
  62 siblings, 0 replies; 76+ messages in thread
From: Anton Blanchard @ 2013-08-06 16:01 UTC (permalink / raw)
  To: Benjamin Herrenschmidt, Paul Mackerras; +Cc: linuxppc-dev

Our ppc64 spinlocks and rwlocks use a trick where a lock token and
the paca index are placed in the lock with a single store. Since these
are two u16 fields, their order needs adjusting for little endian.
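
A sketch of why the order matters (illustrative only): the lock word
overlays two adjacent u16 paca fields and is written with one 32-bit
store.

	union lock_word {
		unsigned int token_and_index;	/* stored in one go */
		struct {
			unsigned short first;	/* lower address */
			unsigned short second;	/* higher address */
		};
	};
	/* BE: 'first' supplies the high 16 bits; LE: 'first' supplies the
	 * low 16 bits. Swapping lock_token/paca_index in struct paca_struct
	 * keeps the encoded value at 0x800000yy either way.
	 */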

Signed-off-by: Anton Blanchard <anton@samba.org>
---
 arch/powerpc/include/asm/paca.h         | 5 +++++
 arch/powerpc/include/asm/spinlock.h     | 4 ++++
 arch/powerpc/kvm/book3s_hv_rm_mmu.c     | 4 ++++
 arch/powerpc/kvm/book3s_hv_rmhandlers.S | 8 ++++++++
 4 files changed, 21 insertions(+)

diff --git a/arch/powerpc/include/asm/paca.h b/arch/powerpc/include/asm/paca.h
index 77c91e7..2bf78b3 100644
--- a/arch/powerpc/include/asm/paca.h
+++ b/arch/powerpc/include/asm/paca.h
@@ -68,8 +68,13 @@ struct paca_struct {
 	 * instruction.  They must travel together and be properly
 	 * aligned.
 	 */
+#ifdef __BIG_ENDIAN__
 	u16 lock_token;			/* Constant 0x8000, used in locks */
 	u16 paca_index;			/* Logical processor number */
+#else
+	u16 paca_index;			/* Logical processor number */
+	u16 lock_token;			/* Constant 0x8000, used in locks */
+#endif
 
 	u64 kernel_toc;			/* Kernel TOC address */
 	u64 kernelbase;			/* Base address of kernel */
diff --git a/arch/powerpc/include/asm/spinlock.h b/arch/powerpc/include/asm/spinlock.h
index 7c345b6..5f54a74 100644
--- a/arch/powerpc/include/asm/spinlock.h
+++ b/arch/powerpc/include/asm/spinlock.h
@@ -32,8 +32,12 @@
 
 #ifdef CONFIG_PPC64
 /* use 0x800000yy when locked, where yy == CPU number */
+#ifdef __BIG_ENDIAN__
 #define LOCK_TOKEN	(*(u32 *)(&get_paca()->lock_token))
 #else
+#define LOCK_TOKEN	(*(u32 *)(&get_paca()->paca_index))
+#endif
+#else
 #define LOCK_TOKEN	1
 #endif
 
diff --git a/arch/powerpc/kvm/book3s_hv_rm_mmu.c b/arch/powerpc/kvm/book3s_hv_rm_mmu.c
index fc25689..c3785d4 100644
--- a/arch/powerpc/kvm/book3s_hv_rm_mmu.c
+++ b/arch/powerpc/kvm/book3s_hv_rm_mmu.c
@@ -363,7 +363,11 @@ long kvmppc_h_enter(struct kvm_vcpu *vcpu, unsigned long flags,
 				 vcpu->arch.pgdir, true, &vcpu->arch.gpr[4]);
 }
 
+#ifdef __BIG_ENDIAN__
 #define LOCK_TOKEN	(*(u32 *)(&get_paca()->lock_token))
+#else
+#define LOCK_TOKEN	(*(u32 *)(&get_paca()->paca_index))
+#endif
 
 static inline int try_lock_tlbie(unsigned int *lock)
 {
diff --git a/arch/powerpc/kvm/book3s_hv_rmhandlers.S b/arch/powerpc/kvm/book3s_hv_rmhandlers.S
index 20e7fcd..b93e3cd 100644
--- a/arch/powerpc/kvm/book3s_hv_rmhandlers.S
+++ b/arch/powerpc/kvm/book3s_hv_rmhandlers.S
@@ -393,7 +393,11 @@ toc_tlbie_lock:
 	.tc	native_tlbie_lock[TC],native_tlbie_lock
 	.previous
 	ld	r3,toc_tlbie_lock@toc(2)
+#ifdef __BIG_ENDIAN__
 	lwz	r8,PACA_LOCK_TOKEN(r13)
+#else
+	lwz	r8,PACAPACAINDEX(r13)
+#endif
 24:	lwarx	r0,0,r3
 	cmpwi	r0,0
 	bne	24b
@@ -968,7 +972,11 @@ END_FTR_SECTION_IFSET(CPU_FTR_ARCH_201)
 32:	ld	r4,VCPU_KVM(r9)		/* pointer to struct kvm */
 
 	/* Take the guest's tlbie_lock */
+#ifdef __BIG_ENDIAN__
 	lwz	r8,PACA_LOCK_TOKEN(r13)
+#else
+	lwz	r8,PACAPACAINDEX(r13)
+#endif
 	addi	r3,r4,KVM_TLBIE_LOCK
 24:	lwarx	r0,0,r3
 	cmpwi	r0,0
-- 
1.8.1.2

^ permalink raw reply related	[flat|nested] 76+ messages in thread

* [PATCH 35/63] powerpc: Fix endian issues in VMX copy loops
  2013-08-06 16:01 [PATCH 00/63] 64bit PowerPC little endian support Anton Blanchard
                   ` (33 preceding siblings ...)
  2013-08-06 16:01 ` [PATCH 34/63] powerpc: Make rwlocks endian safe Anton Blanchard
@ 2013-08-06 16:01 ` Anton Blanchard
  2013-08-06 16:01 ` [PATCH 36/63] powerpc: Book 3S MMU little endian support Anton Blanchard
                   ` (27 subsequent siblings)
  62 siblings, 0 replies; 76+ messages in thread
From: Anton Blanchard @ 2013-08-06 16:01 UTC (permalink / raw)
  To: Benjamin Herrenschmidt, Paul Mackerras; +Cc: linuxppc-dev

Fix the permute loops for little endian.

Signed-off-by: Anton Blanchard <anton@samba.org>
---
 arch/powerpc/lib/copyuser_power7.S | 54 +++++++++++++++++++++----------------
 arch/powerpc/lib/memcpy_power7.S   | 55 ++++++++++++++++++++++----------------
 2 files changed, 63 insertions(+), 46 deletions(-)

diff --git a/arch/powerpc/lib/copyuser_power7.S b/arch/powerpc/lib/copyuser_power7.S
index d1f1179..e8e9c36 100644
--- a/arch/powerpc/lib/copyuser_power7.S
+++ b/arch/powerpc/lib/copyuser_power7.S
@@ -19,6 +19,14 @@
  */
 #include <asm/ppc_asm.h>
 
+#ifdef __BIG_ENDIAN__
+#define LVS(VRT,RA,RB)		lvsl	VRT,RA,RB
+#define VPERM(VRT,VRA,VRB,VRC)	vperm	VRT,VRA,VRB,VRC
+#else
+#define LVS(VRT,RA,RB)		lvsr	VRT,RA,RB
+#define VPERM(VRT,VRA,VRB,VRC)	vperm	VRT,VRB,VRA,VRC
+#endif
+
 	.macro err1
 100:
 	.section __ex_table,"a"
@@ -552,13 +560,13 @@ err3;	stw	r7,4(r3)
 	li	r10,32
 	li	r11,48
 
-	lvsl	vr16,0,r4	/* Setup permute control vector */
+	LVS(vr16,0,r4)		/* Setup permute control vector */
 err3;	lvx	vr0,0,r4
 	addi	r4,r4,16
 
 	bf	cr7*4+3,5f
 err3;	lvx	vr1,r0,r4
-	vperm	vr8,vr0,vr1,vr16
+	VPERM(vr8,vr0,vr1,vr16)
 	addi	r4,r4,16
 err3;	stvx	vr8,r0,r3
 	addi	r3,r3,16
@@ -566,9 +574,9 @@ err3;	stvx	vr8,r0,r3
 
 5:	bf	cr7*4+2,6f
 err3;	lvx	vr1,r0,r4
-	vperm	vr8,vr0,vr1,vr16
+	VPERM(vr8,vr0,vr1,vr16)
 err3;	lvx	vr0,r4,r9
-	vperm	vr9,vr1,vr0,vr16
+	VPERM(vr9,vr1,vr0,vr16)
 	addi	r4,r4,32
 err3;	stvx	vr8,r0,r3
 err3;	stvx	vr9,r3,r9
@@ -576,13 +584,13 @@ err3;	stvx	vr9,r3,r9
 
 6:	bf	cr7*4+1,7f
 err3;	lvx	vr3,r0,r4
-	vperm	vr8,vr0,vr3,vr16
+	VPERM(vr8,vr0,vr3,vr16)
 err3;	lvx	vr2,r4,r9
-	vperm	vr9,vr3,vr2,vr16
+	VPERM(vr9,vr3,vr2,vr16)
 err3;	lvx	vr1,r4,r10
-	vperm	vr10,vr2,vr1,vr16
+	VPERM(vr10,vr2,vr1,vr16)
 err3;	lvx	vr0,r4,r11
-	vperm	vr11,vr1,vr0,vr16
+	VPERM(vr11,vr1,vr0,vr16)
 	addi	r4,r4,64
 err3;	stvx	vr8,r0,r3
 err3;	stvx	vr9,r3,r9
@@ -611,21 +619,21 @@ err3;	stvx	vr11,r3,r11
 	.align	5
 8:
 err4;	lvx	vr7,r0,r4
-	vperm	vr8,vr0,vr7,vr16
+	VPERM(vr8,vr0,vr7,vr16)
 err4;	lvx	vr6,r4,r9
-	vperm	vr9,vr7,vr6,vr16
+	VPERM(vr9,vr7,vr6,vr16)
 err4;	lvx	vr5,r4,r10
-	vperm	vr10,vr6,vr5,vr16
+	VPERM(vr10,vr6,vr5,vr16)
 err4;	lvx	vr4,r4,r11
-	vperm	vr11,vr5,vr4,vr16
+	VPERM(vr11,vr5,vr4,vr16)
 err4;	lvx	vr3,r4,r12
-	vperm	vr12,vr4,vr3,vr16
+	VPERM(vr12,vr4,vr3,vr16)
 err4;	lvx	vr2,r4,r14
-	vperm	vr13,vr3,vr2,vr16
+	VPERM(vr13,vr3,vr2,vr16)
 err4;	lvx	vr1,r4,r15
-	vperm	vr14,vr2,vr1,vr16
+	VPERM(vr14,vr2,vr1,vr16)
 err4;	lvx	vr0,r4,r16
-	vperm	vr15,vr1,vr0,vr16
+	VPERM(vr15,vr1,vr0,vr16)
 	addi	r4,r4,128
 err4;	stvx	vr8,r0,r3
 err4;	stvx	vr9,r3,r9
@@ -649,13 +657,13 @@ err4;	stvx	vr15,r3,r16
 
 	bf	cr7*4+1,9f
 err3;	lvx	vr3,r0,r4
-	vperm	vr8,vr0,vr3,vr16
+	VPERM(vr8,vr0,vr3,vr16)
 err3;	lvx	vr2,r4,r9
-	vperm	vr9,vr3,vr2,vr16
+	VPERM(vr9,vr3,vr2,vr16)
 err3;	lvx	vr1,r4,r10
-	vperm	vr10,vr2,vr1,vr16
+	VPERM(vr10,vr2,vr1,vr16)
 err3;	lvx	vr0,r4,r11
-	vperm	vr11,vr1,vr0,vr16
+	VPERM(vr11,vr1,vr0,vr16)
 	addi	r4,r4,64
 err3;	stvx	vr8,r0,r3
 err3;	stvx	vr9,r3,r9
@@ -665,9 +673,9 @@ err3;	stvx	vr11,r3,r11
 
 9:	bf	cr7*4+2,10f
 err3;	lvx	vr1,r0,r4
-	vperm	vr8,vr0,vr1,vr16
+	VPERM(vr8,vr0,vr1,vr16)
 err3;	lvx	vr0,r4,r9
-	vperm	vr9,vr1,vr0,vr16
+	VPERM(vr9,vr1,vr0,vr16)
 	addi	r4,r4,32
 err3;	stvx	vr8,r0,r3
 err3;	stvx	vr9,r3,r9
@@ -675,7 +683,7 @@ err3;	stvx	vr9,r3,r9
 
 10:	bf	cr7*4+3,11f
 err3;	lvx	vr1,r0,r4
-	vperm	vr8,vr0,vr1,vr16
+	VPERM(vr8,vr0,vr1,vr16)
 	addi	r4,r4,16
 err3;	stvx	vr8,r0,r3
 	addi	r3,r3,16
diff --git a/arch/powerpc/lib/memcpy_power7.S b/arch/powerpc/lib/memcpy_power7.S
index 0663630..e4177db 100644
--- a/arch/powerpc/lib/memcpy_power7.S
+++ b/arch/powerpc/lib/memcpy_power7.S
@@ -20,6 +20,15 @@
 #include <asm/ppc_asm.h>
 
 _GLOBAL(memcpy_power7)
+
+#ifdef __BIG_ENDIAN__
+#define LVS(VRT,RA,RB)		lvsl	VRT,RA,RB
+#define VPERM(VRT,VRA,VRB,VRC)	vperm	VRT,VRA,VRB,VRC
+#else
+#define LVS(VRT,RA,RB)		lvsr	VRT,RA,RB
+#define VPERM(VRT,VRA,VRB,VRC)	vperm	VRT,VRB,VRA,VRC
+#endif
+
 #ifdef CONFIG_ALTIVEC
 	cmpldi	r5,16
 	cmpldi	cr1,r5,4096
@@ -485,13 +494,13 @@ _GLOBAL(memcpy_power7)
 	li	r10,32
 	li	r11,48
 
-	lvsl	vr16,0,r4	/* Setup permute control vector */
+	LVS(vr16,0,r4)		/* Setup permute control vector */
 	lvx	vr0,0,r4
 	addi	r4,r4,16
 
 	bf	cr7*4+3,5f
 	lvx	vr1,r0,r4
-	vperm	vr8,vr0,vr1,vr16
+	VPERM(vr8,vr0,vr1,vr16)
 	addi	r4,r4,16
 	stvx	vr8,r0,r3
 	addi	r3,r3,16
@@ -499,9 +508,9 @@ _GLOBAL(memcpy_power7)
 
 5:	bf	cr7*4+2,6f
 	lvx	vr1,r0,r4
-	vperm	vr8,vr0,vr1,vr16
+	VPERM(vr8,vr0,vr1,vr16)
 	lvx	vr0,r4,r9
-	vperm	vr9,vr1,vr0,vr16
+	VPERM(vr9,vr1,vr0,vr16)
 	addi	r4,r4,32
 	stvx	vr8,r0,r3
 	stvx	vr9,r3,r9
@@ -509,13 +518,13 @@ _GLOBAL(memcpy_power7)
 
 6:	bf	cr7*4+1,7f
 	lvx	vr3,r0,r4
-	vperm	vr8,vr0,vr3,vr16
+	VPERM(vr8,vr0,vr3,vr16)
 	lvx	vr2,r4,r9
-	vperm	vr9,vr3,vr2,vr16
+	VPERM(vr9,vr3,vr2,vr16)
 	lvx	vr1,r4,r10
-	vperm	vr10,vr2,vr1,vr16
+	VPERM(vr10,vr2,vr1,vr16)
 	lvx	vr0,r4,r11
-	vperm	vr11,vr1,vr0,vr16
+	VPERM(vr11,vr1,vr0,vr16)
 	addi	r4,r4,64
 	stvx	vr8,r0,r3
 	stvx	vr9,r3,r9
@@ -544,21 +553,21 @@ _GLOBAL(memcpy_power7)
 	.align	5
 8:
 	lvx	vr7,r0,r4
-	vperm	vr8,vr0,vr7,vr16
+	VPERM(vr8,vr0,vr7,vr16)
 	lvx	vr6,r4,r9
-	vperm	vr9,vr7,vr6,vr16
+	VPERM(vr9,vr7,vr6,vr16)
 	lvx	vr5,r4,r10
-	vperm	vr10,vr6,vr5,vr16
+	VPERM(vr10,vr6,vr5,vr16)
 	lvx	vr4,r4,r11
-	vperm	vr11,vr5,vr4,vr16
+	VPERM(vr11,vr5,vr4,vr16)
 	lvx	vr3,r4,r12
-	vperm	vr12,vr4,vr3,vr16
+	VPERM(vr12,vr4,vr3,vr16)
 	lvx	vr2,r4,r14
-	vperm	vr13,vr3,vr2,vr16
+	VPERM(vr13,vr3,vr2,vr16)
 	lvx	vr1,r4,r15
-	vperm	vr14,vr2,vr1,vr16
+	VPERM(vr14,vr2,vr1,vr16)
 	lvx	vr0,r4,r16
-	vperm	vr15,vr1,vr0,vr16
+	VPERM(vr15,vr1,vr0,vr16)
 	addi	r4,r4,128
 	stvx	vr8,r0,r3
 	stvx	vr9,r3,r9
@@ -582,13 +591,13 @@ _GLOBAL(memcpy_power7)
 
 	bf	cr7*4+1,9f
 	lvx	vr3,r0,r4
-	vperm	vr8,vr0,vr3,vr16
+	VPERM(vr8,vr0,vr3,vr16)
 	lvx	vr2,r4,r9
-	vperm	vr9,vr3,vr2,vr16
+	VPERM(vr9,vr3,vr2,vr16)
 	lvx	vr1,r4,r10
-	vperm	vr10,vr2,vr1,vr16
+	VPERM(vr10,vr2,vr1,vr16)
 	lvx	vr0,r4,r11
-	vperm	vr11,vr1,vr0,vr16
+	VPERM(vr11,vr1,vr0,vr16)
 	addi	r4,r4,64
 	stvx	vr8,r0,r3
 	stvx	vr9,r3,r9
@@ -598,9 +607,9 @@ _GLOBAL(memcpy_power7)
 
 9:	bf	cr7*4+2,10f
 	lvx	vr1,r0,r4
-	vperm	vr8,vr0,vr1,vr16
+	VPERM(vr8,vr0,vr1,vr16)
 	lvx	vr0,r4,r9
-	vperm	vr9,vr1,vr0,vr16
+	VPERM(vr9,vr1,vr0,vr16)
 	addi	r4,r4,32
 	stvx	vr8,r0,r3
 	stvx	vr9,r3,r9
@@ -608,7 +617,7 @@ _GLOBAL(memcpy_power7)
 
 10:	bf	cr7*4+3,11f
 	lvx	vr1,r0,r4
-	vperm	vr8,vr0,vr1,vr16
+	VPERM(vr8,vr0,vr1,vr16)
 	addi	r4,r4,16
 	stvx	vr8,r0,r3
 	addi	r3,r3,16
-- 
1.8.1.2

^ permalink raw reply related	[flat|nested] 76+ messages in thread

* [PATCH 36/63] powerpc: Book 3S MMU little endian support
  2013-08-06 16:01 [PATCH 00/63] 64bit PowerPC little endian support Anton Blanchard
                   ` (34 preceding siblings ...)
  2013-08-06 16:01 ` [PATCH 35/63] powerpc: Fix endian issues in VMX copy loops Anton Blanchard
@ 2013-08-06 16:01 ` Anton Blanchard
  2013-08-07  4:20   ` Paul Mackerras
  2013-08-06 16:01 ` [PATCH 37/63] powerpc: Fix offset of FPRs in VSX registers in little endian builds Anton Blanchard
                   ` (26 subsequent siblings)
  62 siblings, 1 reply; 76+ messages in thread
From: Anton Blanchard @ 2013-08-06 16:01 UTC (permalink / raw)
  To: Benjamin Herrenschmidt, Paul Mackerras; +Cc: linuxppc-dev

Signed-off-by: Anton Blanchard <anton@samba.org>
---
 arch/powerpc/include/asm/mmu-hash64.h |  4 +--
 arch/powerpc/mm/hash_native_64.c      | 46 ++++++++++++++++++++---------------
 arch/powerpc/mm/hash_utils_64.c       | 38 ++++++++++++++---------------
 3 files changed, 46 insertions(+), 42 deletions(-)

diff --git a/arch/powerpc/include/asm/mmu-hash64.h b/arch/powerpc/include/asm/mmu-hash64.h
index c4cf011..807014d 100644
--- a/arch/powerpc/include/asm/mmu-hash64.h
+++ b/arch/powerpc/include/asm/mmu-hash64.h
@@ -135,8 +135,8 @@ extern char initial_stab[];
 #ifndef __ASSEMBLY__
 
 struct hash_pte {
-	unsigned long v;
-	unsigned long r;
+	__be64 v;
+	__be64 r;
 };
 
 extern struct hash_pte *htab_address;
diff --git a/arch/powerpc/mm/hash_native_64.c b/arch/powerpc/mm/hash_native_64.c
index c33d939..9bcd991 100644
--- a/arch/powerpc/mm/hash_native_64.c
+++ b/arch/powerpc/mm/hash_native_64.c
@@ -35,7 +35,11 @@
 #define DBG_LOW(fmt...)
 #endif
 
+#ifdef __BIG_ENDIAN__
 #define HPTE_LOCK_BIT 3
+#else
+#define HPTE_LOCK_BIT (63-3)
+#endif
 
 DEFINE_RAW_SPINLOCK(native_tlbie_lock);
 
@@ -172,7 +176,7 @@ static inline void tlbie(unsigned long vpn, int psize, int apsize,
 
 static inline void native_lock_hpte(struct hash_pte *hptep)
 {
-	unsigned long *word = &hptep->v;
+	unsigned long *word = (unsigned long *)&hptep->v;
 
 	while (1) {
 		if (!test_and_set_bit_lock(HPTE_LOCK_BIT, word))
@@ -184,7 +188,7 @@ static inline void native_lock_hpte(struct hash_pte *hptep)
 
 static inline void native_unlock_hpte(struct hash_pte *hptep)
 {
-	unsigned long *word = &hptep->v;
+	unsigned long *word = (unsigned long *)&hptep->v;
 
 	clear_bit_unlock(HPTE_LOCK_BIT, word);
 }
@@ -204,10 +208,10 @@ static long native_hpte_insert(unsigned long hpte_group, unsigned long vpn,
 	}
 
 	for (i = 0; i < HPTES_PER_GROUP; i++) {
-		if (! (hptep->v & HPTE_V_VALID)) {
+		if (! (be64_to_cpu(hptep->v) & HPTE_V_VALID)) {
 			/* retry with lock held */
 			native_lock_hpte(hptep);
-			if (! (hptep->v & HPTE_V_VALID))
+			if (! (be64_to_cpu(hptep->v) & HPTE_V_VALID))
 				break;
 			native_unlock_hpte(hptep);
 		}
@@ -226,14 +230,14 @@ static long native_hpte_insert(unsigned long hpte_group, unsigned long vpn,
 			i, hpte_v, hpte_r);
 	}
 
-	hptep->r = hpte_r;
+	hptep->r = cpu_to_be64(hpte_r);
 	/* Guarantee the second dword is visible before the valid bit */
 	eieio();
 	/*
 	 * Now set the first dword including the valid bit
 	 * NOTE: this also unlocks the hpte
 	 */
-	hptep->v = hpte_v;
+	hptep->v = cpu_to_be64(hpte_v);
 
 	__asm__ __volatile__ ("ptesync" : : : "memory");
 
@@ -254,12 +258,12 @@ static long native_hpte_remove(unsigned long hpte_group)
 
 	for (i = 0; i < HPTES_PER_GROUP; i++) {
 		hptep = htab_address + hpte_group + slot_offset;
-		hpte_v = hptep->v;
+		hpte_v = be64_to_cpu(hptep->v);
 
 		if ((hpte_v & HPTE_V_VALID) && !(hpte_v & HPTE_V_BOLTED)) {
 			/* retry with lock held */
 			native_lock_hpte(hptep);
-			hpte_v = hptep->v;
+			hpte_v = be64_to_cpu(hptep->v);
 			if ((hpte_v & HPTE_V_VALID)
 			    && !(hpte_v & HPTE_V_BOLTED))
 				break;
@@ -294,7 +298,7 @@ static long native_hpte_updatepp(unsigned long slot, unsigned long newpp,
 
 	native_lock_hpte(hptep);
 
-	hpte_v = hptep->v;
+	hpte_v = be64_to_cpu(hptep->v);
 	/*
 	 * We need to invalidate the TLB always because hpte_remove doesn't do
 	 * a tlb invalidate. If a hash bucket gets full, we "evict" a more/less
@@ -308,8 +312,8 @@ static long native_hpte_updatepp(unsigned long slot, unsigned long newpp,
 	} else {
 		DBG_LOW(" -> hit\n");
 		/* Update the HPTE */
-		hptep->r = (hptep->r & ~(HPTE_R_PP | HPTE_R_N)) |
-			(newpp & (HPTE_R_PP | HPTE_R_N | HPTE_R_C));
+		hptep->r = cpu_to_be64((be64_to_cpu(hptep->r) & ~(HPTE_R_PP | HPTE_R_N)) |
+			(newpp & (HPTE_R_PP | HPTE_R_N | HPTE_R_C)));
 	}
 	native_unlock_hpte(hptep);
 
@@ -334,7 +338,7 @@ static long native_hpte_find(unsigned long vpn, int psize, int ssize)
 	slot = (hash & htab_hash_mask) * HPTES_PER_GROUP;
 	for (i = 0; i < HPTES_PER_GROUP; i++) {
 		hptep = htab_address + slot;
-		hpte_v = hptep->v;
+		hpte_v = be64_to_cpu(hptep->v);
 
 		if (HPTE_V_COMPARE(hpte_v, want_v) && (hpte_v & HPTE_V_VALID))
 			/* HPTE matches */
@@ -369,8 +373,9 @@ static void native_hpte_updateboltedpp(unsigned long newpp, unsigned long ea,
 	hptep = htab_address + slot;
 
 	/* Update the HPTE */
-	hptep->r = (hptep->r & ~(HPTE_R_PP | HPTE_R_N)) |
-		(newpp & (HPTE_R_PP | HPTE_R_N));
+	hptep->r = cpu_to_be64((be64_to_cpu(hptep->r) &
+			~(HPTE_R_PP | HPTE_R_N)) |
+		(newpp & (HPTE_R_PP | HPTE_R_N)));
 	/*
 	 * Ensure it is out of the tlb too. Bolted entries base and
 	 * actual page size will be same.
@@ -392,7 +397,7 @@ static void native_hpte_invalidate(unsigned long slot, unsigned long vpn,
 
 	want_v = hpte_encode_avpn(vpn, bpsize, ssize);
 	native_lock_hpte(hptep);
-	hpte_v = hptep->v;
+	hpte_v = be64_to_cpu(hptep->v);
 
 	/*
 	 * We need to invalidate the TLB always because hpte_remove doesn't do
@@ -458,7 +463,7 @@ static void native_hugepage_invalidate(struct mm_struct *mm,
 		hptep = htab_address + slot;
 		want_v = hpte_encode_avpn(vpn, psize, ssize);
 		native_lock_hpte(hptep);
-		hpte_v = hptep->v;
+		hpte_v = be64_to_cpu(hptep->v);
 
 		/* Even if we miss, we need to invalidate the TLB */
 		if (!HPTE_V_COMPARE(hpte_v, want_v) || !(hpte_v & HPTE_V_VALID))
@@ -519,11 +524,12 @@ static void hpte_decode(struct hash_pte *hpte, unsigned long slot,
 			int *psize, int *apsize, int *ssize, unsigned long *vpn)
 {
 	unsigned long avpn, pteg, vpi;
-	unsigned long hpte_v = hpte->v;
+	unsigned long hpte_v = be64_to_cpu(hpte->v);
+	unsigned long hpte_r = be64_to_cpu(hpte->r);
 	unsigned long vsid, seg_off;
 	int size, a_size, shift;
 	/* Look at the 8 bit LP value */
-	unsigned int lp = (hpte->r >> LP_SHIFT) & ((1 << LP_BITS) - 1);
+	unsigned int lp = (hpte_r >> LP_SHIFT) & ((1 << LP_BITS) - 1);
 
 	if (!(hpte_v & HPTE_V_LARGE)) {
 		size   = MMU_PAGE_4K;
@@ -612,7 +618,7 @@ static void native_hpte_clear(void)
 		 * running,  right?  and for crash dump, we probably
 		 * don't want to wait for a maybe bad cpu.
 		 */
-		hpte_v = hptep->v;
+		hpte_v = be64_to_cpu(hptep->v);
 
 		/*
 		 * Call __tlbie() here rather than tlbie() since we
@@ -664,7 +670,7 @@ static void native_flush_hash_range(unsigned long number, int local)
 			hptep = htab_address + slot;
 			want_v = hpte_encode_avpn(vpn, psize, ssize);
 			native_lock_hpte(hptep);
-			hpte_v = hptep->v;
+			hpte_v = be64_to_cpu(hptep->v);
 			if (!HPTE_V_COMPARE(hpte_v, want_v) ||
 			    !(hpte_v & HPTE_V_VALID))
 				native_unlock_hpte(hptep);
diff --git a/arch/powerpc/mm/hash_utils_64.c b/arch/powerpc/mm/hash_utils_64.c
index bde8b55..ac47946 100644
--- a/arch/powerpc/mm/hash_utils_64.c
+++ b/arch/powerpc/mm/hash_utils_64.c
@@ -251,19 +251,18 @@ static int __init htab_dt_scan_seg_sizes(unsigned long node,
 					 void *data)
 {
 	char *type = of_get_flat_dt_prop(node, "device_type", NULL);
-	u32 *prop;
+	__be32 *prop;
 	unsigned long size = 0;
 
 	/* We are scanning "cpu" nodes only */
 	if (type == NULL || strcmp(type, "cpu") != 0)
 		return 0;
 
-	prop = (u32 *)of_get_flat_dt_prop(node, "ibm,processor-segment-sizes",
-					  &size);
+	prop = of_get_flat_dt_prop(node, "ibm,processor-segment-sizes", &size);
 	if (prop == NULL)
 		return 0;
 	for (; size >= 4; size -= 4, ++prop) {
-		if (prop[0] == 40) {
+		if (be32_to_cpu(prop[0]) == 40) {
 			DBG("1T segment support detected\n");
 			cur_cpu_spec->mmu_features |= MMU_FTR_1T_SEGMENT;
 			return 1;
@@ -307,23 +306,22 @@ static int __init htab_dt_scan_page_sizes(unsigned long node,
 					  void *data)
 {
 	char *type = of_get_flat_dt_prop(node, "device_type", NULL);
-	u32 *prop;
+	__be32 *prop;
 	unsigned long size = 0;
 
 	/* We are scanning "cpu" nodes only */
 	if (type == NULL || strcmp(type, "cpu") != 0)
 		return 0;
 
-	prop = (u32 *)of_get_flat_dt_prop(node,
-					  "ibm,segment-page-sizes", &size);
+	prop = of_get_flat_dt_prop(node, "ibm,segment-page-sizes", &size);
 	if (prop != NULL) {
 		pr_info("Page sizes from device-tree:\n");
 		size /= 4;
 		cur_cpu_spec->mmu_features &= ~(MMU_FTR_16M_PAGE);
 		while(size > 0) {
-			unsigned int base_shift = prop[0];
-			unsigned int slbenc = prop[1];
-			unsigned int lpnum = prop[2];
+			unsigned int base_shift = be32_to_cpu(prop[0]);
+			unsigned int slbenc = be32_to_cpu(prop[1]);
+			unsigned int lpnum = be32_to_cpu(prop[2]);
 			struct mmu_psize_def *def;
 			int idx, base_idx;
 
@@ -356,8 +354,8 @@ static int __init htab_dt_scan_page_sizes(unsigned long node,
 				def->tlbiel = 0;
 
 			while (size > 0 && lpnum) {
-				unsigned int shift = prop[0];
-				int penc  = prop[1];
+				unsigned int shift = be32_to_cpu(prop[0]);
+				int penc  = be32_to_cpu(prop[1]);
 
 				prop += 2; size -= 2;
 				lpnum--;
@@ -390,8 +388,8 @@ static int __init htab_dt_scan_hugepage_blocks(unsigned long node,
 					const char *uname, int depth,
 					void *data) {
 	char *type = of_get_flat_dt_prop(node, "device_type", NULL);
-	unsigned long *addr_prop;
-	u32 *page_count_prop;
+	__be64 *addr_prop;
+	__be32 *page_count_prop;
 	unsigned int expected_pages;
 	long unsigned int phys_addr;
 	long unsigned int block_size;
@@ -405,12 +403,12 @@ static int __init htab_dt_scan_hugepage_blocks(unsigned long node,
 	page_count_prop = of_get_flat_dt_prop(node, "ibm,expected#pages", NULL);
 	if (page_count_prop == NULL)
 		return 0;
-	expected_pages = (1 << page_count_prop[0]);
+	expected_pages = (1 << be32_to_cpu(page_count_prop[0]));
 	addr_prop = of_get_flat_dt_prop(node, "reg", NULL);
 	if (addr_prop == NULL)
 		return 0;
-	phys_addr = addr_prop[0];
-	block_size = addr_prop[1];
+	phys_addr = __be64_to_cpu(addr_prop[0]);
+	block_size = __be64_to_cpu(addr_prop[1]);
 	if (block_size != (16 * GB))
 		return 0;
 	printk(KERN_INFO "Huge page(16GB) memory: "
@@ -534,16 +532,16 @@ static int __init htab_dt_scan_pftsize(unsigned long node,
 				       void *data)
 {
 	char *type = of_get_flat_dt_prop(node, "device_type", NULL);
-	u32 *prop;
+	__be32 *prop;
 
 	/* We are scanning "cpu" nodes only */
 	if (type == NULL || strcmp(type, "cpu") != 0)
 		return 0;
 
-	prop = (u32 *)of_get_flat_dt_prop(node, "ibm,pft-size", NULL);
+	prop = of_get_flat_dt_prop(node, "ibm,pft-size", NULL);
 	if (prop != NULL) {
 		/* pft_size[0] is the NUMA CEC cookie */
-		ppc64_pft_size = prop[1];
+		ppc64_pft_size = be32_to_cpu(prop[1]);
 		return 1;
 	}
 	return 0;
-- 
1.8.1.2

^ permalink raw reply related	[flat|nested] 76+ messages in thread

* [PATCH 37/63] powerpc: Fix offset of FPRs in VSX registers in little endian builds
  2013-08-06 16:01 [PATCH 00/63] 64bit PowerPC little endian support Anton Blanchard
                   ` (35 preceding siblings ...)
  2013-08-06 16:01 ` [PATCH 36/63] powerpc: Book 3S MMU little endian support Anton Blanchard
@ 2013-08-06 16:01 ` Anton Blanchard
  2013-08-06 16:01 ` [PATCH 38/63] powerpc: PTRACE_PEEKUSR/PTRACE_POKEUSER of FPR " Anton Blanchard
                   ` (25 subsequent siblings)
  62 siblings, 0 replies; 76+ messages in thread
From: Anton Blanchard @ 2013-08-06 16:01 UTC (permalink / raw)
  To: Benjamin Herrenschmidt, Paul Mackerras; +Cc: linuxppc-dev

The FPRs overlap the high doublewords of the first 32 VSX registers.
Fix TS_FPROFFSET and TS_VSRLOWOFFSET so we access the correct fields
in little endian mode.

If VSX is disabled the FPRs are only one doubleword in length, so
TS_FPROFFSET stays 0 and the little endian adjustment only applies to
the VSX case.
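
For reference, a minimal sketch of what the new offsets select
(illustrative helper only, assuming CONFIG_VSX; field names follow the
existing thread_struct layout):

/*
 * Each thread_struct.fpr[i] is a full 16-byte VSR stored as two
 * doublewords.  The FPR aliases the high doubleword of the VSR, which
 * lands at array index 0 when stored big endian and at index 1 when
 * stored little endian, hence the new TS_FPROFFSET values.
 */
static inline double example_read_fpr(struct thread_struct *t, int i)
{
	return t->fpr[i][TS_FPROFFSET];	/* equivalent to TS_FPR(i) */
}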

Signed-off-by: Anton Blanchard <anton@samba.org>
---
 arch/powerpc/include/asm/processor.h | 12 ++++++++++--
 1 file changed, 10 insertions(+), 2 deletions(-)

diff --git a/arch/powerpc/include/asm/processor.h b/arch/powerpc/include/asm/processor.h
index 47a35b0..d705431 100644
--- a/arch/powerpc/include/asm/processor.h
+++ b/arch/powerpc/include/asm/processor.h
@@ -14,8 +14,18 @@
 
 #ifdef CONFIG_VSX
 #define TS_FPRWIDTH 2
+
+#ifdef __BIG_ENDIAN__
+#define TS_FPROFFSET 0
+#define TS_VSRLOWOFFSET 1
+#else
+#define TS_FPROFFSET 1
+#define TS_VSRLOWOFFSET 0
+#endif
+
 #else
 #define TS_FPRWIDTH 1
+#define TS_FPROFFSET 0
 #endif
 
 #ifdef CONFIG_PPC64
@@ -142,8 +152,6 @@ typedef struct {
 	unsigned long seg;
 } mm_segment_t;
 
-#define TS_FPROFFSET 0
-#define TS_VSRLOWOFFSET 1
 #define TS_FPR(i) fpr[i][TS_FPROFFSET]
 #define TS_TRANS_FPR(i) transact_fpr[i][TS_FPROFFSET]
 
-- 
1.8.1.2

^ permalink raw reply related	[flat|nested] 76+ messages in thread

* [PATCH 38/63] powerpc: PTRACE_PEEKUSR/PTRACE_POKEUSER of FPR registers in little endian builds
  2013-08-06 16:01 [PATCH 00/63] 64bit PowerPC little endian support Anton Blanchard
                   ` (36 preceding siblings ...)
  2013-08-06 16:01 ` [PATCH 37/63] powerpc: Fix offset of FPRs in VSX registers in little endian builds Anton Blanchard
@ 2013-08-06 16:01 ` Anton Blanchard
  2013-08-06 16:01 ` [PATCH 39/63] powerpc: Little endian builds double word swap VSX state during context save/restore Anton Blanchard
                   ` (24 subsequent siblings)
  62 siblings, 0 replies; 76+ messages in thread
From: Anton Blanchard @ 2013-08-06 16:01 UTC (permalink / raw)
  To: Benjamin Herrenschmidt, Paul Mackerras; +Cc: linuxppc-dev

FPRs overlap the high 64bits of the first 32 VSX registers. The
ptrace FP read/write code assumes big endian ordering and grabs the
doubleword at the lowest address, which only holds the FPR on big
endian builds.

Fix this by using the TS_FPR macro which does the right thing.
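
A sketch of the difference (CONFIG_VSX assumed; the helper is
hypothetical and only illustrates the indexing):

/*
 * fpr[i][0..1] are the two doublewords of VSR i.  The old open coded
 * index (fpidx * TS_FPRWIDTH) always lands on doubleword 0, which is
 * the FPR only on big endian.  TS_FPR() indexes via TS_FPROFFSET, so
 * it selects doubleword 1 on little endian builds.
 */
static unsigned long peek_fpr_sketch(struct task_struct *child, int fpidx)
{
	unsigned long val;

	memcpy(&val, &child->thread.TS_FPR(fpidx), sizeof(val));
	return val;
}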

Signed-off-by: Anton Blanchard <anton@samba.org>
---
 arch/powerpc/kernel/ptrace.c | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/arch/powerpc/kernel/ptrace.c b/arch/powerpc/kernel/ptrace.c
index 9a0d24c..8d5d4e9 100644
--- a/arch/powerpc/kernel/ptrace.c
+++ b/arch/powerpc/kernel/ptrace.c
@@ -1554,8 +1554,8 @@ long arch_ptrace(struct task_struct *child, long request,
 
 			flush_fp_to_thread(child);
 			if (fpidx < (PT_FPSCR - PT_FPR0))
-				tmp = ((unsigned long *)child->thread.fpr)
-					[fpidx * TS_FPRWIDTH];
+				memcpy(&tmp, &child->thread.TS_FPR(fpidx),
+				       sizeof(long));
 			else
 				tmp = child->thread.fpscr.val;
 		}
@@ -1587,8 +1587,8 @@ long arch_ptrace(struct task_struct *child, long request,
 
 			flush_fp_to_thread(child);
 			if (fpidx < (PT_FPSCR - PT_FPR0))
-				((unsigned long *)child->thread.fpr)
-					[fpidx * TS_FPRWIDTH] = data;
+				memcpy(&child->thread.TS_FPR(fpidx), &data,
+				       sizeof(long));
 			else
 				child->thread.fpscr.val = data;
 			ret = 0;
-- 
1.8.1.2

^ permalink raw reply related	[flat|nested] 76+ messages in thread

* [PATCH 39/63] powerpc: Little endian builds double word swap VSX state during context save/restore
  2013-08-06 16:01 [PATCH 00/63] 64bit PowerPC little endian support Anton Blanchard
                   ` (37 preceding siblings ...)
  2013-08-06 16:01 ` [PATCH 38/63] powerpc: PTRACE_PEEKUSR/PTRACE_POKEUSER of FPR " Anton Blanchard
@ 2013-08-06 16:01 ` Anton Blanchard
  2013-08-06 16:01 ` [PATCH 40/63] powerpc: Support endian agnostic MMIO Anton Blanchard
                   ` (23 subsequent siblings)
  62 siblings, 0 replies; 76+ messages in thread
From: Anton Blanchard @ 2013-08-06 16:01 UTC (permalink / raw)
  To: Benjamin Herrenschmidt, Paul Mackerras; +Cc: linuxppc-dev

The elements within VSX loads and stores are big endian ordered
regardless of endianness. Our VSX context save/restore code uses
lxvd2x and stxvd2x, which are 2x doubleword operations. This means
the two doublewords end up swapped on little endian and we have to
perform another swap to undo it.

We need to do this on save and restore.
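
A toy C model of what the extra swap undoes (element order only; byte
order within each doubleword, and the second swap that restores the
live register on the save path, are ignored; all names are invented
for illustration):

/*
 * lxvd2x/stxvd2x always pair the doubleword at the lower address with
 * the most significant half of the VSR, regardless of MSR[LE].  A
 * little endian thread_struct wants the opposite pairing, so the save
 * path swaps the halves (xxswapd) before storing and the restore path
 * swaps after loading.
 */
struct vsr_model { u64 hi, lo; };	/* hi = most significant doubleword */

static void stxvd2x_model(const struct vsr_model *v, u64 *mem)
{
	mem[0] = v->hi;			/* big endian element order, always */
	mem[1] = v->lo;
}

static void save_vsr_le_model(const struct vsr_model *v, u64 *mem)
{
	struct vsr_model t = { .hi = v->lo, .lo = v->hi };	/* xxswapd */

	stxvd2x_model(&t, mem);		/* mem[0] now holds the low doubleword */
}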

Signed-off-by: Anton Blanchard <anton@samba.org>
---
 arch/powerpc/include/asm/ppc-opcode.h |  3 +++
 arch/powerpc/include/asm/ppc_asm.h    | 21 +++++++++++++++++----
 2 files changed, 20 insertions(+), 4 deletions(-)

diff --git a/arch/powerpc/include/asm/ppc-opcode.h b/arch/powerpc/include/asm/ppc-opcode.h
index eccfc16..247fa1d 100644
--- a/arch/powerpc/include/asm/ppc-opcode.h
+++ b/arch/powerpc/include/asm/ppc-opcode.h
@@ -134,6 +134,7 @@
 #define PPC_INST_TLBIVAX		0x7c000624
 #define PPC_INST_TLBSRX_DOT		0x7c0006a5
 #define PPC_INST_XXLOR			0xf0000510
+#define PPC_INST_XXSWAPD		0xf0000250
 #define PPC_INST_XVCPSGNDP		0xf0000780
 #define PPC_INST_TRECHKPT		0x7c0007dd
 #define PPC_INST_TRECLAIM		0x7c00075d
@@ -297,6 +298,8 @@
 					       VSX_XX1((s), a, b))
 #define XXLOR(t, a, b)		stringify_in_c(.long PPC_INST_XXLOR | \
 					       VSX_XX3((t), a, b))
+#define XXSWAPD(t, a)		stringify_in_c(.long PPC_INST_XXSWAPD | \
+					       VSX_XX3((t), a, a))
 #define XVCPSGNDP(t, a, b)	stringify_in_c(.long (PPC_INST_XVCPSGNDP | \
 					       VSX_XX3((t), (a), (b))))
 
diff --git a/arch/powerpc/include/asm/ppc_asm.h b/arch/powerpc/include/asm/ppc_asm.h
index 4ebb4f8..bc606e4 100644
--- a/arch/powerpc/include/asm/ppc_asm.h
+++ b/arch/powerpc/include/asm/ppc_asm.h
@@ -180,9 +180,20 @@ END_FW_FTR_SECTION_IFSET(FW_FEATURE_SPLPAR)
 #define REST_32VRS_TRANSACT(n,b,base)	REST_16VRS_TRANSACT(n,b,base);	\
 					REST_16VRS_TRANSACT(n+16,b,base)
 
+#ifdef __BIG_ENDIAN__
+#define STXVD2X_ROT(n,b,base)		STXVD2X(n,b,base)
+#define LXVD2X_ROT(n,b,base)		LXVD2X(n,b,base)
+#else
+#define STXVD2X_ROT(n,b,base)		XXSWAPD(n,n);		\
+					STXVD2X(n,b,base);	\
+					XXSWAPD(n,n)
+
+#define LXVD2X_ROT(n,b,base)		LXVD2X(n,b,base);	\
+					XXSWAPD(n,n)
+#endif
 
 #define SAVE_VSR_TRANSACT(n,b,base)	li b,THREAD_TRANSACT_VSR0+(16*(n)); \
-					STXVD2X(n,R##base,R##b)
+					STXVD2X_ROT(n,R##base,R##b)
 #define SAVE_2VSRS_TRANSACT(n,b,base)	SAVE_VSR_TRANSACT(n,b,base);	\
 	                                SAVE_VSR_TRANSACT(n+1,b,base)
 #define SAVE_4VSRS_TRANSACT(n,b,base)	SAVE_2VSRS_TRANSACT(n,b,base);	\
@@ -195,7 +206,7 @@ END_FW_FTR_SECTION_IFSET(FW_FEATURE_SPLPAR)
 	                                SAVE_16VSRS_TRANSACT(n+16,b,base)
 
 #define REST_VSR_TRANSACT(n,b,base)	li b,THREAD_TRANSACT_VSR0+(16*(n)); \
-					LXVD2X(n,R##base,R##b)
+					LXVD2X_ROT(n,R##base,R##b)
 #define REST_2VSRS_TRANSACT(n,b,base)	REST_VSR_TRANSACT(n,b,base);    \
 	                                REST_VSR_TRANSACT(n+1,b,base)
 #define REST_4VSRS_TRANSACT(n,b,base)	REST_2VSRS_TRANSACT(n,b,base);	\
@@ -208,13 +219,15 @@ END_FW_FTR_SECTION_IFSET(FW_FEATURE_SPLPAR)
 	                                REST_16VSRS_TRANSACT(n+16,b,base)
 
 /* Save the lower 32 VSRs in the thread VSR region */
-#define SAVE_VSR(n,b,base)	li b,THREAD_VSR0+(16*(n));  STXVD2X(n,R##base,R##b)
+#define SAVE_VSR(n,b,base)	li b,THREAD_VSR0+(16*(n)); \
+				STXVD2X_ROT(n,R##base,R##b)
 #define SAVE_2VSRS(n,b,base)	SAVE_VSR(n,b,base); SAVE_VSR(n+1,b,base)
 #define SAVE_4VSRS(n,b,base)	SAVE_2VSRS(n,b,base); SAVE_2VSRS(n+2,b,base)
 #define SAVE_8VSRS(n,b,base)	SAVE_4VSRS(n,b,base); SAVE_4VSRS(n+4,b,base)
 #define SAVE_16VSRS(n,b,base)	SAVE_8VSRS(n,b,base); SAVE_8VSRS(n+8,b,base)
 #define SAVE_32VSRS(n,b,base)	SAVE_16VSRS(n,b,base); SAVE_16VSRS(n+16,b,base)
-#define REST_VSR(n,b,base)	li b,THREAD_VSR0+(16*(n)); LXVD2X(n,R##base,R##b)
+#define REST_VSR(n,b,base)	li b,THREAD_VSR0+(16*(n)); \
+				LXVD2X_ROT(n,R##base,R##b)
 #define REST_2VSRS(n,b,base)	REST_VSR(n,b,base); REST_VSR(n+1,b,base)
 #define REST_4VSRS(n,b,base)	REST_2VSRS(n,b,base); REST_2VSRS(n+2,b,base)
 #define REST_8VSRS(n,b,base)	REST_4VSRS(n,b,base); REST_4VSRS(n+4,b,base)
-- 
1.8.1.2

^ permalink raw reply related	[flat|nested] 76+ messages in thread

* [PATCH 40/63] powerpc: Support endian agnostic MMIO
  2013-08-06 16:01 [PATCH 00/63] 64bit PowerPC little endian support Anton Blanchard
                   ` (38 preceding siblings ...)
  2013-08-06 16:01 ` [PATCH 39/63] powerpc: Little endian builds double word swap VSX state during context save/restore Anton Blanchard
@ 2013-08-06 16:01 ` Anton Blanchard
  2013-08-06 16:01 ` [PATCH 41/63] powerpc: Add little endian support for word-at-a-time functions Anton Blanchard
                   ` (22 subsequent siblings)
  62 siblings, 0 replies; 76+ messages in thread
From: Anton Blanchard @ 2013-08-06 16:01 UTC (permalink / raw)
  To: Benjamin Herrenschmidt, Paul Mackerras; +Cc: linuxppc-dev

From: Ian Munsie <imunsie@au1.ibm.com>

This patch maps the MMIO functions for 32bit PowerPC to their
appropriate instructions depending on CPU endianness.

The macros used to create the corresponding inline functions are also
renamed by this patch. Previously they had BE or LE in their names,
which was misleading: the macros themselves are not tied to an
endianness, they simply generate different instruction forms. Their
new names reflect the form they create (D-form and X-form).

Little endian 64bit PowerPC is not supported, so the lack of mappings
(and corresponding breakage) for that case is intentional, to bring it
to the attention of anyone doing a 64bit little endian port. 64bit big endian
is unaffected.

[ Added 64 bit versions - Anton ]
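
A small usage sketch of what this keeps endian agnostic for drivers
(the register offset and helper below are hypothetical):

/*
 * The device register block is little endian, so in_le32()/out_le32()
 * are used.  On a big endian kernel these are generated from the
 * byte-reversing X-form macros (lwbrx/stwbrx); on a little endian
 * kernel they become plain D-form accesses (lwz/stw).  The driver
 * source is the same either way.
 */
#define DEV_CTRL_REG	0x10	/* assumed offset, illustration only */

static void dev_enable(void __iomem *base)
{
	u32 ctrl = in_le32(base + DEV_CTRL_REG);

	out_le32(base + DEV_CTRL_REG, ctrl | 0x1);
}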

Signed-off-by: Ian Munsie <imunsie@au1.ibm.com>
Signed-off-by: Anton Blanchard <anton@samba.org>
---
 arch/powerpc/include/asm/io.h | 67 +++++++++++++++++++++++++++++++------------
 1 file changed, 49 insertions(+), 18 deletions(-)

diff --git a/arch/powerpc/include/asm/io.h b/arch/powerpc/include/asm/io.h
index dd15e5e..3aca565 100644
--- a/arch/powerpc/include/asm/io.h
+++ b/arch/powerpc/include/asm/io.h
@@ -103,7 +103,7 @@ extern resource_size_t isa_mem_base;
 
 /* gcc 4.0 and older doesn't have 'Z' constraint */
 #if __GNUC__ < 4 || (__GNUC__ == 4 && __GNUC_MINOR__ == 0)
-#define DEF_MMIO_IN_LE(name, size, insn)				\
+#define DEF_MMIO_IN_X(name, size, insn)				\
 static inline u##size name(const volatile u##size __iomem *addr)	\
 {									\
 	u##size ret;							\
@@ -112,7 +112,7 @@ static inline u##size name(const volatile u##size __iomem *addr)	\
 	return ret;							\
 }
 
-#define DEF_MMIO_OUT_LE(name, size, insn) 				\
+#define DEF_MMIO_OUT_X(name, size, insn)				\
 static inline void name(volatile u##size __iomem *addr, u##size val)	\
 {									\
 	__asm__ __volatile__("sync;"#insn" %1,0,%2"			\
@@ -120,7 +120,7 @@ static inline void name(volatile u##size __iomem *addr, u##size val)	\
 	IO_SET_SYNC_FLAG();						\
 }
 #else /* newer gcc */
-#define DEF_MMIO_IN_LE(name, size, insn)				\
+#define DEF_MMIO_IN_X(name, size, insn)				\
 static inline u##size name(const volatile u##size __iomem *addr)	\
 {									\
 	u##size ret;							\
@@ -129,7 +129,7 @@ static inline u##size name(const volatile u##size __iomem *addr)	\
 	return ret;							\
 }
 
-#define DEF_MMIO_OUT_LE(name, size, insn) 				\
+#define DEF_MMIO_OUT_X(name, size, insn)				\
 static inline void name(volatile u##size __iomem *addr, u##size val)	\
 {									\
 	__asm__ __volatile__("sync;"#insn" %1,%y0"			\
@@ -138,7 +138,7 @@ static inline void name(volatile u##size __iomem *addr, u##size val)	\
 }
 #endif
 
-#define DEF_MMIO_IN_BE(name, size, insn)				\
+#define DEF_MMIO_IN_D(name, size, insn)				\
 static inline u##size name(const volatile u##size __iomem *addr)	\
 {									\
 	u##size ret;							\
@@ -147,7 +147,7 @@ static inline u##size name(const volatile u##size __iomem *addr)	\
 	return ret;							\
 }
 
-#define DEF_MMIO_OUT_BE(name, size, insn)				\
+#define DEF_MMIO_OUT_D(name, size, insn)				\
 static inline void name(volatile u##size __iomem *addr, u##size val)	\
 {									\
 	__asm__ __volatile__("sync;"#insn"%U0%X0 %1,%0"			\
@@ -155,22 +155,37 @@ static inline void name(volatile u##size __iomem *addr, u##size val)	\
 	IO_SET_SYNC_FLAG();						\
 }
 
+DEF_MMIO_IN_D(in_8,     8, lbz);
+DEF_MMIO_OUT_D(out_8,   8, stb);
 
-DEF_MMIO_IN_BE(in_8,     8, lbz);
-DEF_MMIO_IN_BE(in_be16, 16, lhz);
-DEF_MMIO_IN_BE(in_be32, 32, lwz);
-DEF_MMIO_IN_LE(in_le16, 16, lhbrx);
-DEF_MMIO_IN_LE(in_le32, 32, lwbrx);
+#ifdef __BIG_ENDIAN__
+DEF_MMIO_IN_D(in_be16, 16, lhz);
+DEF_MMIO_IN_D(in_be32, 32, lwz);
+DEF_MMIO_IN_X(in_le16, 16, lhbrx);
+DEF_MMIO_IN_X(in_le32, 32, lwbrx);
 
-DEF_MMIO_OUT_BE(out_8,     8, stb);
-DEF_MMIO_OUT_BE(out_be16, 16, sth);
-DEF_MMIO_OUT_BE(out_be32, 32, stw);
-DEF_MMIO_OUT_LE(out_le16, 16, sthbrx);
-DEF_MMIO_OUT_LE(out_le32, 32, stwbrx);
+DEF_MMIO_OUT_D(out_be16, 16, sth);
+DEF_MMIO_OUT_D(out_be32, 32, stw);
+DEF_MMIO_OUT_X(out_le16, 16, sthbrx);
+DEF_MMIO_OUT_X(out_le32, 32, stwbrx);
+#else
+DEF_MMIO_IN_X(in_be16, 16, lhbrx);
+DEF_MMIO_IN_X(in_be32, 32, lwbrx);
+DEF_MMIO_IN_D(in_le16, 16, lhz);
+DEF_MMIO_IN_D(in_le32, 32, lwz);
+
+DEF_MMIO_OUT_X(out_be16, 16, sthbrx);
+DEF_MMIO_OUT_X(out_be32, 32, stwbrx);
+DEF_MMIO_OUT_D(out_le16, 16, sth);
+DEF_MMIO_OUT_D(out_le32, 32, stw);
+
+#endif /* __BIG_ENDIAN */
 
 #ifdef __powerpc64__
-DEF_MMIO_OUT_BE(out_be64, 64, std);
-DEF_MMIO_IN_BE(in_be64, 64, ld);
+
+#ifdef __BIG_ENDIAN__
+DEF_MMIO_OUT_D(out_be64, 64, std);
+DEF_MMIO_IN_D(in_be64, 64, ld);
 
 /* There is no asm instructions for 64 bits reverse loads and stores */
 static inline u64 in_le64(const volatile u64 __iomem *addr)
@@ -182,6 +197,22 @@ static inline void out_le64(volatile u64 __iomem *addr, u64 val)
 {
 	out_be64(addr, swab64(val));
 }
+#else
+DEF_MMIO_OUT_D(out_le64, 64, std);
+DEF_MMIO_IN_D(in_le64, 64, ld);
+
+/* There is no asm instructions for 64 bits reverse loads and stores */
+static inline u64 in_be64(const volatile u64 __iomem *addr)
+{
+	return swab64(in_le64(addr));
+}
+
+static inline void out_be64(volatile u64 __iomem *addr, u64 val)
+{
+	out_le64(addr, swab64(val));
+}
+
+#endif
 #endif /* __powerpc64__ */
 
 /*
-- 
1.8.1.2

^ permalink raw reply related	[flat|nested] 76+ messages in thread

* [PATCH 41/63] powerpc: Add little endian support for word-at-a-time functions
  2013-08-06 16:01 [PATCH 00/63] 64bit PowerPC little endian support Anton Blanchard
                   ` (39 preceding siblings ...)
  2013-08-06 16:01 ` [PATCH 40/63] powerpc: Support endian agnostic MMIO Anton Blanchard
@ 2013-08-06 16:01 ` Anton Blanchard
  2013-08-06 16:01 ` [PATCH 42/63] powerpc: Set MSR_LE bit on little endian builds Anton Blanchard
                   ` (21 subsequent siblings)
  62 siblings, 0 replies; 76+ messages in thread
From: Anton Blanchard @ 2013-08-06 16:01 UTC (permalink / raw)
  To: Benjamin Herrenschmidt, Paul Mackerras; +Cc: linuxppc-dev

The powerpc word-at-a-time functions are big endian specific.
Bring in the x86 version in order to support little endian builds.
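
For anyone unfamiliar with the interface, a minimal usage sketch of
the caller pattern (hypothetical helper, not part of this patch; it
assumes the buffer is padded so whole-word loads past the NUL are
safe):

static unsigned long strlen_sketch(const unsigned long *p)
{
	const struct word_at_a_time constants = WORD_AT_A_TIME_CONSTANTS;
	unsigned long len = 0, bits;

	for (;;) {
		unsigned long w = *p++;

		if (has_zero(w, &bits, &constants)) {
			bits = prep_zero_mask(w, bits, &constants);
			bits = create_zero_mask(bits);
			return len + find_zero(bits);
		}
		len += sizeof(unsigned long);
	}
}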

Signed-off-by: Anton Blanchard <anton@samba.org>
---
 arch/powerpc/include/asm/word-at-a-time.h | 71 +++++++++++++++++++++++++++++++
 1 file changed, 71 insertions(+)

diff --git a/arch/powerpc/include/asm/word-at-a-time.h b/arch/powerpc/include/asm/word-at-a-time.h
index d0b6d4a..213a5f2 100644
--- a/arch/powerpc/include/asm/word-at-a-time.h
+++ b/arch/powerpc/include/asm/word-at-a-time.h
@@ -8,6 +8,8 @@
 #include <linux/kernel.h>
 #include <asm/asm-compat.h>
 
+#ifdef __BIG_ENDIAN__
+
 struct word_at_a_time {
 	const unsigned long high_bits, low_bits;
 };
@@ -38,4 +40,73 @@ static inline bool has_zero(unsigned long val, unsigned long *data, const struct
 	return (val + c->high_bits) & ~rhs;
 }
 
+#else
+
+/*
+ * This is largely generic for little-endian machines, but the
+ * optimal byte mask counting is probably going to be something
+ * that is architecture-specific. If you have a reliably fast
+ * bit count instruction, that might be better than the multiply
+ * and shift, for example.
+ */
+struct word_at_a_time {
+	const unsigned long one_bits, high_bits;
+};
+
+#define WORD_AT_A_TIME_CONSTANTS { REPEAT_BYTE(0x01), REPEAT_BYTE(0x80) }
+
+#ifdef CONFIG_64BIT
+
+/*
+ * Jan Achrenius on G+: microoptimized version of
+ * the simpler "(mask & ONEBYTES) * ONEBYTES >> 56"
+ * that works for the bytemasks without having to
+ * mask them first.
+ */
+static inline long count_masked_bytes(unsigned long mask)
+{
+	return mask*0x0001020304050608ul >> 56;
+}
+
+#else	/* 32-bit case */
+
+/* Carl Chatfield / Jan Achrenius G+ version for 32-bit */
+static inline long count_masked_bytes(long mask)
+{
+	/* (000000 0000ff 00ffff ffffff) -> ( 1 1 2 3 ) */
+	long a = (0x0ff0001+mask) >> 23;
+	/* Fix the 1 for 00 case */
+	return a & mask;
+}
+
+#endif
+
+/* Return nonzero if it has a zero */
+static inline unsigned long has_zero(unsigned long a, unsigned long *bits, const struct word_at_a_time *c)
+{
+	unsigned long mask = ((a - c->one_bits) & ~a) & c->high_bits;
+	*bits = mask;
+	return mask;
+}
+
+static inline unsigned long prep_zero_mask(unsigned long a, unsigned long bits, const struct word_at_a_time *c)
+{
+	return bits;
+}
+
+static inline unsigned long create_zero_mask(unsigned long bits)
+{
+	bits = (bits - 1) & ~bits;
+	return bits >> 7;
+}
+
+/* The mask we created is directly usable as a bytemask */
+#define zero_bytemask(mask) (mask)
+
+static inline unsigned long find_zero(unsigned long mask)
+{
+	return count_masked_bytes(mask);
+}
+#endif
+
 #endif /* _ASM_WORD_AT_A_TIME_H */
-- 
1.8.1.2

^ permalink raw reply related	[flat|nested] 76+ messages in thread

* [PATCH 42/63] powerpc: Set MSR_LE bit on little endian builds
  2013-08-06 16:01 [PATCH 00/63] 64bit PowerPC little endian support Anton Blanchard
                   ` (40 preceding siblings ...)
  2013-08-06 16:01 ` [PATCH 41/63] powerpc: Add little endian support for word-at-a-time functions Anton Blanchard
@ 2013-08-06 16:01 ` Anton Blanchard
  2013-08-06 16:02 ` [PATCH 43/63] powerpc: Reset MSR_LE on signal entry Anton Blanchard
                   ` (20 subsequent siblings)
  62 siblings, 0 replies; 76+ messages in thread
From: Anton Blanchard @ 2013-08-06 16:01 UTC (permalink / raw)
  To: Benjamin Herrenschmidt, Paul Mackerras; +Cc: linuxppc-dev

We need to set MSR_LE in both the kernel and userspace MSR values for
little endian builds.

Signed-off-by: Anton Blanchard <anton@samba.org>
---
 arch/powerpc/include/asm/reg.h | 7 ++++++-
 1 file changed, 6 insertions(+), 1 deletion(-)

diff --git a/arch/powerpc/include/asm/reg.h b/arch/powerpc/include/asm/reg.h
index a312e0c..4e57083 100644
--- a/arch/powerpc/include/asm/reg.h
+++ b/arch/powerpc/include/asm/reg.h
@@ -115,7 +115,12 @@
 #define MSR_64BIT	MSR_SF
 
 /* Server variant */
-#define MSR_		(MSR_ME | MSR_RI | MSR_IR | MSR_DR | MSR_ISF |MSR_HV)
+#define __MSR		(MSR_ME | MSR_RI | MSR_IR | MSR_DR | MSR_ISF |MSR_HV)
+#ifdef __BIG_ENDIAN__
+#define MSR_		__MSR
+#else
+#define MSR_		(__MSR | MSR_LE)
+#endif
 #define MSR_KERNEL	(MSR_ | MSR_64BIT)
 #define MSR_USER32	(MSR_ | MSR_PR | MSR_EE)
 #define MSR_USER64	(MSR_USER32 | MSR_64BIT)
-- 
1.8.1.2

^ permalink raw reply related	[flat|nested] 76+ messages in thread

* [PATCH 43/63] powerpc: Reset MSR_LE on signal entry
  2013-08-06 16:01 [PATCH 00/63] 64bit PowerPC little endian support Anton Blanchard
                   ` (41 preceding siblings ...)
  2013-08-06 16:01 ` [PATCH 42/63] powerpc: Set MSR_LE bit on little endian builds Anton Blanchard
@ 2013-08-06 16:02 ` Anton Blanchard
  2013-08-06 16:02 ` [PATCH 44/63] powerpc: Include the appropriate endianness header Anton Blanchard
                   ` (19 subsequent siblings)
  62 siblings, 0 replies; 76+ messages in thread
From: Anton Blanchard @ 2013-08-06 16:02 UTC (permalink / raw)
  To: Benjamin Herrenschmidt, Paul Mackerras; +Cc: linuxppc-dev

We currently always enter signal handlers in big endian mode, which is
wrong for little endian builds. Signals should be taken in the
kernel's native endianness.

Signed-off-by: Anton Blanchard <anton@samba.org>
---
 arch/powerpc/kernel/signal_32.c | 3 ++-
 arch/powerpc/kernel/signal_64.c | 3 ++-
 2 files changed, 4 insertions(+), 2 deletions(-)

diff --git a/arch/powerpc/kernel/signal_32.c b/arch/powerpc/kernel/signal_32.c
index 0f83122..3b9a673 100644
--- a/arch/powerpc/kernel/signal_32.c
+++ b/arch/powerpc/kernel/signal_32.c
@@ -1036,8 +1036,9 @@ int handle_rt_signal32(unsigned long sig, struct k_sigaction *ka,
 	regs->gpr[5] = (unsigned long) &rt_sf->uc;
 	regs->gpr[6] = (unsigned long) rt_sf;
 	regs->nip = (unsigned long) ka->sa.sa_handler;
-	/* enter the signal handler in big-endian mode */
+	/* enter the signal handler in native-endian mode */
 	regs->msr &= ~MSR_LE;
+	regs->msr |= (MSR_KERNEL & MSR_LE);
 #ifdef CONFIG_PPC_TRANSACTIONAL_MEM
 	/* Remove TM bits from thread's MSR.  The MSR in the sigcontext
 	 * just indicates to userland that we were doing a transaction, but we
diff --git a/arch/powerpc/kernel/signal_64.c b/arch/powerpc/kernel/signal_64.c
index cbd2692..f7e61e0 100644
--- a/arch/powerpc/kernel/signal_64.c
+++ b/arch/powerpc/kernel/signal_64.c
@@ -767,8 +767,9 @@ int handle_rt_signal64(int signr, struct k_sigaction *ka, siginfo_t *info,
 
 	/* Set up "regs" so we "return" to the signal handler. */
 	err |= get_user(regs->nip, &funct_desc_ptr->entry);
-	/* enter the signal handler in big-endian mode */
+	/* enter the signal handler in native-endian mode */
 	regs->msr &= ~MSR_LE;
+	regs->msr |= (MSR_KERNEL & MSR_LE);
 	regs->gpr[1] = newsp;
 	err |= get_user(regs->gpr[2], &funct_desc_ptr->toc);
 	regs->gpr[3] = signr;
-- 
1.8.1.2

^ permalink raw reply related	[flat|nested] 76+ messages in thread

* [PATCH 44/63] powerpc: Include the appropriate endianness header
  2013-08-06 16:01 [PATCH 00/63] 64bit PowerPC little endian support Anton Blanchard
                   ` (42 preceding siblings ...)
  2013-08-06 16:02 ` [PATCH 43/63] powerpc: Reset MSR_LE on signal entry Anton Blanchard
@ 2013-08-06 16:02 ` Anton Blanchard
  2013-08-06 16:02 ` [PATCH 45/63] powerpc: endian safe trampoline Anton Blanchard
                   ` (18 subsequent siblings)
  62 siblings, 0 replies; 76+ messages in thread
From: Anton Blanchard @ 2013-08-06 16:02 UTC (permalink / raw)
  To: Benjamin Herrenschmidt, Paul Mackerras; +Cc: linuxppc-dev

From: Ian Munsie <imunsie@au1.ibm.com>

This patch has powerpc include the appropriate generic endianness
header depending on the endianness the compiler reports.

Signed-off-by: Ian Munsie <imunsie@au1.ibm.com>
Signed-off-by: Anton Blanchard <anton@samba.org>
---
 arch/powerpc/include/uapi/asm/byteorder.h | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/arch/powerpc/include/uapi/asm/byteorder.h b/arch/powerpc/include/uapi/asm/byteorder.h
index aa6cc4f..ca931d0 100644
--- a/arch/powerpc/include/uapi/asm/byteorder.h
+++ b/arch/powerpc/include/uapi/asm/byteorder.h
@@ -7,6 +7,10 @@
  * as published by the Free Software Foundation; either version
  * 2 of the License, or (at your option) any later version.
  */
+#ifdef __LITTLE_ENDIAN__
+#include <linux/byteorder/little_endian.h>
+#else
 #include <linux/byteorder/big_endian.h>
+#endif
 
 #endif /* _ASM_POWERPC_BYTEORDER_H */
-- 
1.8.1.2

^ permalink raw reply related	[flat|nested] 76+ messages in thread

* [PATCH 45/63] powerpc: endian safe trampoline
  2013-08-06 16:01 [PATCH 00/63] 64bit PowerPC little endian support Anton Blanchard
                   ` (43 preceding siblings ...)
  2013-08-06 16:02 ` [PATCH 44/63] powerpc: Include the appropriate endianness header Anton Blanchard
@ 2013-08-06 16:02 ` Anton Blanchard
  2013-08-06 16:02 ` [PATCH 46/63] powerpc: Add endian safe trampoline to pseries secondary thread entry Anton Blanchard
                   ` (17 subsequent siblings)
  62 siblings, 0 replies; 76+ messages in thread
From: Anton Blanchard @ 2013-08-06 16:02 UTC (permalink / raw)
  To: Benjamin Herrenschmidt, Paul Mackerras; +Cc: linuxppc-dev

From: Benjamin Herrenschmidt <benh@kernel.crashing.org>

Create a trampoline that works in either endianness and flips to
the expected one. Use it for primary and secondary thread
entry as well as RTAS and OF call return.
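
A worked example of the trick the trampoline relies on (the macro
names below are invented; the encodings can be checked against the
opcode tables):

/*
 * "tdi 0,0,0x48" encodes as 0x08000048 and traps never, i.e. acts as
 * a nop.  A CPU running in the opposite endianness fetches those four
 * bytes reversed, 0x48000008, which decodes as "b .+8", so it skips
 * the nop/branch pair and falls into the byte-reversed trampoline
 * body that flips MSR[LE] via mtsrr0/mtsrr1/rfid.
 */
#define ENDIAN_FLIP_NATIVE	0x08000048u	/* tdi 0,0,0x48 */
#define ENDIAN_FLIP_REVERSED	0x48000008u	/* b .+8        */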

Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Anton Blanchard <anton@samba.org>
---
 arch/powerpc/include/asm/ppc_asm.h | 31 ++++++++++++++++++++++++++++++-
 arch/powerpc/kernel/entry_64.S     | 36 ++++++++++++++++++++----------------
 arch/powerpc/kernel/head_64.S      |  2 ++
 3 files changed, 52 insertions(+), 17 deletions(-)

diff --git a/arch/powerpc/include/asm/ppc_asm.h b/arch/powerpc/include/asm/ppc_asm.h
index bc606e4..f9d8418 100644
--- a/arch/powerpc/include/asm/ppc_asm.h
+++ b/arch/powerpc/include/asm/ppc_asm.h
@@ -845,6 +845,35 @@ END_FTR_SECTION_NESTED(CPU_FTR_HAS_PPR,CPU_FTR_HAS_PPR,946)
 #define N_SLINE	68
 #define N_SO	100
 
-#endif /*  __ASSEMBLY__ */
+/*
+ * Create an endian fixup trampoline
+ *
+ * This starts with a "tdi 0,0,0x48" instruction which is
+ * essentially a "trap never", and thus akin to a nop.
+ *
+ * The opcode for this instruction read with the wrong endian
+ * however results in a b . + 8
+ *
+ * So essentially we use that trick to execute the following
+ * trampoline in "reverse endian" if we are running with the
+ * MSR_LE bit set the "wrong" way for whatever endianness the
+ * kernel is built for.
+ */
 
+#ifdef CONFIG_PPC_BOOK3E
+#define FIXUP_ENDIAN
+#else
+#define FIXUP_ENDIAN						   \
+	tdi   0,0,0x48;	  /* Reverse endian of b . + 8		*/ \
+	b     $+36;	  /* Skip trampoline if endian is good	*/ \
+	.long 0x05009f42; /* bcl 20,31,$+4			*/ \
+	.long 0xa602487d; /* mflr r10				*/ \
+	.long 0x1c004a39; /* addi r10,r10,28			*/ \
+	.long 0xa600607d; /* mfmsr r11				*/ \
+	.long 0x01006b69; /* xori r11,r11,1			*/ \
+	.long 0xa6035a7d; /* mtsrr0 r10				*/ \
+	.long 0xa6037b7d; /* mtsrr1 r11				*/ \
+	.long 0x2400004c  /* rfid				*/
+#endif /* !CONFIG_PPC_BOOK3E */
+#endif /*  __ASSEMBLY__ */
 #endif /* _ASM_POWERPC_PPC_ASM_H */
diff --git a/arch/powerpc/kernel/entry_64.S b/arch/powerpc/kernel/entry_64.S
index 209e8ce..5694e1f 100644
--- a/arch/powerpc/kernel/entry_64.S
+++ b/arch/powerpc/kernel/entry_64.S
@@ -1020,7 +1020,7 @@ _GLOBAL(enter_rtas)
 	
         li      r9,1
         rldicr  r9,r9,MSR_SF_LG,(63-MSR_SF_LG)
-	ori	r9,r9,MSR_IR|MSR_DR|MSR_FE0|MSR_FE1|MSR_FP|MSR_RI
+	ori	r9,r9,MSR_IR|MSR_DR|MSR_FE0|MSR_FE1|MSR_FP|MSR_RI|MSR_LE
 	andc	r6,r0,r9
 	sync				/* disable interrupts so SRR0/1 */
 	mtmsrd	r0			/* don't get trashed */
@@ -1035,6 +1035,8 @@ _GLOBAL(enter_rtas)
 	b	.	/* prevent speculative execution */
 
 _STATIC(rtas_return_loc)
+	FIXUP_ENDIAN
+
 	/* relocation is off at this point */
 	GET_PACA(r4)
 	clrldi	r4,r4,2			/* convert to realmode address */
@@ -1106,28 +1108,30 @@ _GLOBAL(enter_prom)
 	std	r10,_CCR(r1)
 	std	r11,_MSR(r1)
 
-	/* Get the PROM entrypoint */
-	mtlr	r4
+	/* Put PROM address in SRR0 */
+	mtsrr0	r4
+
+	/* Setup our trampoline return addr in LR */
+	bcl	20,31,$+4
+0:	mflr	r4
+	addi	r4,r4,(1f - 0b)
+       	mtlr	r4
 
-	/* Switch MSR to 32 bits mode
+	/* Prepare a 32-bit mode big endian MSR
 	 */
 #ifdef CONFIG_PPC_BOOK3E
 	rlwinm	r11,r11,0,1,31
-	mtmsr	r11
+	mtsrr1	r11
+	rfi
 #else /* CONFIG_PPC_BOOK3E */
-        mfmsr   r11
-        li      r12,1
-        rldicr  r12,r12,MSR_SF_LG,(63-MSR_SF_LG)
-        andc    r11,r11,r12
-        li      r12,1
-        rldicr  r12,r12,MSR_ISF_LG,(63-MSR_ISF_LG)
-        andc    r11,r11,r12
-        mtmsrd  r11
+	LOAD_REG_IMMEDIATE(r12, MSR_SF | MSR_ISF | MSR_LE)
+	andc	r11,r11,r12
+	mtsrr1	r11
+	rfid
 #endif /* CONFIG_PPC_BOOK3E */
-        isync
 
-	/* Enter PROM here... */
-	blrl
+1:	/* Return from OF */
+	FIXUP_ENDIAN
 
 	/* Just make sure that r1 top 32 bits didn't get
 	 * corrupt by OF
diff --git a/arch/powerpc/kernel/head_64.S b/arch/powerpc/kernel/head_64.S
index 3d11d80..065d10f 100644
--- a/arch/powerpc/kernel/head_64.S
+++ b/arch/powerpc/kernel/head_64.S
@@ -68,6 +68,7 @@ _stext:
 _GLOBAL(__start)
 	/* NOP this out unconditionally */
 BEGIN_FTR_SECTION
+	FIXUP_ENDIAN
 	b	.__start_initialization_multiplatform
 END_FTR_SECTION(0, 1)
 
@@ -115,6 +116,7 @@ __run_at_load:
  */
 	.globl	__secondary_hold
 __secondary_hold:
+	FIXUP_ENDIAN
 #ifndef CONFIG_PPC_BOOK3E
 	mfmsr	r24
 	ori	r24,r24,MSR_RI
-- 
1.8.1.2

^ permalink raw reply related	[flat|nested] 76+ messages in thread

* [PATCH 46/63] powerpc: Add endian safe trampoline to pseries secondary thread entry
  2013-08-06 16:01 [PATCH 00/63] 64bit PowerPC little endian support Anton Blanchard
                   ` (44 preceding siblings ...)
  2013-08-06 16:02 ` [PATCH 45/63] powerpc: endian safe trampoline Anton Blanchard
@ 2013-08-06 16:02 ` Anton Blanchard
  2013-08-06 16:02 ` [PATCH 47/63] pseries: Add H_SET_MODE to change exception endianness Anton Blanchard
                   ` (16 subsequent siblings)
  62 siblings, 0 replies; 76+ messages in thread
From: Anton Blanchard @ 2013-08-06 16:02 UTC (permalink / raw)
  To: Benjamin Herrenschmidt, Paul Mackerras; +Cc: linuxppc-dev

We might need to flip endian when starting secondary threads via
RTAS.

Signed-off-by: Anton Blanchard <anton@samba.org>
---
 arch/powerpc/kernel/head_64.S | 1 +
 1 file changed, 1 insertion(+)

diff --git a/arch/powerpc/kernel/head_64.S b/arch/powerpc/kernel/head_64.S
index 065d10f..2ae41ab 100644
--- a/arch/powerpc/kernel/head_64.S
+++ b/arch/powerpc/kernel/head_64.S
@@ -207,6 +207,7 @@ _GLOBAL(generic_secondary_thread_init)
  * as SCOM before entry).
  */
 _GLOBAL(generic_secondary_smp_init)
+	FIXUP_ENDIAN
 	mr	r24,r3
 	mr	r25,r4
 
-- 
1.8.1.2

^ permalink raw reply related	[flat|nested] 76+ messages in thread

* [PATCH 47/63] pseries: Add H_SET_MODE to change exception endianness
  2013-08-06 16:01 [PATCH 00/63] 64bit PowerPC little endian support Anton Blanchard
                   ` (45 preceding siblings ...)
  2013-08-06 16:02 ` [PATCH 46/63] powerpc: Add endian safe trampoline to pseries secondary thread entry Anton Blanchard
@ 2013-08-06 16:02 ` Anton Blanchard
  2013-08-06 16:02 ` [PATCH 48/63] powerpc/kvm/book3s_hv: Add little endian guest support Anton Blanchard
                   ` (15 subsequent siblings)
  62 siblings, 0 replies; 76+ messages in thread
From: Anton Blanchard @ 2013-08-06 16:02 UTC (permalink / raw)
  To: Benjamin Herrenschmidt, Paul Mackerras; +Cc: linuxppc-dev

On little endian builds call H_SET_MODE so exceptions have the
correct endianness. We need a better hook to handle flipping back
into big endian mode on a kexec, but insert it into the mmu
teardown callback for now.

Signed-off-by: Anton Blanchard <anton@samba.org>
---
 arch/powerpc/include/asm/hvcall.h               |  2 ++
 arch/powerpc/platforms/pseries/lpar.c           | 17 ++++++++++
 arch/powerpc/platforms/pseries/plpar_wrappers.h | 26 +++++++++++++++
 arch/powerpc/platforms/pseries/setup.c          | 42 +++++++++++++++++++++++++
 4 files changed, 87 insertions(+)

diff --git a/arch/powerpc/include/asm/hvcall.h b/arch/powerpc/include/asm/hvcall.h
index 0c7f2bf..d8b600b 100644
--- a/arch/powerpc/include/asm/hvcall.h
+++ b/arch/powerpc/include/asm/hvcall.h
@@ -403,6 +403,8 @@ static inline unsigned long cmo_get_page_size(void)
 extern long pSeries_enable_reloc_on_exc(void);
 extern long pSeries_disable_reloc_on_exc(void);
 
+extern long pseries_big_endian_exceptions(void);
+
 #else
 
 #define pSeries_enable_reloc_on_exc()  do {} while (0)
diff --git a/arch/powerpc/platforms/pseries/lpar.c b/arch/powerpc/platforms/pseries/lpar.c
index 0b7c86e..f28bf35 100644
--- a/arch/powerpc/platforms/pseries/lpar.c
+++ b/arch/powerpc/platforms/pseries/lpar.c
@@ -239,6 +239,23 @@ static void pSeries_lpar_hptab_clear(void)
 					&(ptes[j].pteh), &(ptes[j].ptel));
 		}
 	}
+
+#ifdef __LITTLE_ENDIAN__
+	/* Reset exceptions to big endian */
+	if (firmware_has_feature(FW_FEATURE_SET_MODE)) {
+		long rc;
+
+		rc = pseries_big_endian_exceptions();
+		/*
+		 * At this point it is unlikely panic() will get anything
+		 * out to the user, but at least this will stop us from
+		 * continuing on further and creating an even more
+		 * difficult to debug situation.
+		 */
+		if (rc)
+			panic("Could not enable big endian exceptions");
+	}
+#endif
 }
 
 /*
diff --git a/arch/powerpc/platforms/pseries/plpar_wrappers.h b/arch/powerpc/platforms/pseries/plpar_wrappers.h
index 417d0bf..6c1c218 100644
--- a/arch/powerpc/platforms/pseries/plpar_wrappers.h
+++ b/arch/powerpc/platforms/pseries/plpar_wrappers.h
@@ -287,6 +287,32 @@ static inline long disable_reloc_on_exceptions(void) {
 	return plpar_set_mode(0, 3, 0, 0);
 }
 
+/*
+ * Take exceptions in big endian mode on this partition
+ *
+ * Note: this call has a partition wide scope and can take a while to complete.
+ * If it returns H_LONG_BUSY_* it should be retried periodically until it
+ * returns H_SUCCESS.
+ */
+static inline long enable_big_endian_exceptions(void)
+{
+	/* mflags = 0: big endian exceptions */
+	return plpar_set_mode(0, 4, 0, 0);
+}
+
+/*
+ * Take exceptions in little endian mode on this partition
+ *
+ * Note: this call has a partition wide scope and can take a while to complete.
+ * If it returns H_LONG_BUSY_* it should be retried periodically until it
+ * returns H_SUCCESS.
+ */
+static inline long enable_little_endian_exceptions(void)
+{
+	/* mflags = 1: little endian exceptions */
+	return plpar_set_mode(1, 4, 0, 0);
+}
+
 static inline long plapr_set_ciabr(unsigned long ciabr)
 {
 	return plpar_set_mode(0, 1, ciabr, 0);
diff --git a/arch/powerpc/platforms/pseries/setup.c b/arch/powerpc/platforms/pseries/setup.c
index 33d6196..51c195d 100644
--- a/arch/powerpc/platforms/pseries/setup.c
+++ b/arch/powerpc/platforms/pseries/setup.c
@@ -430,6 +430,32 @@ static void pSeries_machine_kexec(struct kimage *image)
 }
 #endif
 
+#ifdef __LITTLE_ENDIAN__
+long pseries_big_endian_exceptions(void)
+{
+	long rc;
+
+	while (1) {
+		rc = enable_big_endian_exceptions();
+		if (!H_IS_LONG_BUSY(rc))
+			return rc;
+		mdelay(get_longbusy_msecs(rc));
+	}
+}
+
+static long pseries_little_endian_exceptions(void)
+{
+	long rc;
+
+	while (1) {
+		rc = enable_little_endian_exceptions();
+		if (!H_IS_LONG_BUSY(rc))
+			return rc;
+		mdelay(get_longbusy_msecs(rc));
+	}
+}
+#endif
+
 static void __init pSeries_setup_arch(void)
 {
 	panic_timeout = 10;
@@ -687,6 +713,22 @@ static int __init pSeries_probe(void)
 	/* Now try to figure out if we are running on LPAR */
 	of_scan_flat_dt(pseries_probe_fw_features, NULL);
 
+#ifdef __LITTLE_ENDIAN__
+	if (firmware_has_feature(FW_FEATURE_SET_MODE)) {
+		long rc;
+		/*
+		 * Tell the hypervisor that we want our exceptions to
+		 * be taken in little endian mode. If this fails we don't
+		 * want to use BUG() because it will trigger an exception.
+		 */
+		rc = pseries_little_endian_exceptions();
+		if (rc) {
+			ppc_md.progress("H_SET_MODE LE exception fail", 0);
+			panic("Could not enable little endian exceptions");
+		}
+	}
+#endif
+
 	if (firmware_has_feature(FW_FEATURE_LPAR))
 		hpte_init_lpar();
 	else
-- 
1.8.1.2

^ permalink raw reply related	[flat|nested] 76+ messages in thread

* [PATCH 48/63] powerpc/kvm/book3s_hv: Add little endian guest support
  2013-08-06 16:01 [PATCH 00/63] 64bit PowerPC little endian support Anton Blanchard
                   ` (46 preceding siblings ...)
  2013-08-06 16:02 ` [PATCH 47/63] pseries: Add H_SET_MODE to change exception endianness Anton Blanchard
@ 2013-08-06 16:02 ` Anton Blanchard
  2013-08-07  4:24   ` Paul Mackerras
  2013-08-06 16:02 ` [PATCH 49/63] powerpc: Remove open coded byte swap macro in alignment handler Anton Blanchard
                   ` (14 subsequent siblings)
  62 siblings, 1 reply; 76+ messages in thread
From: Anton Blanchard @ 2013-08-06 16:02 UTC (permalink / raw)
  To: Benjamin Herrenschmidt, Paul Mackerras; +Cc: linuxppc-dev

Add support for the H_SET_MODE hcall so we can select the
endianness of our exceptions.

We create a guest MSR from scratch when delivering exceptions in
a few places. Instead of extracting LPCR[ILE] and inserting it into
MSR_LE each time, create a new per-vcpu field, intr_msr, which
contains the entire MSR to use.
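
Roughly, the per-vcpu value is just the following (sketch only; the
helper name is invented, the field names follow the patch):

static void set_guest_intr_msr(struct kvm_vcpu *v, bool little_endian)
{
	v->arch.intr_msr = MSR_SF | MSR_ME;	/* as set at vcpu create time */
	if (little_endian)			/* mirrors LPCR[ILE] */
		v->arch.intr_msr |= MSR_LE;
}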

Signed-off-by: Anton Blanchard <anton@samba.org>
---
 arch/powerpc/include/asm/kvm_host.h     |  1 +
 arch/powerpc/kernel/asm-offsets.c       |  1 +
 arch/powerpc/kvm/book3s_64_mmu_hv.c     |  2 +-
 arch/powerpc/kvm/book3s_hv.c            | 44 +++++++++++++++++++++++++++++++++
 arch/powerpc/kvm/book3s_hv_rmhandlers.S | 15 ++++-------
 5 files changed, 52 insertions(+), 11 deletions(-)

diff --git a/arch/powerpc/include/asm/kvm_host.h b/arch/powerpc/include/asm/kvm_host.h
index af326cd..88f0c8e 100644
--- a/arch/powerpc/include/asm/kvm_host.h
+++ b/arch/powerpc/include/asm/kvm_host.h
@@ -609,6 +609,7 @@ struct kvm_vcpu_arch {
 	spinlock_t tbacct_lock;
 	u64 busy_stolen;
 	u64 busy_preempt;
+	unsigned long intr_msr;
 #endif
 };
 
diff --git a/arch/powerpc/kernel/asm-offsets.c b/arch/powerpc/kernel/asm-offsets.c
index c7e8afc..0f93cb0 100644
--- a/arch/powerpc/kernel/asm-offsets.c
+++ b/arch/powerpc/kernel/asm-offsets.c
@@ -486,6 +486,7 @@ int main(void)
 	DEFINE(VCPU_DAR, offsetof(struct kvm_vcpu, arch.shregs.dar));
 	DEFINE(VCPU_VPA, offsetof(struct kvm_vcpu, arch.vpa.pinned_addr));
 	DEFINE(VCPU_VPA_DIRTY, offsetof(struct kvm_vcpu, arch.vpa.dirty));
+	DEFINE(VCPU_INTR_MSR, offsetof(struct kvm_vcpu, arch.intr_msr));
 #endif
 #ifdef CONFIG_PPC_BOOK3S
 	DEFINE(VCPU_VCPUID, offsetof(struct kvm_vcpu, vcpu_id));
diff --git a/arch/powerpc/kvm/book3s_64_mmu_hv.c b/arch/powerpc/kvm/book3s_64_mmu_hv.c
index 710d313..fae9751 100644
--- a/arch/powerpc/kvm/book3s_64_mmu_hv.c
+++ b/arch/powerpc/kvm/book3s_64_mmu_hv.c
@@ -266,7 +266,7 @@ void kvmppc_mmu_destroy(struct kvm_vcpu *vcpu)
 
 static void kvmppc_mmu_book3s_64_hv_reset_msr(struct kvm_vcpu *vcpu)
 {
-	kvmppc_set_msr(vcpu, MSR_SF | MSR_ME);
+	kvmppc_set_msr(vcpu, vcpu->arch.intr_msr);
 }
 
 /*
diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c
index cf39bf4..d4f087b 100644
--- a/arch/powerpc/kvm/book3s_hv.c
+++ b/arch/powerpc/kvm/book3s_hv.c
@@ -503,6 +503,43 @@ static void kvmppc_create_dtl_entry(struct kvm_vcpu *vcpu,
 	vcpu->arch.dtl.dirty = true;
 }
 
+static int kvmppc_h_set_mode(struct kvm_vcpu *vcpu, unsigned long mflags,
+			     unsigned long resource, unsigned long value1,
+			     unsigned long value2)
+{
+	struct kvm *kvm = vcpu->kvm;
+	struct kvm_vcpu *v;
+	int n;
+
+	if (resource == 4) {
+		if (value1)
+			return H_P3;
+		if (value2)
+			return H_P4;
+
+		switch (mflags) {
+		case 0:
+			kvm->arch.lpcr &= ~LPCR_ILE;
+			kvm_for_each_vcpu(n, v, kvm)
+				v->arch.intr_msr &= ~MSR_LE;
+			kick_all_cpus_sync();
+			return H_SUCCESS;
+
+		case 1:
+			kvm->arch.lpcr |= LPCR_ILE;
+			kvm_for_each_vcpu(n, v, kvm)
+				v->arch.intr_msr |= MSR_LE;
+			kick_all_cpus_sync();
+			return H_SUCCESS;
+
+		default:
+			return H_UNSUPPORTED_FLAG_START;
+		}
+	}
+
+	return H_P2;
+}
+
 int kvmppc_pseries_do_hcall(struct kvm_vcpu *vcpu)
 {
 	unsigned long req = kvmppc_get_gpr(vcpu, 3);
@@ -557,6 +594,12 @@ int kvmppc_pseries_do_hcall(struct kvm_vcpu *vcpu)
 
 		/* Send the error out to userspace via KVM_RUN */
 		return rc;
+	case H_SET_MODE:
+		ret = kvmppc_h_set_mode(vcpu, kvmppc_get_gpr(vcpu, 4),
+					kvmppc_get_gpr(vcpu, 5),
+					kvmppc_get_gpr(vcpu, 6),
+					kvmppc_get_gpr(vcpu, 7));
+		break;
 
 	case H_XIRR:
 	case H_CPPR:
@@ -925,6 +968,7 @@ struct kvm_vcpu *kvmppc_core_vcpu_create(struct kvm *kvm, unsigned int id)
 	spin_lock_init(&vcpu->arch.vpa_update_lock);
 	spin_lock_init(&vcpu->arch.tbacct_lock);
 	vcpu->arch.busy_preempt = TB_NIL;
+	vcpu->arch.intr_msr = MSR_SF | MSR_ME;
 
 	kvmppc_mmu_book3s_hv_init(vcpu);
 
diff --git a/arch/powerpc/kvm/book3s_hv_rmhandlers.S b/arch/powerpc/kvm/book3s_hv_rmhandlers.S
index b93e3cd..c48061c 100644
--- a/arch/powerpc/kvm/book3s_hv_rmhandlers.S
+++ b/arch/powerpc/kvm/book3s_hv_rmhandlers.S
@@ -521,8 +521,7 @@ END_FTR_SECTION_IFSET(CPU_FTR_ARCH_206)
 12:	mr	r6,r10
 	mr	r10,r0
 	mr	r7,r11
-	li	r11,(MSR_ME << 1) | 1	/* synthesize MSR_SF | MSR_ME */
-	rotldi	r11,r11,63
+	ld	r11,VCPU_INTR_MSR(r4)
 	b	5f
 11:	beq	5f
 	mfspr	r0,SPRN_DEC
@@ -758,8 +757,7 @@ do_ext_interrupt:
 	mtspr	SPRN_SRR0, r10
 	mtspr	SPRN_SRR1, r11
 	li	r10, BOOK3S_INTERRUPT_EXTERNAL
-	li	r11, (MSR_ME << 1) | 1	/* synthesize MSR_SF | MSR_ME */
-	rotldi	r11, r11, 63
+	ld	r11,VCPU_INTR_MSR(r9)
 2:	mr	r4, r9
 	mtspr	SPRN_LPCR, r8
 	b	fast_guest_return
@@ -1300,8 +1298,7 @@ kvmppc_hdsi:
 	mtspr	SPRN_SRR0, r10
 	mtspr	SPRN_SRR1, r11
 	li	r10, BOOK3S_INTERRUPT_DATA_STORAGE
-	li	r11, (MSR_ME << 1) | 1	/* synthesize MSR_SF | MSR_ME */
-	rotldi	r11, r11, 63
+	ld	r11,VCPU_INTR_MSR(r9)
 fast_interrupt_c_return:
 6:	ld	r7, VCPU_CTR(r9)
 	lwz	r8, VCPU_XER(r9)
@@ -1370,8 +1367,7 @@ kvmppc_hisi:
 1:	mtspr	SPRN_SRR0, r10
 	mtspr	SPRN_SRR1, r11
 	li	r10, BOOK3S_INTERRUPT_INST_STORAGE
-	li	r11, (MSR_ME << 1) | 1	/* synthesize MSR_SF | MSR_ME */
-	rotldi	r11, r11, 63
+	ld	r11,VCPU_INTR_MSR(r9)
 	b	fast_interrupt_c_return
 
 3:	ld	r6, VCPU_KVM(r9)	/* not relocated, use VRMA */
@@ -1697,8 +1693,7 @@ machine_check_realmode:
 	beq	mc_cont
 	/* If not, deliver a machine check.  SRR0/1 are already set */
 	li	r10, BOOK3S_INTERRUPT_MACHINE_CHECK
-	li	r11, (MSR_ME << 1) | 1	/* synthesize MSR_SF | MSR_ME */
-	rotldi	r11, r11, 63
+	ld	r11,VCPU_INTR_MSR(r9)
 	b	fast_interrupt_c_return
 
 secondary_too_late:
-- 
1.8.1.2

^ permalink raw reply related	[flat|nested] 76+ messages in thread

* [PATCH 49/63] powerpc: Remove open coded byte swap macro in alignment handler
  2013-08-06 16:01 [PATCH 00/63] 64bit PowerPC little endian support Anton Blanchard
                   ` (47 preceding siblings ...)
  2013-08-06 16:02 ` [PATCH 48/63] powerpc/kvm/book3s_hv: Add little endian guest support Anton Blanchard
@ 2013-08-06 16:02 ` Anton Blanchard
  2013-08-06 16:02 ` [PATCH 50/63] powerpc: Remove hard coded FP offsets " Anton Blanchard
                   ` (13 subsequent siblings)
  62 siblings, 0 replies; 76+ messages in thread
From: Anton Blanchard @ 2013-08-06 16:02 UTC (permalink / raw)
  To: Benjamin Herrenschmidt, Paul Mackerras; +Cc: linuxppc-dev

Use swab64/32/16 instead of open coding the byte swaps.
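
For reference, a worked example of the replacement helpers
(illustrative values only):

static void swab_examples(void)
{
	u64 a = swab64(0x0011223344556677ULL);	/* 0x7766554433221100 */
	u32 b = swab32(0x00112233);		/* 0x33221100 */
	u16 c = swab16(0x0011);			/* 0x1100 */

	(void)a; (void)b; (void)c;
}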

Signed-off-by: Anton Blanchard <anton@samba.org>
---
 arch/powerpc/kernel/align.c | 36 ++++++++++++------------------------
 1 file changed, 12 insertions(+), 24 deletions(-)

diff --git a/arch/powerpc/kernel/align.c b/arch/powerpc/kernel/align.c
index 52e5758..573728b 100644
--- a/arch/powerpc/kernel/align.c
+++ b/arch/powerpc/kernel/align.c
@@ -54,8 +54,6 @@ struct aligninfo {
 /* DSISR bits reported for a DCBZ instruction: */
 #define DCBZ	0x5f	/* 8xx/82xx dcbz faults when cache not enabled */
 
-#define SWAP(a, b)	(t = (a), (a) = (b), (b) = t)
-
 /*
  * The PowerPC stores certain bits of the instruction that caused the
  * alignment exception in the DSISR register.  This array maps those
@@ -458,7 +456,7 @@ static struct aligninfo spe_aligninfo[32] = {
 static int emulate_spe(struct pt_regs *regs, unsigned int reg,
 		       unsigned int instr)
 {
-	int t, ret;
+	int ret;
 	union {
 		u64 ll;
 		u32 w[2];
@@ -581,24 +579,18 @@ static int emulate_spe(struct pt_regs *regs, unsigned int reg,
 	if (flags & SW) {
 		switch (flags & 0xf0) {
 		case E8:
-			SWAP(data.v[0], data.v[7]);
-			SWAP(data.v[1], data.v[6]);
-			SWAP(data.v[2], data.v[5]);
-			SWAP(data.v[3], data.v[4]);
+			data.ll = swab64(data.ll);
 			break;
 		case E4:
-
-			SWAP(data.v[0], data.v[3]);
-			SWAP(data.v[1], data.v[2]);
-			SWAP(data.v[4], data.v[7]);
-			SWAP(data.v[5], data.v[6]);
+			data.w[0] = swab32(data.w[0]);
+			data.w[1] = swab32(data.w[1]);
 			break;
 		/* Its half word endian */
 		default:
-			SWAP(data.v[0], data.v[1]);
-			SWAP(data.v[2], data.v[3]);
-			SWAP(data.v[4], data.v[5]);
-			SWAP(data.v[6], data.v[7]);
+			data.h[0] = swab16(data.h[0]);
+			data.h[1] = swab16(data.h[1]);
+			data.h[2] = swab16(data.h[2]);
+			data.h[3] = swab16(data.h[3]);
 			break;
 		}
 	}
@@ -706,7 +698,7 @@ int fix_alignment(struct pt_regs *regs)
 	unsigned int dsisr;
 	unsigned char __user *addr;
 	unsigned long p, swiz;
-	int ret, t;
+	int ret;
 	union {
 		u64 ll;
 		double dd;
@@ -911,17 +903,13 @@ int fix_alignment(struct pt_regs *regs)
 	if (flags & SW) {
 		switch (nb) {
 		case 8:
-			SWAP(data.v[0], data.v[7]);
-			SWAP(data.v[1], data.v[6]);
-			SWAP(data.v[2], data.v[5]);
-			SWAP(data.v[3], data.v[4]);
+			data.ll = swab64(data.ll);
 			break;
 		case 4:
-			SWAP(data.v[4], data.v[7]);
-			SWAP(data.v[5], data.v[6]);
+			data.x32.low32 = swab32(data.x32.low32);
 			break;
 		case 2:
-			SWAP(data.v[6], data.v[7]);
+			data.x16.low16 = swab16(data.x16.low16);
 			break;
 		}
 	}
-- 
1.8.1.2

^ permalink raw reply related	[flat|nested] 76+ messages in thread

* [PATCH 50/63] powerpc: Remove hard coded FP offsets in alignment handler
  2013-08-06 16:01 [PATCH 00/63] 64bit PowerPC little endian support Anton Blanchard
                   ` (48 preceding siblings ...)
  2013-08-06 16:02 ` [PATCH 49/63] powerpc: Remove open coded byte swap macro in alignment handler Anton Blanchard
@ 2013-08-06 16:02 ` Anton Blanchard
  2013-08-06 16:02 ` [PATCH 51/63] powerpc: Alignment handler shouldn't access VSX registers with TS_FPR Anton Blanchard
                   ` (12 subsequent siblings)
  62 siblings, 0 replies; 76+ messages in thread
From: Anton Blanchard @ 2013-08-06 16:02 UTC (permalink / raw)
  To: Benjamin Herrenschmidt, Paul Mackerras; +Cc: linuxppc-dev

The alignment handler assumes big endian ordering when selecting
the low word of a 64bit floating point value. Use the existing
x32.low32 union member, which works in both little and big endian.
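
A standalone sketch (not kernel code) of why the named member matters:

#include <stdint.h>

union example {
	uint64_t ll;
	uint32_t w[2];
	unsigned char v[8];
};

/*
 * With ll = 0x1111111122222222ULL, the low 32 bits (0x22222222) are
 * w[1] / v[4..7] on big endian but w[0] / v[0..3] on little endian,
 * so hard coding byte offset 4 only works on big endian.
 */
static uint32_t low_word_be_only(uint64_t x)
{
	union example d = { .ll = x };

	return d.w[1];	/* the low word on big endian ONLY */
}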

Signed-off-by: Anton Blanchard <anton@samba.org>
---
 arch/powerpc/kernel/align.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/arch/powerpc/kernel/align.c b/arch/powerpc/kernel/align.c
index 573728b..ceced33 100644
--- a/arch/powerpc/kernel/align.c
+++ b/arch/powerpc/kernel/align.c
@@ -891,7 +891,7 @@ int fix_alignment(struct pt_regs *regs)
 #ifdef CONFIG_PPC_FPU
 			preempt_disable();
 			enable_kernel_fp();
-			cvt_df(&data.dd, (float *)&data.v[4]);
+			cvt_df(&data.dd, (float *)&data.x32.low32);
 			preempt_enable();
 #else
 			return 0;
@@ -931,7 +931,7 @@ int fix_alignment(struct pt_regs *regs)
 #ifdef CONFIG_PPC_FPU
 		preempt_disable();
 		enable_kernel_fp();
-		cvt_fd((float *)&data.v[4], &data.dd);
+		cvt_fd((float *)&data.x32.low32, &data.dd);
 		preempt_enable();
 #else
 		return 0;
-- 
1.8.1.2

^ permalink raw reply related	[flat|nested] 76+ messages in thread

* [PATCH 51/63] powerpc: Alignment handler shouldn't access VSX registers with TS_FPR
  2013-08-06 16:01 [PATCH 00/63] 64bit PowerPC little endian support Anton Blanchard
                   ` (49 preceding siblings ...)
  2013-08-06 16:02 ` [PATCH 50/63] powerpc: Remove hard coded FP offsets " Anton Blanchard
@ 2013-08-06 16:02 ` Anton Blanchard
  2013-08-06 16:02 ` [PATCH 52/63] powerpc: Add little endian support to alignment handler Anton Blanchard
                   ` (11 subsequent siblings)
  62 siblings, 0 replies; 76+ messages in thread
From: Anton Blanchard @ 2013-08-06 16:02 UTC (permalink / raw)
  To: Benjamin Herrenschmidt, Paul Mackerras; +Cc: linuxppc-dev

The TS_FPR macro selects the FPR component of a VSX register (the
high doubleword). emulate_vsx uses this macro to get the base
address of the associated VSX register. This happens to work on big
endian, but fails on little endian.

Replace it with an explicit array access.
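
A sketch of the distinction (illustrative helper only):

/*
 * emulate_vsx() needs the base of the whole 16-byte VSR:
 *
 *   &thread.TS_FPR(reg)  == &thread.fpr[reg][TS_FPROFFSET]
 *                           (doubleword 1 on little endian builds)
 *   &thread.fpr[reg][0]  -> start of the VSR on both endians
 */
static char *vsr_base_sketch(struct thread_struct *t, unsigned int reg)
{
	return (char *)&t->fpr[reg][0];
}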

Signed-off-by: Anton Blanchard <anton@samba.org>
---
 arch/powerpc/kernel/align.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/powerpc/kernel/align.c b/arch/powerpc/kernel/align.c
index ceced33..edc179f 100644
--- a/arch/powerpc/kernel/align.c
+++ b/arch/powerpc/kernel/align.c
@@ -646,7 +646,7 @@ static int emulate_vsx(unsigned char __user *addr, unsigned int reg,
 	flush_vsx_to_thread(current);
 
 	if (reg < 32)
-		ptr = (char *) &current->thread.TS_FPR(reg);
+		ptr = (char *) &current->thread.fpr[reg][0];
 	else
 		ptr = (char *) &current->thread.vr[reg - 32];
 
-- 
1.8.1.2

^ permalink raw reply related	[flat|nested] 76+ messages in thread

* [PATCH 52/63] powerpc: Add little endian support to alignment handler
  2013-08-06 16:01 [PATCH 00/63] 64bit PowerPC little endian support Anton Blanchard
                   ` (50 preceding siblings ...)
  2013-08-06 16:02 ` [PATCH 51/63] powerpc: Alignment handler shouldn't access VSX registers with TS_FPR Anton Blanchard
@ 2013-08-06 16:02 ` Anton Blanchard
  2013-08-06 16:02 ` [PATCH 53/63] powerpc: Handle VSX alignment faults in little endian mode Anton Blanchard
                   ` (10 subsequent siblings)
  62 siblings, 0 replies; 76+ messages in thread
From: Anton Blanchard @ 2013-08-06 16:02 UTC (permalink / raw)
  To: Benjamin Herrenschmidt, Paul Mackerras; +Cc: linuxppc-dev

Handle most unaligned load and store faults in little
endian mode. Strings, multiples and VSX are not supported.

Signed-off-by: Anton Blanchard <anton@samba.org>
---
 arch/powerpc/kernel/align.c | 93 ++++++++++++++++++++++++++++++---------------
 1 file changed, 63 insertions(+), 30 deletions(-)

diff --git a/arch/powerpc/kernel/align.c b/arch/powerpc/kernel/align.c
index edc179f..0a1a8f2 100644
--- a/arch/powerpc/kernel/align.c
+++ b/arch/powerpc/kernel/align.c
@@ -262,6 +262,7 @@ static int emulate_dcbz(struct pt_regs *regs, unsigned char __user *addr)
 
 #define SWIZ_PTR(p)		((unsigned char __user *)((p) ^ swiz))
 
+#ifdef __BIG_ENDIAN__
 static int emulate_multiple(struct pt_regs *regs, unsigned char __user *addr,
 			    unsigned int reg, unsigned int nb,
 			    unsigned int flags, unsigned int instr,
@@ -390,6 +391,7 @@ static int emulate_fp_pair(unsigned char __user *addr, unsigned int reg,
 		return -EFAULT;
 	return 1;	/* exception handled and fixed up */
 }
+#endif
 
 #ifdef CONFIG_SPE
 
@@ -628,7 +630,7 @@ static int emulate_spe(struct pt_regs *regs, unsigned int reg,
 }
 #endif /* CONFIG_SPE */
 
-#ifdef CONFIG_VSX
+#if defined(CONFIG_VSX) && defined(__BIG_ENDIAN__)
 /*
  * Emulate VSX instructions...
  */
@@ -698,18 +700,28 @@ int fix_alignment(struct pt_regs *regs)
 	unsigned int dsisr;
 	unsigned char __user *addr;
 	unsigned long p, swiz;
-	int ret;
-	union {
+	int ret, i;
+	union data {
 		u64 ll;
 		double dd;
 		unsigned char v[8];
 		struct {
+#ifdef __LITTLE_ENDIAN__
+			int	 low32;
+			unsigned hi32;
+#else
 			unsigned hi32;
 			int	 low32;
+#endif
 		} x32;
 		struct {
+#ifdef __LITTLE_ENDIAN__
+			short	      low16;
+			unsigned char hi48[6];
+#else
 			unsigned char hi48[6];
 			short	      low16;
+#endif
 		} x16;
 	} data;
 
@@ -768,8 +780,9 @@ int fix_alignment(struct pt_regs *regs)
 
 	/* Byteswap little endian loads and stores */
 	swiz = 0;
-	if (regs->msr & MSR_LE) {
+	if ((regs->msr & MSR_LE) != (MSR_KERNEL & MSR_LE)) {
 		flags ^= SW;
+#ifdef __BIG_ENDIAN__
 		/*
 		 * So-called "PowerPC little endian" mode works by
 		 * swizzling addresses rather than by actually doing
@@ -782,11 +795,13 @@ int fix_alignment(struct pt_regs *regs)
 		 */
 		if (cpu_has_feature(CPU_FTR_PPC_LE))
 			swiz = 7;
+#endif
 	}
 
 	/* DAR has the operand effective address */
 	addr = (unsigned char __user *)regs->dar;
 
+#ifdef __BIG_ENDIAN__
 #ifdef CONFIG_VSX
 	if ((instruction & 0xfc00003e) == 0x7c000018) {
 		unsigned int elsize;
@@ -806,7 +821,7 @@ int fix_alignment(struct pt_regs *regs)
 			elsize = 8;
 
 		flags = 0;
-		if (regs->msr & MSR_LE)
+		if ((regs->msr & MSR_LE) != (MSR_KERNEL & MSR_LE))
 			flags |= SW;
 		if (instruction & 0x100)
 			flags |= ST;
@@ -821,6 +836,9 @@ int fix_alignment(struct pt_regs *regs)
 		return emulate_vsx(addr, reg, areg, regs, flags, nb, elsize);
 	}
 #endif
+#else
+	return -EFAULT;
+#endif
 	/* A size of 0 indicates an instruction we don't support, with
 	 * the exception of DCBZ which is handled as a special case here
 	 */
@@ -835,9 +853,13 @@ int fix_alignment(struct pt_regs *regs)
 	 * function
 	 */
 	if (flags & M) {
+#ifdef __BIG_ENDIAN__
 		PPC_WARN_ALIGNMENT(multiple, regs);
 		return emulate_multiple(regs, addr, reg, nb,
 					flags, instr, swiz);
+#else
+		return -EFAULT;
+#endif
 	}
 
 	/* Verify the address of the operand */
@@ -856,8 +878,12 @@ int fix_alignment(struct pt_regs *regs)
 
 	/* Special case for 16-byte FP loads and stores */
 	if (nb == 16) {
+#ifdef __BIG_ENDIAN__
 		PPC_WARN_ALIGNMENT(fp_pair, regs);
 		return emulate_fp_pair(addr, reg, flags);
+#else
+		return -EFAULT;
+#endif
 	}
 
 	PPC_WARN_ALIGNMENT(unaligned, regs);
@@ -866,24 +892,28 @@ int fix_alignment(struct pt_regs *regs)
 	 * get it from register values
 	 */
 	if (!(flags & ST)) {
-		data.ll = 0;
-		ret = 0;
-		p = (unsigned long) addr;
+		unsigned int start = 0;
+
 		switch (nb) {
-		case 8:
-			ret |= __get_user_inatomic(data.v[0], SWIZ_PTR(p++));
-			ret |= __get_user_inatomic(data.v[1], SWIZ_PTR(p++));
-			ret |= __get_user_inatomic(data.v[2], SWIZ_PTR(p++));
-			ret |= __get_user_inatomic(data.v[3], SWIZ_PTR(p++));
 		case 4:
-			ret |= __get_user_inatomic(data.v[4], SWIZ_PTR(p++));
-			ret |= __get_user_inatomic(data.v[5], SWIZ_PTR(p++));
+			start = offsetof(union data, x32.low32);
+			break;
 		case 2:
-			ret |= __get_user_inatomic(data.v[6], SWIZ_PTR(p++));
-			ret |= __get_user_inatomic(data.v[7], SWIZ_PTR(p++));
-			if (unlikely(ret))
-				return -EFAULT;
+			start = offsetof(union data, x16.low16);
+			break;
 		}
+
+		data.ll = 0;
+		ret = 0;
+		p = (unsigned long)addr;
+
+		for (i = 0; i < nb; i++)
+			ret |= __get_user_inatomic(data.v[start + i],
+						   SWIZ_PTR(p++));
+
+		if (unlikely(ret))
+			return -EFAULT;
+
 	} else if (flags & F) {
 		data.dd = current->thread.TS_FPR(reg);
 		if (flags & S) {
@@ -941,21 +971,24 @@ int fix_alignment(struct pt_regs *regs)
 
 	/* Store result to memory or update registers */
 	if (flags & ST) {
-		ret = 0;
-		p = (unsigned long) addr;
+		unsigned int start = 0;
+
 		switch (nb) {
-		case 8:
-			ret |= __put_user_inatomic(data.v[0], SWIZ_PTR(p++));
-			ret |= __put_user_inatomic(data.v[1], SWIZ_PTR(p++));
-			ret |= __put_user_inatomic(data.v[2], SWIZ_PTR(p++));
-			ret |= __put_user_inatomic(data.v[3], SWIZ_PTR(p++));
 		case 4:
-			ret |= __put_user_inatomic(data.v[4], SWIZ_PTR(p++));
-			ret |= __put_user_inatomic(data.v[5], SWIZ_PTR(p++));
+			start = offsetof(union data, x32.low32);
+			break;
 		case 2:
-			ret |= __put_user_inatomic(data.v[6], SWIZ_PTR(p++));
-			ret |= __put_user_inatomic(data.v[7], SWIZ_PTR(p++));
+			start = offsetof(union data, x16.low16);
+			break;
 		}
+
+		ret = 0;
+		p = (unsigned long)addr;
+
+		for (i = 0; i < nb; i++)
+			ret |= __put_user_inatomic(data.v[start + i],
+						   SWIZ_PTR(p++));
+
 		if (unlikely(ret))
 			return -EFAULT;
 	} else if (flags & F)
-- 
1.8.1.2

^ permalink raw reply related	[flat|nested] 76+ messages in thread

* [PATCH 53/63] powerpc: Handle VSX alignment faults in little endian mode
  2013-08-06 16:01 [PATCH 00/63] 64bit PowerPC little endian support Anton Blanchard
                   ` (51 preceding siblings ...)
  2013-08-06 16:02 ` [PATCH 52/63] powerpc: Add little endian support to alignment handler Anton Blanchard
@ 2013-08-06 16:02 ` Anton Blanchard
  2013-08-06 16:02 ` [PATCH 54/63] ibmveth: Fix little endian issues Anton Blanchard
                   ` (9 subsequent siblings)
  62 siblings, 0 replies; 76+ messages in thread
From: Anton Blanchard @ 2013-08-06 16:02 UTC (permalink / raw)
  To: Benjamin Herrenschmidt, Paul Mackerras; +Cc: linuxppc-dev

Things are complicated by the fact that VSX elements are big
endian ordered even in little endian mode, and that 8 byte loads
and stores also write to the top 8 bytes of the register.
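
For illustration, a stand-alone sketch of the VSX_HI/VSX_LO
indexing added below: the architecturally high doubleword of the
16 byte register image sits at a different array index depending
on the host byte order, because an earlier patch in this series
doubleword swaps the VSX save area on little endian. This is
illustrative C, not the kernel code:

#include <stdint.h>
#include <stdio.h>

#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
#define VSX_HI 0	/* first doubleword of the save area */
#define VSX_LO 1
#else
#define VSX_HI 1	/* save area is doubleword swapped on LE */
#define VSX_LO 0
#endif

/* Emulate the tail of an 8 byte VSX load: the value lands in the
 * architecturally high doubleword and the low doubleword is
 * zeroed, as emulate_vsx() does. */
static void finish_8byte_load(uint64_t *lptr, uint64_t val)
{
	lptr[VSX_HI] = val;
	lptr[VSX_LO] = 0;
}

int main(void)
{
	uint64_t vsr[2];

	finish_8byte_load(vsr, 0x1122334455667788ULL);
	printf("dw0=%016llx dw1=%016llx\n",
	       (unsigned long long)vsr[0], (unsigned long long)vsr[1]);
	return 0;
}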

Signed-off-by: Anton Blanchard <anton@samba.org>
---
 arch/powerpc/kernel/align.c | 41 +++++++++++++++++++++++++++++++++--------
 1 file changed, 33 insertions(+), 8 deletions(-)

diff --git a/arch/powerpc/kernel/align.c b/arch/powerpc/kernel/align.c
index 0a1a8f2..033d04a 100644
--- a/arch/powerpc/kernel/align.c
+++ b/arch/powerpc/kernel/align.c
@@ -630,7 +630,7 @@ static int emulate_spe(struct pt_regs *regs, unsigned int reg,
 }
 #endif /* CONFIG_SPE */
 
-#if defined(CONFIG_VSX) && defined(__BIG_ENDIAN__)
+#ifdef CONFIG_VSX
 /*
  * Emulate VSX instructions...
  */
@@ -654,8 +654,25 @@ static int emulate_vsx(unsigned char __user *addr, unsigned int reg,
 
 	lptr = (unsigned long *) ptr;
 
+#ifdef __LITTLE_ENDIAN__
+	if (flags & SW) {
+		elsize = length;
+		sw = length-1;
+	} else {
+		/*
+		 * The elements are BE ordered, even in LE mode, so process
+		 * them in reverse order.
+		 */
+		addr += length - elsize;
+
+		/* 8 byte memory accesses go in the top 8 bytes of the VR */
+		if (length == 8)
+			ptr += 8;
+	}
+#else
 	if (flags & SW)
 		sw = elsize-1;
+#endif
 
 	for (j = 0; j < length; j += elsize) {
 		for (i = 0; i < elsize; ++i) {
@@ -665,19 +682,31 @@ static int emulate_vsx(unsigned char __user *addr, unsigned int reg,
 				ret |= __get_user(ptr[i^sw], addr + i);
 		}
 		ptr  += elsize;
+#ifdef __LITTLE_ENDIAN__
+		addr -= elsize;
+#else
 		addr += elsize;
+#endif
 	}
 
+#ifdef __BIG_ENDIAN__
+#define VSX_HI 0
+#define VSX_LO 1
+#else
+#define VSX_HI 1
+#define VSX_LO 0
+#endif
+
 	if (!ret) {
 		if (flags & U)
 			regs->gpr[areg] = regs->dar;
 
 		/* Splat load copies the same data to top and bottom 8 bytes */
 		if (flags & SPLT)
-			lptr[1] = lptr[0];
-		/* For 8 byte loads, zero the top 8 bytes */
+			lptr[VSX_LO] = lptr[VSX_HI];
+		/* For 8 byte loads, zero the low 8 bytes */
 		else if (!(flags & ST) && (8 == length))
-			lptr[1] = 0;
+			lptr[VSX_LO] = 0;
 	} else
 		return -EFAULT;
 
@@ -801,7 +830,6 @@ int fix_alignment(struct pt_regs *regs)
 	/* DAR has the operand effective address */
 	addr = (unsigned char __user *)regs->dar;
 
-#ifdef __BIG_ENDIAN__
 #ifdef CONFIG_VSX
 	if ((instruction & 0xfc00003e) == 0x7c000018) {
 		unsigned int elsize;
@@ -836,9 +864,6 @@ int fix_alignment(struct pt_regs *regs)
 		return emulate_vsx(addr, reg, areg, regs, flags, nb, elsize);
 	}
 #endif
-#else
-	return -EFAULT;
-#endif
 	/* A size of 0 indicates an instruction we don't support, with
 	 * the exception of DCBZ which is handled as a special case here
 	 */
-- 
1.8.1.2

^ permalink raw reply related	[flat|nested] 76+ messages in thread

* [PATCH 54/63] ibmveth: Fix little endian issues
  2013-08-06 16:01 [PATCH 00/63] 64bit PowerPC little endian support Anton Blanchard
                   ` (52 preceding siblings ...)
  2013-08-06 16:02 ` [PATCH 53/63] powerpc: Handle VSX alignment faults in little endian mode Anton Blanchard
@ 2013-08-06 16:02 ` Anton Blanchard
  2013-08-06 16:02 ` [PATCH 55/63] ibmvscsi: " Anton Blanchard
                   ` (8 subsequent siblings)
  62 siblings, 0 replies; 76+ messages in thread
From: Anton Blanchard @ 2013-08-06 16:02 UTC (permalink / raw)
  To: Benjamin Herrenschmidt, Paul Mackerras; +Cc: linuxppc-dev

The hypervisor is big endian, so little endian kernel builds need
to byteswap.
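
The pattern throughout is to keep fields shared with the
hypervisor in their big endian type (__be32) and only convert at
the point of use, which also lets sparse flag any direct access.
A rough user space analogue using glibc's <endian.h> (illustrative
only, not the driver code; be32toh()/htobe32() stand in for
be32_to_cpu()/cpu_to_be32()):

#include <endian.h>
#include <stdint.h>
#include <stdio.h>

/* Receive queue entry as the (big endian) hypervisor writes it.
 * Fields stay in wire order and are only converted when read,
 * mirroring the __be32 + be32_to_cpu() pattern in the driver. */
struct rxq_entry {
	uint32_t flags_off;	/* big endian on the wire */
	uint32_t length;	/* big endian on the wire */
	uint64_t correlator;	/* owned by the OS, never swapped */
};

static uint32_t rxq_frame_length(const struct rxq_entry *e)
{
	return be32toh(e->length);
}

int main(void)
{
	struct rxq_entry e = { .length = htobe32(1514) };

	printf("frame length: %u\n", (unsigned)rxq_frame_length(&e));
	return 0;
}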

Signed-off-by: Anton Blanchard <anton@samba.org>
---
 drivers/net/ethernet/ibm/ibmveth.c |  4 ++--
 drivers/net/ethernet/ibm/ibmveth.h | 19 ++++++++++++++++---
 2 files changed, 18 insertions(+), 5 deletions(-)

diff --git a/drivers/net/ethernet/ibm/ibmveth.c b/drivers/net/ethernet/ibm/ibmveth.c
index 70fd559..5d41aee 100644
--- a/drivers/net/ethernet/ibm/ibmveth.c
+++ b/drivers/net/ethernet/ibm/ibmveth.c
@@ -106,7 +106,7 @@ struct ibmveth_stat ibmveth_stats[] = {
 /* simple methods of getting data from the current rxq entry */
 static inline u32 ibmveth_rxq_flags(struct ibmveth_adapter *adapter)
 {
-	return adapter->rx_queue.queue_addr[adapter->rx_queue.index].flags_off;
+	return be32_to_cpu(adapter->rx_queue.queue_addr[adapter->rx_queue.index].flags_off);
 }
 
 static inline int ibmveth_rxq_toggle(struct ibmveth_adapter *adapter)
@@ -132,7 +132,7 @@ static inline int ibmveth_rxq_frame_offset(struct ibmveth_adapter *adapter)
 
 static inline int ibmveth_rxq_frame_length(struct ibmveth_adapter *adapter)
 {
-	return adapter->rx_queue.queue_addr[adapter->rx_queue.index].length;
+	return be32_to_cpu(adapter->rx_queue.queue_addr[adapter->rx_queue.index].length);
 }
 
 static inline int ibmveth_rxq_csum_good(struct ibmveth_adapter *adapter)
diff --git a/drivers/net/ethernet/ibm/ibmveth.h b/drivers/net/ethernet/ibm/ibmveth.h
index 43a794f..84066ba 100644
--- a/drivers/net/ethernet/ibm/ibmveth.h
+++ b/drivers/net/ethernet/ibm/ibmveth.h
@@ -164,14 +164,26 @@ struct ibmveth_adapter {
     u64 tx_send_failed;
 };
 
+/*
+ * We pass struct ibmveth_buf_desc_fields to the hypervisor in registers,
+ * so we don't need to byteswap the two elements. However since we use
+ * a union (ibmveth_buf_desc) to convert from the struct to a u64 we
+ * do end up with endian specific ordering of the elements and that
+ * needs correcting.
+ */
 struct ibmveth_buf_desc_fields {
+#ifdef __BIG_ENDIAN
+	u32 flags_len;
+	u32 address;
+#else
+	u32 address;
 	u32 flags_len;
+#endif
 #define IBMVETH_BUF_VALID	0x80000000
 #define IBMVETH_BUF_TOGGLE	0x40000000
 #define IBMVETH_BUF_NO_CSUM	0x02000000
 #define IBMVETH_BUF_CSUM_GOOD	0x01000000
 #define IBMVETH_BUF_LEN_MASK	0x00FFFFFF
-	u32 address;
 };
 
 union ibmveth_buf_desc {
@@ -180,7 +192,7 @@ union ibmveth_buf_desc {
 };
 
 struct ibmveth_rx_q_entry {
-	u32 flags_off;
+	__be32 flags_off;
 #define IBMVETH_RXQ_TOGGLE		0x80000000
 #define IBMVETH_RXQ_TOGGLE_SHIFT	31
 #define IBMVETH_RXQ_VALID		0x40000000
@@ -188,7 +200,8 @@ struct ibmveth_rx_q_entry {
 #define IBMVETH_RXQ_CSUM_GOOD		0x01000000
 #define IBMVETH_RXQ_OFF_MASK		0x0000FFFF
 
-	u32 length;
+	__be32 length;
+	/* correlator is only used by the OS, no need to byte swap */
 	u64 correlator;
 };
 
-- 
1.8.1.2

^ permalink raw reply related	[flat|nested] 76+ messages in thread

* [PATCH 55/63] ibmvscsi: Fix little endian issues
  2013-08-06 16:01 [PATCH 00/63] 64bit PowerPC little endian support Anton Blanchard
                   ` (53 preceding siblings ...)
  2013-08-06 16:02 ` [PATCH 54/63] ibmveth: Fix little endian issues Anton Blanchard
@ 2013-08-06 16:02 ` Anton Blanchard
  2013-08-06 16:02 ` [PATCH 56/63] [SCSI] lpfc: Don't force CONFIG_GENERIC_CSUM on Anton Blanchard
                   ` (7 subsequent siblings)
  62 siblings, 0 replies; 76+ messages in thread
From: Anton Blanchard @ 2013-08-06 16:02 UTC (permalink / raw)
  To: Benjamin Herrenschmidt, Paul Mackerras; +Cc: linuxppc-dev

The hypervisor is big endian, so little endian kernel builds need
to byteswap.
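
The trickiest spot is the CRQ: it is laid out in memory in big
endian format but handed to the hypervisor as two 64 bit register
values, so a little endian kernel has to byteswap the two
doublewords when loading them. A stand-alone sketch of just that
step (illustrative only; the field names are simplified, the
send_crq() output is a placeholder, and be64toh()/htobe16() come
from glibc's <endian.h>):

#include <endian.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* 16 byte CRQ image, stored in big endian layout in memory */
struct crq {
	uint8_t  valid;
	uint8_t  format;
	uint8_t  reserved;
	uint8_t  status;
	uint16_t timeout;	/* big endian in memory */
	uint16_t iu_length;	/* big endian in memory */
	uint64_t iu_data_ptr;	/* big endian in memory */
};

int main(void)
{
	struct crq crq = { .valid = 0x80, .timeout = htobe16(60) };
	uint64_t reg[2];

	/* load the image as the two register values the hcall takes;
	 * be64toh() is a no-op on big endian and a swap on little
	 * endian, matching the be64_to_cpu() calls in the driver */
	memcpy(reg, &crq, sizeof(reg));
	reg[0] = be64toh(reg[0]);
	reg[1] = be64toh(reg[1]);

	printf("send_crq(0x%016llx, 0x%016llx)\n",
	       (unsigned long long)reg[0], (unsigned long long)reg[1]);
	return 0;
}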

Signed-off-by: Anton Blanchard <anton@samba.org>
---
 drivers/scsi/ibmvscsi/ibmvscsi.c | 153 ++++++++++++++++++++++-----------------
 drivers/scsi/ibmvscsi/viosrp.h   |  46 ++++++------
 2 files changed, 108 insertions(+), 91 deletions(-)

diff --git a/drivers/scsi/ibmvscsi/ibmvscsi.c b/drivers/scsi/ibmvscsi/ibmvscsi.c
index d0fa4b6..62bdbc9 100644
--- a/drivers/scsi/ibmvscsi/ibmvscsi.c
+++ b/drivers/scsi/ibmvscsi/ibmvscsi.c
@@ -241,7 +241,7 @@ static void gather_partition_info(void)
 	struct device_node *rootdn;
 
 	const char *ppartition_name;
-	const unsigned int *p_number_ptr;
+	const __be32 *p_number_ptr;
 
 	/* Retrieve information about this partition */
 	rootdn = of_find_node_by_path("/");
@@ -255,7 +255,7 @@ static void gather_partition_info(void)
 				sizeof(partition_name));
 	p_number_ptr = of_get_property(rootdn, "ibm,partition-no", NULL);
 	if (p_number_ptr)
-		partition_number = *p_number_ptr;
+		partition_number = of_read_number(p_number_ptr, 1);
 	of_node_put(rootdn);
 }
 
@@ -270,10 +270,11 @@ static void set_adapter_info(struct ibmvscsi_host_data *hostdata)
 	strncpy(hostdata->madapter_info.partition_name, partition_name,
 			sizeof(hostdata->madapter_info.partition_name));
 
-	hostdata->madapter_info.partition_number = partition_number;
+	hostdata->madapter_info.partition_number =
+					cpu_to_be32(partition_number);
 
-	hostdata->madapter_info.mad_version = 1;
-	hostdata->madapter_info.os_type = 2;
+	hostdata->madapter_info.mad_version = cpu_to_be32(1);
+	hostdata->madapter_info.os_type = cpu_to_be32(2);
 }
 
 /**
@@ -464,9 +465,9 @@ static int initialize_event_pool(struct event_pool *pool,
 		memset(&evt->crq, 0x00, sizeof(evt->crq));
 		atomic_set(&evt->free, 1);
 		evt->crq.valid = 0x80;
-		evt->crq.IU_length = sizeof(*evt->xfer_iu);
-		evt->crq.IU_data_ptr = pool->iu_token + 
-			sizeof(*evt->xfer_iu) * i;
+		evt->crq.IU_length = cpu_to_be16(sizeof(*evt->xfer_iu));
+		evt->crq.IU_data_ptr = cpu_to_be64(pool->iu_token + 
+			sizeof(*evt->xfer_iu) * i);
 		evt->xfer_iu = pool->iu_storage + i;
 		evt->hostdata = hostdata;
 		evt->ext_list = NULL;
@@ -588,7 +589,7 @@ static void init_event_struct(struct srp_event_struct *evt_struct,
 	evt_struct->cmnd_done = NULL;
 	evt_struct->sync_srp = NULL;
 	evt_struct->crq.format = format;
-	evt_struct->crq.timeout = timeout;
+	evt_struct->crq.timeout = cpu_to_be16(timeout);
 	evt_struct->done = done;
 }
 
@@ -659,8 +660,8 @@ static int map_sg_list(struct scsi_cmnd *cmd, int nseg,
 
 	scsi_for_each_sg(cmd, sg, nseg, i) {
 		struct srp_direct_buf *descr = md + i;
-		descr->va = sg_dma_address(sg);
-		descr->len = sg_dma_len(sg);
+		descr->va = cpu_to_be64(sg_dma_address(sg));
+		descr->len = cpu_to_be32(sg_dma_len(sg));
 		descr->key = 0;
 		total_length += sg_dma_len(sg);
  	}
@@ -703,13 +704,14 @@ static int map_sg_data(struct scsi_cmnd *cmd,
 	}
 
 	indirect->table_desc.va = 0;
-	indirect->table_desc.len = sg_mapped * sizeof(struct srp_direct_buf);
+	indirect->table_desc.len = cpu_to_be32(sg_mapped *
+					       sizeof(struct srp_direct_buf));
 	indirect->table_desc.key = 0;
 
 	if (sg_mapped <= MAX_INDIRECT_BUFS) {
 		total_length = map_sg_list(cmd, sg_mapped,
 					   &indirect->desc_list[0]);
-		indirect->len = total_length;
+		indirect->len = cpu_to_be32(total_length);
 		return 1;
 	}
 
@@ -731,9 +733,10 @@ static int map_sg_data(struct scsi_cmnd *cmd,
 
 	total_length = map_sg_list(cmd, sg_mapped, evt_struct->ext_list);
 
-	indirect->len = total_length;
-	indirect->table_desc.va = evt_struct->ext_list_token;
-	indirect->table_desc.len = sg_mapped * sizeof(indirect->desc_list[0]);
+	indirect->len = cpu_to_be32(total_length);
+	indirect->table_desc.va = cpu_to_be64(evt_struct->ext_list_token);
+	indirect->table_desc.len = cpu_to_be32(sg_mapped *
+					       sizeof(indirect->desc_list[0]));
 	memcpy(indirect->desc_list, evt_struct->ext_list,
 	       MAX_INDIRECT_BUFS * sizeof(struct srp_direct_buf));
  	return 1;
@@ -849,7 +852,7 @@ static int ibmvscsi_send_srp_event(struct srp_event_struct *evt_struct,
 				   struct ibmvscsi_host_data *hostdata,
 				   unsigned long timeout)
 {
-	u64 *crq_as_u64 = (u64 *) &evt_struct->crq;
+	__be64 *crq_as_u64 = (__be64 *)&evt_struct->crq;
 	int request_status = 0;
 	int rc;
 	int srp_req = 0;
@@ -920,8 +923,8 @@ static int ibmvscsi_send_srp_event(struct srp_event_struct *evt_struct,
 		add_timer(&evt_struct->timer);
 	}
 
-	if ((rc =
-	     ibmvscsi_send_crq(hostdata, crq_as_u64[0], crq_as_u64[1])) != 0) {
+	if ((rc = ibmvscsi_send_crq(hostdata, be64_to_cpu(crq_as_u64[0]),
+				    be64_to_cpu(crq_as_u64[1]))) != 0) {
 		list_del(&evt_struct->list);
 		del_timer(&evt_struct->timer);
 
@@ -987,15 +990,16 @@ static void handle_cmd_rsp(struct srp_event_struct *evt_struct)
 		if (((cmnd->result >> 1) & 0x1f) == CHECK_CONDITION)
 			memcpy(cmnd->sense_buffer,
 			       rsp->data,
-			       rsp->sense_data_len);
+			       be32_to_cpu(rsp->sense_data_len));
 		unmap_cmd_data(&evt_struct->iu.srp.cmd, 
 			       evt_struct, 
 			       evt_struct->hostdata->dev);
 
 		if (rsp->flags & SRP_RSP_FLAG_DOOVER)
-			scsi_set_resid(cmnd, rsp->data_out_res_cnt);
+			scsi_set_resid(cmnd,
+				       be32_to_cpu(rsp->data_out_res_cnt));
 		else if (rsp->flags & SRP_RSP_FLAG_DIOVER)
-			scsi_set_resid(cmnd, rsp->data_in_res_cnt);
+			scsi_set_resid(cmnd, be32_to_cpu(rsp->data_in_res_cnt));
 	}
 
 	if (evt_struct->cmnd_done)
@@ -1037,7 +1041,7 @@ static int ibmvscsi_queuecommand_lck(struct scsi_cmnd *cmnd,
 	memset(srp_cmd, 0x00, SRP_MAX_IU_LEN);
 	srp_cmd->opcode = SRP_CMD;
 	memcpy(srp_cmd->cdb, cmnd->cmnd, sizeof(srp_cmd->cdb));
-	srp_cmd->lun = ((u64) lun) << 48;
+	srp_cmd->lun = cpu_to_be64(((u64)lun) << 48);
 
 	if (!map_data_for_srp_cmd(cmnd, evt_struct, srp_cmd, hostdata->dev)) {
 		if (!firmware_has_feature(FW_FEATURE_CMO))
@@ -1062,9 +1066,10 @@ static int ibmvscsi_queuecommand_lck(struct scsi_cmnd *cmnd,
 	if ((in_fmt == SRP_DATA_DESC_INDIRECT ||
 	     out_fmt == SRP_DATA_DESC_INDIRECT) &&
 	    indirect->table_desc.va == 0) {
-		indirect->table_desc.va = evt_struct->crq.IU_data_ptr +
+		indirect->table_desc.va =
+			cpu_to_be64(be64_to_cpu(evt_struct->crq.IU_data_ptr) +
 			offsetof(struct srp_cmd, add_data) +
-			offsetof(struct srp_indirect_buf, desc_list);
+			offsetof(struct srp_indirect_buf, desc_list));
 	}
 
 	return ibmvscsi_send_srp_event(evt_struct, hostdata, 0);
@@ -1158,7 +1163,7 @@ static void login_rsp(struct srp_event_struct *evt_struct)
 	 * request_limit could have been set to -1 by this client.
 	 */
 	atomic_set(&hostdata->request_limit,
-		   evt_struct->xfer_iu->srp.login_rsp.req_lim_delta);
+		   be32_to_cpu(evt_struct->xfer_iu->srp.login_rsp.req_lim_delta));
 
 	/* If we had any pending I/Os, kick them */
 	scsi_unblock_requests(hostdata->host);
@@ -1184,8 +1189,9 @@ static int send_srp_login(struct ibmvscsi_host_data *hostdata)
 	login = &evt_struct->iu.srp.login_req;
 	memset(login, 0, sizeof(*login));
 	login->opcode = SRP_LOGIN_REQ;
-	login->req_it_iu_len = sizeof(union srp_iu);
-	login->req_buf_fmt = SRP_BUF_FORMAT_DIRECT | SRP_BUF_FORMAT_INDIRECT;
+	login->req_it_iu_len = cpu_to_be32(sizeof(union srp_iu));
+	login->req_buf_fmt = cpu_to_be16(SRP_BUF_FORMAT_DIRECT |
+					 SRP_BUF_FORMAT_INDIRECT);
 
 	spin_lock_irqsave(hostdata->host->host_lock, flags);
 	/* Start out with a request limit of 0, since this is negotiated in
@@ -1214,12 +1220,13 @@ static void capabilities_rsp(struct srp_event_struct *evt_struct)
 		dev_err(hostdata->dev, "error 0x%X getting capabilities info\n",
 			evt_struct->xfer_iu->mad.capabilities.common.status);
 	} else {
-		if (hostdata->caps.migration.common.server_support != SERVER_SUPPORTS_CAP)
+		if (hostdata->caps.migration.common.server_support !=
+		    cpu_to_be16(SERVER_SUPPORTS_CAP))
 			dev_info(hostdata->dev, "Partition migration not supported\n");
 
 		if (client_reserve) {
 			if (hostdata->caps.reserve.common.server_support ==
-			    SERVER_SUPPORTS_CAP)
+			    cpu_to_be16(SERVER_SUPPORTS_CAP))
 				dev_info(hostdata->dev, "Client reserve enabled\n");
 			else
 				dev_info(hostdata->dev, "Client reserve not supported\n");
@@ -1251,9 +1258,9 @@ static void send_mad_capabilities(struct ibmvscsi_host_data *hostdata)
 	req = &evt_struct->iu.mad.capabilities;
 	memset(req, 0, sizeof(*req));
 
-	hostdata->caps.flags = CAP_LIST_SUPPORTED;
+	hostdata->caps.flags = cpu_to_be32(CAP_LIST_SUPPORTED);
 	if (hostdata->client_migrated)
-		hostdata->caps.flags |= CLIENT_MIGRATED;
+		hostdata->caps.flags |= cpu_to_be32(CLIENT_MIGRATED);
 
 	strncpy(hostdata->caps.name, dev_name(&hostdata->host->shost_gendev),
 		sizeof(hostdata->caps.name));
@@ -1264,22 +1271,31 @@ static void send_mad_capabilities(struct ibmvscsi_host_data *hostdata)
 	strncpy(hostdata->caps.loc, location, sizeof(hostdata->caps.loc));
 	hostdata->caps.loc[sizeof(hostdata->caps.loc) - 1] = '\0';
 
-	req->common.type = VIOSRP_CAPABILITIES_TYPE;
-	req->buffer = hostdata->caps_addr;
+	req->common.type = cpu_to_be32(VIOSRP_CAPABILITIES_TYPE);
+	req->buffer = cpu_to_be64(hostdata->caps_addr);
 
-	hostdata->caps.migration.common.cap_type = MIGRATION_CAPABILITIES;
-	hostdata->caps.migration.common.length = sizeof(hostdata->caps.migration);
-	hostdata->caps.migration.common.server_support = SERVER_SUPPORTS_CAP;
-	hostdata->caps.migration.ecl = 1;
+	hostdata->caps.migration.common.cap_type =
+				cpu_to_be32(MIGRATION_CAPABILITIES);
+	hostdata->caps.migration.common.length =
+				cpu_to_be16(sizeof(hostdata->caps.migration));
+	hostdata->caps.migration.common.server_support =
+				cpu_to_be16(SERVER_SUPPORTS_CAP);
+	hostdata->caps.migration.ecl = cpu_to_be32(1);
 
 	if (client_reserve) {
-		hostdata->caps.reserve.common.cap_type = RESERVATION_CAPABILITIES;
-		hostdata->caps.reserve.common.length = sizeof(hostdata->caps.reserve);
-		hostdata->caps.reserve.common.server_support = SERVER_SUPPORTS_CAP;
-		hostdata->caps.reserve.type = CLIENT_RESERVE_SCSI_2;
-		req->common.length = sizeof(hostdata->caps);
+		hostdata->caps.reserve.common.cap_type =
+					cpu_to_be32(RESERVATION_CAPABILITIES);
+		hostdata->caps.reserve.common.length =
+				cpu_to_be16(sizeof(hostdata->caps.reserve));
+		hostdata->caps.reserve.common.server_support =
+				cpu_to_be16(SERVER_SUPPORTS_CAP);
+		hostdata->caps.reserve.type =
+				cpu_to_be32(CLIENT_RESERVE_SCSI_2);
+		req->common.length =
+				cpu_to_be16(sizeof(hostdata->caps));
 	} else
-		req->common.length = sizeof(hostdata->caps) - sizeof(hostdata->caps.reserve);
+		req->common.length = cpu_to_be16(sizeof(hostdata->caps) -
+						sizeof(hostdata->caps.reserve));
 
 	spin_lock_irqsave(hostdata->host->host_lock, flags);
 	if (ibmvscsi_send_srp_event(evt_struct, hostdata, info_timeout * 2))
@@ -1297,7 +1313,7 @@ static void send_mad_capabilities(struct ibmvscsi_host_data *hostdata)
 static void fast_fail_rsp(struct srp_event_struct *evt_struct)
 {
 	struct ibmvscsi_host_data *hostdata = evt_struct->hostdata;
-	u8 status = evt_struct->xfer_iu->mad.fast_fail.common.status;
+	u16 status = be16_to_cpu(evt_struct->xfer_iu->mad.fast_fail.common.status);
 
 	if (status == VIOSRP_MAD_NOT_SUPPORTED)
 		dev_err(hostdata->dev, "fast_fail not supported in server\n");
@@ -1334,8 +1350,8 @@ static int enable_fast_fail(struct ibmvscsi_host_data *hostdata)
 
 	fast_fail_mad = &evt_struct->iu.mad.fast_fail;
 	memset(fast_fail_mad, 0, sizeof(*fast_fail_mad));
-	fast_fail_mad->common.type = VIOSRP_ENABLE_FAST_FAIL;
-	fast_fail_mad->common.length = sizeof(*fast_fail_mad);
+	fast_fail_mad->common.type = cpu_to_be32(VIOSRP_ENABLE_FAST_FAIL);
+	fast_fail_mad->common.length = cpu_to_be16(sizeof(*fast_fail_mad));
 
 	spin_lock_irqsave(hostdata->host->host_lock, flags);
 	rc = ibmvscsi_send_srp_event(evt_struct, hostdata, info_timeout * 2);
@@ -1362,15 +1378,15 @@ static void adapter_info_rsp(struct srp_event_struct *evt_struct)
 			 "host partition %s (%d), OS %d, max io %u\n",
 			 hostdata->madapter_info.srp_version,
 			 hostdata->madapter_info.partition_name,
-			 hostdata->madapter_info.partition_number,
-			 hostdata->madapter_info.os_type,
-			 hostdata->madapter_info.port_max_txu[0]);
+			 be32_to_cpu(hostdata->madapter_info.partition_number),
+			 be32_to_cpu(hostdata->madapter_info.os_type),
+			 be32_to_cpu(hostdata->madapter_info.port_max_txu[0]));
 		
 		if (hostdata->madapter_info.port_max_txu[0]) 
 			hostdata->host->max_sectors = 
-				hostdata->madapter_info.port_max_txu[0] >> 9;
+				be32_to_cpu(hostdata->madapter_info.port_max_txu[0]) >> 9;
 		
-		if (hostdata->madapter_info.os_type == 3 &&
+		if (be32_to_cpu(hostdata->madapter_info.os_type) == 3 &&
 		    strcmp(hostdata->madapter_info.srp_version, "1.6a") <= 0) {
 			dev_err(hostdata->dev, "host (Ver. %s) doesn't support large transfers\n",
 				hostdata->madapter_info.srp_version);
@@ -1379,7 +1395,7 @@ static void adapter_info_rsp(struct srp_event_struct *evt_struct)
 			hostdata->host->sg_tablesize = MAX_INDIRECT_BUFS;
 		}
 
-		if (hostdata->madapter_info.os_type == 3) {
+		if (be32_to_cpu(hostdata->madapter_info.os_type) == 3) {
 			enable_fast_fail(hostdata);
 			return;
 		}
@@ -1414,9 +1430,9 @@ static void send_mad_adapter_info(struct ibmvscsi_host_data *hostdata)
 	req = &evt_struct->iu.mad.adapter_info;
 	memset(req, 0x00, sizeof(*req));
 	
-	req->common.type = VIOSRP_ADAPTER_INFO_TYPE;
-	req->common.length = sizeof(hostdata->madapter_info);
-	req->buffer = hostdata->adapter_info_addr;
+	req->common.type = cpu_to_be32(VIOSRP_ADAPTER_INFO_TYPE);
+	req->common.length = cpu_to_be16(sizeof(hostdata->madapter_info));
+	req->buffer = cpu_to_be64(hostdata->adapter_info_addr);
 
 	spin_lock_irqsave(hostdata->host->host_lock, flags);
 	if (ibmvscsi_send_srp_event(evt_struct, hostdata, info_timeout * 2))
@@ -1501,7 +1517,7 @@ static int ibmvscsi_eh_abort_handler(struct scsi_cmnd *cmd)
 		/* Set up an abort SRP command */
 		memset(tsk_mgmt, 0x00, sizeof(*tsk_mgmt));
 		tsk_mgmt->opcode = SRP_TSK_MGMT;
-		tsk_mgmt->lun = ((u64) lun) << 48;
+		tsk_mgmt->lun = cpu_to_be64(((u64) lun) << 48);
 		tsk_mgmt->tsk_mgmt_func = SRP_TSK_ABORT_TASK;
 		tsk_mgmt->task_tag = (u64) found_evt;
 
@@ -1624,7 +1640,7 @@ static int ibmvscsi_eh_device_reset_handler(struct scsi_cmnd *cmd)
 		/* Set up a lun reset SRP command */
 		memset(tsk_mgmt, 0x00, sizeof(*tsk_mgmt));
 		tsk_mgmt->opcode = SRP_TSK_MGMT;
-		tsk_mgmt->lun = ((u64) lun) << 48;
+		tsk_mgmt->lun = cpu_to_be64(((u64) lun) << 48);
 		tsk_mgmt->tsk_mgmt_func = SRP_TSK_LUN_RESET;
 
 		evt->sync_srp = &srp_rsp;
@@ -1735,8 +1751,9 @@ static void ibmvscsi_handle_crq(struct viosrp_crq *crq,
 {
 	long rc;
 	unsigned long flags;
+	/* The hypervisor copies our tag value here so no byteswapping */
 	struct srp_event_struct *evt_struct =
-	    (struct srp_event_struct *)crq->IU_data_ptr;
+			(__force struct srp_event_struct *)crq->IU_data_ptr;
 	switch (crq->valid) {
 	case 0xC0:		/* initialization */
 		switch (crq->format) {
@@ -1792,18 +1809,18 @@ static void ibmvscsi_handle_crq(struct viosrp_crq *crq,
 	 */
 	if (!valid_event_struct(&hostdata->pool, evt_struct)) {
 		dev_err(hostdata->dev, "returned correlation_token 0x%p is invalid!\n",
-		       (void *)crq->IU_data_ptr);
+		       evt_struct);
 		return;
 	}
 
 	if (atomic_read(&evt_struct->free)) {
 		dev_err(hostdata->dev, "received duplicate correlation_token 0x%p!\n",
-			(void *)crq->IU_data_ptr);
+			evt_struct);
 		return;
 	}
 
 	if (crq->format == VIOSRP_SRP_FORMAT)
-		atomic_add(evt_struct->xfer_iu->srp.rsp.req_lim_delta,
+		atomic_add(be32_to_cpu(evt_struct->xfer_iu->srp.rsp.req_lim_delta),
 			   &hostdata->request_limit);
 
 	del_timer(&evt_struct->timer);
@@ -1856,13 +1873,11 @@ static int ibmvscsi_do_host_config(struct ibmvscsi_host_data *hostdata,
 
 	/* Set up a lun reset SRP command */
 	memset(host_config, 0x00, sizeof(*host_config));
-	host_config->common.type = VIOSRP_HOST_CONFIG_TYPE;
-	host_config->common.length = length;
-	host_config->buffer = addr = dma_map_single(hostdata->dev, buffer,
-						    length,
-						    DMA_BIDIRECTIONAL);
+	host_config->common.type = cpu_to_be32(VIOSRP_HOST_CONFIG_TYPE);
+	host_config->common.length = cpu_to_be16(length);
+	addr = dma_map_single(hostdata->dev, buffer, length, DMA_BIDIRECTIONAL);
 
-	if (dma_mapping_error(hostdata->dev, host_config->buffer)) {
+	if (dma_mapping_error(hostdata->dev, addr)) {
 		if (!firmware_has_feature(FW_FEATURE_CMO))
 			dev_err(hostdata->dev,
 			        "dma_mapping error getting host config\n");
@@ -1870,6 +1885,8 @@ static int ibmvscsi_do_host_config(struct ibmvscsi_host_data *hostdata,
 		return -1;
 	}
 
+	host_config->buffer = cpu_to_be64(addr);
+
 	init_completion(&evt_struct->comp);
 	spin_lock_irqsave(hostdata->host->host_lock, flags);
 	rc = ibmvscsi_send_srp_event(evt_struct, hostdata, info_timeout * 2);
diff --git a/drivers/scsi/ibmvscsi/viosrp.h b/drivers/scsi/ibmvscsi/viosrp.h
index 2cd735d..1162430 100644
--- a/drivers/scsi/ibmvscsi/viosrp.h
+++ b/drivers/scsi/ibmvscsi/viosrp.h
@@ -75,9 +75,9 @@ struct viosrp_crq {
 	u8 format;		/* SCSI vs out-of-band */
 	u8 reserved;
 	u8 status;		/* non-scsi failure? (e.g. DMA failure) */
-	u16 timeout;		/* in seconds */
-	u16 IU_length;		/* in bytes */
-	u64 IU_data_ptr;	/* the TCE for transferring data */
+	__be16 timeout;		/* in seconds */
+	__be16 IU_length;		/* in bytes */
+	__be64 IU_data_ptr;	/* the TCE for transferring data */
 };
 
 /* MADs are Management requests above and beyond the IUs defined in the SRP
@@ -124,10 +124,10 @@ enum viosrp_capability_flag {
  * Common MAD header
  */
 struct mad_common {
-	u32 type;
-	u16 status;
-	u16 length;
-	u64 tag;
+	__be32 type;
+	__be16 status;
+	__be16 length;
+	__be64 tag;
 };
 
 /*
@@ -139,23 +139,23 @@ struct mad_common {
  */
 struct viosrp_empty_iu {
 	struct mad_common common;
-	u64 buffer;
-	u32 port;
+	__be64 buffer;
+	__be32 port;
 };
 
 struct viosrp_error_log {
 	struct mad_common common;
-	u64 buffer;
+	__be64 buffer;
 };
 
 struct viosrp_adapter_info {
 	struct mad_common common;
-	u64 buffer;
+	__be64 buffer;
 };
 
 struct viosrp_host_config {
 	struct mad_common common;
-	u64 buffer;
+	__be64 buffer;
 };
 
 struct viosrp_fast_fail {
@@ -164,27 +164,27 @@ struct viosrp_fast_fail {
 
 struct viosrp_capabilities {
 	struct mad_common common;
-	u64 buffer;
+	__be64 buffer;
 };
 
 struct mad_capability_common {
-	u32 cap_type;
-	u16 length;
-	u16 server_support;
+	__be32 cap_type;
+	__be16 length;
+	__be16 server_support;
 };
 
 struct mad_reserve_cap {
 	struct mad_capability_common common;
-	u32 type;
+	__be32 type;
 };
 
 struct mad_migration_cap {
 	struct mad_capability_common common;
-	u32 ecl;
+	__be32 ecl;
 };
 
 struct capabilities{
-	u32 flags;
+	__be32 flags;
 	char name[SRP_MAX_LOC_LEN];
 	char loc[SRP_MAX_LOC_LEN];
 	struct mad_migration_cap migration;
@@ -208,10 +208,10 @@ union viosrp_iu {
 struct mad_adapter_info_data {
 	char srp_version[8];
 	char partition_name[96];
-	u32 partition_number;
-	u32 mad_version;
-	u32 os_type;
-	u32 port_max_txu[8];	/* per-port maximum transfer */
+	__be32 partition_number;
+	__be32 mad_version;
+	__be32 os_type;
+	__be32 port_max_txu[8];	/* per-port maximum transfer */
 };
 
 #endif
-- 
1.8.1.2

^ permalink raw reply related	[flat|nested] 76+ messages in thread

* [PATCH 56/63] [SCSI] lpfc: Don't force CONFIG_GENERIC_CSUM on
  2013-08-06 16:01 [PATCH 00/63] 64bit PowerPC little endian support Anton Blanchard
                   ` (54 preceding siblings ...)
  2013-08-06 16:02 ` [PATCH 55/63] ibmvscsi: " Anton Blanchard
@ 2013-08-06 16:02 ` Anton Blanchard
  2013-08-06 16:02 ` [PATCH 57/63] powerpc: Use generic checksum code in little endian Anton Blanchard
                   ` (6 subsequent siblings)
  62 siblings, 0 replies; 76+ messages in thread
From: Anton Blanchard @ 2013-08-06 16:02 UTC (permalink / raw)
  To: Benjamin Herrenschmidt, Paul Mackerras; +Cc: linuxppc-dev

We want ppc64 to be able to select between optimised assembly
checksum routines in big endian, and the generic lib/checksum.c
routines in little endian.

The lpfc driver is forcing CONFIG_GENERIC_CSUM on, which means
the arch Kconfig cannot choose when to enable or disable it: if
the option exists, it is always forced on.

Other users of the checksum routines (e.g. the networking code)
do not select CONFIG_GENERIC_CSUM, so I don't see why the lpfc
driver should.

Signed-off-by: Anton Blanchard <anton@samba.org>
---
 drivers/scsi/Kconfig | 1 -
 1 file changed, 1 deletion(-)

diff --git a/drivers/scsi/Kconfig b/drivers/scsi/Kconfig
index 48b2918..92ff027 100644
--- a/drivers/scsi/Kconfig
+++ b/drivers/scsi/Kconfig
@@ -1353,7 +1353,6 @@ config SCSI_LPFC
 	tristate "Emulex LightPulse Fibre Channel Support"
 	depends on PCI && SCSI
 	select SCSI_FC_ATTRS
-	select GENERIC_CSUM
 	select CRC_T10DIF
 	help
           This lpfc driver supports the Emulex LightPulse
-- 
1.8.1.2

^ permalink raw reply related	[flat|nested] 76+ messages in thread

* [PATCH 57/63] powerpc: Use generic checksum code in little endian
  2013-08-06 16:01 [PATCH 00/63] 64bit PowerPC little endian support Anton Blanchard
                   ` (55 preceding siblings ...)
  2013-08-06 16:02 ` [PATCH 56/63] [SCSI] lpfc: Don't force CONFIG_GENERIC_CSUM on Anton Blanchard
@ 2013-08-06 16:02 ` Anton Blanchard
  2013-08-06 16:02 ` [PATCH 58/63] powerpc: Use generic memcpy " Anton Blanchard
                   ` (5 subsequent siblings)
  62 siblings, 0 replies; 76+ messages in thread
From: Anton Blanchard @ 2013-08-06 16:02 UTC (permalink / raw)
  To: Benjamin Herrenschmidt, Paul Mackerras; +Cc: linuxppc-dev

We need to fix some endian issues in our checksum code. For now
just enable the generic checksum routines for little endian builds.
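
For reference, the generic routines are plain portable C; a
minimal sketch in the spirit of the RFC 1071 Internet checksum is
below, just to illustrate why no assembly is needed for
correctness on either byte order (this is not the lib/checksum.c
implementation):

#include <stdint.h>
#include <stddef.h>
#include <stdio.h>

/* 16 bit ones' complement sum over a buffer, RFC 1071 style.
 * Written byte-wise so the result is independent of host byte
 * order; purely illustrative, not the kernel implementation. */
static uint16_t csum16(const uint8_t *buf, size_t len)
{
	uint32_t sum = 0;
	size_t i;

	for (i = 0; i + 1 < len; i += 2)
		sum += (uint32_t)buf[i] << 8 | buf[i + 1];
	if (len & 1)
		sum += (uint32_t)buf[len - 1] << 8;

	while (sum >> 16)
		sum = (sum & 0xffff) + (sum >> 16);

	return (uint16_t)~sum;
}

int main(void)
{
	/* arbitrary sample data */
	const uint8_t buf[] = { 0x45, 0x00, 0x00, 0x3c, 0x1c, 0x46,
				0x40, 0x00, 0x40, 0x06, 0x00, 0x00,
				0xac, 0x10, 0x0a, 0x63, 0xac, 0x10,
				0x0a, 0x0c };

	printf("checksum: 0x%04x\n", csum16(buf, sizeof(buf)));
	return 0;
}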

Signed-off-by: Anton Blanchard <anton@samba.org>
---
 arch/powerpc/Kconfig                | 3 +++
 arch/powerpc/include/asm/checksum.h | 5 +++++
 arch/powerpc/kernel/ppc_ksyms.c     | 2 ++
 arch/powerpc/lib/Makefile           | 9 +++++++--
 4 files changed, 17 insertions(+), 2 deletions(-)

diff --git a/arch/powerpc/Kconfig b/arch/powerpc/Kconfig
index 3bf72cd..818974d 100644
--- a/arch/powerpc/Kconfig
+++ b/arch/powerpc/Kconfig
@@ -140,6 +140,9 @@ config PPC
 	select OLD_SIGACTION if PPC32
 	select HAVE_DEBUG_STACKOVERFLOW
 
+config GENERIC_CSUM
+	def_bool CPU_LITTLE_ENDIAN
+
 config EARLY_PRINTK
 	bool
 	default y
diff --git a/arch/powerpc/include/asm/checksum.h b/arch/powerpc/include/asm/checksum.h
index ce0c284..a8de989 100644
--- a/arch/powerpc/include/asm/checksum.h
+++ b/arch/powerpc/include/asm/checksum.h
@@ -14,6 +14,7 @@
  * which always checksum on 4 octet boundaries.  ihl is the number
  * of 32-bit words and is always >= 5.
  */
+#ifndef CONFIG_GENERIC_CSUM
 extern __sum16 ip_fast_csum(const void *iph, unsigned int ihl);
 
 /*
@@ -123,5 +124,9 @@ static inline __wsum csum_tcpudp_nofold(__be32 saddr, __be32 daddr,
 	return sum;
 #endif
 }
+
+#else
+#include <asm-generic/checksum.h>
+#endif
 #endif /* __KERNEL__ */
 #endif
diff --git a/arch/powerpc/kernel/ppc_ksyms.c b/arch/powerpc/kernel/ppc_ksyms.c
index c296665..0c2dd60 100644
--- a/arch/powerpc/kernel/ppc_ksyms.c
+++ b/arch/powerpc/kernel/ppc_ksyms.c
@@ -79,10 +79,12 @@ EXPORT_SYMBOL(strlen);
 EXPORT_SYMBOL(strcmp);
 EXPORT_SYMBOL(strncmp);
 
+#ifndef CONFIG_GENERIC_CSUM
 EXPORT_SYMBOL(csum_partial);
 EXPORT_SYMBOL(csum_partial_copy_generic);
 EXPORT_SYMBOL(ip_fast_csum);
 EXPORT_SYMBOL(csum_tcpudp_magic);
+#endif
 
 EXPORT_SYMBOL(__copy_tofrom_user);
 EXPORT_SYMBOL(__clear_user);
diff --git a/arch/powerpc/lib/Makefile b/arch/powerpc/lib/Makefile
index 4504332..33ab261 100644
--- a/arch/powerpc/lib/Makefile
+++ b/arch/powerpc/lib/Makefile
@@ -10,15 +10,20 @@ CFLAGS_REMOVE_code-patching.o = -pg
 CFLAGS_REMOVE_feature-fixups.o = -pg
 
 obj-y			:= string.o alloc.o \
-			   checksum_$(CONFIG_WORD_SIZE).o crtsavres.o
+			   crtsavres.o
 obj-$(CONFIG_PPC32)	+= div64.o copy_32.o
 obj-$(CONFIG_HAS_IOMEM)	+= devres.o
 
 obj-$(CONFIG_PPC64)	+= copypage_64.o copyuser_64.o \
 			   memcpy_64.o usercopy_64.o mem_64.o string.o \
-			   checksum_wrappers_64.o hweight_64.o \
+			   hweight_64.o \
 			   copyuser_power7.o string_64.o copypage_power7.o \
 			   memcpy_power7.o
+ifeq ($(CONFIG_GENERIC_CSUM),)
+obj-y			+= checksum_$(CONFIG_WORD_SIZE).o
+obj-$(CONFIG_PPC64)	+= checksum_wrappers_64.o
+endif
+
 obj-$(CONFIG_PPC_EMULATE_SSTEP)	+= sstep.o ldstfp.o
 
 ifeq ($(CONFIG_PPC64),y)
-- 
1.8.1.2

^ permalink raw reply related	[flat|nested] 76+ messages in thread

* [PATCH 58/63] powerpc: Use generic memcpy code in little endian
  2013-08-06 16:01 [PATCH 00/63] 64bit PowerPC little endian support Anton Blanchard
                   ` (56 preceding siblings ...)
  2013-08-06 16:02 ` [PATCH 57/63] powerpc: Use generic checksum code in little endian Anton Blanchard
@ 2013-08-06 16:02 ` Anton Blanchard
  2013-08-06 16:02 ` [PATCH 59/63] powerpc: uname should return ppc64le/ppcle on little endian builds Anton Blanchard
                   ` (4 subsequent siblings)
  62 siblings, 0 replies; 76+ messages in thread
From: Anton Blanchard @ 2013-08-06 16:02 UTC (permalink / raw)
  To: Benjamin Herrenschmidt, Paul Mackerras; +Cc: linuxppc-dev

We need to fix some endian issues in our memcpy code. For now
just enable the generic memcpy routine for little endian builds.

Signed-off-by: Anton Blanchard <anton@samba.org>
---
 arch/powerpc/include/asm/string.h | 4 ++++
 arch/powerpc/kernel/ppc_ksyms.c   | 2 ++
 arch/powerpc/lib/Makefile         | 9 ++++++---
 3 files changed, 12 insertions(+), 3 deletions(-)

diff --git a/arch/powerpc/include/asm/string.h b/arch/powerpc/include/asm/string.h
index e40010a..0dffad6 100644
--- a/arch/powerpc/include/asm/string.h
+++ b/arch/powerpc/include/asm/string.h
@@ -10,7 +10,9 @@
 #define __HAVE_ARCH_STRNCMP
 #define __HAVE_ARCH_STRCAT
 #define __HAVE_ARCH_MEMSET
+#ifdef __BIG_ENDIAN__
 #define __HAVE_ARCH_MEMCPY
+#endif
 #define __HAVE_ARCH_MEMMOVE
 #define __HAVE_ARCH_MEMCMP
 #define __HAVE_ARCH_MEMCHR
@@ -22,7 +24,9 @@ extern int strcmp(const char *,const char *);
 extern int strncmp(const char *, const char *, __kernel_size_t);
 extern char * strcat(char *, const char *);
 extern void * memset(void *,int,__kernel_size_t);
+#ifdef __BIG_ENDIAN__
 extern void * memcpy(void *,const void *,__kernel_size_t);
+#endif
 extern void * memmove(void *,const void *,__kernel_size_t);
 extern int memcmp(const void *,const void *,__kernel_size_t);
 extern void * memchr(const void *,int,__kernel_size_t);
diff --git a/arch/powerpc/kernel/ppc_ksyms.c b/arch/powerpc/kernel/ppc_ksyms.c
index 0c2dd60..526ad5c 100644
--- a/arch/powerpc/kernel/ppc_ksyms.c
+++ b/arch/powerpc/kernel/ppc_ksyms.c
@@ -147,7 +147,9 @@ EXPORT_SYMBOL(__ucmpdi2);
 #endif
 long long __bswapdi2(long long);
 EXPORT_SYMBOL(__bswapdi2);
+#ifdef __BIG_ENDIAN__
 EXPORT_SYMBOL(memcpy);
+#endif
 EXPORT_SYMBOL(memset);
 EXPORT_SYMBOL(memmove);
 EXPORT_SYMBOL(memcmp);
diff --git a/arch/powerpc/lib/Makefile b/arch/powerpc/lib/Makefile
index 33ab261..5310132 100644
--- a/arch/powerpc/lib/Makefile
+++ b/arch/powerpc/lib/Makefile
@@ -15,15 +15,18 @@ obj-$(CONFIG_PPC32)	+= div64.o copy_32.o
 obj-$(CONFIG_HAS_IOMEM)	+= devres.o
 
 obj-$(CONFIG_PPC64)	+= copypage_64.o copyuser_64.o \
-			   memcpy_64.o usercopy_64.o mem_64.o string.o \
+			   usercopy_64.o mem_64.o string.o \
 			   hweight_64.o \
-			   copyuser_power7.o string_64.o copypage_power7.o \
-			   memcpy_power7.o
+			   copyuser_power7.o string_64.o copypage_power7.o
 ifeq ($(CONFIG_GENERIC_CSUM),)
 obj-y			+= checksum_$(CONFIG_WORD_SIZE).o
 obj-$(CONFIG_PPC64)	+= checksum_wrappers_64.o
 endif
 
+ifeq ($(CONFIG_CPU_LITTLE_ENDIAN),)
+obj-$(CONFIG_PPC64)		+= memcpy_power7.o memcpy_64.o 
+endif
+
 obj-$(CONFIG_PPC_EMULATE_SSTEP)	+= sstep.o ldstfp.o
 
 ifeq ($(CONFIG_PPC64),y)
-- 
1.8.1.2

^ permalink raw reply related	[flat|nested] 76+ messages in thread

* [PATCH 59/63] powerpc: uname should return ppc64le/ppcle on little endian builds
  2013-08-06 16:01 [PATCH 00/63] 64bit PowerPC little endian support Anton Blanchard
                   ` (57 preceding siblings ...)
  2013-08-06 16:02 ` [PATCH 58/63] powerpc: Use generic memcpy " Anton Blanchard
@ 2013-08-06 16:02 ` Anton Blanchard
  2013-08-06 16:02 ` [PATCH 60/63] powerpc: Add ability to build little endian kernels Anton Blanchard
                   ` (3 subsequent siblings)
  62 siblings, 0 replies; 76+ messages in thread
From: Anton Blanchard @ 2013-08-06 16:02 UTC (permalink / raw)
  To: Benjamin Herrenschmidt, Paul Mackerras; +Cc: linuxppc-dev

We need to distinguish between big endian and little endian
environments, so fix uname to report ppc64le or ppcle on little
endian builds.

Signed-off-by: Anton Blanchard <anton@samba.org>
---
 arch/powerpc/Makefile | 13 +++++++++++--
 1 file changed, 11 insertions(+), 2 deletions(-)

diff --git a/arch/powerpc/Makefile b/arch/powerpc/Makefile
index 967fd23..c574a69 100644
--- a/arch/powerpc/Makefile
+++ b/arch/powerpc/Makefile
@@ -36,17 +36,26 @@ KBUILD_DEFCONFIG := ppc64_defconfig
 endif
 
 ifeq ($(CONFIG_PPC64),y)
-OLDARCH	:= ppc64
-
 new_nm := $(shell if $(NM) --help 2>&1 | grep -- '--synthetic' > /dev/null; then echo y; else echo n; fi)
 
 ifeq ($(new_nm),y)
 NM		:= $(NM) --synthetic
 endif
+endif
 
+ifeq ($(CONFIG_PPC64),y)
+ifeq ($(CONFIG_CPU_LITTLE_ENDIAN),y)
+OLDARCH	:= ppc64le
+else
+OLDARCH	:= ppc64
+endif
+else
+ifeq ($(CONFIG_CPU_LITTLE_ENDIAN),y)
+OLDARCH	:= ppcle
 else
 OLDARCH	:= ppc
 endif
+endif
 
 # It seems there are times we use this Makefile without
 # including the config file, but this replicates the old behaviour
-- 
1.8.1.2

^ permalink raw reply related	[flat|nested] 76+ messages in thread

* [PATCH 60/63] powerpc: Add ability to build little endian kernels
  2013-08-06 16:01 [PATCH 00/63] 64bit PowerPC little endian support Anton Blanchard
                   ` (58 preceding siblings ...)
  2013-08-06 16:02 ` [PATCH 59/63] powerpc: uname should return ppc64le/ppcle on little endian builds Anton Blanchard
@ 2013-08-06 16:02 ` Anton Blanchard
  2013-08-06 16:02 ` [PATCH 61/63] powerpc: Don't set HAVE_EFFICIENT_UNALIGNED_ACCESS on little endian builds Anton Blanchard
                   ` (2 subsequent siblings)
  62 siblings, 0 replies; 76+ messages in thread
From: Anton Blanchard @ 2013-08-06 16:02 UTC (permalink / raw)
  To: Benjamin Herrenschmidt, Paul Mackerras; +Cc: linuxppc-dev

From: Ian Munsie <imunsie@au1.ibm.com>

This patch allows the kbuild system to successfully compile a kernel for
the little endian PowerPC64 architecture.

To build such a kernel a supported platform must be used and
CONFIG_CPU_LITTLE_ENDIAN must be set. If cross compiling, CROSS_COMPILE
must point to a suitable toolchain (compiled for the powerpc64le-linux
and powerpcle-linux targets).
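
A quick way to sanity check which byte order a toolchain targets
is the __LITTLE_ENDIAN__/__BIG_ENDIAN__ macros the powerpc
compiler defines from -mlittle-endian/-mbig-endian; they are the
same macros the vdso linker scripts below test. A stand-alone
sketch, assuming a gcc style compiler:

#include <stdio.h>

int main(void)
{
#if defined(__LITTLE_ENDIAN__)
	puts("compiler targets little endian");
#elif defined(__BIG_ENDIAN__)
	puts("compiler targets big endian");
#else
	puts("no powerpc endian macro defined (non-powerpc compiler?)");
#endif
	return 0;
}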

Signed-off-by: Ian Munsie <imunsie@au1.ibm.com>
Signed-off-by: Anton Blanchard <anton@samba.org>
---
 arch/powerpc/Makefile                   | 24 +++++++++++++++++++++---
 arch/powerpc/boot/Makefile              |  3 ++-
 arch/powerpc/kernel/vdso32/vdso32.lds.S |  4 ++++
 arch/powerpc/kernel/vdso64/vdso64.lds.S |  4 ++++
 arch/powerpc/platforms/Kconfig.cputype  | 11 +++++++++++
 5 files changed, 42 insertions(+), 4 deletions(-)

diff --git a/arch/powerpc/Makefile b/arch/powerpc/Makefile
index c574a69..0ebad9c 100644
--- a/arch/powerpc/Makefile
+++ b/arch/powerpc/Makefile
@@ -65,11 +65,29 @@ endif
 
 UTS_MACHINE := $(OLDARCH)
 
+ifeq ($(CONFIG_CPU_LITTLE_ENDIAN),y)
+override CC	+= -mlittle-endian
+override AS	+= -mlittle-endian
+override LD	+= -EL
+override CROSS32CC += -mlittle-endian
+override CROSS32AS += -mlittle-endian
+LDEMULATION	:= lppc
+GNUTARGET	:= powerpcle
+MULTIPLEWORD	:= -mno-multiple
+else
+override CC	+= -mbig-endian
+override AS	+= -mbig-endian
+override LD	+= -EB
+LDEMULATION	:= ppc
+GNUTARGET	:= powerpc
+MULTIPLEWORD	:= -mmultiple
+endif
+
 ifeq ($(HAS_BIARCH),y)
 override AS	+= -a$(CONFIG_WORD_SIZE)
-override LD	+= -m elf$(CONFIG_WORD_SIZE)ppc
+override LD	+= -m elf$(CONFIG_WORD_SIZE)$(LDEMULATION)
 override CC	+= -m$(CONFIG_WORD_SIZE)
-override AR	:= GNUTARGET=elf$(CONFIG_WORD_SIZE)-powerpc $(AR)
+override AR	:= GNUTARGET=elf$(CONFIG_WORD_SIZE)-$(GNUTARGET) $(AR)
 endif
 
 LDFLAGS_vmlinux-y := -Bstatic
@@ -95,7 +113,7 @@ endif
 CFLAGS-$(CONFIG_PPC64)	:= -mtraceback=no -mcall-aixdesc
 CFLAGS-$(CONFIG_PPC64)	+= $(call cc-option,-mcmodel=medium,-mminimal-toc)
 CFLAGS-$(CONFIG_PPC64)	+= $(call cc-option,-mno-pointers-to-nested-functions)
-CFLAGS-$(CONFIG_PPC32)	:= -ffixed-r2 -mmultiple
+CFLAGS-$(CONFIG_PPC32)	:= -ffixed-r2 $(MULTIPLEWORD)
 
 CFLAGS-$(CONFIG_GENERIC_CPU) += $(call cc-option,-mtune=power7,-mtune=power4)
 CFLAGS-$(CONFIG_CELL_CPU) += $(call cc-option,-mcpu=cell)
diff --git a/arch/powerpc/boot/Makefile b/arch/powerpc/boot/Makefile
index 6a15c96..1e6baf0 100644
--- a/arch/powerpc/boot/Makefile
+++ b/arch/powerpc/boot/Makefile
@@ -22,7 +22,8 @@ all: $(obj)/zImage
 BOOTCFLAGS    := -Wall -Wundef -Wstrict-prototypes -Wno-trigraphs \
 		 -fno-strict-aliasing -Os -msoft-float -pipe \
 		 -fomit-frame-pointer -fno-builtin -fPIC -nostdinc \
-		 -isystem $(shell $(CROSS32CC) -print-file-name=include)
+		 -isystem $(shell $(CROSS32CC) -print-file-name=include) \
+		 -mbig-endian
 BOOTAFLAGS	:= -D__ASSEMBLY__ $(BOOTCFLAGS) -traditional -nostdinc
 
 ifdef CONFIG_DEBUG_INFO
diff --git a/arch/powerpc/kernel/vdso32/vdso32.lds.S b/arch/powerpc/kernel/vdso32/vdso32.lds.S
index f223409..e58ee10 100644
--- a/arch/powerpc/kernel/vdso32/vdso32.lds.S
+++ b/arch/powerpc/kernel/vdso32/vdso32.lds.S
@@ -4,7 +4,11 @@
  */
 #include <asm/vdso.h>
 
+#ifdef __LITTLE_ENDIAN__
+OUTPUT_FORMAT("elf32-powerpcle", "elf32-powerpcle", "elf32-powerpcle")
+#else
 OUTPUT_FORMAT("elf32-powerpc", "elf32-powerpc", "elf32-powerpc")
+#endif
 OUTPUT_ARCH(powerpc:common)
 ENTRY(_start)
 
diff --git a/arch/powerpc/kernel/vdso64/vdso64.lds.S b/arch/powerpc/kernel/vdso64/vdso64.lds.S
index e486381..64fb183 100644
--- a/arch/powerpc/kernel/vdso64/vdso64.lds.S
+++ b/arch/powerpc/kernel/vdso64/vdso64.lds.S
@@ -4,7 +4,11 @@
  */
 #include <asm/vdso.h>
 
+#ifdef __LITTLE_ENDIAN__
+OUTPUT_FORMAT("elf64-powerpcle", "elf64-powerpcle", "elf64-powerpcle")
+#else
 OUTPUT_FORMAT("elf64-powerpc", "elf64-powerpc", "elf64-powerpc")
+#endif
 OUTPUT_ARCH(powerpc:common64)
 ENTRY(_start)
 
diff --git a/arch/powerpc/platforms/Kconfig.cputype b/arch/powerpc/platforms/Kconfig.cputype
index 47d9a03..89f91ed 100644
--- a/arch/powerpc/platforms/Kconfig.cputype
+++ b/arch/powerpc/platforms/Kconfig.cputype
@@ -389,3 +389,14 @@ config PPC_DOORBELL
 	default n
 
 endmenu
+
+config CPU_LITTLE_ENDIAN
+	bool "Build little endian kernel"
+	default n
+	help
+	  This option selects whether a big endian or little endian kernel will
+	  be built.
+
+	  Note that if cross compiling a little endian kernel,
+	  CROSS_COMPILE must point to a toolchain capable of targeting
+	  little endian powerpc.
-- 
1.8.1.2

^ permalink raw reply related	[flat|nested] 76+ messages in thread

* [PATCH 61/63] powerpc: Don't set HAVE_EFFICIENT_UNALIGNED_ACCESS on little endian builds
  2013-08-06 16:01 [PATCH 00/63] 64bit PowerPC little endian support Anton Blanchard
                   ` (59 preceding siblings ...)
  2013-08-06 16:02 ` [PATCH 60/63] powerpc: Add ability to build little endian kernels Anton Blanchard
@ 2013-08-06 16:02 ` Anton Blanchard
  2013-08-06 16:02 ` [PATCH 62/63] powerpc: Work around little endian gcc bug Anton Blanchard
  2013-08-06 16:02 ` [PATCH 63/63] powerpc: Add pseries_le_defconfig Anton Blanchard
  62 siblings, 0 replies; 76+ messages in thread
From: Anton Blanchard @ 2013-08-06 16:02 UTC (permalink / raw)
  To: Benjamin Herrenschmidt, Paul Mackerras; +Cc: linuxppc-dev

POWER7 takes alignment exceptions on some unaligned addresses, so
disable HAVE_EFFICIENT_UNALIGNED_ACCESS. This fixes an early boot
issue in the printk code.
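
With HAVE_EFFICIENT_UNALIGNED_ACCESS off, generic code is expected
to go through helpers such as get_unaligned() rather than
dereference a misaligned pointer directly. A minimal user space
sketch of the memcpy based fallback idea (illustrative only;
read_u32_unaligned() is a made-up name, not a kernel API):

#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* unaligned-safe 32 bit read via memcpy; the compiler lowers this
 * to whatever the target can do without trapping */
static uint32_t read_u32_unaligned(const void *p)
{
	uint32_t v;

	memcpy(&v, p, sizeof(v));
	return v;
}

int main(void)
{
	uint8_t buf[8] = { 0, 0x11, 0x22, 0x33, 0x44, 0x55, 0x66, 0x77 };

	/* buf + 1 is deliberately misaligned for a 32 bit access */
	printf("0x%08x\n", read_u32_unaligned(buf + 1));
	return 0;
}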

Signed-off-by: Anton Blanchard <anton@samba.org>
---
 arch/powerpc/Kconfig | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/powerpc/Kconfig b/arch/powerpc/Kconfig
index 818974d..9dcc85f 100644
--- a/arch/powerpc/Kconfig
+++ b/arch/powerpc/Kconfig
@@ -97,7 +97,7 @@ config PPC
 	select VIRT_TO_BUS if !PPC64
 	select HAVE_IDE
 	select HAVE_IOREMAP_PROT
-	select HAVE_EFFICIENT_UNALIGNED_ACCESS
+	select HAVE_EFFICIENT_UNALIGNED_ACCESS if !CPU_LITTLE_ENDIAN
 	select HAVE_KPROBES
 	select HAVE_ARCH_KGDB
 	select HAVE_KRETPROBES
-- 
1.8.1.2

^ permalink raw reply related	[flat|nested] 76+ messages in thread

* [PATCH 62/63] powerpc: Work around little endian gcc bug
  2013-08-06 16:01 [PATCH 00/63] 64bit PowerPC little endian support Anton Blanchard
                   ` (60 preceding siblings ...)
  2013-08-06 16:02 ` [PATCH 61/63] powerpc: Don't set HAVE_EFFICIENT_UNALIGNED_ACCESS on little endian builds Anton Blanchard
@ 2013-08-06 16:02 ` Anton Blanchard
  2013-08-06 16:02 ` [PATCH 63/63] powerpc: Add pseries_le_defconfig Anton Blanchard
  62 siblings, 0 replies; 76+ messages in thread
From: Anton Blanchard @ 2013-08-06 16:02 UTC (permalink / raw)
  To: Benjamin Herrenschmidt, Paul Mackerras; +Cc: linuxppc-dev

Temporarily work around an internal compiler error (ICE) we are
seeing while building in little endian mode:

http://gcc.gnu.org/bugzilla/show_bug.cgi?id=57134

Signed-off-by: Anton Blanchard <anton@samba.org>
---
 arch/powerpc/Makefile | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/powerpc/Makefile b/arch/powerpc/Makefile
index 0ebad9c..ab23441 100644
--- a/arch/powerpc/Makefile
+++ b/arch/powerpc/Makefile
@@ -66,7 +66,7 @@ endif
 UTS_MACHINE := $(OLDARCH)
 
 ifeq ($(CONFIG_CPU_LITTLE_ENDIAN),y)
-override CC	+= -mlittle-endian
+override CC	+= -mlittle-endian -mno-strict-align
 override AS	+= -mlittle-endian
 override LD	+= -EL
 override CROSS32CC += -mlittle-endian
-- 
1.8.1.2

^ permalink raw reply related	[flat|nested] 76+ messages in thread

* [PATCH 63/63] powerpc: Add pseries_le_defconfig
  2013-08-06 16:01 [PATCH 00/63] 64bit PowerPC little endian support Anton Blanchard
                   ` (61 preceding siblings ...)
  2013-08-06 16:02 ` [PATCH 62/63] powerpc: Work around little endian gcc bug Anton Blanchard
@ 2013-08-06 16:02 ` Anton Blanchard
  2013-08-06 23:31   ` Michael Neuling
  2013-08-08  8:33   ` Madhavan Srinivasan
  62 siblings, 2 replies; 76+ messages in thread
From: Anton Blanchard @ 2013-08-06 16:02 UTC (permalink / raw)
  To: Benjamin Herrenschmidt, Paul Mackerras; +Cc: linuxppc-dev

This is the pseries_defconfig with CONFIG_CPU_LITTLE_ENDIAN enabled
and CONFIG_VIRTUALIZATION disabled (required until we fix some
endian issues in KVM).

Signed-off-by: Anton Blanchard <anton@samba.org>
---
 arch/powerpc/configs/pseries_le_defconfig | 347 ++++++++++++++++++++++++++++++
 1 file changed, 347 insertions(+)
 create mode 100644 arch/powerpc/configs/pseries_le_defconfig

diff --git a/arch/powerpc/configs/pseries_le_defconfig b/arch/powerpc/configs/pseries_le_defconfig
new file mode 100644
index 0000000..a30db45
--- /dev/null
+++ b/arch/powerpc/configs/pseries_le_defconfig
@@ -0,0 +1,347 @@
+CONFIG_PPC64=y
+CONFIG_ALTIVEC=y
+CONFIG_VSX=y
+CONFIG_SMP=y
+CONFIG_NR_CPUS=2048
+CONFIG_EXPERIMENTAL=y
+CONFIG_SYSVIPC=y
+CONFIG_POSIX_MQUEUE=y
+CONFIG_AUDIT=y
+CONFIG_AUDITSYSCALL=y
+CONFIG_IRQ_DOMAIN_DEBUG=y
+CONFIG_NO_HZ=y
+CONFIG_HIGH_RES_TIMERS=y
+CONFIG_TASKSTATS=y
+CONFIG_TASK_DELAY_ACCT=y
+CONFIG_TASK_XACCT=y
+CONFIG_TASK_IO_ACCOUNTING=y
+CONFIG_IKCONFIG=y
+CONFIG_IKCONFIG_PROC=y
+CONFIG_CGROUPS=y
+CONFIG_CGROUP_FREEZER=y
+CONFIG_CGROUP_DEVICE=y
+CONFIG_CPUSETS=y
+CONFIG_CGROUP_CPUACCT=y
+CONFIG_BLK_DEV_INITRD=y
+# CONFIG_COMPAT_BRK is not set
+CONFIG_PROFILING=y
+CONFIG_OPROFILE=y
+CONFIG_KPROBES=y
+CONFIG_JUMP_LABEL=y
+CONFIG_MODULES=y
+CONFIG_MODULE_UNLOAD=y
+CONFIG_MODVERSIONS=y
+CONFIG_MODULE_SRCVERSION_ALL=y
+CONFIG_PARTITION_ADVANCED=y
+CONFIG_EFI_PARTITION=y
+CONFIG_PPC_SPLPAR=y
+CONFIG_SCANLOG=m
+CONFIG_PPC_SMLPAR=y
+CONFIG_DTL=y
+# CONFIG_PPC_PMAC is not set
+CONFIG_RTAS_FLASH=m
+CONFIG_IBMEBUS=y
+CONFIG_HZ_100=y
+CONFIG_BINFMT_MISC=m
+CONFIG_PPC_TRANSACTIONAL_MEM=y
+CONFIG_HOTPLUG_CPU=y
+CONFIG_KEXEC=y
+CONFIG_IRQ_ALL_CPUS=y
+CONFIG_MEMORY_HOTPLUG=y
+CONFIG_MEMORY_HOTREMOVE=y
+CONFIG_PPC_64K_PAGES=y
+CONFIG_PPC_SUBPAGE_PROT=y
+CONFIG_SCHED_SMT=y
+CONFIG_PPC_DENORMALISATION=y
+CONFIG_HOTPLUG_PCI=m
+CONFIG_HOTPLUG_PCI_RPA=m
+CONFIG_HOTPLUG_PCI_RPA_DLPAR=m
+CONFIG_PACKET=y
+CONFIG_UNIX=y
+CONFIG_XFRM_USER=m
+CONFIG_NET_KEY=m
+CONFIG_INET=y
+CONFIG_IP_MULTICAST=y
+CONFIG_NET_IPIP=y
+CONFIG_SYN_COOKIES=y
+CONFIG_INET_AH=m
+CONFIG_INET_ESP=m
+CONFIG_INET_IPCOMP=m
+# CONFIG_IPV6 is not set
+CONFIG_NETFILTER=y
+CONFIG_NF_CONNTRACK=m
+CONFIG_NF_CONNTRACK_EVENTS=y
+CONFIG_NF_CT_PROTO_UDPLITE=m
+CONFIG_NF_CONNTRACK_FTP=m
+CONFIG_NF_CONNTRACK_IRC=m
+CONFIG_NF_CONNTRACK_TFTP=m
+CONFIG_NF_CT_NETLINK=m
+CONFIG_NETFILTER_XT_TARGET_CLASSIFY=m
+CONFIG_NETFILTER_XT_TARGET_CONNMARK=m
+CONFIG_NETFILTER_XT_TARGET_MARK=m
+CONFIG_NETFILTER_XT_TARGET_NFLOG=m
+CONFIG_NETFILTER_XT_TARGET_NFQUEUE=m
+CONFIG_NETFILTER_XT_TARGET_TCPMSS=m
+CONFIG_NETFILTER_XT_MATCH_COMMENT=m
+CONFIG_NETFILTER_XT_MATCH_CONNBYTES=m
+CONFIG_NETFILTER_XT_MATCH_CONNLIMIT=m
+CONFIG_NETFILTER_XT_MATCH_CONNMARK=m
+CONFIG_NETFILTER_XT_MATCH_CONNTRACK=m
+CONFIG_NETFILTER_XT_MATCH_DCCP=m
+CONFIG_NETFILTER_XT_MATCH_DSCP=m
+CONFIG_NETFILTER_XT_MATCH_ESP=m
+CONFIG_NETFILTER_XT_MATCH_HASHLIMIT=m
+CONFIG_NETFILTER_XT_MATCH_HELPER=m
+CONFIG_NETFILTER_XT_MATCH_IPRANGE=m
+CONFIG_NETFILTER_XT_MATCH_LENGTH=m
+CONFIG_NETFILTER_XT_MATCH_LIMIT=m
+CONFIG_NETFILTER_XT_MATCH_MAC=m
+CONFIG_NETFILTER_XT_MATCH_MARK=m
+CONFIG_NETFILTER_XT_MATCH_MULTIPORT=m
+CONFIG_NETFILTER_XT_MATCH_OWNER=m
+CONFIG_NETFILTER_XT_MATCH_POLICY=m
+CONFIG_NETFILTER_XT_MATCH_PKTTYPE=m
+CONFIG_NETFILTER_XT_MATCH_QUOTA=m
+CONFIG_NETFILTER_XT_MATCH_RATEEST=m
+CONFIG_NETFILTER_XT_MATCH_REALM=m
+CONFIG_NETFILTER_XT_MATCH_RECENT=m
+CONFIG_NETFILTER_XT_MATCH_SCTP=m
+CONFIG_NETFILTER_XT_MATCH_STATE=m
+CONFIG_NETFILTER_XT_MATCH_STATISTIC=m
+CONFIG_NETFILTER_XT_MATCH_STRING=m
+CONFIG_NETFILTER_XT_MATCH_TCPMSS=m
+CONFIG_NETFILTER_XT_MATCH_TIME=m
+CONFIG_NETFILTER_XT_MATCH_U32=m
+CONFIG_NF_CONNTRACK_IPV4=m
+CONFIG_IP_NF_QUEUE=m
+CONFIG_IP_NF_IPTABLES=m
+CONFIG_IP_NF_MATCH_AH=m
+CONFIG_IP_NF_MATCH_ECN=m
+CONFIG_IP_NF_MATCH_TTL=m
+CONFIG_IP_NF_FILTER=m
+CONFIG_IP_NF_TARGET_REJECT=m
+CONFIG_IP_NF_TARGET_ULOG=m
+CONFIG_UEVENT_HELPER_PATH="/sbin/hotplug"
+CONFIG_DEVTMPFS=y
+CONFIG_DEVTMPFS_MOUNT=y
+CONFIG_PROC_DEVICETREE=y
+CONFIG_PARPORT=m
+CONFIG_PARPORT_PC=m
+CONFIG_BLK_DEV_FD=m
+CONFIG_BLK_DEV_LOOP=y
+CONFIG_BLK_DEV_NBD=m
+CONFIG_BLK_DEV_RAM=y
+CONFIG_BLK_DEV_RAM_SIZE=65536
+CONFIG_IDE=y
+CONFIG_BLK_DEV_IDECD=y
+CONFIG_BLK_DEV_GENERIC=y
+CONFIG_BLK_DEV_AMD74XX=y
+CONFIG_BLK_DEV_SD=y
+CONFIG_CHR_DEV_ST=y
+CONFIG_BLK_DEV_SR=y
+CONFIG_BLK_DEV_SR_VENDOR=y
+CONFIG_CHR_DEV_SG=y
+CONFIG_SCSI_MULTI_LUN=y
+CONFIG_SCSI_CONSTANTS=y
+CONFIG_SCSI_FC_ATTRS=y
+CONFIG_SCSI_CXGB3_ISCSI=m
+CONFIG_SCSI_CXGB4_ISCSI=m
+CONFIG_SCSI_BNX2_ISCSI=m
+CONFIG_BE2ISCSI=m
+CONFIG_SCSI_MPT2SAS=m
+CONFIG_SCSI_IBMVSCSI=y
+CONFIG_SCSI_IBMVFC=m
+CONFIG_SCSI_SYM53C8XX_2=y
+CONFIG_SCSI_SYM53C8XX_DMA_ADDRESSING_MODE=0
+CONFIG_SCSI_IPR=y
+CONFIG_SCSI_QLA_FC=m
+CONFIG_SCSI_QLA_ISCSI=m
+CONFIG_SCSI_LPFC=m
+CONFIG_ATA=y
+# CONFIG_ATA_SFF is not set
+CONFIG_MD=y
+CONFIG_BLK_DEV_MD=y
+CONFIG_MD_LINEAR=y
+CONFIG_MD_RAID0=y
+CONFIG_MD_RAID1=y
+CONFIG_MD_RAID10=m
+CONFIG_MD_RAID456=m
+CONFIG_MD_MULTIPATH=m
+CONFIG_MD_FAULTY=m
+CONFIG_BLK_DEV_DM=y
+CONFIG_DM_CRYPT=m
+CONFIG_DM_SNAPSHOT=m
+CONFIG_DM_MIRROR=m
+CONFIG_DM_ZERO=m
+CONFIG_DM_MULTIPATH=m
+CONFIG_BONDING=m
+CONFIG_DUMMY=m
+CONFIG_NETCONSOLE=y
+CONFIG_NETPOLL_TRAP=y
+CONFIG_TUN=m
+CONFIG_VORTEX=y
+CONFIG_ACENIC=m
+CONFIG_ACENIC_OMIT_TIGON_I=y
+CONFIG_PCNET32=y
+CONFIG_TIGON3=y
+CONFIG_CHELSIO_T1=m
+CONFIG_BE2NET=m
+CONFIG_S2IO=m
+CONFIG_IBMVETH=y
+CONFIG_EHEA=y
+CONFIG_E100=y
+CONFIG_E1000=y
+CONFIG_E1000E=y
+CONFIG_IXGB=m
+CONFIG_IXGBE=m
+CONFIG_MLX4_EN=m
+CONFIG_MYRI10GE=m
+CONFIG_QLGE=m
+CONFIG_NETXEN_NIC=m
+CONFIG_PPP=m
+CONFIG_PPP_BSDCOMP=m
+CONFIG_PPP_DEFLATE=m
+CONFIG_PPPOE=m
+CONFIG_PPP_ASYNC=m
+CONFIG_PPP_SYNC_TTY=m
+# CONFIG_INPUT_MOUSEDEV_PSAUX is not set
+CONFIG_INPUT_EVDEV=m
+CONFIG_INPUT_MISC=y
+CONFIG_INPUT_PCSPKR=m
+# CONFIG_SERIO_SERPORT is not set
+CONFIG_SERIAL_8250=y
+CONFIG_SERIAL_8250_CONSOLE=y
+CONFIG_SERIAL_ICOM=m
+CONFIG_SERIAL_JSM=m
+CONFIG_HVC_CONSOLE=y
+CONFIG_HVC_RTAS=y
+CONFIG_HVCS=m
+CONFIG_IBM_BSR=m
+CONFIG_GEN_RTC=y
+CONFIG_RAW_DRIVER=y
+CONFIG_MAX_RAW_DEVS=1024
+CONFIG_FB=y
+CONFIG_FIRMWARE_EDID=y
+CONFIG_FB_OF=y
+CONFIG_FB_MATROX=y
+CONFIG_FB_MATROX_MILLENIUM=y
+CONFIG_FB_MATROX_MYSTIQUE=y
+CONFIG_FB_MATROX_G=y
+CONFIG_FB_RADEON=y
+CONFIG_FB_IBM_GXT4500=y
+CONFIG_LCD_PLATFORM=m
+# CONFIG_VGA_CONSOLE is not set
+CONFIG_FRAMEBUFFER_CONSOLE=y
+CONFIG_LOGO=y
+CONFIG_HID_GYRATION=y
+CONFIG_HID_PANTHERLORD=y
+CONFIG_HID_PETALYNX=y
+CONFIG_HID_SAMSUNG=y
+CONFIG_HID_SONY=y
+CONFIG_HID_SUNPLUS=y
+CONFIG_USB_HIDDEV=y
+CONFIG_USB=y
+CONFIG_USB_MON=m
+CONFIG_USB_EHCI_HCD=y
+# CONFIG_USB_EHCI_HCD_PPC_OF is not set
+CONFIG_USB_OHCI_HCD=y
+CONFIG_USB_STORAGE=m
+CONFIG_INFINIBAND=m
+CONFIG_INFINIBAND_USER_MAD=m
+CONFIG_INFINIBAND_USER_ACCESS=m
+CONFIG_INFINIBAND_MTHCA=m
+CONFIG_INFINIBAND_EHCA=m
+CONFIG_INFINIBAND_CXGB3=m
+CONFIG_INFINIBAND_CXGB4=m
+CONFIG_MLX4_INFINIBAND=m
+CONFIG_INFINIBAND_IPOIB=m
+CONFIG_INFINIBAND_IPOIB_CM=y
+CONFIG_INFINIBAND_SRP=m
+CONFIG_INFINIBAND_ISER=m
+CONFIG_EXT2_FS=y
+CONFIG_EXT2_FS_XATTR=y
+CONFIG_EXT2_FS_POSIX_ACL=y
+CONFIG_EXT2_FS_SECURITY=y
+CONFIG_EXT2_FS_XIP=y
+CONFIG_EXT3_FS=y
+CONFIG_EXT3_FS_POSIX_ACL=y
+CONFIG_EXT3_FS_SECURITY=y
+CONFIG_EXT4_FS=y
+CONFIG_EXT4_FS_POSIX_ACL=y
+CONFIG_EXT4_FS_SECURITY=y
+CONFIG_REISERFS_FS=y
+CONFIG_REISERFS_FS_XATTR=y
+CONFIG_REISERFS_FS_POSIX_ACL=y
+CONFIG_REISERFS_FS_SECURITY=y
+CONFIG_JFS_FS=m
+CONFIG_JFS_POSIX_ACL=y
+CONFIG_JFS_SECURITY=y
+CONFIG_XFS_FS=m
+CONFIG_XFS_POSIX_ACL=y
+CONFIG_BTRFS_FS=m
+CONFIG_BTRFS_FS_POSIX_ACL=y
+CONFIG_NILFS2_FS=m
+CONFIG_AUTOFS4_FS=m
+CONFIG_FUSE_FS=m
+CONFIG_ISO9660_FS=y
+CONFIG_UDF_FS=m
+CONFIG_MSDOS_FS=y
+CONFIG_VFAT_FS=y
+CONFIG_PROC_KCORE=y
+CONFIG_TMPFS=y
+CONFIG_TMPFS_POSIX_ACL=y
+CONFIG_HUGETLBFS=y
+CONFIG_CRAMFS=m
+CONFIG_SQUASHFS=m
+CONFIG_SQUASHFS_XATTR=y
+CONFIG_SQUASHFS_LZO=y
+CONFIG_SQUASHFS_XZ=y
+CONFIG_NFS_FS=y
+CONFIG_NFS_V3_ACL=y
+CONFIG_NFS_V4=y
+CONFIG_NFSD=m
+CONFIG_NFSD_V3_ACL=y
+CONFIG_NFSD_V4=y
+CONFIG_CIFS=m
+CONFIG_CIFS_XATTR=y
+CONFIG_CIFS_POSIX=y
+CONFIG_NLS_DEFAULT="utf8"
+CONFIG_NLS_CODEPAGE_437=y
+CONFIG_NLS_ASCII=y
+CONFIG_NLS_ISO8859_1=y
+CONFIG_NLS_UTF8=y
+CONFIG_CRC_T10DIF=y
+CONFIG_MAGIC_SYSRQ=y
+CONFIG_DEBUG_KERNEL=y
+CONFIG_LOCKUP_DETECTOR=y
+CONFIG_DEBUG_STACK_USAGE=y
+CONFIG_LATENCYTOP=y
+CONFIG_SCHED_TRACER=y
+CONFIG_BLK_DEV_IO_TRACE=y
+CONFIG_DEBUG_STACKOVERFLOW=y
+CONFIG_CODE_PATCHING_SELFTEST=y
+CONFIG_FTR_FIXUP_SELFTEST=y
+CONFIG_MSI_BITMAP_SELFTEST=y
+CONFIG_XMON=y
+CONFIG_XMON_DEFAULT=y
+CONFIG_CRYPTO_NULL=m
+CONFIG_CRYPTO_TEST=m
+CONFIG_CRYPTO_PCBC=m
+CONFIG_CRYPTO_HMAC=y
+CONFIG_CRYPTO_MICHAEL_MIC=m
+CONFIG_CRYPTO_TGR192=m
+CONFIG_CRYPTO_WP512=m
+CONFIG_CRYPTO_ANUBIS=m
+CONFIG_CRYPTO_BLOWFISH=m
+CONFIG_CRYPTO_CAST6=m
+CONFIG_CRYPTO_KHAZAD=m
+CONFIG_CRYPTO_SALSA20=m
+CONFIG_CRYPTO_SERPENT=m
+CONFIG_CRYPTO_TEA=m
+CONFIG_CRYPTO_TWOFISH=m
+CONFIG_CRYPTO_LZO=m
+# CONFIG_CRYPTO_ANSI_CPRNG is not set
+CONFIG_CRYPTO_DEV_NX=y
+CONFIG_CRYPTO_DEV_NX_ENCRYPT=m
+# CONFIG_VIRTUALIZATION is not set
+CONFIG_CPU_LITTLE_ENDIAN=y
-- 
1.8.1.2

^ permalink raw reply related	[flat|nested] 76+ messages in thread

* Re: [PATCH 63/63] powerpc: Add pseries_le_defconfig
  2013-08-06 16:02 ` [PATCH 63/63] powerpc: Add pseries_le_defconfig Anton Blanchard
@ 2013-08-06 23:31   ` Michael Neuling
  2013-08-07  5:16     ` Michael Ellerman
  2013-08-08  7:53     ` Anton Blanchard
  2013-08-08  8:33   ` Madhavan Srinivasan
  1 sibling, 2 replies; 76+ messages in thread
From: Michael Neuling @ 2013-08-06 23:31 UTC (permalink / raw)
  To: Anton Blanchard; +Cc: Paul Mackerras, linuxppc-dev

Anton Blanchard <anton@samba.org> wrote:

> This is the pseries_defconfig with CONFIG_CPU_LITTLE_ENDIAN enabled
> and CONFIG_VIRTUALIZATION disabled (required until we fix some
> endian issues in KVM).

The CONFIG_VIRTUALIZATION disabling should be done in the Kconfig not
here. 

I'm not that keen on another defconfig.  benh is already talking about
having a powernv defconfig.  I'm worried we are going to fragment the
defconfigs.  If you want something special like LE, then change the
default one.  

Mikey


^ permalink raw reply	[flat|nested] 76+ messages in thread

* Re: [PATCH 36/63] powerpc: Book 3S MMU little endian support
  2013-08-06 16:01 ` [PATCH 36/63] powerpc: Book 3S MMU little endian support Anton Blanchard
@ 2013-08-07  4:20   ` Paul Mackerras
  2013-08-09  6:08     ` Anton Blanchard
  0 siblings, 1 reply; 76+ messages in thread
From: Paul Mackerras @ 2013-08-07  4:20 UTC (permalink / raw)
  To: Anton Blanchard; +Cc: linuxppc-dev

On Wed, Aug 07, 2013 at 02:01:53AM +1000, Anton Blanchard wrote:

> +#ifdef __BIG_ENDIAN__
>  #define HPTE_LOCK_BIT 3
> +#else
> +#define HPTE_LOCK_BIT (63-3)
> +#endif

Are you deliberately using a different bit here?  AFAICS you are using
0x20 in the 7th byte as the lock bit for LE, whereas we use 0x08 in
that byte on BE.  Both are software-use bits, so it will still work,
but is there a rationale for the change?  Or did you mean (56+3)
rather than (63-3)?

Paul.
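
For reference, here is a small standalone C sketch (an editorial
illustration, not code from the patch) that prints which in-memory byte a
given Linux bitop number lands on, and with what mask, for each host byte
order, so the three candidate definitions can be compared directly:

#include <stdio.h>

/*
 * For a Linux bitop bit number 'nr', show which byte of the in-memory
 * 64-bit HPTE word the (1UL << nr) mask touches, and with what value, on a
 * little or big endian host.  Illustration only, not kernel code.
 */
static void show(const char *label, int nr, int host_is_le)
{
        unsigned long long mask = 1ULL << nr;
        int i;

        for (i = 0; i < 8; i++) {
                /* byte i of the word as it sits in memory on this host */
                unsigned int b = host_is_le ?
                        (unsigned int)((mask >> (8 * i)) & 0xff) :
                        (unsigned int)((mask >> (8 * (7 - i))) & 0xff);

                if (b)
                        printf("%s: bit %2d -> memory byte %d, mask 0x%02x\n",
                               label, nr, i, b);
        }
}

int main(void)
{
        show("BE host, bit 3   ", 3, 0);      /* existing big endian define */
        show("LE host, bit 63-3", 63 - 3, 1); /* value used in the patch    */
        show("LE host, bit 56+3", 56 + 3, 1); /* suggested alternative      */
        return 0;
}

On a 64-bit host this should print byte 7 / mask 0x08 for bit 3 on big
endian, byte 7 / mask 0x10 for (63-3) on little endian, and byte 7 / mask
0x08 for (56+3) on little endian, i.e. only the (56+3) variant touches the
same in-memory bit as the existing big endian definition.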

^ permalink raw reply	[flat|nested] 76+ messages in thread

* Re: [PATCH 48/63] powerpc/kvm/book3s_hv: Add little endian guest support
  2013-08-06 16:02 ` [PATCH 48/63] powerpc/kvm/book3s_hv: Add little endian guest support Anton Blanchard
@ 2013-08-07  4:24   ` Paul Mackerras
  0 siblings, 0 replies; 76+ messages in thread
From: Paul Mackerras @ 2013-08-07  4:24 UTC (permalink / raw)
  To: Anton Blanchard; +Cc: linuxppc-dev

On Wed, Aug 07, 2013 at 02:02:05AM +1000, Anton Blanchard wrote:
> Add support for the H_SET_MODE hcall so we can select the
> endianness of our exceptions.
> 
> We create a guest MSR from scratch when delivering exceptions in
> a few places and instead of extracting the LPCR[ILE] and inserting
> it into MSR_LE each time simply create a new variable intr_msr which
> contains the entire MSR to use.
> 
> Signed-off-by: Anton Blanchard <anton@samba.org>

Acked-by: Paul Mackerras <paulus@samba.org>
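
To make the description in the quoted commit message concrete, here is a
rough standalone sketch of the idea; the structure, helper name and bit
positions below are illustrative placeholders, not the actual kvm-hv code:

#include <stdio.h>

/* Placeholder bit positions, for illustration only; these are not the
 * architected MSR/LPCR layouts. */
#define SKETCH_MSR_LE   (1UL << 0)      /* deliver interrupts little endian */
#define SKETCH_LPCR_ILE (1UL << 25)     /* guest "interrupt little endian"  */

struct vcpu_sketch {
        unsigned long lpcr;
        unsigned long intr_msr; /* full MSR to use when delivering exceptions */
};

/*
 * Recompute the cached interrupt MSR once, when the guest's LPCR changes,
 * instead of extracting LPCR[ILE] and folding it into MSR_LE at every
 * exception delivery site.
 */
static void sketch_set_lpcr(struct vcpu_sketch *vcpu, unsigned long base_msr,
                            unsigned long new_lpcr)
{
        unsigned long msr = base_msr;

        if (new_lpcr & SKETCH_LPCR_ILE)
                msr |= SKETCH_MSR_LE;

        vcpu->lpcr = new_lpcr;
        vcpu->intr_msr = msr;
}

int main(void)
{
        struct vcpu_sketch v = { 0, 0 };

        sketch_set_lpcr(&v, 0, SKETCH_LPCR_ILE);
        printf("intr_msr = %#lx\n", v.intr_msr); /* LE bit set for an LE guest */
        return 0;
}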

^ permalink raw reply	[flat|nested] 76+ messages in thread

* Re: [PATCH 63/63] powerpc: Add pseries_le_defconfig
  2013-08-06 23:31   ` Michael Neuling
@ 2013-08-07  5:16     ` Michael Ellerman
  2013-08-08 15:32       ` Aneesh Kumar K.V
  2013-08-08  7:53     ` Anton Blanchard
  1 sibling, 1 reply; 76+ messages in thread
From: Michael Ellerman @ 2013-08-07  5:16 UTC (permalink / raw)
  To: Michael Neuling; +Cc: linuxppc-dev, Paul Mackerras, Anton Blanchard

On Wed, Aug 07, 2013 at 09:31:00AM +1000, Michael Neuling wrote:
> Anton Blanchard <anton@samba.org> wrote:
> 
> > This is the pseries_defconfig with CONFIG_CPU_LITTLE_ENDIAN enabled
> > and CONFIG_VIRTUALIZATION disabled (required until we fix some
> > endian issues in KVM).
> 
> The CONFIG_VIRTUALIZATION disabling should be done in the Kconfig not
> here. 
> 
> I'm not that keen on another defconfig.  benh is already talking about
> having a powernv defconfig.  I'm worried we are going to fragment the
> defconfigs.  If you want something special like LE, then change the
> default one.  

I disagree. defconfigs are great because they're easy to add to kisskb
or other auto builders, making automated build testing easier.

In fact because the defconfigs are pretty much the only thing that
people build test, if you stray too far from them you are almost
guaranteed to find breakage. For example the UP build was broken for
months because we didn't have a defconfig for it.

cheers

^ permalink raw reply	[flat|nested] 76+ messages in thread

* Re: [PATCH 63/63] powerpc: Add pseries_le_defconfig
  2013-08-06 23:31   ` Michael Neuling
  2013-08-07  5:16     ` Michael Ellerman
@ 2013-08-08  7:53     ` Anton Blanchard
  2013-08-08 23:12       ` Michael Neuling
  1 sibling, 1 reply; 76+ messages in thread
From: Anton Blanchard @ 2013-08-08  7:53 UTC (permalink / raw)
  To: Michael Neuling; +Cc: Paul Mackerras, linuxppc-dev


Hi,

> The CONFIG_VIRTUALIZATION disabling should be done in the Kconfig not
> here. 
> 
> I'm not that keen on another defconfig.  benh is already talking about
> having a powernv defconfig.  I'm worried we are going to fragment the
> defconfigs.  If you want something special like LE, then change the
> default one.

I agree we don't want machine specific defconfigs, but I think it makes
sense to have ones that cover the key options that conflict. I'm
thinking 32bit, 64bit, 64bit BookE, 64bit LE etc.

One bonus is if we have a smaller set of defconfigs we might actually
get better testing. I have no idea which 32bit defconfigs I should test
for example, and I'm not going to test them all!

Anton

^ permalink raw reply	[flat|nested] 76+ messages in thread

* Re: [PATCH 63/63] powerpc: Add pseries_le_defconfig
  2013-08-06 16:02 ` [PATCH 63/63] powerpc: Add pseries_le_defconfig Anton Blanchard
  2013-08-06 23:31   ` Michael Neuling
@ 2013-08-08  8:33   ` Madhavan Srinivasan
  2013-08-08 10:49     ` Michael Neuling
  1 sibling, 1 reply; 76+ messages in thread
From: Madhavan Srinivasan @ 2013-08-08  8:33 UTC (permalink / raw)
  To: Anton Blanchard; +Cc: Paul Mackerras, linuxppc-dev

On Tuesday 06 August 2013 09:32 PM, Anton Blanchard wrote:
> This is the pseries_defconfig with CONFIG_CPU_LITTLE_ENDIAN enabled
> and CONFIG_VIRTUALIZATION disabled (required until we fix some
> endian issues in KVM).
> 
> [...]
> +CONFIG_SCHED_SMT=y
> +CONFIG_PPC_DENORMALISATION=y
> +CONFIG_HOTPLUG_PCI=m
Why is the value "m" in the le_defconfig file, when it is "y" in
pseries_defconfig?
Also, I see a warning about an invalid value for this symbol.

^ permalink raw reply	[flat|nested] 76+ messages in thread

* Re: [PATCH 63/63] powerpc: Add pseries_le_defconfig
  2013-08-08  8:33   ` Madhavan Srinivasan
@ 2013-08-08 10:49     ` Michael Neuling
  2013-08-09  3:21       ` Michael Ellerman
  0 siblings, 1 reply; 76+ messages in thread
From: Michael Neuling @ 2013-08-08 10:49 UTC (permalink / raw)
  To: Madhavan Srinivasan; +Cc: linuxppc-dev, Paul Mackerras, Anton Blanchard

> > +CONFIG_SCHED_SMT=y
> > +CONFIG_PPC_DENORMALISATION=y
> > +CONFIG_HOTPLUG_PCI=m
> Why is the value "m" in the le_defconfig file, when it is "y" in
> pseries_defconfig?

We are out of sync already!?!?! *sigh* ;-)

> Also, I see a warning about an invalid value for this symbol.

Mikey

^ permalink raw reply	[flat|nested] 76+ messages in thread

* Re: [PATCH 63/63] powerpc: Add pseries_le_defconfig
  2013-08-07  5:16     ` Michael Ellerman
@ 2013-08-08 15:32       ` Aneesh Kumar K.V
  0 siblings, 0 replies; 76+ messages in thread
From: Aneesh Kumar K.V @ 2013-08-08 15:32 UTC (permalink / raw)
  To: Michael Ellerman, Michael Neuling
  Cc: Paul Mackerras, linuxppc-dev, Anton Blanchard

Michael Ellerman <michael@ellerman.id.au> writes:

> On Wed, Aug 07, 2013 at 09:31:00AM +1000, Michael Neuling wrote:
>> Anton Blanchard <anton@samba.org> wrote:
>> 
>> > This is the pseries_defconfig with CONFIG_CPU_LITTLE_ENDIAN enabled
>> > and CONFIG_VIRTUALIZATION disabled (required until we fix some
>> > endian issues in KVM).
>> 
>> The CONFIG_VIRTUALIZATION disabling should be done in the Kconfig not
>> here. 
>> 
>> I'm not that keen on another defconfig.  benh is already talking about
>> having a powernv defconfig.  I'm worried we are going to fragment the
>> defconfigs.  If you want something special like LE, then change the
>> default one.  
>
> I disagree. defconfigs are great because they're easy to add to kisskb
> or other auto builders, making automated build testing easier.
>
> In fact because the defconfigs are pretty much the only thing that
> people build test, if you stray too far from them you are almost
> guaranteed to find breakage. For example the UP build was broken for
> months because we didn't have a defconfig for it.

This is true. My test build for patches mostly looks like this:

for config in "....."     # the set of defconfigs to build-test
do
   make $config
   ....                   # plus whatever per-config checks are needed
done

-aneesh

^ permalink raw reply	[flat|nested] 76+ messages in thread

* Re: [PATCH 63/63] powerpc: Add pseries_le_defconfig
  2013-08-08  7:53     ` Anton Blanchard
@ 2013-08-08 23:12       ` Michael Neuling
  2013-08-09  3:23         ` Michael Ellerman
  0 siblings, 1 reply; 76+ messages in thread
From: Michael Neuling @ 2013-08-08 23:12 UTC (permalink / raw)
  To: Anton Blanchard; +Cc: linuxppc-dev, Paul Mackerras

Anton Blanchard <anton@samba.org> wrote:

> 
> Hi,
> 
> > The CONFIG_VIRTUALIZATION disabling should be done in the Kconfig not
> > here. 
> > 
> > I'm not that keen on another defconfig.  benh is already talking about
> > having a powernv defconfig.  I'm worried we are going to fragment the
> > defconfigs.  If you want something special like LE, then change the
> > default one.
> 
> I agree we don't want machine specific defconfigs, but I think it makes
> sense to have ones that cover the key options that conflict. I'm
> thinking 32bit, 64bit, 64bit BookE, 64bit LE etc.
> 
> One bonus is if we have a smaller set of defconfigs we might actually
> get better testing. I have no idea which 32bit defconfigs I should test
> for example, and I'm not going to test them all!

FWIW, I test with these configs and it seems to catch most of 32/64 bit,
SMP/UP, different MMU issues:

  pseries_defconfig 
  ppc64_defconfig 
  ppc64e_defconfig 
  pmac32_defconfig
  ppc44x_defconfig 
  ppc6xx_defconfig 
  mpc85xx_smp_defconfig
  mpc85xx_defconfig 
  chroma_defconfig 
  ps3_defconfig 
  celleb_defconfig

Mikey

^ permalink raw reply	[flat|nested] 76+ messages in thread

* Re: [PATCH 63/63] powerpc: Add pseries_le_defconfig
  2013-08-08 10:49     ` Michael Neuling
@ 2013-08-09  3:21       ` Michael Ellerman
  0 siblings, 0 replies; 76+ messages in thread
From: Michael Ellerman @ 2013-08-09  3:21 UTC (permalink / raw)
  To: Michael Neuling
  Cc: linuxppc-dev, Madhavan Srinivasan, Paul Mackerras, Anton Blanchard

On Thu, Aug 08, 2013 at 08:49:51PM +1000, Michael Neuling wrote:
> > > +CONFIG_SCHED_SMT=y
> > > +CONFIG_PPC_DENORMALISATION=y
> > > +CONFIG_HOTPLUG_PCI=m
> > Why the value "m" in the le_config file, when it is "y" in
> > pseries_defconfig.
> 
> We are out of sync already!?!?! *sigh* ;-)

That's good, more coverage!

cheers

^ permalink raw reply	[flat|nested] 76+ messages in thread

* Re: [PATCH 63/63] powerpc: Add pseries_le_defconfig
  2013-08-08 23:12       ` Michael Neuling
@ 2013-08-09  3:23         ` Michael Ellerman
  0 siblings, 0 replies; 76+ messages in thread
From: Michael Ellerman @ 2013-08-09  3:23 UTC (permalink / raw)
  To: Michael Neuling; +Cc: Paul Mackerras, linuxppc-dev, Anton Blanchard

On Fri, Aug 09, 2013 at 09:12:37AM +1000, Michael Neuling wrote:
> Anton Blanchard <anton@samba.org> wrote:
> 
> > 
> > Hi,
> > 
> > > The CONFIG_VIRTUALIZATION disabling should be done in the Kconfig not
> > > here. 
> > > 
> > > I'm not that keen on another defconfig.  benh is already talking about
> > > having a powernv defconfig.  I'm worried we are going to fragment the
> > > defconfigs.  If you want something special like LE, then change the
> > > default one.
> > 
> > I agree we don't want machine specific defconfigs, but I think it makes
> > sense to have ones that cover the key options that conflict. I'm
> > thinking 32bit, 64bit, 64bit BookE, 64bit LE etc.
> > 
> > One bonus is if we have a smaller set of defconfigs we might actually
> > get better testing. I have no idea which 32bit defconfigs I should test
> > for example, and I'm not going to test them all!
> 
> FWIW, I test with these configs and it seems to catch most of 32/64 bit,
> SMP/UP, different MMU issues:
> 
>   pseries_defconfig 
>   ppc64_defconfig 
>   ppc64e_defconfig 
>   pmac32_defconfig
>   ppc44x_defconfig 
>   ppc6xx_defconfig 
>   mpc85xx_smp_defconfig
>   mpc85xx_defconfig 
>   chroma_defconfig 

>   ps3_defconfig 
>   celleb_defconfig

I think cell_defconfig pretty much covers these too, and also gets you
coverage of the IBM blades.

And add randconfig too :)

cheers

^ permalink raw reply	[flat|nested] 76+ messages in thread

* Re: [PATCH 36/63] powerpc: Book 3S MMU little endian support
  2013-08-07  4:20   ` Paul Mackerras
@ 2013-08-09  6:08     ` Anton Blanchard
  0 siblings, 0 replies; 76+ messages in thread
From: Anton Blanchard @ 2013-08-09  6:08 UTC (permalink / raw)
  To: Paul Mackerras; +Cc: linuxppc-dev


Hi Paul,

> On Wed, Aug 07, 2013 at 02:01:53AM +1000, Anton Blanchard wrote:
> 
> > +#ifdef __BIG_ENDIAN__
> >  #define HPTE_LOCK_BIT 3
> > +#else
> > +#define HPTE_LOCK_BIT (63-3)
> > +#endif
> 
> Are you deliberately using a different bit here?  AFAICS you are using
> 0x20 in the 7th byte as the lock bit for LE, whereas we use 0x08 in
> that byte on BE.  Both are software-use bits, so it will still work,
> but is there a rationale for the change?  Or did you mean (56+3)
> rather than (63-3)?

Ouch, nice catch! I got lucky there.

Anton

^ permalink raw reply	[flat|nested] 76+ messages in thread

end of thread, other threads:[~2013-08-09  6:08 UTC | newest]

Thread overview: 76+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2013-08-06 16:01 [PATCH 00/63] 64bit PowerPC little endian support Anton Blanchard
2013-08-06 16:01 ` [PATCH 01/63] powerpc: Align p_toc Anton Blanchard
2013-08-06 16:01 ` [PATCH 02/63] powerpc: handle unaligned ldbrx/stdbrx Anton Blanchard
2013-08-06 16:01 ` [PATCH 03/63] powerpc: Wrap MSR macros with parentheses Anton Blanchard
2013-08-06 16:01 ` [PATCH 04/63] powerpc: Remove SAVE_VSRU and REST_VSRU macros Anton Blanchard
2013-08-06 16:01 ` [PATCH 05/63] powerpc: Simplify logic in include/uapi/asm/elf.h Anton Blanchard
2013-08-06 16:01 ` [PATCH 06/63] powerpc/pseries: Simplify H_GET_TERM_CHAR Anton Blanchard
2013-08-06 16:01 ` [PATCH 07/63] powerpc: Fix a number of sparse warnings Anton Blanchard
2013-08-06 16:01 ` [PATCH 08/63] powerpc/pci: Don't use bitfield for force_32bit_msi Anton Blanchard
2013-08-06 16:01 ` [PATCH 09/63] powerpc: Stop using non-architected shared_proc field in lppaca Anton Blanchard
2013-08-06 16:01 ` [PATCH 10/63] powerpc: Make prom.c device tree accesses endian safe Anton Blanchard
2013-08-06 16:01 ` [PATCH 11/63] powerpc: More little endian fixes for prom.c Anton Blanchard
2013-08-06 16:01 ` [PATCH 12/63] powerpc: Make RTAS device tree accesses endian safe Anton Blanchard
2013-08-06 16:01 ` [PATCH 13/63] powerpc: Make cache info " Anton Blanchard
2013-08-06 16:01 ` [PATCH 14/63] powerpc: Make RTAS calls " Anton Blanchard
2013-08-06 16:01 ` [PATCH 15/63] powerpc: Make logical to real cpu mapping code " Anton Blanchard
2013-08-06 16:01 ` [PATCH 16/63] powerpc: More little endian fixes for setup-common.c Anton Blanchard
2013-08-06 16:01 ` [PATCH 17/63] powerpc: Add some endian annotations to time and xics code Anton Blanchard
2013-08-06 16:01 ` [PATCH 18/63] powerpc: Fix some endian issues in " Anton Blanchard
2013-08-06 16:01 ` [PATCH 19/63] powerpc: of_parse_dma_window should take a __be32 *dma_window Anton Blanchard
2013-08-06 16:01 ` [PATCH 20/63] powerpc: Make device tree accesses in cache info code endian safe Anton Blanchard
2013-08-06 16:01 ` [PATCH 21/63] powerpc: Make prom_init.c " Anton Blanchard
2013-08-06 16:01 ` [PATCH 22/63] powerpc: Make device tree accesses in HVC VIO console " Anton Blanchard
2013-08-06 16:01 ` [PATCH 23/63] powerpc: Make device tree accesses in VIO subsystem " Anton Blanchard
2013-08-06 16:01 ` [PATCH 24/63] powerpc: Make OF PCI device tree accesses " Anton Blanchard
2013-08-06 16:01 ` [PATCH 25/63] powerpc: Make PCI device node " Anton Blanchard
2013-08-06 16:01 ` [PATCH 26/63] powerpc: Little endian fixes for legacy_serial.c Anton Blanchard
2013-08-06 16:01 ` [PATCH 27/63] powerpc: Make NUMA device node code endian safe Anton Blanchard
2013-08-06 16:01 ` [PATCH 28/63] powerpc: Add endian annotations to lppaca, slb_shadow and dtl_entry Anton Blanchard
2013-08-06 16:01 ` [PATCH 29/63] powerpc: Fix little endian " Anton Blanchard
2013-08-06 16:01 ` [PATCH 30/63] powerpc: Emulate instructions in little endian mode Anton Blanchard
2013-08-06 16:01 ` [PATCH 31/63] powerpc: Little endian SMP IPI demux Anton Blanchard
2013-08-06 16:01 ` [PATCH 32/63] powerpc/pseries: Fix endian issues in H_GET_TERM_CHAR/H_PUT_TERM_CHAR Anton Blanchard
2013-08-06 16:01 ` [PATCH 33/63] powerpc: Fix little endian coredumps Anton Blanchard
2013-08-06 16:01 ` [PATCH 34/63] powerpc: Make rwlocks endian safe Anton Blanchard
2013-08-06 16:01 ` [PATCH 35/63] powerpc: Fix endian issues in VMX copy loops Anton Blanchard
2013-08-06 16:01 ` [PATCH 36/63] powerpc: Book 3S MMU little endian support Anton Blanchard
2013-08-07  4:20   ` Paul Mackerras
2013-08-09  6:08     ` Anton Blanchard
2013-08-06 16:01 ` [PATCH 37/63] powerpc: Fix offset of FPRs in VSX registers in little endian builds Anton Blanchard
2013-08-06 16:01 ` [PATCH 38/63] powerpc: PTRACE_PEEKUSR/PTRACE_POKEUSER of FPR " Anton Blanchard
2013-08-06 16:01 ` [PATCH 39/63] powerpc: Little endian builds double word swap VSX state during context save/restore Anton Blanchard
2013-08-06 16:01 ` [PATCH 40/63] powerpc: Support endian agnostic MMIO Anton Blanchard
2013-08-06 16:01 ` [PATCH 41/63] powerpc: Add little endian support for word-at-a-time functions Anton Blanchard
2013-08-06 16:01 ` [PATCH 42/63] powerpc: Set MSR_LE bit on little endian builds Anton Blanchard
2013-08-06 16:02 ` [PATCH 43/63] powerpc: Reset MSR_LE on signal entry Anton Blanchard
2013-08-06 16:02 ` [PATCH 44/63] powerpc: Include the appropriate endianness header Anton Blanchard
2013-08-06 16:02 ` [PATCH 45/63] powerpc: endian safe trampoline Anton Blanchard
2013-08-06 16:02 ` [PATCH 46/63] powerpc: Add endian safe trampoline to pseries secondary thread entry Anton Blanchard
2013-08-06 16:02 ` [PATCH 47/63] pseries: Add H_SET_MODE to change exception endianness Anton Blanchard
2013-08-06 16:02 ` [PATCH 48/63] powerpc/kvm/book3s_hv: Add little endian guest support Anton Blanchard
2013-08-07  4:24   ` Paul Mackerras
2013-08-06 16:02 ` [PATCH 49/63] powerpc: Remove open coded byte swap macro in alignment handler Anton Blanchard
2013-08-06 16:02 ` [PATCH 50/63] powerpc: Remove hard coded FP offsets " Anton Blanchard
2013-08-06 16:02 ` [PATCH 51/63] powerpc: Alignment handler shouldn't access VSX registers with TS_FPR Anton Blanchard
2013-08-06 16:02 ` [PATCH 52/63] powerpc: Add little endian support to alignment handler Anton Blanchard
2013-08-06 16:02 ` [PATCH 53/63] powerpc: Handle VSX alignment faults in little endian mode Anton Blanchard
2013-08-06 16:02 ` [PATCH 54/63] ibmveth: Fix little endian issues Anton Blanchard
2013-08-06 16:02 ` [PATCH 55/63] ibmvscsi: " Anton Blanchard
2013-08-06 16:02 ` [PATCH 56/63] [SCSI] lpfc: Don't force CONFIG_GENERIC_CSUM on Anton Blanchard
2013-08-06 16:02 ` [PATCH 57/63] powerpc: Use generic checksum code in little endian Anton Blanchard
2013-08-06 16:02 ` [PATCH 58/63] powerpc: Use generic memcpy " Anton Blanchard
2013-08-06 16:02 ` [PATCH 59/63] powerpc: uname should return ppc64le/ppcle on little endian builds Anton Blanchard
2013-08-06 16:02 ` [PATCH 60/63] powerpc: Add ability to build little endian kernels Anton Blanchard
2013-08-06 16:02 ` [PATCH 61/63] powerpc: Don't set HAVE_EFFICIENT_UNALIGNED_ACCESS on little endian builds Anton Blanchard
2013-08-06 16:02 ` [PATCH 62/63] powerpc: Work around little endian gcc bug Anton Blanchard
2013-08-06 16:02 ` [PATCH 63/63] powerpc: Add pseries_le_defconfig Anton Blanchard
2013-08-06 23:31   ` Michael Neuling
2013-08-07  5:16     ` Michael Ellerman
2013-08-08 15:32       ` Aneesh Kumar K.V
2013-08-08  7:53     ` Anton Blanchard
2013-08-08 23:12       ` Michael Neuling
2013-08-09  3:23         ` Michael Ellerman
2013-08-08  8:33   ` Madhavan Srinivasan
2013-08-08 10:49     ` Michael Neuling
2013-08-09  3:21       ` Michael Ellerman

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).