* [PATCH 0/8] powerpc/64: use asm sections for head/exception layout
@ 2016-09-13  3:08 Nicholas Piggin
  2016-09-13  3:08 ` [PATCH 1/8] powerpc/pseries: hypervisor facility unavailable use correct handler Nicholas Piggin
                   ` (7 more replies)
  0 siblings, 8 replies; 13+ messages in thread
From: Nicholas Piggin @ 2016-09-13  3:08 UTC (permalink / raw)
  To: linuxppc-dev, Michael Ellerman, Benjamin Herrenschmidt, Paul Mackerras
  Cc: Nicholas Piggin

Hi,

This series uses asm/ld sections and macro wrappers to hide the details
of placing exception vectors at their correct locations.

Currently we lay out everything in head_64.S as it will appear in
the output, starting from physical address 0, and use '.' location
counter directives to place vectors at the correct addresses.

After this series, sections are created for the real and virtual
exception vectors and for trampoline/helper space, and "common"
handlers are put into the .text section. Exception handlers are
specified with macros that define their type, name, and location.

  /* This is the entirety of the decrementer handlers */
  VECTOR_HANDLER_REAL_MASKABLE(decrementer, 0x900, 0x980)
  VECTOR_HANDLER_VIRT_MASKABLE(decrementer, 0x4900, 0x4980, 0x900)
  TRAMP_KVM(PACA_EXGEN, 0x900)
  COMMON_HANDLER_ASYNC(decrementer_common, 0x900, timer_interrupt)

  /* Although not all handlers come out quite so neatly */

Benefits:
* All handler code for a given exception can be grouped together
  in the file.
* Important head sections can be identified and handled by the
  linker. This can be used to prevent branch stubs from being
  placed inside fixed section code, for example.
* Most overflows can be caught at compile time (see the sketch after
  this list).
* Shuffling handlers around or making extra space for vectors
  becomes much simpler.
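
The compile-time checking needs nothing more than ordinary assembler
conditionals. The following is an illustrative sketch only (the
SKETCH_* names are made up here, not the macros this series adds): a
fixed section records its start and size when opened and verifies the
size when closed, so an oversized handler fails the build instead of
silently overlapping the next vector.

  /* Hypothetical macros, for illustration only (not from this series).
   * Switching to the per-section output is omitted; the point is the
   * size check made when the section is closed. */
  #define SKETCH_OPEN_FIXED_SECTION(sname, start, end)	\
  sname##_begin:					\
  	sname##_addr = (start);				\
  	sname##_size = (end) - (start);

  #define SKETCH_CLOSE_FIXED_SECTION(sname)		\
  	.if (. - sname##_begin) > sname##_size;		\
  	.error "fixed section overflow";		\
  	.endif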

However, there are some negatives:
* Another layer of macros in exceptions-64s.S
* asm/linker sections are not trivial to use; taking an address can
  require a helper in some cases, because the assembler can't
  calculate deltas between sections itself (a sketch of such a helper
  follows this list).
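
The usual workaround for the second point is to express the address of
a label in a fixed section as its offset from the section start plus
the section's fixed address, both of which the assembler can resolve.
A hypothetical helper (illustrative only, reusing the _begin/_addr
symbols from the sketch above; not necessarily what this series
implements) could look like:

  /* Hypothetical, for illustration: label - sname_begin is a constant
   * the assembler can fold (both are in the same section), and
   * sname_addr is the section's fixed start address. */
  #define SKETCH_ABS_ADDR(label, sname)			\
  	((label) - sname##_begin + sname##_addr)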

Intermediate steps of this series are quite painful due to lots of
shuffling. I have tried to verify before/after equivalence of the
compiled binary using objdump diffs, although it's not always easy to
verify completely (because offsets, labels, and padding change).

Vectors do not have to specify exact implementation details, only
requirements, which makes changes easier. For example, an "inline"
handler can be made out-of-line (OOL) easily, because the macro can
emit the trampoline code into the correct section:

-VECTOR_HANDLER_REAL_MASKABLE(decrementer, 0x900, 0x980)
+VECTOR_HANDLER_REAL_OOL_MASKABLE(decrementer, 0x900, 0x980)

If 0x3000-0x4000 is required for new hardware exceptions, then
it's a simple change:

OPEN_FIXED_SECTION(real_vectors,        0x0100, 0x1900)
-OPEN_FIXED_SECTION(real_trampolines,    0x1900, 0x4000)
+OPEN_FIXED_SECTION(real_trampolines,    0x1900, 0x3000)
-OPEN_FIXED_SECTION(virt_vectors,        0x4000, 0x6000)
+OPEN_FIXED_SECTION(virt_vectors,        0x3000, 0x6000)
OPEN_FIXED_SECTION(virt_trampolines,    0x6000, 0x7000)

(This does also require that the trampolines still fit in the smaller
real_trampolines section.)
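
This works because the linker script, rather than hand-maintained '.'
location directives in the assembly, pins each fixed section at its
offset. A rough sketch of the idea follows; it is illustrative only
and not the actual vmlinux.lds.S change in this series (the input
section names and surrounding details are assumed):

  /* Sketch only. Inside an output section, '.' is the offset from the
   * section start, so each fixed input section is forced to its
   * required offset; ld also refuses to move the location counter
   * backwards, which catches a section that has grown too large.
   * Content below 0x100 and the rest of the head are omitted. */
  .head.text : {
  	. = 0x100;
  	KEEP(*(.head.text.real_vectors))
  	. = 0x1900;
  	KEEP(*(.head.text.real_trampolines))
  	. = 0x4000;
  	KEEP(*(.head.text.virt_vectors))
  	. = 0x6000;
  	KEEP(*(.head.text.virt_trampolines))
  	. = 0x7000;
  }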

I posted this series a while ago, but didn't have much feedback.
This one is significantly trimmed down with fewer
unrelated/unnecessary changes.

Comments?

Thanks,
Nick

Nicholas Piggin (8):
  powerpc/pseries: hypervisor facility unavailable use correct handler
  powerpc/pseries: syscall remove trampoline
  powerpc/pseries: exception vector macros
  powerpc/pseries: consolidate exception handler alignment
  powerpc/64: use gas sections for arranging exception vectors
  powerpc/pseries: move related exception code together
  powerpc/pseries: use single macro for both parts of OOL exception
  powerpc/pseries: remove unused exception code, small cleanups

 arch/powerpc/include/asm/exception-64s.h |  135 +-
 arch/powerpc/include/asm/head-64.h       |  348 +++++
 arch/powerpc/kernel/exceptions-64s.S     | 2135 ++++++++++++++----------------
 arch/powerpc/kernel/head_64.S            |   58 +-
 arch/powerpc/kernel/vmlinux.lds.S        |   45 +-
 5 files changed, 1510 insertions(+), 1211 deletions(-)
 create mode 100644 arch/powerpc/include/asm/head-64.h

-- 
2.9.3


* [PATCH 1/8] powerpc/pseries: hypervisor facility unavailable use correct handler
  2016-09-13  3:08 [PATCH 0/8] powerpc/64: use asm sections for head/exception layout Nicholas Piggin
@ 2016-09-13  3:08 ` Nicholas Piggin
  2016-09-25  3:00   ` [1/8] " Michael Ellerman
  2016-09-13  3:08 ` [PATCH 2/8] powerpc/pseries: syscall remove trampoline Nicholas Piggin
                   ` (6 subsequent siblings)
  7 siblings, 1 reply; 13+ messages in thread
From: Nicholas Piggin @ 2016-09-13  3:08 UTC (permalink / raw)
  To: linuxppc-dev, Michael Ellerman, Benjamin Herrenschmidt, Paul Mackerras
  Cc: Nicholas Piggin

The 0xf80 hv_facility_unavailable trampoline branches to the 0xf60
handler. This happens to work because the two handlers do the same
thing, but it should be fixed.

Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
---
 arch/powerpc/kernel/exceptions-64s.S | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/arch/powerpc/kernel/exceptions-64s.S b/arch/powerpc/kernel/exceptions-64s.S
index bffec73..4015f71 100644
--- a/arch/powerpc/kernel/exceptions-64s.S
+++ b/arch/powerpc/kernel/exceptions-64s.S
@@ -355,7 +355,7 @@ facility_unavailable_trampoline:
 hv_facility_unavailable_trampoline:
 	SET_SCRATCH0(r13)
 	EXCEPTION_PROLOG_0(PACA_EXGEN)
-	b	facility_unavailable_hv
+	b	hv_facility_unavailable_hv
 
 #ifdef CONFIG_CBE_RAS
 	STD_EXCEPTION_HV(0x1200, 0x1202, cbe_system_error)
@@ -600,7 +600,7 @@ END_FTR_SECTION_IFSET(CPU_FTR_CFAR)
 	KVM_HANDLER(PACA_EXGEN, EXC_STD, 0xf40)
 	STD_EXCEPTION_PSERIES_OOL(0xf60, facility_unavailable)
 	KVM_HANDLER(PACA_EXGEN, EXC_STD, 0xf60)
-	STD_EXCEPTION_HV_OOL(0xf82, facility_unavailable)
+	STD_EXCEPTION_HV_OOL(0xf82, hv_facility_unavailable)
 	KVM_HANDLER(PACA_EXGEN, EXC_HV, 0xf82)
 
 /*
-- 
2.9.3


* [PATCH 2/8] powerpc/pseries: syscall remove trampoline
  2016-09-13  3:08 [PATCH 0/8] powerpc/64: use asm sections for head/exception layout Nicholas Piggin
  2016-09-13  3:08 ` [PATCH 1/8] powerpc/pseries: hypervisor facility unavailable use correct handler Nicholas Piggin
@ 2016-09-13  3:08 ` Nicholas Piggin
  2016-09-25  3:00   ` [2/8] " Michael Ellerman
  2016-09-13  3:08 ` [PATCH 3/8] powerpc/pseries: exception vector macros Nicholas Piggin
                   ` (5 subsequent siblings)
  7 siblings, 1 reply; 13+ messages in thread
From: Nicholas Piggin @ 2016-09-13  3:08 UTC (permalink / raw)
  To: linuxppc-dev, Michael Ellerman, Benjamin Herrenschmidt, Paul Mackerras
  Cc: Nicholas Piggin

The syscall trampoline is not required; load the address of
system_call_common directly with LOAD_HANDLER and remove the
trampoline.

Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
---
 arch/powerpc/kernel/exceptions-64s.S | 8 ++------
 1 file changed, 2 insertions(+), 6 deletions(-)

diff --git a/arch/powerpc/kernel/exceptions-64s.S b/arch/powerpc/kernel/exceptions-64s.S
index 4015f71..ea57a2c 100644
--- a/arch/powerpc/kernel/exceptions-64s.S
+++ b/arch/powerpc/kernel/exceptions-64s.S
@@ -42,7 +42,7 @@ END_FTR_SECTION_IFSET(CPU_FTR_REAL_LE)				\
 #define SYSCALL_PSERIES_2_RFID 					\
 	mfspr	r12,SPRN_SRR1 ;					\
 	ld	r10,PACAKBASE(r13) ; 				\
-	LOAD_HANDLER(r10, system_call_entry) ; 			\
+	LOAD_HANDLER(r10, system_call_common) ; 		\
 	mtspr	SPRN_SRR0,r10 ; 				\
 	ld	r10,PACAKMSR(r13) ;				\
 	mtspr	SPRN_SRR1,r10 ; 				\
@@ -65,7 +65,7 @@ END_FTR_SECTION_IFSET(CPU_FTR_REAL_LE)				\
 #define SYSCALL_PSERIES_2_DIRECT				\
 	mflr	r10 ;						\
 	ld	r12,PACAKBASE(r13) ; 				\
-	LOAD_HANDLER(r12, system_call_entry) ;			\
+	LOAD_HANDLER(r12, system_call_common) ;			\
 	mtctr	r12 ;						\
 	mfspr	r12,SPRN_SRR1 ;					\
 	/* Re-use of r13... No spare regs to do this */	\
@@ -910,10 +910,6 @@ hv_facility_unavailable_relon_trampoline:
 #endif
 	STD_RELON_EXCEPTION_PSERIES(0x5700, 0x1700, altivec_assist)
 
-	.align	7
-system_call_entry:
-	b	system_call_common
-
 ppc64_runlatch_on_trampoline:
 	b	__ppc64_runlatch_on
 
-- 
2.9.3


* [PATCH 3/8] powerpc/pseries: exception vector macros
  2016-09-13  3:08 [PATCH 0/8] powerpc/64: use asm sections for head/exception layout Nicholas Piggin
  2016-09-13  3:08 ` [PATCH 1/8] powerpc/pseries: hypervisor facility unavailable use correct handler Nicholas Piggin
  2016-09-13  3:08 ` [PATCH 2/8] powerpc/pseries: syscall remove trampoline Nicholas Piggin
@ 2016-09-13  3:08 ` Nicholas Piggin
  2016-09-13  6:56   ` kbuild test robot
  2016-09-13  3:08 ` [PATCH 4/8] powerpc/pseries: consolidate exception handler alignment Nicholas Piggin
                   ` (4 subsequent siblings)
  7 siblings, 1 reply; 13+ messages in thread
From: Nicholas Piggin @ 2016-09-13  3:08 UTC (permalink / raw)
  To: linuxppc-dev, Michael Ellerman, Benjamin Herrenschmidt, Paul Mackerras
  Cc: Nicholas Piggin

Create arch/powerpc/include/asm/head-64.h with macros that specify
an exception vector (name, type, location), which will be used to
label and lay out exceptions in the object file.

Naming is moved out of exception-64s.h, which is used to specify the
implementation of exception handlers.
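
For example, with the macros added below,

  VECTOR_HANDLER_REAL_BEGIN(system_reset, 0x100, 0x200)

expands to roughly

  . = 0x100
  .global exc_0x100_system_reset
  exc_0x100_system_reset:

so the label itself encodes the vector's name and real address.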

The objdump of generated code in the exception vectors is unchanged
except for names. The alignment directives scattered around are
annoying, but are done this way so that disassembly can verify
identical instruction generation before and after the patch. They get
cleaned up in a future patch.

Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
---
 arch/powerpc/include/asm/exception-64s.h | 131 +++----
 arch/powerpc/include/asm/head-64.h       | 186 ++++++++++
 arch/powerpc/kernel/exceptions-64s.S     | 599 +++++++++++++++----------------
 3 files changed, 538 insertions(+), 378 deletions(-)
 create mode 100644 arch/powerpc/include/asm/head-64.h

diff --git a/arch/powerpc/include/asm/exception-64s.h b/arch/powerpc/include/asm/exception-64s.h
index bed66e5..6c0080f 100644
--- a/arch/powerpc/include/asm/exception-64s.h
+++ b/arch/powerpc/include/asm/exception-64s.h
@@ -34,6 +34,7 @@
  * exception handlers (including pSeries LPAR) and iSeries LPAR
  * implementations as possible.
  */
+#include <asm/head-64.h>
 
 #define EX_R9		0
 #define EX_R10		8
@@ -171,6 +172,7 @@ END_FTR_SECTION_NESTED(ftr,ftr,943)
 	std	r12,area+EX_R12(r13);					\
 	GET_SCRATCH0(r10);						\
 	std	r10,area+EX_R13(r13)
+
 #define EXCEPTION_PROLOG_1(area, extra, vec)				\
 	__EXCEPTION_PROLOG_1(area, extra, vec)
 
@@ -192,10 +194,10 @@ END_FTR_SECTION_NESTED(ftr,ftr,943)
 	EXCEPTION_PROLOG_1(area, extra, vec);				\
 	EXCEPTION_PROLOG_PSERIES_1(label, h);
 
-#define __KVMTEST(n)							\
-	lbz	r10,HSTATE_IN_GUEST(r13);			\
+#define __KVMTEST(h, n)							\
+	lbz	r10,HSTATE_IN_GUEST(r13);				\
 	cmpwi	r10,0;							\
-	bne	do_kvm_##n
+	bne	do_kvm_##h##n
 
 #ifdef CONFIG_KVM_BOOK3S_HV_POSSIBLE
 /*
@@ -208,8 +210,7 @@ END_FTR_SECTION_NESTED(ftr,ftr,943)
 #define kvmppc_interrupt kvmppc_interrupt_pr
 #endif
 
-#define __KVM_HANDLER(area, h, n)					\
-do_kvm_##n:								\
+#define __KVM_HANDLER_PROLOG(area, n)					\
 	BEGIN_FTR_SECTION_NESTED(947)					\
 	ld	r10,area+EX_CFAR(r13);					\
 	std	r10,HSTATE_CFAR(r13);					\
@@ -222,21 +223,23 @@ do_kvm_##n:								\
 	stw	r9,HSTATE_SCRATCH1(r13);				\
 	ld	r9,area+EX_R9(r13);					\
 	std	r12,HSTATE_SCRATCH0(r13);				\
+
+#define __KVM_HANDLER(area, h, n)					\
+	__KVM_HANDLER_PROLOG(area, n)					\
 	li	r12,n;							\
 	b	kvmppc_interrupt
 
 #define __KVM_HANDLER_SKIP(area, h, n)					\
-do_kvm_##n:								\
 	cmpwi	r10,KVM_GUEST_MODE_SKIP;				\
 	ld	r10,area+EX_R10(r13);					\
 	beq	89f;							\
-	stw	r9,HSTATE_SCRATCH1(r13);			\
+	stw	r9,HSTATE_SCRATCH1(r13);				\
 	BEGIN_FTR_SECTION_NESTED(948)					\
 	ld	r9,area+EX_PPR(r13);					\
 	std	r9,HSTATE_PPR(r13);					\
 	END_FTR_SECTION_NESTED(CPU_FTR_HAS_PPR,CPU_FTR_HAS_PPR,948);	\
 	ld	r9,area+EX_R9(r13);					\
-	std	r12,HSTATE_SCRATCH0(r13);			\
+	std	r12,HSTATE_SCRATCH0(r13);				\
 	li	r12,n;							\
 	b	kvmppc_interrupt;					\
 89:	mtocrf	0x80,r9;						\
@@ -244,12 +247,12 @@ do_kvm_##n:								\
 	b	kvmppc_skip_##h##interrupt
 
 #ifdef CONFIG_KVM_BOOK3S_64_HANDLER
-#define KVMTEST(n)			__KVMTEST(n)
+#define KVMTEST(h, n)			__KVMTEST(h, n)
 #define KVM_HANDLER(area, h, n)		__KVM_HANDLER(area, h, n)
 #define KVM_HANDLER_SKIP(area, h, n)	__KVM_HANDLER_SKIP(area, h, n)
 
 #else
-#define KVMTEST(n)
+#define KVMTEST(h, n)
 #define KVM_HANDLER(area, h, n)
 #define KVM_HANDLER_SKIP(area, h, n)
 #endif
@@ -333,69 +336,50 @@ do_kvm_##n:								\
 /*
  * Exception vectors.
  */
-#define STD_EXCEPTION_PSERIES(vec, label)		\
-	. = vec;					\
-	.globl label##_pSeries;				\
-label##_pSeries:					\
+#define STD_EXCEPTION_PSERIES(vec, label)			\
 	SET_SCRATCH0(r13);		/* save r13 */		\
-	EXCEPTION_PROLOG_PSERIES(PACA_EXGEN, label##_common,	\
-				 EXC_STD, KVMTEST, vec)
+	EXCEPTION_PROLOG_PSERIES(PACA_EXGEN, label,		\
+				 EXC_STD, KVMTEST_PR, vec);	\
 
 /* Version of above for when we have to branch out-of-line */
+#define __OOL_EXCEPTION(vec, label, hdlr)			\
+	SET_SCRATCH0(r13)					\
+	EXCEPTION_PROLOG_0(PACA_EXGEN)				\
+	b hdlr;
+
 #define STD_EXCEPTION_PSERIES_OOL(vec, label)			\
-	.globl label##_pSeries;					\
-label##_pSeries:						\
-	EXCEPTION_PROLOG_1(PACA_EXGEN, KVMTEST, vec);	\
-	EXCEPTION_PROLOG_PSERIES_1(label##_common, EXC_STD)
-
-#define STD_EXCEPTION_HV(loc, vec, label)		\
-	. = loc;					\
-	.globl label##_hv;				\
-label##_hv:						\
+	EXCEPTION_PROLOG_1(PACA_EXGEN, KVMTEST_PR, vec);	\
+	EXCEPTION_PROLOG_PSERIES_1(label, EXC_STD)
+
+#define STD_EXCEPTION_HV(loc, vec, label)			\
 	SET_SCRATCH0(r13);	/* save r13 */			\
-	EXCEPTION_PROLOG_PSERIES(PACA_EXGEN, label##_common,	\
-				 EXC_HV, KVMTEST, vec)
+	EXCEPTION_PROLOG_PSERIES(PACA_EXGEN, label,		\
+				 EXC_HV, KVMTEST_HV, vec);
 
-/* Version of above for when we have to branch out-of-line */
-#define STD_EXCEPTION_HV_OOL(vec, label)		\
-	.globl label##_hv;				\
-label##_hv:						\
-	EXCEPTION_PROLOG_1(PACA_EXGEN, KVMTEST, vec);	\
-	EXCEPTION_PROLOG_PSERIES_1(label##_common, EXC_HV)
+#define STD_EXCEPTION_HV_OOL(vec, label)			\
+	EXCEPTION_PROLOG_1(PACA_EXGEN, KVMTEST_HV, vec);	\
+	EXCEPTION_PROLOG_PSERIES_1(label, EXC_HV)
 
 #define STD_RELON_EXCEPTION_PSERIES(loc, vec, label)	\
-	. = loc;					\
-	.globl label##_relon_pSeries;			\
-label##_relon_pSeries:					\
 	/* No guest interrupts come through here */	\
 	SET_SCRATCH0(r13);		/* save r13 */	\
-	EXCEPTION_RELON_PROLOG_PSERIES(PACA_EXGEN, label##_common, \
-				       EXC_STD, NOTEST, vec)
+	EXCEPTION_RELON_PROLOG_PSERIES(PACA_EXGEN, label, EXC_STD, NOTEST, vec);
 
 #define STD_RELON_EXCEPTION_PSERIES_OOL(vec, label)		\
-	.globl label##_relon_pSeries;				\
-label##_relon_pSeries:						\
 	EXCEPTION_PROLOG_1(PACA_EXGEN, NOTEST, vec);		\
-	EXCEPTION_RELON_PROLOG_PSERIES_1(label##_common, EXC_STD)
+	EXCEPTION_RELON_PROLOG_PSERIES_1(label, EXC_STD)
 
 #define STD_RELON_EXCEPTION_HV(loc, vec, label)		\
-	. = loc;					\
-	.globl label##_relon_hv;			\
-label##_relon_hv:					\
 	/* No guest interrupts come through here */	\
 	SET_SCRATCH0(r13);	/* save r13 */		\
-	EXCEPTION_RELON_PROLOG_PSERIES(PACA_EXGEN, label##_common, \
-				       EXC_HV, NOTEST, vec)
+	EXCEPTION_RELON_PROLOG_PSERIES(PACA_EXGEN, label, EXC_HV, NOTEST, vec);
 
 #define STD_RELON_EXCEPTION_HV_OOL(vec, label)			\
-	.globl label##_relon_hv;				\
-label##_relon_hv:						\
 	EXCEPTION_PROLOG_1(PACA_EXGEN, NOTEST, vec);		\
-	EXCEPTION_RELON_PROLOG_PSERIES_1(label##_common, EXC_HV)
+	EXCEPTION_RELON_PROLOG_PSERIES_1(label, EXC_HV)
 
 /* This associate vector numbers with bits in paca->irq_happened */
 #define SOFTEN_VALUE_0x500	PACA_IRQ_EE
-#define SOFTEN_VALUE_0x502	PACA_IRQ_EE
 #define SOFTEN_VALUE_0x900	PACA_IRQ_DEC
 #define SOFTEN_VALUE_0x982	PACA_IRQ_DEC
 #define SOFTEN_VALUE_0xa00	PACA_IRQ_DBELL
@@ -411,16 +395,23 @@ label##_relon_hv:						\
 	cmpwi	r10,0;							\
 	li	r10,SOFTEN_VALUE_##vec;					\
 	beq	masked_##h##interrupt
+
 #define _SOFTEN_TEST(h, vec)	__SOFTEN_TEST(h, vec)
 
 #define SOFTEN_TEST_PR(vec)						\
-	KVMTEST(vec);							\
+	KVMTEST(EXC_STD, vec);						\
 	_SOFTEN_TEST(EXC_STD, vec)
 
 #define SOFTEN_TEST_HV(vec)						\
-	KVMTEST(vec);							\
+	KVMTEST(EXC_HV, vec);						\
 	_SOFTEN_TEST(EXC_HV, vec)
 
+#define KVMTEST_PR(vec)							\
+	KVMTEST(EXC_STD, vec)
+
+#define KVMTEST_HV(vec)							\
+	KVMTEST(EXC_HV, vec)
+
 #define SOFTEN_NOTEST_PR(vec)		_SOFTEN_TEST(EXC_STD, vec)
 #define SOFTEN_NOTEST_HV(vec)		_SOFTEN_TEST(EXC_HV, vec)
 
@@ -428,58 +419,47 @@ label##_relon_hv:						\
 	SET_SCRATCH0(r13);    /* save r13 */				\
 	EXCEPTION_PROLOG_0(PACA_EXGEN);					\
 	__EXCEPTION_PROLOG_1(PACA_EXGEN, extra, vec);			\
-	EXCEPTION_PROLOG_PSERIES_1(label##_common, h);
+	EXCEPTION_PROLOG_PSERIES_1(label, h);
 
 #define _MASKABLE_EXCEPTION_PSERIES(vec, label, h, extra)		\
 	__MASKABLE_EXCEPTION_PSERIES(vec, label, h, extra)
 
 #define MASKABLE_EXCEPTION_PSERIES(loc, vec, label)			\
-	. = loc;							\
-	.globl label##_pSeries;						\
-label##_pSeries:							\
 	_MASKABLE_EXCEPTION_PSERIES(vec, label,				\
 				    EXC_STD, SOFTEN_TEST_PR)
 
+#define MASKABLE_EXCEPTION_PSERIES_OOL(vec, label)			\
+	EXCEPTION_PROLOG_1(PACA_EXGEN, SOFTEN_TEST_PR, vec);		\
+	EXCEPTION_PROLOG_PSERIES_1(label, EXC_STD)
+
 #define MASKABLE_EXCEPTION_HV(loc, vec, label)				\
-	. = loc;							\
-	.globl label##_hv;						\
-label##_hv:								\
 	_MASKABLE_EXCEPTION_PSERIES(vec, label,				\
 				    EXC_HV, SOFTEN_TEST_HV)
 
 #define MASKABLE_EXCEPTION_HV_OOL(vec, label)				\
-	.globl label##_hv;						\
-label##_hv:								\
 	EXCEPTION_PROLOG_1(PACA_EXGEN, SOFTEN_TEST_HV, vec);		\
-	EXCEPTION_PROLOG_PSERIES_1(label##_common, EXC_HV);
+	EXCEPTION_PROLOG_PSERIES_1(label, EXC_HV)
 
 #define __MASKABLE_RELON_EXCEPTION_PSERIES(vec, label, h, extra)	\
 	SET_SCRATCH0(r13);    /* save r13 */				\
 	EXCEPTION_PROLOG_0(PACA_EXGEN);					\
-	__EXCEPTION_PROLOG_1(PACA_EXGEN, extra, vec);		\
-	EXCEPTION_RELON_PROLOG_PSERIES_1(label##_common, h);
-#define _MASKABLE_RELON_EXCEPTION_PSERIES(vec, label, h, extra)	\
+	__EXCEPTION_PROLOG_1(PACA_EXGEN, extra, vec);			\
+	EXCEPTION_RELON_PROLOG_PSERIES_1(label, h)
+
+#define _MASKABLE_RELON_EXCEPTION_PSERIES(vec, label, h, extra)		\
 	__MASKABLE_RELON_EXCEPTION_PSERIES(vec, label, h, extra)
 
 #define MASKABLE_RELON_EXCEPTION_PSERIES(loc, vec, label)		\
-	. = loc;							\
-	.globl label##_relon_pSeries;					\
-label##_relon_pSeries:							\
 	_MASKABLE_RELON_EXCEPTION_PSERIES(vec, label,			\
 					  EXC_STD, SOFTEN_NOTEST_PR)
 
 #define MASKABLE_RELON_EXCEPTION_HV(loc, vec, label)			\
-	. = loc;							\
-	.globl label##_relon_hv;					\
-label##_relon_hv:							\
 	_MASKABLE_RELON_EXCEPTION_PSERIES(vec, label,			\
 					  EXC_HV, SOFTEN_NOTEST_HV)
 
 #define MASKABLE_RELON_EXCEPTION_HV_OOL(vec, label)			\
-	.globl label##_relon_hv;					\
-label##_relon_hv:							\
 	EXCEPTION_PROLOG_1(PACA_EXGEN, SOFTEN_NOTEST_HV, vec);		\
-	EXCEPTION_PROLOG_PSERIES_1(label##_common, EXC_HV);
+	EXCEPTION_PROLOG_PSERIES_1(label, EXC_HV)
 
 /*
  * Our exception common code can be passed various "additions"
@@ -505,9 +485,6 @@ BEGIN_FTR_SECTION				\
 END_FTR_SECTION_IFSET(CPU_FTR_CTRL)
 
 #define EXCEPTION_COMMON(trap, label, hdlr, ret, additions)	\
-	.align	7;						\
-	.globl label##_common;					\
-label##_common:							\
 	EXCEPTION_PROLOG_COMMON(trap, PACA_EXGEN);		\
 	/* Volatile regs are potentially clobbered here */	\
 	additions;						\
diff --git a/arch/powerpc/include/asm/head-64.h b/arch/powerpc/include/asm/head-64.h
new file mode 100644
index 0000000..cf44542
--- /dev/null
+++ b/arch/powerpc/include/asm/head-64.h
@@ -0,0 +1,186 @@
+#ifndef _ASM_POWERPC_HEAD_64_H
+#define _ASM_POWERPC_HEAD_64_H
+
+#include <asm/cache.h>
+
+#define VECTOR_HANDLER_REAL_BEGIN(name, start, end)			\
+	. = start ;							\
+	.global exc_##start##_##name ;					\
+exc_##start##_##name:
+
+#define VECTOR_HANDLER_REAL_END(name, start, end)
+
+#define VECTOR_HANDLER_VIRT_BEGIN(name, start, end)			\
+	. = start ;							\
+	.global exc_##start##_##name ;					\
+exc_##start##_##name:
+
+#define VECTOR_HANDLER_VIRT_END(name, start, end)
+
+#define COMMON_HANDLER_BEGIN(name)					\
+	.global name;							\
+name:
+
+#define COMMON_HANDLER_END(name)
+
+#define TRAMP_HANDLER_BEGIN(name)					\
+	.global name ;							\
+name:
+
+#define TRAMP_HANDLER_END(name)
+
+#ifdef CONFIG_KVM_BOOK3S_64_HANDLER
+#define TRAMP_KVM_BEGIN(name)						\
+	TRAMP_HANDLER_BEGIN(name)
+
+#define TRAMP_KVM_END(name)						\
+	TRAMP_HANDLER_END(name)
+#else
+#define TRAMP_KVM_BEGIN(name)
+#define TRAMP_KVM_END(name)
+#endif
+
+#define VECTOR_HANDLER_REAL_NONE(start, end)
+
+#define VECTOR_HANDLER_VIRT_NONE(start, end)
+
+
+#define VECTOR_HANDLER_REAL(name, start, end)				\
+	VECTOR_HANDLER_REAL_BEGIN(name, start, end);			\
+	STD_EXCEPTION_PSERIES(start, name##_common);			\
+	VECTOR_HANDLER_REAL_END(name, start, end);
+
+#define VECTOR_HANDLER_VIRT(name, start, end, realvec)			\
+	VECTOR_HANDLER_VIRT_BEGIN(name, start, end);			\
+	STD_RELON_EXCEPTION_PSERIES(start, realvec, name##_common);	\
+	VECTOR_HANDLER_VIRT_END(name, start, end);
+
+#define VECTOR_HANDLER_REAL_MASKABLE(name, start, end)			\
+	VECTOR_HANDLER_REAL_BEGIN(name, start, end);			\
+	MASKABLE_EXCEPTION_PSERIES(start, start, name##_common);	\
+	VECTOR_HANDLER_REAL_END(name, start, end);
+
+#define VECTOR_HANDLER_VIRT_MASKABLE(name, start, end, realvec)		\
+	VECTOR_HANDLER_VIRT_BEGIN(name, start, end);			\
+	MASKABLE_RELON_EXCEPTION_PSERIES(start, realvec, name##_common); \
+	VECTOR_HANDLER_VIRT_END(name, start, end);
+
+#define VECTOR_HANDLER_REAL_HV(name, start, end)			\
+	VECTOR_HANDLER_REAL_BEGIN(name, start, end);			\
+	STD_EXCEPTION_HV(start, start + 0x2, name##_common);		\
+	VECTOR_HANDLER_REAL_END(name, start, end);
+
+#define VECTOR_HANDLER_VIRT_HV(name, start, end, realvec)		\
+	VECTOR_HANDLER_VIRT_BEGIN(name, start, end);			\
+	STD_RELON_EXCEPTION_HV(start, realvec + 0x2, name##_common);	\
+	VECTOR_HANDLER_VIRT_END(name, start, end);
+
+#define __VECTOR_HANDLER_REAL_OOL(name, start, end)			\
+	VECTOR_HANDLER_REAL_BEGIN(name, start, end);			\
+	__OOL_EXCEPTION(start, label, tramp_real_##name);		\
+	VECTOR_HANDLER_REAL_END(name, start, end);
+
+#define __TRAMP_HANDLER_REAL_OOL(name, vec)				\
+	TRAMP_HANDLER_BEGIN(tramp_real_##name);				\
+	STD_EXCEPTION_PSERIES_OOL(vec, name##_common);			\
+	TRAMP_HANDLER_END(tramp_real_##name);
+
+#define __VECTOR_HANDLER_REAL_OOL_MASKABLE(name, start, end)		\
+	__VECTOR_HANDLER_REAL_OOL(name, start, end);
+
+#define __TRAMP_HANDLER_REAL_OOL_MASKABLE(name, vec)			\
+	TRAMP_HANDLER_BEGIN(tramp_real_##name);				\
+	MASKABLE_EXCEPTION_PSERIES_OOL(vec, name##_common);		\
+	TRAMP_HANDLER_END(tramp_real_##name);
+
+#define __VECTOR_HANDLER_REAL_OOL_HV_DIRECT(name, start, end, handler)	\
+	VECTOR_HANDLER_REAL_BEGIN(name, start, end);			\
+	__OOL_EXCEPTION(start, label, handler);				\
+	VECTOR_HANDLER_REAL_END(name, start, end);
+
+#define __VECTOR_HANDLER_REAL_OOL_HV(name, start, end)			\
+	__VECTOR_HANDLER_REAL_OOL(name, start, end);
+
+#define __TRAMP_HANDLER_REAL_OOL_HV(name, vec)				\
+	TRAMP_HANDLER_BEGIN(tramp_real_##name);				\
+	STD_EXCEPTION_HV_OOL(vec + 0x2, name##_common);			\
+	TRAMP_HANDLER_END(tramp_real_##name);
+
+#define __VECTOR_HANDLER_REAL_OOL_MASKABLE_HV(name, start, end)		\
+	__VECTOR_HANDLER_REAL_OOL(name, start, end);
+
+#define __TRAMP_HANDLER_REAL_OOL_MASKABLE_HV(name, vec)			\
+	TRAMP_HANDLER_BEGIN(tramp_real_##name);				\
+	MASKABLE_EXCEPTION_HV_OOL(vec, name##_common);			\
+	TRAMP_HANDLER_END(tramp_real_##name);
+
+#define __VECTOR_HANDLER_VIRT_OOL(name, start, end)			\
+	VECTOR_HANDLER_VIRT_BEGIN(name, start, end);			\
+	__OOL_EXCEPTION(start, label, tramp_virt_##name);		\
+	VECTOR_HANDLER_VIRT_END(name, start, end);
+
+#define __TRAMP_HANDLER_VIRT_OOL(name, realvec)				\
+	TRAMP_HANDLER_BEGIN(tramp_virt_##name);				\
+	STD_RELON_EXCEPTION_PSERIES_OOL(realvec, name##_common);	\
+	TRAMP_HANDLER_END(tramp_virt_##name);
+
+#define __VECTOR_HANDLER_VIRT_OOL_MASKABLE(name, start, end)		\
+	__VECTOR_HANDLER_VIRT_OOL(name, start, end);
+
+#define __TRAMP_HANDLER_VIRT_OOL_MASKABLE(name, realvec)		\
+	TRAMP_HANDLER_BEGIN(tramp_virt_##name);				\
+	MASKABLE_RELON_EXCEPTION_PSERIES_OOL(realvec, name##_common);	\
+	TRAMP_HANDLER_END(tramp_virt_##name);
+
+#define __VECTOR_HANDLER_VIRT_OOL_HV(name, start, end)			\
+	__VECTOR_HANDLER_VIRT_OOL(name, start, end);
+
+#define __TRAMP_HANDLER_VIRT_OOL_HV(name, realvec)			\
+	TRAMP_HANDLER_BEGIN(tramp_virt_##name);				\
+	STD_RELON_EXCEPTION_HV_OOL(realvec, name##_common);		\
+	TRAMP_HANDLER_END(tramp_virt_##name);
+
+#define __VECTOR_HANDLER_VIRT_OOL_MASKABLE_HV(name, start, end)		\
+	__VECTOR_HANDLER_VIRT_OOL(name, start, end);
+
+#define __TRAMP_HANDLER_VIRT_OOL_MASKABLE_HV(name, realvec)		\
+	TRAMP_HANDLER_BEGIN(tramp_virt_##name);				\
+	MASKABLE_RELON_EXCEPTION_HV_OOL(realvec, name##_common);	\
+	TRAMP_HANDLER_END(tramp_virt_##name);
+
+#define TRAMP_KVM(area, n)						\
+	TRAMP_KVM_BEGIN(do_kvm_##n);					\
+	KVM_HANDLER(area, EXC_STD, n);					\
+	TRAMP_KVM_END(do_kvm_##n)
+
+#define TRAMP_KVM_SKIP(area, n)						\
+	TRAMP_KVM_BEGIN(do_kvm_##n);					\
+	KVM_HANDLER_SKIP(area, EXC_STD, n);				\
+	TRAMP_KVM_END(do_kvm_##n)
+
+#define TRAMP_KVM_HV(area, n)						\
+	TRAMP_KVM_BEGIN(do_kvm_H##n);					\
+	KVM_HANDLER(area, EXC_HV, n + 0x2);				\
+	TRAMP_KVM_END(do_kvm_H##n)
+
+#define TRAMP_KVM_HV_SKIP(area, n)					\
+	TRAMP_KVM_BEGIN(do_kvm_H##n);					\
+	KVM_HANDLER_SKIP(area, EXC_HV, n + 0x2);			\
+	TRAMP_KVM_END(do_kvm_H##n)
+
+#define COMMON_HANDLER(name, realvec, hdlr)				\
+	COMMON_HANDLER_BEGIN(name);					\
+	STD_EXCEPTION_COMMON(realvec, name, hdlr);			\
+	COMMON_HANDLER_END(name);
+
+#define COMMON_HANDLER_ASYNC(name, realvec, hdlr)			\
+	COMMON_HANDLER_BEGIN(name);					\
+	STD_EXCEPTION_COMMON_ASYNC(realvec, name, hdlr);		\
+	COMMON_HANDLER_END(name);
+
+#define COMMON_HANDLER_HV(name, realvec, hdlr)				\
+	COMMON_HANDLER_BEGIN(name);					\
+	STD_EXCEPTION_COMMON(realvec + 0x2, name, hdlr);		\
+	COMMON_HANDLER_END(name);
+
+#endif	/* _ASM_POWERPC_HEAD_64_H */
diff --git a/arch/powerpc/kernel/exceptions-64s.S b/arch/powerpc/kernel/exceptions-64s.S
index ea57a2c..fc682ac 100644
--- a/arch/powerpc/kernel/exceptions-64s.S
+++ b/arch/powerpc/kernel/exceptions-64s.S
@@ -16,6 +16,7 @@
 #include <asm/exception-64s.h>
 #include <asm/ptrace.h>
 #include <asm/cpuidle.h>
+#include <asm/head-64.h>
 
 /*
  * We layout physical memory as follows:
@@ -94,8 +95,7 @@ END_FTR_SECTION_IFSET(CPU_FTR_REAL_LE)				\
 	.globl __start_interrupts
 __start_interrupts:
 
-	.globl system_reset_pSeries;
-system_reset_pSeries:
+VECTOR_HANDLER_REAL_BEGIN(system_reset, 0x100, 0x200)
 	SET_SCRATCH0(r13)
 #ifdef CONFIG_PPC_P7_NAP
 BEGIN_FTR_SECTION
@@ -136,9 +136,9 @@ END_FTR_SECTION_IFSET(CPU_FTR_HVMODE | CPU_FTR_ARCH_206)
 #endif /* CONFIG_PPC_P7_NAP */
 	EXCEPTION_PROLOG_PSERIES(PACA_EXGEN, system_reset_common, EXC_STD,
 				 NOTEST, 0x100)
+VECTOR_HANDLER_REAL_END(system_reset, 0x100, 0x200)
 
-	. = 0x200
-machine_check_pSeries_1:
+VECTOR_HANDLER_REAL_BEGIN(machine_check, 0x200, 0x300)
 	/* This is moved out of line as it can be patched by FW, but
 	 * some code path might still want to branch into the original
 	 * vector
@@ -158,20 +158,14 @@ BEGIN_FTR_SECTION
 FTR_SECTION_ELSE
 	b	machine_check_pSeries_0
 ALT_FTR_SECTION_END_IFSET(CPU_FTR_HVMODE)
+VECTOR_HANDLER_REAL_END(machine_check, 0x200, 0x300)
 
-	. = 0x300
-	.globl data_access_pSeries
-data_access_pSeries:
-	SET_SCRATCH0(r13)
-	EXCEPTION_PROLOG_PSERIES(PACA_EXGEN, data_access_common, EXC_STD,
-				 KVMTEST, 0x300)
+VECTOR_HANDLER_REAL(data_access, 0x300, 0x380)
 
-	. = 0x380
-	.globl data_access_slb_pSeries
-data_access_slb_pSeries:
+VECTOR_HANDLER_REAL_BEGIN(data_access_slb, 0x380, 0x400)
 	SET_SCRATCH0(r13)
 	EXCEPTION_PROLOG_0(PACA_EXSLB)
-	EXCEPTION_PROLOG_1(PACA_EXSLB, KVMTEST, 0x380)
+	EXCEPTION_PROLOG_1(PACA_EXSLB, KVMTEST_PR, 0x380)
 	std	r3,PACA_EXSLB+EX_R3(r13)
 	mfspr	r3,SPRN_DAR
 	mfspr	r12,SPRN_SRR1
@@ -189,15 +183,14 @@ data_access_slb_pSeries:
 	mtctr	r10
 	bctr
 #endif
+VECTOR_HANDLER_REAL_END(data_access_slb, 0x380, 0x400)
 
-	STD_EXCEPTION_PSERIES(0x400, instruction_access)
+VECTOR_HANDLER_REAL(instruction_access, 0x400, 0x480)
 
-	. = 0x480
-	.globl instruction_access_slb_pSeries
-instruction_access_slb_pSeries:
+VECTOR_HANDLER_REAL_BEGIN(instruction_access_slb, 0x480, 0x500)
 	SET_SCRATCH0(r13)
 	EXCEPTION_PROLOG_0(PACA_EXSLB)
-	EXCEPTION_PROLOG_1(PACA_EXSLB, KVMTEST, 0x480)
+	EXCEPTION_PROLOG_1(PACA_EXSLB, KVMTEST_PR, 0x480)
 	std	r3,PACA_EXSLB+EX_R3(r13)
 	mfspr	r3,SPRN_SRR0		/* SRR0 is faulting address */
 	mfspr	r12,SPRN_SRR1
@@ -210,50 +203,52 @@ instruction_access_slb_pSeries:
 	mtctr	r10
 	bctr
 #endif
+VECTOR_HANDLER_REAL_END(instruction_access_slb, 0x480, 0x500)
 
 	/* We open code these as we can't have a ". = x" (even with
 	 * x = "." within a feature section
 	 */
-	. = 0x500;
-	.globl hardware_interrupt_pSeries;
+VECTOR_HANDLER_REAL_BEGIN(hardware_interrupt, 0x500, 0x600)
 	.globl hardware_interrupt_hv;
-hardware_interrupt_pSeries:
 hardware_interrupt_hv:
 	BEGIN_FTR_SECTION
-		_MASKABLE_EXCEPTION_PSERIES(0x502, hardware_interrupt,
+		_MASKABLE_EXCEPTION_PSERIES(0x500, hardware_interrupt_common,
 					    EXC_HV, SOFTEN_TEST_HV)
+do_kvm_H0x500:
 		KVM_HANDLER(PACA_EXGEN, EXC_HV, 0x502)
 	FTR_SECTION_ELSE
-		_MASKABLE_EXCEPTION_PSERIES(0x500, hardware_interrupt,
+		_MASKABLE_EXCEPTION_PSERIES(0x500, hardware_interrupt_common,
 					    EXC_STD, SOFTEN_TEST_PR)
+do_kvm_0x500:
 		KVM_HANDLER(PACA_EXGEN, EXC_STD, 0x500)
 	ALT_FTR_SECTION_END_IFSET(CPU_FTR_HVMODE | CPU_FTR_ARCH_206)
+VECTOR_HANDLER_REAL_END(hardware_interrupt, 0x500, 0x600)
+
+VECTOR_HANDLER_REAL(alignment, 0x600, 0x700)
+
+TRAMP_KVM(PACA_EXGEN, 0x600)
+
+VECTOR_HANDLER_REAL(program_check, 0x700, 0x800)
+
+TRAMP_KVM(PACA_EXGEN, 0x700)
+
+VECTOR_HANDLER_REAL(fp_unavailable, 0x800, 0x900)
 
-	STD_EXCEPTION_PSERIES(0x600, alignment)
-	KVM_HANDLER(PACA_EXGEN, EXC_STD, 0x600)
+TRAMP_KVM(PACA_EXGEN, 0x800)
 
-	STD_EXCEPTION_PSERIES(0x700, program_check)
-	KVM_HANDLER(PACA_EXGEN, EXC_STD, 0x700)
+VECTOR_HANDLER_REAL_MASKABLE(decrementer, 0x900, 0x980)
 
-	STD_EXCEPTION_PSERIES(0x800, fp_unavailable)
-	KVM_HANDLER(PACA_EXGEN, EXC_STD, 0x800)
+VECTOR_HANDLER_REAL_HV(hdecrementer, 0x980, 0xa00)
 
-	. = 0x900
-	.globl decrementer_pSeries
-decrementer_pSeries:
-	_MASKABLE_EXCEPTION_PSERIES(0x900, decrementer, EXC_STD, SOFTEN_TEST_PR)
+VECTOR_HANDLER_REAL_MASKABLE(doorbell_super, 0xa00, 0xb00)
 
-	STD_EXCEPTION_HV(0x980, 0x982, hdecrementer)
+TRAMP_KVM(PACA_EXGEN, 0xa00)
 
-	MASKABLE_EXCEPTION_PSERIES(0xa00, 0xa00, doorbell_super)
-	KVM_HANDLER(PACA_EXGEN, EXC_STD, 0xa00)
+VECTOR_HANDLER_REAL(trap_0b, 0xb00, 0xc00)
 
-	STD_EXCEPTION_PSERIES(0xb00, trap_0b)
-	KVM_HANDLER(PACA_EXGEN, EXC_STD, 0xb00)
+TRAMP_KVM(PACA_EXGEN, 0xb00)
 
-	. = 0xc00
-	.globl	system_call_pSeries
-system_call_pSeries:
+VECTOR_HANDLER_REAL_BEGIN(system_call, 0xc00, 0xd00)
 	 /*
 	  * If CONFIG_KVM_BOOK3S_64_HANDLER is set, save the PPR (on systems
 	  * that support it) before changing to HMT_MEDIUM. That allows the KVM
@@ -270,7 +265,7 @@ system_call_pSeries:
 	std	r10,PACA_EXGEN+EX_R10(r13)
 	OPT_SAVE_REG_TO_PACA(PACA_EXGEN+EX_PPR, r9, CPU_FTR_HAS_PPR);
 	mfcr	r9
-	KVMTEST(0xc00)
+	KVMTEST_PR(0xc00)
 	GET_SCRATCH0(r13)
 #else
 	HMT_MEDIUM;
@@ -278,96 +273,59 @@ system_call_pSeries:
 	SYSCALL_PSERIES_1
 	SYSCALL_PSERIES_2_RFID
 	SYSCALL_PSERIES_3
-	KVM_HANDLER(PACA_EXGEN, EXC_STD, 0xc00)
+VECTOR_HANDLER_REAL_END(system_call, 0xc00, 0xd00)
+
+TRAMP_KVM(PACA_EXGEN, 0xc00)
+
+VECTOR_HANDLER_REAL(single_step, 0xd00, 0xe00)
+
+TRAMP_KVM(PACA_EXGEN, 0xd00)
 
-	STD_EXCEPTION_PSERIES(0xd00, single_step)
-	KVM_HANDLER(PACA_EXGEN, EXC_STD, 0xd00)
 
 	/* At 0xe??? we have a bunch of hypervisor exceptions, we branch
 	 * out of line to handle them
 	 */
-	. = 0xe00
-hv_data_storage_trampoline:
-	SET_SCRATCH0(r13)
-	EXCEPTION_PROLOG_0(PACA_EXGEN)
-	b	h_data_storage_hv
+__VECTOR_HANDLER_REAL_OOL_HV(h_data_storage, 0xe00, 0xe20)
 
-	. = 0xe20
-hv_instr_storage_trampoline:
-	SET_SCRATCH0(r13)
-	EXCEPTION_PROLOG_0(PACA_EXGEN)
-	b	h_instr_storage_hv
+__VECTOR_HANDLER_REAL_OOL_HV(h_instr_storage, 0xe20, 0xe40)
 
-	. = 0xe40
-emulation_assist_trampoline:
-	SET_SCRATCH0(r13)
-	EXCEPTION_PROLOG_0(PACA_EXGEN)
-	b	emulation_assist_hv
+__VECTOR_HANDLER_REAL_OOL_HV(emulation_assist, 0xe40, 0xe60)
 
-	. = 0xe60
-hv_exception_trampoline:
-	SET_SCRATCH0(r13)
-	EXCEPTION_PROLOG_0(PACA_EXGEN)
-	b	hmi_exception_early
+__VECTOR_HANDLER_REAL_OOL_HV_DIRECT(hmi_exception, 0xe60, 0xe80, hmi_exception_early)
 
-	. = 0xe80
-hv_doorbell_trampoline:
-	SET_SCRATCH0(r13)
-	EXCEPTION_PROLOG_0(PACA_EXGEN)
-	b	h_doorbell_hv
+__VECTOR_HANDLER_REAL_OOL_MASKABLE_HV(h_doorbell, 0xe80, 0xea0)
 
-	. = 0xea0
-hv_virt_irq_trampoline:
-	SET_SCRATCH0(r13)
-	EXCEPTION_PROLOG_0(PACA_EXGEN)
-	b	h_virt_irq_hv
+__VECTOR_HANDLER_REAL_OOL_MASKABLE_HV(h_virt_irq, 0xea0, 0xec0)
 
-	/* We need to deal with the Altivec unavailable exception
-	 * here which is at 0xf20, thus in the middle of the
-	 * prolog code of the PerformanceMonitor one. A little
-	 * trickery is thus necessary
-	 */
-	. = 0xf00
-performance_monitor_pseries_trampoline:
-	SET_SCRATCH0(r13)
-	EXCEPTION_PROLOG_0(PACA_EXGEN)
-	b	performance_monitor_pSeries
+VECTOR_HANDLER_REAL_NONE(0xec0, 0xf00)
 
-	. = 0xf20
-altivec_unavailable_pseries_trampoline:
-	SET_SCRATCH0(r13)
-	EXCEPTION_PROLOG_0(PACA_EXGEN)
-	b	altivec_unavailable_pSeries
+__VECTOR_HANDLER_REAL_OOL(performance_monitor, 0xf00, 0xf20)
 
-	. = 0xf40
-vsx_unavailable_pseries_trampoline:
-	SET_SCRATCH0(r13)
-	EXCEPTION_PROLOG_0(PACA_EXGEN)
-	b	vsx_unavailable_pSeries
+__VECTOR_HANDLER_REAL_OOL(altivec_unavailable, 0xf20, 0xf40)
 
-	. = 0xf60
-facility_unavailable_trampoline:
-	SET_SCRATCH0(r13)
-	EXCEPTION_PROLOG_0(PACA_EXGEN)
-	b	facility_unavailable_pSeries
+__VECTOR_HANDLER_REAL_OOL(vsx_unavailable, 0xf40, 0xf60)
+
+__VECTOR_HANDLER_REAL_OOL(facility_unavailable, 0xf60, 0xf80)
+
+__VECTOR_HANDLER_REAL_OOL_HV(h_facility_unavailable, 0xf80, 0xfa0)
+
+VECTOR_HANDLER_REAL_NONE(0xfa0, 0x1200)
 
-	. = 0xf80
-hv_facility_unavailable_trampoline:
-	SET_SCRATCH0(r13)
-	EXCEPTION_PROLOG_0(PACA_EXGEN)
-	b	hv_facility_unavailable_hv
 
 #ifdef CONFIG_CBE_RAS
-	STD_EXCEPTION_HV(0x1200, 0x1202, cbe_system_error)
-	KVM_HANDLER_SKIP(PACA_EXGEN, EXC_HV, 0x1202)
-#endif /* CONFIG_CBE_RAS */
+VECTOR_HANDLER_REAL_HV(cbe_system_error, 0x1200, 0x1300)
 
-	STD_EXCEPTION_PSERIES(0x1300, instruction_breakpoint)
-	KVM_HANDLER_SKIP(PACA_EXGEN, EXC_STD, 0x1300)
+TRAMP_KVM_HV_SKIP(PACA_EXGEN, 0x1200)
 
-	. = 0x1500
-	.global denorm_exception_hv
-denorm_exception_hv:
+#else /* CONFIG_CBE_RAS */
+VECTOR_HANDLER_REAL_NONE(0x1200, 0x1300)
+#endif
+
+VECTOR_HANDLER_REAL(instruction_breakpoint, 0x1300, 0x1400)
+
+TRAMP_KVM_SKIP(PACA_EXGEN, 0x1300)
+
+VECTOR_HANDLER_REAL_BEGIN(denorm_exception_hv, 0x1500, 0x1600)
 	mtspr	SPRN_SPRG_HSCRATCH0,r13
 	EXCEPTION_PROLOG_0(PACA_EXGEN)
 	EXCEPTION_PROLOG_1(PACA_EXGEN, NOTEST, 0x1500)
@@ -380,31 +338,41 @@ denorm_exception_hv:
 	bne+	denorm_assist
 #endif
 
-	KVMTEST(0x1500)
+	KVMTEST_PR(0x1500)
 	EXCEPTION_PROLOG_PSERIES_1(denorm_common, EXC_HV)
-	KVM_HANDLER_SKIP(PACA_EXGEN, EXC_STD, 0x1500)
+VECTOR_HANDLER_REAL_END(denorm_exception_hv, 0x1500, 0x1600)
+
+TRAMP_KVM_SKIP(PACA_EXGEN, 0x1500)
 
 #ifdef CONFIG_CBE_RAS
-	STD_EXCEPTION_HV(0x1600, 0x1602, cbe_maintenance)
-	KVM_HANDLER_SKIP(PACA_EXGEN, EXC_HV, 0x1602)
-#endif /* CONFIG_CBE_RAS */
+VECTOR_HANDLER_REAL_HV(cbe_maintenance, 0x1600, 0x1700)
 
-	STD_EXCEPTION_PSERIES(0x1700, altivec_assist)
-	KVM_HANDLER(PACA_EXGEN, EXC_STD, 0x1700)
+TRAMP_KVM_HV_SKIP(PACA_EXGEN, 0x1600)
+
+#else /* CONFIG_CBE_RAS */
+VECTOR_HANDLER_REAL_NONE(0x1600, 0x1700)
+#endif
+
+VECTOR_HANDLER_REAL(altivec_assist, 0x1700, 0x1800)
+
+TRAMP_KVM(PACA_EXGEN, 0x1700)
 
 #ifdef CONFIG_CBE_RAS
-	STD_EXCEPTION_HV(0x1800, 0x1802, cbe_thermal)
-	KVM_HANDLER_SKIP(PACA_EXGEN, EXC_HV, 0x1802)
-#else
+VECTOR_HANDLER_REAL_HV(cbe_thermal, 0x1800, 0x1900)
+
+TRAMP_KVM_HV_SKIP(PACA_EXGEN, 0x1800)
+
+#else /* CONFIG_CBE_RAS */
+VECTOR_HANDLER_REAL_NONE(0x1800, 0x1900)
 	. = 0x1800
-#endif /* CONFIG_CBE_RAS */
+#endif
 
 
 /*** Out of line interrupts support ***/
 
-	.align	7
 	/* moved from 0x200 */
-machine_check_powernv_early:
+	.align 7;
+TRAMP_HANDLER_BEGIN(machine_check_powernv_early)
 BEGIN_FTR_SECTION
 	EXCEPTION_PROLOG_1(PACA_EXMC, NOTEST, 0x200)
 	/*
@@ -477,14 +445,15 @@ BEGIN_FTR_SECTION
 	b	1b
 	b	.	/* prevent speculative execution */
 END_FTR_SECTION_IFSET(CPU_FTR_HVMODE)
+TRAMP_HANDLER_END(machine_check_powernv_early)
 
-machine_check_pSeries:
+TRAMP_HANDLER_BEGIN(machine_check_pSeries)
 	.globl machine_check_fwnmi
 machine_check_fwnmi:
 	SET_SCRATCH0(r13)		/* save r13 */
 	EXCEPTION_PROLOG_0(PACA_EXMC)
 machine_check_pSeries_0:
-	EXCEPTION_PROLOG_1(PACA_EXMC, KVMTEST, 0x200)
+	EXCEPTION_PROLOG_1(PACA_EXMC, KVMTEST_PR, 0x200)
 	/*
 	 * The following is essentially EXCEPTION_PROLOG_PSERIES_1 with the
 	 * difference that MSR_RI is not enabled, because PACA_EXMC is being
@@ -502,16 +471,18 @@ machine_check_pSeries_0:
 	rfid
 	b	.	/* prevent speculative execution */
 
-	KVM_HANDLER_SKIP(PACA_EXMC, EXC_STD, 0x200)
-	KVM_HANDLER_SKIP(PACA_EXGEN, EXC_STD, 0x300)
-	KVM_HANDLER_SKIP(PACA_EXSLB, EXC_STD, 0x380)
-	KVM_HANDLER(PACA_EXGEN, EXC_STD, 0x400)
-	KVM_HANDLER(PACA_EXSLB, EXC_STD, 0x480)
-	KVM_HANDLER(PACA_EXGEN, EXC_STD, 0x900)
-	KVM_HANDLER(PACA_EXGEN, EXC_HV, 0x982)
+TRAMP_HANDLER_END(machine_check_pSeries)
+
+TRAMP_KVM_SKIP(PACA_EXMC, 0x200)
+TRAMP_KVM_SKIP(PACA_EXGEN, 0x300)
+TRAMP_KVM_SKIP(PACA_EXSLB, 0x380)
+TRAMP_KVM(PACA_EXGEN, 0x400)
+TRAMP_KVM(PACA_EXSLB, 0x480)
+TRAMP_KVM(PACA_EXGEN, 0x900)
+TRAMP_KVM_HV(PACA_EXGEN, 0x980)
 
 #ifdef CONFIG_PPC_DENORMALISATION
-denorm_assist:
+COMMON_HANDLER_BEGIN(denorm_assist)
 BEGIN_FTR_SECTION
 /*
  * To denormalise we need to move a copy of the register to itself.
@@ -573,35 +544,43 @@ END_FTR_SECTION_IFSET(CPU_FTR_CFAR)
 	HRFID
 	b	.
 #endif
+COMMON_HANDLER_END(denorm_assist)
 
 	.align	7
 	/* moved from 0xe00 */
-	STD_EXCEPTION_HV_OOL(0xe02, h_data_storage)
-	KVM_HANDLER_SKIP(PACA_EXGEN, EXC_HV, 0xe02)
-	STD_EXCEPTION_HV_OOL(0xe22, h_instr_storage)
-	KVM_HANDLER(PACA_EXGEN, EXC_HV, 0xe22)
-	STD_EXCEPTION_HV_OOL(0xe42, emulation_assist)
-	KVM_HANDLER(PACA_EXGEN, EXC_HV, 0xe42)
-	MASKABLE_EXCEPTION_HV_OOL(0xe62, hmi_exception)
-	KVM_HANDLER(PACA_EXGEN, EXC_HV, 0xe62)
+__TRAMP_HANDLER_REAL_OOL_HV(h_data_storage, 0xe00)
+TRAMP_KVM_HV_SKIP(PACA_EXGEN, 0xe00)
 
-	MASKABLE_EXCEPTION_HV_OOL(0xe82, h_doorbell)
-	KVM_HANDLER(PACA_EXGEN, EXC_HV, 0xe82)
+__TRAMP_HANDLER_REAL_OOL_HV(h_instr_storage, 0xe20)
+TRAMP_KVM_HV(PACA_EXGEN, 0xe20)
 
-	MASKABLE_EXCEPTION_HV_OOL(0xea2, h_virt_irq)
-	KVM_HANDLER(PACA_EXGEN, EXC_HV, 0xea2)
+__TRAMP_HANDLER_REAL_OOL_HV(emulation_assist, 0xe40)
+TRAMP_KVM_HV(PACA_EXGEN, 0xe40)
+
+__TRAMP_HANDLER_REAL_OOL_MASKABLE_HV(hmi_exception, 0xe60)
+TRAMP_KVM_HV(PACA_EXGEN, 0xe60)
+
+__TRAMP_HANDLER_REAL_OOL_MASKABLE_HV(h_doorbell, 0xe80)
+TRAMP_KVM_HV(PACA_EXGEN, 0xe80)
+
+__TRAMP_HANDLER_REAL_OOL_MASKABLE_HV(h_virt_irq, 0xea0)
+TRAMP_KVM_HV(PACA_EXGEN, 0xea0)
 
 	/* moved from 0xf00 */
-	STD_EXCEPTION_PSERIES_OOL(0xf00, performance_monitor)
-	KVM_HANDLER(PACA_EXGEN, EXC_STD, 0xf00)
-	STD_EXCEPTION_PSERIES_OOL(0xf20, altivec_unavailable)
-	KVM_HANDLER(PACA_EXGEN, EXC_STD, 0xf20)
-	STD_EXCEPTION_PSERIES_OOL(0xf40, vsx_unavailable)
-	KVM_HANDLER(PACA_EXGEN, EXC_STD, 0xf40)
-	STD_EXCEPTION_PSERIES_OOL(0xf60, facility_unavailable)
-	KVM_HANDLER(PACA_EXGEN, EXC_STD, 0xf60)
-	STD_EXCEPTION_HV_OOL(0xf82, hv_facility_unavailable)
-	KVM_HANDLER(PACA_EXGEN, EXC_HV, 0xf82)
+__TRAMP_HANDLER_REAL_OOL(performance_monitor, 0xf00)
+TRAMP_KVM(PACA_EXGEN, 0xf00)
+
+__TRAMP_HANDLER_REAL_OOL(altivec_unavailable, 0xf20)
+TRAMP_KVM(PACA_EXGEN, 0xf20)
+
+__TRAMP_HANDLER_REAL_OOL(vsx_unavailable, 0xf40)
+TRAMP_KVM(PACA_EXGEN, 0xf40)
+
+__TRAMP_HANDLER_REAL_OOL(facility_unavailable, 0xf60)
+TRAMP_KVM(PACA_EXGEN, 0xf60)
+
+__TRAMP_HANDLER_REAL_OOL_HV(h_facility_unavailable, 0xf80)
+TRAMP_KVM_HV(PACA_EXGEN, 0xf80)
 
 /*
  * An interrupt came in while soft-disabled. We set paca->irq_happened, then:
@@ -654,7 +633,7 @@ masked_##_H##interrupt:					\
  * in the generated frame has EE set to 1 or the exception
  * handler will not properly re-enable them.
  */
-_GLOBAL(__replay_interrupt)
+COMMON_HANDLER_BEGIN(__replay_interrupt)
 	/* We are going to jump to the exception common code which
 	 * will retrieve various register values from the PACA which
 	 * we don't give a damn about, so we don't bother storing them.
@@ -679,22 +658,23 @@ FTR_SECTION_ELSE
 	beq	doorbell_super_common
 ALT_FTR_SECTION_END_IFSET(CPU_FTR_HVMODE)
 	blr
+COMMON_HANDLER_END(__replay_interrupt)
 
 #ifdef CONFIG_PPC_PSERIES
 /*
  * Vectors for the FWNMI option.  Share common code.
  */
-	.globl system_reset_fwnmi
       .align 7
-system_reset_fwnmi:
+TRAMP_HANDLER_BEGIN(system_reset_fwnmi)
 	SET_SCRATCH0(r13)		/* save r13 */
 	EXCEPTION_PROLOG_PSERIES(PACA_EXGEN, system_reset_common, EXC_STD,
 				 NOTEST, 0x100)
+TRAMP_HANDLER_END(system_reset_fwnmi)
 
 #endif /* CONFIG_PPC_PSERIES */
 
 #ifdef CONFIG_KVM_BOOK3S_64_HANDLER
-kvmppc_skip_interrupt:
+TRAMP_HANDLER_BEGIN(kvmppc_skip_interrupt)
 	/*
 	 * Here all GPRs are unchanged from when the interrupt happened
 	 * except for r13, which is saved in SPRG_SCRATCH0.
@@ -705,8 +685,9 @@ kvmppc_skip_interrupt:
 	GET_SCRATCH0(r13)
 	rfid
 	b	.
+TRAMP_HANDLER_END(kvmppc_skip_interrupt)
 
-kvmppc_skip_Hinterrupt:
+TRAMP_HANDLER_BEGIN(kvmppc_skip_Hinterrupt)
 	/*
 	 * Here all GPRs are unchanged from when the interrupt happened
 	 * except for r13, which is saved in SPRG_SCRATCH0.
@@ -717,6 +698,7 @@ kvmppc_skip_Hinterrupt:
 	GET_SCRATCH0(r13)
 	hrfid
 	b	.
+TRAMP_HANDLER_END(kvmppc_skip_Hinterrupt)
 #endif
 
 /*
@@ -728,34 +710,50 @@ kvmppc_skip_Hinterrupt:
 
 /*** Common interrupt handlers ***/
 
-	STD_EXCEPTION_COMMON(0x100, system_reset, system_reset_exception)
+	.align 7;
+COMMON_HANDLER(system_reset_common, 0x100, system_reset_exception)
+	.align 7;
+COMMON_HANDLER_ASYNC(hardware_interrupt_common, 0x500, do_IRQ)
+	.align 7;
+COMMON_HANDLER_ASYNC(decrementer_common, 0x900, timer_interrupt)
+	.align 7;
+COMMON_HANDLER(hdecrementer_common, 0x980, hdec_interrupt)
 
-	STD_EXCEPTION_COMMON_ASYNC(0x500, hardware_interrupt, do_IRQ)
-	STD_EXCEPTION_COMMON_ASYNC(0x900, decrementer, timer_interrupt)
-	STD_EXCEPTION_COMMON(0x980, hdecrementer, hdec_interrupt)
+	.align 7;
 #ifdef CONFIG_PPC_DOORBELL
-	STD_EXCEPTION_COMMON_ASYNC(0xa00, doorbell_super, doorbell_exception)
+COMMON_HANDLER_ASYNC(doorbell_super_common, 0xa00, doorbell_exception)
 #else
-	STD_EXCEPTION_COMMON_ASYNC(0xa00, doorbell_super, unknown_exception)
+COMMON_HANDLER_ASYNC(doorbell_super_common, 0xa00, unknown_exception)
 #endif
-	STD_EXCEPTION_COMMON(0xb00, trap_0b, unknown_exception)
-	STD_EXCEPTION_COMMON(0xd00, single_step, single_step_exception)
-	STD_EXCEPTION_COMMON(0xe00, trap_0e, unknown_exception)
-	STD_EXCEPTION_COMMON(0xe40, emulation_assist, emulation_assist_interrupt)
-	STD_EXCEPTION_COMMON_ASYNC(0xe60, hmi_exception, handle_hmi_exception)
+	.align 7;
+COMMON_HANDLER(trap_0b_common, 0xb00, unknown_exception)
+	.align 7;
+COMMON_HANDLER(single_step_common, 0xd00, single_step_exception)
+	.align 7;
+COMMON_HANDLER(trap_0e_common, 0xe00, unknown_exception)
+	.align 7;
+COMMON_HANDLER(emulation_assist_common, 0xe40, emulation_assist_interrupt)
+	.align 7;
+COMMON_HANDLER_ASYNC(hmi_exception_common, 0xe60, handle_hmi_exception)
+	.align 7;
 #ifdef CONFIG_PPC_DOORBELL
-	STD_EXCEPTION_COMMON_ASYNC(0xe80, h_doorbell, doorbell_exception)
+COMMON_HANDLER_ASYNC(h_doorbell_common, 0xe80, doorbell_exception)
 #else
-	STD_EXCEPTION_COMMON_ASYNC(0xe80, h_doorbell, unknown_exception)
+COMMON_HANDLER_ASYNC(h_doorbell_common, 0xe80, unknown_exception)
 #endif
-	STD_EXCEPTION_COMMON_ASYNC(0xea0, h_virt_irq, do_IRQ)
-	STD_EXCEPTION_COMMON_ASYNC(0xf00, performance_monitor, performance_monitor_exception)
-	STD_EXCEPTION_COMMON(0x1300, instruction_breakpoint, instruction_breakpoint_exception)
-	STD_EXCEPTION_COMMON(0x1502, denorm, unknown_exception)
+	.align 7;
+COMMON_HANDLER_ASYNC(h_virt_irq_common, 0xea0, do_IRQ)
+	.align 7;
+COMMON_HANDLER_ASYNC(performance_monitor_common, 0xf00, performance_monitor_exception)
+	.align 7;
+COMMON_HANDLER(instruction_breakpoint_common, 0x1300, instruction_breakpoint_exception)
+	.align 7;
+COMMON_HANDLER_HV(denorm_common, 0x1500, unknown_exception)
+	.align 7;
 #ifdef CONFIG_ALTIVEC
-	STD_EXCEPTION_COMMON(0x1700, altivec_assist, altivec_assist_exception)
+COMMON_HANDLER(altivec_assist_common, 0x1700, altivec_assist_exception)
 #else
-	STD_EXCEPTION_COMMON(0x1700, altivec_assist, unknown_exception)
+COMMON_HANDLER(altivec_assist_common, 0x1700, unknown_exception)
 #endif
 
 	/*
@@ -773,10 +771,12 @@ kvmppc_skip_Hinterrupt:
 	 * only has extra guff for STAB-based processors -- which never
 	 * come here.
 	 */
-	STD_RELON_EXCEPTION_PSERIES(0x4300, 0x300, data_access)
-	. = 0x4380
-	.globl data_access_slb_relon_pSeries
-data_access_slb_relon_pSeries:
+VECTOR_HANDLER_VIRT_NONE(0x4100, 0x4200)
+VECTOR_HANDLER_VIRT_NONE(0x4200, 0x4300)
+
+VECTOR_HANDLER_VIRT(data_access, 0x4300, 0x4380, 0x300)
+
+VECTOR_HANDLER_VIRT_BEGIN(data_access_slb, 0x4380, 0x4400)
 	SET_SCRATCH0(r13)
 	EXCEPTION_PROLOG_0(PACA_EXSLB)
 	EXCEPTION_PROLOG_1(PACA_EXSLB, NOTEST, 0x380)
@@ -797,11 +797,11 @@ data_access_slb_relon_pSeries:
 	mtctr	r10
 	bctr
 #endif
+VECTOR_HANDLER_VIRT_END(data_access_slb, 0x4380, 0x4400)
 
-	STD_RELON_EXCEPTION_PSERIES(0x4400, 0x400, instruction_access)
-	. = 0x4480
-	.globl instruction_access_slb_relon_pSeries
-instruction_access_slb_relon_pSeries:
+VECTOR_HANDLER_VIRT(instruction_access, 0x4400, 0x4480, 0x400)
+
+VECTOR_HANDLER_VIRT_BEGIN(instruction_access_slb, 0x4480, 0x4500)
 	SET_SCRATCH0(r13)
 	EXCEPTION_PROLOG_0(PACA_EXSLB)
 	EXCEPTION_PROLOG_1(PACA_EXSLB, NOTEST, 0x480)
@@ -817,101 +817,88 @@ instruction_access_slb_relon_pSeries:
 	mtctr	r10
 	bctr
 #endif
+VECTOR_HANDLER_VIRT_END(instruction_access_slb, 0x4480, 0x4500)
 
-	. = 0x4500
-	.globl hardware_interrupt_relon_pSeries;
+VECTOR_HANDLER_VIRT_BEGIN(hardware_interrupt, 0x4500, 0x4600)
 	.globl hardware_interrupt_relon_hv;
-hardware_interrupt_relon_pSeries:
 hardware_interrupt_relon_hv:
 	BEGIN_FTR_SECTION
-		_MASKABLE_RELON_EXCEPTION_PSERIES(0x502, hardware_interrupt, EXC_HV, SOFTEN_TEST_HV)
+		_MASKABLE_RELON_EXCEPTION_PSERIES(0x500, hardware_interrupt_common, EXC_HV, SOFTEN_TEST_HV)
 	FTR_SECTION_ELSE
-		_MASKABLE_RELON_EXCEPTION_PSERIES(0x500, hardware_interrupt, EXC_STD, SOFTEN_TEST_PR)
+		_MASKABLE_RELON_EXCEPTION_PSERIES(0x500, hardware_interrupt_common, EXC_STD, SOFTEN_TEST_PR)
 	ALT_FTR_SECTION_END_IFSET(CPU_FTR_HVMODE)
-	STD_RELON_EXCEPTION_PSERIES(0x4600, 0x600, alignment)
-	STD_RELON_EXCEPTION_PSERIES(0x4700, 0x700, program_check)
-	STD_RELON_EXCEPTION_PSERIES(0x4800, 0x800, fp_unavailable)
-	MASKABLE_RELON_EXCEPTION_PSERIES(0x4900, 0x900, decrementer)
-	STD_RELON_EXCEPTION_HV(0x4980, 0x982, hdecrementer)
-	MASKABLE_RELON_EXCEPTION_PSERIES(0x4a00, 0xa00, doorbell_super)
-	STD_RELON_EXCEPTION_PSERIES(0x4b00, 0xb00, trap_0b)
-
-	. = 0x4c00
-	.globl system_call_relon_pSeries
-system_call_relon_pSeries:
+VECTOR_HANDLER_VIRT_END(hardware_interrupt, 0x4500, 0x4600)
+
+VECTOR_HANDLER_VIRT(alignment, 0x4600, 0x4700, 0x600)
+VECTOR_HANDLER_VIRT(program_check, 0x4700, 0x4800, 0x700)
+VECTOR_HANDLER_VIRT(fp_unavailable, 0x4800, 0x4900, 0x800)
+VECTOR_HANDLER_VIRT_MASKABLE(decrementer, 0x4900, 0x4980, 0x900)
+VECTOR_HANDLER_VIRT_HV(hdecrementer, 0x4980, 0x4a00, 0x980)
+VECTOR_HANDLER_VIRT_MASKABLE(doorbell_super, 0x4a00, 0x4b00, 0xa00)
+VECTOR_HANDLER_VIRT(trap_0b, 0x4b00, 0x4c00, 0xb00)
+
+VECTOR_HANDLER_VIRT_BEGIN(system_call, 0x4c00, 0x4d00)
 	HMT_MEDIUM
 	SYSCALL_PSERIES_1
 	SYSCALL_PSERIES_2_DIRECT
 	SYSCALL_PSERIES_3
+VECTOR_HANDLER_VIRT_END(system_call, 0x4c00, 0x4d00)
 
-	STD_RELON_EXCEPTION_PSERIES(0x4d00, 0xd00, single_step)
+VECTOR_HANDLER_VIRT(single_step, 0x4d00, 0x4e00, 0xd00)
 
-	. = 0x4e00
-	b	.	/* Can't happen, see v2.07 Book III-S section 6.5 */
+VECTOR_HANDLER_VIRT_BEGIN(unused, 0x4e00, 0x4e20)
+	b       .       /* Can't happen, see v2.07 Book III-S section 6.5 */
+VECTOR_HANDLER_VIRT_END(unused, 0x4e00, 0x4e20)
 
-	. = 0x4e20
-	b	.	/* Can't happen, see v2.07 Book III-S section 6.5 */
+VECTOR_HANDLER_VIRT_BEGIN(unused, 0x4e20, 0x4e40)
+	b       .       /* Can't happen, see v2.07 Book III-S section 6.5 */
+VECTOR_HANDLER_VIRT_END(unused, 0x4e20, 0x4e40)
 
-	. = 0x4e40
-emulation_assist_relon_trampoline:
-	SET_SCRATCH0(r13)
-	EXCEPTION_PROLOG_0(PACA_EXGEN)
-	b	emulation_assist_relon_hv
+__VECTOR_HANDLER_VIRT_OOL_HV(emulation_assist, 0x4e40, 0x4e60)
 
-	. = 0x4e60
-	b	.	/* Can't happen, see v2.07 Book III-S section 6.5 */
+VECTOR_HANDLER_VIRT_BEGIN(unused, 0x4e60, 0x4e80)
+	b       .       /* Can't happen, see v2.07 Book III-S section 6.5 */
+VECTOR_HANDLER_VIRT_END(unused, 0x4e60, 0x4e80)
 
-	. = 0x4e80
-h_doorbell_relon_trampoline:
-	SET_SCRATCH0(r13)
-	EXCEPTION_PROLOG_0(PACA_EXGEN)
-	b	h_doorbell_relon_hv
+__VECTOR_HANDLER_VIRT_OOL_MASKABLE_HV(h_doorbell, 0x4e80, 0x4ea0)
 
-	. = 0x4ea0
-h_virt_irq_relon_trampoline:
-	SET_SCRATCH0(r13)
-	EXCEPTION_PROLOG_0(PACA_EXGEN)
-	b	h_virt_irq_relon_hv
+__VECTOR_HANDLER_VIRT_OOL_MASKABLE_HV(h_virt_irq, 0x4ea0, 0x4ec0)
 
-	. = 0x4f00
-performance_monitor_relon_pseries_trampoline:
-	SET_SCRATCH0(r13)
-	EXCEPTION_PROLOG_0(PACA_EXGEN)
-	b	performance_monitor_relon_pSeries
+VECTOR_HANDLER_VIRT_NONE(0x4ec0, 0x4f00)
 
-	. = 0x4f20
-altivec_unavailable_relon_pseries_trampoline:
-	SET_SCRATCH0(r13)
-	EXCEPTION_PROLOG_0(PACA_EXGEN)
-	b	altivec_unavailable_relon_pSeries
+__VECTOR_HANDLER_VIRT_OOL(performance_monitor, 0x4f00, 0x4f20)
 
-	. = 0x4f40
-vsx_unavailable_relon_pseries_trampoline:
-	SET_SCRATCH0(r13)
-	EXCEPTION_PROLOG_0(PACA_EXGEN)
-	b	vsx_unavailable_relon_pSeries
+__VECTOR_HANDLER_VIRT_OOL(altivec_unavailable, 0x4f20, 0x4f40)
 
-	. = 0x4f60
-facility_unavailable_relon_trampoline:
-	SET_SCRATCH0(r13)
-	EXCEPTION_PROLOG_0(PACA_EXGEN)
-	b	facility_unavailable_relon_pSeries
+__VECTOR_HANDLER_VIRT_OOL(vsx_unavailable, 0x4f40, 0x4f60)
 
-	. = 0x4f80
-hv_facility_unavailable_relon_trampoline:
-	SET_SCRATCH0(r13)
-	EXCEPTION_PROLOG_0(PACA_EXGEN)
-	b	hv_facility_unavailable_relon_hv
+__VECTOR_HANDLER_VIRT_OOL(facility_unavailable, 0x4f60, 0x4f80)
+
+__VECTOR_HANDLER_VIRT_OOL_HV(h_facility_unavailable, 0x4f80, 0x4fa0)
+
+VECTOR_HANDLER_VIRT_NONE(0x4fa0, 0x5200)
+
+VECTOR_HANDLER_VIRT_NONE(0x5200, 0x5300)
+
+VECTOR_HANDLER_VIRT(instruction_breakpoint, 0x5300, 0x5400, 0x1300)
 
-	STD_RELON_EXCEPTION_PSERIES(0x5300, 0x1300, instruction_breakpoint)
 #ifdef CONFIG_PPC_DENORMALISATION
-	. = 0x5500
-	b	denorm_exception_hv
+VECTOR_HANDLER_VIRT_BEGIN(denorm_exception, 0x5500, 0x5600)
+	b	exc_0x1500_denorm_exception_hv
+VECTOR_HANDLER_VIRT_END(denorm_exception, 0x5500, 0x5600)
+#else
+VECTOR_HANDLER_VIRT_NONE(0x5500, 0x5600)
 #endif
-	STD_RELON_EXCEPTION_PSERIES(0x5700, 0x1700, altivec_assist)
 
-ppc64_runlatch_on_trampoline:
+VECTOR_HANDLER_VIRT_NONE(0x5600, 0x5700)
+
+VECTOR_HANDLER_VIRT(altivec_assist, 0x5700, 0x5800, 0x1700)
+
+VECTOR_HANDLER_VIRT_NONE(0x5800, 0x5900)
+
+TRAMP_HANDLER_BEGIN(ppc64_runlatch_on_trampoline)
 	b	__ppc64_runlatch_on
+TRAMP_HANDLER_END(ppc64_runlatch_on_trampoline)
 
 /*
  * Here r13 points to the paca, r9 contains the saved CR,
@@ -919,8 +906,7 @@ ppc64_runlatch_on_trampoline:
  * r9 - r13 are saved in paca->exgen.
  */
 	.align	7
-	.globl data_access_common
-data_access_common:
+COMMON_HANDLER_BEGIN(data_access_common)
 	mfspr	r10,SPRN_DAR
 	std	r10,PACA_EXGEN+EX_DAR(r13)
 	mfspr	r10,SPRN_DSISR
@@ -938,10 +924,10 @@ BEGIN_MMU_FTR_SECTION
 MMU_FTR_SECTION_ELSE
 	b	handle_page_fault
 ALT_MMU_FTR_SECTION_END_IFCLR(MMU_FTR_TYPE_RADIX)
+COMMON_HANDLER_END(data_access_common)
 
 	.align  7
-	.globl  h_data_storage_common
-h_data_storage_common:
+COMMON_HANDLER_BEGIN(h_data_storage_common)
 	mfspr   r10,SPRN_HDAR
 	std     r10,PACA_EXGEN+EX_DAR(r13)
 	mfspr   r10,SPRN_HDSISR
@@ -952,10 +938,10 @@ h_data_storage_common:
 	addi    r3,r1,STACK_FRAME_OVERHEAD
 	bl      unknown_exception
 	b       ret_from_except
+COMMON_HANDLER_END(h_data_storage_common)
 
 	.align	7
-	.globl instruction_access_common
-instruction_access_common:
+COMMON_HANDLER_BEGIN(instruction_access_common)
 	EXCEPTION_PROLOG_COMMON(0x400, PACA_EXGEN)
 	RECONCILE_IRQ_STATE(r10, r11)
 	ld	r12,_MSR(r1)
@@ -969,17 +955,17 @@ BEGIN_MMU_FTR_SECTION
 MMU_FTR_SECTION_ELSE
 	b	handle_page_fault
 ALT_MMU_FTR_SECTION_END_IFCLR(MMU_FTR_TYPE_RADIX)
+COMMON_HANDLER_END(instruction_access_common)
 
-	STD_EXCEPTION_COMMON(0xe20, h_instr_storage, unknown_exception)
+	.align 7
+COMMON_HANDLER(h_instr_storage_common, 0xe20, unknown_exception)
 
 	/*
 	 * Machine check is different because we use a different
 	 * save area: PACA_EXMC instead of PACA_EXGEN.
 	 */
 	.align	7
-	.globl machine_check_common
-machine_check_common:
-
+COMMON_HANDLER_BEGIN(machine_check_common)
 	mfspr	r10,SPRN_DAR
 	std	r10,PACA_EXMC+EX_DAR(r13)
 	mfspr	r10,SPRN_DSISR
@@ -998,10 +984,10 @@ machine_check_common:
 	addi	r3,r1,STACK_FRAME_OVERHEAD
 	bl	machine_check_exception
 	b	ret_from_except
+COMMON_HANDLER_END(machine_check_common)
 
 	.align	7
-	.globl alignment_common
-alignment_common:
+COMMON_HANDLER_BEGIN(alignment_common)
 	mfspr	r10,SPRN_DAR
 	std	r10,PACA_EXGEN+EX_DAR(r13)
 	mfspr	r10,SPRN_DSISR
@@ -1016,20 +1002,20 @@ alignment_common:
 	addi	r3,r1,STACK_FRAME_OVERHEAD
 	bl	alignment_exception
 	b	ret_from_except
+COMMON_HANDLER_END(alignment_common)
 
 	.align	7
-	.globl program_check_common
-program_check_common:
+COMMON_HANDLER_BEGIN(program_check_common)
 	EXCEPTION_PROLOG_COMMON(0x700, PACA_EXGEN)
 	bl	save_nvgprs
 	RECONCILE_IRQ_STATE(r10, r11)
 	addi	r3,r1,STACK_FRAME_OVERHEAD
 	bl	program_check_exception
 	b	ret_from_except
+COMMON_HANDLER_END(program_check_common)
 
 	.align	7
-	.globl fp_unavailable_common
-fp_unavailable_common:
+COMMON_HANDLER_BEGIN(fp_unavailable_common)
 	EXCEPTION_PROLOG_COMMON(0x800, PACA_EXGEN)
 	bne	1f			/* if from user, just load it up */
 	bl	save_nvgprs
@@ -1057,9 +1043,10 @@ END_FTR_SECTION_IFSET(CPU_FTR_TM)
 	bl	fp_unavailable_tm
 	b	ret_from_except
 #endif
+COMMON_HANDLER_END(fp_unavailable_common)
+
 	.align	7
-	.globl altivec_unavailable_common
-altivec_unavailable_common:
+COMMON_HANDLER_BEGIN(altivec_unavailable_common)
 	EXCEPTION_PROLOG_COMMON(0xf20, PACA_EXGEN)
 #ifdef CONFIG_ALTIVEC
 BEGIN_FTR_SECTION
@@ -1091,10 +1078,10 @@ END_FTR_SECTION_IFSET(CPU_FTR_ALTIVEC)
 	addi	r3,r1,STACK_FRAME_OVERHEAD
 	bl	altivec_unavailable_exception
 	b	ret_from_except
+COMMON_HANDLER_END(altivec_unavailable_common)
 
 	.align	7
-	.globl vsx_unavailable_common
-vsx_unavailable_common:
+COMMON_HANDLER_BEGIN(vsx_unavailable_common)
 	EXCEPTION_PROLOG_COMMON(0xf40, PACA_EXGEN)
 #ifdef CONFIG_VSX
 BEGIN_FTR_SECTION
@@ -1125,17 +1112,17 @@ END_FTR_SECTION_IFSET(CPU_FTR_VSX)
 	addi	r3,r1,STACK_FRAME_OVERHEAD
 	bl	vsx_unavailable_exception
 	b	ret_from_except
+COMMON_HANDLER_END(vsx_unavailable_common)
 
 	/* Equivalents to the above handlers for relocation-on interrupt vectors */
-	STD_RELON_EXCEPTION_HV_OOL(0xe40, emulation_assist)
-	MASKABLE_RELON_EXCEPTION_HV_OOL(0xe80, h_doorbell)
-	MASKABLE_RELON_EXCEPTION_HV_OOL(0xea0, h_virt_irq)
-
-	STD_RELON_EXCEPTION_PSERIES_OOL(0xf00, performance_monitor)
-	STD_RELON_EXCEPTION_PSERIES_OOL(0xf20, altivec_unavailable)
-	STD_RELON_EXCEPTION_PSERIES_OOL(0xf40, vsx_unavailable)
-	STD_RELON_EXCEPTION_PSERIES_OOL(0xf60, facility_unavailable)
-	STD_RELON_EXCEPTION_HV_OOL(0xf80, hv_facility_unavailable)
+__TRAMP_HANDLER_VIRT_OOL_HV(emulation_assist, 0xe40)
+__TRAMP_HANDLER_VIRT_OOL_MASKABLE_HV(h_doorbell, 0xe80)
+__TRAMP_HANDLER_VIRT_OOL_MASKABLE_HV(h_virt_irq, 0xea0)
+__TRAMP_HANDLER_VIRT_OOL(performance_monitor, 0xf00)
+__TRAMP_HANDLER_VIRT_OOL(altivec_unavailable, 0xf20)
+__TRAMP_HANDLER_VIRT_OOL(vsx_unavailable, 0xf40)
+__TRAMP_HANDLER_VIRT_OOL(facility_unavailable, 0xf60)
+__TRAMP_HANDLER_VIRT_OOL_HV(h_facility_unavailable, 0xf80)
 
 	/*
 	 * The __end_interrupts marker must be past the out-of-line (OOL)
@@ -1163,18 +1150,24 @@ fwnmi_data_area:
 	. = 0x8000
 #endif /* defined(CONFIG_PPC_PSERIES) || defined(CONFIG_PPC_POWERNV) */
 
-	STD_EXCEPTION_COMMON(0xf60, facility_unavailable, facility_unavailable_exception)
-	STD_EXCEPTION_COMMON(0xf80, hv_facility_unavailable, facility_unavailable_exception)
+	.align 7;
+COMMON_HANDLER(facility_unavailable_common, 0xf60, facility_unavailable_exception)
+	.align 7;
+COMMON_HANDLER(h_facility_unavailable_common, 0xf80, facility_unavailable_exception)
 
 #ifdef CONFIG_CBE_RAS
-	STD_EXCEPTION_COMMON(0x1200, cbe_system_error, cbe_system_error_exception)
-	STD_EXCEPTION_COMMON(0x1600, cbe_maintenance, cbe_maintenance_exception)
-	STD_EXCEPTION_COMMON(0x1800, cbe_thermal, cbe_thermal_exception)
+	.align 7;
+COMMON_HANDLER(cbe_system_error_common, 0x1200, cbe_system_error_exception)
+	.align 7;
+COMMON_HANDLER(cbe_maintenance_common, 0x1600, cbe_maintenance_exception)
+	.align 7;
+COMMON_HANDLER(cbe_thermal_common, 0x1800, cbe_thermal_exception)
 #endif /* CONFIG_CBE_RAS */
 
-	.globl hmi_exception_early
-hmi_exception_early:
-	EXCEPTION_PROLOG_1(PACA_EXGEN, KVMTEST, 0xe62)
+
+	.align 7;
+COMMON_HANDLER_BEGIN(hmi_exception_early)
+	EXCEPTION_PROLOG_1(PACA_EXGEN, KVMTEST_HV, 0xe60)
 	mr	r10,r1			/* Save r1			*/
 	ld	r1,PACAEMERGSP(r13)	/* Use emergency stack		*/
 	subi	r1,r1,INT_FRAME_SIZE	/* alloc stack frame		*/
@@ -1220,7 +1213,8 @@ hmi_exception_early:
 hmi_exception_after_realmode:
 	SET_SCRATCH0(r13)
 	EXCEPTION_PROLOG_0(PACA_EXGEN)
-	b	hmi_exception_hv
+	b	tramp_real_hmi_exception
+COMMON_HANDLER_END(hmi_exception_early)
 
 
 #define MACHINE_CHECK_HANDLER_WINDUP			\
@@ -1259,8 +1253,7 @@ hmi_exception_after_realmode:
 	 * ME=1, MMU (IR=0 and DR=0) off and using MC emergency stack.
 	 */
 	.align	7
-	.globl machine_check_handle_early
-machine_check_handle_early:
+COMMON_HANDLER_BEGIN(machine_check_handle_early)
 	std	r0,GPR0(r1)	/* Save r0 */
 	EXCEPTION_PROLOG_COMMON_3(0x200)
 	bl	save_nvgprs
@@ -1398,6 +1391,8 @@ unrecover_mce:
 1:	addi	r3,r1,STACK_FRAME_OVERHEAD
 	bl	unrecoverable_exception
 	b	1b
+COMMON_HANDLER_END(machine_check_handle_early)
+
 /*
  * r13 points to the PACA, r9 contains the saved CR,
  * r12 contain the saved SRR1, SRR0 is still ready for return
@@ -1406,7 +1401,7 @@ unrecover_mce:
  * r3 is saved in paca->slb_r3
  * We assume we aren't going to take any exceptions during this procedure.
  */
-slb_miss_realmode:
+COMMON_HANDLER_BEGIN(slb_miss_realmode)
 	mflr	r10
 #ifdef CONFIG_RELOCATABLE
 	mtctr	r11
@@ -1465,15 +1460,17 @@ unrecov_slb:
 1:	addi	r3,r1,STACK_FRAME_OVERHEAD
 	bl	unrecoverable_exception
 	b	1b
+COMMON_HANDLER_END(slb_miss_realmode)
 
 
 #ifdef CONFIG_PPC_970_NAP
-power4_fixup_nap:
+TRAMP_HANDLER_BEGIN(power4_fixup_nap)
 	andc	r9,r9,r10
 	std	r9,TI_LOCAL_FLAGS(r11)
 	ld	r10,_LINK(r1)		/* make idle task do the */
 	std	r10,_NIP(r1)		/* equivalent of a blr */
 	blr
+TRAMP_HANDLER_END(power4_fixup_nap)
 #endif
 
 /*
-- 
2.9.3

* [PATCH 4/8] powerpc/pseries: consolidate exception handler alignment
  2016-09-13  3:08 [PATCH 0/8] powerpc/64: use asm sections for head/exception layout Nicholas Piggin
                   ` (2 preceding siblings ...)
  2016-09-13  3:08 ` [PATCH 3/8] powerpc/pseries: exception vector macros Nicholas Piggin
@ 2016-09-13  3:08 ` Nicholas Piggin
  2016-09-13  3:08 ` [PATCH 5/8] powerpc/64: use gas sections for arranging exception vectors Nicholas Piggin
                   ` (3 subsequent siblings)
  7 siblings, 0 replies; 13+ messages in thread
From: Nicholas Piggin @ 2016-09-13  3:08 UTC (permalink / raw)
  To: linuxppc-dev, Michael Ellerman, Benjamin Herrenschmidt, Paul Mackerras
  Cc: Nicholas Piggin

Move the exception handler alignment directives into the head-64.h macros,
because they will no longer work in-place after the next patch. This
slightly changes which functions have alignment applied, and therefore
code generation, which is why it was not done initially (see earlier
patch).
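
As a rough sketch of what a use site looks like after this change (the
name example_common is made up here, and the body just mirrors the
existing simple handlers), the alignment is supplied by
COMMON_HANDLER_BEGIN itself:

  COMMON_HANDLER_BEGIN(example_common)
  	EXCEPTION_PROLOG_COMMON(0xb00, PACA_EXGEN)
  	bl	save_nvgprs
  	RECONCILE_IRQ_STATE(r10, r11)
  	addi	r3,r1,STACK_FRAME_OVERHEAD
  	bl	unknown_exception
  	b	ret_from_except
  COMMON_HANDLER_END(example_common)

so no explicit .align 7 directive is needed at the call site any more.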

Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
---
 arch/powerpc/include/asm/head-64.h   |  1 +
 arch/powerpc/kernel/exceptions-64s.S | 36 ------------------------------------
 2 files changed, 1 insertion(+), 36 deletions(-)

diff --git a/arch/powerpc/include/asm/head-64.h b/arch/powerpc/include/asm/head-64.h
index cf44542..a76049d 100644
--- a/arch/powerpc/include/asm/head-64.h
+++ b/arch/powerpc/include/asm/head-64.h
@@ -18,6 +18,7 @@ exc_##start##_##name:
 #define VECTOR_HANDLER_VIRT_END(name, start, end)
 
 #define COMMON_HANDLER_BEGIN(name)					\
+	.align	7;							\
 	.global name;							\
 name:
 
diff --git a/arch/powerpc/kernel/exceptions-64s.S b/arch/powerpc/kernel/exceptions-64s.S
index fc682ac..f8331d3 100644
--- a/arch/powerpc/kernel/exceptions-64s.S
+++ b/arch/powerpc/kernel/exceptions-64s.S
@@ -371,7 +371,6 @@ VECTOR_HANDLER_REAL_NONE(0x1800, 0x1900)
 /*** Out of line interrupts support ***/
 
 	/* moved from 0x200 */
-	.align 7;
 TRAMP_HANDLER_BEGIN(machine_check_powernv_early)
 BEGIN_FTR_SECTION
 	EXCEPTION_PROLOG_1(PACA_EXMC, NOTEST, 0x200)
@@ -546,7 +545,6 @@ END_FTR_SECTION_IFSET(CPU_FTR_CFAR)
 #endif
 COMMON_HANDLER_END(denorm_assist)
 
-	.align	7
 	/* moved from 0xe00 */
 __TRAMP_HANDLER_REAL_OOL_HV(h_data_storage, 0xe00)
 TRAMP_KVM_HV_SKIP(PACA_EXGEN, 0xe00)
@@ -664,7 +662,6 @@ COMMON_HANDLER_END(__replay_interrupt)
 /*
  * Vectors for the FWNMI option.  Share common code.
  */
-      .align 7
 TRAMP_HANDLER_BEGIN(system_reset_fwnmi)
 	SET_SCRATCH0(r13)		/* save r13 */
 	EXCEPTION_PROLOG_PSERIES(PACA_EXGEN, system_reset_common, EXC_STD,
@@ -710,46 +707,30 @@ TRAMP_HANDLER_END(kvmppc_skip_Hinterrupt)
 
 /*** Common interrupt handlers ***/
 
-	.align 7;
 COMMON_HANDLER(system_reset_common, 0x100, system_reset_exception)
-	.align 7;
 COMMON_HANDLER_ASYNC(hardware_interrupt_common, 0x500, do_IRQ)
-	.align 7;
 COMMON_HANDLER_ASYNC(decrementer_common, 0x900, timer_interrupt)
-	.align 7;
 COMMON_HANDLER(hdecrementer_common, 0x980, hdec_interrupt)
 
-	.align 7;
 #ifdef CONFIG_PPC_DOORBELL
 COMMON_HANDLER_ASYNC(doorbell_super_common, 0xa00, doorbell_exception)
 #else
 COMMON_HANDLER_ASYNC(doorbell_super_common, 0xa00, unknown_exception)
 #endif
-	.align 7;
 COMMON_HANDLER(trap_0b_common, 0xb00, unknown_exception)
-	.align 7;
 COMMON_HANDLER(single_step_common, 0xd00, single_step_exception)
-	.align 7;
 COMMON_HANDLER(trap_0e_common, 0xe00, unknown_exception)
-	.align 7;
 COMMON_HANDLER(emulation_assist_common, 0xe40, emulation_assist_interrupt)
-	.align 7;
 COMMON_HANDLER_ASYNC(hmi_exception_common, 0xe60, handle_hmi_exception)
-	.align 7;
 #ifdef CONFIG_PPC_DOORBELL
 COMMON_HANDLER_ASYNC(h_doorbell_common, 0xe80, doorbell_exception)
 #else
 COMMON_HANDLER_ASYNC(h_doorbell_common, 0xe80, unknown_exception)
 #endif
-	.align 7;
 COMMON_HANDLER_ASYNC(h_virt_irq_common, 0xea0, do_IRQ)
-	.align 7;
 COMMON_HANDLER_ASYNC(performance_monitor_common, 0xf00, performance_monitor_exception)
-	.align 7;
 COMMON_HANDLER(instruction_breakpoint_common, 0x1300, instruction_breakpoint_exception)
-	.align 7;
 COMMON_HANDLER_HV(denorm_common, 0x1500, unknown_exception)
-	.align 7;
 #ifdef CONFIG_ALTIVEC
 COMMON_HANDLER(altivec_assist_common, 0x1700, altivec_assist_exception)
 #else
@@ -905,7 +886,6 @@ TRAMP_HANDLER_END(ppc64_runlatch_on_trampoline)
  * SRR0 and SRR1 are saved in r11 and r12,
  * r9 - r13 are saved in paca->exgen.
  */
-	.align	7
 COMMON_HANDLER_BEGIN(data_access_common)
 	mfspr	r10,SPRN_DAR
 	std	r10,PACA_EXGEN+EX_DAR(r13)
@@ -926,7 +906,6 @@ MMU_FTR_SECTION_ELSE
 ALT_MMU_FTR_SECTION_END_IFCLR(MMU_FTR_TYPE_RADIX)
 COMMON_HANDLER_END(data_access_common)
 
-	.align  7
 COMMON_HANDLER_BEGIN(h_data_storage_common)
 	mfspr   r10,SPRN_HDAR
 	std     r10,PACA_EXGEN+EX_DAR(r13)
@@ -940,7 +919,6 @@ COMMON_HANDLER_BEGIN(h_data_storage_common)
 	b       ret_from_except
 COMMON_HANDLER_END(h_data_storage_common)
 
-	.align	7
 COMMON_HANDLER_BEGIN(instruction_access_common)
 	EXCEPTION_PROLOG_COMMON(0x400, PACA_EXGEN)
 	RECONCILE_IRQ_STATE(r10, r11)
@@ -957,14 +935,12 @@ MMU_FTR_SECTION_ELSE
 ALT_MMU_FTR_SECTION_END_IFCLR(MMU_FTR_TYPE_RADIX)
 COMMON_HANDLER_END(instruction_access_common)
 
-	.align 7
 COMMON_HANDLER(h_instr_storage_common, 0xe20, unknown_exception)
 
 	/*
 	 * Machine check is different because we use a different
 	 * save area: PACA_EXMC instead of PACA_EXGEN.
 	 */
-	.align	7
 COMMON_HANDLER_BEGIN(machine_check_common)
 	mfspr	r10,SPRN_DAR
 	std	r10,PACA_EXMC+EX_DAR(r13)
@@ -986,7 +962,6 @@ COMMON_HANDLER_BEGIN(machine_check_common)
 	b	ret_from_except
 COMMON_HANDLER_END(machine_check_common)
 
-	.align	7
 COMMON_HANDLER_BEGIN(alignment_common)
 	mfspr	r10,SPRN_DAR
 	std	r10,PACA_EXGEN+EX_DAR(r13)
@@ -1004,7 +979,6 @@ COMMON_HANDLER_BEGIN(alignment_common)
 	b	ret_from_except
 COMMON_HANDLER_END(alignment_common)
 
-	.align	7
 COMMON_HANDLER_BEGIN(program_check_common)
 	EXCEPTION_PROLOG_COMMON(0x700, PACA_EXGEN)
 	bl	save_nvgprs
@@ -1014,7 +988,6 @@ COMMON_HANDLER_BEGIN(program_check_common)
 	b	ret_from_except
 COMMON_HANDLER_END(program_check_common)
 
-	.align	7
 COMMON_HANDLER_BEGIN(fp_unavailable_common)
 	EXCEPTION_PROLOG_COMMON(0x800, PACA_EXGEN)
 	bne	1f			/* if from user, just load it up */
@@ -1045,7 +1018,6 @@ END_FTR_SECTION_IFSET(CPU_FTR_TM)
 #endif
 COMMON_HANDLER_END(fp_unavailable_common)
 
-	.align	7
 COMMON_HANDLER_BEGIN(altivec_unavailable_common)
 	EXCEPTION_PROLOG_COMMON(0xf20, PACA_EXGEN)
 #ifdef CONFIG_ALTIVEC
@@ -1080,7 +1052,6 @@ END_FTR_SECTION_IFSET(CPU_FTR_ALTIVEC)
 	b	ret_from_except
 COMMON_HANDLER_END(altivec_unavailable_common)
 
-	.align	7
 COMMON_HANDLER_BEGIN(vsx_unavailable_common)
 	EXCEPTION_PROLOG_COMMON(0xf40, PACA_EXGEN)
 #ifdef CONFIG_VSX
@@ -1150,22 +1121,16 @@ fwnmi_data_area:
 	. = 0x8000
 #endif /* defined(CONFIG_PPC_PSERIES) || defined(CONFIG_PPC_POWERNV) */
 
-	.align 7;
 COMMON_HANDLER(facility_unavailable_common, 0xf60, facility_unavailable_exception)
-	.align 7;
 COMMON_HANDLER(h_facility_unavailable_common, 0xf80, facility_unavailable_exception)
 
 #ifdef CONFIG_CBE_RAS
-	.align 7;
 COMMON_HANDLER(cbe_system_error_common, 0x1200, cbe_system_error_exception)
-	.align 7;
 COMMON_HANDLER(cbe_maintenance_common, 0x1600, cbe_maintenance_exception)
-	.align 7;
 COMMON_HANDLER(cbe_thermal_common, 0x1800, cbe_thermal_exception)
 #endif /* CONFIG_CBE_RAS */
 
 
-	.align 7;
 COMMON_HANDLER_BEGIN(hmi_exception_early)
 	EXCEPTION_PROLOG_1(PACA_EXGEN, KVMTEST_HV, 0xe60)
 	mr	r10,r1			/* Save r1			*/
@@ -1252,7 +1217,6 @@ COMMON_HANDLER_END(hmi_exception_early)
 	 * Handle machine check early in real mode. We come here with
 	 * ME=1, MMU (IR=0 and DR=0) off and using MC emergency stack.
 	 */
-	.align	7
 COMMON_HANDLER_BEGIN(machine_check_handle_early)
 	std	r0,GPR0(r1)	/* Save r0 */
 	EXCEPTION_PROLOG_COMMON_3(0x200)
-- 
2.9.3

* [PATCH 5/8] powerpc/64: use gas sections for arranging exception vectors
  2016-09-13  3:08 [PATCH 0/8] powerpc/64: use asm sections for head/exception layout Nicholas Piggin
                   ` (3 preceding siblings ...)
  2016-09-13  3:08 ` [PATCH 4/8] powerpc/pseries: consolidate exception handler alignment Nicholas Piggin
@ 2016-09-13  3:08 ` Nicholas Piggin
  2016-09-13  3:08 ` [PATCH 6/8] powerpc/pseries: move related exception code together Nicholas Piggin
                   ` (2 subsequent siblings)
  7 siblings, 0 replies; 13+ messages in thread
From: Nicholas Piggin @ 2016-09-13  3:08 UTC (permalink / raw)
  To: linuxppc-dev, Michael Ellerman, Benjamin Herrenschmidt, Paul Mackerras
  Cc: Nicholas Piggin

Use assembler sections of fixed size and location to arrange the pseries
exception vector code (64e also uses them in head_64.S for 0x0..0x100).

This allows better flexibility in arranging exception code and hiding
unimportant details behind macros.

Gas sections can be a bit painful to use this way, mainly because the
assembler does not know where they will finally be linked. Taking
absolute addresses, for example, requires a bit of trickery, but it can
mostly be hidden behind macros.

Generated code is mostly the same, except for locations, offsets, and alignments.
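
As a rough sketch of the absolute address trick (example_section and
example_label are made-up names; the real macros are in head-64.h),
each fixed section is assembled as though it starts at 0, and ABS_ADDR()
adds the section's fixed base address back in:

  OPEN_FIXED_SECTION(example_section, 0x3000, 0x3100)
  USE_FIXED_SECTION(example_section)
  example_label:
  	nop
  	/* ABS_ADDR(x) is (x - fs_label + fs_start), i.e. the offset of
  	 * x within the section plus the section's fixed start address */
  	lis	r10,ABS_ADDR(example_label)@ha
  	addi	r10,r10,ABS_ADDR(example_label)@l
  UNUSE_FIXED_SECTION(example_section)
  CLOSE_FIXED_SECTION(example_section)

The lis/addi @ha/@l pair is the same pattern head_64.S ends up using
for copy_to_here.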

Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
---
 arch/powerpc/include/asm/exception-64s.h |   4 +-
 arch/powerpc/include/asm/head-64.h       | 205 +++++++++++++++++++++++++++----
 arch/powerpc/kernel/exceptions-64s.S     | 103 +++++++++++-----
 arch/powerpc/kernel/head_64.S            |  58 +++++----
 arch/powerpc/kernel/vmlinux.lds.S        |  45 ++++++-
 5 files changed, 336 insertions(+), 79 deletions(-)

diff --git a/arch/powerpc/include/asm/exception-64s.h b/arch/powerpc/include/asm/exception-64s.h
index 6c0080f..c7a1c90 100644
--- a/arch/powerpc/include/asm/exception-64s.h
+++ b/arch/powerpc/include/asm/exception-64s.h
@@ -89,9 +89,9 @@
  * low halfword of the address, but for Kdump we need the whole low
  * word.
  */
-#define LOAD_HANDLER(reg, label)					\
 	/* Handlers must be within 64K of kbase, which must be 64k aligned */ \
-	ori	reg,reg,(label)-_stext;	/* virt addr of handler ... */
+#define LOAD_HANDLER(reg, label)					\
+	ori	reg,reg,ABS_ADDR(label);
 
 /* Exception register prefixes */
 #define EXC_HV	H
diff --git a/arch/powerpc/include/asm/head-64.h b/arch/powerpc/include/asm/head-64.h
index a76049d..ff11106 100644
--- a/arch/powerpc/include/asm/head-64.h
+++ b/arch/powerpc/include/asm/head-64.h
@@ -3,32 +3,157 @@
 
 #include <asm/cache.h>
 
+/*
+ * We can't do CPP stringification and concatenation directly into the section
+ * name for some reason, so these macros can do it for us.
+ */
+.macro define_ftsec name
+	.section ".head.text.\name\()","ax",@progbits
+.endm
+.macro define_data_ftsec name
+	.section ".head.data.\name\()","a",@progbits
+.endm
+.macro use_ftsec name
+	.section ".head.text.\name\()"
+.endm
+
+#define OPEN_FIXED_SECTION(sname, start, end)			\
+	sname##_start = (start);				\
+	sname##_end = (end);					\
+	sname##_len = (end) - (start);				\
+	define_ftsec sname;					\
+	. = 0x0;						\
+start_##sname:
+
+#define OPEN_TEXT_SECTION(start)				\
+	text_start = (start);					\
+	.section ".text","ax",@progbits;			\
+	. = 0x0;						\
+start_text:
+
+#define ZERO_FIXED_SECTION(sname, start, end)			\
+	sname##_start = (start);				\
+	sname##_end = (end);					\
+	sname##_len = (end) - (start);				\
+	define_data_ftsec sname;				\
+	. = 0x0;						\
+	. = sname##_len;
+
+#define USE_FIXED_SECTION(sname)				\
+	fs_label = start_##sname;				\
+	fs_start = sname##_start;				\
+	use_ftsec sname;
+
+#define USE_TEXT_SECTION()					\
+	fs_label = start_text;					\
+	fs_start = text_start;					\
+	.text
+
+#define UNUSE_FIXED_SECTION(sname)				\
+	.previous;
+
+#define CLOSE_FIXED_SECTION(sname)				\
+	USE_FIXED_SECTION(sname);				\
+	. = sname##_len;					\
+end_##sname:
+
+
+#define __FIXED_SECTION_ENTRY_BEGIN(sname, name, __align)	\
+	USE_FIXED_SECTION(sname);				\
+	.align __align;						\
+	.global name;						\
+name:
+
+#define FIXED_SECTION_ENTRY_BEGIN(sname, name)			\
+	__FIXED_SECTION_ENTRY_BEGIN(sname, name, 0)
+
+#define FIXED_SECTION_ENTRY_S_BEGIN(sname, name, start)		\
+	USE_FIXED_SECTION(sname);				\
+	name##_start = (start);					\
+	.if (start) < sname##_start;				\
+	.error "Fixed section underflow";			\
+	.abort;							\
+	.endif;							\
+	. = (start) - sname##_start;				\
+	.global name;						\
+name:
+
+#define FIXED_SECTION_ENTRY_END(sname, name)			\
+	UNUSE_FIXED_SECTION(sname);
+
+#define FIXED_SECTION_ENTRY_E_END(sname, name, end)		\
+	.if (end) > sname##_end;				\
+	.error "Fixed section overflow";			\
+	.abort;							\
+	.endif;							\
+	.if (. - name > end - name##_start);			\
+	.error "Fixed entry overflow";				\
+	.abort;							\
+	.endif;							\
+	. = ((end) - sname##_start);				\
+	UNUSE_FIXED_SECTION(sname);
+
+#define FIXED_SECTION_ENTRY_S(sname, name, start, entry)	\
+	FIXED_SECTION_ENTRY_S_BEGIN(sname, name, start);	\
+	entry;							\
+	FIXED_SECTION_ENTRY_END(sname, name);			\
+
+#define FIXED_SECTION_ENTRY(sname, name, start, end, entry)	\
+	FIXED_SECTION_ENTRY_S_BEGIN(sname, name, start);	\
+	entry;							\
+	FIXED_SECTION_ENTRY_E_END(sname, name, end);
+
+
+/*
+ * These macros are used to change symbols in other fixed sections to be
+ * absolute or related to our current fixed section.
+ *
+ * GAS makes things as painful as it possibly can.
+ */
+/* ABS_ADDR: absolute address of a label within same section */
+#define ABS_ADDR(label) (label - fs_label + fs_start)
+
+/* FIXED_SECTION_ABS_ADDR: absolute address of a label in another section */
+#define FIXED_SECTION_ABS_ADDR(sname, target)				\
+	(target - start_##sname + sname##_start)
+
+/* FIXED_SECTION_REL_ADDR: relative address of a label in another section */
+#define FIXED_SECTION_REL_ADDR(sname, target)				\
+	(FIXED_SECTION_ABS_ADDR(sname, target) + fs_label - fs_start)
+
+
 #define VECTOR_HANDLER_REAL_BEGIN(name, start, end)			\
-	. = start ;							\
-	.global exc_##start##_##name ;					\
-exc_##start##_##name:
+	FIXED_SECTION_ENTRY_S_BEGIN(real_vectors, exc_##start##_##name, start)
 
-#define VECTOR_HANDLER_REAL_END(name, start, end)
+#define VECTOR_HANDLER_REAL_END(name, start, end)			\
+	FIXED_SECTION_ENTRY_E_END(real_vectors, exc_##start##_##name, end)
 
 #define VECTOR_HANDLER_VIRT_BEGIN(name, start, end)			\
-	. = start ;							\
-	.global exc_##start##_##name ;					\
-exc_##start##_##name:
+	FIXED_SECTION_ENTRY_S_BEGIN(virt_vectors, exc_##start##_##name, start)
 
-#define VECTOR_HANDLER_VIRT_END(name, start, end)
+#define VECTOR_HANDLER_VIRT_END(name, start, end)			\
+	FIXED_SECTION_ENTRY_E_END(virt_vectors, exc_##start##_##name, end)
 
 #define COMMON_HANDLER_BEGIN(name)					\
+	USE_TEXT_SECTION();						\
 	.align	7;							\
 	.global name;							\
 name:
 
-#define COMMON_HANDLER_END(name)
+#define COMMON_HANDLER_END(name)					\
+	.previous
 
 #define TRAMP_HANDLER_BEGIN(name)					\
-	.global name ;							\
-name:
+	FIXED_SECTION_ENTRY_BEGIN(real_trampolines, name)
+
+#define TRAMP_HANDLER_END(name)						\
+	FIXED_SECTION_ENTRY_END(real_trampolines, name)
+
+#define VTRAMP_HANDLER_BEGIN(name)					\
+	FIXED_SECTION_ENTRY_BEGIN(virt_trampolines, name)
 
-#define TRAMP_HANDLER_END(name)
+#define VTRAMP_HANDLER_END(name)					\
+	FIXED_SECTION_ENTRY_END(virt_trampolines, name)
 
 #ifdef CONFIG_KVM_BOOK3S_64_HANDLER
 #define TRAMP_KVM_BEGIN(name)						\
@@ -41,9 +166,13 @@ name:
 #define TRAMP_KVM_END(name)
 #endif
 
-#define VECTOR_HANDLER_REAL_NONE(start, end)
+#define VECTOR_HANDLER_REAL_NONE(start, end)				\
+	FIXED_SECTION_ENTRY_S_BEGIN(real_vectors, exc_##start##_##unused, start); \
+	FIXED_SECTION_ENTRY_E_END(real_vectors, exc_##start##_##unused, end)
 
-#define VECTOR_HANDLER_VIRT_NONE(start, end)
+#define VECTOR_HANDLER_VIRT_NONE(start, end)				\
+	FIXED_SECTION_ENTRY_S_BEGIN(virt_vectors, exc_##start##_##unused, start); \
+	FIXED_SECTION_ENTRY_E_END(virt_vectors, exc_##start##_##unused, end);
 
 
 #define VECTOR_HANDLER_REAL(name, start, end)				\
@@ -86,6 +215,10 @@ name:
 	STD_EXCEPTION_PSERIES_OOL(vec, name##_common);			\
 	TRAMP_HANDLER_END(tramp_real_##name);
 
+#define VECTOR_HANDLER_REAL_OOL(name, start, end)			\
+	__VECTOR_HANDLER_REAL_OOL(name, start, end);			\
+	__TRAMP_HANDLER_REAL_OOL(name, start);
+
 #define __VECTOR_HANDLER_REAL_OOL_MASKABLE(name, start, end)		\
 	__VECTOR_HANDLER_REAL_OOL(name, start, end);
 
@@ -94,6 +227,10 @@ name:
 	MASKABLE_EXCEPTION_PSERIES_OOL(vec, name##_common);		\
 	TRAMP_HANDLER_END(tramp_real_##name);
 
+#define VECTOR_HANDLER_REAL_OOL_MASKABLE(name, start, end)		\
+	__VECTOR_HANDLER_REAL_OOL_MASKABLE(name, start, end);		\
+	__TRAMP_HANDLER_REAL_OOL_MASKABLE(name, start);
+
 #define __VECTOR_HANDLER_REAL_OOL_HV_DIRECT(name, start, end, handler)	\
 	VECTOR_HANDLER_REAL_BEGIN(name, start, end);			\
 	__OOL_EXCEPTION(start, label, handler);				\
@@ -107,6 +244,10 @@ name:
 	STD_EXCEPTION_HV_OOL(vec + 0x2, name##_common);			\
 	TRAMP_HANDLER_END(tramp_real_##name);
 
+#define VECTOR_HANDLER_REAL_OOL_HV(name, start, end)			\
+	__VECTOR_HANDLER_REAL_OOL_HV(name, start, end);			\
+	__TRAMP_HANDLER_REAL_OOL_HV(name, start);
+
 #define __VECTOR_HANDLER_REAL_OOL_MASKABLE_HV(name, start, end)		\
 	__VECTOR_HANDLER_REAL_OOL(name, start, end);
 
@@ -115,39 +256,59 @@ name:
 	MASKABLE_EXCEPTION_HV_OOL(vec, name##_common);			\
 	TRAMP_HANDLER_END(tramp_real_##name);
 
+#define VECTOR_HANDLER_REAL_OOL_MASKABLE_HV(name, start, end)		\
+	__VECTOR_HANDLER_REAL_OOL_MASKABLE_HV(name, start, end);	\
+	__TRAMP_HANDLER_REAL_OOL_MASKABLE_HV(name, start);
+
 #define __VECTOR_HANDLER_VIRT_OOL(name, start, end)			\
 	VECTOR_HANDLER_VIRT_BEGIN(name, start, end);			\
 	__OOL_EXCEPTION(start, label, tramp_virt_##name);		\
 	VECTOR_HANDLER_VIRT_END(name, start, end);
 
 #define __TRAMP_HANDLER_VIRT_OOL(name, realvec)				\
-	TRAMP_HANDLER_BEGIN(tramp_virt_##name);				\
+	VTRAMP_HANDLER_BEGIN(tramp_virt_##name);			\
 	STD_RELON_EXCEPTION_PSERIES_OOL(realvec, name##_common);	\
-	TRAMP_HANDLER_END(tramp_virt_##name);
+	VTRAMP_HANDLER_END(tramp_virt_##name);
+
+#define VECTOR_HANDLER_VIRT_OOL(name, start, end, realvec)		\
+	__VECTOR_HANDLER_VIRT_OOL(name, start, end);			\
+	__TRAMP_HANDLER_VIRT_OOL(name, realvec);
 
 #define __VECTOR_HANDLER_VIRT_OOL_MASKABLE(name, start, end)		\
 	__VECTOR_HANDLER_VIRT_OOL(name, start, end);
 
 #define __TRAMP_HANDLER_VIRT_OOL_MASKABLE(name, realvec)		\
-	TRAMP_HANDLER_BEGIN(tramp_virt_##name);				\
+	VTRAMP_HANDLER_BEGIN(tramp_virt_##name);			\
 	MASKABLE_RELON_EXCEPTION_PSERIES_OOL(realvec, name##_common);	\
-	TRAMP_HANDLER_END(tramp_virt_##name);
+	VTRAMP_HANDLER_END(tramp_virt_##name);
+
+#define VECTOR_HANDLER_VIRT_OOL_MASKABLE(name, start, end, realvec)	\
+	__VECTOR_HANDLER_VIRT_OOL_MASKABLE(name, start, end);		\
+	__TRAMP_HANDLER_VIRT_OOL_MASKABLE(name, realvec);
 
 #define __VECTOR_HANDLER_VIRT_OOL_HV(name, start, end)			\
 	__VECTOR_HANDLER_VIRT_OOL(name, start, end);
 
 #define __TRAMP_HANDLER_VIRT_OOL_HV(name, realvec)			\
-	TRAMP_HANDLER_BEGIN(tramp_virt_##name);				\
+	VTRAMP_HANDLER_BEGIN(tramp_virt_##name);			\
 	STD_RELON_EXCEPTION_HV_OOL(realvec, name##_common);		\
-	TRAMP_HANDLER_END(tramp_virt_##name);
+	VTRAMP_HANDLER_END(tramp_virt_##name)
+
+#define VECTOR_HANDLER_VIRT_OOL_HV(name, start, end, realvec)		\
+	__VECTOR_HANDLER_VIRT_OOL_HV(name, start, end);			\
+	__TRAMP_HANDLER_VIRT_OOL_HV(name, realvec);
 
 #define __VECTOR_HANDLER_VIRT_OOL_MASKABLE_HV(name, start, end)		\
 	__VECTOR_HANDLER_VIRT_OOL(name, start, end);
 
 #define __TRAMP_HANDLER_VIRT_OOL_MASKABLE_HV(name, realvec)		\
-	TRAMP_HANDLER_BEGIN(tramp_virt_##name);				\
+	VTRAMP_HANDLER_BEGIN(tramp_virt_##name);			\
 	MASKABLE_RELON_EXCEPTION_HV_OOL(realvec, name##_common);	\
-	TRAMP_HANDLER_END(tramp_virt_##name);
+	VTRAMP_HANDLER_END(tramp_virt_##name);
+
+#define VECTOR_HANDLER_VIRT_OOL_MASKABLE_HV(name, start, end, realvec)	\
+	__VECTOR_HANDLER_VIRT_OOL_MASKABLE_HV(name, start, end);	\
+	__TRAMP_HANDLER_VIRT_OOL_MASKABLE_HV(name, realvec);
 
 #define TRAMP_KVM(area, n)						\
 	TRAMP_KVM_BEGIN(do_kvm_##n);					\
diff --git a/arch/powerpc/kernel/exceptions-64s.S b/arch/powerpc/kernel/exceptions-64s.S
index f8331d3..068f96f 100644
--- a/arch/powerpc/kernel/exceptions-64s.S
+++ b/arch/powerpc/kernel/exceptions-64s.S
@@ -19,16 +19,65 @@
 #include <asm/head-64.h>
 
 /*
+ * There are a few constraints to be concerned with.
+ * - Real mode exception code/data must be located at their physical location.
+ * - Virtual mode exceptions must be mapped at their 0xc000... location.
+ * - Fixed location code must not call directly beyond the __end_interrupts
+ *   area when built with CONFIG_RELOCATABLE. LOAD_HANDLER / bctr sequence
+ *   must be used.
+ * - LOAD_HANDLER targets must be within first 64K of physical 0 /
+ *   virtual 0xc00...
+ * - Conditional branch targets must be within +/-32K of caller.
+ *
+ * "Virtual exceptions" run with relocation on (MSR_IR=1, MSR_DR=1), and
+ * therefore don't have to run in physically located code or rfid to
+ * virtual mode kernel code. However on relocatable kernels they do have
+ * to branch to KERNELBASE offset because the rest of the kernel (outside
+ * the exception vectors) may be located elsewhere.
+ *
+ * Virtual exceptions correspond with physical, except their entry points
+ * are offset by 0xc000000000000000 and also tend to get an added 0x4000
+ * offset applied. Virtual exceptions are enabled with the Alternate
+ * Interrupt Location (AIL) bit set in the LPCR. However this does not
+ * guarantee they will be delivered virtually. Some conditions (see the ISA)
+ * cause exceptions to be delivered in real mode.
+ *
+ * It's impossible to receive interrupts below 0x300 via AIL.
+ *
+ * KVM: None of these traps are from the guest; anything that escalated
+ * to HV=1 from HV=0 is delivered via real mode handlers.
+ *
+ *
  * We layout physical memory as follows:
  * 0x0000 - 0x00ff : Secondary processor spin code
- * 0x0100 - 0x17ff : pSeries Interrupt prologs
- * 0x1800 - 0x4000 : interrupt support common interrupt prologs
- * 0x4000 - 0x5fff : pSeries interrupts with IR=1,DR=1
- * 0x6000 - 0x6fff : more interrupt support including for IR=1,DR=1
+ * 0x0100 - 0x18ff : Real mode pSeries interrupt vectors
+ * 0x1900 - 0x3fff : Real mode trampolines
+ * 0x4000 - 0x5fff : Relon (IR=1,DR=1) mode pSeries interrupt vectors
+ * 0x6000 - 0x6fff : Relon mode trampolines
  * 0x7000 - 0x7fff : FWNMI data area
- * 0x8000 - 0x8fff : Initial (CPU0) segment table
- * 0x9000 -        : Early init and support code
+ * 0x8000 -   .... : Common interrupt handlers, remaining early
+ *                   setup code, rest of kernel.
+ */
+OPEN_FIXED_SECTION(real_vectors,        0x0100, 0x1900)
+OPEN_FIXED_SECTION(real_trampolines,    0x1900, 0x4000)
+OPEN_FIXED_SECTION(virt_vectors,        0x4000, 0x6000)
+OPEN_FIXED_SECTION(virt_trampolines,    0x6000, 0x7000)
+#if defined(CONFIG_PPC_PSERIES) || defined(CONFIG_PPC_POWERNV)
+/*
+ * Data area reserved for FWNMI option.
+ * This address (0x7000) is fixed by the RPA.
+ * pseries and powernv need to keep the whole page from
+ * 0x7000 to 0x8000 free for use by the firmware
  */
+ZERO_FIXED_SECTION(fwnmi_page,          0x7000, 0x8000)
+OPEN_TEXT_SECTION(0x8000)
+#else
+OPEN_TEXT_SECTION(0x7000)
+#endif
+
+USE_FIXED_SECTION(real_vectors)
+
+
 	/* Syscall routine is used twice, in reloc-off and reloc-on paths */
 #define SYSCALL_PSERIES_1 					\
 BEGIN_FTR_SECTION						\
@@ -91,7 +140,6 @@ END_FTR_SECTION_IFSET(CPU_FTR_REAL_LE)				\
  * Therefore any relative branches in this section must only
  * branch to labels in this section.
  */
-	. = 0x100
 	.globl __start_interrupts
 __start_interrupts:
 
@@ -205,9 +253,6 @@ VECTOR_HANDLER_REAL_BEGIN(instruction_access_slb, 0x480, 0x500)
 #endif
 VECTOR_HANDLER_REAL_END(instruction_access_slb, 0x480, 0x500)
 
-	/* We open code these as we can't have a ". = x" (even with
-	 * x = "." within a feature section
-	 */
 VECTOR_HANDLER_REAL_BEGIN(hardware_interrupt, 0x500, 0x600)
 	.globl hardware_interrupt_hv;
 hardware_interrupt_hv:
@@ -217,7 +262,7 @@ hardware_interrupt_hv:
 do_kvm_H0x500:
 		KVM_HANDLER(PACA_EXGEN, EXC_HV, 0x502)
 	FTR_SECTION_ELSE
-		_MASKABLE_EXCEPTION_PSERIES(0x500, hardware_interrupt_common,
+		_MASKABLE_EXCEPTION_PSERIES(0x500, FIXED_SECTION_REL_ADDR(text, hardware_interrupt_common),
 					    EXC_STD, SOFTEN_TEST_PR)
 do_kvm_0x500:
 		KVM_HANDLER(PACA_EXGEN, EXC_STD, 0x500)
@@ -364,7 +409,6 @@ TRAMP_KVM_HV_SKIP(PACA_EXGEN, 0x1800)
 
 #else /* CONFIG_CBE_RAS */
 VECTOR_HANDLER_REAL_NONE(0x1800, 0x1900)
-	. = 0x1800
 #endif
 
 
@@ -617,9 +661,16 @@ masked_##_H##interrupt:					\
 	GET_SCRATCH0(r13);				\
 	##_H##rfid;					\
 	b	.
-	
+
+/*
+ * Real mode exceptions actually use this too, but alternate
+ * instruction code patches (which end up in the common .text area)
+ * cannot reach these if they are put there.
+ */
+USE_FIXED_SECTION(virt_trampolines)
 	MASKED_INTERRUPT()
 	MASKED_INTERRUPT(H)
+UNUSE_FIXED_SECTION(virt_trampolines)
 
 /*
  * Called from arch_local_irq_enable when an interrupt needs
@@ -806,7 +857,7 @@ hardware_interrupt_relon_hv:
 	BEGIN_FTR_SECTION
 		_MASKABLE_RELON_EXCEPTION_PSERIES(0x500, hardware_interrupt_common, EXC_HV, SOFTEN_TEST_HV)
 	FTR_SECTION_ELSE
-		_MASKABLE_RELON_EXCEPTION_PSERIES(0x500, hardware_interrupt_common, EXC_STD, SOFTEN_TEST_PR)
+		_MASKABLE_RELON_EXCEPTION_PSERIES(0x500, FIXED_SECTION_REL_ADDR(text, hardware_interrupt_common), EXC_STD, SOFTEN_TEST_PR)
 	ALT_FTR_SECTION_END_IFSET(CPU_FTR_HVMODE)
 VECTOR_HANDLER_VIRT_END(hardware_interrupt, 0x4500, 0x4600)
 
@@ -1095,6 +1146,7 @@ __TRAMP_HANDLER_VIRT_OOL(vsx_unavailable, 0xf40)
 __TRAMP_HANDLER_VIRT_OOL(facility_unavailable, 0xf60)
 __TRAMP_HANDLER_VIRT_OOL_HV(h_facility_unavailable, 0xf80)
 
+USE_FIXED_SECTION(virt_trampolines)
 	/*
 	 * The __end_interrupts marker must be past the out-of-line (OOL)
 	 * handlers, so that they are copied to real address 0x100 when running
@@ -1105,21 +1157,7 @@ __TRAMP_HANDLER_VIRT_OOL_HV(h_facility_unavailable, 0xf80)
 	.align	7
 	.globl	__end_interrupts
 __end_interrupts:
-
-#if defined(CONFIG_PPC_PSERIES) || defined(CONFIG_PPC_POWERNV)
-/*
- * Data area reserved for FWNMI option.
- * This address (0x7000) is fixed by the RPA.
- */
-	.= 0x7000
-	.globl fwnmi_data_area
-fwnmi_data_area:
-
-	/* pseries and powernv need to keep the whole page from
-	 * 0x7000 to 0x8000 free for use by the firmware
-	 */
-	. = 0x8000
-#endif /* defined(CONFIG_PPC_PSERIES) || defined(CONFIG_PPC_POWERNV) */
+UNUSE_FIXED_SECTION(virt_trampolines)
 
 COMMON_HANDLER(facility_unavailable_common, 0xf60, facility_unavailable_exception)
 COMMON_HANDLER(h_facility_unavailable_common, 0xf80, facility_unavailable_exception)
@@ -1437,6 +1475,13 @@ TRAMP_HANDLER_BEGIN(power4_fixup_nap)
 TRAMP_HANDLER_END(power4_fixup_nap)
 #endif
 
+CLOSE_FIXED_SECTION(real_vectors);
+CLOSE_FIXED_SECTION(real_trampolines);
+CLOSE_FIXED_SECTION(virt_vectors);
+CLOSE_FIXED_SECTION(virt_trampolines);
+
+USE_TEXT_SECTION()
+
 /*
  * Hash table stuff
  */
diff --git a/arch/powerpc/kernel/head_64.S b/arch/powerpc/kernel/head_64.S
index f765b04..39920fc 100644
--- a/arch/powerpc/kernel/head_64.S
+++ b/arch/powerpc/kernel/head_64.S
@@ -28,6 +28,7 @@
 #include <asm/page.h>
 #include <asm/mmu.h>
 #include <asm/ppc_asm.h>
+#include <asm/head-64.h>
 #include <asm/asm-offsets.h>
 #include <asm/bug.h>
 #include <asm/cputable.h>
@@ -65,10 +66,10 @@
  *   2. The kernel is entered at __start
  */
 
-	.text
-	.globl  _stext
-_stext:
-_GLOBAL(__start)
+OPEN_FIXED_SECTION(first_256B, 0x0, 0x100)
+
+USE_FIXED_SECTION(first_256B)
+FIXED_SECTION_ENTRY_S_BEGIN(first_256B, __start, 0x0)
 	/* NOP this out unconditionally */
 BEGIN_FTR_SECTION
 	FIXUP_ENDIAN
@@ -77,6 +78,7 @@ END_FTR_SECTION(0, 1)
 
 	/* Catch branch to 0 in real mode */
 	trap
+FIXED_SECTION_ENTRY_END(first_256B, __start)
 
 	/* Secondary processors spin on this value until it becomes non-zero.
 	 * When non-zero, it contains the real address of the function the cpu
@@ -101,13 +103,13 @@ __secondary_hold_acknowledge:
 	 * observing the alignment requirement.
 	 */
 	/* Do not move this variable as kexec-tools knows about it. */
-	. = 0x5c
-	.globl	__run_at_load
-__run_at_load:
+FIXED_SECTION_ENTRY_S_BEGIN(first_256B, __run_at_load, 0x5c)
 	.long	0x72756e30	/* "run0" -- relocate to 0 by default */
+FIXED_SECTION_ENTRY_END(first_256B, __run_at_load)
+
 #endif
 
-	. = 0x60
+FIXED_SECTION_ENTRY_S_BEGIN(first_256B, __secondary_hold, 0x60)
 /*
  * The following code is used to hold secondary processors
  * in a spin loop after they have entered the kernel, but
@@ -117,8 +119,6 @@ __run_at_load:
  * Use .globl here not _GLOBAL because we want __secondary_hold
  * to be the actual text address, not a descriptor.
  */
-	.globl	__secondary_hold
-__secondary_hold:
 	FIXUP_ENDIAN
 #ifndef CONFIG_PPC_BOOK3E
 	mfmsr	r24
@@ -133,7 +133,7 @@ __secondary_hold:
 	/* Tell the master cpu we're here */
 	/* Relocation is off & we are located at an address less */
 	/* than 0x100, so only need to grab low order offset.    */
-	std	r24,__secondary_hold_acknowledge-_stext(0)
+	std	r24,ABS_ADDR(__secondary_hold_acknowledge)(0)
 	sync
 
 	li	r26,0
@@ -141,7 +141,7 @@ __secondary_hold:
 	tovirt(r26,r26)
 #endif
 	/* All secondary cpus wait here until told to start. */
-100:	ld	r12,__secondary_hold_spinloop-_stext(r26)
+100:	ld	r12,ABS_ADDR(__secondary_hold_spinloop)(r26)
 	cmpdi	0,r12,0
 	beq	100b
 
@@ -166,12 +166,15 @@ __secondary_hold:
 #else
 	BUG_OPCODE
 #endif
+FIXED_SECTION_ENTRY_END(first_256B, __secondary_hold)
+
+CLOSE_FIXED_SECTION(first_256B)
 
 /* This value is used to mark exception frames on the stack. */
 	.section ".toc","aw"
 exception_marker:
 	.tc	ID_72656773_68657265[TC],0x7265677368657265
-	.text
+	.previous
 
 /*
  * On server, we include the exception vectors code here as it
@@ -180,8 +183,12 @@ exception_marker:
  */
 #ifdef CONFIG_PPC_BOOK3S
 #include "exceptions-64s.S"
+#else
+OPEN_TEXT_SECTION(0x100)
 #endif
 
+USE_TEXT_SECTION()
+
 #ifdef CONFIG_PPC_BOOK3E
 /*
  * The booting_thread_hwid holds the thread id we want to boot in cpu
@@ -558,7 +565,7 @@ __after_prom_start:
 #if defined(CONFIG_PPC_BOOK3E)
 	tovirt(r26,r26)		/* on booke, we already run at PAGE_OFFSET */
 #endif
-	lwz	r7,__run_at_load-_stext(r26)
+	lwz	r7,ABS_ADDR(__run_at_load)(r26)
 #if defined(CONFIG_PPC_BOOK3E)
 	tophys(r26,r26)
 #endif
@@ -601,7 +608,7 @@ __after_prom_start:
 #if defined(CONFIG_PPC_BOOK3E)
 	tovirt(r26,r26)		/* on booke, we already run at PAGE_OFFSET */
 #endif
-	lwz	r7,__run_at_load-_stext(r26)
+	lwz	r7,ABS_ADDR(__run_at_load)(r26)
 	cmplwi	cr0,r7,1
 	bne	3f
 
@@ -611,28 +618,31 @@ __after_prom_start:
 	sub	r5,r5,r11
 #else
 	/* just copy interrupts */
-	LOAD_REG_IMMEDIATE(r5, __end_interrupts - _stext)
+	LOAD_REG_IMMEDIATE(r5, FIXED_SECTION_ABS_ADDR(virt_trampolines, __end_interrupts))
 #endif
 	b	5f
 3:
 #endif
-	lis	r5,(copy_to_here - _stext)@ha
-	addi	r5,r5,(copy_to_here - _stext)@l /* # bytes of memory to copy */
+	/* # bytes of memory to copy */
+	lis	r5,ABS_ADDR(copy_to_here)@ha
+	addi	r5,r5,ABS_ADDR(copy_to_here)@l
 
 	bl	copy_and_flush		/* copy the first n bytes	 */
 					/* this includes the code being	 */
 					/* executed here.		 */
-	addis	r8,r3,(4f - _stext)@ha	/* Jump to the copy of this code */
-	addi	r12,r8,(4f - _stext)@l	/* that we just made */
+	/* Jump to the copy of this code that we just made */
+	addis	r8,r3, ABS_ADDR(4f)@ha
+	addi	r12,r8, ABS_ADDR(4f)@l
 	mtctr	r12
 	bctr
 
-.balign 8
-p_end:	.llong	_end - _stext
+p_end: .llong _end - copy_to_here
 
 4:	/* Now copy the rest of the kernel up to _end */
-	addis	r5,r26,(p_end - _stext)@ha
-	ld	r5,(p_end - _stext)@l(r5)	/* get _end */
+	addis   r8,r26,ABS_ADDR(p_end)@ha
+	/* load p_end */
+	ld      r8,ABS_ADDR(p_end)@l(r8)
+	add	r5,r5,r8
 5:	bl	copy_and_flush		/* copy the rest */
 
 9:	b	start_here_multiplatform
diff --git a/arch/powerpc/kernel/vmlinux.lds.S b/arch/powerpc/kernel/vmlinux.lds.S
index b5fba68..df59e14 100644
--- a/arch/powerpc/kernel/vmlinux.lds.S
+++ b/arch/powerpc/kernel/vmlinux.lds.S
@@ -44,11 +44,52 @@ SECTIONS
  * Text, read only data and other permanent read-only sections
  */
 
+	_text = .;
+	_stext = .;
+
+#ifdef CONFIG_PPC64
+	/*
+	 * Head text.
+	 * This needs to be in its own output section to avoid ld placing
+	 * branch trampoline stubs randomly throughout the fixed sections,
+	 * which it will do (even if the branch comes from another section)
+	 * in order to optimize stub generation.
+	 */
+	.head.text : AT(ADDR(.head.text) - LOAD_OFFSET) {
+		KEEP(*(.head.text.first_256B));
+#ifndef CONFIG_PPC_BOOK3S
+		. = 0x100;
+#else
+		KEEP(*(.head.text.real_vectors));
+		*(.head.text.real_trampolines);
+		KEEP(*(.head.text.virt_vectors));
+		*(.head.text.virt_trampolines);
+#if defined(CONFIG_PPC_PSERIES) || defined(CONFIG_PPC_POWERNV)
+		KEEP(*(.head.data.fwnmi_page));
+		. = 0x8000;
+#else
+		. = 0x7000;
+#endif
+#endif
+		/*
+		 * The offsets above are specified in order to catch the linker
+		 * adding branch stubs in one of the fixed sections, which
+		 * breaks the fixed section offsets (see head-64.h) and that
+		 * can't be caught by the assembler. If the build died here,
+		 * code in head is referencing labels it can't reach.
+		 *
+		 * Linker stub generation could be allowed in "trampoline"
+		 * sections if necessary, if they were put into their own
+		 * output sections and the fixed section code adjusted to
+		 * avoid complete padding of those sections (their offsets
+		 * would be specified here in the linker script).
+		 */
+	} :kernel
+#endif
+
 	/* Text and gots */
 	.text : AT(ADDR(.text) - LOAD_OFFSET) {
 		ALIGN_FUNCTION();
-		HEAD_TEXT
-		_text = .;
 		/* careful! __ftr_alt_* sections need to be close to .text */
 		*(.text .fixup __ftr_alt_* .ref.text)
 		SCHED_TEXT
-- 
2.9.3

* [PATCH 6/8] powerpc/pseries: move related exception code together
  2016-09-13  3:08 [PATCH 0/8] powerpc/64: use asm sections for head/exception layout Nicholas Piggin
                   ` (4 preceding siblings ...)
  2016-09-13  3:08 ` [PATCH 5/8] powerpc/64: use gas sections for arranging exception vectors Nicholas Piggin
@ 2016-09-13  3:08 ` Nicholas Piggin
  2016-09-13  3:08 ` [PATCH 7/8] powerpc/pseries: use single macro for both parts of OOL exception Nicholas Piggin
  2016-09-13  3:08 ` [PATCH 8/8] powerpc/pseries: remove unused exception code, small cleanups Nicholas Piggin
  7 siblings, 0 replies; 13+ messages in thread
From: Nicholas Piggin @ 2016-09-13  3:08 UTC (permalink / raw)
  To: linuxppc-dev, Michael Ellerman, Benjamin Herrenschmidt, Paul Mackerras
  Cc: Nicholas Piggin

This just moves related handler code together in the .S file, which is
now possible because the sections are linked to their correct locations
regardless of where the code appears in the file. Generated code should
remain the same, except for placement and offsets.

Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
---
 arch/powerpc/kernel/exceptions-64s.S | 2110 ++++++++++++++++------------------
 1 file changed, 1022 insertions(+), 1088 deletions(-)

diff --git a/arch/powerpc/kernel/exceptions-64s.S b/arch/powerpc/kernel/exceptions-64s.S
index 068f96f..5dd7f0b 100644
--- a/arch/powerpc/kernel/exceptions-64s.S
+++ b/arch/powerpc/kernel/exceptions-64s.S
@@ -78,60 +78,6 @@ OPEN_TEXT_SECTION(0x7000)
 USE_FIXED_SECTION(real_vectors)
 
 
-	/* Syscall routine is used twice, in reloc-off and reloc-on paths */
-#define SYSCALL_PSERIES_1 					\
-BEGIN_FTR_SECTION						\
-	cmpdi	r0,0x1ebe ; 					\
-	beq-	1f ;						\
-END_FTR_SECTION_IFSET(CPU_FTR_REAL_LE)				\
-	mr	r9,r13 ;					\
-	GET_PACA(r13) ;						\
-	mfspr	r11,SPRN_SRR0 ;					\
-0:
-
-#define SYSCALL_PSERIES_2_RFID 					\
-	mfspr	r12,SPRN_SRR1 ;					\
-	ld	r10,PACAKBASE(r13) ; 				\
-	LOAD_HANDLER(r10, system_call_common) ; 		\
-	mtspr	SPRN_SRR0,r10 ; 				\
-	ld	r10,PACAKMSR(r13) ;				\
-	mtspr	SPRN_SRR1,r10 ; 				\
-	rfid ; 							\
-	b	. ;	/* prevent speculative execution */
-
-#define SYSCALL_PSERIES_3					\
-	/* Fast LE/BE switch system call */			\
-1:	mfspr	r12,SPRN_SRR1 ;					\
-	xori	r12,r12,MSR_LE ;				\
-	mtspr	SPRN_SRR1,r12 ;					\
-	rfid ;		/* return to userspace */		\
-	b	. ;	/* prevent speculative execution */
-
-#if defined(CONFIG_RELOCATABLE)
-	/*
-	 * We can't branch directly so we do it via the CTR which
-	 * is volatile across system calls.
-	 */
-#define SYSCALL_PSERIES_2_DIRECT				\
-	mflr	r10 ;						\
-	ld	r12,PACAKBASE(r13) ; 				\
-	LOAD_HANDLER(r12, system_call_common) ;			\
-	mtctr	r12 ;						\
-	mfspr	r12,SPRN_SRR1 ;					\
-	/* Re-use of r13... No spare regs to do this */	\
-	li	r13,MSR_RI ;					\
-	mtmsrd 	r13,1 ;						\
-	GET_PACA(r13) ;	/* get r13 back */			\
-	bctr ;
-#else
-	/* We can branch directly */
-#define SYSCALL_PSERIES_2_DIRECT				\
-	mfspr	r12,SPRN_SRR1 ;					\
-	li	r10,MSR_RI ;					\
-	mtmsrd 	r10,1 ;			/* Set RI (EE=0) */	\
-	b	system_call_common ;
-#endif
-
 /*
  * This is the start of the interrupt handlers for pSeries
  * This code runs with relocation off.
@@ -185,6 +131,20 @@ END_FTR_SECTION_IFSET(CPU_FTR_HVMODE | CPU_FTR_ARCH_206)
 	EXCEPTION_PROLOG_PSERIES(PACA_EXGEN, system_reset_common, EXC_STD,
 				 NOTEST, 0x100)
 VECTOR_HANDLER_REAL_END(system_reset, 0x100, 0x200)
+VECTOR_HANDLER_VIRT_NONE(0x4100, 0x4200)
+COMMON_HANDLER(system_reset_common, 0x100, system_reset_exception)
+
+#ifdef CONFIG_PPC_PSERIES
+/*
+ * Vectors for the FWNMI option.  Share common code.
+ */
+TRAMP_HANDLER_BEGIN(system_reset_fwnmi)
+	SET_SCRATCH0(r13)		/* save r13 */
+	EXCEPTION_PROLOG_PSERIES(PACA_EXGEN, system_reset_common, EXC_STD,
+				 NOTEST, 0x100)
+TRAMP_HANDLER_END(system_reset_fwnmi)
+#endif /* CONFIG_PPC_PSERIES */
+
 
 VECTOR_HANDLER_REAL_BEGIN(machine_check, 0x200, 0x300)
 	/* This is moved out of line as it can be patched by FW, but
@@ -207,606 +167,364 @@ FTR_SECTION_ELSE
 	b	machine_check_pSeries_0
 ALT_FTR_SECTION_END_IFSET(CPU_FTR_HVMODE)
 VECTOR_HANDLER_REAL_END(machine_check, 0x200, 0x300)
+VECTOR_HANDLER_VIRT_NONE(0x4200, 0x4300)
 
-VECTOR_HANDLER_REAL(data_access, 0x300, 0x380)
-
-VECTOR_HANDLER_REAL_BEGIN(data_access_slb, 0x380, 0x400)
-	SET_SCRATCH0(r13)
-	EXCEPTION_PROLOG_0(PACA_EXSLB)
-	EXCEPTION_PROLOG_1(PACA_EXSLB, KVMTEST_PR, 0x380)
-	std	r3,PACA_EXSLB+EX_R3(r13)
-	mfspr	r3,SPRN_DAR
-	mfspr	r12,SPRN_SRR1
-#ifndef CONFIG_RELOCATABLE
-	b	slb_miss_realmode
-#else
+TRAMP_HANDLER_BEGIN(machine_check_powernv_early)
+BEGIN_FTR_SECTION
+	EXCEPTION_PROLOG_1(PACA_EXMC, NOTEST, 0x200)
 	/*
-	 * We can't just use a direct branch to slb_miss_realmode
-	 * because the distance from here to there depends on where
-	 * the kernel ends up being put.
+	 * Register contents:
+	 * R13		= PACA
+	 * R9		= CR
+	 * Original R9 to R13 is saved on PACA_EXMC
+	 *
+	 * Switch to mc_emergency stack and handle re-entrancy (we limit
+	 * the nested MCE upto level 4 to avoid stack overflow).
+	 * Save MCE registers srr1, srr0, dar and dsisr and then set ME=1
+	 *
+	 * We use paca->in_mce to check whether this is the first entry or
+	 * nested machine check. We increment paca->in_mce to track nested
+	 * machine checks.
+	 *
+	 * If this is the first entry then set stack pointer to
+	 * paca->mc_emergency_sp, otherwise r1 is already pointing to
+	 * stack frame on mc_emergency stack.
+	 *
+	 * NOTE: We are here with MSR_ME=0 (off), which means we risk a
+	 * checkstop if we get another machine check exception before we do
+	 * rfid with MSR_ME=1.
 	 */
-	mfctr	r11
-	ld	r10,PACAKBASE(r13)
-	LOAD_HANDLER(r10, slb_miss_realmode)
-	mtctr	r10
-	bctr
-#endif
-VECTOR_HANDLER_REAL_END(data_access_slb, 0x380, 0x400)
-
-VECTOR_HANDLER_REAL(instruction_access, 0x400, 0x480)
+	mr	r11,r1			/* Save r1 */
+	lhz	r10,PACA_IN_MCE(r13)
+	cmpwi	r10,0			/* Are we in nested machine check */
+	bne	0f			/* Yes, we are. */
+	/* First machine check entry */
+	ld	r1,PACAMCEMERGSP(r13)	/* Use MC emergency stack */
+0:	subi	r1,r1,INT_FRAME_SIZE	/* alloc stack frame */
+	addi	r10,r10,1		/* increment paca->in_mce */
+	sth	r10,PACA_IN_MCE(r13)
+	/* Limit nested MCE to level 4 to avoid stack overflow */
+	cmpwi	r10,4
+	bgt	2f			/* Check if we hit limit of 4 */
+	std	r11,GPR1(r1)		/* Save r1 on the stack. */
+	std	r11,0(r1)		/* make stack chain pointer */
+	mfspr	r11,SPRN_SRR0		/* Save SRR0 */
+	std	r11,_NIP(r1)
+	mfspr	r11,SPRN_SRR1		/* Save SRR1 */
+	std	r11,_MSR(r1)
+	mfspr	r11,SPRN_DAR		/* Save DAR */
+	std	r11,_DAR(r1)
+	mfspr	r11,SPRN_DSISR		/* Save DSISR */
+	std	r11,_DSISR(r1)
+	std	r9,_CCR(r1)		/* Save CR in stackframe */
+	/* Save r9 through r13 from EXMC save area to stack frame. */
+	EXCEPTION_PROLOG_COMMON_2(PACA_EXMC)
+	mfmsr	r11			/* get MSR value */
+	ori	r11,r11,MSR_ME		/* turn on ME bit */
+	ori	r11,r11,MSR_RI		/* turn on RI bit */
+	ld	r12,PACAKBASE(r13)	/* get high part of &label */
+	LOAD_HANDLER(r12, machine_check_handle_early)
+1:	mtspr	SPRN_SRR0,r12
+	mtspr	SPRN_SRR1,r11
+	rfid
+	b	.	/* prevent speculative execution */
+2:
+	/* Stack overflow. Stay on emergency stack and panic.
+	 * Keep the ME bit off while panic-ing, so that if we hit
+	 * another machine check we checkstop.
+	 */
+	addi	r1,r1,INT_FRAME_SIZE	/* go back to previous stack frame */
+	ld	r11,PACAKMSR(r13)
+	ld	r12,PACAKBASE(r13)
+	LOAD_HANDLER(r12, unrecover_mce)
+	li	r10,MSR_ME
+	andc	r11,r11,r10		/* Turn off MSR_ME */
+	b	1b
+	b	.	/* prevent speculative execution */
+END_FTR_SECTION_IFSET(CPU_FTR_HVMODE)
+TRAMP_HANDLER_END(machine_check_powernv_early)
 
-VECTOR_HANDLER_REAL_BEGIN(instruction_access_slb, 0x480, 0x500)
-	SET_SCRATCH0(r13)
-	EXCEPTION_PROLOG_0(PACA_EXSLB)
-	EXCEPTION_PROLOG_1(PACA_EXSLB, KVMTEST_PR, 0x480)
-	std	r3,PACA_EXSLB+EX_R3(r13)
-	mfspr	r3,SPRN_SRR0		/* SRR0 is faulting address */
+TRAMP_HANDLER_BEGIN(machine_check_pSeries)
+	.globl machine_check_fwnmi
+machine_check_fwnmi:
+	SET_SCRATCH0(r13)		/* save r13 */
+	EXCEPTION_PROLOG_0(PACA_EXMC)
+machine_check_pSeries_0:
+	EXCEPTION_PROLOG_1(PACA_EXMC, KVMTEST_PR, 0x200)
+	/*
+	 * The following is essentially EXCEPTION_PROLOG_PSERIES_1 with the
+	 * difference that MSR_RI is not enabled, because PACA_EXMC is being
+	 * used, so nested machine check corrupts it. machine_check_common
+	 * enables MSR_RI.
+	 */
+	ld	r12,PACAKBASE(r13)
+	ld	r10,PACAKMSR(r13)
+	xori	r10,r10,MSR_RI
+	mfspr	r11,SPRN_SRR0
+	LOAD_HANDLER(r12, machine_check_common)
+	mtspr	SPRN_SRR0,r12
 	mfspr	r12,SPRN_SRR1
-#ifndef CONFIG_RELOCATABLE
-	b	slb_miss_realmode
-#else
-	mfctr	r11
-	ld	r10,PACAKBASE(r13)
-	LOAD_HANDLER(r10, slb_miss_realmode)
-	mtctr	r10
-	bctr
-#endif
-VECTOR_HANDLER_REAL_END(instruction_access_slb, 0x480, 0x500)
-
-VECTOR_HANDLER_REAL_BEGIN(hardware_interrupt, 0x500, 0x600)
-	.globl hardware_interrupt_hv;
-hardware_interrupt_hv:
-	BEGIN_FTR_SECTION
-		_MASKABLE_EXCEPTION_PSERIES(0x500, hardware_interrupt_common,
-					    EXC_HV, SOFTEN_TEST_HV)
-do_kvm_H0x500:
-		KVM_HANDLER(PACA_EXGEN, EXC_HV, 0x502)
-	FTR_SECTION_ELSE
-		_MASKABLE_EXCEPTION_PSERIES(0x500, FIXED_SECTION_REL_ADDR(text, hardware_interrupt_common),
-					    EXC_STD, SOFTEN_TEST_PR)
-do_kvm_0x500:
-		KVM_HANDLER(PACA_EXGEN, EXC_STD, 0x500)
-	ALT_FTR_SECTION_END_IFSET(CPU_FTR_HVMODE | CPU_FTR_ARCH_206)
-VECTOR_HANDLER_REAL_END(hardware_interrupt, 0x500, 0x600)
-
-VECTOR_HANDLER_REAL(alignment, 0x600, 0x700)
-
-TRAMP_KVM(PACA_EXGEN, 0x600)
-
-VECTOR_HANDLER_REAL(program_check, 0x700, 0x800)
-
-TRAMP_KVM(PACA_EXGEN, 0x700)
-
-VECTOR_HANDLER_REAL(fp_unavailable, 0x800, 0x900)
+	mtspr	SPRN_SRR1,r10
+	rfid
+	b	.	/* prevent speculative execution */
 
-TRAMP_KVM(PACA_EXGEN, 0x800)
+TRAMP_HANDLER_END(machine_check_pSeries)
 
-VECTOR_HANDLER_REAL_MASKABLE(decrementer, 0x900, 0x980)
+TRAMP_KVM_SKIP(PACA_EXMC, 0x200)
 
-VECTOR_HANDLER_REAL_HV(hdecrementer, 0x980, 0xa00)
-
-VECTOR_HANDLER_REAL_MASKABLE(doorbell_super, 0xa00, 0xb00)
-
-TRAMP_KVM(PACA_EXGEN, 0xa00)
-
-VECTOR_HANDLER_REAL(trap_0b, 0xb00, 0xc00)
-
-TRAMP_KVM(PACA_EXGEN, 0xb00)
-
-VECTOR_HANDLER_REAL_BEGIN(system_call, 0xc00, 0xd00)
-	 /*
-	  * If CONFIG_KVM_BOOK3S_64_HANDLER is set, save the PPR (on systems
-	  * that support it) before changing to HMT_MEDIUM. That allows the KVM
-	  * code to save that value into the guest state (it is the guest's PPR
-	  * value). Otherwise just change to HMT_MEDIUM as userspace has
-	  * already saved the PPR.
-	  */
-#ifdef CONFIG_KVM_BOOK3S_64_HANDLER
-	SET_SCRATCH0(r13)
-	GET_PACA(r13)
-	std	r9,PACA_EXGEN+EX_R9(r13)
-	OPT_GET_SPR(r9, SPRN_PPR, CPU_FTR_HAS_PPR);
-	HMT_MEDIUM;
-	std	r10,PACA_EXGEN+EX_R10(r13)
-	OPT_SAVE_REG_TO_PACA(PACA_EXGEN+EX_PPR, r9, CPU_FTR_HAS_PPR);
-	mfcr	r9
-	KVMTEST_PR(0xc00)
-	GET_SCRATCH0(r13)
-#else
-	HMT_MEDIUM;
-#endif
-	SYSCALL_PSERIES_1
-	SYSCALL_PSERIES_2_RFID
-	SYSCALL_PSERIES_3
-VECTOR_HANDLER_REAL_END(system_call, 0xc00, 0xd00)
-
-TRAMP_KVM(PACA_EXGEN, 0xc00)
-
-VECTOR_HANDLER_REAL(single_step, 0xd00, 0xe00)
-
-TRAMP_KVM(PACA_EXGEN, 0xd00)
+COMMON_HANDLER_BEGIN(machine_check_common)
+	/*
+	 * Machine check is different because we use a different
+	 * save area: PACA_EXMC instead of PACA_EXGEN.
+	 */
+	mfspr	r10,SPRN_DAR
+	std	r10,PACA_EXMC+EX_DAR(r13)
+	mfspr	r10,SPRN_DSISR
+	stw	r10,PACA_EXMC+EX_DSISR(r13)
+	EXCEPTION_PROLOG_COMMON(0x200, PACA_EXMC)
+	FINISH_NAP
+	RECONCILE_IRQ_STATE(r10, r11)
+	ld	r3,PACA_EXMC+EX_DAR(r13)
+	lwz	r4,PACA_EXMC+EX_DSISR(r13)
+	/* Enable MSR_RI when finished with PACA_EXMC */
+	li	r10,MSR_RI
+	mtmsrd 	r10,1
+	std	r3,_DAR(r1)
+	std	r4,_DSISR(r1)
+	bl	save_nvgprs
+	addi	r3,r1,STACK_FRAME_OVERHEAD
+	bl	machine_check_exception
+	b	ret_from_except
+COMMON_HANDLER_END(machine_check_common)
 
+#define MACHINE_CHECK_HANDLER_WINDUP			\
+	/* Clear MSR_RI before setting SRR0 and SRR1. */\
+	li	r0,MSR_RI;				\
+	mfmsr	r9;		/* get MSR value */	\
+	andc	r9,r9,r0;				\
+	mtmsrd	r9,1;		/* Clear MSR_RI */	\
+	/* Move original SRR0 and SRR1 into the respective regs */	\
+	ld	r9,_MSR(r1);				\
+	mtspr	SPRN_SRR1,r9;				\
+	ld	r3,_NIP(r1);				\
+	mtspr	SPRN_SRR0,r3;				\
+	ld	r9,_CTR(r1);				\
+	mtctr	r9;					\
+	ld	r9,_XER(r1);				\
+	mtxer	r9;					\
+	ld	r9,_LINK(r1);				\
+	mtlr	r9;					\
+	REST_GPR(0, r1);				\
+	REST_8GPRS(2, r1);				\
+	REST_GPR(10, r1);				\
+	ld	r11,_CCR(r1);				\
+	mtcr	r11;					\
+	/* Decrement paca->in_mce. */			\
+	lhz	r12,PACA_IN_MCE(r13);			\
+	subi	r12,r12,1;				\
+	sth	r12,PACA_IN_MCE(r13);			\
+	REST_GPR(11, r1);				\
+	REST_2GPRS(12, r1);				\
+	/* restore original r1. */			\
+	ld	r1,GPR1(r1)
 
-	/* At 0xe??? we have a bunch of hypervisor exceptions, we branch
-	 * out of line to handle them
+	/*
+	 * Handle machine check early in real mode. We come here with
+	 * ME=1, MMU (IR=0 and DR=0) off and using MC emergency stack.
 	 */
-__VECTOR_HANDLER_REAL_OOL_HV(h_data_storage, 0xe00, 0xe20)
-
-__VECTOR_HANDLER_REAL_OOL_HV(h_instr_storage, 0xe20, 0xe40)
-
-__VECTOR_HANDLER_REAL_OOL_HV(emulation_assist, 0xe40, 0xe60)
-
-__VECTOR_HANDLER_REAL_OOL_HV_DIRECT(hmi_exception, 0xe60, 0xe80, hmi_exception_early)
-
-__VECTOR_HANDLER_REAL_OOL_MASKABLE_HV(h_doorbell, 0xe80, 0xea0)
-
-__VECTOR_HANDLER_REAL_OOL_MASKABLE_HV(h_virt_irq, 0xea0, 0xec0)
-
-VECTOR_HANDLER_REAL_NONE(0xec0, 0xf00)
-
-__VECTOR_HANDLER_REAL_OOL(performance_monitor, 0xf00, 0xf20)
-
-__VECTOR_HANDLER_REAL_OOL(altivec_unavailable, 0xf20, 0xf40)
-
-__VECTOR_HANDLER_REAL_OOL(vsx_unavailable, 0xf40, 0xf60)
-
-__VECTOR_HANDLER_REAL_OOL(facility_unavailable, 0xf60, 0xf80)
-
-__VECTOR_HANDLER_REAL_OOL_HV(h_facility_unavailable, 0xf80, 0xfa0)
-
-VECTOR_HANDLER_REAL_NONE(0xfa0, 0x1200)
-
-
-#ifdef CONFIG_CBE_RAS
-VECTOR_HANDLER_REAL_HV(cbe_system_error, 0x1200, 0x1300)
-
-TRAMP_KVM_HV_SKIP(PACA_EXGEN, 0x1200)
-
-#else /* CONFIG_CBE_RAS */
-VECTOR_HANDLER_REAL_NONE(0x1200, 0x1300)
-#endif
-
-VECTOR_HANDLER_REAL(instruction_breakpoint, 0x1300, 0x1400)
-
-TRAMP_KVM_SKIP(PACA_EXGEN, 0x1300)
-
-VECTOR_HANDLER_REAL_BEGIN(denorm_exception_hv, 0x1500, 0x1600)
-	mtspr	SPRN_SPRG_HSCRATCH0,r13
-	EXCEPTION_PROLOG_0(PACA_EXGEN)
-	EXCEPTION_PROLOG_1(PACA_EXGEN, NOTEST, 0x1500)
-
-#ifdef CONFIG_PPC_DENORMALISATION
-	mfspr	r10,SPRN_HSRR1
-	mfspr	r11,SPRN_HSRR0		/* save HSRR0 */
-	andis.	r10,r10,(HSRR1_DENORM)@h /* denorm? */
-	addi	r11,r11,-4		/* HSRR0 is next instruction */
-	bne+	denorm_assist
-#endif
-
-	KVMTEST_PR(0x1500)
-	EXCEPTION_PROLOG_PSERIES_1(denorm_common, EXC_HV)
-VECTOR_HANDLER_REAL_END(denorm_exception_hv, 0x1500, 0x1600)
-
-TRAMP_KVM_SKIP(PACA_EXGEN, 0x1500)
-
-#ifdef CONFIG_CBE_RAS
-VECTOR_HANDLER_REAL_HV(cbe_maintenance, 0x1600, 0x1700)
-
-TRAMP_KVM_HV_SKIP(PACA_EXGEN, 0x1600)
-
-#else /* CONFIG_CBE_RAS */
-VECTOR_HANDLER_REAL_NONE(0x1600, 0x1700)
-#endif
-
-VECTOR_HANDLER_REAL(altivec_assist, 0x1700, 0x1800)
-
-TRAMP_KVM(PACA_EXGEN, 0x1700)
-
-#ifdef CONFIG_CBE_RAS
-VECTOR_HANDLER_REAL_HV(cbe_thermal, 0x1800, 0x1900)
-
-TRAMP_KVM_HV_SKIP(PACA_EXGEN, 0x1800)
-
-#else /* CONFIG_CBE_RAS */
-VECTOR_HANDLER_REAL_NONE(0x1800, 0x1900)
-#endif
-
-
-/*** Out of line interrupts support ***/
-
-	/* moved from 0x200 */
-TRAMP_HANDLER_BEGIN(machine_check_powernv_early)
-BEGIN_FTR_SECTION
-	EXCEPTION_PROLOG_1(PACA_EXMC, NOTEST, 0x200)
+COMMON_HANDLER_BEGIN(machine_check_handle_early)
+	std	r0,GPR0(r1)	/* Save r0 */
+	EXCEPTION_PROLOG_COMMON_3(0x200)
+	bl	save_nvgprs
+	addi	r3,r1,STACK_FRAME_OVERHEAD
+	bl	machine_check_early
+	std	r3,RESULT(r1)	/* Save result */
+	ld	r12,_MSR(r1)
+#ifdef	CONFIG_PPC_P7_NAP
 	/*
-	 * Register contents:
-	 * R13		= PACA
-	 * R9		= CR
-	 * Original R9 to R13 is saved on PACA_EXMC
-	 *
-	 * Switch to mc_emergency stack and handle re-entrancy (we limit
-	 * the nested MCE upto level 4 to avoid stack overflow).
-	 * Save MCE registers srr1, srr0, dar and dsisr and then set ME=1
-	 *
-	 * We use paca->in_mce to check whether this is the first entry or
-	 * nested machine check. We increment paca->in_mce to track nested
-	 * machine checks.
-	 *
-	 * If this is the first entry then set stack pointer to
-	 * paca->mc_emergency_sp, otherwise r1 is already pointing to
-	 * stack frame on mc_emergency stack.
+	 * Check if thread was in power saving mode. We come here when any
+	 * of the following is true:
+	 * a. thread wasn't in power saving mode
+	 * b. thread was in power saving mode with no state loss,
+	 *    supervisor state loss or hypervisor state loss.
 	 *
-	 * NOTE: We are here with MSR_ME=0 (off), which means we risk a
-	 * checkstop if we get another machine check exception before we do
-	 * rfid with MSR_ME=1.
+	 * Go back to nap/sleep/winkle mode again if (b) is true.
 	 */
-	mr	r11,r1			/* Save r1 */
-	lhz	r10,PACA_IN_MCE(r13)
-	cmpwi	r10,0			/* Are we in nested machine check */
-	bne	0f			/* Yes, we are. */
-	/* First machine check entry */
-	ld	r1,PACAMCEMERGSP(r13)	/* Use MC emergency stack */
-0:	subi	r1,r1,INT_FRAME_SIZE	/* alloc stack frame */
-	addi	r10,r10,1		/* increment paca->in_mce */
-	sth	r10,PACA_IN_MCE(r13)
-	/* Limit nested MCE to level 4 to avoid stack overflow */
-	cmpwi	r10,4
-	bgt	2f			/* Check if we hit limit of 4 */
-	std	r11,GPR1(r1)		/* Save r1 on the stack. */
-	std	r11,0(r1)		/* make stack chain pointer */
-	mfspr	r11,SPRN_SRR0		/* Save SRR0 */
-	std	r11,_NIP(r1)
-	mfspr	r11,SPRN_SRR1		/* Save SRR1 */
-	std	r11,_MSR(r1)
-	mfspr	r11,SPRN_DAR		/* Save DAR */
-	std	r11,_DAR(r1)
-	mfspr	r11,SPRN_DSISR		/* Save DSISR */
-	std	r11,_DSISR(r1)
-	std	r9,_CCR(r1)		/* Save CR in stackframe */
-	/* Save r9 through r13 from EXMC save area to stack frame. */
-	EXCEPTION_PROLOG_COMMON_2(PACA_EXMC)
-	mfmsr	r11			/* get MSR value */
-	ori	r11,r11,MSR_ME		/* turn on ME bit */
-	ori	r11,r11,MSR_RI		/* turn on RI bit */
-	ld	r12,PACAKBASE(r13)	/* get high part of &label */
-	LOAD_HANDLER(r12, machine_check_handle_early)
-1:	mtspr	SPRN_SRR0,r12
-	mtspr	SPRN_SRR1,r11
-	rfid
-	b	.	/* prevent speculative execution */
-2:
-	/* Stack overflow. Stay on emergency stack and panic.
-	 * Keep the ME bit off while panic-ing, so that if we hit
-	 * another machine check we checkstop.
+	rlwinm.	r11,r12,47-31,30,31	/* Was it in power saving mode? */
+	beq	4f			/* No, it wasn;t */
+	/* Thread was in power saving mode. Go back to nap again. */
+	cmpwi	r11,2
+	blt	3f
+	/* Supervisor/Hypervisor state loss */
+	li	r0,1
+	stb	r0,PACA_NAPSTATELOST(r13)
+3:	bl	machine_check_queue_event
+	MACHINE_CHECK_HANDLER_WINDUP
+	GET_PACA(r13)
+	ld	r1,PACAR1(r13)
+	/*
+	 * Check what idle state this CPU was in and go back to same mode
+	 * again.
 	 */
-	addi	r1,r1,INT_FRAME_SIZE	/* go back to previous stack frame */
-	ld	r11,PACAKMSR(r13)
-	ld	r12,PACAKBASE(r13)
-	LOAD_HANDLER(r12, unrecover_mce)
-	li	r10,MSR_ME
-	andc	r11,r11,r10		/* Turn off MSR_ME */
-	b	1b
-	b	.	/* prevent speculative execution */
-END_FTR_SECTION_IFSET(CPU_FTR_HVMODE)
-TRAMP_HANDLER_END(machine_check_powernv_early)
+	lbz	r3,PACA_THREAD_IDLE_STATE(r13)
+	cmpwi	r3,PNV_THREAD_NAP
+	bgt	10f
+	IDLE_STATE_ENTER_SEQ(PPC_NAP)
+	/* No return */
+10:
+	cmpwi	r3,PNV_THREAD_SLEEP
+	bgt	2f
+	IDLE_STATE_ENTER_SEQ(PPC_SLEEP)
+	/* No return */
 
-TRAMP_HANDLER_BEGIN(machine_check_pSeries)
-	.globl machine_check_fwnmi
-machine_check_fwnmi:
-	SET_SCRATCH0(r13)		/* save r13 */
-	EXCEPTION_PROLOG_0(PACA_EXMC)
-machine_check_pSeries_0:
-	EXCEPTION_PROLOG_1(PACA_EXMC, KVMTEST_PR, 0x200)
+2:
 	/*
-	 * The following is essentially EXCEPTION_PROLOG_PSERIES_1 with the
-	 * difference that MSR_RI is not enabled, because PACA_EXMC is being
-	 * used, so nested machine check corrupts it. machine_check_common
-	 * enables MSR_RI.
+	 * Go back to winkle. Please note that this thread was woken up in
+	 * machine check from winkle and have not restored the per-subcore
+	 * state. Hence before going back to winkle, set last bit of HSPGR0
+	 * to 1. This will make sure that if this thread gets woken up
+	 * again at reset vector 0x100 then it will get chance to restore
+	 * the subcore state.
 	 */
-	ld	r12,PACAKBASE(r13)
-	ld	r10,PACAKMSR(r13)
-	xori	r10,r10,MSR_RI
-	mfspr	r11,SPRN_SRR0
-	LOAD_HANDLER(r12, machine_check_common)
-	mtspr	SPRN_SRR0,r12
-	mfspr	r12,SPRN_SRR1
-	mtspr	SPRN_SRR1,r10
-	rfid
-	b	.	/* prevent speculative execution */
-
-TRAMP_HANDLER_END(machine_check_pSeries)
-
-TRAMP_KVM_SKIP(PACA_EXMC, 0x200)
-TRAMP_KVM_SKIP(PACA_EXGEN, 0x300)
-TRAMP_KVM_SKIP(PACA_EXSLB, 0x380)
-TRAMP_KVM(PACA_EXGEN, 0x400)
-TRAMP_KVM(PACA_EXSLB, 0x480)
-TRAMP_KVM(PACA_EXGEN, 0x900)
-TRAMP_KVM_HV(PACA_EXGEN, 0x980)
-
-#ifdef CONFIG_PPC_DENORMALISATION
-COMMON_HANDLER_BEGIN(denorm_assist)
-BEGIN_FTR_SECTION
-/*
- * To denormalise we need to move a copy of the register to itself.
- * For POWER6 do that here for all FP regs.
- */
-	mfmsr	r10
-	ori	r10,r10,(MSR_FP|MSR_FE0|MSR_FE1)
-	xori	r10,r10,(MSR_FE0|MSR_FE1)
-	mtmsrd	r10
-	sync
-
-#define FMR2(n)  fmr (n), (n) ; fmr n+1, n+1
-#define FMR4(n)  FMR2(n) ; FMR2(n+2)
-#define FMR8(n)  FMR4(n) ; FMR4(n+4)
-#define FMR16(n) FMR8(n) ; FMR8(n+8)
-#define FMR32(n) FMR16(n) ; FMR16(n+16)
-	FMR32(0)
-
-FTR_SECTION_ELSE
-/*
- * To denormalise we need to move a copy of the register to itself.
- * For POWER7 do that here for the first 32 VSX registers only.
- */
-	mfmsr	r10
-	oris	r10,r10,MSR_VSX@h
-	mtmsrd	r10
-	sync
-
-#define XVCPSGNDP2(n) XVCPSGNDP(n,n,n) ; XVCPSGNDP(n+1,n+1,n+1)
-#define XVCPSGNDP4(n) XVCPSGNDP2(n) ; XVCPSGNDP2(n+2)
-#define XVCPSGNDP8(n) XVCPSGNDP4(n) ; XVCPSGNDP4(n+4)
-#define XVCPSGNDP16(n) XVCPSGNDP8(n) ; XVCPSGNDP8(n+8)
-#define XVCPSGNDP32(n) XVCPSGNDP16(n) ; XVCPSGNDP16(n+16)
-	XVCPSGNDP32(0)
-
-ALT_FTR_SECTION_END_IFCLR(CPU_FTR_ARCH_206)
-
-BEGIN_FTR_SECTION
-	b	denorm_done
-END_FTR_SECTION_IFCLR(CPU_FTR_ARCH_207S)
-/*
- * To denormalise we need to move a copy of the register to itself.
- * For POWER8 we need to do that for all 64 VSX registers
- */
-	XVCPSGNDP32(32)
-denorm_done:
-	mtspr	SPRN_HSRR0,r11
-	mtcrf	0x80,r9
-	ld	r9,PACA_EXGEN+EX_R9(r13)
-	RESTORE_PPR_PACA(PACA_EXGEN, r10)
-BEGIN_FTR_SECTION
-	ld	r10,PACA_EXGEN+EX_CFAR(r13)
-	mtspr	SPRN_CFAR,r10
-END_FTR_SECTION_IFSET(CPU_FTR_CFAR)
-	ld	r10,PACA_EXGEN+EX_R10(r13)
-	ld	r11,PACA_EXGEN+EX_R11(r13)
-	ld	r12,PACA_EXGEN+EX_R12(r13)
-	ld	r13,PACA_EXGEN+EX_R13(r13)
-	HRFID
-	b	.
+	ori	r13,r13,1
+	SET_PACA(r13)
+	IDLE_STATE_ENTER_SEQ(PPC_WINKLE)
+	/* No return */
+4:
 #endif
-COMMON_HANDLER_END(denorm_assist)
-
-	/* moved from 0xe00 */
-__TRAMP_HANDLER_REAL_OOL_HV(h_data_storage, 0xe00)
-TRAMP_KVM_HV_SKIP(PACA_EXGEN, 0xe00)
-
-__TRAMP_HANDLER_REAL_OOL_HV(h_instr_storage, 0xe20)
-TRAMP_KVM_HV(PACA_EXGEN, 0xe20)
-
-__TRAMP_HANDLER_REAL_OOL_HV(emulation_assist, 0xe40)
-TRAMP_KVM_HV(PACA_EXGEN, 0xe40)
-
-__TRAMP_HANDLER_REAL_OOL_MASKABLE_HV(hmi_exception, 0xe60)
-TRAMP_KVM_HV(PACA_EXGEN, 0xe60)
-
-__TRAMP_HANDLER_REAL_OOL_MASKABLE_HV(h_doorbell, 0xe80)
-TRAMP_KVM_HV(PACA_EXGEN, 0xe80)
-
-__TRAMP_HANDLER_REAL_OOL_MASKABLE_HV(h_virt_irq, 0xea0)
-TRAMP_KVM_HV(PACA_EXGEN, 0xea0)
-
-	/* moved from 0xf00 */
-__TRAMP_HANDLER_REAL_OOL(performance_monitor, 0xf00)
-TRAMP_KVM(PACA_EXGEN, 0xf00)
-
-__TRAMP_HANDLER_REAL_OOL(altivec_unavailable, 0xf20)
-TRAMP_KVM(PACA_EXGEN, 0xf20)
-
-__TRAMP_HANDLER_REAL_OOL(vsx_unavailable, 0xf40)
-TRAMP_KVM(PACA_EXGEN, 0xf40)
-
-__TRAMP_HANDLER_REAL_OOL(facility_unavailable, 0xf60)
-TRAMP_KVM(PACA_EXGEN, 0xf60)
-
-__TRAMP_HANDLER_REAL_OOL_HV(h_facility_unavailable, 0xf80)
-TRAMP_KVM_HV(PACA_EXGEN, 0xf80)
-
-/*
- * An interrupt came in while soft-disabled. We set paca->irq_happened, then:
- * - If it was a decrementer interrupt, we bump the dec to max and and return.
- * - If it was a doorbell we return immediately since doorbells are edge
- *   triggered and won't automatically refire.
- * - If it was a HMI we return immediately since we handled it in realmode
- *   and it won't refire.
- * - else we hard disable and return.
- * This is called with r10 containing the value to OR to the paca field.
- */
-#define MASKED_INTERRUPT(_H)				\
-masked_##_H##interrupt:					\
-	std	r11,PACA_EXGEN+EX_R11(r13);		\
-	lbz	r11,PACAIRQHAPPENED(r13);		\
-	or	r11,r11,r10;				\
-	stb	r11,PACAIRQHAPPENED(r13);		\
-	cmpwi	r10,PACA_IRQ_DEC;			\
-	bne	1f;					\
-	lis	r10,0x7fff;				\
-	ori	r10,r10,0xffff;				\
-	mtspr	SPRN_DEC,r10;				\
-	b	2f;					\
-1:	cmpwi	r10,PACA_IRQ_DBELL;			\
-	beq	2f;					\
-	cmpwi	r10,PACA_IRQ_HMI;			\
-	beq	2f;					\
-	mfspr	r10,SPRN_##_H##SRR1;			\
-	rldicl	r10,r10,48,1; /* clear MSR_EE */	\
-	rotldi	r10,r10,16;				\
-	mtspr	SPRN_##_H##SRR1,r10;			\
-2:	mtcrf	0x80,r9;				\
-	ld	r9,PACA_EXGEN+EX_R9(r13);		\
-	ld	r10,PACA_EXGEN+EX_R10(r13);		\
-	ld	r11,PACA_EXGEN+EX_R11(r13);		\
-	GET_SCRATCH0(r13);				\
-	##_H##rfid;					\
-	b	.
-
-/*
- * Real mode exceptions actually use this too, but alternate
- * instruction code patches (which end up in the common .text area)
- * cannot reach these if they are put there.
- */
-USE_FIXED_SECTION(virt_trampolines)
-	MASKED_INTERRUPT()
-	MASKED_INTERRUPT(H)
-UNUSE_FIXED_SECTION(virt_trampolines)
-
-/*
- * Called from arch_local_irq_enable when an interrupt needs
- * to be resent. r3 contains 0x500, 0x900, 0xa00 or 0xe80 to indicate
- * which kind of interrupt. MSR:EE is already off. We generate a
- * stackframe like if a real interrupt had happened.
- *
- * Note: While MSR:EE is off, we need to make sure that _MSR
- * in the generated frame has EE set to 1 or the exception
- * handler will not properly re-enable them.
- */
-COMMON_HANDLER_BEGIN(__replay_interrupt)
-	/* We are going to jump to the exception common code which
-	 * will retrieve various register values from the PACA which
-	 * we don't give a damn about, so we don't bother storing them.
+	/*
+	 * Check if we are coming from hypervisor userspace. If yes then we
+	 * continue in host kernel in V mode to deliver the MC event.
 	 */
-	mfmsr	r12
-	mflr	r11
-	mfcr	r9
-	ori	r12,r12,MSR_EE
-	cmpwi	r3,0x900
-	beq	decrementer_common
-	cmpwi	r3,0x500
-	beq	hardware_interrupt_common
-BEGIN_FTR_SECTION
-	cmpwi	r3,0xe80
-	beq	h_doorbell_common
-	cmpwi	r3,0xea0
-	beq	h_virt_irq_common
-	cmpwi	r3,0xe60
-	beq	hmi_exception_common
-FTR_SECTION_ELSE
-	cmpwi	r3,0xa00
-	beq	doorbell_super_common
-ALT_FTR_SECTION_END_IFSET(CPU_FTR_HVMODE)
-	blr
-COMMON_HANDLER_END(__replay_interrupt)
-
-#ifdef CONFIG_PPC_PSERIES
-/*
- * Vectors for the FWNMI option.  Share common code.
- */
-TRAMP_HANDLER_BEGIN(system_reset_fwnmi)
-	SET_SCRATCH0(r13)		/* save r13 */
-	EXCEPTION_PROLOG_PSERIES(PACA_EXGEN, system_reset_common, EXC_STD,
-				 NOTEST, 0x100)
-TRAMP_HANDLER_END(system_reset_fwnmi)
-
-#endif /* CONFIG_PPC_PSERIES */
+	rldicl.	r11,r12,4,63		/* See if MC hit while in HV mode. */
+	beq	5f
+	andi.	r11,r12,MSR_PR		/* See if coming from user. */
+	bne	9f			/* continue in V mode if we are. */
 
+5:
 #ifdef CONFIG_KVM_BOOK3S_64_HANDLER
-TRAMP_HANDLER_BEGIN(kvmppc_skip_interrupt)
 	/*
-	 * Here all GPRs are unchanged from when the interrupt happened
-	 * except for r13, which is saved in SPRG_SCRATCH0.
+	 * We are coming from kernel context. Check if we are coming from
+	 * guest. If yes, then we can continue. We will fall through
+	 * do_kvm_200->kvmppc_interrupt to deliver the MC event to the guest.
+	 */
+	lbz	r11,HSTATE_IN_GUEST(r13)
+	cmpwi	r11,0			/* Check if coming from guest */
+	bne	9f			/* continue if we are. */
+#endif
+	/*
+	 * At this point we are not sure about what context we come from.
+	 * Queue up the MCE event and return from the interrupt.
+	 * But before that, check if this is an un-recoverable exception.
+	 * If yes, then stay on emergency stack and panic.
 	 */
-	mfspr	r13, SPRN_SRR0
-	addi	r13, r13, 4
-	mtspr	SPRN_SRR0, r13
-	GET_SCRATCH0(r13)
+	andi.	r11,r12,MSR_RI
+	bne	2f
+1:	mfspr	r11,SPRN_SRR0
+	ld	r10,PACAKBASE(r13)
+	LOAD_HANDLER(r10,unrecover_mce)
+	mtspr	SPRN_SRR0,r10
+	ld	r10,PACAKMSR(r13)
+	/*
+	 * We are going down. But there are chances that we might get hit by
+	 * another MCE during panic path and we may run into unstable state
+	 * with no way out. Hence, turn ME bit off while going down, so that
+	 * when another MCE is hit during panic path, system will checkstop
+	 * and hypervisor will get restarted cleanly by SP.
+	 */
+	li	r3,MSR_ME
+	andc	r10,r10,r3		/* Turn off MSR_ME */
+	mtspr	SPRN_SRR1,r10
 	rfid
 	b	.
-TRAMP_HANDLER_END(kvmppc_skip_interrupt)
-
-TRAMP_HANDLER_BEGIN(kvmppc_skip_Hinterrupt)
+2:
 	/*
-	 * Here all GPRs are unchanged from when the interrupt happened
-	 * except for r13, which is saved in SPRG_SCRATCH0.
+	 * Check if we have successfully handled/recovered from error, if not
+	 * then stay on emergency stack and panic.
 	 */
-	mfspr	r13, SPRN_HSRR0
-	addi	r13, r13, 4
-	mtspr	SPRN_HSRR0, r13
-	GET_SCRATCH0(r13)
-	hrfid
-	b	.
-TRAMP_HANDLER_END(kvmppc_skip_Hinterrupt)
-#endif
-
-/*
- * Ensure that any handlers that get invoked from the exception prologs
- * above are below the first 64KB (0x10000) of the kernel image because
- * the prologs assemble the addresses of these handlers using the
- * LOAD_HANDLER macro, which uses an ori instruction.
- */
+	ld	r3,RESULT(r1)	/* Load result */
+	cmpdi	r3,0		/* see if we handled MCE successfully */
+	beq	1b		/* if !handled then panic */
+	/*
+	 * Return from MC interrupt.
+	 * Queue up the MCE event so that we can log it later, while
+	 * returning from kernel or opal call.
+	 */
+	bl	machine_check_queue_event
+	MACHINE_CHECK_HANDLER_WINDUP
+	rfid
+9:
+	/* Deliver the machine check to host kernel in V mode. */
+	MACHINE_CHECK_HANDLER_WINDUP
+	b	machine_check_pSeries
+COMMON_HANDLER_END(machine_check_handle_early)
 
-/*** Common interrupt handlers ***/
+COMMON_HANDLER_BEGIN(unrecover_mce)
+	/* Invoke machine_check_exception to print MCE event and panic. */
+	addi	r3,r1,STACK_FRAME_OVERHEAD
+	bl	machine_check_exception
+	/*
+	 * We will not reach here. Even if we did, there is no way out. Call
+	 * unrecoverable_exception and die.
+	 */
+1:	addi	r3,r1,STACK_FRAME_OVERHEAD
+	bl	unrecoverable_exception
+	b	1b
+COMMON_HANDLER_END(unrecover_mce)
 
-COMMON_HANDLER(system_reset_common, 0x100, system_reset_exception)
-COMMON_HANDLER_ASYNC(hardware_interrupt_common, 0x500, do_IRQ)
-COMMON_HANDLER_ASYNC(decrementer_common, 0x900, timer_interrupt)
-COMMON_HANDLER(hdecrementer_common, 0x980, hdec_interrupt)
 
-#ifdef CONFIG_PPC_DOORBELL
-COMMON_HANDLER_ASYNC(doorbell_super_common, 0xa00, doorbell_exception)
-#else
-COMMON_HANDLER_ASYNC(doorbell_super_common, 0xa00, unknown_exception)
-#endif
-COMMON_HANDLER(trap_0b_common, 0xb00, unknown_exception)
-COMMON_HANDLER(single_step_common, 0xd00, single_step_exception)
-COMMON_HANDLER(trap_0e_common, 0xe00, unknown_exception)
-COMMON_HANDLER(emulation_assist_common, 0xe40, emulation_assist_interrupt)
-COMMON_HANDLER_ASYNC(hmi_exception_common, 0xe60, handle_hmi_exception)
-#ifdef CONFIG_PPC_DOORBELL
-COMMON_HANDLER_ASYNC(h_doorbell_common, 0xe80, doorbell_exception)
-#else
-COMMON_HANDLER_ASYNC(h_doorbell_common, 0xe80, unknown_exception)
-#endif
-COMMON_HANDLER_ASYNC(h_virt_irq_common, 0xea0, do_IRQ)
-COMMON_HANDLER_ASYNC(performance_monitor_common, 0xf00, performance_monitor_exception)
-COMMON_HANDLER(instruction_breakpoint_common, 0x1300, instruction_breakpoint_exception)
-COMMON_HANDLER_HV(denorm_common, 0x1500, unknown_exception)
-#ifdef CONFIG_ALTIVEC
-COMMON_HANDLER(altivec_assist_common, 0x1700, altivec_assist_exception)
-#else
-COMMON_HANDLER(altivec_assist_common, 0x1700, unknown_exception)
-#endif
+VECTOR_HANDLER_REAL(data_access, 0x300, 0x380)
+VECTOR_HANDLER_VIRT(data_access, 0x4300, 0x4380, 0x300)
+TRAMP_KVM_SKIP(PACA_EXGEN, 0x300)
 
+COMMON_HANDLER_BEGIN(data_access_common)
 	/*
-	 * Relocation-on interrupts: A subset of the interrupts can be delivered
-	 * with IR=1/DR=1, if AIL==2 and MSR.HV won't be changed by delivering
-	 * it.  Addresses are the same as the original interrupt addresses, but
-	 * offset by 0xc000000000004000.
-	 * It's impossible to receive interrupts below 0x300 via this mechanism.
-	 * KVM: None of these traps are from the guest ; anything that escalated
-	 * to HV=1 from HV=0 is delivered via real mode handlers.
+	 * Here r13 points to the paca, r9 contains the saved CR,
+	 * SRR0 and SRR1 are saved in r11 and r12,
+	 * r9 - r13 are saved in paca->exgen.
 	 */
+	mfspr	r10,SPRN_DAR
+	std	r10,PACA_EXGEN+EX_DAR(r13)
+	mfspr	r10,SPRN_DSISR
+	stw	r10,PACA_EXGEN+EX_DSISR(r13)
+	EXCEPTION_PROLOG_COMMON(0x300, PACA_EXGEN)
+	RECONCILE_IRQ_STATE(r10, r11)
+	ld	r12,_MSR(r1)
+	ld	r3,PACA_EXGEN+EX_DAR(r13)
+	lwz	r4,PACA_EXGEN+EX_DSISR(r13)
+	li	r5,0x300
+	std	r3,_DAR(r1)
+	std	r4,_DSISR(r1)
+BEGIN_MMU_FTR_SECTION
+	b	do_hash_page		/* Try to handle as hpte fault */
+MMU_FTR_SECTION_ELSE
+	b	handle_page_fault
+ALT_MMU_FTR_SECTION_END_IFCLR(MMU_FTR_TYPE_RADIX)
+COMMON_HANDLER_END(data_access_common)
 
+
+VECTOR_HANDLER_REAL_BEGIN(data_access_slb, 0x380, 0x400)
+	SET_SCRATCH0(r13)
+	EXCEPTION_PROLOG_0(PACA_EXSLB)
+	EXCEPTION_PROLOG_1(PACA_EXSLB, KVMTEST_PR, 0x380)
+	std	r3,PACA_EXSLB+EX_R3(r13)
+	mfspr	r3,SPRN_DAR
+	mfspr	r12,SPRN_SRR1
+#ifndef CONFIG_RELOCATABLE
+	b	slb_miss_realmode
+#else
 	/*
-	 * This uses the standard macro, since the original 0x300 vector
-	 * only has extra guff for STAB-based processors -- which never
-	 * come here.
+	 * We can't just use a direct branch to slb_miss_realmode
+	 * because the distance from here to there depends on where
+	 * the kernel ends up being put.
 	 */
-VECTOR_HANDLER_VIRT_NONE(0x4100, 0x4200)
-VECTOR_HANDLER_VIRT_NONE(0x4200, 0x4300)
-
-VECTOR_HANDLER_VIRT(data_access, 0x4300, 0x4380, 0x300)
+	mfctr	r11
+	ld	r10,PACAKBASE(r13)
+	LOAD_HANDLER(r10, slb_miss_realmode)
+	mtctr	r10
+	bctr
+#endif
+VECTOR_HANDLER_REAL_END(data_access_slb, 0x380, 0x400)
 
 VECTOR_HANDLER_VIRT_BEGIN(data_access_slb, 0x4380, 0x4400)
 	SET_SCRATCH0(r13)
@@ -830,26 +548,153 @@ VECTOR_HANDLER_VIRT_BEGIN(data_access_slb, 0x4380, 0x4400)
 	bctr
 #endif
 VECTOR_HANDLER_VIRT_END(data_access_slb, 0x4380, 0x4400)
+TRAMP_KVM_SKIP(PACA_EXSLB, 0x380)
 
+
+VECTOR_HANDLER_REAL(instruction_access, 0x400, 0x480)
 VECTOR_HANDLER_VIRT(instruction_access, 0x4400, 0x4480, 0x400)
+TRAMP_KVM(PACA_EXGEN, 0x400)
+COMMON_HANDLER_BEGIN(instruction_access_common)
+	EXCEPTION_PROLOG_COMMON(0x400, PACA_EXGEN)
+	RECONCILE_IRQ_STATE(r10, r11)
+	ld	r12,_MSR(r1)
+	ld	r3,_NIP(r1)
+	andis.	r4,r12,0x5820
+	li	r5,0x400
+	std	r3,_DAR(r1)
+	std	r4,_DSISR(r1)
+BEGIN_MMU_FTR_SECTION
+	b	do_hash_page		/* Try to handle as hpte fault */
+MMU_FTR_SECTION_ELSE
+	b	handle_page_fault
+ALT_MMU_FTR_SECTION_END_IFCLR(MMU_FTR_TYPE_RADIX)
+COMMON_HANDLER_END(instruction_access_common)
+
+
+VECTOR_HANDLER_REAL_BEGIN(instruction_access_slb, 0x480, 0x500)
+	SET_SCRATCH0(r13)
+	EXCEPTION_PROLOG_0(PACA_EXSLB)
+	EXCEPTION_PROLOG_1(PACA_EXSLB, KVMTEST_PR, 0x480)
+	std	r3,PACA_EXSLB+EX_R3(r13)
+	mfspr	r3,SPRN_SRR0		/* SRR0 is faulting address */
+	mfspr	r12,SPRN_SRR1
+#ifndef CONFIG_RELOCATABLE
+	b	slb_miss_realmode
+#else
+	mfctr	r11
+	ld	r10,PACAKBASE(r13)
+	LOAD_HANDLER(r10, slb_miss_realmode)
+	mtctr	r10
+	bctr
+#endif
+VECTOR_HANDLER_REAL_END(instruction_access_slb, 0x480, 0x500)
+VECTOR_HANDLER_VIRT_BEGIN(instruction_access_slb, 0x4480, 0x4500)
+	SET_SCRATCH0(r13)
+	EXCEPTION_PROLOG_0(PACA_EXSLB)
+	EXCEPTION_PROLOG_1(PACA_EXSLB, NOTEST, 0x480)
+	std	r3,PACA_EXSLB+EX_R3(r13)
+	mfspr	r3,SPRN_SRR0		/* SRR0 is faulting address */
+	mfspr	r12,SPRN_SRR1
+#ifndef CONFIG_RELOCATABLE
+	b	slb_miss_realmode
+#else
+	mfctr	r11
+	ld	r10,PACAKBASE(r13)
+	LOAD_HANDLER(r10, slb_miss_realmode)
+	mtctr	r10
+	bctr
+#endif
+VECTOR_HANDLER_VIRT_END(instruction_access_slb, 0x4480, 0x4500)
+TRAMP_KVM(PACA_EXSLB, 0x480)
+
+
+TRAMP_HANDLER_BEGIN(slb_miss_realmode)
+	/*
+	 * r13 points to the PACA, r9 contains the saved CR,
+	 * r12 contains the saved SRR1, SRR0 is still ready for return
+	 * r3 has the faulting address
+	 * r9 - r13 are saved in paca->exslb.
+	 * r3 is saved in paca->slb_r3
+	 * We assume we aren't going to take any exceptions during this
+	 * procedure.
+	 */
+	mflr	r10
+#ifdef CONFIG_RELOCATABLE
+	mtctr	r11
+#endif
+
+	stw	r9,PACA_EXSLB+EX_CCR(r13)	/* save CR in exc. frame */
+	std	r10,PACA_EXSLB+EX_LR(r13)	/* save LR */
+
+#ifdef CONFIG_PPC_STD_MMU_64
+BEGIN_MMU_FTR_SECTION
+	bl	slb_allocate_realmode
+END_MMU_FTR_SECTION_IFCLR(MMU_FTR_TYPE_RADIX)
+#endif
+	/* All done -- return from exception. */
+
+	ld	r10,PACA_EXSLB+EX_LR(r13)
+	ld	r3,PACA_EXSLB+EX_R3(r13)
+	lwz	r9,PACA_EXSLB+EX_CCR(r13)	/* get saved CR */
+
+	mtlr	r10
+	andi.	r10,r12,MSR_RI	/* check for unrecoverable exception */
+BEGIN_MMU_FTR_SECTION
+	beq-	2f
+FTR_SECTION_ELSE
+	b	2f
+ALT_MMU_FTR_SECTION_END_IFCLR(MMU_FTR_TYPE_RADIX)
+
+.machine	push
+.machine	"power4"
+	mtcrf	0x80,r9
+	mtcrf	0x01,r9		/* slb_allocate uses cr0 and cr7 */
+.machine	pop
+
+	RESTORE_PPR_PACA(PACA_EXSLB, r9)
+	ld	r9,PACA_EXSLB+EX_R9(r13)
+	ld	r10,PACA_EXSLB+EX_R10(r13)
+	ld	r11,PACA_EXSLB+EX_R11(r13)
+	ld	r12,PACA_EXSLB+EX_R12(r13)
+	ld	r13,PACA_EXSLB+EX_R13(r13)
+	rfid
+	b	.	/* prevent speculative execution */
 
-VECTOR_HANDLER_VIRT_BEGIN(instruction_access_slb, 0x4480, 0x4500)
-	SET_SCRATCH0(r13)
-	EXCEPTION_PROLOG_0(PACA_EXSLB)
-	EXCEPTION_PROLOG_1(PACA_EXSLB, NOTEST, 0x480)
-	std	r3,PACA_EXSLB+EX_R3(r13)
-	mfspr	r3,SPRN_SRR0		/* SRR0 is faulting address */
-	mfspr	r12,SPRN_SRR1
-#ifndef CONFIG_RELOCATABLE
-	b	slb_miss_realmode
-#else
-	mfctr	r11
+2:	mfspr	r11,SPRN_SRR0
 	ld	r10,PACAKBASE(r13)
-	LOAD_HANDLER(r10, slb_miss_realmode)
-	mtctr	r10
-	bctr
-#endif
-VECTOR_HANDLER_VIRT_END(instruction_access_slb, 0x4480, 0x4500)
+	LOAD_HANDLER(r10,unrecov_slb)
+	mtspr	SPRN_SRR0,r10
+	ld	r10,PACAKMSR(r13)
+	mtspr	SPRN_SRR1,r10
+	rfid
+	b	.
+TRAMP_HANDLER_END(slb_miss_realmode)
+
+COMMON_HANDLER_BEGIN(unrecov_slb)
+	EXCEPTION_PROLOG_COMMON(0x4100, PACA_EXSLB)
+	RECONCILE_IRQ_STATE(r10, r11)
+	bl	save_nvgprs
+1:	addi	r3,r1,STACK_FRAME_OVERHEAD
+	bl	unrecoverable_exception
+	b	1b
+COMMON_HANDLER_END(unrecov_slb)
+
+
+VECTOR_HANDLER_REAL_BEGIN(hardware_interrupt, 0x500, 0x600)
+	.globl hardware_interrupt_hv;
+hardware_interrupt_hv:
+	BEGIN_FTR_SECTION
+		_MASKABLE_EXCEPTION_PSERIES(0x500, hardware_interrupt_common,
+					    EXC_HV, SOFTEN_TEST_HV)
+do_kvm_H0x500:
+		KVM_HANDLER(PACA_EXGEN, EXC_HV, 0x500)
+	FTR_SECTION_ELSE
+		_MASKABLE_EXCEPTION_PSERIES(0x500, FIXED_SECTION_REL_ADDR(text, hardware_interrupt_common),
+					    EXC_STD, SOFTEN_TEST_PR)
+do_kvm_0x500:
+		KVM_HANDLER(PACA_EXGEN, EXC_STD, 0x500)
+	ALT_FTR_SECTION_END_IFSET(CPU_FTR_HVMODE | CPU_FTR_ARCH_206)
+VECTOR_HANDLER_REAL_END(hardware_interrupt, 0x500, 0x600)
 
 VECTOR_HANDLER_VIRT_BEGIN(hardware_interrupt, 0x4500, 0x4600)
 	.globl hardware_interrupt_relon_hv;
@@ -861,102 +706,207 @@ hardware_interrupt_relon_hv:
 	ALT_FTR_SECTION_END_IFSET(CPU_FTR_HVMODE)
 VECTOR_HANDLER_VIRT_END(hardware_interrupt, 0x4500, 0x4600)
 
-VECTOR_HANDLER_VIRT(alignment, 0x4600, 0x4700, 0x600)
-VECTOR_HANDLER_VIRT(program_check, 0x4700, 0x4800, 0x700)
-VECTOR_HANDLER_VIRT(fp_unavailable, 0x4800, 0x4900, 0x800)
-VECTOR_HANDLER_VIRT_MASKABLE(decrementer, 0x4900, 0x4980, 0x900)
-VECTOR_HANDLER_VIRT_HV(hdecrementer, 0x4980, 0x4a00, 0x980)
-VECTOR_HANDLER_VIRT_MASKABLE(doorbell_super, 0x4a00, 0x4b00, 0xa00)
-VECTOR_HANDLER_VIRT(trap_0b, 0x4b00, 0x4c00, 0xb00)
+COMMON_HANDLER_ASYNC(hardware_interrupt_common, 0x500, do_IRQ)
 
-VECTOR_HANDLER_VIRT_BEGIN(system_call, 0x4c00, 0x4d00)
-	HMT_MEDIUM
-	SYSCALL_PSERIES_1
-	SYSCALL_PSERIES_2_DIRECT
-	SYSCALL_PSERIES_3
-VECTOR_HANDLER_VIRT_END(system_call, 0x4c00, 0x4d00)
 
-VECTOR_HANDLER_VIRT(single_step, 0x4d00, 0x4e00, 0xd00)
+VECTOR_HANDLER_REAL(alignment, 0x600, 0x700)
+VECTOR_HANDLER_VIRT(alignment, 0x4600, 0x4700, 0x600)
+TRAMP_KVM(PACA_EXGEN, 0x600)
+COMMON_HANDLER_BEGIN(alignment_common)
+	mfspr	r10,SPRN_DAR
+	std	r10,PACA_EXGEN+EX_DAR(r13)
+	mfspr	r10,SPRN_DSISR
+	stw	r10,PACA_EXGEN+EX_DSISR(r13)
+	EXCEPTION_PROLOG_COMMON(0x600, PACA_EXGEN)
+	ld	r3,PACA_EXGEN+EX_DAR(r13)
+	lwz	r4,PACA_EXGEN+EX_DSISR(r13)
+	std	r3,_DAR(r1)
+	std	r4,_DSISR(r1)
+	bl	save_nvgprs
+	RECONCILE_IRQ_STATE(r10, r11)
+	addi	r3,r1,STACK_FRAME_OVERHEAD
+	bl	alignment_exception
+	b	ret_from_except
+COMMON_HANDLER_END(alignment_common)
 
-VECTOR_HANDLER_VIRT_BEGIN(unused, 0x4e00, 0x4e20)
-	b       .       /* Can't happen, see v2.07 Book III-S section 6.5 */
-VECTOR_HANDLER_VIRT_END(unused, 0x4e00, 0x4e20)
 
-VECTOR_HANDLER_VIRT_BEGIN(unused, 0x4e20, 0x4e40)
-	b       .       /* Can't happen, see v2.07 Book III-S section 6.5 */
-VECTOR_HANDLER_VIRT_END(unused, 0x4e20, 0x4e40)
+VECTOR_HANDLER_REAL(program_check, 0x700, 0x800)
+VECTOR_HANDLER_VIRT(program_check, 0x4700, 0x4800, 0x700)
+TRAMP_KVM(PACA_EXGEN, 0x700)
+COMMON_HANDLER_BEGIN(program_check_common)
+	EXCEPTION_PROLOG_COMMON(0x700, PACA_EXGEN)
+	bl	save_nvgprs
+	RECONCILE_IRQ_STATE(r10, r11)
+	addi	r3,r1,STACK_FRAME_OVERHEAD
+	bl	program_check_exception
+	b	ret_from_except
+COMMON_HANDLER_END(program_check_common)
 
-__VECTOR_HANDLER_VIRT_OOL_HV(emulation_assist, 0x4e40, 0x4e60)
 
-VECTOR_HANDLER_VIRT_BEGIN(unused, 0x4e60, 0x4e80)
-	b       .       /* Can't happen, see v2.07 Book III-S section 6.5 */
-VECTOR_HANDLER_VIRT_END(unused, 0x4e60, 0x4e80)
+VECTOR_HANDLER_REAL(fp_unavailable, 0x800, 0x900)
+VECTOR_HANDLER_VIRT(fp_unavailable, 0x4800, 0x4900, 0x800)
+TRAMP_KVM(PACA_EXGEN, 0x800)
+COMMON_HANDLER_BEGIN(fp_unavailable_common)
+	EXCEPTION_PROLOG_COMMON(0x800, PACA_EXGEN)
+	bne	1f			/* if from user, just load it up */
+	bl	save_nvgprs
+	RECONCILE_IRQ_STATE(r10, r11)
+	addi	r3,r1,STACK_FRAME_OVERHEAD
+	bl	kernel_fp_unavailable_exception
+	BUG_OPCODE
+1:
+#ifdef CONFIG_PPC_TRANSACTIONAL_MEM
+BEGIN_FTR_SECTION
+	/* Test if 2 TM state bits are zero.  If non-zero (ie. userspace was in
+	 * transaction), go do TM stuff
+	 */
+	rldicl.	r0, r12, (64-MSR_TS_LG), (64-2)
+	bne-	2f
+END_FTR_SECTION_IFSET(CPU_FTR_TM)
+#endif
+	bl	load_up_fpu
+	b	fast_exception_return
+#ifdef CONFIG_PPC_TRANSACTIONAL_MEM
+2:	/* User process was in a transaction */
+	bl	save_nvgprs
+	RECONCILE_IRQ_STATE(r10, r11)
+	addi	r3,r1,STACK_FRAME_OVERHEAD
+	bl	fp_unavailable_tm
+	b	ret_from_except
+#endif
+COMMON_HANDLER_END(fp_unavailable_common)
 
-__VECTOR_HANDLER_VIRT_OOL_MASKABLE_HV(h_doorbell, 0x4e80, 0x4ea0)
 
-__VECTOR_HANDLER_VIRT_OOL_MASKABLE_HV(h_virt_irq, 0x4ea0, 0x4ec0)
+VECTOR_HANDLER_REAL_MASKABLE(decrementer, 0x900, 0x980)
+VECTOR_HANDLER_VIRT_MASKABLE(decrementer, 0x4900, 0x4980, 0x900)
+TRAMP_KVM(PACA_EXGEN, 0x900)
+COMMON_HANDLER_ASYNC(decrementer_common, 0x900, timer_interrupt)
 
-VECTOR_HANDLER_VIRT_NONE(0x4ec0, 0x4f00)
 
-__VECTOR_HANDLER_VIRT_OOL(performance_monitor, 0x4f00, 0x4f20)
+VECTOR_HANDLER_REAL_HV(hdecrementer, 0x980, 0xa00)
+VECTOR_HANDLER_VIRT_HV(hdecrementer, 0x4980, 0x4a00, 0x980)
+TRAMP_KVM_HV(PACA_EXGEN, 0x980)
+COMMON_HANDLER(hdecrementer_common, 0x980, hdec_interrupt)
 
-__VECTOR_HANDLER_VIRT_OOL(altivec_unavailable, 0x4f20, 0x4f40)
 
-__VECTOR_HANDLER_VIRT_OOL(vsx_unavailable, 0x4f40, 0x4f60)
+VECTOR_HANDLER_REAL_MASKABLE(doorbell_super, 0xa00, 0xb00)
+VECTOR_HANDLER_VIRT_MASKABLE(doorbell_super, 0x4a00, 0x4b00, 0xa00)
+TRAMP_KVM(PACA_EXGEN, 0xa00)
+#ifdef CONFIG_PPC_DOORBELL
+COMMON_HANDLER_ASYNC(doorbell_super_common, 0xa00, doorbell_exception)
+#else
+COMMON_HANDLER_ASYNC(doorbell_super_common, 0xa00, unknown_exception)
+#endif
 
-__VECTOR_HANDLER_VIRT_OOL(facility_unavailable, 0x4f60, 0x4f80)
 
-__VECTOR_HANDLER_VIRT_OOL_HV(h_facility_unavailable, 0x4f80, 0x4fa0)
+VECTOR_HANDLER_REAL(trap_0b, 0xb00, 0xc00)
+VECTOR_HANDLER_VIRT(trap_0b, 0x4b00, 0x4c00, 0xb00)
+COMMON_HANDLER(trap_0b_common, 0xb00, unknown_exception)
+TRAMP_KVM(PACA_EXGEN, 0xb00)
 
-VECTOR_HANDLER_VIRT_NONE(0x4fa0, 0x5200)
 
-VECTOR_HANDLER_VIRT_NONE(0x5200, 0x5300)
+/* Syscall routine is used twice, in reloc-off and reloc-on paths */
+#define SYSCALL_PSERIES_1 					\
+BEGIN_FTR_SECTION						\
+	cmpdi	r0,0x1ebe ; 					\
+	beq-	1f ;						\
+END_FTR_SECTION_IFSET(CPU_FTR_REAL_LE)				\
+	mr	r9,r13 ;					\
+	GET_PACA(r13) ;						\
+	mfspr	r11,SPRN_SRR0 ;					\
+0:
 
-VECTOR_HANDLER_VIRT(instruction_breakpoint, 0x5300, 0x5400, 0x1300)
+#define SYSCALL_PSERIES_2_RFID 					\
+	mfspr	r12,SPRN_SRR1 ;					\
+	ld	r10,PACAKBASE(r13) ; 				\
+	LOAD_HANDLER(r10, system_call_common) ; 		\
+	mtspr	SPRN_SRR0,r10 ; 				\
+	ld	r10,PACAKMSR(r13) ;				\
+	mtspr	SPRN_SRR1,r10 ; 				\
+	rfid ; 							\
+	b	. ;	/* prevent speculative execution */
 
-#ifdef CONFIG_PPC_DENORMALISATION
-VECTOR_HANDLER_VIRT_BEGIN(denorm_exception, 0x5500, 0x5600)
-	b	exc_0x1500_denorm_exception_hv
-VECTOR_HANDLER_VIRT_END(denorm_exception, 0x5500, 0x5600)
+#define SYSCALL_PSERIES_3					\
+	/* Fast LE/BE switch system call */			\
+1:	mfspr	r12,SPRN_SRR1 ;					\
+	xori	r12,r12,MSR_LE ;				\
+	mtspr	SPRN_SRR1,r12 ;					\
+	rfid ;		/* return to userspace */		\
+	b	. ;	/* prevent speculative execution */
+
+#if defined(CONFIG_RELOCATABLE)
+	/*
+	 * We can't branch directly so we do it via the CTR which
+	 * is volatile across system calls.
+	 */
+#define SYSCALL_PSERIES_2_DIRECT				\
+	mflr	r10 ;						\
+	ld	r12,PACAKBASE(r13) ; 				\
+	LOAD_HANDLER(r12, system_call_common) ;			\
+	mtctr	r12 ;						\
+	mfspr	r12,SPRN_SRR1 ;					\
+	/* Re-use of r13... No spare regs to do this */	\
+	li	r13,MSR_RI ;					\
+	mtmsrd 	r13,1 ;						\
+	GET_PACA(r13) ;	/* get r13 back */			\
+	bctr ;
 #else
-VECTOR_HANDLER_VIRT_NONE(0x5500, 0x5600)
+	/* We can branch directly */
+#define SYSCALL_PSERIES_2_DIRECT				\
+	mfspr	r12,SPRN_SRR1 ;					\
+	li	r10,MSR_RI ;					\
+	mtmsrd 	r10,1 ;			/* Set RI (EE=0) */	\
+	b	system_call_common ;
 #endif
 
-VECTOR_HANDLER_VIRT_NONE(0x5600, 0x5700)
-
-VECTOR_HANDLER_VIRT(altivec_assist, 0x5700, 0x5800, 0x1700)
+VECTOR_HANDLER_REAL_BEGIN(system_call, 0xc00, 0xd00)
+	 /*
+	  * If CONFIG_KVM_BOOK3S_64_HANDLER is set, save the PPR (on systems
+	  * that support it) before changing to HMT_MEDIUM. That allows the KVM
+	  * code to save that value into the guest state (it is the guest's PPR
+	  * value). Otherwise just change to HMT_MEDIUM as userspace has
+	  * already saved the PPR.
+	  */
+#ifdef CONFIG_KVM_BOOK3S_64_HANDLER
+	SET_SCRATCH0(r13)
+	GET_PACA(r13)
+	std	r9,PACA_EXGEN+EX_R9(r13)
+	OPT_GET_SPR(r9, SPRN_PPR, CPU_FTR_HAS_PPR);
+	HMT_MEDIUM;
+	std	r10,PACA_EXGEN+EX_R10(r13)
+	OPT_SAVE_REG_TO_PACA(PACA_EXGEN+EX_PPR, r9, CPU_FTR_HAS_PPR);
+	mfcr	r9
+	KVMTEST_PR(0xc00)
+	GET_SCRATCH0(r13)
+#else
+	HMT_MEDIUM;
+#endif
+	SYSCALL_PSERIES_1
+	SYSCALL_PSERIES_2_RFID
+	SYSCALL_PSERIES_3
+VECTOR_HANDLER_REAL_END(system_call, 0xc00, 0xd00)
 
-VECTOR_HANDLER_VIRT_NONE(0x5800, 0x5900)
+VECTOR_HANDLER_VIRT_BEGIN(system_call, 0x4c00, 0x4d00)
+	HMT_MEDIUM
+	SYSCALL_PSERIES_1
+	SYSCALL_PSERIES_2_DIRECT
+	SYSCALL_PSERIES_3
+VECTOR_HANDLER_VIRT_END(system_call, 0x4c00, 0x4d00)
 
-TRAMP_HANDLER_BEGIN(ppc64_runlatch_on_trampoline)
-	b	__ppc64_runlatch_on
-TRAMP_HANDLER_END(ppc64_runlatch_on_trampoline)
+TRAMP_KVM(PACA_EXGEN, 0xc00)
 
-/*
- * Here r13 points to the paca, r9 contains the saved CR,
- * SRR0 and SRR1 are saved in r11 and r12,
- * r9 - r13 are saved in paca->exgen.
- */
-COMMON_HANDLER_BEGIN(data_access_common)
-	mfspr	r10,SPRN_DAR
-	std	r10,PACA_EXGEN+EX_DAR(r13)
-	mfspr	r10,SPRN_DSISR
-	stw	r10,PACA_EXGEN+EX_DSISR(r13)
-	EXCEPTION_PROLOG_COMMON(0x300, PACA_EXGEN)
-	RECONCILE_IRQ_STATE(r10, r11)
-	ld	r12,_MSR(r1)
-	ld	r3,PACA_EXGEN+EX_DAR(r13)
-	lwz	r4,PACA_EXGEN+EX_DSISR(r13)
-	li	r5,0x300
-	std	r3,_DAR(r1)
-	std	r4,_DSISR(r1)
-BEGIN_MMU_FTR_SECTION
-	b	do_hash_page		/* Try to handle as hpte fault */
-MMU_FTR_SECTION_ELSE
-	b	handle_page_fault
-ALT_MMU_FTR_SECTION_END_IFCLR(MMU_FTR_TYPE_RADIX)
-COMMON_HANDLER_END(data_access_common)
 
+VECTOR_HANDLER_REAL(single_step, 0xd00, 0xe00)
+VECTOR_HANDLER_VIRT(single_step, 0x4d00, 0x4e00, 0xd00)
+TRAMP_KVM(PACA_EXGEN, 0xd00)
+COMMON_HANDLER(single_step_common, 0xd00, single_step_exception)
+
+__VECTOR_HANDLER_REAL_OOL_HV(h_data_storage, 0xe00, 0xe20)
+__TRAMP_HANDLER_REAL_OOL_HV(h_data_storage, 0xe00)
+VECTOR_HANDLER_VIRT_BEGIN(unused, 0x4e00, 0x4e20)
+	b       .       /* Can't happen, see v2.07 Book III-S section 6.5 */
+VECTOR_HANDLER_VIRT_END(unused, 0x4e00, 0x4e20)
+TRAMP_KVM_HV_SKIP(PACA_EXGEN, 0xe00)
 COMMON_HANDLER_BEGIN(h_data_storage_common)
 	mfspr   r10,SPRN_HDAR
 	std     r10,PACA_EXGEN+EX_DAR(r13)
@@ -969,106 +919,121 @@ COMMON_HANDLER_BEGIN(h_data_storage_common)
 	bl      unknown_exception
 	b       ret_from_except
 COMMON_HANDLER_END(h_data_storage_common)
+COMMON_HANDLER(trap_0e_common, 0xe00, unknown_exception)
 
-COMMON_HANDLER_BEGIN(instruction_access_common)
-	EXCEPTION_PROLOG_COMMON(0x400, PACA_EXGEN)
-	RECONCILE_IRQ_STATE(r10, r11)
-	ld	r12,_MSR(r1)
-	ld	r3,_NIP(r1)
-	andis.	r4,r12,0x5820
-	li	r5,0x400
-	std	r3,_DAR(r1)
-	std	r4,_DSISR(r1)
-BEGIN_MMU_FTR_SECTION
-	b	do_hash_page		/* Try to handle as hpte fault */
-MMU_FTR_SECTION_ELSE
-	b	handle_page_fault
-ALT_MMU_FTR_SECTION_END_IFCLR(MMU_FTR_TYPE_RADIX)
-COMMON_HANDLER_END(instruction_access_common)
 
+__VECTOR_HANDLER_REAL_OOL_HV(h_instr_storage, 0xe20, 0xe40)
+__TRAMP_HANDLER_REAL_OOL_HV(h_instr_storage, 0xe20)
+VECTOR_HANDLER_VIRT_BEGIN(unused, 0x4e20, 0x4e40)
+	b       .       /* Can't happen, see v2.07 Book III-S section 6.5 */
+VECTOR_HANDLER_VIRT_END(unused, 0x4e20, 0x4e40)
+TRAMP_KVM_HV(PACA_EXGEN, 0xe20)
 COMMON_HANDLER(h_instr_storage_common, 0xe20, unknown_exception)
 
-	/*
-	 * Machine check is different because we use a different
-	 * save area: PACA_EXMC instead of PACA_EXGEN.
-	 */
-COMMON_HANDLER_BEGIN(machine_check_common)
-	mfspr	r10,SPRN_DAR
-	std	r10,PACA_EXMC+EX_DAR(r13)
-	mfspr	r10,SPRN_DSISR
-	stw	r10,PACA_EXMC+EX_DSISR(r13)
-	EXCEPTION_PROLOG_COMMON(0x200, PACA_EXMC)
-	FINISH_NAP
-	RECONCILE_IRQ_STATE(r10, r11)
-	ld	r3,PACA_EXMC+EX_DAR(r13)
-	lwz	r4,PACA_EXMC+EX_DSISR(r13)
-	/* Enable MSR_RI when finished with PACA_EXMC */
-	li	r10,MSR_RI
-	mtmsrd 	r10,1
-	std	r3,_DAR(r1)
-	std	r4,_DSISR(r1)
-	bl	save_nvgprs
-	addi	r3,r1,STACK_FRAME_OVERHEAD
-	bl	machine_check_exception
-	b	ret_from_except
-COMMON_HANDLER_END(machine_check_common)
 
-COMMON_HANDLER_BEGIN(alignment_common)
-	mfspr	r10,SPRN_DAR
-	std	r10,PACA_EXGEN+EX_DAR(r13)
-	mfspr	r10,SPRN_DSISR
-	stw	r10,PACA_EXGEN+EX_DSISR(r13)
-	EXCEPTION_PROLOG_COMMON(0x600, PACA_EXGEN)
-	ld	r3,PACA_EXGEN+EX_DAR(r13)
-	lwz	r4,PACA_EXGEN+EX_DSISR(r13)
-	std	r3,_DAR(r1)
-	std	r4,_DSISR(r1)
-	bl	save_nvgprs
-	RECONCILE_IRQ_STATE(r10, r11)
-	addi	r3,r1,STACK_FRAME_OVERHEAD
-	bl	alignment_exception
-	b	ret_from_except
-COMMON_HANDLER_END(alignment_common)
+__VECTOR_HANDLER_REAL_OOL_HV(emulation_assist, 0xe40, 0xe60)
+__TRAMP_HANDLER_REAL_OOL_HV(emulation_assist, 0xe40)
+__VECTOR_HANDLER_VIRT_OOL_HV(emulation_assist, 0x4e40, 0x4e60)
+__TRAMP_HANDLER_VIRT_OOL_HV(emulation_assist, 0xe40)
+TRAMP_KVM_HV(PACA_EXGEN, 0xe40)
+COMMON_HANDLER(emulation_assist_common, 0xe40, emulation_assist_interrupt)
 
-COMMON_HANDLER_BEGIN(program_check_common)
-	EXCEPTION_PROLOG_COMMON(0x700, PACA_EXGEN)
-	bl	save_nvgprs
-	RECONCILE_IRQ_STATE(r10, r11)
-	addi	r3,r1,STACK_FRAME_OVERHEAD
-	bl	program_check_exception
-	b	ret_from_except
-COMMON_HANDLER_END(program_check_common)
 
-COMMON_HANDLER_BEGIN(fp_unavailable_common)
-	EXCEPTION_PROLOG_COMMON(0x800, PACA_EXGEN)
-	bne	1f			/* if from user, just load it up */
-	bl	save_nvgprs
-	RECONCILE_IRQ_STATE(r10, r11)
+__VECTOR_HANDLER_REAL_OOL_HV_DIRECT(hmi_exception, 0xe60, 0xe80, hmi_exception_early)
+__TRAMP_HANDLER_REAL_OOL_MASKABLE_HV(hmi_exception, 0xe60)
+VECTOR_HANDLER_VIRT_BEGIN(unused, 0x4e60, 0x4e80)
+	b       .       /* Can't happen, see v2.07 Book III-S section 6.5 */
+VECTOR_HANDLER_VIRT_END(unused, 0x4e60, 0x4e80)
+TRAMP_KVM_HV(PACA_EXGEN, 0xe60)
+COMMON_HANDLER_BEGIN(hmi_exception_early)
+	EXCEPTION_PROLOG_1(PACA_EXGEN, KVMTEST_HV, 0xe60)
+	mr	r10,r1			/* Save r1			*/
+	ld	r1,PACAEMERGSP(r13)	/* Use emergency stack		*/
+	subi	r1,r1,INT_FRAME_SIZE	/* alloc stack frame		*/
+	std	r9,_CCR(r1)		/* save CR in stackframe	*/
+	mfspr	r11,SPRN_HSRR0		/* Save HSRR0 */
+	std	r11,_NIP(r1)		/* save HSRR0 in stackframe	*/
+	mfspr	r12,SPRN_HSRR1		/* Save HSRR1 */
+	std	r12,_MSR(r1)		/* save HSRR1 in stackframe	*/
+	std	r10,0(r1)		/* make stack chain pointer	*/
+	std	r0,GPR0(r1)		/* save r0 in stackframe	*/
+	std	r10,GPR1(r1)		/* save r1 in stackframe	*/
+	EXCEPTION_PROLOG_COMMON_2(PACA_EXGEN)
+	EXCEPTION_PROLOG_COMMON_3(0xe60)
 	addi	r3,r1,STACK_FRAME_OVERHEAD
-	bl	kernel_fp_unavailable_exception
-	BUG_OPCODE
-1:
-#ifdef CONFIG_PPC_TRANSACTIONAL_MEM
-BEGIN_FTR_SECTION
-	/* Test if 2 TM state bits are zero.  If non-zero (ie. userspace was in
-	 * transaction), go do TM stuff
+	bl	hmi_exception_realmode
+	/* Windup the stack. */
+	/* Move original HSRR0 and HSRR1 into the respective regs */
+	ld	r9,_MSR(r1)
+	mtspr	SPRN_HSRR1,r9
+	ld	r3,_NIP(r1)
+	mtspr	SPRN_HSRR0,r3
+	ld	r9,_CTR(r1)
+	mtctr	r9
+	ld	r9,_XER(r1)
+	mtxer	r9
+	ld	r9,_LINK(r1)
+	mtlr	r9
+	REST_GPR(0, r1)
+	REST_8GPRS(2, r1)
+	REST_GPR(10, r1)
+	ld	r11,_CCR(r1)
+	mtcr	r11
+	REST_GPR(11, r1)
+	REST_2GPRS(12, r1)
+	/* restore original r1. */
+	ld	r1,GPR1(r1)
+
+	/*
+	 * Go to virtual mode and pull the HMI event information from
+	 * firmware.
 	 */
-	rldicl.	r0, r12, (64-MSR_TS_LG), (64-2)
-	bne-	2f
-END_FTR_SECTION_IFSET(CPU_FTR_TM)
-#endif
-	bl	load_up_fpu
-	b	fast_exception_return
-#ifdef CONFIG_PPC_TRANSACTIONAL_MEM
-2:	/* User process was in a transaction */
-	bl	save_nvgprs
-	RECONCILE_IRQ_STATE(r10, r11)
-	addi	r3,r1,STACK_FRAME_OVERHEAD
-	bl	fp_unavailable_tm
-	b	ret_from_except
+	.globl hmi_exception_after_realmode
+hmi_exception_after_realmode:
+	SET_SCRATCH0(r13)
+	EXCEPTION_PROLOG_0(PACA_EXGEN)
+	b	tramp_real_hmi_exception
+COMMON_HANDLER_END(hmi_exception_early)
+COMMON_HANDLER_ASYNC(hmi_exception_common, 0xe60, handle_hmi_exception)
+
+
+__VECTOR_HANDLER_REAL_OOL_MASKABLE_HV(h_doorbell, 0xe80, 0xea0)
+__TRAMP_HANDLER_REAL_OOL_MASKABLE_HV(h_doorbell, 0xe80)
+__VECTOR_HANDLER_VIRT_OOL_MASKABLE_HV(h_doorbell, 0x4e80, 0x4ea0)
+__TRAMP_HANDLER_VIRT_OOL_MASKABLE_HV(h_doorbell, 0xe80)
+TRAMP_KVM_HV(PACA_EXGEN, 0xe80)
+#ifdef CONFIG_PPC_DOORBELL
+COMMON_HANDLER_ASYNC(h_doorbell_common, 0xe80, doorbell_exception)
+#else
+COMMON_HANDLER_ASYNC(h_doorbell_common, 0xe80, unknown_exception)
 #endif
-COMMON_HANDLER_END(fp_unavailable_common)
 
+
+__VECTOR_HANDLER_REAL_OOL_MASKABLE_HV(h_virt_irq, 0xea0, 0xec0)
+__TRAMP_HANDLER_REAL_OOL_MASKABLE_HV(h_virt_irq, 0xea0)
+__VECTOR_HANDLER_VIRT_OOL_MASKABLE_HV(h_virt_irq, 0x4ea0, 0x4ec0)
+__TRAMP_HANDLER_VIRT_OOL_MASKABLE_HV(h_virt_irq, 0xea0)
+TRAMP_KVM_HV(PACA_EXGEN, 0xea0)
+COMMON_HANDLER_ASYNC(h_virt_irq_common, 0xea0, do_IRQ)
+
+
+VECTOR_HANDLER_REAL_NONE(0xec0, 0xf00)
+VECTOR_HANDLER_VIRT_NONE(0x4ec0, 0x4f00)
+
+
+__VECTOR_HANDLER_REAL_OOL(performance_monitor, 0xf00, 0xf20)
+__TRAMP_HANDLER_REAL_OOL(performance_monitor, 0xf00)
+__VECTOR_HANDLER_VIRT_OOL(performance_monitor, 0x4f00, 0x4f20)
+__TRAMP_HANDLER_VIRT_OOL(performance_monitor, 0xf00)
+TRAMP_KVM(PACA_EXGEN, 0xf00)
+COMMON_HANDLER_ASYNC(performance_monitor_common, 0xf00, performance_monitor_exception)
+
+
+__VECTOR_HANDLER_REAL_OOL(altivec_unavailable, 0xf20, 0xf40)
+__TRAMP_HANDLER_REAL_OOL(altivec_unavailable, 0xf20)
+__VECTOR_HANDLER_VIRT_OOL(altivec_unavailable, 0x4f20, 0x4f40)
+__TRAMP_HANDLER_VIRT_OOL(altivec_unavailable, 0xf20)
+TRAMP_KVM(PACA_EXGEN, 0xf20)
 COMMON_HANDLER_BEGIN(altivec_unavailable_common)
 	EXCEPTION_PROLOG_COMMON(0xf20, PACA_EXGEN)
 #ifdef CONFIG_ALTIVEC
@@ -1103,6 +1068,13 @@ END_FTR_SECTION_IFSET(CPU_FTR_ALTIVEC)
 	b	ret_from_except
 COMMON_HANDLER_END(altivec_unavailable_common)
 
+
+
+__VECTOR_HANDLER_REAL_OOL(vsx_unavailable, 0xf40, 0xf60)
+__TRAMP_HANDLER_REAL_OOL(vsx_unavailable, 0xf40)
+__VECTOR_HANDLER_VIRT_OOL(vsx_unavailable, 0x4f40, 0x4f60)
+__TRAMP_HANDLER_VIRT_OOL(vsx_unavailable, 0xf40)
+TRAMP_KVM(PACA_EXGEN, 0xf40)
 COMMON_HANDLER_BEGIN(vsx_unavailable_common)
 	EXCEPTION_PROLOG_COMMON(0xf40, PACA_EXGEN)
 #ifdef CONFIG_VSX
@@ -1136,334 +1108,296 @@ END_FTR_SECTION_IFSET(CPU_FTR_VSX)
 	b	ret_from_except
 COMMON_HANDLER_END(vsx_unavailable_common)
 
-	/* Equivalents to the above handlers for relocation-on interrupt vectors */
-__TRAMP_HANDLER_VIRT_OOL_HV(emulation_assist, 0xe40)
-__TRAMP_HANDLER_VIRT_OOL_MASKABLE_HV(h_doorbell, 0xe80)
-__TRAMP_HANDLER_VIRT_OOL_MASKABLE_HV(h_virt_irq, 0xea0)
-__TRAMP_HANDLER_VIRT_OOL(performance_monitor, 0xf00)
-__TRAMP_HANDLER_VIRT_OOL(altivec_unavailable, 0xf20)
-__TRAMP_HANDLER_VIRT_OOL(vsx_unavailable, 0xf40)
-__TRAMP_HANDLER_VIRT_OOL(facility_unavailable, 0xf60)
-__TRAMP_HANDLER_VIRT_OOL_HV(h_facility_unavailable, 0xf80)
-
-USE_FIXED_SECTION(virt_trampolines)
-	/*
-	 * The __end_interrupts marker must be past the out-of-line (OOL)
-	 * handlers, so that they are copied to real address 0x100 when running
-	 * a relocatable kernel. This ensures they can be reached from the short
-	 * trampoline handlers (like 0x4f00, 0x4f20, etc.) which branch
-	 * directly, without using LOAD_HANDLER().
-	 */
-	.align	7
-	.globl	__end_interrupts
-__end_interrupts:
-UNUSE_FIXED_SECTION(virt_trampolines)
 
+__VECTOR_HANDLER_REAL_OOL(facility_unavailable, 0xf60, 0xf80)
+__TRAMP_HANDLER_REAL_OOL(facility_unavailable, 0xf60)
+__VECTOR_HANDLER_VIRT_OOL(facility_unavailable, 0x4f60, 0x4f80)
+__TRAMP_HANDLER_VIRT_OOL(facility_unavailable, 0xf60)
+TRAMP_KVM(PACA_EXGEN, 0xf60)
 COMMON_HANDLER(facility_unavailable_common, 0xf60, facility_unavailable_exception)
+
+
+__VECTOR_HANDLER_REAL_OOL_HV(h_facility_unavailable, 0xf80, 0xfa0)
+__TRAMP_HANDLER_REAL_OOL_HV(h_facility_unavailable, 0xf80)
+__VECTOR_HANDLER_VIRT_OOL_HV(h_facility_unavailable, 0x4f80, 0x4fa0)
+__TRAMP_HANDLER_VIRT_OOL_HV(h_facility_unavailable, 0xf80)
+TRAMP_KVM_HV(PACA_EXGEN, 0xf80)
 COMMON_HANDLER(h_facility_unavailable_common, 0xf80, facility_unavailable_exception)
 
+
+VECTOR_HANDLER_REAL_NONE(0xfa0, 0x1200)
+VECTOR_HANDLER_VIRT_NONE(0x4fa0, 0x5200)
+
+
 #ifdef CONFIG_CBE_RAS
+VECTOR_HANDLER_REAL_HV(cbe_system_error, 0x1200, 0x1300)
+VECTOR_HANDLER_VIRT_NONE(0x5200, 0x5300)
+TRAMP_KVM_HV_SKIP(PACA_EXGEN, 0x1200)
 COMMON_HANDLER(cbe_system_error_common, 0x1200, cbe_system_error_exception)
-COMMON_HANDLER(cbe_maintenance_common, 0x1600, cbe_maintenance_exception)
-COMMON_HANDLER(cbe_thermal_common, 0x1800, cbe_thermal_exception)
-#endif /* CONFIG_CBE_RAS */
+#else /* CONFIG_CBE_RAS */
+VECTOR_HANDLER_REAL_NONE(0x1200, 0x1300)
+#endif
 
 
-COMMON_HANDLER_BEGIN(hmi_exception_early)
-	EXCEPTION_PROLOG_1(PACA_EXGEN, KVMTEST_HV, 0xe60)
-	mr	r10,r1			/* Save r1			*/
-	ld	r1,PACAEMERGSP(r13)	/* Use emergency stack		*/
-	subi	r1,r1,INT_FRAME_SIZE	/* alloc stack frame		*/
-	std	r9,_CCR(r1)		/* save CR in stackframe	*/
-	mfspr	r11,SPRN_HSRR0		/* Save HSRR0 */
-	std	r11,_NIP(r1)		/* save HSRR0 in stackframe	*/
-	mfspr	r12,SPRN_HSRR1		/* Save SRR1 */
-	std	r12,_MSR(r1)		/* save SRR1 in stackframe	*/
-	std	r10,0(r1)		/* make stack chain pointer	*/
-	std	r0,GPR0(r1)		/* save r0 in stackframe	*/
-	std	r10,GPR1(r1)		/* save r1 in stackframe	*/
-	EXCEPTION_PROLOG_COMMON_2(PACA_EXGEN)
-	EXCEPTION_PROLOG_COMMON_3(0xe60)
-	addi	r3,r1,STACK_FRAME_OVERHEAD
-	bl	hmi_exception_realmode
-	/* Windup the stack. */
-	/* Move original HSRR0 and HSRR1 into the respective regs */
-	ld	r9,_MSR(r1)
-	mtspr	SPRN_HSRR1,r9
-	ld	r3,_NIP(r1)
-	mtspr	SPRN_HSRR0,r3
-	ld	r9,_CTR(r1)
-	mtctr	r9
-	ld	r9,_XER(r1)
-	mtxer	r9
-	ld	r9,_LINK(r1)
-	mtlr	r9
-	REST_GPR(0, r1)
-	REST_8GPRS(2, r1)
-	REST_GPR(10, r1)
-	ld	r11,_CCR(r1)
-	mtcr	r11
-	REST_GPR(11, r1)
-	REST_2GPRS(12, r1)
-	/* restore original r1. */
-	ld	r1,GPR1(r1)
+VECTOR_HANDLER_REAL(instruction_breakpoint, 0x1300, 0x1400)
+VECTOR_HANDLER_VIRT(instruction_breakpoint, 0x5300, 0x5400, 0x1300)
+TRAMP_KVM_SKIP(PACA_EXGEN, 0x1300)
+COMMON_HANDLER(instruction_breakpoint_common, 0x1300, instruction_breakpoint_exception)
 
-	/*
-	 * Go to virtual mode and pull the HMI event information from
-	 * firmware.
-	 */
-	.globl hmi_exception_after_realmode
-hmi_exception_after_realmode:
-	SET_SCRATCH0(r13)
-	EXCEPTION_PROLOG_0(PACA_EXGEN)
-	b	tramp_real_hmi_exception
-COMMON_HANDLER_END(hmi_exception_early)
+VECTOR_HANDLER_REAL_NONE(0x1400, 0x1500)
+VECTOR_HANDLER_VIRT_NONE(0x5400, 0x5500)
 
+VECTOR_HANDLER_REAL_BEGIN(denorm_exception_hv, 0x1500, 0x1600)
+	mtspr	SPRN_SPRG_HSCRATCH0,r13
+	EXCEPTION_PROLOG_0(PACA_EXGEN)
+	EXCEPTION_PROLOG_1(PACA_EXGEN, NOTEST, 0x1500)
 
-#define MACHINE_CHECK_HANDLER_WINDUP			\
-	/* Clear MSR_RI before setting SRR0 and SRR1. */\
-	li	r0,MSR_RI;				\
-	mfmsr	r9;		/* get MSR value */	\
-	andc	r9,r9,r0;				\
-	mtmsrd	r9,1;		/* Clear MSR_RI */	\
-	/* Move original SRR0 and SRR1 into the respective regs */	\
-	ld	r9,_MSR(r1);				\
-	mtspr	SPRN_SRR1,r9;				\
-	ld	r3,_NIP(r1);				\
-	mtspr	SPRN_SRR0,r3;				\
-	ld	r9,_CTR(r1);				\
-	mtctr	r9;					\
-	ld	r9,_XER(r1);				\
-	mtxer	r9;					\
-	ld	r9,_LINK(r1);				\
-	mtlr	r9;					\
-	REST_GPR(0, r1);				\
-	REST_8GPRS(2, r1);				\
-	REST_GPR(10, r1);				\
-	ld	r11,_CCR(r1);				\
-	mtcr	r11;					\
-	/* Decrement paca->in_mce. */			\
-	lhz	r12,PACA_IN_MCE(r13);			\
-	subi	r12,r12,1;				\
-	sth	r12,PACA_IN_MCE(r13);			\
-	REST_GPR(11, r1);				\
-	REST_2GPRS(12, r1);				\
-	/* restore original r1. */			\
-	ld	r1,GPR1(r1)
+#ifdef CONFIG_PPC_DENORMALISATION
+	mfspr	r10,SPRN_HSRR1
+	mfspr	r11,SPRN_HSRR0		/* save HSRR0 */
+	andis.	r10,r10,(HSRR1_DENORM)@h /* denorm? */
+	addi	r11,r11,-4		/* HSRR0 is next instruction */
+	bne+	denorm_assist
+#endif
 
-	/*
-	 * Handle machine check early in real mode. We come here with
-	 * ME=1, MMU (IR=0 and DR=0) off and using MC emergency stack.
-	 */
-COMMON_HANDLER_BEGIN(machine_check_handle_early)
-	std	r0,GPR0(r1)	/* Save r0 */
-	EXCEPTION_PROLOG_COMMON_3(0x200)
-	bl	save_nvgprs
-	addi	r3,r1,STACK_FRAME_OVERHEAD
-	bl	machine_check_early
-	std	r3,RESULT(r1)	/* Save result */
-	ld	r12,_MSR(r1)
-#ifdef	CONFIG_PPC_P7_NAP
-	/*
-	 * Check if thread was in power saving mode. We come here when any
-	 * of the following is true:
-	 * a. thread wasn't in power saving mode
-	 * b. thread was in power saving mode with no state loss,
-	 *    supervisor state loss or hypervisor state loss.
-	 *
-	 * Go back to nap/sleep/winkle mode again if (b) is true.
-	 */
-	rlwinm.	r11,r12,47-31,30,31	/* Was it in power saving mode? */
-	beq	4f			/* No, it wasn;t */
-	/* Thread was in power saving mode. Go back to nap again. */
-	cmpwi	r11,2
-	blt	3f
-	/* Supervisor/Hypervisor state loss */
-	li	r0,1
-	stb	r0,PACA_NAPSTATELOST(r13)
-3:	bl	machine_check_queue_event
-	MACHINE_CHECK_HANDLER_WINDUP
-	GET_PACA(r13)
-	ld	r1,PACAR1(r13)
-	/*
-	 * Check what idle state this CPU was in and go back to same mode
-	 * again.
-	 */
-	lbz	r3,PACA_THREAD_IDLE_STATE(r13)
-	cmpwi	r3,PNV_THREAD_NAP
-	bgt	10f
-	IDLE_STATE_ENTER_SEQ(PPC_NAP)
-	/* No return */
-10:
-	cmpwi	r3,PNV_THREAD_SLEEP
-	bgt	2f
-	IDLE_STATE_ENTER_SEQ(PPC_SLEEP)
-	/* No return */
+	KVMTEST_PR(0x1500)
+	EXCEPTION_PROLOG_PSERIES_1(denorm_common, EXC_HV)
+VECTOR_HANDLER_REAL_END(denorm_exception_hv, 0x1500, 0x1600)
 
-2:
-	/*
-	 * Go back to winkle. Please note that this thread was woken up in
-	 * machine check from winkle and have not restored the per-subcore
-	 * state. Hence before going back to winkle, set last bit of HSPGR0
-	 * to 1. This will make sure that if this thread gets woken up
-	 * again at reset vector 0x100 then it will get chance to restore
-	 * the subcore state.
-	 */
-	ori	r13,r13,1
-	SET_PACA(r13)
-	IDLE_STATE_ENTER_SEQ(PPC_WINKLE)
-	/* No return */
-4:
+#ifdef CONFIG_PPC_DENORMALISATION
+VECTOR_HANDLER_VIRT_BEGIN(denorm_exception, 0x5500, 0x5600)
+	b	exc_0x1500_denorm_exception_hv
+VECTOR_HANDLER_VIRT_END(denorm_exception, 0x5500, 0x5600)
+#else
+VECTOR_HANDLER_VIRT_NONE(0x5500, 0x5600)
 #endif
-	/*
-	 * Check if we are coming from hypervisor userspace. If yes then we
-	 * continue in host kernel in V mode to deliver the MC event.
-	 */
-	rldicl.	r11,r12,4,63		/* See if MC hit while in HV mode. */
-	beq	5f
-	andi.	r11,r12,MSR_PR		/* See if coming from user. */
-	bne	9f			/* continue in V mode if we are. */
 
-5:
-#ifdef CONFIG_KVM_BOOK3S_64_HANDLER
-	/*
-	 * We are coming from kernel context. Check if we are coming from
-	 * guest. if yes, then we can continue. We will fall through
-	 * do_kvm_200->kvmppc_interrupt to deliver the MC event to guest.
-	 */
-	lbz	r11,HSTATE_IN_GUEST(r13)
-	cmpwi	r11,0			/* Check if coming from guest */
-	bne	9f			/* continue if we are. */
-#endif
-	/*
-	 * At this point we are not sure about what context we come from.
-	 * Queue up the MCE event and return from the interrupt.
-	 * But before that, check if this is an un-recoverable exception.
-	 * If yes, then stay on emergency stack and panic.
-	 */
-	andi.	r11,r12,MSR_RI
-	bne	2f
-1:	mfspr	r11,SPRN_SRR0
-	ld	r10,PACAKBASE(r13)
-	LOAD_HANDLER(r10,unrecover_mce)
-	mtspr	SPRN_SRR0,r10
-	ld	r10,PACAKMSR(r13)
-	/*
-	 * We are going down. But there are chances that we might get hit by
-	 * another MCE during panic path and we may run into unstable state
-	 * with no way out. Hence, turn ME bit off while going down, so that
-	 * when another MCE is hit during panic path, system will checkstop
-	 * and hypervisor will get restarted cleanly by SP.
-	 */
-	li	r3,MSR_ME
-	andc	r10,r10,r3		/* Turn off MSR_ME */
-	mtspr	SPRN_SRR1,r10
-	rfid
-	b	.
-2:
-	/*
-	 * Check if we have successfully handled/recovered from error, if not
-	 * then stay on emergency stack and panic.
-	 */
-	ld	r3,RESULT(r1)	/* Load result */
-	cmpdi	r3,0		/* see if we handled MCE successfully */
+TRAMP_KVM_SKIP(PACA_EXGEN, 0x1500)
+
+#ifdef CONFIG_PPC_DENORMALISATION
+TRAMP_HANDLER_BEGIN(denorm_assist)
+BEGIN_FTR_SECTION
+/*
+ * To denormalise we need to move a copy of the register to itself.
+ * For POWER6 do that here for all FP regs.
+ */
+	mfmsr	r10
+	ori	r10,r10,(MSR_FP|MSR_FE0|MSR_FE1)
+	xori	r10,r10,(MSR_FE0|MSR_FE1)
+	mtmsrd	r10
+	sync
+
+#define FMR2(n)  fmr (n), (n) ; fmr n+1, n+1
+#define FMR4(n)  FMR2(n) ; FMR2(n+2)
+#define FMR8(n)  FMR4(n) ; FMR4(n+4)
+#define FMR16(n) FMR8(n) ; FMR8(n+8)
+#define FMR32(n) FMR16(n) ; FMR16(n+16)
+	FMR32(0)
 
-	beq	1b		/* if !handled then panic */
-	/*
-	 * Return from MC interrupt.
-	 * Queue up the MCE event so that we can log it later, while
-	 * returning from kernel or opal call.
-	 */
-	bl	machine_check_queue_event
-	MACHINE_CHECK_HANDLER_WINDUP
-	rfid
-9:
-	/* Deliver the machine check to host kernel in V mode. */
-	MACHINE_CHECK_HANDLER_WINDUP
-	b	machine_check_pSeries
+FTR_SECTION_ELSE
+/*
+ * To denormalise we need to move a copy of the register to itself.
+ * For POWER7 do that here for the first 32 VSX registers only.
+ */
+	mfmsr	r10
+	oris	r10,r10,MSR_VSX@h
+	mtmsrd	r10
+	sync
 
-unrecover_mce:
-	/* Invoke machine_check_exception to print MCE event and panic. */
-	addi	r3,r1,STACK_FRAME_OVERHEAD
-	bl	machine_check_exception
-	/*
-	 * We will not reach here. Even if we did, there is no way out. Call
-	 * unrecoverable_exception and die.
-	 */
-1:	addi	r3,r1,STACK_FRAME_OVERHEAD
-	bl	unrecoverable_exception
-	b	1b
-COMMON_HANDLER_END(machine_check_handle_early)
+#define XVCPSGNDP2(n) XVCPSGNDP(n,n,n) ; XVCPSGNDP(n+1,n+1,n+1)
+#define XVCPSGNDP4(n) XVCPSGNDP2(n) ; XVCPSGNDP2(n+2)
+#define XVCPSGNDP8(n) XVCPSGNDP4(n) ; XVCPSGNDP4(n+4)
+#define XVCPSGNDP16(n) XVCPSGNDP8(n) ; XVCPSGNDP8(n+8)
+#define XVCPSGNDP32(n) XVCPSGNDP16(n) ; XVCPSGNDP16(n+16)
+	XVCPSGNDP32(0)
+
+ALT_FTR_SECTION_END_IFCLR(CPU_FTR_ARCH_206)
 
+BEGIN_FTR_SECTION
+	b	denorm_done
+END_FTR_SECTION_IFCLR(CPU_FTR_ARCH_207S)
 /*
- * r13 points to the PACA, r9 contains the saved CR,
- * r12 contain the saved SRR1, SRR0 is still ready for return
- * r3 has the faulting address
- * r9 - r13 are saved in paca->exslb.
- * r3 is saved in paca->slb_r3
- * We assume we aren't going to take any exceptions during this procedure.
+ * To denormalise we need to move a copy of the register to itself.
+ * For POWER8 we need to do that for all 64 VSX registers
  */
-COMMON_HANDLER_BEGIN(slb_miss_realmode)
-	mflr	r10
-#ifdef CONFIG_RELOCATABLE
-	mtctr	r11
+	XVCPSGNDP32(32)
+denorm_done:
+	mtspr	SPRN_HSRR0,r11
+	mtcrf	0x80,r9
+	ld	r9,PACA_EXGEN+EX_R9(r13)
+	RESTORE_PPR_PACA(PACA_EXGEN, r10)
+BEGIN_FTR_SECTION
+	ld	r10,PACA_EXGEN+EX_CFAR(r13)
+	mtspr	SPRN_CFAR,r10
+END_FTR_SECTION_IFSET(CPU_FTR_CFAR)
+	ld	r10,PACA_EXGEN+EX_R10(r13)
+	ld	r11,PACA_EXGEN+EX_R11(r13)
+	ld	r12,PACA_EXGEN+EX_R12(r13)
+	ld	r13,PACA_EXGEN+EX_R13(r13)
+	HRFID
+	b	.
 #endif
+TRAMP_HANDLER_END(denorm_assist)
 
-	stw	r9,PACA_EXSLB+EX_CCR(r13)	/* save CR in exc. frame */
-	std	r10,PACA_EXSLB+EX_LR(r13)	/* save LR */
+COMMON_HANDLER_HV(denorm_common, 0x1500, unknown_exception)
 
-#ifdef CONFIG_PPC_STD_MMU_64
-BEGIN_MMU_FTR_SECTION
-	bl	slb_allocate_realmode
-END_MMU_FTR_SECTION_IFCLR(MMU_FTR_TYPE_RADIX)
+
+#ifdef CONFIG_CBE_RAS
+VECTOR_HANDLER_REAL_HV(cbe_maintenance, 0x1600, 0x1700)
+VECTOR_HANDLER_VIRT_NONE(0x5600, 0x5700)
+TRAMP_KVM_HV_SKIP(PACA_EXGEN, 0x1600)
+COMMON_HANDLER(cbe_maintenance_common, 0x1600, cbe_maintenance_exception)
+
+#else /* CONFIG_CBE_RAS */
+VECTOR_HANDLER_REAL_NONE(0x1600, 0x1700)
 #endif
-	/* All done -- return from exception. */
 
-	ld	r10,PACA_EXSLB+EX_LR(r13)
-	ld	r3,PACA_EXSLB+EX_R3(r13)
-	lwz	r9,PACA_EXSLB+EX_CCR(r13)	/* get saved CR */
 
-	mtlr	r10
-	andi.	r10,r12,MSR_RI	/* check for unrecoverable exception */
-BEGIN_MMU_FTR_SECTION
-	beq-	2f
-FTR_SECTION_ELSE
-	b	2f
-ALT_MMU_FTR_SECTION_END_IFCLR(MMU_FTR_TYPE_RADIX)
+VECTOR_HANDLER_REAL(altivec_assist, 0x1700, 0x1800)
+VECTOR_HANDLER_VIRT(altivec_assist, 0x5700, 0x5800, 0x1700)
+TRAMP_KVM(PACA_EXGEN, 0x1700)
+#ifdef CONFIG_ALTIVEC
+COMMON_HANDLER(altivec_assist_common, 0x1700, altivec_assist_exception)
+#else
+COMMON_HANDLER(altivec_assist_common, 0x1700, unknown_exception)
+#endif
 
-.machine	push
-.machine	"power4"
-	mtcrf	0x80,r9
-	mtcrf	0x01,r9		/* slb_allocate uses cr0 and cr7 */
-.machine	pop
 
-	RESTORE_PPR_PACA(PACA_EXSLB, r9)
-	ld	r9,PACA_EXSLB+EX_R9(r13)
-	ld	r10,PACA_EXSLB+EX_R10(r13)
-	ld	r11,PACA_EXSLB+EX_R11(r13)
-	ld	r12,PACA_EXSLB+EX_R12(r13)
-	ld	r13,PACA_EXSLB+EX_R13(r13)
-	rfid
-	b	.	/* prevent speculative execution */
+#ifdef CONFIG_CBE_RAS
+VECTOR_HANDLER_REAL_HV(cbe_thermal, 0x1800, 0x1900)
+VECTOR_HANDLER_VIRT_NONE(0x5800, 0x5900)
+TRAMP_KVM_HV_SKIP(PACA_EXGEN, 0x1800)
+COMMON_HANDLER(cbe_thermal_common, 0x1800, cbe_thermal_exception)
 
-2:	mfspr	r11,SPRN_SRR0
-	ld	r10,PACAKBASE(r13)
-	LOAD_HANDLER(r10,unrecov_slb)
-	mtspr	SPRN_SRR0,r10
-	ld	r10,PACAKMSR(r13)
-	mtspr	SPRN_SRR1,r10
+#else /* CONFIG_CBE_RAS */
+VECTOR_HANDLER_REAL_NONE(0x1800, 0x1900)
+#endif
+
+
+/*
+ * An interrupt came in while soft-disabled. We set paca->irq_happened, then:
+ * - If it was a decrementer interrupt, we bump the dec to max and return.
+ * - If it was a doorbell we return immediately since doorbells are edge
+ *   triggered and won't automatically refire.
+ * - If it was a HMI we return immediately since we handled it in realmode
+ *   and it won't refire.
+ * - else we hard disable and return.
+ * This is called with r10 containing the value to OR to the paca field.
+ */
+#define MASKED_INTERRUPT(_H)				\
+masked_##_H##interrupt:					\
+	std	r11,PACA_EXGEN+EX_R11(r13);		\
+	lbz	r11,PACAIRQHAPPENED(r13);		\
+	or	r11,r11,r10;				\
+	stb	r11,PACAIRQHAPPENED(r13);		\
+	cmpwi	r10,PACA_IRQ_DEC;			\
+	bne	1f;					\
+	lis	r10,0x7fff;				\
+	ori	r10,r10,0xffff;				\
+	mtspr	SPRN_DEC,r10;				\
+	b	2f;					\
+1:	cmpwi	r10,PACA_IRQ_DBELL;			\
+	beq	2f;					\
+	cmpwi	r10,PACA_IRQ_HMI;			\
+	beq	2f;					\
+	mfspr	r10,SPRN_##_H##SRR1;			\
+	rldicl	r10,r10,48,1; /* clear MSR_EE */	\
+	rotldi	r10,r10,16;				\
+	mtspr	SPRN_##_H##SRR1,r10;			\
+2:	mtcrf	0x80,r9;				\
+	ld	r9,PACA_EXGEN+EX_R9(r13);		\
+	ld	r10,PACA_EXGEN+EX_R10(r13);		\
+	ld	r11,PACA_EXGEN+EX_R11(r13);		\
+	GET_SCRATCH0(r13);				\
+	##_H##rfid;					\
+	b	.
+
+USE_FIXED_SECTION(virt_trampolines)
+	MASKED_INTERRUPT()
+	MASKED_INTERRUPT(H)
+UNUSE_FIXED_SECTION(virt_trampolines)
+
+/*
+ * Called from arch_local_irq_enable when an interrupt needs
+ * to be resent. r3 contains 0x500, 0x900, 0xa00 or 0xe80 to indicate
+ * which kind of interrupt. MSR:EE is already off. We generate a
+ * stackframe like if a real interrupt had happened.
+ *
+ * Note: While MSR:EE is off, we need to make sure that _MSR
+ * in the generated frame has EE set to 1 or the exception
+ * handler will not properly re-enable them.
+ */
+COMMON_HANDLER_BEGIN(__replay_interrupt)
+	/* We are going to jump to the exception common code which
+	 * will retrieve various register values from the PACA which
+	 * we don't give a damn about, so we don't bother storing them.
+	 */
+	mfmsr	r12
+	mflr	r11
+	mfcr	r9
+	ori	r12,r12,MSR_EE
+	cmpwi	r3,0x900
+	beq	decrementer_common
+	cmpwi	r3,0x500
+	beq	hardware_interrupt_common
+BEGIN_FTR_SECTION
+	cmpwi	r3,0xe80
+	beq	h_doorbell_common
+	cmpwi	r3,0xea0
+	beq	h_virt_irq_common
+	cmpwi	r3,0xe60
+	beq	hmi_exception_common
+FTR_SECTION_ELSE
+	cmpwi	r3,0xa00
+	beq	doorbell_super_common
+ALT_FTR_SECTION_END_IFSET(CPU_FTR_HVMODE)
+	blr
+COMMON_HANDLER_END(__replay_interrupt)
+
+#ifdef CONFIG_KVM_BOOK3S_64_HANDLER
+TRAMP_HANDLER_BEGIN(kvmppc_skip_interrupt)
+	/*
+	 * Here all GPRs are unchanged from when the interrupt happened
+	 * except for r13, which is saved in SPRG_SCRATCH0.
+	 */
+	mfspr	r13, SPRN_SRR0
+	addi	r13, r13, 4
+	mtspr	SPRN_SRR0, r13
+	GET_SCRATCH0(r13)
 	rfid
 	b	.
+TRAMP_HANDLER_END(kvmppc_skip_interrupt)
 
-unrecov_slb:
-	EXCEPTION_PROLOG_COMMON(0x4100, PACA_EXSLB)
-	RECONCILE_IRQ_STATE(r10, r11)
-	bl	save_nvgprs
-1:	addi	r3,r1,STACK_FRAME_OVERHEAD
-	bl	unrecoverable_exception
-	b	1b
-COMMON_HANDLER_END(slb_miss_realmode)
+TRAMP_HANDLER_BEGIN(kvmppc_skip_Hinterrupt)
+	/*
+	 * Here all GPRs are unchanged from when the interrupt happened
+	 * except for r13, which is saved in SPRG_SCRATCH0.
+	 */
+	mfspr	r13, SPRN_HSRR0
+	addi	r13, r13, 4
+	mtspr	SPRN_HSRR0, r13
+	GET_SCRATCH0(r13)
+	hrfid
+	b	.
+TRAMP_HANDLER_END(kvmppc_skip_Hinterrupt)
+#endif
 
+TRAMP_HANDLER_BEGIN(ppc64_runlatch_on_trampoline)
+	b	__ppc64_runlatch_on
+TRAMP_HANDLER_END(ppc64_runlatch_on_trampoline)
+
+USE_FIXED_SECTION(virt_trampolines)
+	/*
+	 * The __end_interrupts marker must be past the out-of-line (OOL)
+	 * handlers, so that they are copied to real address 0x100 when running
+	 * a relocatable kernel. This ensures they can be reached from the short
+	 * trampoline handlers (like 0x4f00, 0x4f20, etc.) which branch
+	 * directly, without using LOAD_HANDLER().
+	 */
+	.align	7
+	.globl	__end_interrupts
+__end_interrupts:
+UNUSE_FIXED_SECTION(virt_trampolines)
 
 #ifdef CONFIG_PPC_970_NAP
 TRAMP_HANDLER_BEGIN(power4_fixup_nap)
-- 
2.9.3

^ permalink raw reply related	[flat|nested] 13+ messages in thread

* [PATCH 7/8] powerpc/pseries: use single macro for both parts of OOL exception
  2016-09-13  3:08 [PATCH 0/8] powerpc/64: use asm sections for head/exception layout Nicholas Piggin
                   ` (5 preceding siblings ...)
  2016-09-13  3:08 ` [PATCH 6/8] powerpc/pseries: move related exception code together Nicholas Piggin
@ 2016-09-13  3:08 ` Nicholas Piggin
  2016-09-13  3:08 ` [PATCH 8/8] powerpc/pseries: remove unused exception code, small cleanups Nicholas Piggin
  7 siblings, 0 replies; 13+ messages in thread
From: Nicholas Piggin @ 2016-09-13  3:08 UTC (permalink / raw)
  To: linuxppc-dev, Michael Ellerman, Benjamin Herrenschmidt, Paul Mackerras
  Cc: Nicholas Piggin

Replace each pair of __VECTOR_HANDLER_*_OOL* and __TRAMP_HANDLER_*_OOL*
invocations with a single VECTOR_HANDLER_*_OOL* macro. This simple
substitution is possible now that both parts of the OOL initial handler
get linked into their correct locations.
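
As an illustration only (the actual macro definitions are not part of
this diff), the combined wrappers are assumed to simply chain the
existing two-part macros, along these lines:

  /*
   * Hypothetical sketch, not the real header change: the combined
   * macro emits both the fixed-location vector branch and the
   * out-of-line trampoline from a single invocation, relying on
   * each part being placed into its proper fixed section.
   */
  #define VECTOR_HANDLER_REAL_OOL_HV(name, start, end)		\
  	__VECTOR_HANDLER_REAL_OOL_HV(name, start, end);		\
  	__TRAMP_HANDLER_REAL_OOL_HV(name, start)

  #define VECTOR_HANDLER_VIRT_OOL_HV(name, start, end, realvec)	\
  	__VECTOR_HANDLER_VIRT_OOL_HV(name, start, end);		\
  	__TRAMP_HANDLER_VIRT_OOL_HV(name, realvec)

The MASKABLE and non-HV variants would follow the same pattern, so each
OOL exception below needs only one line per real/virt variant.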

Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
---
 arch/powerpc/kernel/exceptions-64s.S | 55 ++++++++++++------------------------
 1 file changed, 18 insertions(+), 37 deletions(-)

diff --git a/arch/powerpc/kernel/exceptions-64s.S b/arch/powerpc/kernel/exceptions-64s.S
index 5dd7f0b..a647779 100644
--- a/arch/powerpc/kernel/exceptions-64s.S
+++ b/arch/powerpc/kernel/exceptions-64s.S
@@ -901,8 +901,7 @@ VECTOR_HANDLER_VIRT(single_step, 0x4d00, 0x4e00, 0xd00)
 TRAMP_KVM(PACA_EXGEN, 0xd00)
 COMMON_HANDLER(single_step_common, 0xd00, single_step_exception)
 
-__VECTOR_HANDLER_REAL_OOL_HV(h_data_storage, 0xe00, 0xe20)
-__TRAMP_HANDLER_REAL_OOL_HV(h_data_storage, 0xe00)
+VECTOR_HANDLER_REAL_OOL_HV(h_data_storage, 0xe00, 0xe20)
 VECTOR_HANDLER_VIRT_BEGIN(unused, 0x4e00, 0x4e20)
 	b       .       /* Can't happen, see v2.07 Book III-S section 6.5 */
 VECTOR_HANDLER_VIRT_END(unused, 0x4e00, 0x4e20)
@@ -922,8 +921,7 @@ COMMON_HANDLER_END(h_data_storage_common)
 COMMON_HANDLER(trap_0e_common, 0xe00, unknown_exception)
 
 
-__VECTOR_HANDLER_REAL_OOL_HV(h_instr_storage, 0xe20, 0xe40)
-__TRAMP_HANDLER_REAL_OOL_HV(h_instr_storage, 0xe20)
+VECTOR_HANDLER_REAL_OOL_HV(h_instr_storage, 0xe20, 0xe40)
 VECTOR_HANDLER_VIRT_BEGIN(unused, 0x4e20, 0x4e40)
 	b       .       /* Can't happen, see v2.07 Book III-S section 6.5 */
 VECTOR_HANDLER_VIRT_END(unused, 0x4e20, 0x4e40)
@@ -931,10 +929,8 @@ TRAMP_KVM_HV(PACA_EXGEN, 0xe20)
 COMMON_HANDLER(h_instr_storage_common, 0xe20, unknown_exception)
 
 
-__VECTOR_HANDLER_REAL_OOL_HV(emulation_assist, 0xe40, 0xe60)
-__TRAMP_HANDLER_REAL_OOL_HV(emulation_assist, 0xe40)
-__VECTOR_HANDLER_VIRT_OOL_HV(emulation_assist, 0x4e40, 0x4e60)
-__TRAMP_HANDLER_VIRT_OOL_HV(emulation_assist, 0xe40)
+VECTOR_HANDLER_REAL_OOL_HV(emulation_assist, 0xe40, 0xe60)
+VECTOR_HANDLER_VIRT_OOL_HV(emulation_assist, 0x4e40, 0x4e60, 0xe40)
 TRAMP_KVM_HV(PACA_EXGEN, 0xe40)
 COMMON_HANDLER(emulation_assist_common, 0xe40, emulation_assist_interrupt)
 
@@ -997,10 +993,8 @@ COMMON_HANDLER_END(hmi_exception_early)
 COMMON_HANDLER_ASYNC(hmi_exception_common, 0xe60, handle_hmi_exception)
 
 
-__VECTOR_HANDLER_REAL_OOL_MASKABLE_HV(h_doorbell, 0xe80, 0xea0)
-__TRAMP_HANDLER_REAL_OOL_MASKABLE_HV(h_doorbell, 0xe80)
-__VECTOR_HANDLER_VIRT_OOL_MASKABLE_HV(h_doorbell, 0x4e80, 0x4ea0)
-__TRAMP_HANDLER_VIRT_OOL_MASKABLE_HV(h_doorbell, 0xe80)
+VECTOR_HANDLER_REAL_OOL_MASKABLE_HV(h_doorbell, 0xe80, 0xea0)
+VECTOR_HANDLER_VIRT_OOL_MASKABLE_HV(h_doorbell, 0x4e80, 0x4ea0, 0xe80)
 TRAMP_KVM_HV(PACA_EXGEN, 0xe80)
 #ifdef CONFIG_PPC_DOORBELL
 COMMON_HANDLER_ASYNC(h_doorbell_common, 0xe80, doorbell_exception)
@@ -1009,10 +1003,8 @@ COMMON_HANDLER_ASYNC(h_doorbell_common, 0xe80, unknown_exception)
 #endif
 
 
-__VECTOR_HANDLER_REAL_OOL_MASKABLE_HV(h_virt_irq, 0xea0, 0xec0)
-__TRAMP_HANDLER_REAL_OOL_MASKABLE_HV(h_virt_irq, 0xea0)
-__VECTOR_HANDLER_VIRT_OOL_MASKABLE_HV(h_virt_irq, 0x4ea0, 0x4ec0)
-__TRAMP_HANDLER_VIRT_OOL_MASKABLE_HV(h_virt_irq, 0xea0)
+VECTOR_HANDLER_REAL_OOL_MASKABLE_HV(h_virt_irq, 0xea0, 0xec0)
+VECTOR_HANDLER_VIRT_OOL_MASKABLE_HV(h_virt_irq, 0x4ea0, 0x4ec0, 0xea0)
 TRAMP_KVM_HV(PACA_EXGEN, 0xea0)
 COMMON_HANDLER_ASYNC(h_virt_irq_common, 0xea0, do_IRQ)
 
@@ -1021,18 +1013,14 @@ VECTOR_HANDLER_REAL_NONE(0xec0, 0xf00)
 VECTOR_HANDLER_VIRT_NONE(0x4ec0, 0x4f00)
 
 
-__VECTOR_HANDLER_REAL_OOL(performance_monitor, 0xf00, 0xf20)
-__TRAMP_HANDLER_REAL_OOL(performance_monitor, 0xf00)
-__VECTOR_HANDLER_VIRT_OOL(performance_monitor, 0x4f00, 0x4f20)
-__TRAMP_HANDLER_VIRT_OOL(performance_monitor, 0xf00)
+VECTOR_HANDLER_REAL_OOL(performance_monitor, 0xf00, 0xf20)
+VECTOR_HANDLER_VIRT_OOL(performance_monitor, 0x4f00, 0x4f20, 0xf00)
 TRAMP_KVM(PACA_EXGEN, 0xf00)
 COMMON_HANDLER_ASYNC(performance_monitor_common, 0xf00, performance_monitor_exception)
 
 
-__VECTOR_HANDLER_REAL_OOL(altivec_unavailable, 0xf20, 0xf40)
-__TRAMP_HANDLER_REAL_OOL(altivec_unavailable, 0xf20)
-__VECTOR_HANDLER_VIRT_OOL(altivec_unavailable, 0x4f20, 0x4f40)
-__TRAMP_HANDLER_VIRT_OOL(altivec_unavailable, 0xf20)
+VECTOR_HANDLER_REAL_OOL(altivec_unavailable, 0xf20, 0xf40)
+VECTOR_HANDLER_VIRT_OOL(altivec_unavailable, 0x4f20, 0x4f40, 0xf20)
 TRAMP_KVM(PACA_EXGEN, 0xf20)
 COMMON_HANDLER_BEGIN(altivec_unavailable_common)
 	EXCEPTION_PROLOG_COMMON(0xf20, PACA_EXGEN)
@@ -1069,11 +1057,8 @@ END_FTR_SECTION_IFSET(CPU_FTR_ALTIVEC)
 COMMON_HANDLER_END(altivec_unavailable_common)
 
 
-
-__VECTOR_HANDLER_REAL_OOL(vsx_unavailable, 0xf40, 0xf60)
-__TRAMP_HANDLER_REAL_OOL(vsx_unavailable, 0xf40)
-__VECTOR_HANDLER_VIRT_OOL(vsx_unavailable, 0x4f40, 0x4f60)
-__TRAMP_HANDLER_VIRT_OOL(vsx_unavailable, 0xf40)
+VECTOR_HANDLER_REAL_OOL(vsx_unavailable, 0xf40, 0xf60)
+VECTOR_HANDLER_VIRT_OOL(vsx_unavailable, 0x4f40, 0x4f60, 0xf40)
 TRAMP_KVM(PACA_EXGEN, 0xf40)
 COMMON_HANDLER_BEGIN(vsx_unavailable_common)
 	EXCEPTION_PROLOG_COMMON(0xf40, PACA_EXGEN)
@@ -1109,18 +1094,14 @@ END_FTR_SECTION_IFSET(CPU_FTR_VSX)
 COMMON_HANDLER_END(vsx_unavailable_common)
 
 
-__VECTOR_HANDLER_REAL_OOL(facility_unavailable, 0xf60, 0xf80)
-__TRAMP_HANDLER_REAL_OOL(facility_unavailable, 0xf60)
-__VECTOR_HANDLER_VIRT_OOL(facility_unavailable, 0x4f60, 0x4f80)
-__TRAMP_HANDLER_VIRT_OOL(facility_unavailable, 0xf60)
+VECTOR_HANDLER_REAL_OOL(facility_unavailable, 0xf60, 0xf80)
+VECTOR_HANDLER_VIRT_OOL(facility_unavailable, 0x4f60, 0x4f80, 0xf60)
 TRAMP_KVM(PACA_EXGEN, 0xf60)
 COMMON_HANDLER(facility_unavailable_common, 0xf60, facility_unavailable_exception)
 
 
-__VECTOR_HANDLER_REAL_OOL_HV(h_facility_unavailable, 0xf80, 0xfa0)
-__TRAMP_HANDLER_REAL_OOL_HV(h_facility_unavailable, 0xf80)
-__VECTOR_HANDLER_VIRT_OOL_HV(h_facility_unavailable, 0x4f80, 0x4fa0)
-__TRAMP_HANDLER_VIRT_OOL_HV(h_facility_unavailable, 0xf80)
+VECTOR_HANDLER_REAL_OOL_HV(h_facility_unavailable, 0xf80, 0xfa0)
+VECTOR_HANDLER_VIRT_OOL_HV(h_facility_unavailable, 0x4f80, 0x4fa0, 0xf80)
 TRAMP_KVM_HV(PACA_EXGEN, 0xf80)
 COMMON_HANDLER(h_facility_unavailable_common, 0xf80, facility_unavailable_exception)
 
-- 
2.9.3


* [PATCH 8/8] powerpc/pseries: remove unused exception code, small cleanups
  2016-09-13  3:08 [PATCH 0/8] powerpc/64: use asm sections for head/exception layout Nicholas Piggin
                   ` (6 preceding siblings ...)
  2016-09-13  3:08 ` [PATCH 7/8] powerpc/pseries: use single macro for both parts of OOL exception Nicholas Piggin
@ 2016-09-13  3:08 ` Nicholas Piggin
  7 siblings, 0 replies; 13+ messages in thread
From: Nicholas Piggin @ 2016-09-13  3:08 UTC (permalink / raw)
  To: linuxppc-dev, Michael Ellerman, Benjamin Herrenschmidt, Paul Mackerras
  Cc: Nicholas Piggin

These cleanups were not done before the big patches because I only
noticed them afterwards. It has become much easier to see which handlers
are branched to from which exception vectors now, and to see
exactly what vector space is being used for what.

Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
---
 arch/powerpc/kernel/exceptions-64s.S | 26 ++++++++++++++++----------
 1 file changed, 16 insertions(+), 10 deletions(-)

diff --git a/arch/powerpc/kernel/exceptions-64s.S b/arch/powerpc/kernel/exceptions-64s.S
index a647779..4d06af3 100644
--- a/arch/powerpc/kernel/exceptions-64s.S
+++ b/arch/powerpc/kernel/exceptions-64s.S
@@ -89,6 +89,9 @@ USE_FIXED_SECTION(real_vectors)
 	.globl __start_interrupts
 __start_interrupts:
 
+/* No virt vectors corresponding with 0x0..0x100 */
+VECTOR_HANDLER_VIRT_NONE(0x4000, 0x4100)
+
 VECTOR_HANDLER_REAL_BEGIN(system_reset, 0x100, 0x200)
 	SET_SCRATCH0(r13)
 #ifdef CONFIG_PPC_P7_NAP
@@ -902,9 +905,7 @@ TRAMP_KVM(PACA_EXGEN, 0xd00)
 COMMON_HANDLER(single_step_common, 0xd00, single_step_exception)
 
 VECTOR_HANDLER_REAL_OOL_HV(h_data_storage, 0xe00, 0xe20)
-VECTOR_HANDLER_VIRT_BEGIN(unused, 0x4e00, 0x4e20)
-	b       .       /* Can't happen, see v2.07 Book III-S section 6.5 */
-VECTOR_HANDLER_VIRT_END(unused, 0x4e00, 0x4e20)
+VECTOR_HANDLER_VIRT_NONE(0x4e00, 0x4e20)
 TRAMP_KVM_HV_SKIP(PACA_EXGEN, 0xe00)
 COMMON_HANDLER_BEGIN(h_data_storage_common)
 	mfspr   r10,SPRN_HDAR
@@ -918,13 +919,10 @@ COMMON_HANDLER_BEGIN(h_data_storage_common)
 	bl      unknown_exception
 	b       ret_from_except
 COMMON_HANDLER_END(h_data_storage_common)
-COMMON_HANDLER(trap_0e_common, 0xe00, unknown_exception)
 
 
 VECTOR_HANDLER_REAL_OOL_HV(h_instr_storage, 0xe20, 0xe40)
-VECTOR_HANDLER_VIRT_BEGIN(unused, 0x4e20, 0x4e40)
-	b       .       /* Can't happen, see v2.07 Book III-S section 6.5 */
-VECTOR_HANDLER_VIRT_END(unused, 0x4e20, 0x4e40)
+VECTOR_HANDLER_VIRT_NONE(0x4e20, 0x4e40)
 TRAMP_KVM_HV(PACA_EXGEN, 0xe20)
 COMMON_HANDLER(h_instr_storage_common, 0xe20, unknown_exception)
 
@@ -935,11 +933,14 @@ TRAMP_KVM_HV(PACA_EXGEN, 0xe40)
 COMMON_HANDLER(emulation_assist_common, 0xe40, emulation_assist_interrupt)
 
 
+/*
+ * hmi_exception trampoline is a special case. It jumps to hmi_exception_early
+ * first, and then eventually from there to the trampoline to get into virtual
+ * mode.
+ */
 __VECTOR_HANDLER_REAL_OOL_HV_DIRECT(hmi_exception, 0xe60, 0xe80, hmi_exception_early)
 __TRAMP_HANDLER_REAL_OOL_MASKABLE_HV(hmi_exception, 0xe60)
-VECTOR_HANDLER_VIRT_BEGIN(unused, 0x4e60, 0x4e80)
-	b       .       /* Can't happen, see v2.07 Book III-S section 6.5 */
-VECTOR_HANDLER_VIRT_END(unused, 0x4e60, 0x4e80)
+VECTOR_HANDLER_VIRT_NONE(0x4e60, 0x4e80)
 TRAMP_KVM_HV(PACA_EXGEN, 0xe60)
 COMMON_HANDLER_BEGIN(hmi_exception_early)
 	EXCEPTION_PROLOG_1(PACA_EXGEN, KVMTEST_HV, 0xe60)
@@ -1117,6 +1118,7 @@ TRAMP_KVM_HV_SKIP(PACA_EXGEN, 0x1200)
 COMMON_HANDLER(cbe_system_error_common, 0x1200, cbe_system_error_exception)
 #else /* CONFIG_CBE_RAS */
 VECTOR_HANDLER_REAL_NONE(0x1200, 0x1300)
+VECTOR_HANDLER_VIRT_NONE(0x5200, 0x5300)
 #endif
 
 
@@ -1231,6 +1233,7 @@ COMMON_HANDLER(cbe_maintenance_common, 0x1600, cbe_maintenance_exception)
 
 #else /* CONFIG_CBE_RAS */
 VECTOR_HANDLER_REAL_NONE(0x1600, 0x1700)
+VECTOR_HANDLER_VIRT_NONE(0x5600, 0x5700)
 #endif
 
 
@@ -1252,8 +1255,11 @@ COMMON_HANDLER(cbe_thermal_common, 0x1800, cbe_thermal_exception)
 
 #else /* CONFIG_CBE_RAS */
 VECTOR_HANDLER_REAL_NONE(0x1800, 0x1900)
+VECTOR_HANDLER_VIRT_NONE(0x5800, 0x5900)
 #endif
 
+/* Real vector area ends at 0x18ff */
+VECTOR_HANDLER_VIRT_NONE(0x5900, 0x6000)
 
 /*
  * An interrupt came in while soft-disabled. We set paca->irq_happened, then:
-- 
2.9.3


* Re: [PATCH 3/8] powerpc/pseries: exception vector macros
  2016-09-13  3:08 ` [PATCH 3/8] powerpc/pseries: exception vector macros Nicholas Piggin
@ 2016-09-13  6:56   ` kbuild test robot
  2016-09-13  9:22     ` Nicholas Piggin
  0 siblings, 1 reply; 13+ messages in thread
From: kbuild test robot @ 2016-09-13  6:56 UTC (permalink / raw)
  To: Nicholas Piggin
  Cc: kbuild-all, linuxppc-dev, Michael Ellerman,
	Benjamin Herrenschmidt, Paul Mackerras, Nicholas Piggin


Hi Nicholas,

[auto build test ERROR on powerpc/next]
[also build test ERROR on v4.8-rc6 next-20160912]
[if your patch is applied to the wrong git tree, please drop us a note to help improve the system]
[Suggest to use git(>=2.9.0) format-patch --base=<commit> (or --base=auto for convenience) to record what (public, well-known) commit your patch series was built on]
[Check https://git-scm.com/docs/git-format-patch for more information]

url:    https://github.com/0day-ci/linux/commits/Nicholas-Piggin/powerpc-64-use-asm-sections-for-head-exception-layout/20160913-113052
base:   https://git.kernel.org/pub/scm/linux/kernel/git/powerpc/linux.git next
config: powerpc-defconfig (attached as .config)
compiler: powerpc64-linux-gnu-gcc (Debian 5.4.0-6) 5.4.0 20160609
reproduce:
        wget https://git.kernel.org/cgit/linux/kernel/git/wfg/lkp-tests.git/plain/sbin/make.cross -O ~/bin/make.cross
        chmod +x ~/bin/make.cross
        # save the attached .config to linux build tree
        make.cross ARCH=powerpc 

All errors (new ones prefixed by >>):

   arch/powerpc/kernel/built-in.o: In function `.arch_local_irq_restore':
>> (.text+0x7390): undefined reference to `.__replay_interrupt'

---
0-DAY kernel test infrastructure                Open Source Technology Center
https://lists.01.org/pipermail/kbuild-all                   Intel Corporation


* Re: [PATCH 3/8] powerpc/pseries: exception vector macros
  2016-09-13  6:56   ` kbuild test robot
@ 2016-09-13  9:22     ` Nicholas Piggin
  0 siblings, 0 replies; 13+ messages in thread
From: Nicholas Piggin @ 2016-09-13  9:22 UTC (permalink / raw)
  Cc: linuxppc-dev, Michael Ellerman, Benjamin Herrenschmidt, Paul Mackerras

On Tue, 13 Sep 2016 14:56:47 +0800
kbuild test robot <lkp@intel.com> wrote:

> Hi Nicholas,
> 
> [auto build test ERROR on powerpc/next]
> [also build test ERROR on v4.8-rc6 next-20160912]
> [if your patch is applied to the wrong git tree, please drop us a note to help improve the system]
> [Suggest to use git(>=2.9.0) format-patch --base=<commit> (or --base=auto for convenience) to record what (public, well-known) commit your patch series was built on]
> [Check https://git-scm.com/docs/git-format-patch for more information]
> 
> url:    https://github.com/0day-ci/linux/commits/Nicholas-Piggin/powerpc-64-use-asm-sections-for-head-exception-layout/20160913-113052
> base:   https://git.kernel.org/pub/scm/linux/kernel/git/powerpc/linux.git next
> config: powerpc-defconfig (attached as .config)
> compiler: powerpc64-linux-gnu-gcc (Debian 5.4.0-6) 5.4.0 20160609
> reproduce:
>         wget https://git.kernel.org/cgit/linux/kernel/git/wfg/lkp-tests.git/plain/sbin/make.cross -O ~/bin/make.cross
>         chmod +x ~/bin/make.cross
>         # save the attached .config to linux build tree
>         make.cross ARCH=powerpc 
> 
> All errors (new ones prefixed by >>):
> 
>    arch/powerpc/kernel/built-in.o: In function `.arch_local_irq_restore':
> >> (.text+0x7390): undefined reference to `.__replay_interrupt'  

Ah, __replay_interrupt lost its _GLOBAL annotation, that must be
it. I'm not sure why I didn't see this -- I tested a big-endian build...
I'll fix that up.
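
(A sketch of the shape of the fix, not the exact hunk: the entry just
needs to be emitted as a global text symbol again, e.g.

	_GLOBAL(__replay_interrupt)

so that the reference from arch_local_irq_restore() resolves at link
time.)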


* Re: [1/8] powerpc/pseries: hypervisor facility unavailable use correct handler
  2016-09-13  3:08 ` [PATCH 1/8] powerpc/pseries: hypervisor facility unavailable use correct handler Nicholas Piggin
@ 2016-09-25  3:00   ` Michael Ellerman
  0 siblings, 0 replies; 13+ messages in thread
From: Michael Ellerman @ 2016-09-25  3:00 UTC (permalink / raw)
  To: Nicholas Piggin, linuxppc-dev, Benjamin Herrenschmidt, Paul Mackerras
  Cc: Nicholas Piggin

On Tue, 2016-09-13 at 03:08:39 UTC, Nicholas Piggin wrote:
> The 0xf80 hv_facility_unavailable trampoline branches to the 0xf60
> handler. This works because they both do the same thing, but it should
> be fixed.
> 
> Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
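
(Purely for illustration, with placeholder labels rather than the exact
ones in exceptions-64s.S at the time: the quoted problem is a 0xf80
trampoline of roughly this shape

	. = 0xf80
	SET_SCRATCH0(r13)
	EXCEPTION_PROLOG_0(PACA_EXGEN)
	b	facility_unavailable_handler	/* the 0xf60 handler's target */

whose final branch should instead go to the hypervisor (HV) facility
unavailable handler for 0xf80.)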

Applied to powerpc next, thanks.

https://git.kernel.org/powerpc/c/40e1b1cfb529891307b21f6e33

cheers


* Re: [2/8] powerpc/pseries: syscall remove trampoline
  2016-09-13  3:08 ` [PATCH 2/8] powerpc/pseries: syscall remove trampoline Nicholas Piggin
@ 2016-09-25  3:00   ` Michael Ellerman
  0 siblings, 0 replies; 13+ messages in thread
From: Michael Ellerman @ 2016-09-25  3:00 UTC (permalink / raw)
  To: Nicholas Piggin, linuxppc-dev, Benjamin Herrenschmidt, Paul Mackerras
  Cc: Nicholas Piggin

On Tue, 2016-09-13 at 03:08:40 UTC, Nicholas Piggin wrote:
> The syscall trampoline is not required, remove it.
>
> Signed-off-by: Nicholas Piggin <npiggin@gmail.com>

Applied to powerpc next, thanks.

https://git.kernel.org/powerpc/c/a24553dd02dc6c7d2912af0b4b

I rewrote the change log to be:

    powerpc/pseries: Remove unnecessary syscall trampoline

    When we originally added the ability to split the exception vectors from
    the kernel (commit 1f6a93e4c35e ("powerpc: Make it possible to move the
    interrupt handlers away from the kernel" 2008-09-15)), the LOAD_HANDLER() macro
    used an addi instruction to compute the offset of the common handler
    from the kernel base address.

    Using addi meant the handler had to be within 32K of the kernel base
    address, due to the addi instruction taking a signed immediate value.
    That necessitated creating a trampoline for the system call handler,
    because system_call_common (in entry64.S) is not linked within 32K of
    the kernel base address.

    Later in commit 61e2390ede3c ("powerpc: Make load_hander handle upto 64k
    offset" 2012-11-15) we changed LOAD_HANDLER to take a 64K offset, by
    changing it to use ori.

    Although system_call_common is not in head_64.S or exceptions-64s.S, it
    is included in head-y, which causes it to be linked early in the kernel
    text, so in practice it ends up below 64K. Additionally if it can't be
    placed below 64K the linker will fail to build with a "relocation
    truncated to fit" error.

    So remove the trampoline.

    Newer toolchains are able to work out that the ori in LOAD_HANDLER only
    takes a 16 bit offset, and so they generate a 16 bit relocation. Older
    toolchains (binutils 2.22 at least) are not so smart, so we have to add
    the @l annotation to tell the assembler to generate a 16 bit relocation.
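
(For readers following along, a simplified sketch of the usage pattern
and macro described above; this is not necessarily the verbatim code in
exception-64s.h:

	/* kernel base address is kept in the PACA */
	ld	r12,PACAKBASE(r13)
	LOAD_HANDLER(r12, system_call_common)

	/* where LOAD_HANDLER is roughly: */
	#define LOAD_HANDLER(reg, label)			\
		ori	reg,reg,(label)-_stext@l  /* 16-bit offset from base */

The @l is what tells older assemblers to generate the 16-bit relocation
for the (label)-_stext difference.)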

cheers


