* [PATCH 0/7] arm64/kvm: common ESR_ELx definitions and decoding
@ 2015-01-07 12:04 Mark Rutland
  2015-01-07 12:04 ` [PATCH 1/7] arm64: introduce common ESR_ELx_* definitions Mark Rutland
                   ` (6 more replies)
  0 siblings, 7 replies; 23+ messages in thread
From: Mark Rutland @ 2015-01-07 12:04 UTC (permalink / raw)
  To: linux-arm-kernel

Currently we have two sets of macros used for ESR_ELx handling, one used
by core arm64 code and the other used by KVM. These differ slightly in
naming convention and style of definition.

This patch series introduces and migrates all users to a common set of
macros for ESR_ELx handling, preventing further drift.

Additionally, this series adds exception class decoding when reporting
exceptions, saving developers from having to perform tedious mental
arithmetic to figure out the likely cause of an unexpected exception.
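
As a rough illustration (this is not code from the series itself; the real
lookup table is added in patch 4), the decode boils down to something like:

/* Illustrative sketch only: two entries of the EC -> name mapping. */
static const char *rough_esr_class(unsigned int esr)
{
	unsigned int ec = esr >> 26;	/* ESR_ELx.EC lives in bits [31:26] */

	if (ec == 0x25)
		return "DABT (current EL)";
	if (ec == 0x2f)
		return "SError";
	return "UNRECOGNIZED EC";
}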

Thanks,
Mark.

Mark Rutland (7):
  arm64: introduce common ESR_ELx_* definitions
  arm64: move to ESR_ELx macros
  arm64: remove ESR_EL1_* macros
  arm64: decode ESR_ELx.EC when reporting exceptions
  arm64: kvm: move to ESR_ELx macros
  arm64: kvm: remove ESR_EL2_* macros
  arm64: kvm: decode ESR_ELx.EC when reporting exceptions

 arch/arm64/include/asm/esr.h         | 116 +++++++++++++++++++++++++----------
 arch/arm64/include/asm/kvm_arm.h     |  73 ++--------------------
 arch/arm64/include/asm/kvm_emulate.h |  28 +++++----
 arch/arm64/kernel/entry.S            |  64 +++++++++----------
 arch/arm64/kernel/signal32.c         |   2 +-
 arch/arm64/kernel/traps.c            |  50 ++++++++++++++-
 arch/arm64/kvm/emulate.c             |   5 +-
 arch/arm64/kvm/handle_exit.c         |  39 ++++++------
 arch/arm64/kvm/hyp.S                 |  17 ++---
 arch/arm64/kvm/inject_fault.c        |  14 ++---
 arch/arm64/kvm/sys_regs.c            |  23 ++++---
 arch/arm64/mm/fault.c                |   2 +-
 12 files changed, 236 insertions(+), 197 deletions(-)

-- 
1.9.1


* [PATCH 1/7] arm64: introduce common ESR_ELx_* definitions
  2015-01-07 12:04 [PATCH 0/7] arm64/kvm: common ESR_ELx definitions and decoding Mark Rutland
@ 2015-01-07 12:04 ` Mark Rutland
  2015-01-07 16:23   ` Catalin Marinas
  2015-01-11 16:59   ` Christoffer Dall
  2015-01-07 12:04 ` [PATCH 2/7] arm64: move to ESR_ELx macros Mark Rutland
                   ` (5 subsequent siblings)
  6 siblings, 2 replies; 23+ messages in thread
From: Mark Rutland @ 2015-01-07 12:04 UTC (permalink / raw)
  To: linux-arm-kernel

Currently we have separate ESR_EL{1,2}_* macros, despite the fact that
the encodings are common. While encodings are architected to refer to
the current EL or a lower EL, the macros refer to particular ELs (e.g.
ESR_EL1_EC_DABT_EL0). Having these duplicate definitions is redundant,
and their naming is misleading.

This patch introduces common ESR_ELx_* macros that can be used in all
cases, in preparation for later patches which will migrate existing
users over. Some additional cleanups are made in the process:

* Suffixes for particular exception levels (e.g. _EL0, _EL1) are
  replaced with more general _LOW and _CUR suffixes, matching the
  architectural intent.

* ESR_ELx_EC_WFx, rather than ESR_ELx_EC_WFI is introduced, as this
  EC encoding covers traps from both WFE and WFI. Similarly,
  ESR_ELx_WFx_ISS_WFE rather than ESR_ELx_EC_WFI_ISS_WFE is introduced.

* Multi-bit fields are given consistently named _SHIFT and _MASK macros.

* UL() is used for compatibility with assembly files.

* Comments are added for currently unallocated ESR_ELx.EC encodings.

For fields other than ESR_ELx.EC, macros are only implemented for fields
for which there is already an ESR_EL{1,2}_* macro.
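
For illustration only (these helpers are not part of the patch), extracting
fields with the new definitions looks like the following; the UL() wrapping
keeps the same masks usable from the assembly files updated later in the
series:

/* Illustrative only: field extraction using the common macros. */
#include <asm/esr.h>

static inline unsigned int esr_to_ec(unsigned long esr)
{
	return (esr & ESR_ELx_EC_MASK) >> ESR_ELx_EC_SHIFT;
}

static inline unsigned long esr_to_iss(unsigned long esr)
{
	return esr & ESR_ELx_ISS_MASK;
}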

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Christoffer Dall <christoffer.dall@linaro.org>
Cc: Marc Zyngier <marc.zyngier@arm.com>
Cc: Peter Maydell <peter.maydell@linaro.org>
Cc: Will Deacon <will.deacon@arm.com>
---
 arch/arm64/include/asm/esr.h | 78 ++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 78 insertions(+)

diff --git a/arch/arm64/include/asm/esr.h b/arch/arm64/include/asm/esr.h
index 72674f4..eaee379 100644
--- a/arch/arm64/include/asm/esr.h
+++ b/arch/arm64/include/asm/esr.h
@@ -54,4 +54,82 @@
 #define ESR_EL1_EC_BKPT32	(0x38)
 #define ESR_EL1_EC_BRK64	(0x3C)
 
+#define ESR_ELx_EC_UNKNOWN	(0x00)
+#define ESR_ELx_EC_WFx		(0x01)
+/* Unallocated EC: 0x02 */
+#define ESR_ELx_EC_CP15_32	(0x03)
+#define ESR_ELx_EC_CP15_64	(0x04)
+#define ESR_ELx_EC_CP14_MR	(0x05)
+#define ESR_ELx_EC_CP14_LS	(0x06)
+#define ESR_ELx_EC_FP_ASIMD	(0x07)
+#define ESR_ELx_EC_CP10_ID	(0x08)
+/* Unallocated EC: 0x09 - 0x0B */
+#define ESR_ELx_EC_CP14_64	(0x0C)
+/* Unallocated EC: 0x0D */
+#define ESR_ELx_EC_ILL		(0x0E)
+/* Unallocated EC: 0x0F - 0x10 */
+#define ESR_ELx_EC_SVC32	(0x11)
+#define ESR_ELx_EC_HVC32	(0x12)
+#define ESR_ELx_EC_SMC32	(0x13)
+/* Unallocated EC: 0x14 */
+#define ESR_ELx_EC_SVC64	(0x15)
+#define ESR_ELx_EC_HVC64	(0x16)
+#define ESR_ELx_EC_SMC64	(0x17)
+#define ESR_ELx_EC_SYS64	(0x18)
+/* Unallocated EC: 0x19 - 0x1E */
+#define ESR_ELx_EC_IMP_DEF	(0x1f)
+#define ESR_ELx_EC_IABT_LOW	(0x20)
+#define ESR_ELx_EC_IABT_CUR	(0x21)
+#define ESR_ELx_EC_PC_ALIGN	(0x22)
+/* Unallocated EC: 0x23 */
+#define ESR_ELx_EC_DABT_LOW	(0x24)
+#define ESR_ELx_EC_DABT_CUR	(0x25)
+#define ESR_ELx_EC_SP_ALIGN	(0x26)
+/* Unallocated EC: 0x27 */
+#define ESR_ELx_EC_FP_EXC32	(0x28)
+/* Unallocated EC: 0x29 - 0x2B */
+#define ESR_ELx_EC_FP_EXC64	(0x2C)
+/* Unallocated EC: 0x2D - 0x2E */
+#define ESR_ELx_EC_SERROR	(0x2F)
+#define ESR_ELx_EC_BREAKPT_LOW	(0x30)
+#define ESR_ELx_EC_BREAKPT_CUR	(0x31)
+#define ESR_ELx_EC_SOFTSTP_LOW	(0x32)
+#define ESR_ELx_EC_SOFTSTP_CUR	(0x33)
+#define ESR_ELx_EC_WATCHPT_LOW	(0x34)
+#define ESR_ELx_EC_WATCHPT_CUR	(0x35)
+/* Unallocated EC: 0x36 - 0x37 */
+#define ESR_ELx_EC_BKPT32	(0x38)
+/* Unallocated EC: 0x39 */
+#define ESR_ELx_EC_VECTOR32	(0x3A)
+/* Unallocated EC: 0x3B */
+#define ESR_ELx_EC_BRK64	(0x3C)
+/* Unallocated EC: 0x3D - 0x3F */
+#define ESR_ELx_EC_MAX		(0x3F)
+
+#define ESR_ELx_EC_SHIFT	(26)
+#define ESR_ELx_EC_MASK		(UL(0x3F) << ESR_ELx_EC_SHIFT)
+
+#define ESR_ELx_IL		(UL(1) << 25)
+#define ESR_ELx_ISS_MASK	(ESR_ELx_IL - 1)
+#define ESR_ELx_ISV		(UL(1) << 24)
+#define ESR_ELx_SAS		(UL(1) << 22)
+#define ESR_ELx_SSE		(UL(1) << 21)
+#define ESR_ELx_SRT_SHIFT	(16)
+#define ESR_ELx_SRT_MASK	(UL(0x1F) << ESR_ELx_SRT_SHIFT)
+#define ESR_ELx_SF 		(UL(1) << 15)
+#define ESR_ELx_AR 		(UL(1) << 14)
+#define ESR_ELx_EA 		(UL(1) << 9)
+#define ESR_ELx_CM 		(UL(1) << 8)
+#define ESR_ELx_S1PTW 		(UL(1) << 7)
+#define ESR_ELx_WNR		(UL(1) << 6)
+#define ESR_ELx_FSC		(0x3F)
+#define ESR_ELx_FSC_TYPE	(0x3C)
+#define ESR_ELx_FSC_EXTABT	(0x10)
+#define ESR_ELx_FSC_FAULT	(0x04)
+#define ESR_ELx_FSC_PERM	(0x0F)
+#define ESR_ELx_CV		(UL(1) << 24)
+#define ESR_ELx_COND_SHIFT	(20)
+#define ESR_ELx_COND_MASK	(UL(0xF) << ESR_ELx_COND_SHIFT)
+#define ESR_ELx_WFx_ISS_WFE	(UL(1) << 0)
+
 #endif /* __ASM_ESR_H */
-- 
1.9.1


* [PATCH 2/7] arm64: move to ESR_ELx macros
  2015-01-07 12:04 [PATCH 0/7] arm64/kvm: common ESR_ELx definitions and decoding Mark Rutland
  2015-01-07 12:04 ` [PATCH 1/7] arm64: introduce common ESR_ELx_* definitions Mark Rutland
@ 2015-01-07 12:04 ` Mark Rutland
  2015-01-11 17:01   ` Christoffer Dall
  2015-01-07 12:04 ` [PATCH 3/7] arm64: remove ESR_EL1_* macros Mark Rutland
                   ` (4 subsequent siblings)
  6 siblings, 1 reply; 23+ messages in thread
From: Mark Rutland @ 2015-01-07 12:04 UTC (permalink / raw)
  To: linux-arm-kernel

Now that we have common ESR_ELx_* macros, move the core arm64 code over
to them.

There should be no functional change as a result of this patch.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Christoffer Dall <christoffer.dall@linaro.org>
Cc: Marc Zyngier <marc.zyngier@arm.com>
Cc: Peter Maydell <peter.maydell@linaro.org>
Cc: Will Deacon <will.deacon@arm.com>
---
 arch/arm64/kernel/entry.S    | 64 ++++++++++++++++++++++----------------------
 arch/arm64/kernel/signal32.c |  2 +-
 arch/arm64/mm/fault.c        |  2 +-
 3 files changed, 34 insertions(+), 34 deletions(-)

diff --git a/arch/arm64/kernel/entry.S b/arch/arm64/kernel/entry.S
index fd4fa37..02e6af1 100644
--- a/arch/arm64/kernel/entry.S
+++ b/arch/arm64/kernel/entry.S
@@ -269,18 +269,18 @@ ENDPROC(el1_error_invalid)
 el1_sync:
 	kernel_entry 1
 	mrs	x1, esr_el1			// read the syndrome register
-	lsr	x24, x1, #ESR_EL1_EC_SHIFT	// exception class
-	cmp	x24, #ESR_EL1_EC_DABT_EL1	// data abort in EL1
+	lsr	x24, x1, #ESR_ELx_EC_SHIFT	// exception class
+	cmp	x24, #ESR_ELx_EC_DABT_CUR	// data abort in EL1
 	b.eq	el1_da
-	cmp	x24, #ESR_EL1_EC_SYS64		// configurable trap
+	cmp	x24, #ESR_ELx_EC_SYS64		// configurable trap
 	b.eq	el1_undef
-	cmp	x24, #ESR_EL1_EC_SP_ALIGN	// stack alignment exception
+	cmp	x24, #ESR_ELx_EC_SP_ALIGN	// stack alignment exception
 	b.eq	el1_sp_pc
-	cmp	x24, #ESR_EL1_EC_PC_ALIGN	// pc alignment exception
+	cmp	x24, #ESR_ELx_EC_PC_ALIGN	// pc alignment exception
 	b.eq	el1_sp_pc
-	cmp	x24, #ESR_EL1_EC_UNKNOWN	// unknown exception in EL1
+	cmp	x24, #ESR_ELx_EC_UNKNOWN	// unknown exception in EL1
 	b.eq	el1_undef
-	cmp	x24, #ESR_EL1_EC_BREAKPT_EL1	// debug exception in EL1
+	cmp	x24, #ESR_ELx_EC_BREAKPT_CUR	// debug exception in EL1
 	b.ge	el1_dbg
 	b	el1_inv
 el1_da:
@@ -318,7 +318,7 @@ el1_dbg:
 	/*
 	 * Debug exception handling
 	 */
-	cmp	x24, #ESR_EL1_EC_BRK64		// if BRK64
+	cmp	x24, #ESR_ELx_EC_BRK64		// if BRK64
 	cinc	x24, x24, eq			// set bit '0'
 	tbz	x24, #0, el1_inv		// EL1 only
 	mrs	x0, far_el1
@@ -375,26 +375,26 @@ el1_preempt:
 el0_sync:
 	kernel_entry 0
 	mrs	x25, esr_el1			// read the syndrome register
-	lsr	x24, x25, #ESR_EL1_EC_SHIFT	// exception class
-	cmp	x24, #ESR_EL1_EC_SVC64		// SVC in 64-bit state
+	lsr	x24, x25, #ESR_ELx_EC_SHIFT	// exception class
+	cmp	x24, #ESR_ELx_EC_SVC64		// SVC in 64-bit state
 	b.eq	el0_svc
-	cmp	x24, #ESR_EL1_EC_DABT_EL0	// data abort in EL0
+	cmp	x24, #ESR_ELx_EC_DABT_LOW	// data abort in EL0
 	b.eq	el0_da
-	cmp	x24, #ESR_EL1_EC_IABT_EL0	// instruction abort in EL0
+	cmp	x24, #ESR_ELx_EC_IABT_LOW	// instruction abort in EL0
 	b.eq	el0_ia
-	cmp	x24, #ESR_EL1_EC_FP_ASIMD	// FP/ASIMD access
+	cmp	x24, #ESR_ELx_EC_FP_ASIMD	// FP/ASIMD access
 	b.eq	el0_fpsimd_acc
-	cmp	x24, #ESR_EL1_EC_FP_EXC64	// FP/ASIMD exception
+	cmp	x24, #ESR_ELx_EC_FP_EXC64	// FP/ASIMD exception
 	b.eq	el0_fpsimd_exc
-	cmp	x24, #ESR_EL1_EC_SYS64		// configurable trap
+	cmp	x24, #ESR_ELx_EC_SYS64		// configurable trap
 	b.eq	el0_undef
-	cmp	x24, #ESR_EL1_EC_SP_ALIGN	// stack alignment exception
+	cmp	x24, #ESR_ELx_EC_SP_ALIGN	// stack alignment exception
 	b.eq	el0_sp_pc
-	cmp	x24, #ESR_EL1_EC_PC_ALIGN	// pc alignment exception
+	cmp	x24, #ESR_ELx_EC_PC_ALIGN	// pc alignment exception
 	b.eq	el0_sp_pc
-	cmp	x24, #ESR_EL1_EC_UNKNOWN	// unknown exception in EL0
+	cmp	x24, #ESR_ELx_EC_UNKNOWN	// unknown exception in EL0
 	b.eq	el0_undef
-	cmp	x24, #ESR_EL1_EC_BREAKPT_EL0	// debug exception in EL0
+	cmp	x24, #ESR_ELx_EC_BREAKPT_LOW	// debug exception in EL0
 	b.ge	el0_dbg
 	b	el0_inv
 
@@ -403,30 +403,30 @@ el0_sync:
 el0_sync_compat:
 	kernel_entry 0, 32
 	mrs	x25, esr_el1			// read the syndrome register
-	lsr	x24, x25, #ESR_EL1_EC_SHIFT	// exception class
-	cmp	x24, #ESR_EL1_EC_SVC32		// SVC in 32-bit state
+	lsr	x24, x25, #ESR_ELx_EC_SHIFT	// exception class
+	cmp	x24, #ESR_ELx_EC_SVC32		// SVC in 32-bit state
 	b.eq	el0_svc_compat
-	cmp	x24, #ESR_EL1_EC_DABT_EL0	// data abort in EL0
+	cmp	x24, #ESR_ELx_EC_DABT_LOW	// data abort in EL0
 	b.eq	el0_da
-	cmp	x24, #ESR_EL1_EC_IABT_EL0	// instruction abort in EL0
+	cmp	x24, #ESR_ELx_EC_IABT_LOW	// instruction abort in EL0
 	b.eq	el0_ia
-	cmp	x24, #ESR_EL1_EC_FP_ASIMD	// FP/ASIMD access
+	cmp	x24, #ESR_ELx_EC_FP_ASIMD	// FP/ASIMD access
 	b.eq	el0_fpsimd_acc
-	cmp	x24, #ESR_EL1_EC_FP_EXC32	// FP/ASIMD exception
+	cmp	x24, #ESR_ELx_EC_FP_EXC32	// FP/ASIMD exception
 	b.eq	el0_fpsimd_exc
-	cmp	x24, #ESR_EL1_EC_UNKNOWN	// unknown exception in EL0
+	cmp	x24, #ESR_ELx_EC_UNKNOWN	// unknown exception in EL0
 	b.eq	el0_undef
-	cmp	x24, #ESR_EL1_EC_CP15_32	// CP15 MRC/MCR trap
+	cmp	x24, #ESR_ELx_EC_CP15_32	// CP15 MRC/MCR trap
 	b.eq	el0_undef
-	cmp	x24, #ESR_EL1_EC_CP15_64	// CP15 MRRC/MCRR trap
+	cmp	x24, #ESR_ELx_EC_CP15_64	// CP15 MRRC/MCRR trap
 	b.eq	el0_undef
-	cmp	x24, #ESR_EL1_EC_CP14_MR	// CP14 MRC/MCR trap
+	cmp	x24, #ESR_ELx_EC_CP14_MR	// CP14 MRC/MCR trap
 	b.eq	el0_undef
-	cmp	x24, #ESR_EL1_EC_CP14_LS	// CP14 LDC/STC trap
+	cmp	x24, #ESR_ELx_EC_CP14_LS	// CP14 LDC/STC trap
 	b.eq	el0_undef
-	cmp	x24, #ESR_EL1_EC_CP14_64	// CP14 MRRC/MCRR trap
+	cmp	x24, #ESR_ELx_EC_CP14_64	// CP14 MRRC/MCRR trap
 	b.eq	el0_undef
-	cmp	x24, #ESR_EL1_EC_BREAKPT_EL0	// debug exception in EL0
+	cmp	x24, #ESR_ELx_EC_BREAKPT_LOW	// debug exception in EL0
 	b.ge	el0_dbg
 	b	el0_inv
 el0_svc_compat:
diff --git a/arch/arm64/kernel/signal32.c b/arch/arm64/kernel/signal32.c
index 5a1ba6e..192d900 100644
--- a/arch/arm64/kernel/signal32.c
+++ b/arch/arm64/kernel/signal32.c
@@ -501,7 +501,7 @@ static int compat_setup_sigframe(struct compat_sigframe __user *sf,
 
 	__put_user_error((compat_ulong_t)0, &sf->uc.uc_mcontext.trap_no, err);
 	/* set the compat FSR WnR */
-	__put_user_error(!!(current->thread.fault_code & ESR_EL1_WRITE) <<
+	__put_user_error(!!(current->thread.fault_code & ESR_ELx_WNR) <<
 			 FSR_WRITE_SHIFT, &sf->uc.uc_mcontext.error_code, err);
 	__put_user_error(current->thread.fault_address, &sf->uc.uc_mcontext.fault_address, err);
 	__put_user_error(set->sig[0], &sf->uc.uc_mcontext.oldmask, err);
diff --git a/arch/arm64/mm/fault.c b/arch/arm64/mm/fault.c
index c11cd27..96da131 100644
--- a/arch/arm64/mm/fault.c
+++ b/arch/arm64/mm/fault.c
@@ -219,7 +219,7 @@ static int __kprobes do_page_fault(unsigned long addr, unsigned int esr,
 
 	if (esr & ESR_LNX_EXEC) {
 		vm_flags = VM_EXEC;
-	} else if ((esr & ESR_EL1_WRITE) && !(esr & ESR_EL1_CM)) {
+	} else if ((esr & ESR_ELx_WNR) && !(esr & ESR_ELx_CM)) {
 		vm_flags = VM_WRITE;
 		mm_flags |= FAULT_FLAG_WRITE;
 	}
-- 
1.9.1


* [PATCH 3/7] arm64: remove ESR_EL1_* macros
  2015-01-07 12:04 [PATCH 0/7] arm64/kvm: common ESR_ELx definitions and decoding Mark Rutland
  2015-01-07 12:04 ` [PATCH 1/7] arm64: introduce common ESR_ELx_* definitions Mark Rutland
  2015-01-07 12:04 ` [PATCH 2/7] arm64: move to ESR_ELx macros Mark Rutland
@ 2015-01-07 12:04 ` Mark Rutland
  2015-01-11 18:08   ` Christoffer Dall
  2015-01-07 12:04 ` [PATCH 4/7] arm64: decode ESR_ELx.EC when reporting exceptions Mark Rutland
                   ` (3 subsequent siblings)
  6 siblings, 1 reply; 23+ messages in thread
From: Mark Rutland @ 2015-01-07 12:04 UTC (permalink / raw)
  To: linux-arm-kernel

Now that all users have been moved over to the common ESR_ELx_* macros,
remove the redundant ESR_EL1 macros.

There should be no functional change as a result of this patch.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Christoffer Dall <christoffer.dall@linaro.org>
Cc: Marc Zyngier <marc.zyngier@arm.com>
Cc: Peter Maydell <peter.maydell@linaro.org>
Cc: Will Deacon <will.deacon@arm.com>
---
 arch/arm64/include/asm/esr.h | 36 ------------------------------------
 1 file changed, 36 deletions(-)

diff --git a/arch/arm64/include/asm/esr.h b/arch/arm64/include/asm/esr.h
index eaee379..19492e1 100644
--- a/arch/arm64/include/asm/esr.h
+++ b/arch/arm64/include/asm/esr.h
@@ -18,42 +18,6 @@
 #ifndef __ASM_ESR_H
 #define __ASM_ESR_H
 
-#define ESR_EL1_WRITE		(1 << 6)
-#define ESR_EL1_CM		(1 << 8)
-#define ESR_EL1_IL		(1 << 25)
-
-#define ESR_EL1_EC_SHIFT	(26)
-#define ESR_EL1_EC_UNKNOWN	(0x00)
-#define ESR_EL1_EC_WFI		(0x01)
-#define ESR_EL1_EC_CP15_32	(0x03)
-#define ESR_EL1_EC_CP15_64	(0x04)
-#define ESR_EL1_EC_CP14_MR	(0x05)
-#define ESR_EL1_EC_CP14_LS	(0x06)
-#define ESR_EL1_EC_FP_ASIMD	(0x07)
-#define ESR_EL1_EC_CP10_ID	(0x08)
-#define ESR_EL1_EC_CP14_64	(0x0C)
-#define ESR_EL1_EC_ILL_ISS	(0x0E)
-#define ESR_EL1_EC_SVC32	(0x11)
-#define ESR_EL1_EC_SVC64	(0x15)
-#define ESR_EL1_EC_SYS64	(0x18)
-#define ESR_EL1_EC_IABT_EL0	(0x20)
-#define ESR_EL1_EC_IABT_EL1	(0x21)
-#define ESR_EL1_EC_PC_ALIGN	(0x22)
-#define ESR_EL1_EC_DABT_EL0	(0x24)
-#define ESR_EL1_EC_DABT_EL1	(0x25)
-#define ESR_EL1_EC_SP_ALIGN	(0x26)
-#define ESR_EL1_EC_FP_EXC32	(0x28)
-#define ESR_EL1_EC_FP_EXC64	(0x2C)
-#define ESR_EL1_EC_SERROR	(0x2F)
-#define ESR_EL1_EC_BREAKPT_EL0	(0x30)
-#define ESR_EL1_EC_BREAKPT_EL1	(0x31)
-#define ESR_EL1_EC_SOFTSTP_EL0	(0x32)
-#define ESR_EL1_EC_SOFTSTP_EL1	(0x33)
-#define ESR_EL1_EC_WATCHPT_EL0	(0x34)
-#define ESR_EL1_EC_WATCHPT_EL1	(0x35)
-#define ESR_EL1_EC_BKPT32	(0x38)
-#define ESR_EL1_EC_BRK64	(0x3C)
-
 #define ESR_ELx_EC_UNKNOWN	(0x00)
 #define ESR_ELx_EC_WFx		(0x01)
 /* Unallocated EC: 0x02 */
-- 
1.9.1


* [PATCH 4/7] arm64: decode ESR_ELx.EC when reporting exceptions
  2015-01-07 12:04 [PATCH 0/7] arm64/kvm: common ESR_ELx definitions and decoding Mark Rutland
                   ` (2 preceding siblings ...)
  2015-01-07 12:04 ` [PATCH 3/7] arm64: remove ESR_EL1_* macros Mark Rutland
@ 2015-01-07 12:04 ` Mark Rutland
  2015-01-11 18:22   ` Christoffer Dall
  2015-01-07 12:04 ` [PATCH 5/7] arm64: kvm: move to ESR_ELx macros Mark Rutland
                   ` (2 subsequent siblings)
  6 siblings, 1 reply; 23+ messages in thread
From: Mark Rutland @ 2015-01-07 12:04 UTC (permalink / raw)
  To: linux-arm-kernel

To aid the developer when something triggers an unexpected exception,
decode the ESR_ELx.EC field when logging an ESR_ELx value. This doesn't
tell the developer the specifics of the exception encoded in the
remaining IL and ISS bits, but it can be helpful to distinguish between
exception classes (e.g. SError and a data abort) without having to
manually decode the field, which can be tiresome.
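
For instance (illustrative value only), an ESR_ELx value of 0x96000045 has
EC = 0x96000045 >> 26 = 0x25, which is reported as "DABT (current EL)",
leaving only the ISS bits to be decoded by hand.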

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Christoffer Dall <christoffer.dall@linaro.org>
Cc: Marc Zyngier <marc.zyngier@arm.com>
Cc: Peter Maydell <peter.maydell@linaro.org>
Cc: Will Deacon <will.deacon@arm.com>
---
 arch/arm64/include/asm/esr.h |  6 ++++++
 arch/arm64/kernel/traps.c    | 50 ++++++++++++++++++++++++++++++++++++++++++--
 2 files changed, 54 insertions(+), 2 deletions(-)

diff --git a/arch/arm64/include/asm/esr.h b/arch/arm64/include/asm/esr.h
index 19492e1..7669a7a 100644
--- a/arch/arm64/include/asm/esr.h
+++ b/arch/arm64/include/asm/esr.h
@@ -96,4 +96,10 @@
 #define ESR_ELx_COND_MASK	(UL(0xF) << ESR_ELx_COND_SHIFT)
 #define ESR_ELx_WFx_ISS_WFE	(UL(1) << 0)
 
+#ifndef __ASSEMBLY__
+#include <asm/types.h>
+
+const char *esr_get_class_string(u32 esr);
+#endif /* __ASSEMBLY */
+
 #endif /* __ASM_ESR_H */
diff --git a/arch/arm64/kernel/traps.c b/arch/arm64/kernel/traps.c
index 0a801e3..1ef2940 100644
--- a/arch/arm64/kernel/traps.c
+++ b/arch/arm64/kernel/traps.c
@@ -33,6 +33,7 @@
 
 #include <asm/atomic.h>
 #include <asm/debug-monitors.h>
+#include <asm/esr.h>
 #include <asm/traps.h>
 #include <asm/stacktrace.h>
 #include <asm/exception.h>
@@ -373,6 +374,51 @@ asmlinkage long do_ni_syscall(struct pt_regs *regs)
 	return sys_ni_syscall();
 }
 
+static const char *esr_class_str[] = {
+	[0 ... ESR_ELx_EC_MAX]		= "UNRECOGNIZED EC",
+	[ESR_ELx_EC_UNKNOWN]		= "Unknown/Uncategorized",
+	[ESR_ELx_EC_WFx]		= "WFI/WFE",
+	[ESR_ELx_EC_CP15_32]		= "CP15 MCR/MRC",
+	[ESR_ELx_EC_CP15_64]		= "CP15 MCRR/MRRC",
+	[ESR_ELx_EC_CP14_MR]		= "CP14 MCR/MRC",
+	[ESR_ELx_EC_CP14_LS]		= "CP14 LDC/STC",
+	[ESR_ELx_EC_FP_ASIMD]		= "ASIMD",
+	[ESR_ELx_EC_CP10_ID]		= "CP10 MRC/VMRS",
+	[ESR_ELx_EC_CP14_64]		= "CP14 MCRR/MRRC",
+	[ESR_ELx_EC_ILL]		= "PSTATE.IL",
+	[ESR_ELx_EC_SVC32]		= "SVC (AArch32)",
+	[ESR_ELx_EC_HVC32]		= "HVC (AArch32)",
+	[ESR_ELx_EC_SMC32]		= "SMC (AArch32)",
+	[ESR_ELx_EC_SVC64]		= "SVC (AArch64)",
+	[ESR_ELx_EC_HVC64]		= "HVC (AArch64)",
+	[ESR_ELx_EC_SMC64]		= "SMC (AArch64)",
+	[ESR_ELx_EC_SYS64]		= "MSR/MRS (AArch64)",
+	[ESR_ELx_EC_IMP_DEF]		= "EL3 IMP DEF",
+	[ESR_ELx_EC_IABT_LOW]		= "IABT (lower EL)",
+	[ESR_ELx_EC_IABT_CUR]		= "IABT (current EL)",
+	[ESR_ELx_EC_PC_ALIGN]		= "PC Alignment",
+	[ESR_ELx_EC_DABT_LOW]		= "DABT (lower EL)",
+	[ESR_ELx_EC_DABT_CUR]		= "DABT (current EL)",
+	[ESR_ELx_EC_SP_ALIGN]		= "SP Alignment",
+	[ESR_ELx_EC_FP_EXC32]		= "FP (AArch32)",
+	[ESR_ELx_EC_FP_EXC64]		= "FP (AArch64)",
+	[ESR_ELx_EC_SERROR]		= "SError",
+	[ESR_ELx_EC_BREAKPT_LOW]	= "Breakpoint (lower EL)",
+	[ESR_ELx_EC_BREAKPT_CUR]	= "Breakpoint (current EL)",
+	[ESR_ELx_EC_SOFTSTP_LOW]	= "Software Step (lower EL)",
+	[ESR_ELx_EC_SOFTSTP_CUR]	= "Software Step (current EL)",
+	[ESR_ELx_EC_WATCHPT_LOW]	= "Watchpoint (lower EL)",
+	[ESR_ELx_EC_WATCHPT_CUR]	= "Watchpoint (current EL)",
+	[ESR_ELx_EC_BKPT32]		= "BKPT (AArch32)",
+	[ESR_ELx_EC_VECTOR32]		= "Vector catch (AArch32)",
+	[ESR_ELx_EC_BRK64]		= "BRK (AArch64)",
+};
+
+const char *esr_get_class_string(u32 esr)
+{
+	return esr_class_str[esr >> ESR_ELx_EC_SHIFT];
+}
+
 /*
  * bad_mode handles the impossible case in the exception vector.
  */
@@ -382,8 +428,8 @@ asmlinkage void bad_mode(struct pt_regs *regs, int reason, unsigned int esr)
 	void __user *pc = (void __user *)instruction_pointer(regs);
 	console_verbose();
 
-	pr_crit("Bad mode in %s handler detected, code 0x%08x\n",
-		handler[reason], esr);
+	pr_crit("Bad mode in %s handler detected, code 0x%08x -- %s\n",
+		handler[reason], esr, esr_get_class_string(esr));
 	__show_regs(regs);
 
 	info.si_signo = SIGILL;
-- 
1.9.1


* [PATCH 5/7] arm64: kvm: move to ESR_ELx macros
  2015-01-07 12:04 [PATCH 0/7] arm64/kvm: common ESR_ELx definitions and decoding Mark Rutland
                   ` (3 preceding siblings ...)
  2015-01-07 12:04 ` [PATCH 4/7] arm64: decode ESR_ELx.EC when reporting exceptions Mark Rutland
@ 2015-01-07 12:04 ` Mark Rutland
  2015-01-11 18:27   ` Christoffer Dall
  2015-01-07 12:04 ` [PATCH 6/7] arm64: kvm: remove ESR_EL2_* macros Mark Rutland
  2015-01-07 12:04 ` [PATCH 7/7] arm64: kvm: decode ESR_ELx.EC when reporting exceptions Mark Rutland
  6 siblings, 1 reply; 23+ messages in thread
From: Mark Rutland @ 2015-01-07 12:04 UTC (permalink / raw)
  To: linux-arm-kernel

Now that we have common ESR_ELx_* macros, make use of them in the arm64
KVM code. Adding #include <asm/esr.h> to the affected files highlighted
badly ordered (i.e. non-alphabetical) include lists; these are reordered
alphabetically.

There should be no functional change as a result of this patch.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Christoffer Dall <christoffer.dall@linaro.org>
Cc: Marc Zyngier <marc.zyngier@arm.com>
Cc: Peter Maydell <peter.maydell@linaro.org>
Cc: Will Deacon <will.deacon@arm.com>
---
 arch/arm64/include/asm/kvm_emulate.h | 28 +++++++++++++++-------------
 arch/arm64/kvm/emulate.c             |  5 +++--
 arch/arm64/kvm/handle_exit.c         | 32 +++++++++++++++++---------------
 arch/arm64/kvm/hyp.S                 | 17 +++++++++--------
 arch/arm64/kvm/inject_fault.c        | 14 +++++++-------
 arch/arm64/kvm/sys_regs.c            | 23 +++++++++++++----------
 6 files changed, 64 insertions(+), 55 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_emulate.h b/arch/arm64/include/asm/kvm_emulate.h
index 8127e45..6a9fa89 100644
--- a/arch/arm64/include/asm/kvm_emulate.h
+++ b/arch/arm64/include/asm/kvm_emulate.h
@@ -23,8 +23,10 @@
 #define __ARM64_KVM_EMULATE_H__
 
 #include <linux/kvm_host.h>
-#include <asm/kvm_asm.h>
+
+#include <asm/esr.h>
 #include <asm/kvm_arm.h>
+#include <asm/kvm_asm.h>
 #include <asm/kvm_mmio.h>
 #include <asm/ptrace.h>
 
@@ -128,63 +130,63 @@ static inline phys_addr_t kvm_vcpu_get_fault_ipa(const struct kvm_vcpu *vcpu)
 
 static inline bool kvm_vcpu_dabt_isvalid(const struct kvm_vcpu *vcpu)
 {
-	return !!(kvm_vcpu_get_hsr(vcpu) & ESR_EL2_ISV);
+	return !!(kvm_vcpu_get_hsr(vcpu) & ESR_ELx_ISV);
 }
 
 static inline bool kvm_vcpu_dabt_iswrite(const struct kvm_vcpu *vcpu)
 {
-	return !!(kvm_vcpu_get_hsr(vcpu) & ESR_EL2_WNR);
+	return !!(kvm_vcpu_get_hsr(vcpu) & ESR_ELx_WNR);
 }
 
 static inline bool kvm_vcpu_dabt_issext(const struct kvm_vcpu *vcpu)
 {
-	return !!(kvm_vcpu_get_hsr(vcpu) & ESR_EL2_SSE);
+	return !!(kvm_vcpu_get_hsr(vcpu) & ESR_ELx_SSE);
 }
 
 static inline int kvm_vcpu_dabt_get_rd(const struct kvm_vcpu *vcpu)
 {
-	return (kvm_vcpu_get_hsr(vcpu) & ESR_EL2_SRT_MASK) >> ESR_EL2_SRT_SHIFT;
+	return (kvm_vcpu_get_hsr(vcpu) & ESR_ELx_SRT_MASK) >> ESR_ELx_SRT_SHIFT;
 }
 
 static inline bool kvm_vcpu_dabt_isextabt(const struct kvm_vcpu *vcpu)
 {
-	return !!(kvm_vcpu_get_hsr(vcpu) & ESR_EL2_EA);
+	return !!(kvm_vcpu_get_hsr(vcpu) & ESR_ELx_EA);
 }
 
 static inline bool kvm_vcpu_dabt_iss1tw(const struct kvm_vcpu *vcpu)
 {
-	return !!(kvm_vcpu_get_hsr(vcpu) & ESR_EL2_S1PTW);
+	return !!(kvm_vcpu_get_hsr(vcpu) & ESR_ELx_S1PTW);
 }
 
 static inline int kvm_vcpu_dabt_get_as(const struct kvm_vcpu *vcpu)
 {
-	return 1 << ((kvm_vcpu_get_hsr(vcpu) & ESR_EL2_SAS) >> ESR_EL2_SAS_SHIFT);
+	return 1 << !!(kvm_vcpu_get_hsr(vcpu) & ESR_ELx_SAS);
 }
 
 /* This one is not specific to Data Abort */
 static inline bool kvm_vcpu_trap_il_is32bit(const struct kvm_vcpu *vcpu)
 {
-	return !!(kvm_vcpu_get_hsr(vcpu) & ESR_EL2_IL);
+	return !!(kvm_vcpu_get_hsr(vcpu) & ESR_ELx_IL);
 }
 
 static inline u8 kvm_vcpu_trap_get_class(const struct kvm_vcpu *vcpu)
 {
-	return kvm_vcpu_get_hsr(vcpu) >> ESR_EL2_EC_SHIFT;
+	return kvm_vcpu_get_hsr(vcpu) >> ESR_ELx_EC_SHIFT;
 }
 
 static inline bool kvm_vcpu_trap_is_iabt(const struct kvm_vcpu *vcpu)
 {
-	return kvm_vcpu_trap_get_class(vcpu) == ESR_EL2_EC_IABT;
+	return kvm_vcpu_trap_get_class(vcpu) == ESR_ELx_EC_IABT_LOW;
 }
 
 static inline u8 kvm_vcpu_trap_get_fault(const struct kvm_vcpu *vcpu)
 {
-	return kvm_vcpu_get_hsr(vcpu) & ESR_EL2_FSC;
+	return kvm_vcpu_get_hsr(vcpu) & ESR_ELx_FSC;
 }
 
 static inline u8 kvm_vcpu_trap_get_fault_type(const struct kvm_vcpu *vcpu)
 {
-	return kvm_vcpu_get_hsr(vcpu) & ESR_EL2_FSC_TYPE;
+	return kvm_vcpu_get_hsr(vcpu) & ESR_ELx_FSC_TYPE;
 }
 
 static inline unsigned long kvm_vcpu_get_mpidr(struct kvm_vcpu *vcpu)
diff --git a/arch/arm64/kvm/emulate.c b/arch/arm64/kvm/emulate.c
index 124418d..f87d8fb 100644
--- a/arch/arm64/kvm/emulate.c
+++ b/arch/arm64/kvm/emulate.c
@@ -22,6 +22,7 @@
  */
 
 #include <linux/kvm_host.h>
+#include <asm/esr.h>
 #include <asm/kvm_emulate.h>
 
 /*
@@ -55,8 +56,8 @@ static int kvm_vcpu_get_condition(const struct kvm_vcpu *vcpu)
 {
 	u32 esr = kvm_vcpu_get_hsr(vcpu);
 
-	if (esr & ESR_EL2_CV)
-		return (esr & ESR_EL2_COND) >> ESR_EL2_COND_SHIFT;
+	if (esr & ESR_ELx_CV)
+		return (esr & ESR_ELx_COND_MASK) >> ESR_ELx_COND_SHIFT;
 
 	return -1;
 }
diff --git a/arch/arm64/kvm/handle_exit.c b/arch/arm64/kvm/handle_exit.c
index 34b8bd0..bcbc923 100644
--- a/arch/arm64/kvm/handle_exit.c
+++ b/arch/arm64/kvm/handle_exit.c
@@ -21,8 +21,10 @@
 
 #include <linux/kvm.h>
 #include <linux/kvm_host.h>
-#include <asm/kvm_emulate.h>
+
+#include <asm/esr.h>
 #include <asm/kvm_coproc.h>
+#include <asm/kvm_emulate.h>
 #include <asm/kvm_mmu.h>
 #include <asm/kvm_psci.h>
 
@@ -61,7 +63,7 @@ static int handle_smc(struct kvm_vcpu *vcpu, struct kvm_run *run)
  */
 static int kvm_handle_wfx(struct kvm_vcpu *vcpu, struct kvm_run *run)
 {
-	if (kvm_vcpu_get_hsr(vcpu) & ESR_EL2_EC_WFI_ISS_WFE)
+	if (kvm_vcpu_get_hsr(vcpu) & ESR_ELx_WFx_ISS_WFE)
 		kvm_vcpu_on_spin(vcpu);
 	else
 		kvm_vcpu_block(vcpu);
@@ -72,19 +74,19 @@ static int kvm_handle_wfx(struct kvm_vcpu *vcpu, struct kvm_run *run)
 }
 
 static exit_handle_fn arm_exit_handlers[] = {
-	[ESR_EL2_EC_WFI]	= kvm_handle_wfx,
-	[ESR_EL2_EC_CP15_32]	= kvm_handle_cp15_32,
-	[ESR_EL2_EC_CP15_64]	= kvm_handle_cp15_64,
-	[ESR_EL2_EC_CP14_MR]	= kvm_handle_cp14_32,
-	[ESR_EL2_EC_CP14_LS]	= kvm_handle_cp14_load_store,
-	[ESR_EL2_EC_CP14_64]	= kvm_handle_cp14_64,
-	[ESR_EL2_EC_HVC32]	= handle_hvc,
-	[ESR_EL2_EC_SMC32]	= handle_smc,
-	[ESR_EL2_EC_HVC64]	= handle_hvc,
-	[ESR_EL2_EC_SMC64]	= handle_smc,
-	[ESR_EL2_EC_SYS64]	= kvm_handle_sys_reg,
-	[ESR_EL2_EC_IABT]	= kvm_handle_guest_abort,
-	[ESR_EL2_EC_DABT]	= kvm_handle_guest_abort,
+	[ESR_ELx_EC_WFx]	= kvm_handle_wfx,
+	[ESR_ELx_EC_CP15_32]	= kvm_handle_cp15_32,
+	[ESR_ELx_EC_CP15_64]	= kvm_handle_cp15_64,
+	[ESR_ELx_EC_CP14_MR]	= kvm_handle_cp14_32,
+	[ESR_ELx_EC_CP14_LS]	= kvm_handle_cp14_load_store,
+	[ESR_ELx_EC_CP14_64]	= kvm_handle_cp14_64,
+	[ESR_ELx_EC_HVC32]	= handle_hvc,
+	[ESR_ELx_EC_SMC32]	= handle_smc,
+	[ESR_ELx_EC_HVC64]	= handle_hvc,
+	[ESR_ELx_EC_SMC64]	= handle_smc,
+	[ESR_ELx_EC_SYS64]	= kvm_handle_sys_reg,
+	[ESR_ELx_EC_IABT_LOW]	= kvm_handle_guest_abort,
+	[ESR_ELx_EC_DABT_LOW]	= kvm_handle_guest_abort,
 };
 
 static exit_handle_fn kvm_get_exit_handler(struct kvm_vcpu *vcpu)
diff --git a/arch/arm64/kvm/hyp.S b/arch/arm64/kvm/hyp.S
index fbe909f..c0d8202 100644
--- a/arch/arm64/kvm/hyp.S
+++ b/arch/arm64/kvm/hyp.S
@@ -17,15 +17,16 @@
 
 #include <linux/linkage.h>
 
-#include <asm/assembler.h>
-#include <asm/memory.h>
 #include <asm/asm-offsets.h>
+#include <asm/assembler.h>
 #include <asm/debug-monitors.h>
+#include <asm/esr.h>
 #include <asm/fpsimdmacros.h>
 #include <asm/kvm.h>
-#include <asm/kvm_asm.h>
 #include <asm/kvm_arm.h>
+#include <asm/kvm_asm.h>
 #include <asm/kvm_mmu.h>
+#include <asm/memory.h>
 
 #define CPU_GP_REG_OFFSET(x)	(CPU_GP_REGS + x)
 #define CPU_XREG_OFFSET(x)	CPU_GP_REG_OFFSET(CPU_USER_PT_REGS + 8*x)
@@ -1140,9 +1141,9 @@ el1_sync:					// Guest trapped into EL2
 	push	x2, x3
 
 	mrs	x1, esr_el2
-	lsr	x2, x1, #ESR_EL2_EC_SHIFT
+	lsr	x2, x1, #ESR_ELx_EC_SHIFT
 
-	cmp	x2, #ESR_EL2_EC_HVC64
+	cmp	x2, #ESR_ELx_EC_HVC64
 	b.ne	el1_trap
 
 	mrs	x3, vttbr_el2			// If vttbr is valid, the 64bit guest
@@ -1177,13 +1178,13 @@ el1_trap:
 	 * x1: ESR
 	 * x2: ESR_EC
 	 */
-	cmp	x2, #ESR_EL2_EC_DABT
-	mov	x0, #ESR_EL2_EC_IABT
+	cmp	x2, #ESR_ELx_EC_DABT_LOW
+	mov	x0, #ESR_ELx_EC_IABT_LOW
 	ccmp	x2, x0, #4, ne
 	b.ne	1f		// Not an abort we care about
 
 	/* This is an abort. Check for permission fault */
-	and	x2, x1, #ESR_EL2_FSC_TYPE
+	and	x2, x1, #ESR_ELx_FSC_TYPE
 	cmp	x2, #FSC_PERM
 	b.ne	1f		// Not a permission fault
 
diff --git a/arch/arm64/kvm/inject_fault.c b/arch/arm64/kvm/inject_fault.c
index 81a02a8..f02530e 100644
--- a/arch/arm64/kvm/inject_fault.c
+++ b/arch/arm64/kvm/inject_fault.c
@@ -118,27 +118,27 @@ static void inject_abt64(struct kvm_vcpu *vcpu, bool is_iabt, unsigned long addr
 	 * instruction set. Report an external synchronous abort.
 	 */
 	if (kvm_vcpu_trap_il_is32bit(vcpu))
-		esr |= ESR_EL1_IL;
+		esr |= ESR_ELx_IL;
 
 	/*
 	 * Here, the guest runs in AArch64 mode when in EL1. If we get
 	 * an AArch32 fault, it means we managed to trap an EL0 fault.
 	 */
 	if (is_aarch32 || (cpsr & PSR_MODE_MASK) == PSR_MODE_EL0t)
-		esr |= (ESR_EL1_EC_IABT_EL0 << ESR_EL1_EC_SHIFT);
+		esr |= (ESR_ELx_EC_IABT_LOW << ESR_ELx_EC_SHIFT);
 	else
-		esr |= (ESR_EL1_EC_IABT_EL1 << ESR_EL1_EC_SHIFT);
+		esr |= (ESR_ELx_EC_IABT_CUR << ESR_ELx_EC_SHIFT);
 
 	if (!is_iabt)
-		esr |= ESR_EL1_EC_DABT_EL0;
+		esr |= ESR_ELx_EC_DABT_LOW;
 
-	vcpu_sys_reg(vcpu, ESR_EL1) = esr | ESR_EL2_EC_xABT_xFSR_EXTABT;
+	vcpu_sys_reg(vcpu, ESR_EL1) = esr | ESR_ELx_FSC_EXTABT;
 }
 
 static void inject_undef64(struct kvm_vcpu *vcpu)
 {
 	unsigned long cpsr = *vcpu_cpsr(vcpu);
-	u32 esr = (ESR_EL1_EC_UNKNOWN << ESR_EL1_EC_SHIFT);
+	u32 esr = (ESR_ELx_EC_UNKNOWN << ESR_ELx_EC_SHIFT);
 
 	*vcpu_spsr(vcpu) = cpsr;
 	*vcpu_elr_el1(vcpu) = *vcpu_pc(vcpu);
@@ -151,7 +151,7 @@ static void inject_undef64(struct kvm_vcpu *vcpu)
 	 * set.
 	 */
 	if (kvm_vcpu_trap_il_is32bit(vcpu))
-		esr |= ESR_EL1_IL;
+		esr |= ESR_ELx_IL;
 
 	vcpu_sys_reg(vcpu, ESR_EL1) = esr;
 }
diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index 3d7c2df..6b859d7 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -20,17 +20,20 @@
  * along with this program.  If not, see <http://www.gnu.org/licenses/>.
  */
 
-#include <linux/mm.h>
 #include <linux/kvm_host.h>
+#include <linux/mm.h>
 #include <linux/uaccess.h>
-#include <asm/kvm_arm.h>
-#include <asm/kvm_host.h>
-#include <asm/kvm_emulate.h>
-#include <asm/kvm_coproc.h>
-#include <asm/kvm_mmu.h>
+
 #include <asm/cacheflush.h>
 #include <asm/cputype.h>
 #include <asm/debug-monitors.h>
+#include <asm/esr.h>
+#include <asm/kvm_arm.h>
+#include <asm/kvm_coproc.h>
+#include <asm/kvm_emulate.h>
+#include <asm/kvm_host.h>
+#include <asm/kvm_mmu.h>
+
 #include <trace/events/kvm.h>
 
 #include "sys_regs.h"
@@ -815,12 +818,12 @@ static void unhandled_cp_access(struct kvm_vcpu *vcpu,
 	int cp;
 
 	switch(hsr_ec) {
-	case ESR_EL2_EC_CP15_32:
-	case ESR_EL2_EC_CP15_64:
+	case ESR_ELx_EC_CP15_32:
+	case ESR_ELx_EC_CP15_64:
 		cp = 15;
 		break;
-	case ESR_EL2_EC_CP14_MR:
-	case ESR_EL2_EC_CP14_64:
+	case ESR_ELx_EC_CP14_MR:
+	case ESR_ELx_EC_CP14_64:
 		cp = 14;
 		break;
 	default:
-- 
1.9.1


* [PATCH 6/7] arm64: kvm: remove ESR_EL2_* macros
  2015-01-07 12:04 [PATCH 0/7] arm64/kvm: common ESR_ELx definitions and decoding Mark Rutland
                   ` (4 preceding siblings ...)
  2015-01-07 12:04 ` [PATCH 5/7] arm64: kvm: move to ESR_ELx macros Mark Rutland
@ 2015-01-07 12:04 ` Mark Rutland
  2015-01-11 18:27   ` Christoffer Dall
  2015-01-07 12:04 ` [PATCH 7/7] arm64: kvm: decode ESR_ELx.EC when reporting exceptions Mark Rutland
  6 siblings, 1 reply; 23+ messages in thread
From: Mark Rutland @ 2015-01-07 12:04 UTC (permalink / raw)
  To: linux-arm-kernel

Now that all users have been moved over to the common ESR_ELx_* macros,
remove the redundant ESR_EL2 macros. To maintain compatibility with the
fault handling code shared with 32-bit, the FSC_{FAULT,PERM} macros are
retained as aliases for the common ESR_ELx_FSC_{FAULT,PERM} definitions.

There should be no functional change as a result of this patch.
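
As an illustrative sketch (not code added by this patch), the shared
handlers can keep using the short names on either side of the
32-bit/64-bit split:

/*
 * Illustrative only: shared fault handling style, where FSC_FAULT and
 * FSC_PERM now resolve to the common ESR_ELx_FSC_* values.
 */
#include <asm/kvm_emulate.h>

static bool example_fault_needs_handling(struct kvm_vcpu *vcpu)
{
	u8 fault_status = kvm_vcpu_trap_get_fault_type(vcpu);

	return fault_status == FSC_FAULT || fault_status == FSC_PERM;
}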

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Christoffer Dall <christoffer.dall@linaro.org>
Cc: Marc Zyngier <marc.zyngier@arm.com>
Cc: Peter Maydell <peter.maydell@linaro.org>
Cc: Will Deacon <will.deacon@arm.com>
---
 arch/arm64/include/asm/kvm_arm.h | 73 +++-------------------------------------
 1 file changed, 4 insertions(+), 69 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_arm.h b/arch/arm64/include/asm/kvm_arm.h
index 8afb863..94674eb 100644
--- a/arch/arm64/include/asm/kvm_arm.h
+++ b/arch/arm64/include/asm/kvm_arm.h
@@ -18,6 +18,7 @@
 #ifndef __ARM64_KVM_ARM_H__
 #define __ARM64_KVM_ARM_H__
 
+#include <asm/esr.h>
 #include <asm/memory.h>
 #include <asm/types.h>
 
@@ -184,77 +185,11 @@
 #define MDCR_EL2_TPMCR		(1 << 5)
 #define MDCR_EL2_HPMN_MASK	(0x1F)
 
-/* Exception Syndrome Register (ESR) bits */
-#define ESR_EL2_EC_SHIFT	(26)
-#define ESR_EL2_EC		(UL(0x3f) << ESR_EL2_EC_SHIFT)
-#define ESR_EL2_IL		(UL(1) << 25)
-#define ESR_EL2_ISS		(ESR_EL2_IL - 1)
-#define ESR_EL2_ISV_SHIFT	(24)
-#define ESR_EL2_ISV		(UL(1) << ESR_EL2_ISV_SHIFT)
-#define ESR_EL2_SAS_SHIFT	(22)
-#define ESR_EL2_SAS		(UL(3) << ESR_EL2_SAS_SHIFT)
-#define ESR_EL2_SSE		(1 << 21)
-#define ESR_EL2_SRT_SHIFT	(16)
-#define ESR_EL2_SRT_MASK	(0x1f << ESR_EL2_SRT_SHIFT)
-#define ESR_EL2_SF 		(1 << 15)
-#define ESR_EL2_AR 		(1 << 14)
-#define ESR_EL2_EA 		(1 << 9)
-#define ESR_EL2_CM 		(1 << 8)
-#define ESR_EL2_S1PTW 		(1 << 7)
-#define ESR_EL2_WNR		(1 << 6)
-#define ESR_EL2_FSC		(0x3f)
-#define ESR_EL2_FSC_TYPE	(0x3c)
-
-#define ESR_EL2_CV_SHIFT	(24)
-#define ESR_EL2_CV		(UL(1) << ESR_EL2_CV_SHIFT)
-#define ESR_EL2_COND_SHIFT	(20)
-#define ESR_EL2_COND		(UL(0xf) << ESR_EL2_COND_SHIFT)
-
-
-#define FSC_FAULT	(0x04)
-#define FSC_PERM	(0x0c)
+/* For compatibility with fault code shared with 32-bit */
+#define FSC_FAULT	ESR_ELx_FSC_FAULT
+#define FSC_PERM	ESR_ELx_FSC_PERM
 
 /* Hyp Prefetch Fault Address Register (HPFAR/HDFAR) */
 #define HPFAR_MASK	(~UL(0xf))
 
-#define ESR_EL2_EC_UNKNOWN	(0x00)
-#define ESR_EL2_EC_WFI		(0x01)
-#define ESR_EL2_EC_CP15_32	(0x03)
-#define ESR_EL2_EC_CP15_64	(0x04)
-#define ESR_EL2_EC_CP14_MR	(0x05)
-#define ESR_EL2_EC_CP14_LS	(0x06)
-#define ESR_EL2_EC_FP_ASIMD	(0x07)
-#define ESR_EL2_EC_CP10_ID	(0x08)
-#define ESR_EL2_EC_CP14_64	(0x0C)
-#define ESR_EL2_EC_ILL_ISS	(0x0E)
-#define ESR_EL2_EC_SVC32	(0x11)
-#define ESR_EL2_EC_HVC32	(0x12)
-#define ESR_EL2_EC_SMC32	(0x13)
-#define ESR_EL2_EC_SVC64	(0x15)
-#define ESR_EL2_EC_HVC64	(0x16)
-#define ESR_EL2_EC_SMC64	(0x17)
-#define ESR_EL2_EC_SYS64	(0x18)
-#define ESR_EL2_EC_IABT		(0x20)
-#define ESR_EL2_EC_IABT_HYP	(0x21)
-#define ESR_EL2_EC_PC_ALIGN	(0x22)
-#define ESR_EL2_EC_DABT		(0x24)
-#define ESR_EL2_EC_DABT_HYP	(0x25)
-#define ESR_EL2_EC_SP_ALIGN	(0x26)
-#define ESR_EL2_EC_FP_EXC32	(0x28)
-#define ESR_EL2_EC_FP_EXC64	(0x2C)
-#define ESR_EL2_EC_SERROR	(0x2F)
-#define ESR_EL2_EC_BREAKPT	(0x30)
-#define ESR_EL2_EC_BREAKPT_HYP	(0x31)
-#define ESR_EL2_EC_SOFTSTP	(0x32)
-#define ESR_EL2_EC_SOFTSTP_HYP	(0x33)
-#define ESR_EL2_EC_WATCHPT	(0x34)
-#define ESR_EL2_EC_WATCHPT_HYP	(0x35)
-#define ESR_EL2_EC_BKPT32	(0x38)
-#define ESR_EL2_EC_VECTOR32	(0x3A)
-#define ESR_EL2_EC_BRK64	(0x3C)
-
-#define ESR_EL2_EC_xABT_xFSR_EXTABT	0x10
-
-#define ESR_EL2_EC_WFI_ISS_WFE	(1 << 0)
-
 #endif /* __ARM64_KVM_ARM_H__ */
-- 
1.9.1


* [PATCH 7/7] arm64: kvm: decode ESR_ELx.EC when reporting exceptions
  2015-01-07 12:04 [PATCH 0/7] arm64/kvm: common ESR_ELx definitions and decoding Mark Rutland
                   ` (5 preceding siblings ...)
  2015-01-07 12:04 ` [PATCH 6/7] arm64: kvm: remove ESR_EL2_* macros Mark Rutland
@ 2015-01-07 12:04 ` Mark Rutland
  2015-01-11 18:29   ` Christoffer Dall
  6 siblings, 1 reply; 23+ messages in thread
From: Mark Rutland @ 2015-01-07 12:04 UTC (permalink / raw)
  To: linux-arm-kernel

To aid the developer when something triggers an unexpected exception,
decode the ESR_ELx.EC field when logging an ESR_ELx value using the
newly introduced esr_get_class_string. This doesn't tell the developer
the specifics of the exception encoded in the remaining IL and ISS bits,
but it can be helpful to distinguish between exception classes (e.g.
SError and a data abort) without having to manually decode the field,
which can be tiresome.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Christoffer Dall <christoffer.dall@linaro.org>
Cc: Marc Zyngier <marc.zyngier@arm.com>
Cc: Peter Maydell <peter.maydell@linaro.org>
Cc: Will Deacon <will.deacon@arm.com>
---
 arch/arm64/kvm/handle_exit.c | 7 ++++---
 1 file changed, 4 insertions(+), 3 deletions(-)

diff --git a/arch/arm64/kvm/handle_exit.c b/arch/arm64/kvm/handle_exit.c
index bcbc923..29b184a 100644
--- a/arch/arm64/kvm/handle_exit.c
+++ b/arch/arm64/kvm/handle_exit.c
@@ -91,12 +91,13 @@ static exit_handle_fn arm_exit_handlers[] = {
 
 static exit_handle_fn kvm_get_exit_handler(struct kvm_vcpu *vcpu)
 {
-	u8 hsr_ec = kvm_vcpu_trap_get_class(vcpu);
+	u32 hsr = kvm_vcpu_get_hsr(vcpu);
+	u8 hsr_ec = hsr >> ESR_ELx_EC_SHIFT;
 
 	if (hsr_ec >= ARRAY_SIZE(arm_exit_handlers) ||
 	    !arm_exit_handlers[hsr_ec]) {
-		kvm_err("Unknown exception class: hsr: %#08x\n",
-			(unsigned int)kvm_vcpu_get_hsr(vcpu));
+		kvm_err("Unknown exception class: hsr: %#08x -- %s\n",
+			hsr, esr_get_class_string(hsr));
 		BUG();
 	}
 
-- 
1.9.1


* [PATCH 1/7] arm64: introduce common ESR_ELx_* definitions
  2015-01-07 12:04 ` [PATCH 1/7] arm64: introduce common ESR_ELx_* definitions Mark Rutland
@ 2015-01-07 16:23   ` Catalin Marinas
  2015-01-07 16:42     ` Mark Rutland
  2015-01-11 16:59   ` Christoffer Dall
  1 sibling, 1 reply; 23+ messages in thread
From: Catalin Marinas @ 2015-01-07 16:23 UTC (permalink / raw)
  To: linux-arm-kernel

On Wed, Jan 07, 2015 at 12:04:14PM +0000, Mark Rutland wrote:
> Currently we have separate ESR_EL{1,2}_* macros, despite the fact that
> the encodings are common. While encodings are architected to refer to
> the current EL or a lower EL, the macros refer to particular ELs (e.g.
> ESR_EL1_EC_DABT_EL0). Having these duplicate definitions is redundant,
> and their naming is misleading.
> 
> This patch introduces common ESR_ELx_* macros that can be used in all
> cases, in preparation for later patches which will migrate existing
> users over. Some additional cleanups are made in the process:
> 
> * Suffixes for particular exception levels (e.g. _EL0, _EL1) are
>   replaced with more general _LOW and _CUR suffixes, matching the
>   architectural intent.
> 
> * ESR_ELx_EC_WFx, rather than ESR_ELx_EC_WFI is introduced, as this
>   EC encoding covers traps from both WFE and WFI. Similarly,
>   ESR_ELx_WFx_ISS_WFE rather than ESR_ELx_EC_WFI_ISS_WFE is introduced.
> 
> * Multi-bit fields are given consistently named _SHIFT and _MASK macros.
> 
> * UL() is used for compatibility with assembly files.
> 
> * Comments are added for currently unallocated ESR_ELx.EC encodings.
> 
> For fields other than ESR_ELx.EC, macros are only implemented for fields
> for which there is already an ESR_EL{1,2}_* macro.
> 
> Signed-off-by: Mark Rutland <mark.rutland@arm.com>
> Cc: Catalin Marinas <catalin.marinas@arm.com>
> Cc: Christoffer Dall <christoffer.dall@linaro.org>
> Cc: Marc Zyngier <marc.zyngier@arm.com>
> Cc: Peter Maydell <peter.maydell@linaro.org>
> Cc: Will Deacon <will.deacon@arm.com>

I assume this series would go in via the kvm tree. In which case, for
the first two patches in the series:

Acked-by: Catalin Marinas <catalin.marinas@arm.com>


* [PATCH 1/7] arm64: introduce common ESR_ELx_* definitions
  2015-01-07 16:23   ` Catalin Marinas
@ 2015-01-07 16:42     ` Mark Rutland
  2015-01-07 16:57       ` Catalin Marinas
  0 siblings, 1 reply; 23+ messages in thread
From: Mark Rutland @ 2015-01-07 16:42 UTC (permalink / raw)
  To: linux-arm-kernel

On Wed, Jan 07, 2015 at 04:23:20PM +0000, Catalin Marinas wrote:
> On Wed, Jan 07, 2015 at 12:04:14PM +0000, Mark Rutland wrote:
> > Currently we have separate ESR_EL{1,2}_* macros, despite the fact that
> > the encodings are common. While encodings are architected to refer to
> > the current EL or a lower EL, the macros refer to particular ELs (e.g.
> > ESR_EL1_EC_DABT_EL0). Having these duplicate definitions is redundant,
> > and their naming is misleading.
> > 
> > This patch introduces common ESR_ELx_* macros that can be used in all
> > cases, in preparation for later patches which will migrate existing
> > users over. Some additional cleanups are made in the process:
> > 
> > * Suffixes for particular exception levels (e.g. _EL0, _EL1) are
> >   replaced with more general _LOW and _CUR suffixes, matching the
> >   architectural intent.
> > 
> > * ESR_ELx_EC_WFx, rather than ESR_ELx_EC_WFI is introduced, as this
> >   EC encoding covers traps from both WFE and WFI. Similarly,
> >   ESR_ELx_WFx_ISS_WFE rather than ESR_ELx_EC_WFI_ISS_WFE is introduced.
> > 
> > * Multi-bit fields are given consistently named _SHIFT and _MASK macros.
> > 
> > * UL() is used for compatibility with assembly files.
> > 
> > * Comments are added for currently unallocated ESR_ELx.EC encodings.
> > 
> > For fields other than ESR_ELx.EC, macros are only implemented for fields
> > for which there is already an ESR_EL{1,2}_* macro.
> > 
> > Signed-off-by: Mark Rutland <mark.rutland@arm.com>
> > Cc: Catalin Marinas <catalin.marinas@arm.com>
> > Cc: Christoffer Dall <christoffer.dall@linaro.org>
> > Cc: Marc Zyngier <marc.zyngier@arm.com>
> > Cc: Peter Maydell <peter.maydell@linaro.org>
> > Cc: Will Deacon <will.deacon@arm.com>
> 
> I assume this series would go in via the kvm tree. In which case, for
> the first two patches in the series:
> 
> Acked-by: Catalin Marinas <catalin.marinas@arm.com>

Thanks!

Patches 3 and 4 also affect the arm64 core code and shouldn't affect
KVM. Can I get your ack for those too, or do you have any comments?

Mark.


* [PATCH 1/7] arm64: introduce common ESR_ELx_* definitions
  2015-01-07 16:42     ` Mark Rutland
@ 2015-01-07 16:57       ` Catalin Marinas
  2015-01-07 18:49         ` Mark Rutland
  0 siblings, 1 reply; 23+ messages in thread
From: Catalin Marinas @ 2015-01-07 16:57 UTC (permalink / raw)
  To: linux-arm-kernel

On Wed, Jan 07, 2015 at 04:42:04PM +0000, Mark Rutland wrote:
> On Wed, Jan 07, 2015 at 04:23:20PM +0000, Catalin Marinas wrote:
> > On Wed, Jan 07, 2015 at 12:04:14PM +0000, Mark Rutland wrote:
> > > Currently we have separate ESR_EL{1,2}_* macros, despite the fact that
> > > the encodings are common. While encodings are architected to refer to
> > > the current EL or a lower EL, the macros refer to particular ELs (e.g.
> > > ESR_EL1_EC_DABT_EL0). Having these duplicate definitions is redundant,
> > > and their naming is misleading.
> > > 
> > > This patch introduces common ESR_ELx_* macros that can be used in all
> > > cases, in preparation for later patches which will migrate existing
> > > users over. Some additional cleanups are made in the process:
> > > 
> > > * Suffixes for particular exception levels (e.g. _EL0, _EL1) are
> > >   replaced with more general _LOW and _CUR suffixes, matching the
> > >   architectural intent.
> > > 
> > > * ESR_ELx_EC_WFx, rather than ESR_ELx_EC_WFI is introduced, as this
> > >   EC encoding covers traps from both WFE and WFI. Similarly,
> > >   ESR_ELx_WFx_ISS_WFE rather than ESR_ELx_EC_WFI_ISS_WFE is introduced.
> > > 
> > > * Multi-bit fields are given consistently named _SHIFT and _MASK macros.
> > > 
> > > * UL() is used for compatibility with assembly files.
> > > 
> > > * Comments are added for currently unallocated ESR_ELx.EC encodings.
> > > 
> > > For fields other than ESR_ELx.EC, macros are only implemented for fields
> > > for which there is already an ESR_EL{1,2}_* macro.
> > > 
> > > Signed-off-by: Mark Rutland <mark.rutland@arm.com>
> > > Cc: Catalin Marinas <catalin.marinas@arm.com>
> > > Cc: Christoffer Dall <christoffer.dall@linaro.org>
> > > Cc: Marc Zyngier <marc.zyngier@arm.com>
> > > Cc: Peter Maydell <peter.maydell@linaro.org>
> > > Cc: Will Deacon <will.deacon@arm.com>
> > 
> > I assume this series would go in via the kvm tree. In which case, for
> > the first two patches in the series:
> > 
> > Acked-by: Catalin Marinas <catalin.marinas@arm.com>
> 
> Thanks!
> 
> Patches 3 and 4 also affect the arm64 core code and shouldn't affect
> KVM. Can I get your ack for those too, or do you have any comments?

They look fine to me. For the first 4 patches:

Acked-by: Catalin Marinas <catalin.marinas@arm.com>

(BTW, I'll start preparing for the merge window next week; if we get
conflicts, we may need to put the first 4 patches on some common branch,
though I don't expect any.)


* [PATCH 1/7] arm64: introduce common ESR_ELx_* definitions
  2015-01-07 16:57       ` Catalin Marinas
@ 2015-01-07 18:49         ` Mark Rutland
  0 siblings, 0 replies; 23+ messages in thread
From: Mark Rutland @ 2015-01-07 18:49 UTC (permalink / raw)
  To: linux-arm-kernel

On Wed, Jan 07, 2015 at 04:57:53PM +0000, Catalin Marinas wrote:
> On Wed, Jan 07, 2015 at 04:42:04PM +0000, Mark Rutland wrote:
> > On Wed, Jan 07, 2015 at 04:23:20PM +0000, Catalin Marinas wrote:
> > > On Wed, Jan 07, 2015 at 12:04:14PM +0000, Mark Rutland wrote:
> > > > Currently we have separate ESR_EL{1,2}_* macros, despite the fact that
> > > > the encodings are common. While encodings are architected to refer to
> > > > the current EL or a lower EL, the macros refer to particular ELs (e.g.
> > > > ESR_EL1_EC_DABT_EL0). Having these duplicate definitions is redundant,
> > > > and their naming is misleading.
> > > > 
> > > > This patch introduces common ESR_ELx_* macros that can be used in all
> > > > cases, in preparation for later patches which will migrate existing
> > > > users over. Some additional cleanups are made in the process:
> > > > 
> > > > * Suffixes for particular exception levels (e.g. _EL0, _EL1) are
> > > >   replaced with more general _LOW and _CUR suffixes, matching the
> > > >   architectural intent.
> > > > 
> > > > * ESR_ELx_EC_WFx, rather than ESR_ELx_EC_WFI is introduced, as this
> > > >   EC encoding covers traps from both WFE and WFI. Similarly,
> > > >   ESR_ELx_WFx_ISS_WFE rather than ESR_ELx_EC_WFI_ISS_WFE is introduced.
> > > > 
> > > > * Multi-bit fields are given consistently named _SHIFT and _MASK macros.
> > > > 
> > > > * UL() is used for compatibility with assembly files.
> > > > 
> > > > * Comments are added for currently unallocated ESR_ELx.EC encodings.
> > > > 
> > > > For fields other than ESR_ELx.EC, macros are only implemented for fields
> > > > for which there is already an ESR_EL{1,2}_* macro.
> > > > 
> > > > Signed-off-by: Mark Rutland <mark.rutland@arm.com>
> > > > Cc: Catalin Marinas <catalin.marinas@arm.com>
> > > > Cc: Christoffer Dall <christoffer.dall@linaro.org>
> > > > Cc: Marc Zyngier <marc.zyngier@arm.com>
> > > > Cc: Peter Maydell <peter.maydell@linaro.org>
> > > > Cc: Will Deacon <will.deacon@arm.com>
> > > 
> > > I assume this series would go in via the kvm tree. In which case, for
> > > the first two patches in the series:
> > > 
> > > Acked-by: Catalin Marinas <catalin.marinas@arm.com>
> > 
> > Thanks!
> > 
> > Patches 3 and 4 also affect the arm64 core code and shouldn't affect
> > KVM. Can I get your ack for those too, or do you have any comments?
> 
> They look fine to me. For the first 4 patches:
> 
> Acked-by: Catalin Marinas <catalin.marinas@arm.com>

Thanks.

> (BTW, I'll start preparing the merging window next week, if we get
> conflicts, we may need to put the first 4 patches on some common branch;
> I don't expect any though)

Noted. I'll keep an eye out.

Mark.


* [PATCH 1/7] arm64: introduce common ESR_ELx_* definitions
  2015-01-07 12:04 ` [PATCH 1/7] arm64: introduce common ESR_ELx_* definitions Mark Rutland
  2015-01-07 16:23   ` Catalin Marinas
@ 2015-01-11 16:59   ` Christoffer Dall
  2015-01-12 11:20     ` Mark Rutland
  1 sibling, 1 reply; 23+ messages in thread
From: Christoffer Dall @ 2015-01-11 16:59 UTC (permalink / raw)
  To: linux-arm-kernel

On Wed, Jan 07, 2015 at 12:04:14PM +0000, Mark Rutland wrote:
> Currently we have separate ESR_EL{1,2}_* macros, despite the fact that
> the encodings are common. While encodings are architected to refer to
> the current EL or a lower EL, the macros refer to particular ELs (e.g.
> ESR_EL1_EC_DABT_EL0). Having these duplicate definitions is redundant,
> and their naming is misleading.
> 
> This patch introduces common ESR_ELx_* macros that can be used in all
> cases, in preparation for later patches which will migrate existing
> users over. Some additional cleanups are made in the process:
> 
> * Suffixes for particular exception levels (e.g. _EL0, _EL1) are
>   replaced with more general _LOW and _CUR suffixes, matching the
>   architectural intent.
> 
> * ESR_ELx_EC_WFx, rather than ESR_ELx_EC_WFI is introduced, as this
>   EC encoding covers traps from both WFE and WFI. Similarly,
>   ESR_ELx_WFx_ISS_WFE rather than ESR_ELx_EC_WFI_ISS_WFE is introduced.
> 
> * Multi-bit fields are given consistently named _SHIFT and _MASK macros.
> 
> * UL() is used for compatibility with assembly files.
> 
> * Comments are added for currently unallocated ESR_ELx.EC encodings.
> 
> For fields other than ESR_ELx.EC, macros are only implemented for fields
> for which there is already an ESR_EL{1,2}_* macro.
> 
> Signed-off-by: Mark Rutland <mark.rutland@arm.com>
> Cc: Catalin Marinas <catalin.marinas@arm.com>
> Cc: Christoffer Dall <christoffer.dall@linaro.org>
> Cc: Marc Zyngier <marc.zyngier@arm.com>
> Cc: Peter Maydell <peter.maydell@linaro.org>
> Cc: Will Deacon <will.deacon@arm.com>
> ---
>  arch/arm64/include/asm/esr.h | 78 ++++++++++++++++++++++++++++++++++++++++++++
>  1 file changed, 78 insertions(+)
> 
> diff --git a/arch/arm64/include/asm/esr.h b/arch/arm64/include/asm/esr.h
> index 72674f4..eaee379 100644
> --- a/arch/arm64/include/asm/esr.h
> +++ b/arch/arm64/include/asm/esr.h
> @@ -54,4 +54,82 @@
>  #define ESR_EL1_EC_BKPT32	(0x38)
>  #define ESR_EL1_EC_BRK64	(0x3C)
>  
> +#define ESR_ELx_EC_UNKNOWN	(0x00)
> +#define ESR_ELx_EC_WFx		(0x01)
> +/* Unallocated EC: 0x02 */
> +#define ESR_ELx_EC_CP15_32	(0x03)
> +#define ESR_ELx_EC_CP15_64	(0x04)
> +#define ESR_ELx_EC_CP14_MR	(0x05)
> +#define ESR_ELx_EC_CP14_LS	(0x06)
> +#define ESR_ELx_EC_FP_ASIMD	(0x07)
> +#define ESR_ELx_EC_CP10_ID	(0x08)
> +/* Unallocated EC: 0x09 - 0x0B */
> +#define ESR_ELx_EC_CP14_64	(0x0C)
> +/* Unallocated EC: 0x0D */
> +#define ESR_ELx_EC_ILL		(0x0E)
> +/* Unallocated EC: 0x0F - 0x10 */
> +#define ESR_ELx_EC_SVC32	(0x11)
> +#define ESR_ELx_EC_HVC32	(0x12)
> +#define ESR_ELx_EC_SMC32	(0x13)
> +/* Unallocated EC: 0x14 */
> +#define ESR_ELx_EC_SVC64	(0x15)
> +#define ESR_ELx_EC_HVC64	(0x16)
> +#define ESR_ELx_EC_SMC64	(0x17)
> +#define ESR_ELx_EC_SYS64	(0x18)
> +/* Unallocated EC: 0x19 - 0x1E */
> +#define ESR_ELx_EC_IMP_DEF	(0x1f)
> +#define ESR_ELx_EC_IABT_LOW	(0x20)
> +#define ESR_ELx_EC_IABT_CUR	(0x21)
> +#define ESR_ELx_EC_PC_ALIGN	(0x22)
> +/* Unallocated EC: 0x23 */
> +#define ESR_ELx_EC_DABT_LOW	(0x24)
> +#define ESR_ELx_EC_DABT_CUR	(0x25)
> +#define ESR_ELx_EC_SP_ALIGN	(0x26)
> +/* Unallocated EC: 0x27 */
> +#define ESR_ELx_EC_FP_EXC32	(0x28)
> +/* Unallocated EC: 0x29 - 0x2B */
> +#define ESR_ELx_EC_FP_EXC64	(0x2C)
> +/* Unallocated EC: 0x2D - 0x2E */
> +#define ESR_ELx_EC_SERROR	(0x2F)
> +#define ESR_ELx_EC_BREAKPT_LOW	(0x30)
> +#define ESR_ELx_EC_BREAKPT_CUR	(0x31)
> +#define ESR_ELx_EC_SOFTSTP_LOW	(0x32)
> +#define ESR_ELx_EC_SOFTSTP_CUR	(0x33)
> +#define ESR_ELx_EC_WATCHPT_LOW	(0x34)
> +#define ESR_ELx_EC_WATCHPT_CUR	(0x35)
> +/* Unallocated EC: 0x36 - 0x37 */
> +#define ESR_ELx_EC_BKPT32	(0x38)
> +/* Unallocated EC: 0x39 */
> +#define ESR_ELx_EC_VECTOR32	(0x3A)
> +/* Unallocated EC: 0x3B */
> +#define ESR_ELx_EC_BRK64	(0x3C)
> +/* Unallocated EC: 0x3D - 0x3F */
> +#define ESR_ELx_EC_MAX		(0x3F)
> +
> +#define ESR_ELx_EC_SHIFT	(26)
> +#define ESR_ELx_EC_MASK		(UL(0x3F) << ESR_ELx_EC_SHIFT)
> +
> +#define ESR_ELx_IL		(UL(1) << 25)
> +#define ESR_ELx_ISS_MASK	(ESR_ELx_IL - 1)
> +#define ESR_ELx_ISV		(UL(1) << 24)
> +#define ESR_ELx_SAS		(UL(1) << 22)

shouldn't this be UL(3) << 22 (or a mask/shift equivalent declaration)?
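
For reference, the ESR_EL2 versions being removed in patch 6 use a two-bit
mask, so an equivalent mask/shift form here would be something like:

#define ESR_ELx_SAS_SHIFT	(22)
#define ESR_ELx_SAS		(UL(3) << ESR_ELx_SAS_SHIFT)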

> +#define ESR_ELx_SSE		(UL(1) << 21)
> +#define ESR_ELx_SRT_SHIFT	(16)
> +#define ESR_ELx_SRT_MASK	(UL(0x1F) << ESR_ELx_SRT_SHIFT)
> +#define ESR_ELx_SF 		(UL(1) << 15)
> +#define ESR_ELx_AR 		(UL(1) << 14)
> +#define ESR_ELx_EA 		(UL(1) << 9)
> +#define ESR_ELx_CM 		(UL(1) << 8)
> +#define ESR_ELx_S1PTW 		(UL(1) << 7)
> +#define ESR_ELx_WNR		(UL(1) << 6)
> +#define ESR_ELx_FSC		(0x3F)
> +#define ESR_ELx_FSC_TYPE	(0x3C)
> +#define ESR_ELx_FSC_EXTABT	(0x10)
> +#define ESR_ELx_FSC_FAULT	(0x04)
> +#define ESR_ELx_FSC_PERM	(0x0F)

this should be 0x0C right?
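
(For comparison, the FSC_PERM definition being removed from kvm_arm.h in
patch 6 is 0x0c.)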

> +#define ESR_ELx_CV		(UL(1) << 24)
> +#define ESR_ELx_COND_SHIFT	(20)
> +#define ESR_ELx_COND_MASK	(UL(0xF) << ESR_ELx_COND_SHIFT)
> +#define ESR_ELx_WFx_ISS_WFE	(UL(1) << 0)
> +
>  #endif /* __ASM_ESR_H */
> -- 
> 1.9.1
> 


* [PATCH 2/7] arm64: move to ESR_ELx macros
  2015-01-07 12:04 ` [PATCH 2/7] arm64: move to ESR_ELx macros Mark Rutland
@ 2015-01-11 17:01   ` Christoffer Dall
  0 siblings, 0 replies; 23+ messages in thread
From: Christoffer Dall @ 2015-01-11 17:01 UTC (permalink / raw)
  To: linux-arm-kernel

On Wed, Jan 07, 2015 at 12:04:15PM +0000, Mark Rutland wrote:
> Now that we have common ESR_ELx_* macros, move the core arm64 code over
> to them.
> 
> There should be no functional change as a result of this patch.
> 
> Signed-off-by: Mark Rutland <mark.rutland@arm.com>
> Cc: Catalin Marinas <catalin.marinas@arm.com>
> Cc: Christoffer Dall <christoffer.dall@linaro.org>
> Cc: Marc Zyngier <marc.zyngier@arm.com>
> Cc: Peter Maydell <peter.maydell@linaro.org>
> Cc: Will Deacon <will.deacon@arm.com>
> ---
>  arch/arm64/kernel/entry.S    | 64 ++++++++++++++++++++++----------------------
>  arch/arm64/kernel/signal32.c |  2 +-
>  arch/arm64/mm/fault.c        |  2 +-
>  3 files changed, 34 insertions(+), 34 deletions(-)
> 
> diff --git a/arch/arm64/kernel/entry.S b/arch/arm64/kernel/entry.S
> index fd4fa37..02e6af1 100644
> --- a/arch/arm64/kernel/entry.S
> +++ b/arch/arm64/kernel/entry.S
> @@ -269,18 +269,18 @@ ENDPROC(el1_error_invalid)
>  el1_sync:
>  	kernel_entry 1
>  	mrs	x1, esr_el1			// read the syndrome register
> -	lsr	x24, x1, #ESR_EL1_EC_SHIFT	// exception class
> -	cmp	x24, #ESR_EL1_EC_DABT_EL1	// data abort in EL1
> +	lsr	x24, x1, #ESR_ELx_EC_SHIFT	// exception class
> +	cmp	x24, #ESR_ELx_EC_DABT_CUR	// data abort in EL1
>  	b.eq	el1_da
> -	cmp	x24, #ESR_EL1_EC_SYS64		// configurable trap
> +	cmp	x24, #ESR_ELx_EC_SYS64		// configurable trap
>  	b.eq	el1_undef
> -	cmp	x24, #ESR_EL1_EC_SP_ALIGN	// stack alignment exception
> +	cmp	x24, #ESR_ELx_EC_SP_ALIGN	// stack alignment exception
>  	b.eq	el1_sp_pc
> -	cmp	x24, #ESR_EL1_EC_PC_ALIGN	// pc alignment exception
> +	cmp	x24, #ESR_ELx_EC_PC_ALIGN	// pc alignment exception
>  	b.eq	el1_sp_pc
> -	cmp	x24, #ESR_EL1_EC_UNKNOWN	// unknown exception in EL1
> +	cmp	x24, #ESR_ELx_EC_UNKNOWN	// unknown exception in EL1
>  	b.eq	el1_undef
> -	cmp	x24, #ESR_EL1_EC_BREAKPT_EL1	// debug exception in EL1
> +	cmp	x24, #ESR_ELx_EC_BREAKPT_CUR	// debug exception in EL1
>  	b.ge	el1_dbg
>  	b	el1_inv
>  el1_da:
> @@ -318,7 +318,7 @@ el1_dbg:
>  	/*
>  	 * Debug exception handling
>  	 */
> -	cmp	x24, #ESR_EL1_EC_BRK64		// if BRK64
> +	cmp	x24, #ESR_ELx_EC_BRK64		// if BRK64
>  	cinc	x24, x24, eq			// set bit '0'
>  	tbz	x24, #0, el1_inv		// EL1 only
>  	mrs	x0, far_el1
> @@ -375,26 +375,26 @@ el1_preempt:
>  el0_sync:
>  	kernel_entry 0
>  	mrs	x25, esr_el1			// read the syndrome register
> -	lsr	x24, x25, #ESR_EL1_EC_SHIFT	// exception class
> -	cmp	x24, #ESR_EL1_EC_SVC64		// SVC in 64-bit state
> +	lsr	x24, x25, #ESR_ELx_EC_SHIFT	// exception class
> +	cmp	x24, #ESR_ELx_EC_SVC64		// SVC in 64-bit state
>  	b.eq	el0_svc
> -	cmp	x24, #ESR_EL1_EC_DABT_EL0	// data abort in EL0
> +	cmp	x24, #ESR_ELx_EC_DABT_LOW	// data abort in EL0
>  	b.eq	el0_da
> -	cmp	x24, #ESR_EL1_EC_IABT_EL0	// instruction abort in EL0
> +	cmp	x24, #ESR_ELx_EC_IABT_LOW	// instruction abort in EL0
>  	b.eq	el0_ia
> -	cmp	x24, #ESR_EL1_EC_FP_ASIMD	// FP/ASIMD access
> +	cmp	x24, #ESR_ELx_EC_FP_ASIMD	// FP/ASIMD access
>  	b.eq	el0_fpsimd_acc
> -	cmp	x24, #ESR_EL1_EC_FP_EXC64	// FP/ASIMD exception
> +	cmp	x24, #ESR_ELx_EC_FP_EXC64	// FP/ASIMD exception
>  	b.eq	el0_fpsimd_exc
> -	cmp	x24, #ESR_EL1_EC_SYS64		// configurable trap
> +	cmp	x24, #ESR_ELx_EC_SYS64		// configurable trap
>  	b.eq	el0_undef
> -	cmp	x24, #ESR_EL1_EC_SP_ALIGN	// stack alignment exception
> +	cmp	x24, #ESR_ELx_EC_SP_ALIGN	// stack alignment exception
>  	b.eq	el0_sp_pc
> -	cmp	x24, #ESR_EL1_EC_PC_ALIGN	// pc alignment exception
> +	cmp	x24, #ESR_ELx_EC_PC_ALIGN	// pc alignment exception
>  	b.eq	el0_sp_pc
> -	cmp	x24, #ESR_EL1_EC_UNKNOWN	// unknown exception in EL0
> +	cmp	x24, #ESR_ELx_EC_UNKNOWN	// unknown exception in EL0
>  	b.eq	el0_undef
> -	cmp	x24, #ESR_EL1_EC_BREAKPT_EL0	// debug exception in EL0
> +	cmp	x24, #ESR_ELx_EC_BREAKPT_LOW	// debug exception in EL0
>  	b.ge	el0_dbg
>  	b	el0_inv
>  
> @@ -403,30 +403,30 @@ el0_sync:
>  el0_sync_compat:
>  	kernel_entry 0, 32
>  	mrs	x25, esr_el1			// read the syndrome register
> -	lsr	x24, x25, #ESR_EL1_EC_SHIFT	// exception class
> -	cmp	x24, #ESR_EL1_EC_SVC32		// SVC in 32-bit state
> +	lsr	x24, x25, #ESR_ELx_EC_SHIFT	// exception class
> +	cmp	x24, #ESR_ELx_EC_SVC32		// SVC in 32-bit state
>  	b.eq	el0_svc_compat
> -	cmp	x24, #ESR_EL1_EC_DABT_EL0	// data abort in EL0
> +	cmp	x24, #ESR_ELx_EC_DABT_LOW	// data abort in EL0
>  	b.eq	el0_da
> -	cmp	x24, #ESR_EL1_EC_IABT_EL0	// instruction abort in EL0
> +	cmp	x24, #ESR_ELx_EC_IABT_LOW	// instruction abort in EL0
>  	b.eq	el0_ia
> -	cmp	x24, #ESR_EL1_EC_FP_ASIMD	// FP/ASIMD access
> +	cmp	x24, #ESR_ELx_EC_FP_ASIMD	// FP/ASIMD access
>  	b.eq	el0_fpsimd_acc
> -	cmp	x24, #ESR_EL1_EC_FP_EXC32	// FP/ASIMD exception
> +	cmp	x24, #ESR_ELx_EC_FP_EXC32	// FP/ASIMD exception
>  	b.eq	el0_fpsimd_exc
> -	cmp	x24, #ESR_EL1_EC_UNKNOWN	// unknown exception in EL0
> +	cmp	x24, #ESR_ELx_EC_UNKNOWN	// unknown exception in EL0
>  	b.eq	el0_undef
> -	cmp	x24, #ESR_EL1_EC_CP15_32	// CP15 MRC/MCR trap
> +	cmp	x24, #ESR_ELx_EC_CP15_32	// CP15 MRC/MCR trap
>  	b.eq	el0_undef
> -	cmp	x24, #ESR_EL1_EC_CP15_64	// CP15 MRRC/MCRR trap
> +	cmp	x24, #ESR_ELx_EC_CP15_64	// CP15 MRRC/MCRR trap
>  	b.eq	el0_undef
> -	cmp	x24, #ESR_EL1_EC_CP14_MR	// CP14 MRC/MCR trap
> +	cmp	x24, #ESR_ELx_EC_CP14_MR	// CP14 MRC/MCR trap
>  	b.eq	el0_undef
> -	cmp	x24, #ESR_EL1_EC_CP14_LS	// CP14 LDC/STC trap
> +	cmp	x24, #ESR_ELx_EC_CP14_LS	// CP14 LDC/STC trap
>  	b.eq	el0_undef
> -	cmp	x24, #ESR_EL1_EC_CP14_64	// CP14 MRRC/MCRR trap
> +	cmp	x24, #ESR_ELx_EC_CP14_64	// CP14 MRRC/MCRR trap
>  	b.eq	el0_undef
> -	cmp	x24, #ESR_EL1_EC_BREAKPT_EL0	// debug exception in EL0
> +	cmp	x24, #ESR_ELx_EC_BREAKPT_LOW	// debug exception in EL0
>  	b.ge	el0_dbg
>  	b	el0_inv
>  el0_svc_compat:
> diff --git a/arch/arm64/kernel/signal32.c b/arch/arm64/kernel/signal32.c
> index 5a1ba6e..192d900 100644
> --- a/arch/arm64/kernel/signal32.c
> +++ b/arch/arm64/kernel/signal32.c
> @@ -501,7 +501,7 @@ static int compat_setup_sigframe(struct compat_sigframe __user *sf,
>  
>  	__put_user_error((compat_ulong_t)0, &sf->uc.uc_mcontext.trap_no, err);
>  	/* set the compat FSR WnR */
> -	__put_user_error(!!(current->thread.fault_code & ESR_EL1_WRITE) <<
> +	__put_user_error(!!(current->thread.fault_code & ESR_ELx_WNR) <<
>  			 FSR_WRITE_SHIFT, &sf->uc.uc_mcontext.error_code, err);
>  	__put_user_error(current->thread.fault_address, &sf->uc.uc_mcontext.fault_address, err);
>  	__put_user_error(set->sig[0], &sf->uc.uc_mcontext.oldmask, err);
> diff --git a/arch/arm64/mm/fault.c b/arch/arm64/mm/fault.c
> index c11cd27..96da131 100644
> --- a/arch/arm64/mm/fault.c
> +++ b/arch/arm64/mm/fault.c
> @@ -219,7 +219,7 @@ static int __kprobes do_page_fault(unsigned long addr, unsigned int esr,
>  
>  	if (esr & ESR_LNX_EXEC) {
>  		vm_flags = VM_EXEC;
> -	} else if ((esr & ESR_EL1_WRITE) && !(esr & ESR_EL1_CM)) {
> +	} else if ((esr & ESR_ELx_WNR) && !(esr & ESR_ELx_CM)) {
>  		vm_flags = VM_WRITE;
>  		mm_flags |= FAULT_FLAG_WRITE;
>  	}
> -- 
> 1.9.1
> 

Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org>

^ permalink raw reply	[flat|nested] 23+ messages in thread

* [PATCH 3/7] arm64: remove ESR_EL1_* macros
  2015-01-07 12:04 ` [PATCH 3/7] arm64: remove ESR_EL1_* macros Mark Rutland
@ 2015-01-11 18:08   ` Christoffer Dall
  2015-01-12 11:27     ` Mark Rutland
  0 siblings, 1 reply; 23+ messages in thread
From: Christoffer Dall @ 2015-01-11 18:08 UTC (permalink / raw)
  To: linux-arm-kernel

On Wed, Jan 07, 2015 at 12:04:16PM +0000, Mark Rutland wrote:
> Now that all users have been moved over to the common ESR_ELx_* macros,
> remove the redundant ESR_EL1 macros.
> 
> There should be no functional change as a result of this patch.
> 
> Signed-off-by: Mark Rutland <mark.rutland@arm.com>
> Cc: Catalin Marinas <catalin.marinas@arm.com>
> Cc: Christoffer Dall <christoffer.dall@linaro.org>
> Cc: Marc Zyngier <marc.zyngier@arm.com>
> Cc: Peter Maydell <peter.maydell@linaro.org>
> Cc: Will Deacon <will.deacon@arm.com>

FYI: This breaks bisectability with KVM, so we should probably move the
existing KVM references to the common definitions as part of this patch?

-Christoffer

^ permalink raw reply	[flat|nested] 23+ messages in thread

* [PATCH 4/7] arm64: decode ESR_ELx.EC when reporting exceptions
  2015-01-07 12:04 ` [PATCH 4/7] arm64: decode ESR_ELx.EC when reporting exceptions Mark Rutland
@ 2015-01-11 18:22   ` Christoffer Dall
  0 siblings, 0 replies; 23+ messages in thread
From: Christoffer Dall @ 2015-01-11 18:22 UTC (permalink / raw)
  To: linux-arm-kernel

On Wed, Jan 07, 2015 at 12:04:17PM +0000, Mark Rutland wrote:
> To aid the developer when something triggers an unexpected exception,
> decode the ESR_ELx.EC field when logging an ESR_ELx value. This doesn't
> tell the developer the specifics of the exception encoded in the
> remaining IL and ISS bits, but it can be helpful to distinguish between
> exception classes (e.g. SError and a data abort) without having to
> manually decode the field, which can be tiresome.
> 
> Signed-off-by: Mark Rutland <mark.rutland@arm.com>
> Cc: Catalin Marinas <catalin.marinas@arm.com>
> Cc: Christoffer Dall <christoffer.dall@linaro.org>
> Cc: Marc Zyngier <marc.zyngier@arm.com>
> Cc: Peter Maydell <peter.maydell@linaro.org>
> Cc: Will Deacon <will.deacon@arm.com>
> ---
>  arch/arm64/include/asm/esr.h |  6 ++++++
>  arch/arm64/kernel/traps.c    | 50 ++++++++++++++++++++++++++++++++++++++++++--
>  2 files changed, 54 insertions(+), 2 deletions(-)
> 
> diff --git a/arch/arm64/include/asm/esr.h b/arch/arm64/include/asm/esr.h
> index 19492e1..7669a7a 100644
> --- a/arch/arm64/include/asm/esr.h
> +++ b/arch/arm64/include/asm/esr.h
> @@ -96,4 +96,10 @@
>  #define ESR_ELx_COND_MASK	(UL(0xF) << ESR_ELx_COND_SHIFT)
>  #define ESR_ELx_WFx_ISS_WFE	(UL(1) << 0)
>  
> +#ifndef __ASSEMBLY__
> +#include <asm/types.h>
> +
> +const char *esr_get_class_string(u32 esr);
> +#endif /* __ASSEMBLY */
> +
>  #endif /* __ASM_ESR_H */
> diff --git a/arch/arm64/kernel/traps.c b/arch/arm64/kernel/traps.c
> index 0a801e3..1ef2940 100644
> --- a/arch/arm64/kernel/traps.c
> +++ b/arch/arm64/kernel/traps.c
> @@ -33,6 +33,7 @@
>  
>  #include <asm/atomic.h>
>  #include <asm/debug-monitors.h>
> +#include <asm/esr.h>
>  #include <asm/traps.h>
>  #include <asm/stacktrace.h>
>  #include <asm/exception.h>
> @@ -373,6 +374,51 @@ asmlinkage long do_ni_syscall(struct pt_regs *regs)
>  	return sys_ni_syscall();
>  }
>  
> +static const char *esr_class_str[] = {
> +	[0 ... ESR_ELx_EC_MAX]		= "UNRECOGNIZED EC",
> +	[ESR_ELx_EC_UNKNOWN]		= "Unknown/Uncategorized",
> +	[ESR_ELx_EC_WFx]		= "WFI/WFE",
> +	[ESR_ELx_EC_CP15_32]		= "CP15 MCR/MRC",
> +	[ESR_ELx_EC_CP15_64]		= "CP15 MCRR/MRRC",
> +	[ESR_ELx_EC_CP14_MR]		= "CP14 MCR/MRC",
> +	[ESR_ELx_EC_CP14_LS]		= "CP14 LDC/STC",
> +	[ESR_ELx_EC_FP_ASIMD]		= "ASIMD",
> +	[ESR_ELx_EC_CP10_ID]		= "CP10 MRC/VMRS",
> +	[ESR_ELx_EC_CP14_64]		= "CP14 MCRR/MRRC",
> +	[ESR_ELx_EC_ILL]		= "PSTATE.IL",
> +	[ESR_ELx_EC_SVC32]		= "SVC (AArch32)",
> +	[ESR_ELx_EC_HVC32]		= "HVC (AArch32)",
> +	[ESR_ELx_EC_SMC32]		= "SMC (AArch32)",
> +	[ESR_ELx_EC_SVC64]		= "SVC (AArch64)",
> +	[ESR_ELx_EC_HVC64]		= "HVC (AArch64)",
> +	[ESR_ELx_EC_SMC64]		= "SMC (AArch64)",
> +	[ESR_ELx_EC_SYS64]		= "MSR/MRS (AArch64)",
> +	[ESR_ELx_EC_IMP_DEF]		= "EL3 IMP DEF",
> +	[ESR_ELx_EC_IABT_LOW]		= "IABT (lower EL)",
> +	[ESR_ELx_EC_IABT_CUR]		= "IABT (current EL)",
> +	[ESR_ELx_EC_PC_ALIGN]		= "PC Alignment",
> +	[ESR_ELx_EC_DABT_LOW]		= "DABT (lower EL)",
> +	[ESR_ELx_EC_DABT_CUR]		= "DABT (current EL)",
> +	[ESR_ELx_EC_SP_ALIGN]		= "SP Alignment",
> +	[ESR_ELx_EC_FP_EXC32]		= "FP (AArch32)",
> +	[ESR_ELx_EC_FP_EXC64]		= "FP (AArch64)",
> +	[ESR_ELx_EC_SERROR]		= "SError",
> +	[ESR_ELx_EC_BREAKPT_LOW]	= "Breakpoint (lower EL)",
> +	[ESR_ELx_EC_BREAKPT_CUR]	= "Breakpoint (current EL)",
> +	[ESR_ELx_EC_SOFTSTP_LOW]	= "Software Step (lower EL)",
> +	[ESR_ELx_EC_SOFTSTP_CUR]	= "Software Step (current EL)",
> +	[ESR_ELx_EC_WATCHPT_LOW]	= "Watchpoint (lower EL)",
> +	[ESR_ELx_EC_WATCHPT_CUR]	= "Watchpoint (current EL)",
> +	[ESR_ELx_EC_BKPT32]		= "BKPT (AArch32)",
> +	[ESR_ELx_EC_VECTOR32]		= "Vector catch (AArch32)",
> +	[ESR_ELx_EC_BRK64]		= "BRK (AArch64)",
> +};
> +
> +const char *esr_get_class_string(u32 esr)
> +{
> +	return esr_class_str[esr >> ESR_ELx_EC_SHIFT];
> +}
> +
>  /*
>   * bad_mode handles the impossible case in the exception vector.
>   */
> @@ -382,8 +428,8 @@ asmlinkage void bad_mode(struct pt_regs *regs, int reason, unsigned int esr)
>  	void __user *pc = (void __user *)instruction_pointer(regs);
>  	console_verbose();
>  
> -	pr_crit("Bad mode in %s handler detected, code 0x%08x\n",
> -		handler[reason], esr);
> +	pr_crit("Bad mode in %s handler detected, code 0x%08x -- %s\n",
> +		handler[reason], esr, esr_get_class_string(esr));
>  	__show_regs(regs);
>  
>  	info.si_signo = SIGILL;
> -- 
> 1.9.1
> 

Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org>

^ permalink raw reply	[flat|nested] 23+ messages in thread

* [PATCH 5/7] arm64: kvm: move to ESR_ELx macros
  2015-01-07 12:04 ` [PATCH 5/7] arm64: kvm: move to ESR_ELx macros Mark Rutland
@ 2015-01-11 18:27   ` Christoffer Dall
  2015-01-12 11:40     ` Mark Rutland
  0 siblings, 1 reply; 23+ messages in thread
From: Christoffer Dall @ 2015-01-11 18:27 UTC (permalink / raw)
  To: linux-arm-kernel

On Wed, Jan 07, 2015 at 12:04:18PM +0000, Mark Rutland wrote:
> Now that we have common ESR_ELx macros, make use of them in the arm64
> KVM code. The addition of <asm/esr.h> to the include path highlighted
> badly ordered (i.e. not alphabetical) include lists; these are changed
> to alphabetical order.
> 
> There should be no functional change as a result of this patch.
> 
> Signed-off-by: Mark Rutland <mark.rutland@arm.com>
> Cc: Catalin Marinas <catalin.marinas@arm.com>
> Cc: Christoffer Dall <christoffer.dall@linaro.org>
> Cc: Marc Zyngier <marc.zyngier@arm.com>
> Cc: Peter Maydell <peter.maydell@linaro.org>
> Cc: Will Deacon <will.deacon@arm.com>
> ---
>  arch/arm64/include/asm/kvm_emulate.h | 28 +++++++++++++++-------------
>  arch/arm64/kvm/emulate.c             |  5 +++--
>  arch/arm64/kvm/handle_exit.c         | 32 +++++++++++++++++---------------
>  arch/arm64/kvm/hyp.S                 | 17 +++++++++--------
>  arch/arm64/kvm/inject_fault.c        | 14 +++++++-------
>  arch/arm64/kvm/sys_regs.c            | 23 +++++++++++++----------
>  6 files changed, 64 insertions(+), 55 deletions(-)
> 
> diff --git a/arch/arm64/include/asm/kvm_emulate.h b/arch/arm64/include/asm/kvm_emulate.h
> index 8127e45..6a9fa89 100644
> --- a/arch/arm64/include/asm/kvm_emulate.h
> +++ b/arch/arm64/include/asm/kvm_emulate.h
> @@ -23,8 +23,10 @@
>  #define __ARM64_KVM_EMULATE_H__
>  
>  #include <linux/kvm_host.h>
> -#include <asm/kvm_asm.h>
> +
> +#include <asm/esr.h>
>  #include <asm/kvm_arm.h>
> +#include <asm/kvm_asm.h>
>  #include <asm/kvm_mmio.h>
>  #include <asm/ptrace.h>
>  
> @@ -128,63 +130,63 @@ static inline phys_addr_t kvm_vcpu_get_fault_ipa(const struct kvm_vcpu *vcpu)
>  
>  static inline bool kvm_vcpu_dabt_isvalid(const struct kvm_vcpu *vcpu)
>  {
> -	return !!(kvm_vcpu_get_hsr(vcpu) & ESR_EL2_ISV);
> +	return !!(kvm_vcpu_get_hsr(vcpu) & ESR_ELx_ISV);
>  }
>  
>  static inline bool kvm_vcpu_dabt_iswrite(const struct kvm_vcpu *vcpu)
>  {
> -	return !!(kvm_vcpu_get_hsr(vcpu) & ESR_EL2_WNR);
> +	return !!(kvm_vcpu_get_hsr(vcpu) & ESR_ELx_WNR);
>  }
>  
>  static inline bool kvm_vcpu_dabt_issext(const struct kvm_vcpu *vcpu)
>  {
> -	return !!(kvm_vcpu_get_hsr(vcpu) & ESR_EL2_SSE);
> +	return !!(kvm_vcpu_get_hsr(vcpu) & ESR_ELx_SSE);
>  }
>  
>  static inline int kvm_vcpu_dabt_get_rd(const struct kvm_vcpu *vcpu)
>  {
> -	return (kvm_vcpu_get_hsr(vcpu) & ESR_EL2_SRT_MASK) >> ESR_EL2_SRT_SHIFT;
> +	return (kvm_vcpu_get_hsr(vcpu) & ESR_ELx_SRT_MASK) >> ESR_ELx_SRT_SHIFT;
>  }
>  
>  static inline bool kvm_vcpu_dabt_isextabt(const struct kvm_vcpu *vcpu)
>  {
> -	return !!(kvm_vcpu_get_hsr(vcpu) & ESR_EL2_EA);
> +	return !!(kvm_vcpu_get_hsr(vcpu) & ESR_ELx_EA);
>  }
>  
>  static inline bool kvm_vcpu_dabt_iss1tw(const struct kvm_vcpu *vcpu)
>  {
> -	return !!(kvm_vcpu_get_hsr(vcpu) & ESR_EL2_S1PTW);
> +	return !!(kvm_vcpu_get_hsr(vcpu) & ESR_ELx_S1PTW);
>  }
>  
>  static inline int kvm_vcpu_dabt_get_as(const struct kvm_vcpu *vcpu)
>  {
> -	return 1 << ((kvm_vcpu_get_hsr(vcpu) & ESR_EL2_SAS) >> ESR_EL2_SAS_SHIFT);
> +	return 1 << !!(kvm_vcpu_get_hsr(vcpu) & ESR_ELx_SAS);

huh?

>  }
>  
>  /* This one is not specific to Data Abort */
>  static inline bool kvm_vcpu_trap_il_is32bit(const struct kvm_vcpu *vcpu)
>  {
> -	return !!(kvm_vcpu_get_hsr(vcpu) & ESR_EL2_IL);
> +	return !!(kvm_vcpu_get_hsr(vcpu) & ESR_ELx_IL);
>  }
>  
>  static inline u8 kvm_vcpu_trap_get_class(const struct kvm_vcpu *vcpu)
>  {
> -	return kvm_vcpu_get_hsr(vcpu) >> ESR_EL2_EC_SHIFT;
> +	return kvm_vcpu_get_hsr(vcpu) >> ESR_ELx_EC_SHIFT;
>  }
>  
>  static inline bool kvm_vcpu_trap_is_iabt(const struct kvm_vcpu *vcpu)
>  {
> -	return kvm_vcpu_trap_get_class(vcpu) == ESR_EL2_EC_IABT;
> +	return kvm_vcpu_trap_get_class(vcpu) == ESR_ELx_EC_IABT_LOW;
>  }
>  
>  static inline u8 kvm_vcpu_trap_get_fault(const struct kvm_vcpu *vcpu)
>  {
> -	return kvm_vcpu_get_hsr(vcpu) & ESR_EL2_FSC;
> +	return kvm_vcpu_get_hsr(vcpu) & ESR_ELx_FSC;
>  }
>  
>  static inline u8 kvm_vcpu_trap_get_fault_type(const struct kvm_vcpu *vcpu)
>  {
> -	return kvm_vcpu_get_hsr(vcpu) & ESR_EL2_FSC_TYPE;
> +	return kvm_vcpu_get_hsr(vcpu) & ESR_ELx_FSC_TYPE;
>  }
>  
>  static inline unsigned long kvm_vcpu_get_mpidr(struct kvm_vcpu *vcpu)
> diff --git a/arch/arm64/kvm/emulate.c b/arch/arm64/kvm/emulate.c
> index 124418d..f87d8fb 100644
> --- a/arch/arm64/kvm/emulate.c
> +++ b/arch/arm64/kvm/emulate.c
> @@ -22,6 +22,7 @@
>   */
>  
>  #include <linux/kvm_host.h>
> +#include <asm/esr.h>
>  #include <asm/kvm_emulate.h>
>  
>  /*
> @@ -55,8 +56,8 @@ static int kvm_vcpu_get_condition(const struct kvm_vcpu *vcpu)
>  {
>  	u32 esr = kvm_vcpu_get_hsr(vcpu);
>  
> -	if (esr & ESR_EL2_CV)
> -		return (esr & ESR_EL2_COND) >> ESR_EL2_COND_SHIFT;
> +	if (esr & ESR_ELx_CV)
> +		return (esr & ESR_ELx_COND_MASK) >> ESR_ELx_COND_SHIFT;
>  
>  	return -1;
>  }
> diff --git a/arch/arm64/kvm/handle_exit.c b/arch/arm64/kvm/handle_exit.c
> index 34b8bd0..bcbc923 100644
> --- a/arch/arm64/kvm/handle_exit.c
> +++ b/arch/arm64/kvm/handle_exit.c
> @@ -21,8 +21,10 @@
>  
>  #include <linux/kvm.h>
>  #include <linux/kvm_host.h>
> -#include <asm/kvm_emulate.h>
> +
> +#include <asm/esr.h>
>  #include <asm/kvm_coproc.h>
> +#include <asm/kvm_emulate.h>
>  #include <asm/kvm_mmu.h>
>  #include <asm/kvm_psci.h>
>  
> @@ -61,7 +63,7 @@ static int handle_smc(struct kvm_vcpu *vcpu, struct kvm_run *run)
>   */
>  static int kvm_handle_wfx(struct kvm_vcpu *vcpu, struct kvm_run *run)
>  {
> -	if (kvm_vcpu_get_hsr(vcpu) & ESR_EL2_EC_WFI_ISS_WFE)
> +	if (kvm_vcpu_get_hsr(vcpu) & ESR_ELx_WFx_ISS_WFE)
>  		kvm_vcpu_on_spin(vcpu);
>  	else
>  		kvm_vcpu_block(vcpu);
> @@ -72,19 +74,19 @@ static int kvm_handle_wfx(struct kvm_vcpu *vcpu, struct kvm_run *run)
>  }
>  
>  static exit_handle_fn arm_exit_handlers[] = {
> -	[ESR_EL2_EC_WFI]	= kvm_handle_wfx,
> -	[ESR_EL2_EC_CP15_32]	= kvm_handle_cp15_32,
> -	[ESR_EL2_EC_CP15_64]	= kvm_handle_cp15_64,
> -	[ESR_EL2_EC_CP14_MR]	= kvm_handle_cp14_32,
> -	[ESR_EL2_EC_CP14_LS]	= kvm_handle_cp14_load_store,
> -	[ESR_EL2_EC_CP14_64]	= kvm_handle_cp14_64,
> -	[ESR_EL2_EC_HVC32]	= handle_hvc,
> -	[ESR_EL2_EC_SMC32]	= handle_smc,
> -	[ESR_EL2_EC_HVC64]	= handle_hvc,
> -	[ESR_EL2_EC_SMC64]	= handle_smc,
> -	[ESR_EL2_EC_SYS64]	= kvm_handle_sys_reg,
> -	[ESR_EL2_EC_IABT]	= kvm_handle_guest_abort,
> -	[ESR_EL2_EC_DABT]	= kvm_handle_guest_abort,
> +	[ESR_ELx_EC_WFx]	= kvm_handle_wfx,
> +	[ESR_ELx_EC_CP15_32]	= kvm_handle_cp15_32,
> +	[ESR_ELx_EC_CP15_64]	= kvm_handle_cp15_64,
> +	[ESR_ELx_EC_CP14_MR]	= kvm_handle_cp14_32,
> +	[ESR_ELx_EC_CP14_LS]	= kvm_handle_cp14_load_store,
> +	[ESR_ELx_EC_CP14_64]	= kvm_handle_cp14_64,
> +	[ESR_ELx_EC_HVC32]	= handle_hvc,
> +	[ESR_ELx_EC_SMC32]	= handle_smc,
> +	[ESR_ELx_EC_HVC64]	= handle_hvc,
> +	[ESR_ELx_EC_SMC64]	= handle_smc,
> +	[ESR_ELx_EC_SYS64]	= kvm_handle_sys_reg,
> +	[ESR_ELx_EC_IABT_LOW]	= kvm_handle_guest_abort,
> +	[ESR_ELx_EC_DABT_LOW]	= kvm_handle_guest_abort,
>  };
>  
>  static exit_handle_fn kvm_get_exit_handler(struct kvm_vcpu *vcpu)
> diff --git a/arch/arm64/kvm/hyp.S b/arch/arm64/kvm/hyp.S
> index fbe909f..c0d8202 100644
> --- a/arch/arm64/kvm/hyp.S
> +++ b/arch/arm64/kvm/hyp.S
> @@ -17,15 +17,16 @@
>  
>  #include <linux/linkage.h>
>  
> -#include <asm/assembler.h>
> -#include <asm/memory.h>
>  #include <asm/asm-offsets.h>
> +#include <asm/assembler.h>
>  #include <asm/debug-monitors.h>
> +#include <asm/esr.h>
>  #include <asm/fpsimdmacros.h>
>  #include <asm/kvm.h>
> -#include <asm/kvm_asm.h>
>  #include <asm/kvm_arm.h>
> +#include <asm/kvm_asm.h>
>  #include <asm/kvm_mmu.h>
> +#include <asm/memory.h>
>  
>  #define CPU_GP_REG_OFFSET(x)	(CPU_GP_REGS + x)
>  #define CPU_XREG_OFFSET(x)	CPU_GP_REG_OFFSET(CPU_USER_PT_REGS + 8*x)
> @@ -1140,9 +1141,9 @@ el1_sync:					// Guest trapped into EL2
>  	push	x2, x3
>  
>  	mrs	x1, esr_el2
> -	lsr	x2, x1, #ESR_EL2_EC_SHIFT
> +	lsr	x2, x1, #ESR_ELx_EC_SHIFT
>  
> -	cmp	x2, #ESR_EL2_EC_HVC64
> +	cmp	x2, #ESR_ELx_EC_HVC64
>  	b.ne	el1_trap
>  
>  	mrs	x3, vttbr_el2			// If vttbr is valid, the 64bit guest
> @@ -1177,13 +1178,13 @@ el1_trap:
>  	 * x1: ESR
>  	 * x2: ESR_EC
>  	 */
> -	cmp	x2, #ESR_EL2_EC_DABT
> -	mov	x0, #ESR_EL2_EC_IABT
> +	cmp	x2, #ESR_ELx_EC_DABT_LOW
> +	mov	x0, #ESR_ELx_EC_IABT_LOW
>  	ccmp	x2, x0, #4, ne
>  	b.ne	1f		// Not an abort we care about
>  
>  	/* This is an abort. Check for permission fault */
> -	and	x2, x1, #ESR_EL2_FSC_TYPE
> +	and	x2, x1, #ESR_ELx_FSC_TYPE
>  	cmp	x2, #FSC_PERM
>  	b.ne	1f		// Not a permission fault
>  
> diff --git a/arch/arm64/kvm/inject_fault.c b/arch/arm64/kvm/inject_fault.c
> index 81a02a8..f02530e 100644
> --- a/arch/arm64/kvm/inject_fault.c
> +++ b/arch/arm64/kvm/inject_fault.c
> @@ -118,27 +118,27 @@ static void inject_abt64(struct kvm_vcpu *vcpu, bool is_iabt, unsigned long addr
>  	 * instruction set. Report an external synchronous abort.
>  	 */
>  	if (kvm_vcpu_trap_il_is32bit(vcpu))
> -		esr |= ESR_EL1_IL;
> +		esr |= ESR_ELx_IL;
>  
>  	/*
>  	 * Here, the guest runs in AArch64 mode when in EL1. If we get
>  	 * an AArch32 fault, it means we managed to trap an EL0 fault.
>  	 */
>  	if (is_aarch32 || (cpsr & PSR_MODE_MASK) == PSR_MODE_EL0t)
> -		esr |= (ESR_EL1_EC_IABT_EL0 << ESR_EL1_EC_SHIFT);
> +		esr |= (ESR_ELx_EC_IABT_LOW << ESR_ELx_EC_SHIFT);
>  	else
> -		esr |= (ESR_EL1_EC_IABT_EL1 << ESR_EL1_EC_SHIFT);
> +		esr |= (ESR_ELx_EC_IABT_CUR << ESR_ELx_EC_SHIFT);
>  
>  	if (!is_iabt)
> -		esr |= ESR_EL1_EC_DABT_EL0;
> +		esr |= ESR_ELx_EC_DABT_LOW;
>  
> -	vcpu_sys_reg(vcpu, ESR_EL1) = esr | ESR_EL2_EC_xABT_xFSR_EXTABT;
> +	vcpu_sys_reg(vcpu, ESR_EL1) = esr | ESR_ELx_FSC_EXTABT;
>  }
>  
>  static void inject_undef64(struct kvm_vcpu *vcpu)
>  {
>  	unsigned long cpsr = *vcpu_cpsr(vcpu);
> -	u32 esr = (ESR_EL1_EC_UNKNOWN << ESR_EL1_EC_SHIFT);
> +	u32 esr = (ESR_ELx_EC_UNKNOWN << ESR_ELx_EC_SHIFT);
>  
>  	*vcpu_spsr(vcpu) = cpsr;
>  	*vcpu_elr_el1(vcpu) = *vcpu_pc(vcpu);
> @@ -151,7 +151,7 @@ static void inject_undef64(struct kvm_vcpu *vcpu)
>  	 * set.
>  	 */
>  	if (kvm_vcpu_trap_il_is32bit(vcpu))
> -		esr |= ESR_EL1_IL;
> +		esr |= ESR_ELx_IL;
>  
>  	vcpu_sys_reg(vcpu, ESR_EL1) = esr;
>  }
> diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
> index 3d7c2df..6b859d7 100644
> --- a/arch/arm64/kvm/sys_regs.c
> +++ b/arch/arm64/kvm/sys_regs.c
> @@ -20,17 +20,20 @@
>   * along with this program.  If not, see <http://www.gnu.org/licenses/>.
>   */
>  
> -#include <linux/mm.h>
>  #include <linux/kvm_host.h>
> +#include <linux/mm.h>
>  #include <linux/uaccess.h>
> -#include <asm/kvm_arm.h>
> -#include <asm/kvm_host.h>
> -#include <asm/kvm_emulate.h>
> -#include <asm/kvm_coproc.h>
> -#include <asm/kvm_mmu.h>
> +
>  #include <asm/cacheflush.h>
>  #include <asm/cputype.h>
>  #include <asm/debug-monitors.h>
> +#include <asm/esr.h>
> +#include <asm/kvm_arm.h>
> +#include <asm/kvm_coproc.h>
> +#include <asm/kvm_emulate.h>
> +#include <asm/kvm_host.h>
> +#include <asm/kvm_mmu.h>
> +
>  #include <trace/events/kvm.h>
>  
>  #include "sys_regs.h"
> @@ -815,12 +818,12 @@ static void unhandled_cp_access(struct kvm_vcpu *vcpu,
>  	int cp;
>  
>  	switch(hsr_ec) {
> -	case ESR_EL2_EC_CP15_32:
> -	case ESR_EL2_EC_CP15_64:
> +	case ESR_ELx_EC_CP15_32:
> +	case ESR_ELx_EC_CP15_64:
>  		cp = 15;
>  		break;
> -	case ESR_EL2_EC_CP14_MR:
> -	case ESR_EL2_EC_CP14_64:
> +	case ESR_ELx_EC_CP14_MR:
> +	case ESR_ELx_EC_CP14_64:
>  		cp = 14;
>  		break;
>  	default:
> -- 
> 1.9.1
> 

Otherwise looks good.

Thanks,
-Christoffer

^ permalink raw reply	[flat|nested] 23+ messages in thread

* [PATCH 6/7] arm64: kvm: remove ESR_EL2_* macros
  2015-01-07 12:04 ` [PATCH 6/7] arm64: kvm: remove ESR_EL2_* macros Mark Rutland
@ 2015-01-11 18:27   ` Christoffer Dall
  0 siblings, 0 replies; 23+ messages in thread
From: Christoffer Dall @ 2015-01-11 18:27 UTC (permalink / raw)
  To: linux-arm-kernel

On Wed, Jan 07, 2015 at 12:04:19PM +0000, Mark Rutland wrote:
> Now that all users have been moved over to the common ESR_ELx_* macros,
> remove the redundant ESR_EL2 macros. To maintain compatibility with the
> fault handling code shared with 32-bit, the FSC_{FAULT,PERM} macros are
> retained as aliases for the common ESR_ELx_FSC_{FAULT,PERM} definitions.
> 
> There should be no functional change as a result of this patch.
> 
> Signed-off-by: Mark Rutland <mark.rutland@arm.com>
> Cc: Catalin Marinas <catalin.marinas@arm.com>
> Cc: Christoffer Dall <christoffer.dall@linaro.org>
> Cc: Marc Zyngier <marc.zyngier@arm.com>
> Cc: Peter Maydell <peter.maydell@linaro.org>
> Cc: Will Deacon <will.deacon@arm.com>
> ---
>  arch/arm64/include/asm/kvm_arm.h | 73 +++-------------------------------------
>  1 file changed, 4 insertions(+), 69 deletions(-)
> 
> diff --git a/arch/arm64/include/asm/kvm_arm.h b/arch/arm64/include/asm/kvm_arm.h
> index 8afb863..94674eb 100644
> --- a/arch/arm64/include/asm/kvm_arm.h
> +++ b/arch/arm64/include/asm/kvm_arm.h
> @@ -18,6 +18,7 @@
>  #ifndef __ARM64_KVM_ARM_H__
>  #define __ARM64_KVM_ARM_H__
>  
> +#include <asm/esr.h>
>  #include <asm/memory.h>
>  #include <asm/types.h>
>  
> @@ -184,77 +185,11 @@
>  #define MDCR_EL2_TPMCR		(1 << 5)
>  #define MDCR_EL2_HPMN_MASK	(0x1F)
>  
> -/* Exception Syndrome Register (ESR) bits */
> -#define ESR_EL2_EC_SHIFT	(26)
> -#define ESR_EL2_EC		(UL(0x3f) << ESR_EL2_EC_SHIFT)
> -#define ESR_EL2_IL		(UL(1) << 25)
> -#define ESR_EL2_ISS		(ESR_EL2_IL - 1)
> -#define ESR_EL2_ISV_SHIFT	(24)
> -#define ESR_EL2_ISV		(UL(1) << ESR_EL2_ISV_SHIFT)
> -#define ESR_EL2_SAS_SHIFT	(22)
> -#define ESR_EL2_SAS		(UL(3) << ESR_EL2_SAS_SHIFT)
> -#define ESR_EL2_SSE		(1 << 21)
> -#define ESR_EL2_SRT_SHIFT	(16)
> -#define ESR_EL2_SRT_MASK	(0x1f << ESR_EL2_SRT_SHIFT)
> -#define ESR_EL2_SF 		(1 << 15)
> -#define ESR_EL2_AR 		(1 << 14)
> -#define ESR_EL2_EA 		(1 << 9)
> -#define ESR_EL2_CM 		(1 << 8)
> -#define ESR_EL2_S1PTW 		(1 << 7)
> -#define ESR_EL2_WNR		(1 << 6)
> -#define ESR_EL2_FSC		(0x3f)
> -#define ESR_EL2_FSC_TYPE	(0x3c)
> -
> -#define ESR_EL2_CV_SHIFT	(24)
> -#define ESR_EL2_CV		(UL(1) << ESR_EL2_CV_SHIFT)
> -#define ESR_EL2_COND_SHIFT	(20)
> -#define ESR_EL2_COND		(UL(0xf) << ESR_EL2_COND_SHIFT)
> -
> -
> -#define FSC_FAULT	(0x04)
> -#define FSC_PERM	(0x0c)
> +/* For compatibility with fault code shared with 32-bit */
> +#define FSC_FAULT	ESR_ELx_FSC_FAULT
> +#define FSC_PERM	ESR_ELx_FSC_PERM
>  
>  /* Hyp Prefetch Fault Address Register (HPFAR/HDFAR) */
>  #define HPFAR_MASK	(~UL(0xf))
>  
> -#define ESR_EL2_EC_UNKNOWN	(0x00)
> -#define ESR_EL2_EC_WFI		(0x01)
> -#define ESR_EL2_EC_CP15_32	(0x03)
> -#define ESR_EL2_EC_CP15_64	(0x04)
> -#define ESR_EL2_EC_CP14_MR	(0x05)
> -#define ESR_EL2_EC_CP14_LS	(0x06)
> -#define ESR_EL2_EC_FP_ASIMD	(0x07)
> -#define ESR_EL2_EC_CP10_ID	(0x08)
> -#define ESR_EL2_EC_CP14_64	(0x0C)
> -#define ESR_EL2_EC_ILL_ISS	(0x0E)
> -#define ESR_EL2_EC_SVC32	(0x11)
> -#define ESR_EL2_EC_HVC32	(0x12)
> -#define ESR_EL2_EC_SMC32	(0x13)
> -#define ESR_EL2_EC_SVC64	(0x15)
> -#define ESR_EL2_EC_HVC64	(0x16)
> -#define ESR_EL2_EC_SMC64	(0x17)
> -#define ESR_EL2_EC_SYS64	(0x18)
> -#define ESR_EL2_EC_IABT		(0x20)
> -#define ESR_EL2_EC_IABT_HYP	(0x21)
> -#define ESR_EL2_EC_PC_ALIGN	(0x22)
> -#define ESR_EL2_EC_DABT		(0x24)
> -#define ESR_EL2_EC_DABT_HYP	(0x25)
> -#define ESR_EL2_EC_SP_ALIGN	(0x26)
> -#define ESR_EL2_EC_FP_EXC32	(0x28)
> -#define ESR_EL2_EC_FP_EXC64	(0x2C)
> -#define ESR_EL2_EC_SERROR	(0x2F)
> -#define ESR_EL2_EC_BREAKPT	(0x30)
> -#define ESR_EL2_EC_BREAKPT_HYP	(0x31)
> -#define ESR_EL2_EC_SOFTSTP	(0x32)
> -#define ESR_EL2_EC_SOFTSTP_HYP	(0x33)
> -#define ESR_EL2_EC_WATCHPT	(0x34)
> -#define ESR_EL2_EC_WATCHPT_HYP	(0x35)
> -#define ESR_EL2_EC_BKPT32	(0x38)
> -#define ESR_EL2_EC_VECTOR32	(0x3A)
> -#define ESR_EL2_EC_BRK64	(0x3C)
> -
> -#define ESR_EL2_EC_xABT_xFSR_EXTABT	0x10
> -
> -#define ESR_EL2_EC_WFI_ISS_WFE	(1 << 0)
> -
>  #endif /* __ARM64_KVM_ARM_H__ */
> -- 
> 1.9.1
> 

Acked-by: Christoffer Dall <christoffer.dall@linaro.org>

^ permalink raw reply	[flat|nested] 23+ messages in thread

* [PATCH 7/7] arm64: kvm: decode ESR_ELx.EC when reporting exceptions
  2015-01-07 12:04 ` [PATCH 7/7] arm64: kvm: decode ESR_ELx.EC when reporting exceptions Mark Rutland
@ 2015-01-11 18:29   ` Christoffer Dall
  0 siblings, 0 replies; 23+ messages in thread
From: Christoffer Dall @ 2015-01-11 18:29 UTC (permalink / raw)
  To: linux-arm-kernel

On Wed, Jan 07, 2015 at 12:04:20PM +0000, Mark Rutland wrote:
> To aid the developer when something triggers an unexpected exception,
> decode the ESR_ELx.EC field when logging an ESR_ELx value using the
> newly introduced esr_get_class_string. This doesn't tell the developer
> the specifics of the exception encoded in the remaining IL and ISS bits,
> but it can be helpful to distinguish between exception classes (e.g.
> SError and a data abort) without having to manually decode the field,
> which can be tiresome.
> 
> Signed-off-by: Mark Rutland <mark.rutland@arm.com>
> Cc: Catalin Marinas <catalin.marinas@arm.com>
> Cc: Christoffer Dall <christoffer.dall@linaro.org>
> Cc: Marc Zyngier <marc.zyngier@arm.com>
> Cc: Peter Maydell <peter.maydell@linaro.org>
> Cc: Will Deacon <will.deacon@arm.com>

Acked-by: Christoffer Dall <christoffer.dall@linaro.org>

^ permalink raw reply	[flat|nested] 23+ messages in thread

* [PATCH 1/7] arm64: introduce common ESR_ELx_* definitions
  2015-01-11 16:59   ` Christoffer Dall
@ 2015-01-12 11:20     ` Mark Rutland
  0 siblings, 0 replies; 23+ messages in thread
From: Mark Rutland @ 2015-01-12 11:20 UTC (permalink / raw)
  To: linux-arm-kernel

On Sun, Jan 11, 2015 at 04:59:06PM +0000, Christoffer Dall wrote:
> On Wed, Jan 07, 2015 at 12:04:14PM +0000, Mark Rutland wrote:
> > Currently we have separate ESR_EL{1,2}_* macros, despite the fact that
> > the encodings are common. While encodings are architected to refer to
> > the current EL or a lower EL, the macros refer to particular ELs (e.g.
> > ESR_ELx_EC_DABT_EL0). Having these duplicate definitions is redundant,
> > and their naming is misleading.
> > 
> > This patch introduces common ESR_ELx_* macros that can be used in all
> > cases, in preparation for later patches which will migrate existing
> > users over. Some additional cleanups are made in the process:
> > 
> > * Suffixes for particular exception levels (e.g. _EL0, _EL1) are
> >   replaced with more general _LOW and _CUR suffixes, matching the
> >   architectural intent.
> > 
> > * ESR_ELx_EC_WFx, rather than ESR_ELx_EC_WFI is introduced, as this
> >   EC encoding covers traps from both WFE and WFI. Similarly,
> >   ESR_ELx_WFx_ISS_WFE rather than ESR_ELx_EC_WFI_ISS_WFE is introduced.
> > 
> > * Multi-bit fields are given consistently named _SHIFT and _MASK macros.
> > 
> > * UL() is used for compatibility with assembly files.
> > 
> > * Comments are added for currently unallocated ESR_ELx.EC encodings.
> > 
> > For fields other than ESR_ELx.EC, macros are only implemented for fields
> > for which there is already an ESR_EL{1,2}_* macro.
> > 
> > Signed-off-by: Mark Rutland <mark.rutland@arm.com>
> > Cc: Catalin Marinas <catalin.marinas@arm.com>
> > Cc: Christoffer Dall <christoffer.dall@linaro.org>
> > Cc: Marc Zyngier <marc.zyngier@arm.com>
> > Cc: Peter Maydell <peter.maydell@linaro.org>
> > Cc: Will Deacon <will.deacon@arm.com>
> > ---
> >  arch/arm64/include/asm/esr.h | 78 ++++++++++++++++++++++++++++++++++++++++++++
> >  1 file changed, 78 insertions(+)

[...]

> > +#define ESR_ELx_IL		(UL(1) << 25)
> > +#define ESR_ELx_ISS_MASK	(ESR_ELx_IL - 1)
> > +#define ESR_ELx_ISV		(UL(1) << 24)
> > +#define ESR_ELx_SAS		(UL(1) << 22)
> 
> shouldn't this be UL(3) << 22 (or a mask/shift equivalent declaration)?

Yes, it should.

[...]

> > +#define ESR_ELx_FSC		(0x3F)
> > +#define ESR_ELx_FSC_TYPE	(0x3C)
> > +#define ESR_ELx_FSC_EXTABT	(0x10)
> > +#define ESR_ELx_FSC_FAULT	(0x04)
> > +#define ESR_ELx_FSC_PERM	(0x0F)
> 
> this should be 0x0C right?

Yes.

Thanks for spotting these! I've fixed them up locally and I'll give the
rest another once-over before I post v2.

Mark.

^ permalink raw reply	[flat|nested] 23+ messages in thread

* [PATCH 3/7] arm64: remove ESR_EL1_* macros
  2015-01-11 18:08   ` Christoffer Dall
@ 2015-01-12 11:27     ` Mark Rutland
  2015-01-12 17:20       ` Christoffer Dall
  0 siblings, 1 reply; 23+ messages in thread
From: Mark Rutland @ 2015-01-12 11:27 UTC (permalink / raw)
  To: linux-arm-kernel

On Sun, Jan 11, 2015 at 06:08:05PM +0000, Christoffer Dall wrote:
> On Wed, Jan 07, 2015 at 12:04:16PM +0000, Mark Rutland wrote:
> > Now that all users have been moved over to the common ESR_ELx_* macros,
> > remove the redundant ESR_EL1 macros.
> > 
> > There should be no functional change as a result of this patch.
> > 
> > Signed-off-by: Mark Rutland <mark.rutland@arm.com>
> > Cc: Catalin Marinas <catalin.marinas@arm.com>
> > Cc: Christoffer Dall <christoffer.dall@linaro.org>
> > Cc: Marc Zyngier <marc.zyngier@arm.com>
> > Cc: Peter Maydell <peter.maydell@linaro.org>
> > Cc: Will Deacon <will.deacon@arm.com>
> 
> FYI: This breaks bisectability with KVM, so we should probably move the
> existing KVM references to the common definitions as part of this patch?

Sorry about that, evidently I forgot the KVM code referred to some
ESR_EL1_* definitions when I reorganised the series.

Are you happy if I just move this patch after the KVM changes? That
should keep everything bisectable and leaves the KVM changes confined to
a single patch.

Thanks,
Mark.

^ permalink raw reply	[flat|nested] 23+ messages in thread

* [PATCH 5/7] arm64: kvm: move to ESR_ELx macros
  2015-01-11 18:27   ` Christoffer Dall
@ 2015-01-12 11:40     ` Mark Rutland
  0 siblings, 0 replies; 23+ messages in thread
From: Mark Rutland @ 2015-01-12 11:40 UTC (permalink / raw)
  To: linux-arm-kernel

On Sun, Jan 11, 2015 at 06:27:16PM +0000, Christoffer Dall wrote:
> On Wed, Jan 07, 2015 at 12:04:18PM +0000, Mark Rutland wrote:
> > Now that we have common ESR_ELx macros, make use of them in the arm64
> > KVM code. The addition of <asm/esr.h> to the include path highlighted
> > badly ordered (i.e. not alphabetical) include lists; these are changed
> > to alphabetical order.
> >
> > There should be no functional change as a result of this patch.
> >
> > Signed-off-by: Mark Rutland <mark.rutland@arm.com>
> > Cc: Catalin Marinas <catalin.marinas@arm.com>
> > Cc: Christoffer Dall <christoffer.dall@linaro.org>
> > Cc: Marc Zyngier <marc.zyngier@arm.com>
> > Cc: Peter Maydell <peter.maydell@linaro.org>
> > Cc: Will Deacon <will.deacon@arm.com>
> > ---
> >  arch/arm64/include/asm/kvm_emulate.h | 28 +++++++++++++++-------------
> >  arch/arm64/kvm/emulate.c             |  5 +++--
> >  arch/arm64/kvm/handle_exit.c         | 32 +++++++++++++++++---------------
> >  arch/arm64/kvm/hyp.S                 | 17 +++++++++--------
> >  arch/arm64/kvm/inject_fault.c        | 14 +++++++-------
> >  arch/arm64/kvm/sys_regs.c            | 23 +++++++++++++----------
> >  6 files changed, 64 insertions(+), 55 deletions(-)

[...]

> >  static inline int kvm_vcpu_dabt_get_as(const struct kvm_vcpu *vcpu)
> >  {
> > -     return 1 << ((kvm_vcpu_get_hsr(vcpu) & ESR_EL2_SAS) >> ESR_EL2_SAS_SHIFT);
> > +     return 1 << !!(kvm_vcpu_get_hsr(vcpu) & ESR_ELx_SAS);
> 
> huh?

Sorry, this is nonsense I derived from thinking the SAS field was a
single bit and believing I could remove the need for the shift
definition.

I'll introduce ESR_ELx_SAS_SHIFT in patch 1 and use it here.
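
That would take kvm_vcpu_dabt_get_as() back to its original shift-based
form, just with the new names (a rough sketch, assuming ESR_ELx_SAS
becomes the full two-bit mask as noted against patch 1):

	static inline int kvm_vcpu_dabt_get_as(const struct kvm_vcpu *vcpu)
	{
		/* 2^SAS gives the access size in bytes */
		return 1 << ((kvm_vcpu_get_hsr(vcpu) & ESR_ELx_SAS) >> ESR_ELx_SAS_SHIFT);
	}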

Thanks,
Mark.

^ permalink raw reply	[flat|nested] 23+ messages in thread

* [PATCH 3/7] arm64: remove ESR_EL1_* macros
  2015-01-12 11:27     ` Mark Rutland
@ 2015-01-12 17:20       ` Christoffer Dall
  0 siblings, 0 replies; 23+ messages in thread
From: Christoffer Dall @ 2015-01-12 17:20 UTC (permalink / raw)
  To: linux-arm-kernel

On Mon, Jan 12, 2015 at 11:27:37AM +0000, Mark Rutland wrote:
> On Sun, Jan 11, 2015 at 06:08:05PM +0000, Christoffer Dall wrote:
> > On Wed, Jan 07, 2015 at 12:04:16PM +0000, Mark Rutland wrote:
> > > Now that all users have been moved over to the common ESR_ELx_* macros,
> > > remove the redundant ESR_EL1 macros.
> > > 
> > > There should be no functional change as a result of this patch.
> > > 
> > > Signed-off-by: Mark Rutland <mark.rutland@arm.com>
> > > Cc: Catalin Marinas <catalin.marinas@arm.com>
> > > Cc: Christoffer Dall <christoffer.dall@linaro.org>
> > > Cc: Marc Zyngier <marc.zyngier@arm.com>
> > > Cc: Peter Maydell <peter.maydell@linaro.org>
> > > Cc: Will Deacon <will.deacon@arm.com>
> > 
> > FYI: This breaks bisectability with KVM, so we should probably move the
> > existing KVM references to the common definitions as part of this patch?
> 
> Sorry about that, evidently I forgot the KVM code referred to some
> ESR_EL1_* definitions when I reorganised the series.
> 
> Are you happy if I just move this patch after the KVM changes? That
> should keep everything bisectable and leaves the KVM changes confined to
> a single patch.
> 
sounds fine.

-Christoffer

^ permalink raw reply	[flat|nested] 23+ messages in thread

end of thread

Thread overview: 23+ messages
2015-01-07 12:04 [PATCH 0/7] arm64/kvm: common ESR_ELx definitions and decoding Mark Rutland
2015-01-07 12:04 ` [PATCH 1/7] arm64: introduce common ESR_ELx_* definitions Mark Rutland
2015-01-07 16:23   ` Catalin Marinas
2015-01-07 16:42     ` Mark Rutland
2015-01-07 16:57       ` Catalin Marinas
2015-01-07 18:49         ` Mark Rutland
2015-01-11 16:59   ` Christoffer Dall
2015-01-12 11:20     ` Mark Rutland
2015-01-07 12:04 ` [PATCH 2/7] arm64: move to ESR_ELx macros Mark Rutland
2015-01-11 17:01   ` Christoffer Dall
2015-01-07 12:04 ` [PATCH 3/7] arm64: remove ESR_EL1_* macros Mark Rutland
2015-01-11 18:08   ` Christoffer Dall
2015-01-12 11:27     ` Mark Rutland
2015-01-12 17:20       ` Christoffer Dall
2015-01-07 12:04 ` [PATCH 4/7] arm64: decode ESR_ELx.EC when reporting exceptions Mark Rutland
2015-01-11 18:22   ` Christoffer Dall
2015-01-07 12:04 ` [PATCH 5/7] arm64: kvm: move to ESR_ELx macros Mark Rutland
2015-01-11 18:27   ` Christoffer Dall
2015-01-12 11:40     ` Mark Rutland
2015-01-07 12:04 ` [PATCH 6/7] arm64: kvm: remove ESR_EL2_* macros Mark Rutland
2015-01-11 18:27   ` Christoffer Dall
2015-01-07 12:04 ` [PATCH 7/7] arm64: kvm: decode ESR_ELx.EC when reporting exceptions Mark Rutland
2015-01-11 18:29   ` Christoffer Dall
