kvm.vger.kernel.org archive mirror
* [PATCH v2 00/11] KVM: arm: debug infrastructure support
@ 2015-05-31  4:27 Zhichao Huang
  2015-05-31  4:27 ` [PATCH v2 01/11] KVM: arm: plug guest debug exploit Zhichao Huang
                   ` (10 more replies)
  0 siblings, 11 replies; 26+ messages in thread
From: Zhichao Huang @ 2015-05-31  4:27 UTC (permalink / raw)
  To: kvm, linux-arm-kernel, kvmarm, christoffer.dall, marc.zyngier,
	alex.bennee, will.deacon
  Cc: huangzhichao, Zhichao Huang

This patch series adds debug support, a key feature missing from the
KVM/armv7 port.

The main idea is borrowed from armv8: keep track of whether the debug
registers are "dirty" (changed by the guest) or not. When they are, we
perform the usual save/restore dance, for that run only. This means we
only pay a penalty when a guest is actively using the debug registers.
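
For reference, the policy amounts to something like the following C
pseudocode (purely illustrative; it mirrors the logic that the assembly
in the later patches implements, and the helper names here are made up
for the sketch):

/*
 * Illustrative sketch only -- the real logic lives in assembly in
 * interrupts.S/interrupts_head.S later in the series.
 */
static void debug_state_on_guest_entry(struct kvm_vcpu *vcpu)
{
	/* Guest has monitor debug enabled? Then we must switch everything. */
	if (vcpu->arch.cp14[cp14_DBGDSCRext] & ARM_DSCR_MDBGEN)
		vcpu->arch.debug_flags |= KVM_ARM_DEBUG_DIRTY;

	if (vcpu->arch.debug_flags & KVM_ARM_DEBUG_DIRTY) {
		/* Full save/restore dance, traps disabled for this run. */
		save_host_debug_regs(vcpu);
		restore_guest_debug_regs(vcpu);
	}
	/* Otherwise only DBGDSCR is switched and the traps stay enabled. */
}

static void debug_state_on_guest_exit(struct kvm_vcpu *vcpu)
{
	if (vcpu->arch.debug_flags & KVM_ARM_DEBUG_DIRTY) {
		save_guest_debug_regs(vcpu);
		restore_host_debug_regs(vcpu);
		/* The host owns the debug registers again. */
		vcpu->arch.debug_flags &= ~KVM_ARM_DEBUG_DIRTY;
	}
}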

The number of registers is properly frightening, but CPUs actually
only implement a subset of them. There are also a number of registers
we don't bother emulating (anything to do with external debug and the
OS Lock).

External debug is when you actually plug a physical JTAG into the CPU.
The OS Lock is a way to prevent "other software" from playing with the
debug registers. My understanding is that it is only useful in
combination with external debug. In both cases, implementing support
for them is probably not worth the effort, at least for the time being.

This has been tested on a Cortex-A15 platform, running 32bit guests.

The patches for this series are based off v4.1-rc5 and can be found
at:

https://git.linaro.org/people/zhichao.huang/linux.git
branch: guest-debug/4.1-rc5-v2

From v1 [1]:
- Added missing cp14 reset functions
- Disable debug mode if we don't need it, to reduce unnecessary switching

[1]: https://lists.cs.columbia.edu/pipermail/kvmarm/2015-May/014729.html

Zhichao Huang (11):
  KVM: arm: plug guest debug exploit
  KVM: arm: rename pm_fake handler to trap_raz_wi
  KVM: arm: enable to use the ARM_DSCR_MDBGEN macro from KVM assembly
    code
  KVM: arm: common infrastructure for handling AArch32 CP14/CP15
  KVM: arm: check ordering of all system register tables
  KVM: arm: add trap handlers for 32-bit debug registers
  KVM: arm: add trap handlers for 64-bit debug registers
  KVM: arm: implement dirty bit mechanism for debug registers
  KVM: arm: disable debug mode if we don't actually need it.
  KVM: arm: implement lazy world switch for debug registers
  KVM: arm: enable trapping of all debug registers

 arch/arm/include/asm/hw_breakpoint.h |  54 ++---
 arch/arm/include/asm/kvm_asm.h       |  15 ++
 arch/arm/include/asm/kvm_coproc.h    |   3 +-
 arch/arm/include/asm/kvm_host.h      |   6 +
 arch/arm/kernel/asm-offsets.c        |   2 +
 arch/arm/kernel/hw_breakpoint.c      |  55 ++++-
 arch/arm/kvm/coproc.c                | 386 +++++++++++++++++++++++++++++------
 arch/arm/kvm/handle_exit.c           |   4 +-
 arch/arm/kvm/interrupts.S            |  16 ++
 arch/arm/kvm/interrupts_head.S       | 313 +++++++++++++++++++++++++++-
 10 files changed, 757 insertions(+), 97 deletions(-)

-- 
1.7.12.4

^ permalink raw reply	[flat|nested] 26+ messages in thread

* [PATCH v2 01/11] KVM: arm: plug guest debug exploit
  2015-05-31  4:27 [PATCH v2 00/11] KVM: arm: debug infrastructure support Zhichao Huang
@ 2015-05-31  4:27 ` Zhichao Huang
  2015-06-01 10:56   ` Marc Zyngier
  2015-05-31  4:27 ` [PATCH v2 02/11] KVM: arm: rename pm_fake handler to trap_raz_wi Zhichao Huang
                   ` (9 subsequent siblings)
  10 siblings, 1 reply; 26+ messages in thread
From: Zhichao Huang @ 2015-05-31  4:27 UTC (permalink / raw)
  To: kvm, linux-arm-kernel, kvmarm, christoffer.dall, marc.zyngier,
	alex.bennee, will.deacon
  Cc: huangzhichao, Zhichao Huang, stable

Hardware debugging in guests is not currently intercepted, which means
that a malicious guest can bring down the entire machine by writing to
the debug registers.

This patch enables trapping of all debug registers, preventing guests
from messing with the host state.

It is also a precursor for later patches, which will need to do more
and world-switch the debug state when necessary.
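
For reference, the trap configuration added below amounts to setting
the following HDCR bits on guest entry (a C-style sketch for
readability only; the actual change is the one-line assembly hunk in
the set_hdcr macro):

/* Sketch: trap mask ORed into HDCR on vmentry (HDCR_* macros from
 * asm/kvm_arm.h). */
u32 hdcr_traps = HDCR_TPM	/* trap performance monitor accesses */
	       | HDCR_TPMCR	/* trap PMCR accesses */
	       | HDCR_TDRA	/* trap DBGDRAR/DBGDSAR accesses */
	       | HDCR_TDOSA	/* trap OS save/restore (OS Lock) accesses */
	       | HDCR_TDA;	/* trap the remaining debug register accesses */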

Cc: <stable@vger.kernel.org>
Signed-off-by: Zhichao Huang <zhichao.huang@linaro.org>
---
 arch/arm/include/asm/kvm_coproc.h |  3 +-
 arch/arm/kvm/coproc.c             | 60 +++++++++++++++++++++++++++++++++++----
 arch/arm/kvm/handle_exit.c        |  4 +--
 arch/arm/kvm/interrupts_head.S    |  2 +-
 4 files changed, 59 insertions(+), 10 deletions(-)

diff --git a/arch/arm/include/asm/kvm_coproc.h b/arch/arm/include/asm/kvm_coproc.h
index 4917c2f..e74ab0f 100644
--- a/arch/arm/include/asm/kvm_coproc.h
+++ b/arch/arm/include/asm/kvm_coproc.h
@@ -31,7 +31,8 @@ void kvm_register_target_coproc_table(struct kvm_coproc_target_table *table);
 int kvm_handle_cp10_id(struct kvm_vcpu *vcpu, struct kvm_run *run);
 int kvm_handle_cp_0_13_access(struct kvm_vcpu *vcpu, struct kvm_run *run);
 int kvm_handle_cp14_load_store(struct kvm_vcpu *vcpu, struct kvm_run *run);
-int kvm_handle_cp14_access(struct kvm_vcpu *vcpu, struct kvm_run *run);
+int kvm_handle_cp14_32(struct kvm_vcpu *vcpu, struct kvm_run *run);
+int kvm_handle_cp14_64(struct kvm_vcpu *vcpu, struct kvm_run *run);
 int kvm_handle_cp15_32(struct kvm_vcpu *vcpu, struct kvm_run *run);
 int kvm_handle_cp15_64(struct kvm_vcpu *vcpu, struct kvm_run *run);
 
diff --git a/arch/arm/kvm/coproc.c b/arch/arm/kvm/coproc.c
index f3d88dc..2e12760 100644
--- a/arch/arm/kvm/coproc.c
+++ b/arch/arm/kvm/coproc.c
@@ -91,12 +91,6 @@ int kvm_handle_cp14_load_store(struct kvm_vcpu *vcpu, struct kvm_run *run)
 	return 1;
 }
 
-int kvm_handle_cp14_access(struct kvm_vcpu *vcpu, struct kvm_run *run)
-{
-	kvm_inject_undefined(vcpu);
-	return 1;
-}
-
 static void reset_mpidr(struct kvm_vcpu *vcpu, const struct coproc_reg *r)
 {
 	/*
@@ -519,6 +513,60 @@ int kvm_handle_cp15_32(struct kvm_vcpu *vcpu, struct kvm_run *run)
 	return emulate_cp15(vcpu, &params);
 }
 
+/**
+ * kvm_handle_cp14_64 -- handles a mrrc/mcrr trap on a guest CP14 access
+ * @vcpu: The VCPU pointer
+ * @run:  The kvm_run struct
+ */
+int kvm_handle_cp14_64(struct kvm_vcpu *vcpu, struct kvm_run *run)
+{
+	struct coproc_params params;
+
+	params.CRn = (kvm_vcpu_get_hsr(vcpu) >> 1) & 0xf;
+	params.Rt1 = (kvm_vcpu_get_hsr(vcpu) >> 5) & 0xf;
+	params.is_write = ((kvm_vcpu_get_hsr(vcpu) & 1) == 0);
+	params.is_64bit = true;
+
+	params.Op1 = (kvm_vcpu_get_hsr(vcpu) >> 16) & 0xf;
+	params.Op2 = 0;
+	params.Rt2 = (kvm_vcpu_get_hsr(vcpu) >> 10) & 0xf;
+	params.CRm = 0;
+
+	/* raz_wi */
+	(void)pm_fake(vcpu, &params, NULL);
+
+	/* handled */
+	kvm_skip_instr(vcpu, kvm_vcpu_trap_il_is32bit(vcpu));
+	return 1;
+}
+
+/**
+ * kvm_handle_cp14_32 -- handles a mrc/mcr trap on a guest CP14 access
+ * @vcpu: The VCPU pointer
+ * @run:  The kvm_run struct
+ */
+int kvm_handle_cp14_32(struct kvm_vcpu *vcpu, struct kvm_run *run)
+{
+	struct coproc_params params;
+
+	params.CRm = (kvm_vcpu_get_hsr(vcpu) >> 1) & 0xf;
+	params.Rt1 = (kvm_vcpu_get_hsr(vcpu) >> 5) & 0xf;
+	params.is_write = ((kvm_vcpu_get_hsr(vcpu) & 1) == 0);
+	params.is_64bit = false;
+
+	params.CRn = (kvm_vcpu_get_hsr(vcpu) >> 10) & 0xf;
+	params.Op1 = (kvm_vcpu_get_hsr(vcpu) >> 14) & 0x7;
+	params.Op2 = (kvm_vcpu_get_hsr(vcpu) >> 17) & 0x7;
+	params.Rt2 = 0;
+
+	/* raz_wi */
+	(void)pm_fake(vcpu, &params, NULL);
+
+	/* handled */
+	kvm_skip_instr(vcpu, kvm_vcpu_trap_il_is32bit(vcpu));
+	return 1;
+}
+
 /******************************************************************************
  * Userspace API
  *****************************************************************************/
diff --git a/arch/arm/kvm/handle_exit.c b/arch/arm/kvm/handle_exit.c
index 95f12b2..357ad1b 100644
--- a/arch/arm/kvm/handle_exit.c
+++ b/arch/arm/kvm/handle_exit.c
@@ -104,9 +104,9 @@ static exit_handle_fn arm_exit_handlers[] = {
 	[HSR_EC_WFI]		= kvm_handle_wfx,
 	[HSR_EC_CP15_32]	= kvm_handle_cp15_32,
 	[HSR_EC_CP15_64]	= kvm_handle_cp15_64,
-	[HSR_EC_CP14_MR]	= kvm_handle_cp14_access,
+	[HSR_EC_CP14_MR]	= kvm_handle_cp14_32,
 	[HSR_EC_CP14_LS]	= kvm_handle_cp14_load_store,
-	[HSR_EC_CP14_64]	= kvm_handle_cp14_access,
+	[HSR_EC_CP14_64]	= kvm_handle_cp14_64,
 	[HSR_EC_CP_0_13]	= kvm_handle_cp_0_13_access,
 	[HSR_EC_CP10_ID]	= kvm_handle_cp10_id,
 	[HSR_EC_SVC_HYP]	= handle_svc_hyp,
diff --git a/arch/arm/kvm/interrupts_head.S b/arch/arm/kvm/interrupts_head.S
index 35e4a3a..a9f3a56 100644
--- a/arch/arm/kvm/interrupts_head.S
+++ b/arch/arm/kvm/interrupts_head.S
@@ -607,7 +607,7 @@ ARM_BE8(rev	r6, r6  )
  * (hardware reset value is 0) */
 .macro set_hdcr operation
 	mrc	p15, 4, r2, c1, c1, 1
-	ldr	r3, =(HDCR_TPM|HDCR_TPMCR)
+	ldr	r3, =(HDCR_TPM|HDCR_TPMCR|HDCR_TDRA|HDCR_TDOSA|HDCR_TDA)
 	.if \operation == vmentry
 	orr	r2, r2, r3		@ Trap some perfmon accesses
 	.else
-- 
1.7.12.4


^ permalink raw reply related	[flat|nested] 26+ messages in thread

* [PATCH v2 02/11] KVM: arm: rename pm_fake handler to trap_raz_wi
  2015-05-31  4:27 [PATCH v2 00/11] KVM: arm: debug infrastructure support Zhichao Huang
  2015-05-31  4:27 ` [PATCH v2 01/11] KVM: arm: plug guest debug exploit Zhichao Huang
@ 2015-05-31  4:27 ` Zhichao Huang
  2015-06-09 10:42   ` Alex Bennée
  2015-05-31  4:27 ` [PATCH v2 03/11] KVM: arm: enable to use the ARM_DSCR_MDBGEN macro from KVM assembly code Zhichao Huang
                   ` (8 subsequent siblings)
  10 siblings, 1 reply; 26+ messages in thread
From: Zhichao Huang @ 2015-05-31  4:27 UTC (permalink / raw)
  To: kvm, linux-arm-kernel, kvmarm, christoffer.dall, marc.zyngier,
	alex.bennee, will.deacon
  Cc: huangzhichao, Zhichao Huang

pm_fake doesn't quite describe what the handler does (ignore writes
and return 0 for reads).

As we're about to use it (a lot) in a different context, rename it
with an (admittedly cryptic) name that makes sense for all users.

Signed-off-by: Zhichao Huang <zhichao.huang@linaro.org>
---
 arch/arm/kvm/coproc.c | 34 ++++++++++++++++------------------
 1 file changed, 16 insertions(+), 18 deletions(-)

diff --git a/arch/arm/kvm/coproc.c b/arch/arm/kvm/coproc.c
index 2e12760..9d283d9 100644
--- a/arch/arm/kvm/coproc.c
+++ b/arch/arm/kvm/coproc.c
@@ -229,7 +229,7 @@ bool access_vm_reg(struct kvm_vcpu *vcpu,
  * must always support PMCCNTR (the cycle counter): we just RAZ/WI for
  * all PM registers, which doesn't crash the guest kernel at least.
  */
-static bool pm_fake(struct kvm_vcpu *vcpu,
+static bool trap_raz_wi(struct kvm_vcpu *vcpu,
 		    const struct coproc_params *p,
 		    const struct coproc_reg *r)
 {
@@ -239,19 +239,19 @@ static bool pm_fake(struct kvm_vcpu *vcpu,
 		return read_zero(vcpu, p);
 }
 
-#define access_pmcr pm_fake
-#define access_pmcntenset pm_fake
-#define access_pmcntenclr pm_fake
-#define access_pmovsr pm_fake
-#define access_pmselr pm_fake
-#define access_pmceid0 pm_fake
-#define access_pmceid1 pm_fake
-#define access_pmccntr pm_fake
-#define access_pmxevtyper pm_fake
-#define access_pmxevcntr pm_fake
-#define access_pmuserenr pm_fake
-#define access_pmintenset pm_fake
-#define access_pmintenclr pm_fake
+#define access_pmcr trap_raz_wi
+#define access_pmcntenset trap_raz_wi
+#define access_pmcntenclr trap_raz_wi
+#define access_pmovsr trap_raz_wi
+#define access_pmselr trap_raz_wi
+#define access_pmceid0 trap_raz_wi
+#define access_pmceid1 trap_raz_wi
+#define access_pmccntr trap_raz_wi
+#define access_pmxevtyper trap_raz_wi
+#define access_pmxevcntr trap_raz_wi
+#define access_pmuserenr trap_raz_wi
+#define access_pmintenset trap_raz_wi
+#define access_pmintenclr trap_raz_wi
 
 /* Architected CP15 registers.
  * CRn denotes the primary register number, but is copied to the CRm in the
@@ -532,8 +532,7 @@ int kvm_handle_cp14_64(struct kvm_vcpu *vcpu, struct kvm_run *run)
 	params.Rt2 = (kvm_vcpu_get_hsr(vcpu) >> 10) & 0xf;
 	params.CRm = 0;
 
-	/* raz_wi */
-	(void)pm_fake(vcpu, &params, NULL);
+	(void)trap_raz_wi(vcpu, &params, NULL);
 
 	/* handled */
 	kvm_skip_instr(vcpu, kvm_vcpu_trap_il_is32bit(vcpu));
@@ -559,8 +558,7 @@ int kvm_handle_cp14_32(struct kvm_vcpu *vcpu, struct kvm_run *run)
 	params.Op2 = (kvm_vcpu_get_hsr(vcpu) >> 17) & 0x7;
 	params.Rt2 = 0;
 
-	/* raz_wi */
-	(void)pm_fake(vcpu, &params, NULL);
+	(void)trap_raz_wi(vcpu, &params, NULL);
 
 	/* handled */
 	kvm_skip_instr(vcpu, kvm_vcpu_trap_il_is32bit(vcpu));
-- 
1.7.12.4

^ permalink raw reply related	[flat|nested] 26+ messages in thread

* [PATCH v2 03/11] KVM: arm: enable to use the ARM_DSCR_MDBGEN macro from KVM assembly code
  2015-05-31  4:27 [PATCH v2 00/11] KVM: arm: debug infrastructure support Zhichao Huang
  2015-05-31  4:27 ` [PATCH v2 01/11] KVM: arm: plug guest debug exploit Zhichao Huang
  2015-05-31  4:27 ` [PATCH v2 02/11] KVM: arm: rename pm_fake handler to trap_raz_wi Zhichao Huang
@ 2015-05-31  4:27 ` Zhichao Huang
  2015-06-09 13:42   ` Alex Bennée
  2015-05-31  4:27 ` [PATCH v2 04/11] KVM: arm: common infrastructure for handling AArch32 CP14/CP15 Zhichao Huang
                   ` (7 subsequent siblings)
  10 siblings, 1 reply; 26+ messages in thread
From: Zhichao Huang @ 2015-05-31  4:27 UTC (permalink / raw)
  To: kvm, linux-arm-kernel, kvmarm, christoffer.dall, marc.zyngier,
	alex.bennee, will.deacon
  Cc: huangzhichao, Zhichao Huang

Add an #ifndef __ASSEMBLY__ guard to hw_breakpoint.h, so that the
ARM_DSCR_MDBGEN macro can be used from KVM assembly code.

Signed-off-by: Zhichao Huang <zhichao.huang@linaro.org>
---
 arch/arm/include/asm/hw_breakpoint.h | 54 +++++++++++++++++++-----------------
 1 file changed, 29 insertions(+), 25 deletions(-)

diff --git a/arch/arm/include/asm/hw_breakpoint.h b/arch/arm/include/asm/hw_breakpoint.h
index 8e427c7..f2f4c61 100644
--- a/arch/arm/include/asm/hw_breakpoint.h
+++ b/arch/arm/include/asm/hw_breakpoint.h
@@ -3,6 +3,8 @@
 
 #ifdef __KERNEL__
 
+#ifndef __ASSEMBLY__
+
 struct task_struct;
 
 #ifdef CONFIG_HAVE_HW_BREAKPOINT
@@ -44,6 +46,33 @@ static inline void decode_ctrl_reg(u32 reg,
 	ctrl->mismatch	= reg & 0x1;
 }
 
+struct notifier_block;
+struct perf_event;
+struct pmu;
+
+extern struct pmu perf_ops_bp;
+extern int arch_bp_generic_fields(struct arch_hw_breakpoint_ctrl ctrl,
+				  int *gen_len, int *gen_type);
+extern int arch_check_bp_in_kernelspace(struct perf_event *bp);
+extern int arch_validate_hwbkpt_settings(struct perf_event *bp);
+extern int hw_breakpoint_exceptions_notify(struct notifier_block *unused,
+					   unsigned long val, void *data);
+
+extern u8 arch_get_debug_arch(void);
+extern u8 arch_get_max_wp_len(void);
+extern void clear_ptrace_hw_breakpoint(struct task_struct *tsk);
+
+int arch_install_hw_breakpoint(struct perf_event *bp);
+void arch_uninstall_hw_breakpoint(struct perf_event *bp);
+void hw_breakpoint_pmu_read(struct perf_event *bp);
+int hw_breakpoint_slots(int type);
+
+#else
+static inline void clear_ptrace_hw_breakpoint(struct task_struct *tsk) {}
+
+#endif	/* CONFIG_HAVE_HW_BREAKPOINT */
+#endif  /* __ASSEMBLY */
+
 /* Debug architecture numbers. */
 #define ARM_DEBUG_ARCH_RESERVED	0	/* In case of ptrace ABI updates. */
 #define ARM_DEBUG_ARCH_V6	1
@@ -110,30 +139,5 @@ static inline void decode_ctrl_reg(u32 reg,
 	asm volatile("mcr p14, 0, %0, " #N "," #M ", " #OP2 : : "r" (VAL));\
 } while (0)
 
-struct notifier_block;
-struct perf_event;
-struct pmu;
-
-extern struct pmu perf_ops_bp;
-extern int arch_bp_generic_fields(struct arch_hw_breakpoint_ctrl ctrl,
-				  int *gen_len, int *gen_type);
-extern int arch_check_bp_in_kernelspace(struct perf_event *bp);
-extern int arch_validate_hwbkpt_settings(struct perf_event *bp);
-extern int hw_breakpoint_exceptions_notify(struct notifier_block *unused,
-					   unsigned long val, void *data);
-
-extern u8 arch_get_debug_arch(void);
-extern u8 arch_get_max_wp_len(void);
-extern void clear_ptrace_hw_breakpoint(struct task_struct *tsk);
-
-int arch_install_hw_breakpoint(struct perf_event *bp);
-void arch_uninstall_hw_breakpoint(struct perf_event *bp);
-void hw_breakpoint_pmu_read(struct perf_event *bp);
-int hw_breakpoint_slots(int type);
-
-#else
-static inline void clear_ptrace_hw_breakpoint(struct task_struct *tsk) {}
-
-#endif	/* CONFIG_HAVE_HW_BREAKPOINT */
 #endif	/* __KERNEL__ */
 #endif	/* _ARM_HW_BREAKPOINT_H */
-- 
1.7.12.4

^ permalink raw reply related	[flat|nested] 26+ messages in thread

* [PATCH v2 04/11] KVM: arm: common infrastructure for handling AArch32 CP14/CP15
  2015-05-31  4:27 [PATCH v2 00/11] KVM: arm: debug infrastructure support Zhichao Huang
                   ` (2 preceding siblings ...)
  2015-05-31  4:27 ` [PATCH v2 03/11] KVM: arm: enable to use the ARM_DSCR_MDBGEN macro from KVM assembly code Zhichao Huang
@ 2015-05-31  4:27 ` Zhichao Huang
  2015-06-09 10:45   ` Alex Bennée
  2015-05-31  4:27 ` [PATCH v2 05/11] KVM: arm: check ordering of all system register tables Zhichao Huang
                   ` (6 subsequent siblings)
  10 siblings, 1 reply; 26+ messages in thread
From: Zhichao Huang @ 2015-05-31  4:27 UTC (permalink / raw)
  To: kvm, linux-arm-kernel, kvmarm, christoffer.dall, marc.zyngier,
	alex.bennee, will.deacon
  Cc: huangzhichao, Zhichao Huang

As we're about to trap a bunch of CP14 registers, let's rework
the CP15 handling so it can be generalized and work with multiple
tables.

Signed-off-by: Zhichao Huang <zhichao.huang@linaro.org>
---
 arch/arm/kvm/coproc.c          | 176 ++++++++++++++++++++++++++---------------
 arch/arm/kvm/interrupts_head.S |   2 +-
 2 files changed, 112 insertions(+), 66 deletions(-)

diff --git a/arch/arm/kvm/coproc.c b/arch/arm/kvm/coproc.c
index 9d283d9..d23395b 100644
--- a/arch/arm/kvm/coproc.c
+++ b/arch/arm/kvm/coproc.c
@@ -375,6 +375,9 @@ static const struct coproc_reg cp15_regs[] = {
 	{ CRn(15), CRm( 0), Op1( 4), Op2( 0), is32, access_cbar},
 };
 
+static const struct coproc_reg cp14_regs[] = {
+};
+
 /* Target specific emulation tables */
 static struct kvm_coproc_target_table *target_tables[KVM_ARM_NUM_TARGETS];
 
@@ -424,47 +427,75 @@ static const struct coproc_reg *find_reg(const struct coproc_params *params,
 	return NULL;
 }
 
-static int emulate_cp15(struct kvm_vcpu *vcpu,
-			const struct coproc_params *params)
+/*
+ * emulate_cp --  tries to match a cp14/cp15 access in a handling table,
+ *                and call the corresponding trap handler.
+ *
+ * @params: pointer to the descriptor of the access
+ * @table: array of trap descriptors
+ * @num: size of the trap descriptor array
+ *
+ * Return 0 if the access has been handled, and -1 if not.
+ */
+static int emulate_cp(struct kvm_vcpu *vcpu,
+			const struct coproc_params *params,
+			const struct coproc_reg *table,
+			size_t num)
 {
-	size_t num;
-	const struct coproc_reg *table, *r;
-
-	trace_kvm_emulate_cp15_imp(params->Op1, params->Rt1, params->CRn,
-				   params->CRm, params->Op2, params->is_write);
+	const struct coproc_reg *r;
 
-	table = get_target_table(vcpu->arch.target, &num);
+	if (!table)
+		return -1;	/* Not handled */
 
-	/* Search target-specific then generic table. */
 	r = find_reg(params, table, num);
-	if (!r)
-		r = find_reg(params, cp15_regs, ARRAY_SIZE(cp15_regs));
 
-	if (likely(r)) {
+	if (r) {
 		/* If we don't have an accessor, we should never get here! */
 		BUG_ON(!r->access);
 
 		if (likely(r->access(vcpu, params, r))) {
 			/* Skip instruction, since it was emulated */
 			kvm_skip_instr(vcpu, kvm_vcpu_trap_il_is32bit(vcpu));
-			return 1;
 		}
-		/* If access function fails, it should complain. */
-	} else {
-		kvm_err("Unsupported guest CP15 access at: %08lx\n",
-			*vcpu_pc(vcpu));
-		print_cp_instr(params);
+
+		/* Handled */
+		return 0;
 	}
+
+	/* Not handled */
+	return -1;
+}
+
+static void unhandled_cp_access(struct kvm_vcpu *vcpu,
+				const struct coproc_params *params)
+{
+	u8 hsr_ec = kvm_vcpu_trap_get_class(vcpu);
+	int cp;
+
+	switch (hsr_ec) {
+	case HSR_EC_CP15_32:
+	case HSR_EC_CP15_64:
+		cp = 15;
+		break;
+	case HSR_EC_CP14_MR:
+	case HSR_EC_CP14_64:
+		cp = 14;
+		break;
+	default:
+		WARN_ON((cp = -1));
+	}
+
+	kvm_err("Unsupported guest CP%d access at: %08lx\n",
+		cp, *vcpu_pc(vcpu));
+	print_cp_instr(params);
 	kvm_inject_undefined(vcpu);
-	return 1;
 }
 
-/**
- * kvm_handle_cp15_64 -- handles a mrrc/mcrr trap on a guest CP15 access
- * @vcpu: The VCPU pointer
- * @run:  The kvm_run struct
- */
-int kvm_handle_cp15_64(struct kvm_vcpu *vcpu, struct kvm_run *run)
+int kvm_handle_cp_64(struct kvm_vcpu *vcpu,
+			const struct coproc_reg *global,
+			size_t nr_global,
+			const struct coproc_reg *target_specific,
+			size_t nr_specific)
 {
 	struct coproc_params params;
 
@@ -478,7 +509,13 @@ int kvm_handle_cp15_64(struct kvm_vcpu *vcpu, struct kvm_run *run)
 	params.Rt2 = (kvm_vcpu_get_hsr(vcpu) >> 10) & 0xf;
 	params.CRm = 0;
 
-	return emulate_cp15(vcpu, &params);
+	if (!emulate_cp(vcpu, &params, target_specific, nr_specific))
+		return 1;
+	if (!emulate_cp(vcpu, &params, global, nr_global))
+		return 1;
+
+	unhandled_cp_access(vcpu, &params);
+	return 1;
 }
 
 static void reset_coproc_regs(struct kvm_vcpu *vcpu,
@@ -491,12 +528,11 @@ static void reset_coproc_regs(struct kvm_vcpu *vcpu,
 			table[i].reset(vcpu, &table[i]);
 }
 
-/**
- * kvm_handle_cp15_32 -- handles a mrc/mcr trap on a guest CP15 access
- * @vcpu: The VCPU pointer
- * @run:  The kvm_run struct
- */
-int kvm_handle_cp15_32(struct kvm_vcpu *vcpu, struct kvm_run *run)
+int kvm_handle_cp_32(struct kvm_vcpu *vcpu,
+			const struct coproc_reg *global,
+			size_t nr_global,
+			const struct coproc_reg *target_specific,
+			size_t nr_specific)
 {
 	struct coproc_params params;
 
@@ -510,33 +546,57 @@ int kvm_handle_cp15_32(struct kvm_vcpu *vcpu, struct kvm_run *run)
 	params.Op2 = (kvm_vcpu_get_hsr(vcpu) >> 17) & 0x7;
 	params.Rt2 = 0;
 
-	return emulate_cp15(vcpu, &params);
+	if (!emulate_cp(vcpu, &params, target_specific, nr_specific))
+		return 1;
+	if (!emulate_cp(vcpu, &params, global, nr_global))
+		return 1;
+
+	unhandled_cp_access(vcpu, &params);
+	return 1;
 }
 
 /**
- * kvm_handle_cp14_64 -- handles a mrrc/mcrr trap on a guest CP14 access
+ * kvm_handle_cp15_64 -- handles a mrrc/mcrr trap on a guest CP15 access
  * @vcpu: The VCPU pointer
  * @run:  The kvm_run struct
  */
-int kvm_handle_cp14_64(struct kvm_vcpu *vcpu, struct kvm_run *run)
+int kvm_handle_cp15_64(struct kvm_vcpu *vcpu, struct kvm_run *run)
 {
-	struct coproc_params params;
+	const struct coproc_reg *target_specific;
+	size_t num;
 
-	params.CRn = (kvm_vcpu_get_hsr(vcpu) >> 1) & 0xf;
-	params.Rt1 = (kvm_vcpu_get_hsr(vcpu) >> 5) & 0xf;
-	params.is_write = ((kvm_vcpu_get_hsr(vcpu) & 1) == 0);
-	params.is_64bit = true;
+	target_specific = get_target_table(vcpu->arch.target, &num);
+	return kvm_handle_cp_64(vcpu,
+				cp15_regs, ARRAY_SIZE(cp15_regs),
+				target_specific, num);
+}
 
-	params.Op1 = (kvm_vcpu_get_hsr(vcpu) >> 16) & 0xf;
-	params.Op2 = 0;
-	params.Rt2 = (kvm_vcpu_get_hsr(vcpu) >> 10) & 0xf;
-	params.CRm = 0;
+/**
+ * kvm_handle_cp15_32 -- handles a mrc/mcr trap on a guest CP15 access
+ * @vcpu: The VCPU pointer
+ * @run:  The kvm_run struct
+ */
+int kvm_handle_cp15_32(struct kvm_vcpu *vcpu, struct kvm_run *run)
+{
+	const struct coproc_reg *target_specific;
+	size_t num;
 
-	(void)trap_raz_wi(vcpu, &params, NULL);
+	target_specific = get_target_table(vcpu->arch.target, &num);
+	return kvm_handle_cp_32(vcpu,
+				cp15_regs, ARRAY_SIZE(cp15_regs),
+				target_specific, num);
+}
 
-	/* handled */
-	kvm_skip_instr(vcpu, kvm_vcpu_trap_il_is32bit(vcpu));
-	return 1;
+/**
+ * kvm_handle_cp14_64 -- handles a mrrc/mcrr trap on a guest CP14 access
+ * @vcpu: The VCPU pointer
+ * @run:  The kvm_run struct
+ */
+int kvm_handle_cp14_64(struct kvm_vcpu *vcpu, struct kvm_run *run)
+{
+	return kvm_handle_cp_64(vcpu,
+				cp14_regs, ARRAY_SIZE(cp14_regs),
+				NULL, 0);
 }
 
 /**
@@ -546,23 +606,9 @@ int kvm_handle_cp14_64(struct kvm_vcpu *vcpu, struct kvm_run *run)
  */
 int kvm_handle_cp14_32(struct kvm_vcpu *vcpu, struct kvm_run *run)
 {
-	struct coproc_params params;
-
-	params.CRm = (kvm_vcpu_get_hsr(vcpu) >> 1) & 0xf;
-	params.Rt1 = (kvm_vcpu_get_hsr(vcpu) >> 5) & 0xf;
-	params.is_write = ((kvm_vcpu_get_hsr(vcpu) & 1) == 0);
-	params.is_64bit = false;
-
-	params.CRn = (kvm_vcpu_get_hsr(vcpu) >> 10) & 0xf;
-	params.Op1 = (kvm_vcpu_get_hsr(vcpu) >> 14) & 0x7;
-	params.Op2 = (kvm_vcpu_get_hsr(vcpu) >> 17) & 0x7;
-	params.Rt2 = 0;
-
-	(void)trap_raz_wi(vcpu, &params, NULL);
-
-	/* handled */
-	kvm_skip_instr(vcpu, kvm_vcpu_trap_il_is32bit(vcpu));
-	return 1;
+	return kvm_handle_cp_32(vcpu,
+				cp14_regs, ARRAY_SIZE(cp14_regs),
+				NULL, 0);
 }
 
 /******************************************************************************
diff --git a/arch/arm/kvm/interrupts_head.S b/arch/arm/kvm/interrupts_head.S
index a9f3a56..35e4a3a 100644
--- a/arch/arm/kvm/interrupts_head.S
+++ b/arch/arm/kvm/interrupts_head.S
@@ -607,7 +607,7 @@ ARM_BE8(rev	r6, r6  )
  * (hardware reset value is 0) */
 .macro set_hdcr operation
 	mrc	p15, 4, r2, c1, c1, 1
-	ldr	r3, =(HDCR_TPM|HDCR_TPMCR|HDCR_TDRA|HDCR_TDOSA|HDCR_TDA)
+	ldr	r3, =(HDCR_TPM|HDCR_TPMCR)
 	.if \operation == vmentry
 	orr	r2, r2, r3		@ Trap some perfmon accesses
 	.else
-- 
1.7.12.4


^ permalink raw reply related	[flat|nested] 26+ messages in thread

* [PATCH v2 05/11] KVM: arm: check ordering of all system register tables
  2015-05-31  4:27 [PATCH v2 00/11] KVM: arm: debug infrastructure support Zhichao Huang
                   ` (3 preceding siblings ...)
  2015-05-31  4:27 ` [PATCH v2 04/11] KVM: arm: common infrastructure for handling AArch32 CP14/CP15 Zhichao Huang
@ 2015-05-31  4:27 ` Zhichao Huang
  2015-06-10 13:52   ` Alex Bennée
  2015-05-31  4:27 ` [PATCH v2 06/11] KVM: arm: add trap handlers for 32-bit debug registers Zhichao Huang
                   ` (5 subsequent siblings)
  10 siblings, 1 reply; 26+ messages in thread
From: Zhichao Huang @ 2015-05-31  4:27 UTC (permalink / raw)
  To: kvm, linux-arm-kernel, kvmarm, christoffer.dall, marc.zyngier,
	alex.bennee, will.deacon
  Cc: huangzhichao, Zhichao Huang

We now have multiple tables for the various system registers we trap.
Make sure we check the ordering of all of them, as it is critical that
we get it right (been there, done that...).
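
For context, the ordering that check_sysreg_table() relies on is the
one implemented by cmp_reg() (not shown in this patch). A rough sketch
of that comparison, with field names taken as assumptions, is:

/* Rough sketch only: sort ascending by CRn, CRm, Op1, Op2, with
 * 64-bit entries preceding 32-bit ones when everything else matches. */
static int cmp_reg_sketch(const struct coproc_reg *a,
			  const struct coproc_reg *b)
{
	if (a->CRn != b->CRn)
		return a->CRn - b->CRn;
	if (a->CRm != b->CRm)
		return a->CRm - b->CRm;
	if (a->Op1 != b->Op1)
		return a->Op1 - b->Op1;
	if (a->Op2 != b->Op2)
		return a->Op2 - b->Op2;
	return (int)b->is_64 - (int)a->is_64;	/* 64-bit first */
}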

Signed-off-by: Zhichao Huang <zhichao.huang@linaro.org>
---
 arch/arm/kvm/coproc.c | 26 +++++++++++++++++++++-----
 1 file changed, 21 insertions(+), 5 deletions(-)

diff --git a/arch/arm/kvm/coproc.c b/arch/arm/kvm/coproc.c
index d23395b..16d5f69 100644
--- a/arch/arm/kvm/coproc.c
+++ b/arch/arm/kvm/coproc.c
@@ -737,6 +737,9 @@ static struct coproc_reg invariant_cp15[] = {
 	{ CRn( 0), CRm( 0), Op1( 0), Op2( 3), is32, NULL, get_TLBTR },
 	{ CRn( 0), CRm( 0), Op1( 0), Op2( 6), is32, NULL, get_REVIDR },
 
+	{ CRn( 0), CRm( 0), Op1( 1), Op2( 1), is32, NULL, get_CLIDR },
+	{ CRn( 0), CRm( 0), Op1( 1), Op2( 7), is32, NULL, get_AIDR },
+
 	{ CRn( 0), CRm( 1), Op1( 0), Op2( 0), is32, NULL, get_ID_PFR0 },
 	{ CRn( 0), CRm( 1), Op1( 0), Op2( 1), is32, NULL, get_ID_PFR1 },
 	{ CRn( 0), CRm( 1), Op1( 0), Op2( 2), is32, NULL, get_ID_DFR0 },
@@ -752,9 +755,6 @@ static struct coproc_reg invariant_cp15[] = {
 	{ CRn( 0), CRm( 2), Op1( 0), Op2( 3), is32, NULL, get_ID_ISAR3 },
 	{ CRn( 0), CRm( 2), Op1( 0), Op2( 4), is32, NULL, get_ID_ISAR4 },
 	{ CRn( 0), CRm( 2), Op1( 0), Op2( 5), is32, NULL, get_ID_ISAR5 },
-
-	{ CRn( 0), CRm( 0), Op1( 1), Op2( 1), is32, NULL, get_CLIDR },
-	{ CRn( 0), CRm( 0), Op1( 1), Op2( 7), is32, NULL, get_AIDR },
 };
 
 /*
@@ -1297,13 +1297,29 @@ int kvm_arm_copy_coproc_indices(struct kvm_vcpu *vcpu, u64 __user *uindices)
 	return write_demux_regids(uindices);
 }
 
+static int check_sysreg_table(const struct coproc_reg *table, unsigned int n)
+{
+	unsigned int i;
+
+	for (i = 1; i < n; i++) {
+		if (cmp_reg(&table[i-1], &table[i]) >= 0) {
+			kvm_err("sys_reg table %p out of order (%d)\n",
+					table, i - 1);
+			return 1;
+		}
+	}
+
+	return 0;
+}
+
 void kvm_coproc_table_init(void)
 {
 	unsigned int i;
 
 	/* Make sure tables are unique and in order. */
-	for (i = 1; i < ARRAY_SIZE(cp15_regs); i++)
-		BUG_ON(cmp_reg(&cp15_regs[i-1], &cp15_regs[i]) >= 0);
+	BUG_ON(check_sysreg_table(cp14_regs, ARRAY_SIZE(cp14_regs)));
+	BUG_ON(check_sysreg_table(cp15_regs, ARRAY_SIZE(cp15_regs)));
+	BUG_ON(check_sysreg_table(invariant_cp15, ARRAY_SIZE(invariant_cp15)));
 
 	/* We abuse the reset function to overwrite the table itself. */
 	for (i = 0; i < ARRAY_SIZE(invariant_cp15); i++)
-- 
1.7.12.4

^ permalink raw reply related	[flat|nested] 26+ messages in thread

* [PATCH v2 06/11] KVM: arm: add trap handlers for 32-bit debug registers
  2015-05-31  4:27 [PATCH v2 00/11] KVM: arm: debug infrastructure support Zhichao Huang
                   ` (4 preceding siblings ...)
  2015-05-31  4:27 ` [PATCH v2 05/11] KVM: arm: check ordering of all system register tables Zhichao Huang
@ 2015-05-31  4:27 ` Zhichao Huang
  2015-05-31  4:27 ` [PATCH v2 07/11] KVM: arm: add trap handlers for 64-bit " Zhichao Huang
                   ` (4 subsequent siblings)
  10 siblings, 0 replies; 26+ messages in thread
From: Zhichao Huang @ 2015-05-31  4:27 UTC (permalink / raw)
  To: kvm, linux-arm-kernel, kvmarm, christoffer.dall, marc.zyngier,
	alex.bennee, will.deacon
  Cc: huangzhichao, Zhichao Huang

Add handlers for all the 32-bit debug registers.

Signed-off-by: Zhichao Huang <zhichao.huang@linaro.org>
---
 arch/arm/include/asm/kvm_asm.h  |  12 ++++
 arch/arm/include/asm/kvm_host.h |   3 +
 arch/arm/kernel/asm-offsets.c   |   1 +
 arch/arm/kvm/coproc.c           | 122 ++++++++++++++++++++++++++++++++++++++++
 4 files changed, 138 insertions(+)

diff --git a/arch/arm/include/asm/kvm_asm.h b/arch/arm/include/asm/kvm_asm.h
index 25410b2..ba65e05 100644
--- a/arch/arm/include/asm/kvm_asm.h
+++ b/arch/arm/include/asm/kvm_asm.h
@@ -52,6 +52,18 @@
 #define c10_AMAIR1	30	/* Auxilary Memory Attribute Indirection Reg1 */
 #define NR_CP15_REGS	31	/* Number of regs (incl. invalid) */
 
+/* 0 is reserved as an invalid value. */
+#define cp14_DBGBVR0	1	/* Debug Breakpoint Value Registers (0-15) */
+#define cp14_DBGBVR15	16
+#define cp14_DBGBCR0	17	/* Debug Breakpoint Control Registers (0-15) */
+#define cp14_DBGBCR15	32
+#define cp14_DBGWVR0	33	/* Debug Watchpoint Value Registers (0-15) */
+#define cp14_DBGWVR15	48
+#define cp14_DBGWCR0	49	/* Debug Watchpoint Control Registers (0-15) */
+#define cp14_DBGWCR15	64
+#define cp14_DBGDSCRext	65	/* Debug Status and Control external */
+#define NR_CP14_REGS	66	/* Number of regs (incl. invalid) */
+
 #define ARM_EXCEPTION_RESET	  0
 #define ARM_EXCEPTION_UNDEFINED   1
 #define ARM_EXCEPTION_SOFTWARE    2
diff --git a/arch/arm/include/asm/kvm_host.h b/arch/arm/include/asm/kvm_host.h
index d71607c..3d16820 100644
--- a/arch/arm/include/asm/kvm_host.h
+++ b/arch/arm/include/asm/kvm_host.h
@@ -124,6 +124,9 @@ struct kvm_vcpu_arch {
 	struct vgic_cpu vgic_cpu;
 	struct arch_timer_cpu timer_cpu;
 
+	/* System control coprocessor (cp14) */
+	u32 cp14[NR_CP14_REGS];
+
 	/*
 	 * Anything that is not used directly from assembly code goes
 	 * here.
diff --git a/arch/arm/kernel/asm-offsets.c b/arch/arm/kernel/asm-offsets.c
index 871b826..9158de0 100644
--- a/arch/arm/kernel/asm-offsets.c
+++ b/arch/arm/kernel/asm-offsets.c
@@ -172,6 +172,7 @@ int main(void)
 #ifdef CONFIG_KVM_ARM_HOST
   DEFINE(VCPU_KVM,		offsetof(struct kvm_vcpu, kvm));
   DEFINE(VCPU_MIDR,		offsetof(struct kvm_vcpu, arch.midr));
+  DEFINE(VCPU_CP14,		offsetof(struct kvm_vcpu, arch.cp14));
   DEFINE(VCPU_CP15,		offsetof(struct kvm_vcpu, arch.cp15));
   DEFINE(VCPU_VFP_GUEST,	offsetof(struct kvm_vcpu, arch.vfp_guest));
   DEFINE(VCPU_VFP_HOST,		offsetof(struct kvm_vcpu, arch.host_cpu_context));
diff --git a/arch/arm/kvm/coproc.c b/arch/arm/kvm/coproc.c
index 16d5f69..59b65b7 100644
--- a/arch/arm/kvm/coproc.c
+++ b/arch/arm/kvm/coproc.c
@@ -220,6 +220,47 @@ bool access_vm_reg(struct kvm_vcpu *vcpu,
 	return true;
 }
 
+static bool trap_debug32(struct kvm_vcpu *vcpu,
+			const struct coproc_params *p,
+			const struct coproc_reg *r)
+{
+	if (p->is_write)
+		vcpu->arch.cp14[r->reg] = *vcpu_reg(vcpu, p->Rt1);
+	else
+		*vcpu_reg(vcpu, p->Rt1) = vcpu->arch.cp14[r->reg];
+
+	return true;
+}
+
+/* DBGIDR (RO) Debug ID */
+static bool trap_dbgidr(struct kvm_vcpu *vcpu,
+			const struct coproc_params *p,
+			const struct coproc_reg *r)
+{
+	u32 val;
+
+	if (p->is_write)
+		return ignore_write(vcpu, p);
+
+	ARM_DBG_READ(c0, c0, 0, val);
+	*vcpu_reg(vcpu, p->Rt1) = val;
+
+	return true;
+}
+
+/* DBGDSCRint (RO) Debug Status and Control Register */
+static bool trap_dbgdscr(struct kvm_vcpu *vcpu,
+			const struct coproc_params *p,
+			const struct coproc_reg *r)
+{
+	if (p->is_write)
+		return ignore_write(vcpu, p);
+
+	*vcpu_reg(vcpu, p->Rt1) = vcpu->arch.cp14[r->reg];
+
+	return true;
+}
+
 /*
  * We could trap ID_DFR0 and tell the guest we don't support performance
  * monitoring.  Unfortunately the patch to make the kernel check ID_DFR0 was
@@ -375,7 +416,88 @@ static const struct coproc_reg cp15_regs[] = {
 	{ CRn(15), CRm( 0), Op1( 4), Op2( 0), is32, access_cbar},
 };
 
+#define DBG_BCR_BVR_WCR_WVR(n)					\
+	/* DBGBVRn */						\
+	{ CRn( 0), CRm((n)), Op1( 0), Op2( 4), is32,		\
+	  trap_debug32,	reset_val, (cp14_DBGBVR0 + (n)), 0 },	\
+	/* DBGBCRn */						\
+	{ CRn( 0), CRm((n)), Op1( 0), Op2( 5), is32,		\
+	  trap_debug32,	reset_val, (cp14_DBGBCR0 + (n)), 0 },	\
+	/* DBGWVRn */						\
+	{ CRn( 0), CRm((n)), Op1( 0), Op2( 6), is32,		\
+	  trap_debug32,	reset_val, (cp14_DBGWVR0 + (n)), 0 },	\
+	/* DBGWCRn */						\
+	{ CRn( 0), CRm((n)), Op1( 0), Op2( 7), is32,		\
+	  trap_debug32,	reset_val, (cp14_DBGWCR0 + (n)), 0 }
+
+/* No OS DBGBXVR mechanism implemented. */
+#define DBGBXVR(n)						\
+	{ CRn( 1), CRm((n)), Op1( 0), Op2( 1), is32, trap_raz_wi }
+
+/*
+ * Trapped cp14 registers. We generally ignore most of the external
+ * debug, on the principle that they don't really make sense to a
+ * guest. Revisit this one day, should this principle change.
+ */
 static const struct coproc_reg cp14_regs[] = {
+	/* DBGIDR */
+	{ CRn( 0), CRm( 0), Op1( 0), Op2( 0), is32, trap_dbgidr},
+	/* DBGDTRRXext */
+	{ CRn( 0), CRm( 0), Op1( 0), Op2( 2), is32, trap_raz_wi },
+	DBG_BCR_BVR_WCR_WVR(0),
+	/* DBGDSCRint */
+	{ CRn( 0), CRm( 1), Op1( 0), Op2( 0), is32, trap_dbgdscr,
+				NULL, cp14_DBGDSCRext },
+	DBG_BCR_BVR_WCR_WVR(1),
+	/* DBGDSCRext */
+	{ CRn( 0), CRm( 2), Op1( 0), Op2( 2), is32, trap_debug32,
+				reset_val, cp14_DBGDSCRext, 0 },
+	DBG_BCR_BVR_WCR_WVR(2),
+	/* DBGDTRRXext */
+	{ CRn( 0), CRm( 3), Op1( 0), Op2( 2), is32, trap_raz_wi },
+	DBG_BCR_BVR_WCR_WVR(3),
+	DBG_BCR_BVR_WCR_WVR(4),
+	/* DBGDTR[RT]Xint */
+	{ CRn( 0), CRm( 5), Op1( 0), Op2( 0), is32, trap_raz_wi },
+	DBG_BCR_BVR_WCR_WVR(5),
+	DBG_BCR_BVR_WCR_WVR(6),
+	/* DBGVCR */
+	{ CRn( 0), CRm( 7), Op1( 0), Op2( 0), is32, trap_debug32 },
+	DBG_BCR_BVR_WCR_WVR(7),
+	DBG_BCR_BVR_WCR_WVR(8),
+	DBG_BCR_BVR_WCR_WVR(9),
+	DBG_BCR_BVR_WCR_WVR(10),
+	DBG_BCR_BVR_WCR_WVR(11),
+	DBG_BCR_BVR_WCR_WVR(12),
+	DBG_BCR_BVR_WCR_WVR(13),
+	DBG_BCR_BVR_WCR_WVR(14),
+	DBG_BCR_BVR_WCR_WVR(15),
+
+	DBGBXVR(0),
+	/* DBGOSLAR */
+	{ CRn( 1), CRm( 0), Op1( 0), Op2( 4), is32, trap_raz_wi },
+	DBGBXVR(1),
+	/* DBGOSLSR */
+	{ CRn( 1), CRm( 1), Op1( 0), Op2( 4), is32, trap_raz_wi },
+	DBGBXVR(2),
+	DBGBXVR(3),
+	/* DBGOSDLRd */
+	{ CRn( 1), CRm( 3), Op1( 0), Op2( 4), is32, trap_raz_wi },
+	DBGBXVR(4),
+	DBGBXVR(5),
+	/* DBGPRSRa */
+	{ CRn( 1), CRm( 5), Op1( 0), Op2( 4), is32, trap_raz_wi },
+
+	DBGBXVR(6),
+	DBGBXVR(7),
+	DBGBXVR(8),
+	DBGBXVR(9),
+	DBGBXVR(10),
+	DBGBXVR(11),
+	DBGBXVR(12),
+	DBGBXVR(13),
+	DBGBXVR(14),
+	DBGBXVR(15),
 };
 
 /* Target specific emulation tables */
-- 
1.7.12.4


^ permalink raw reply related	[flat|nested] 26+ messages in thread

* [PATCH v2 07/11] KVM: arm: add trap handlers for 64-bit debug registers
  2015-05-31  4:27 [PATCH v2 00/11] KVM: arm: debug infrastructure support Zhichao Huang
                   ` (5 preceding siblings ...)
  2015-05-31  4:27 ` [PATCH v2 06/11] KVM: arm: add trap handlers for 32-bit debug registers Zhichao Huang
@ 2015-05-31  4:27 ` Zhichao Huang
  2015-05-31  4:27 ` [PATCH v2 08/11] KVM: arm: implement dirty bit mechanism for " Zhichao Huang
                   ` (3 subsequent siblings)
  10 siblings, 0 replies; 26+ messages in thread
From: Zhichao Huang @ 2015-05-31  4:27 UTC (permalink / raw)
  To: kvm, linux-arm-kernel, kvmarm, christoffer.dall, marc.zyngier,
	alex.bennee, will.deacon
  Cc: huangzhichao, Zhichao Huang

Add handlers for all the 64-bit debug registers.

There is an overlap between the 32-bit and 64-bit registers. Make sure
that 64-bit registers precede 32-bit ones.

Signed-off-by: Zhichao Huang <zhichao.huang@linaro.org>
---
 arch/arm/kvm/coproc.c | 12 ++++++++++++
 1 file changed, 12 insertions(+)

diff --git a/arch/arm/kvm/coproc.c b/arch/arm/kvm/coproc.c
index 59b65b7..eeee648 100644
--- a/arch/arm/kvm/coproc.c
+++ b/arch/arm/kvm/coproc.c
@@ -435,9 +435,17 @@ static const struct coproc_reg cp15_regs[] = {
 	{ CRn( 1), CRm((n)), Op1( 0), Op2( 1), is32, trap_raz_wi }
 
 /*
+ * Architected CP14 registers.
+ *
  * Trapped cp14 registers. We generally ignore most of the external
  * debug, on the principle that they don't really make sense to a
 * guest. Revisit this one day, should this principle change.
+ *
+ * CRn denotes the primary register number, but is copied to the CRm in the
+ * user space API for 64-bit register access in line with the terminology used
+ * in the ARM ARM.
+ * Important: Must be sorted ascending by CRn, CRM, Op1, Op2 and with 64-bit
+ *            registers preceding 32-bit ones.
  */
 static const struct coproc_reg cp14_regs[] = {
 	/* DBGIDR */
@@ -445,10 +453,14 @@ static const struct coproc_reg cp14_regs[] = {
 	/* DBGDTRRXext */
 	{ CRn( 0), CRm( 0), Op1( 0), Op2( 2), is32, trap_raz_wi },
 	DBG_BCR_BVR_WCR_WVR(0),
+	/* DBGDRAR (64bit) */
+	{ CRn( 0), CRm( 1), Op1( 0), Op2( 0), is64, trap_raz_wi},
 	/* DBGDSCRint */
 	{ CRn( 0), CRm( 1), Op1( 0), Op2( 0), is32, trap_dbgdscr,
 				NULL, cp14_DBGDSCRext },
 	DBG_BCR_BVR_WCR_WVR(1),
+	/* DBGDSAR (64bit) */
+	{ CRn( 0), CRm( 2), Op1( 0), Op2( 0), is64, trap_raz_wi},
 	/* DBGDSCRext */
 	{ CRn( 0), CRm( 2), Op1( 0), Op2( 2), is32, trap_debug32,
 				reset_val, cp14_DBGDSCRext, 0 },
-- 
1.7.12.4


^ permalink raw reply related	[flat|nested] 26+ messages in thread

* [PATCH v2 08/11] KVM: arm: implement dirty bit mechanism for debug registers
  2015-05-31  4:27 [PATCH v2 00/11] KVM: arm: debug infrastructure support Zhichao Huang
                   ` (6 preceding siblings ...)
  2015-05-31  4:27 ` [PATCH v2 07/11] KVM: arm: add trap handlers for 64-bit " Zhichao Huang
@ 2015-05-31  4:27 ` Zhichao Huang
  2015-05-31  4:27 ` [PATCH v2 09/11] KVM: arm: disable debug mode if we don't actually need it Zhichao Huang
                   ` (2 subsequent siblings)
  10 siblings, 0 replies; 26+ messages in thread
From: Zhichao Huang @ 2015-05-31  4:27 UTC (permalink / raw)
  To: kvm, linux-arm-kernel, kvmarm, christoffer.dall, marc.zyngier,
	alex.bennee, will.deacon
  Cc: huangzhichao, Zhichao Huang

The trapping code keeps track of the state of the debug registers,
allowing for the switch code to implement a lazy switching strategy.

Signed-off-by: Zhichao Huang <zhichao.huang@linaro.org>
---
 arch/arm/include/asm/kvm_asm.h  |  3 +++
 arch/arm/include/asm/kvm_host.h |  3 +++
 arch/arm/kernel/asm-offsets.c   |  1 +
 arch/arm/kvm/coproc.c           | 32 +++++++++++++++++++++++++++++--
 arch/arm/kvm/interrupts_head.S  | 42 +++++++++++++++++++++++++++++++++++++++++
 5 files changed, 79 insertions(+), 2 deletions(-)

diff --git a/arch/arm/include/asm/kvm_asm.h b/arch/arm/include/asm/kvm_asm.h
index ba65e05..4fb64cf 100644
--- a/arch/arm/include/asm/kvm_asm.h
+++ b/arch/arm/include/asm/kvm_asm.h
@@ -64,6 +64,9 @@
 #define cp14_DBGDSCRext	65	/* Debug Status and Control external */
 #define NR_CP14_REGS	66	/* Number of regs (incl. invalid) */
 
+#define KVM_ARM_DEBUG_DIRTY_SHIFT	0
+#define KVM_ARM_DEBUG_DIRTY		(1 << KVM_ARM_DEBUG_DIRTY_SHIFT)
+
 #define ARM_EXCEPTION_RESET	  0
 #define ARM_EXCEPTION_UNDEFINED   1
 #define ARM_EXCEPTION_SOFTWARE    2
diff --git a/arch/arm/include/asm/kvm_host.h b/arch/arm/include/asm/kvm_host.h
index 3d16820..09b54bf 100644
--- a/arch/arm/include/asm/kvm_host.h
+++ b/arch/arm/include/asm/kvm_host.h
@@ -127,6 +127,9 @@ struct kvm_vcpu_arch {
 	/* System control coprocessor (cp14) */
 	u32 cp14[NR_CP14_REGS];
 
+	/* Debug state */
+	u32 debug_flags;
+
 	/*
 	 * Anything that is not used directly from assembly code goes
 	 * here.
diff --git a/arch/arm/kernel/asm-offsets.c b/arch/arm/kernel/asm-offsets.c
index 9158de0..e876109 100644
--- a/arch/arm/kernel/asm-offsets.c
+++ b/arch/arm/kernel/asm-offsets.c
@@ -185,6 +185,7 @@ int main(void)
   DEFINE(VCPU_FIQ_REGS,		offsetof(struct kvm_vcpu, arch.regs.fiq_regs));
   DEFINE(VCPU_PC,		offsetof(struct kvm_vcpu, arch.regs.usr_regs.ARM_pc));
   DEFINE(VCPU_CPSR,		offsetof(struct kvm_vcpu, arch.regs.usr_regs.ARM_cpsr));
+  DEFINE(VCPU_DEBUG_FLAGS,	offsetof(struct kvm_vcpu, arch.debug_flags));
   DEFINE(VCPU_HCR,		offsetof(struct kvm_vcpu, arch.hcr));
   DEFINE(VCPU_IRQ_LINES,	offsetof(struct kvm_vcpu, arch.irq_lines));
   DEFINE(VCPU_HSR,		offsetof(struct kvm_vcpu, arch.fault.hsr));
diff --git a/arch/arm/kvm/coproc.c b/arch/arm/kvm/coproc.c
index eeee648..1cc74d8 100644
--- a/arch/arm/kvm/coproc.c
+++ b/arch/arm/kvm/coproc.c
@@ -220,14 +220,42 @@ bool access_vm_reg(struct kvm_vcpu *vcpu,
 	return true;
 }
 
+/*
+ * We want to avoid world-switching all the DBG registers all the
+ * time:
+ *
+ * - If we've touched any debug register, it is likely that we're
+ *   going to touch more of them. It then makes sense to disable the
+ *   traps and start doing the save/restore dance
+ * - If debug is active (ARM_DSCR_MDBGEN set), it is then mandatory
+ *   to save/restore the registers, as the guest depends on them.
+ *
+ * For this, we use a DIRTY bit, indicating the guest has modified the
+ * debug registers, used as follows:
+ *
+ * On guest entry:
+ * - If the dirty bit is set (because we're coming back from trapping),
+ *   disable the traps, save host registers, restore guest registers.
+ * - If debug is actively in use (ARM_DSCR_MDBGEN set),
+ *   set the dirty bit, disable the traps, save host registers,
+ *   restore guest registers.
+ * - Otherwise, enable the traps
+ *
+ * On guest exit:
+ * - If the dirty bit is set, save guest registers, restore host
+ *   registers and clear the dirty bit. This ensures that the host can
+ *   now use the debug registers.
+ */
 static bool trap_debug32(struct kvm_vcpu *vcpu,
 			const struct coproc_params *p,
 			const struct coproc_reg *r)
 {
-	if (p->is_write)
+	if (p->is_write) {
 		vcpu->arch.cp14[r->reg] = *vcpu_reg(vcpu, p->Rt1);
-	else
+		vcpu->arch.debug_flags |= KVM_ARM_DEBUG_DIRTY;
+	} else {
 		*vcpu_reg(vcpu, p->Rt1) = vcpu->arch.cp14[r->reg];
+	}
 
 	return true;
 }
diff --git a/arch/arm/kvm/interrupts_head.S b/arch/arm/kvm/interrupts_head.S
index 35e4a3a..3a0128c 100644
--- a/arch/arm/kvm/interrupts_head.S
+++ b/arch/arm/kvm/interrupts_head.S
@@ -1,4 +1,6 @@
 #include <linux/irqchip/arm-gic.h>
+#include <asm/hw_breakpoint.h>
+#include <asm/kvm_asm.h>
 #include <asm/assembler.h>
 
 #define VCPU_USR_REG(_reg_nr)	(VCPU_USR_REGS + (_reg_nr * 4))
@@ -396,6 +398,46 @@ vcpu	.req	r0		@ vcpu pointer always in r0
 	mcr	p15, 2, r12, c0, c0, 0	@ CSSELR
 .endm
 
+/* Assume vcpu pointer in vcpu reg, clobbers r5 */
+.macro skip_debug_state target
+	ldr	r5, [vcpu, #VCPU_DEBUG_FLAGS]
+	cmp	r5, #KVM_ARM_DEBUG_DIRTY
+	bne	\target
+1:
+.endm
+
+/* Compute debug state: If ARM_DSCR_MDBGEN or KVM_ARM_DEBUG_DIRTY
+ * is set, we do a full save/restore cycle and disable trapping.
+ *
+ * Assumes vcpu pointer in vcpu reg
+ *
+ * Clobbers r5, r6
+ */
+.macro compute_debug_state target
+	// Check the state of the guest's DBGDSCR
+	ldr	r5, [vcpu, #CP14_OFFSET(cp14_DBGDSCRext)]
+	and	r6, r5, #ARM_DSCR_MDBGEN
+	cmp	r6, #0
+	beq	9998f	   // Nothing to see there
+
+	// If ARM_DSCR_MDBGEN bit was set, we must set the flag
+	mov	r5, #KVM_ARM_DEBUG_DIRTY
+	str	r5, [vcpu, #VCPU_DEBUG_FLAGS]
+	b	9999f	   // Don't skip restore
+
+9998:
+	// Otherwise load the flags from memory in case we recently
+	// trapped
+	skip_debug_state \target
+9999:
+.endm
+
+/* Assume vcpu pointer in vcpu reg, clobbers r5 */
+.macro clear_debug_dirty_bit
+	mov	r5, #0
+	str	r5, [vcpu, #VCPU_DEBUG_FLAGS]
+.endm
+
 /*
  * Save the VGIC CPU state into memory
  *
-- 
1.7.12.4

^ permalink raw reply related	[flat|nested] 26+ messages in thread

* [PATCH v2 09/11] KVM: arm: disable debug mode if we don't actually need it.
  2015-05-31  4:27 [PATCH v2 00/11] KVM: arm: debug infrastructure support Zhichao Huang
                   ` (7 preceding siblings ...)
  2015-05-31  4:27 ` [PATCH v2 08/11] KVM: arm: implement dirty bit mechanism for " Zhichao Huang
@ 2015-05-31  4:27 ` Zhichao Huang
  2015-06-01 10:16   ` Will Deacon
  2015-05-31  4:27 ` [PATCH v2 10/11] KVM: arm: implement lazy world switch for debug registers Zhichao Huang
  2015-05-31  4:27 ` [PATCH v2 11/11] KVM: arm: enable trapping of all " Zhichao Huang
  10 siblings, 1 reply; 26+ messages in thread
From: Zhichao Huang @ 2015-05-31  4:27 UTC (permalink / raw)
  To: kvm, linux-arm-kernel, kvmarm, christoffer.dall, marc.zyngier,
	alex.bennee, will.deacon
  Cc: huangzhichao, Zhichao Huang

Until now we have enabled debug mode all the time, even when we don't
actually need it.

Inspired by the arm64 implementation, disable debug mode when we don't
need it. This allows us to avoid unnecessary register saving/restoring
while debug mode is disabled.
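
For reference, the existing monitor_mode_enabled() helper that this
patch leans on simply checks the MDBGEN bit in DSCR; roughly (sketch,
see arch/arm/kernel/hw_breakpoint.c):

static int monitor_mode_enabled(void)
{
	u32 dscr;

	ARM_DBG_READ(c0, c1, 0, dscr);
	return !!(dscr & ARM_DSCR_MDBGEN);
}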

Signed-off-by: Zhichao Huang <zhichao.huang@linaro.org>
---
 arch/arm/kernel/hw_breakpoint.c | 55 ++++++++++++++++++++++++++++++++++-------
 1 file changed, 46 insertions(+), 9 deletions(-)

diff --git a/arch/arm/kernel/hw_breakpoint.c b/arch/arm/kernel/hw_breakpoint.c
index dc7d0a9..1d27563 100644
--- a/arch/arm/kernel/hw_breakpoint.c
+++ b/arch/arm/kernel/hw_breakpoint.c
@@ -266,8 +266,7 @@ static int enable_monitor_mode(void)
 	}
 
 	/* Check that the write made it through. */
-	ARM_DBG_READ(c0, c1, 0, dscr);
-	if (!(dscr & ARM_DSCR_MDBGEN)) {
+	if (!monitor_mode_enabled()) {
 		pr_warn_once("Failed to enable monitor mode on CPU %d.\n",
 				smp_processor_id());
 		return -EPERM;
@@ -277,6 +276,43 @@ out:
 	return 0;
 }
 
+static int disable_monitor_mode(void)
+{
+	u32 dscr;
+
+	ARM_DBG_READ(c0, c1, 0, dscr);
+
+	/* If monitor mode is already disabled, just return. */
+	if (!(dscr & ARM_DSCR_MDBGEN))
+		goto out;
+
+	/* Write to the corresponding DSCR. */
+	switch (get_debug_arch()) {
+	case ARM_DEBUG_ARCH_V6:
+	case ARM_DEBUG_ARCH_V6_1:
+		ARM_DBG_WRITE(c0, c1, 0, (dscr & ~ARM_DSCR_MDBGEN));
+		break;
+	case ARM_DEBUG_ARCH_V7_ECP14:
+	case ARM_DEBUG_ARCH_V7_1:
+	case ARM_DEBUG_ARCH_V8:
+		ARM_DBG_WRITE(c0, c2, 2, (dscr & ~ARM_DSCR_MDBGEN));
+		isb();
+		break;
+	default:
+		return -ENODEV;
+	}
+
+	/* Check that the write made it through. */
+	if (monitor_mode_enabled()) {
+		pr_warn_once("Failed to disable monitor mode on CPU %d.\n",
+				smp_processor_id());
+		return -EPERM;
+	}
+
+out:
+	return 0;
+}
+
 int hw_breakpoint_slots(int type)
 {
 	if (!debug_arch_supported())
@@ -338,6 +374,8 @@ int arch_install_hw_breakpoint(struct perf_event *bp)
 	int i, max_slots, ctrl_base, val_base;
 	u32 addr, ctrl;
 
+	enable_monitor_mode();
+
 	addr = info->address;
 	ctrl = encode_ctrl_reg(info->ctrl) | 0x1;
 
@@ -430,6 +468,8 @@ void arch_uninstall_hw_breakpoint(struct perf_event *bp)
 
 	/* Reset the control register. */
 	write_wb_reg(base + i, 0);
+
+	disable_monitor_mode();
 }
 
 static int get_hbp_len(u8 hbp_len)
@@ -598,9 +638,6 @@ int arch_validate_hwbkpt_settings(struct perf_event *bp)
 	int ret = 0;
 	u32 offset, alignment_mask = 0x3;
 
-	/* Ensure that we are in monitor debug mode. */
-	if (!monitor_mode_enabled())
-		return -ENODEV;
 
 	/* Build the arch_hw_breakpoint. */
 	ret = arch_build_bp_info(bp);
@@ -1013,12 +1050,12 @@ clear_vcr:
 	}
 
 	/*
-	 * Have a crack at enabling monitor mode. We don't actually need
-	 * it yet, but reporting an error early is useful if it fails.
+	 * We don't enable monitor mode here as we don't actually need it yet.
+	 * We will enable monitor mode when we actually install a hw breakpoint.
 	 */
 out_mdbgen:
-	if (enable_monitor_mode())
-		cpumask_or(&debug_err_mask, &debug_err_mask, cpumask_of(cpu));
+
+	return;
 }
 
 static int dbg_reset_notify(struct notifier_block *self,
-- 
1.7.12.4


^ permalink raw reply related	[flat|nested] 26+ messages in thread

* [PATCH v2 10/11] KVM: arm: implement lazy world switch for debug registers
  2015-05-31  4:27 [PATCH v2 00/11] KVM: arm: debug infrastructure support Zhichao Huang
                   ` (8 preceding siblings ...)
  2015-05-31  4:27 ` [PATCH v2 09/11] KVM: arm: disable debug mode if we don't actually need it Zhichao Huang
@ 2015-05-31  4:27 ` Zhichao Huang
  2015-05-31  4:27 ` [PATCH v2 11/11] KVM: arm: enable trapping of all " Zhichao Huang
  10 siblings, 0 replies; 26+ messages in thread
From: Zhichao Huang @ 2015-05-31  4:27 UTC (permalink / raw)
  To: kvm, linux-arm-kernel, kvmarm, christoffer.dall, marc.zyngier,
	alex.bennee, will.deacon
  Cc: huangzhichao, Zhichao Huang

Implement switching of the debug registers. While the number of
registers is massive, CPUs usually don't implement them all (the A15
has 6 breakpoints and 4 watchpoints, which gives us a total of "only"
22 registers).

Also, we only save/restore them when DBGDSCR has debug enabled, or
when we've flagged the debug registers as dirty. This means that, most
of the time, we only save/restore DBGDSCR.
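
For readability, what the read_hw_dbg_num macro below computes can be
written in C roughly as follows (illustrative only; the variable names
are made up):

/* Work out how many of the 16 architected breakpoint/watchpoint slots
 * are not implemented, so the save/restore macros can skip them. */
u32 didr, brps, wrps, bp_skip, wp_skip;

ARM_DBG_READ(c0, c0, 0, didr);		/* DBGDIDR */
brps = ((didr >> 24) & 0xf) + 1;	/* implemented breakpoints */
wrps = ((didr >> 28) & 0xf) + 1;	/* implemented watchpoints */
bp_skip = 16 - brps;			/* ends up in r11 */
wp_skip = 16 - wrps;			/* ends up in r12 */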

Signed-off-by: Zhichao Huang <zhichao.huang@linaro.org>
---
 arch/arm/kvm/interrupts.S      |  16 +++
 arch/arm/kvm/interrupts_head.S | 256 +++++++++++++++++++++++++++++++++++++++++
 2 files changed, 272 insertions(+)

diff --git a/arch/arm/kvm/interrupts.S b/arch/arm/kvm/interrupts.S
index 79caf79..d626275 100644
--- a/arch/arm/kvm/interrupts.S
+++ b/arch/arm/kvm/interrupts.S
@@ -116,6 +116,12 @@ ENTRY(__kvm_vcpu_run)
 	read_cp15_state store_to_vcpu = 0
 	write_cp15_state read_from_vcpu = 1
 
+	@ Store hardware CP14 state and load guest state
+	compute_debug_state 1f
+	bl __save_host_debug_regs
+	bl __restore_guest_debug_regs
+
+1:
 	@ If the host kernel has not been configured with VFPv3 support,
 	@ then it is safer if we deny guests from using it as well.
 #ifdef CONFIG_VFPv3
@@ -201,6 +207,16 @@ after_vfp_restore:
 	mrc	p15, 0, r2, c0, c0, 5
 	mcr	p15, 4, r2, c0, c0, 5
 
+	@ Store guest CP14 state and restore host state
+	skip_debug_state 1f
+	bl __save_guest_debug_regs
+	bl __restore_host_debug_regs
+	/* Clear the dirty flag for the next run, as all the state has
+	 * already been saved. Note that we nuke the whole 32bit word.
+	 * If we ever add more flags, we'll have to be more careful...
+	 */
+	clear_debug_dirty_bit
+1:
 	@ Store guest CP15 state and restore host state
 	read_cp15_state store_to_vcpu = 1
 	write_cp15_state read_from_vcpu = 0
diff --git a/arch/arm/kvm/interrupts_head.S b/arch/arm/kvm/interrupts_head.S
index 3a0128c..ed406be 100644
--- a/arch/arm/kvm/interrupts_head.S
+++ b/arch/arm/kvm/interrupts_head.S
@@ -7,6 +7,7 @@
 #define VCPU_USR_SP		(VCPU_USR_REG(13))
 #define VCPU_USR_LR		(VCPU_USR_REG(14))
 #define CP15_OFFSET(_cp15_reg_idx) (VCPU_CP15 + (_cp15_reg_idx * 4))
+#define CP14_OFFSET(_cp14_reg_idx) (VCPU_CP14 + ((_cp14_reg_idx) * 4))
 
 /*
  * Many of these macros need to access the VCPU structure, which is always
@@ -99,6 +100,10 @@ vcpu	.req	r0		@ vcpu pointer always in r0
 	mrs	r8, LR_fiq
 	mrs	r9, SPSR_fiq
 	push	{r2-r9}
+
+	/* DBGDSCR reg */
+	mrc	p14, 0, r2, c0, c1, 0
+	push	{r2}
 .endm
 
 .macro pop_host_regs_mode mode
@@ -113,6 +118,9 @@ vcpu	.req	r0		@ vcpu pointer always in r0
  * Clobbers all registers, in all modes, except r0 and r1.
  */
 .macro restore_host_regs
+	pop	{r2}
+	mcr	p14, 0, r2, c0, c2, 2
+
 	pop	{r2-r9}
 	msr	r8_fiq, r2
 	msr	r9_fiq, r3
@@ -161,6 +169,9 @@ vcpu	.req	r0		@ vcpu pointer always in r0
  * Clobbers *all* registers.
  */
 .macro restore_guest_regs
+	ldr	r2, [vcpu, #CP14_OFFSET(cp14_DBGDSCRext)]
+	mcr	p14, 0, r2, c0, c2, 2
+
 	restore_guest_regs_mode svc, #VCPU_SVC_REGS
 	restore_guest_regs_mode abt, #VCPU_ABT_REGS
 	restore_guest_regs_mode und, #VCPU_UND_REGS
@@ -239,6 +250,10 @@ vcpu	.req	r0		@ vcpu pointer always in r0
 	save_guest_regs_mode abt, #VCPU_ABT_REGS
 	save_guest_regs_mode und, #VCPU_UND_REGS
 	save_guest_regs_mode irq, #VCPU_IRQ_REGS
+
+	/* DBGDSCR reg */
+	mrc	p14, 0, r2, c0, c1, 0
+	str	r2, [vcpu, #CP14_OFFSET(cp14_DBGDSCRext)]
 .endm
 
 /* Reads cp15 registers from hardware and stores them in memory
@@ -438,6 +453,231 @@ vcpu	.req	r0		@ vcpu pointer always in r0
 	str	r5, [vcpu, #VCPU_DEBUG_FLAGS]
 .endm
 
+/* Assume r11/r12 in use, clobbers r2-r10 */
+.macro cp14_read_and_push Op2 skip_num
+	cmp	\skip_num, #8
+	// if (skip_num >= 8) then skip c8-c15 directly
+	bge	1f
+	adr	r2, 9998f
+	add	r2, r2, \skip_num, lsl #2
+	bx	r2
+1:
+	adr	r2, 9999f
+	sub	r3, \skip_num, #8
+	add	r2, r2, r3, lsl #2
+	bx	r2
+9998:
+	mrc	p14, 0, r10, c0, c15, \Op2
+	mrc	p14, 0, r9, c0, c14, \Op2
+	mrc	p14, 0, r8, c0, c13, \Op2
+	mrc	p14, 0, r7, c0, c12, \Op2
+	mrc	p14, 0, r6, c0, c11, \Op2
+	mrc	p14, 0, r5, c0, c10, \Op2
+	mrc	p14, 0, r4, c0, c9, \Op2
+	mrc	p14, 0, r3, c0, c8, \Op2
+	push	{r3-r10}
+9999:
+	mrc	p14, 0, r10, c0, c7, \Op2
+	mrc	p14, 0, r9, c0, c6, \Op2
+	mrc	p14, 0, r8, c0, c5, \Op2
+	mrc	p14, 0, r7, c0, c4, \Op2
+	mrc	p14, 0, r6, c0, c3, \Op2
+	mrc	p14, 0, r5, c0, c2, \Op2
+	mrc	p14, 0, r4, c0, c1, \Op2
+	mrc	p14, 0, r3, c0, c0, \Op2
+	push	{r3-r10}
+.endm
+
+/* Assume r11/r12 in use, clobbers r2-r10 */
+.macro cp14_pop_and_write Op2 skip_num
+	cmp	\skip_num, #8
+	// if (skip_num >= 8) then skip c8-c15 directly
+	bge	1f
+	adr	r2, 9998f
+	add	r2, r2, \skip_num, lsl #2
+	pop	{r3-r10}
+	bx	r2
+1:
+	adr	r2, 9999f
+	sub	r3, \skip_num, #8
+	add	r2, r2, r3, lsl #2
+	pop	{r3-r10}
+	bx	r2
+
+9998:
+	mcr	p14, 0, r10, c0, c15, \Op2
+	mcr	p14, 0, r9, c0, c14, \Op2
+	mcr	p14, 0, r8, c0, c13, \Op2
+	mcr	p14, 0, r7, c0, c12, \Op2
+	mcr	p14, 0, r6, c0, c11, \Op2
+	mcr	p14, 0, r5, c0, c10, \Op2
+	mcr	p14, 0, r4, c0, c9, \Op2
+	mcr	p14, 0, r3, c0, c8, \Op2
+
+	pop	{r3-r10}
+9999:
+	mcr	p14, 0, r10, c0, c7, \Op2
+	mcr	p14, 0, r9, c0, c6, \Op2
+	mcr	p14, 0, r8, c0, c5, \Op2
+	mcr	p14, 0, r7, c0, c4, \Op2
+	mcr	p14, 0, r6, c0, c3, \Op2
+	mcr	p14, 0, r5, c0, c2, \Op2
+	mcr	p14, 0, r4, c0, c1, \Op2
+	mcr	p14, 0, r3, c0, c0, \Op2
+.endm
+
+/* Assume r11/r12 in use, clobbers r2-r3 */
+.macro cp14_read_and_str Op2 cp14_reg0 skip_num
+	adr	r3, 1f
+	add	r3, r3, \skip_num, lsl #3
+	bx	r3
+1:
+	mrc	p14, 0, r2, c0, c15, \Op2
+	str     r2, [vcpu, #CP14_OFFSET(\cp14_reg0+15)]
+	mrc	p14, 0, r2, c0, c14, \Op2
+	str     r2, [vcpu, #CP14_OFFSET(\cp14_reg0+14)]
+	mrc	p14, 0, r2, c0, c13, \Op2
+	str     r2, [vcpu, #CP14_OFFSET(\cp14_reg0+13)]
+	mrc	p14, 0, r2, c0, c12, \Op2
+	str     r2, [vcpu, #CP14_OFFSET(\cp14_reg0+12)]
+	mrc	p14, 0, r2, c0, c11, \Op2
+	str     r2, [vcpu, #CP14_OFFSET(\cp14_reg0+11)]
+	mrc	p14, 0, r2, c0, c10, \Op2
+	str     r2, [vcpu, #CP14_OFFSET(\cp14_reg0+10)]
+	mrc	p14, 0, r2, c0, c9, \Op2
+	str     r2, [vcpu, #CP14_OFFSET(\cp14_reg0+9)]
+	mrc	p14, 0, r2, c0, c8, \Op2
+	str     r2, [vcpu, #CP14_OFFSET(\cp14_reg0+8)]
+	mrc	p14, 0, r2, c0, c7, \Op2
+	str     r2, [vcpu, #CP14_OFFSET(\cp14_reg0+7)]
+	mrc	p14, 0, r2, c0, c6, \Op2
+	str     r2, [vcpu, #CP14_OFFSET(\cp14_reg0+6)]
+	mrc	p14, 0, r2, c0, c5, \Op2
+	str     r2, [vcpu, #CP14_OFFSET(\cp14_reg0+5)]
+	mrc	p14, 0, r2, c0, c4, \Op2
+	str     r2, [vcpu, #CP14_OFFSET(\cp14_reg0+4)]
+	mrc	p14, 0, r2, c0, c3, \Op2
+	str     r2, [vcpu, #CP14_OFFSET(\cp14_reg0+3)]
+	mrc	p14, 0, r2, c0, c2, \Op2
+	str     r2, [vcpu, #CP14_OFFSET(\cp14_reg0+2)]
+	mrc	p14, 0, r2, c0, c1, \Op2
+	str     r2, [vcpu, #CP14_OFFSET(\cp14_reg0+1)]
+	mrc	p14, 0, r2, c0, c0, \Op2
+	str     r2, [vcpu, #CP14_OFFSET(\cp14_reg0)]
+.endm
+
+/* Assume r11/r12 in use, clobbers r2-r3 */
+.macro cp14_ldr_and_write Op2 cp14_reg0 skip_num
+	adr	r3, 1f
+	add	r3, r3, \skip_num, lsl #3
+	bx	r3
+1:
+	ldr     r2, [vcpu, #CP14_OFFSET(\cp14_reg0+15)]
+	mcr	p14, 0, r2, c0, c15, \Op2
+	ldr     r2, [vcpu, #CP14_OFFSET(\cp14_reg0+14)]
+	mcr	p14, 0, r2, c0, c14, \Op2
+	ldr     r2, [vcpu, #CP14_OFFSET(\cp14_reg0+13)]
+	mcr	p14, 0, r2, c0, c13, \Op2
+	ldr     r2, [vcpu, #CP14_OFFSET(\cp14_reg0+12)]
+	mcr	p14, 0, r2, c0, c12, \Op2
+	ldr     r2, [vcpu, #CP14_OFFSET(\cp14_reg0+11)]
+	mcr	p14, 0, r2, c0, c11, \Op2
+	ldr     r2, [vcpu, #CP14_OFFSET(\cp14_reg0+10)]
+	mcr	p14, 0, r2, c0, c10, \Op2
+	ldr     r2, [vcpu, #CP14_OFFSET(\cp14_reg0+9)]
+	mcr	p14, 0, r2, c0, c9, \Op2
+	ldr     r2, [vcpu, #CP14_OFFSET(\cp14_reg0+8)]
+	mcr	p14, 0, r2, c0, c8, \Op2
+	ldr     r2, [vcpu, #CP14_OFFSET(\cp14_reg0+7)]
+	mcr	p14, 0, r2, c0, c7, \Op2
+	ldr     r2, [vcpu, #CP14_OFFSET(\cp14_reg0+6)]
+	mcr	p14, 0, r2, c0, c6, \Op2
+	ldr     r2, [vcpu, #CP14_OFFSET(\cp14_reg0+5)]
+	mcr	p14, 0, r2, c0, c5, \Op2
+	ldr     r2, [vcpu, #CP14_OFFSET(\cp14_reg0+4)]
+	mcr	p14, 0, r2, c0, c4, \Op2
+	ldr     r2, [vcpu, #CP14_OFFSET(\cp14_reg0+3)]
+	mcr	p14, 0, r2, c0, c3, \Op2
+	ldr     r2, [vcpu, #CP14_OFFSET(\cp14_reg0+2)]
+	mcr	p14, 0, r2, c0, c2, \Op2
+	ldr     r2, [vcpu, #CP14_OFFSET(\cp14_reg0+1)]
+	mcr	p14, 0, r2, c0, c1, \Op2
+	ldr     r2, [vcpu, #CP14_OFFSET(\cp14_reg0)]
+	mcr	p14, 0, r2, c0, c0, \Op2
+.endm
+
+/* Get the number of BRPs and WRPs to skip. Saved in r11/r12 */
+.macro read_hw_dbg_num
+	mrc	p14, 0, r2, c0, c0, 0
+	ubfx	r11, r2, #24, #4
+	add	r11, r11, #1		// Extract BRPs
+	ubfx	r12, r2, #28, #4
+	add	r12, r12, #1		// Extract WRPs
+	mov	r2, #16
+	sub	r11, r2, r11		// How many BPs to skip
+	sub	r12, r2, r12		// How many WPs to skip
+.endm
+
+/* Reads cp14 registers from hardware.
+ * Writes cp14 registers in-order to the stack.
+ *
+ * Assumes vcpu pointer in vcpu reg
+ *
+ * Clobbers r2-r12
+ */
+.macro save_host_debug_regs
+	read_hw_dbg_num
+	cp14_read_and_push #4, r11	@ DBGBVR
+	cp14_read_and_push #5, r11	@ DBGBCR
+	cp14_read_and_push #6, r12	@ DBGWVR
+	cp14_read_and_push #7, r12	@ DBGWCR
+.endm
+
+/* Reads cp14 registers from hardware.
+ * Writes cp14 registers in-order to the VCPU struct pointed to by vcpup.
+ *
+ * Assumes vcpu pointer in vcpu reg
+ *
+ * Clobbers r2-r12
+ */
+.macro save_guest_debug_regs
+	read_hw_dbg_num
+	cp14_read_and_str #4, cp14_DBGBVR0, r11
+	cp14_read_and_str #5, cp14_DBGBCR0, r11
+	cp14_read_and_str #6, cp14_DBGWVR0, r12
+	cp14_read_and_str #7, cp14_DBGWCR0, r12
+.endm
+
+/* Reads cp14 registers in-order from the stack.
+ * Writes cp14 registers to hardware.
+ *
+ * Assumes vcpu pointer in vcpu reg
+ *
+ * Clobbers r2-r12
+ */
+.macro restore_host_debug_regs
+	read_hw_dbg_num
+	cp14_pop_and_write #4, r11	@ DBGBVR
+	cp14_pop_and_write #5, r11	@ DBGBCR
+	cp14_pop_and_write #6, r12	@ DBGWVR
+	cp14_pop_and_write #7, r12	@ DBGWCR
+.endm
+
+/* Reads cp14 registers in-order from the VCPU struct pointed to by vcpup
+ * Writes cp14 registers to hardware.
+ *
+ * Assumes vcpu pointer in vcpu reg
+ *
+ * Clobbers r2-r12
+ */
+.macro restore_guest_debug_regs
+	read_hw_dbg_num
+	cp14_ldr_and_write #4, cp14_DBGBVR0, r11
+	cp14_ldr_and_write #5, cp14_DBGBCR0, r11
+	cp14_ldr_and_write #6, cp14_DBGWVR0, r12
+	cp14_ldr_and_write #7, cp14_DBGWCR0, r12
+.endm
+
 /*
  * Save the VGIC CPU state into memory
  *
@@ -673,3 +913,19 @@ ARM_BE8(rev	r6, r6  )
 .macro load_vcpu
 	mrc	p15, 4, vcpu, c13, c0, 2	@ HTPIDR
 .endm
+
+__save_host_debug_regs:
+	save_host_debug_regs
+	bx	lr
+
+__save_guest_debug_regs:
+	save_guest_debug_regs
+	bx	lr
+
+__restore_host_debug_regs:
+	restore_host_debug_regs
+	bx	lr
+
+__restore_guest_debug_regs:
+	restore_guest_debug_regs
+	bx	lr
-- 
1.7.12.4

^ permalink raw reply related	[flat|nested] 26+ messages in thread

* [PATCH v2 11/11] KVM: arm: enable trapping of all debug registers
  2015-05-31  4:27 [PATCH v2 00/11] KVM: arm: debug infrastructure support Zhichao Huang
                   ` (9 preceding siblings ...)
  2015-05-31  4:27 ` [PATCH v2 10/11] KVM: arm: implement lazy world switch for debug registers Zhichao Huang
@ 2015-05-31  4:27 ` Zhichao Huang
  10 siblings, 0 replies; 26+ messages in thread
From: Zhichao Huang @ 2015-05-31  4:27 UTC (permalink / raw)
  To: kvm, linux-arm-kernel, kvmarm, christoffer.dall, marc.zyngier,
	alex.bennee, will.deacon
  Cc: huangzhichao, Zhichao Huang

Enable trapping of the debug registers, allowing guests to use
the debug infrastructure.

Signed-off-by: Zhichao Huang <zhichao.huang@linaro.org>
---
 arch/arm/kvm/interrupts_head.S | 15 +++++++++++++--
 1 file changed, 13 insertions(+), 2 deletions(-)

diff --git a/arch/arm/kvm/interrupts_head.S b/arch/arm/kvm/interrupts_head.S
index ed406be..107bda4 100644
--- a/arch/arm/kvm/interrupts_head.S
+++ b/arch/arm/kvm/interrupts_head.S
@@ -886,10 +886,21 @@ ARM_BE8(rev	r6, r6  )
 .endm
 
 /* Configures the HDCR (Hyp Debug Configuration Register) on entry/return
- * (hardware reset value is 0) */
+ * (hardware reset value is 0)
+ *
+ * Clobbers r2-r4
+ */
 .macro set_hdcr operation
 	mrc	p15, 4, r2, c1, c1, 1
-	ldr	r3, =(HDCR_TPM|HDCR_TPMCR)
+	ldr	r3, =(HDCR_TPM|HDCR_TPMCR|HDCR_TDRA|HDCR_TDOSA)
+
+	// Check for KVM_ARM_DEBUG_DIRTY, and set debug to trap
+	// if not dirty.
+	ldr	r4, [vcpu, #VCPU_DEBUG_FLAGS]
+	cmp	r4, #KVM_ARM_DEBUG_DIRTY
+	beq	1f
+	orr	r3, r3,  #HDCR_TDA
+1:
 	.if \operation == vmentry
 	orr	r2, r2, r3		@ Trap some perfmon accesses
 	.else
-- 
1.7.12.4


^ permalink raw reply related	[flat|nested] 26+ messages in thread

* Re: [PATCH v2 09/11] KVM: arm: disable debug mode if we don't actually need it.
  2015-05-31  4:27 ` [PATCH v2 09/11] KVM: arm: disable debug mode if we don't actually need it Zhichao Huang
@ 2015-06-01 10:16   ` Will Deacon
  2015-06-07 14:08     ` zichao
  0 siblings, 1 reply; 26+ messages in thread
From: Will Deacon @ 2015-06-01 10:16 UTC (permalink / raw)
  To: Zhichao Huang; +Cc: kvm, Marc Zyngier, huangzhichao, linux-arm-kernel, kvmarm

On Sun, May 31, 2015 at 05:27:10AM +0100, Zhichao Huang wrote:
> Until now we enable debug mode all the time even if we don't
> actually need it.
> 
> Inspired by the implementation in arm64, disable debug mode if
> we don't need it. And then we are able to reduce unnecessary
> registers saving/restoring when the debug mode is disabled.

I'm terrified about this patch. Enabling monitor mode has proven to be
*extremely* fragile in practice on 32-bit ARM SoCs, so trying to do this
more often makes me very nervous.

> Signed-off-by: Zhichao Huang <zhichao.huang@linaro.org>
> ---
>  arch/arm/kernel/hw_breakpoint.c | 55 ++++++++++++++++++++++++++++++++++-------
>  1 file changed, 46 insertions(+), 9 deletions(-)
> 
> diff --git a/arch/arm/kernel/hw_breakpoint.c b/arch/arm/kernel/hw_breakpoint.c
> index dc7d0a9..1d27563 100644
> --- a/arch/arm/kernel/hw_breakpoint.c
> +++ b/arch/arm/kernel/hw_breakpoint.c
> @@ -266,8 +266,7 @@ static int enable_monitor_mode(void)
>  	}
>  
>  	/* Check that the write made it through. */
> -	ARM_DBG_READ(c0, c1, 0, dscr);
> -	if (!(dscr & ARM_DSCR_MDBGEN)) {
> +	if (!monitor_mode_enabled()) {
>  		pr_warn_once("Failed to enable monitor mode on CPU %d.\n",
>  				smp_processor_id());
>  		return -EPERM;

Ok, this hunk is harmless :)

> @@ -277,6 +276,43 @@ out:
>  	return 0;
>  }
>  
> +static int disable_monitor_mode(void)
> +{
> +	u32 dscr;
> +
> +	ARM_DBG_READ(c0, c1, 0, dscr);
> +
> +	/* If monitor mode is already disabled, just return. */
> +	if (!(dscr & ARM_DSCR_MDBGEN))
> +		goto out;
> +
> +	/* Write to the corresponding DSCR. */
> +	switch (get_debug_arch()) {
> +	case ARM_DEBUG_ARCH_V6:
> +	case ARM_DEBUG_ARCH_V6_1:
> +		ARM_DBG_WRITE(c0, c1, 0, (dscr & ~ARM_DSCR_MDBGEN));
> +		break;
> +	case ARM_DEBUG_ARCH_V7_ECP14:
> +	case ARM_DEBUG_ARCH_V7_1:
> +	case ARM_DEBUG_ARCH_V8:
> +		ARM_DBG_WRITE(c0, c2, 2, (dscr & ~ARM_DSCR_MDBGEN));
> +		isb();
> +		break;
> +	default:
> +		return -ENODEV;
> +	}
> +
> +	/* Check that the write made it through. */
> +	if (monitor_mode_enabled()) {
> +		pr_warn_once("Failed to disable monitor mode on CPU %d.\n",
> +				smp_processor_id());
> +		return -EPERM;
> +	}
> +
> +out:
> +	return 0;
> +}

I'm not comfortable with this. enable_monitor_mode has precisely one caller
[reset_ctrl_regs] which goes to some lengths to get the system into a
well-defined state. On top of that, the whole thing is run with an undef
hook registered because there isn't an architected way to discover whether
or not DBGSWENABLE is driven low.

Why exactly do you need this? Can you not trap guest debug accesses some
other way?

>  int hw_breakpoint_slots(int type)
>  {
>  	if (!debug_arch_supported())
> @@ -338,6 +374,8 @@ int arch_install_hw_breakpoint(struct perf_event *bp)
>  	int i, max_slots, ctrl_base, val_base;
>  	u32 addr, ctrl;
>  
> +	enable_monitor_mode();
> +
>  	addr = info->address;
>  	ctrl = encode_ctrl_reg(info->ctrl) | 0x1;
>  
> @@ -430,6 +468,8 @@ void arch_uninstall_hw_breakpoint(struct perf_event *bp)
>  
>  	/* Reset the control register. */
>  	write_wb_reg(base + i, 0);
> +
> +	disable_monitor_mode();

My previous concerns notwithstanding, shouldn't this be refcounted?
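
Something along these lines, purely as an illustration (the helper names are
made up, not something from the posted series):

	static atomic_t monitor_mode_refcount = ATOMIC_INIT(0);

	/* called from arch_install_hw_breakpoint() */
	static void get_monitor_mode(void)
	{
		if (atomic_inc_return(&monitor_mode_refcount) == 1)
			enable_monitor_mode();
	}

	/* called from arch_uninstall_hw_breakpoint() */
	static void put_monitor_mode(void)
	{
		if (atomic_dec_and_test(&monitor_mode_refcount))
			disable_monitor_mode();
	}

That way monitor mode would only be toggled on the first install and the last
uninstall, rather than on every breakpoint operation.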

Will

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [PATCH v2 01/11] KVM: arm: plug guest debug exploit
  2015-05-31  4:27 ` [PATCH v2 01/11] KVM: arm: plug guest debug exploit Zhichao Huang
@ 2015-06-01 10:56   ` Marc Zyngier
  2015-06-07 13:40     ` zichao
  0 siblings, 1 reply; 26+ messages in thread
From: Marc Zyngier @ 2015-06-01 10:56 UTC (permalink / raw)
  To: Zhichao Huang, kvm, linux-arm-kernel, kvmarm, christoffer.dall,
	alex.bennee, Will Deacon
  Cc: huangzhichao, stable

Hi Zhichao,

On 31/05/15 05:27, Zhichao Huang wrote:
> Hardware debugging in guests is not intercepted currently, it means
> that a malicious guest can bring down the entire machine by writing
> to the debug registers.
> 
> This patch enable trapping of all debug registers, preventing the guests
> to mess with the host state.
> 
> However, it is a precursor for later patches which will need to do
> more to world switch debug states while necessary.
> 
> Cc: <stable@vger.kernel.org>
> Signed-off-by: Zhichao Huang <zhichao.huang@linaro.org>
> ---
>  arch/arm/include/asm/kvm_coproc.h |  3 +-
>  arch/arm/kvm/coproc.c             | 60 +++++++++++++++++++++++++++++++++++----
>  arch/arm/kvm/handle_exit.c        |  4 +--
>  arch/arm/kvm/interrupts_head.S    |  2 +-
>  4 files changed, 59 insertions(+), 10 deletions(-)
> 
> diff --git a/arch/arm/include/asm/kvm_coproc.h b/arch/arm/include/asm/kvm_coproc.h
> index 4917c2f..e74ab0f 100644
> --- a/arch/arm/include/asm/kvm_coproc.h
> +++ b/arch/arm/include/asm/kvm_coproc.h
> @@ -31,7 +31,8 @@ void kvm_register_target_coproc_table(struct kvm_coproc_target_table *table);
>  int kvm_handle_cp10_id(struct kvm_vcpu *vcpu, struct kvm_run *run);
>  int kvm_handle_cp_0_13_access(struct kvm_vcpu *vcpu, struct kvm_run *run);
>  int kvm_handle_cp14_load_store(struct kvm_vcpu *vcpu, struct kvm_run *run);
> -int kvm_handle_cp14_access(struct kvm_vcpu *vcpu, struct kvm_run *run);
> +int kvm_handle_cp14_32(struct kvm_vcpu *vcpu, struct kvm_run *run);
> +int kvm_handle_cp14_64(struct kvm_vcpu *vcpu, struct kvm_run *run);
>  int kvm_handle_cp15_32(struct kvm_vcpu *vcpu, struct kvm_run *run);
>  int kvm_handle_cp15_64(struct kvm_vcpu *vcpu, struct kvm_run *run);
>  
> diff --git a/arch/arm/kvm/coproc.c b/arch/arm/kvm/coproc.c
> index f3d88dc..2e12760 100644
> --- a/arch/arm/kvm/coproc.c
> +++ b/arch/arm/kvm/coproc.c
> @@ -91,12 +91,6 @@ int kvm_handle_cp14_load_store(struct kvm_vcpu *vcpu, struct kvm_run *run)
>  	return 1;
>  }
>  
> -int kvm_handle_cp14_access(struct kvm_vcpu *vcpu, struct kvm_run *run)
> -{
> -	kvm_inject_undefined(vcpu);
> -	return 1;
> -}
> -
>  static void reset_mpidr(struct kvm_vcpu *vcpu, const struct coproc_reg *r)
>  {
>  	/*
> @@ -519,6 +513,60 @@ int kvm_handle_cp15_32(struct kvm_vcpu *vcpu, struct kvm_run *run)
>  	return emulate_cp15(vcpu, &params);
>  }
>  
> +/**
> + * kvm_handle_cp14_64 -- handles a mrrc/mcrr trap on a guest CP14 access
> + * @vcpu: The VCPU pointer
> + * @run:  The kvm_run struct
> + */
> +int kvm_handle_cp14_64(struct kvm_vcpu *vcpu, struct kvm_run *run)
> +{
> +	struct coproc_params params;
> +
> +	params.CRn = (kvm_vcpu_get_hsr(vcpu) >> 1) & 0xf;
> +	params.Rt1 = (kvm_vcpu_get_hsr(vcpu) >> 5) & 0xf;
> +	params.is_write = ((kvm_vcpu_get_hsr(vcpu) & 1) == 0);
> +	params.is_64bit = true;
> +
> +	params.Op1 = (kvm_vcpu_get_hsr(vcpu) >> 16) & 0xf;
> +	params.Op2 = 0;
> +	params.Rt2 = (kvm_vcpu_get_hsr(vcpu) >> 10) & 0xf;
> +	params.CRm = 0;
> +
> +	/* raz_wi */
> +	(void)pm_fake(vcpu, &params, NULL);
> +
> +	/* handled */
> +	kvm_skip_instr(vcpu, kvm_vcpu_trap_il_is32bit(vcpu));
> +	return 1;
> +}
> +
> +/**
> + * kvm_handle_cp14_32 -- handles a mrc/mcr trap on a guest CP14 access
> + * @vcpu: The VCPU pointer
> + * @run:  The kvm_run struct
> + */
> +int kvm_handle_cp14_32(struct kvm_vcpu *vcpu, struct kvm_run *run)
> +{
> +	struct coproc_params params;
> +
> +	params.CRm = (kvm_vcpu_get_hsr(vcpu) >> 1) & 0xf;
> +	params.Rt1 = (kvm_vcpu_get_hsr(vcpu) >> 5) & 0xf;
> +	params.is_write = ((kvm_vcpu_get_hsr(vcpu) & 1) == 0);
> +	params.is_64bit = false;
> +
> +	params.CRn = (kvm_vcpu_get_hsr(vcpu) >> 10) & 0xf;
> +	params.Op1 = (kvm_vcpu_get_hsr(vcpu) >> 14) & 0x7;
> +	params.Op2 = (kvm_vcpu_get_hsr(vcpu) >> 17) & 0x7;
> +	params.Rt2 = 0;
> +
> +	/* raz_wi */
> +	(void)pm_fake(vcpu, &params, NULL);
> +
> +	/* handled */
> +	kvm_skip_instr(vcpu, kvm_vcpu_trap_il_is32bit(vcpu));
> +	return 1;
> +}
> +
>  /******************************************************************************
>   * Userspace API
>   *****************************************************************************/
> diff --git a/arch/arm/kvm/handle_exit.c b/arch/arm/kvm/handle_exit.c
> index 95f12b2..357ad1b 100644
> --- a/arch/arm/kvm/handle_exit.c
> +++ b/arch/arm/kvm/handle_exit.c
> @@ -104,9 +104,9 @@ static exit_handle_fn arm_exit_handlers[] = {
>  	[HSR_EC_WFI]		= kvm_handle_wfx,
>  	[HSR_EC_CP15_32]	= kvm_handle_cp15_32,
>  	[HSR_EC_CP15_64]	= kvm_handle_cp15_64,
> -	[HSR_EC_CP14_MR]	= kvm_handle_cp14_access,
> +	[HSR_EC_CP14_MR]	= kvm_handle_cp14_32,
>  	[HSR_EC_CP14_LS]	= kvm_handle_cp14_load_store,
> -	[HSR_EC_CP14_64]	= kvm_handle_cp14_access,
> +	[HSR_EC_CP14_64]	= kvm_handle_cp14_64,
>  	[HSR_EC_CP_0_13]	= kvm_handle_cp_0_13_access,
>  	[HSR_EC_CP10_ID]	= kvm_handle_cp10_id,
>  	[HSR_EC_SVC_HYP]	= handle_svc_hyp,
> diff --git a/arch/arm/kvm/interrupts_head.S b/arch/arm/kvm/interrupts_head.S
> index 35e4a3a..a9f3a56 100644
> --- a/arch/arm/kvm/interrupts_head.S
> +++ b/arch/arm/kvm/interrupts_head.S
> @@ -607,7 +607,7 @@ ARM_BE8(rev	r6, r6  )
>   * (hardware reset value is 0) */
>  .macro set_hdcr operation
>  	mrc	p15, 4, r2, c1, c1, 1
> -	ldr	r3, =(HDCR_TPM|HDCR_TPMCR)
> +	ldr	r3, =(HDCR_TPM|HDCR_TPMCR|HDCR_TDRA|HDCR_TDOSA|HDCR_TDA)

There is a small problem here. Imagine the host has programmed a
watchpoint on some VA. We switch to the guest, and then access the same
VA. At that stage, the guest will take the debug exception. That's
really not pretty.

I think using HDCR_TDE as well should sort it, effectively preventing
the exception from being delivered to the guest, but you will need to
handle this on the HYP side. Performance wise, this is also really horrible.

A better way would be to disable the host's BPs/WPs if any is enabled.
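
Something like this, as a rough sketch (only slot 0 shown; a real version
would loop over the implemented BRPs/WRPs and save/restore the host values
around the guest run):

	u32 bcr, wcr;

	/* DBGBCR0: clear the E bit so a host breakpoint can't fire in the guest */
	ARM_DBG_READ(c0, c0, 5, bcr);
	ARM_DBG_WRITE(c0, c0, 5, bcr & ~0x1);

	/* DBGWCR0: likewise for a host watchpoint */
	ARM_DBG_READ(c0, c0, 7, wcr);
	ARM_DBG_WRITE(c0, c0, 7, wcr & ~0x1);

	isb();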

>  	.if \operation == vmentry
>  	orr	r2, r2, r3		@ Trap some perfmon accesses
>  	.else
> 

Thanks,

	M.
-- 
Jazz is not dead. It just smells funny...

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [PATCH v2 01/11] KVM: arm: plug guest debug exploit
  2015-06-01 10:56   ` Marc Zyngier
@ 2015-06-07 13:40     ` zichao
  2015-06-09 10:29       ` Marc Zyngier
  0 siblings, 1 reply; 26+ messages in thread
From: zichao @ 2015-06-07 13:40 UTC (permalink / raw)
  To: Marc Zyngier, kvm, linux-arm-kernel, kvmarm, christoffer.dall,
	alex.bennee, Will Deacon
  Cc: huangzhichao, stable

Hi, Marc,

On 2015/6/1 18:56, Marc Zyngier wrote:
> Hi Zhichao,
> 
> On 31/05/15 05:27, Zhichao Huang wrote:
>> Hardware debugging in guests is not intercepted currently, it means
>> that a malicious guest can bring down the entire machine by writing
>> to the debug registers.
>>
>> This patch enable trapping of all debug registers, preventing the guests
>> to mess with the host state.
>>
>> However, it is a precursor for later patches which will need to do
>> more to world switch debug states while necessary.
>>
>> Cc: <stable@vger.kernel.org>
>> Signed-off-by: Zhichao Huang <zhichao.huang@linaro.org>
>> ---
>>  arch/arm/include/asm/kvm_coproc.h |  3 +-
>>  arch/arm/kvm/coproc.c             | 60 +++++++++++++++++++++++++++++++++++----
>>  arch/arm/kvm/handle_exit.c        |  4 +--
>>  arch/arm/kvm/interrupts_head.S    |  2 +-
>>  4 files changed, 59 insertions(+), 10 deletions(-)
>>
>> diff --git a/arch/arm/include/asm/kvm_coproc.h b/arch/arm/include/asm/kvm_coproc.h
>> index 4917c2f..e74ab0f 100644
>> --- a/arch/arm/include/asm/kvm_coproc.h
>> +++ b/arch/arm/include/asm/kvm_coproc.h
>> @@ -31,7 +31,8 @@ void kvm_register_target_coproc_table(struct kvm_coproc_target_table *table);
>>  int kvm_handle_cp10_id(struct kvm_vcpu *vcpu, struct kvm_run *run);
>>  int kvm_handle_cp_0_13_access(struct kvm_vcpu *vcpu, struct kvm_run *run);
>>  int kvm_handle_cp14_load_store(struct kvm_vcpu *vcpu, struct kvm_run *run);
>> -int kvm_handle_cp14_access(struct kvm_vcpu *vcpu, struct kvm_run *run);
>> +int kvm_handle_cp14_32(struct kvm_vcpu *vcpu, struct kvm_run *run);
>> +int kvm_handle_cp14_64(struct kvm_vcpu *vcpu, struct kvm_run *run);
>>  int kvm_handle_cp15_32(struct kvm_vcpu *vcpu, struct kvm_run *run);
>>  int kvm_handle_cp15_64(struct kvm_vcpu *vcpu, struct kvm_run *run);
>>  
>> diff --git a/arch/arm/kvm/coproc.c b/arch/arm/kvm/coproc.c
>> index f3d88dc..2e12760 100644
>> --- a/arch/arm/kvm/coproc.c
>> +++ b/arch/arm/kvm/coproc.c
>> @@ -91,12 +91,6 @@ int kvm_handle_cp14_load_store(struct kvm_vcpu *vcpu, struct kvm_run *run)
>>  	return 1;
>>  }
>>  
>> -int kvm_handle_cp14_access(struct kvm_vcpu *vcpu, struct kvm_run *run)
>> -{
>> -	kvm_inject_undefined(vcpu);
>> -	return 1;
>> -}
>> -
>>  static void reset_mpidr(struct kvm_vcpu *vcpu, const struct coproc_reg *r)
>>  {
>>  	/*
>> @@ -519,6 +513,60 @@ int kvm_handle_cp15_32(struct kvm_vcpu *vcpu, struct kvm_run *run)
>>  	return emulate_cp15(vcpu, &params);
>>  }
>>  
>> +/**
>> + * kvm_handle_cp14_64 -- handles a mrrc/mcrr trap on a guest CP14 access
>> + * @vcpu: The VCPU pointer
>> + * @run:  The kvm_run struct
>> + */
>> +int kvm_handle_cp14_64(struct kvm_vcpu *vcpu, struct kvm_run *run)
>> +{
>> +	struct coproc_params params;
>> +
>> +	params.CRn = (kvm_vcpu_get_hsr(vcpu) >> 1) & 0xf;
>> +	params.Rt1 = (kvm_vcpu_get_hsr(vcpu) >> 5) & 0xf;
>> +	params.is_write = ((kvm_vcpu_get_hsr(vcpu) & 1) == 0);
>> +	params.is_64bit = true;
>> +
>> +	params.Op1 = (kvm_vcpu_get_hsr(vcpu) >> 16) & 0xf;
>> +	params.Op2 = 0;
>> +	params.Rt2 = (kvm_vcpu_get_hsr(vcpu) >> 10) & 0xf;
>> +	params.CRm = 0;
>> +
>> +	/* raz_wi */
>> +	(void)pm_fake(vcpu, &params, NULL);
>> +
>> +	/* handled */
>> +	kvm_skip_instr(vcpu, kvm_vcpu_trap_il_is32bit(vcpu));
>> +	return 1;
>> +}
>> +
>> +/**
>> + * kvm_handle_cp14_32 -- handles a mrc/mcr trap on a guest CP14 access
>> + * @vcpu: The VCPU pointer
>> + * @run:  The kvm_run struct
>> + */
>> +int kvm_handle_cp14_32(struct kvm_vcpu *vcpu, struct kvm_run *run)
>> +{
>> +	struct coproc_params params;
>> +
>> +	params.CRm = (kvm_vcpu_get_hsr(vcpu) >> 1) & 0xf;
>> +	params.Rt1 = (kvm_vcpu_get_hsr(vcpu) >> 5) & 0xf;
>> +	params.is_write = ((kvm_vcpu_get_hsr(vcpu) & 1) == 0);
>> +	params.is_64bit = false;
>> +
>> +	params.CRn = (kvm_vcpu_get_hsr(vcpu) >> 10) & 0xf;
>> +	params.Op1 = (kvm_vcpu_get_hsr(vcpu) >> 14) & 0x7;
>> +	params.Op2 = (kvm_vcpu_get_hsr(vcpu) >> 17) & 0x7;
>> +	params.Rt2 = 0;
>> +
>> +	/* raz_wi */
>> +	(void)pm_fake(vcpu, &params, NULL);
>> +
>> +	/* handled */
>> +	kvm_skip_instr(vcpu, kvm_vcpu_trap_il_is32bit(vcpu));
>> +	return 1;
>> +}
>> +
>>  /******************************************************************************
>>   * Userspace API
>>   *****************************************************************************/
>> diff --git a/arch/arm/kvm/handle_exit.c b/arch/arm/kvm/handle_exit.c
>> index 95f12b2..357ad1b 100644
>> --- a/arch/arm/kvm/handle_exit.c
>> +++ b/arch/arm/kvm/handle_exit.c
>> @@ -104,9 +104,9 @@ static exit_handle_fn arm_exit_handlers[] = {
>>  	[HSR_EC_WFI]		= kvm_handle_wfx,
>>  	[HSR_EC_CP15_32]	= kvm_handle_cp15_32,
>>  	[HSR_EC_CP15_64]	= kvm_handle_cp15_64,
>> -	[HSR_EC_CP14_MR]	= kvm_handle_cp14_access,
>> +	[HSR_EC_CP14_MR]	= kvm_handle_cp14_32,
>>  	[HSR_EC_CP14_LS]	= kvm_handle_cp14_load_store,
>> -	[HSR_EC_CP14_64]	= kvm_handle_cp14_access,
>> +	[HSR_EC_CP14_64]	= kvm_handle_cp14_64,
>>  	[HSR_EC_CP_0_13]	= kvm_handle_cp_0_13_access,
>>  	[HSR_EC_CP10_ID]	= kvm_handle_cp10_id,
>>  	[HSR_EC_SVC_HYP]	= handle_svc_hyp,
>> diff --git a/arch/arm/kvm/interrupts_head.S b/arch/arm/kvm/interrupts_head.S
>> index 35e4a3a..a9f3a56 100644
>> --- a/arch/arm/kvm/interrupts_head.S
>> +++ b/arch/arm/kvm/interrupts_head.S
>> @@ -607,7 +607,7 @@ ARM_BE8(rev	r6, r6  )
>>   * (hardware reset value is 0) */
>>  .macro set_hdcr operation
>>  	mrc	p15, 4, r2, c1, c1, 1
>> -	ldr	r3, =(HDCR_TPM|HDCR_TPMCR)
>> +	ldr	r3, =(HDCR_TPM|HDCR_TPMCR|HDCR_TDRA|HDCR_TDOSA|HDCR_TDA)
> 
> There is a small problem here. Imagine the host has programmed a
> watchpoint on some VA. We switch to the guest, and then access the same
> VA. At that stage, the guest will take the debug exception. That's
> really not pretty.
> 

I've thought about it and I think it may be OK, because when we switch from the
host to the guest, context_switch() in the host must be called first, and then
the host will switch the debug registers, and the guest will not see the watchpoint
the host programmed before.

Or am I missing some circumstances here?

> I think using HDCR_TDE as well should sort it, effectively preventing
> the exception from being delivered to the guest, but you will need to
> handle this on the HYP side. Performance wise, this is also really horrible.
> 
> A better way would be to disable the host's BPs/WPs if any is enabled.
> 
>>  	.if \operation == vmentry
>>  	orr	r2, r2, r3		@ Trap some perfmon accesses
>>  	.else
>>
> 
> Thanks,
> 
> 	M.
> 

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [PATCH v2 09/11] KVM: arm: disable debug mode if we don't actually need it.
  2015-06-01 10:16   ` Will Deacon
@ 2015-06-07 14:08     ` zichao
  0 siblings, 0 replies; 26+ messages in thread
From: zichao @ 2015-06-07 14:08 UTC (permalink / raw)
  To: Will Deacon; +Cc: kvm, Marc Zyngier, huangzhichao, linux-arm-kernel, kvmarm

Hi, Will,

On 2015/6/1 18:16, Will Deacon wrote:
> On Sun, May 31, 2015 at 05:27:10AM +0100, Zhichao Huang wrote:
>> Until now we enable debug mode all the time even if we don't
>> actually need it.
>>
>> Inspired by the implementation in arm64, disable debug mode if
>> we don't need it. And then we are able to reduce unnecessary
>> registers saving/restoring when the debug mode is disabled.
> 
> I'm terrified about this patch. Enabling monitor mode has proven to be
> *extremely* fragile in practice on 32-bit ARM SoCs, so trying to do this
> more often makes me very nervous.
> 
>> Signed-off-by: Zhichao Huang <zhichao.huang@linaro.org>
>> ---
>>  arch/arm/kernel/hw_breakpoint.c | 55 ++++++++++++++++++++++++++++++++++-------
>>  1 file changed, 46 insertions(+), 9 deletions(-)
>>
>> diff --git a/arch/arm/kernel/hw_breakpoint.c b/arch/arm/kernel/hw_breakpoint.c
>> index dc7d0a9..1d27563 100644
>> --- a/arch/arm/kernel/hw_breakpoint.c
>> +++ b/arch/arm/kernel/hw_breakpoint.c
>> @@ -266,8 +266,7 @@ static int enable_monitor_mode(void)
>>  	}
>>  
>>  	/* Check that the write made it through. */
>> -	ARM_DBG_READ(c0, c1, 0, dscr);
>> -	if (!(dscr & ARM_DSCR_MDBGEN)) {
>> +	if (!monitor_mode_enabled()) {
>>  		pr_warn_once("Failed to enable monitor mode on CPU %d.\n",
>>  				smp_processor_id());
>>  		return -EPERM;
> 
> Ok, this hunk is harmless :)
> 
>> @@ -277,6 +276,43 @@ out:
>>  	return 0;
>>  }
>>  
>> +static int disable_monitor_mode(void)
>> +{
>> +	u32 dscr;
>> +
>> +	ARM_DBG_READ(c0, c1, 0, dscr);
>> +
>> +	/* If monitor mode is already disabled, just return. */
>> +	if (!(dscr & ARM_DSCR_MDBGEN))
>> +		goto out;
>> +
>> +	/* Write to the corresponding DSCR. */
>> +	switch (get_debug_arch()) {
>> +	case ARM_DEBUG_ARCH_V6:
>> +	case ARM_DEBUG_ARCH_V6_1:
>> +		ARM_DBG_WRITE(c0, c1, 0, (dscr & ~ARM_DSCR_MDBGEN));
>> +		break;
>> +	case ARM_DEBUG_ARCH_V7_ECP14:
>> +	case ARM_DEBUG_ARCH_V7_1:
>> +	case ARM_DEBUG_ARCH_V8:
>> +		ARM_DBG_WRITE(c0, c2, 2, (dscr & ~ARM_DSCR_MDBGEN));
>> +		isb();
>> +		break;
>> +	default:
>> +		return -ENODEV;
>> +	}
>> +
>> +	/* Check that the write made it through. */
>> +	if (monitor_mode_enabled()) {
>> +		pr_warn_once("Failed to disable monitor mode on CPU %d.\n",
>> +				smp_processor_id());
>> +		return -EPERM;
>> +	}
>> +
>> +out:
>> +	return 0;
>> +}
> 
> I'm not comfortable with this. enable_monitor_mode has precisely one caller
> [reset_ctrl_regs] which goes to some lengths to get the system into a
> well-defined state. On top of that, the whole thing is run with an undef
> hook registered because there isn't an architected way to discover whether
> or not DBGSWENABLE is driven low.
> 
> Why exactly do you need this? Can you not trap guest debug accesses some
> other way?

OK, I shall look for some other ways to reduce the overhead of world switch.

Another problem might be that, when we start an ARMv7 VM (with the CPU specified
as Cortex-A57) on an ARMv8 platform, the debug registers will always be
saved/restored, because debug mode is always detected as enabled even though we
don't actually use it.
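
The kind of check I mean is roughly this (sketch only; I'm using debug_flags
and KVM_ARM_DEBUG_DIRTY as shorthand for the dirty mechanism in this series):

	u32 dscr;

	ARM_DBG_READ(c0, c1, 0, dscr);		/* DBGDSCRint */
	if (dscr & ARM_DSCR_MDBGEN)
		vcpu->arch.debug_flags |= KVM_ARM_DEBUG_DIRTY;

but on such a platform MDBGEN always reads as set, so the registers end up
being flagged for switching on every run anyway.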

> 
>>  int hw_breakpoint_slots(int type)
>>  {
>>  	if (!debug_arch_supported())
>> @@ -338,6 +374,8 @@ int arch_install_hw_breakpoint(struct perf_event *bp)
>>  	int i, max_slots, ctrl_base, val_base;
>>  	u32 addr, ctrl;
>>  
>> +	enable_monitor_mode();
>> +
>>  	addr = info->address;
>>  	ctrl = encode_ctrl_reg(info->ctrl) | 0x1;
>>  
>> @@ -430,6 +468,8 @@ void arch_uninstall_hw_breakpoint(struct perf_event *bp)
>>  
>>  	/* Reset the control register. */
>>  	write_wb_reg(base + i, 0);
>> +
>> +	disable_monitor_mode();
> 
> My previous concerns notwithstanding, shouldn't this be refcounted?

Maybe it shouldn't, in my opinion, because we install/uninstall breakpoints only when
we do a context switch, and then we always install/uninstall all the breakpoints.

> 
> Will
> 

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [PATCH v2 01/11] KVM: arm: plug guest debug exploit
  2015-06-07 13:40     ` zichao
@ 2015-06-09 10:29       ` Marc Zyngier
  2015-06-14 16:08         ` zichao
  0 siblings, 1 reply; 26+ messages in thread
From: Marc Zyngier @ 2015-06-09 10:29 UTC (permalink / raw)
  To: zichao, kvm, linux-arm-kernel, kvmarm, christoffer.dall,
	alex.bennee, Will Deacon
  Cc: huangzhichao, stable

On 07/06/15 14:40, zichao wrote:
> Hi, Marc,
> 
> On 2015/6/1 18:56, Marc Zyngier wrote:
>> Hi Zhichao,
>>
>> On 31/05/15 05:27, Zhichao Huang wrote:
>>> Hardware debugging in guests is not intercepted currently, it means
>>> that a malicious guest can bring down the entire machine by writing
>>> to the debug registers.
>>>
>>> This patch enable trapping of all debug registers, preventing the guests
>>> to mess with the host state.
>>>
>>> However, it is a precursor for later patches which will need to do
>>> more to world switch debug states while necessary.
>>>
>>> Cc: <stable@vger.kernel.org>
>>> Signed-off-by: Zhichao Huang <zhichao.huang@linaro.org>
>>> ---
>>>  arch/arm/include/asm/kvm_coproc.h |  3 +-
>>>  arch/arm/kvm/coproc.c             | 60 +++++++++++++++++++++++++++++++++++----
>>>  arch/arm/kvm/handle_exit.c        |  4 +--
>>>  arch/arm/kvm/interrupts_head.S    |  2 +-
>>>  4 files changed, 59 insertions(+), 10 deletions(-)
>>>
>>> diff --git a/arch/arm/include/asm/kvm_coproc.h b/arch/arm/include/asm/kvm_coproc.h
>>> index 4917c2f..e74ab0f 100644
>>> --- a/arch/arm/include/asm/kvm_coproc.h
>>> +++ b/arch/arm/include/asm/kvm_coproc.h
>>> @@ -31,7 +31,8 @@ void kvm_register_target_coproc_table(struct kvm_coproc_target_table *table);
>>>  int kvm_handle_cp10_id(struct kvm_vcpu *vcpu, struct kvm_run *run);
>>>  int kvm_handle_cp_0_13_access(struct kvm_vcpu *vcpu, struct kvm_run *run);
>>>  int kvm_handle_cp14_load_store(struct kvm_vcpu *vcpu, struct kvm_run *run);
>>> -int kvm_handle_cp14_access(struct kvm_vcpu *vcpu, struct kvm_run *run);
>>> +int kvm_handle_cp14_32(struct kvm_vcpu *vcpu, struct kvm_run *run);
>>> +int kvm_handle_cp14_64(struct kvm_vcpu *vcpu, struct kvm_run *run);
>>>  int kvm_handle_cp15_32(struct kvm_vcpu *vcpu, struct kvm_run *run);
>>>  int kvm_handle_cp15_64(struct kvm_vcpu *vcpu, struct kvm_run *run);
>>>  
>>> diff --git a/arch/arm/kvm/coproc.c b/arch/arm/kvm/coproc.c
>>> index f3d88dc..2e12760 100644
>>> --- a/arch/arm/kvm/coproc.c
>>> +++ b/arch/arm/kvm/coproc.c
>>> @@ -91,12 +91,6 @@ int kvm_handle_cp14_load_store(struct kvm_vcpu *vcpu, struct kvm_run *run)
>>>  	return 1;
>>>  }
>>>  
>>> -int kvm_handle_cp14_access(struct kvm_vcpu *vcpu, struct kvm_run *run)
>>> -{
>>> -	kvm_inject_undefined(vcpu);
>>> -	return 1;
>>> -}
>>> -
>>>  static void reset_mpidr(struct kvm_vcpu *vcpu, const struct coproc_reg *r)
>>>  {
>>>  	/*
>>> @@ -519,6 +513,60 @@ int kvm_handle_cp15_32(struct kvm_vcpu *vcpu, struct kvm_run *run)
>>>  	return emulate_cp15(vcpu, &params);
>>>  }
>>>  
>>> +/**
>>> + * kvm_handle_cp14_64 -- handles a mrrc/mcrr trap on a guest CP14 access
>>> + * @vcpu: The VCPU pointer
>>> + * @run:  The kvm_run struct
>>> + */
>>> +int kvm_handle_cp14_64(struct kvm_vcpu *vcpu, struct kvm_run *run)
>>> +{
>>> +	struct coproc_params params;
>>> +
>>> +	params.CRn = (kvm_vcpu_get_hsr(vcpu) >> 1) & 0xf;
>>> +	params.Rt1 = (kvm_vcpu_get_hsr(vcpu) >> 5) & 0xf;
>>> +	params.is_write = ((kvm_vcpu_get_hsr(vcpu) & 1) == 0);
>>> +	params.is_64bit = true;
>>> +
>>> +	params.Op1 = (kvm_vcpu_get_hsr(vcpu) >> 16) & 0xf;
>>> +	params.Op2 = 0;
>>> +	params.Rt2 = (kvm_vcpu_get_hsr(vcpu) >> 10) & 0xf;
>>> +	params.CRm = 0;
>>> +
>>> +	/* raz_wi */
>>> +	(void)pm_fake(vcpu, &params, NULL);
>>> +
>>> +	/* handled */
>>> +	kvm_skip_instr(vcpu, kvm_vcpu_trap_il_is32bit(vcpu));
>>> +	return 1;
>>> +}
>>> +
>>> +/**
>>> + * kvm_handle_cp14_32 -- handles a mrc/mcr trap on a guest CP14 access
>>> + * @vcpu: The VCPU pointer
>>> + * @run:  The kvm_run struct
>>> + */
>>> +int kvm_handle_cp14_32(struct kvm_vcpu *vcpu, struct kvm_run *run)
>>> +{
>>> +	struct coproc_params params;
>>> +
>>> +	params.CRm = (kvm_vcpu_get_hsr(vcpu) >> 1) & 0xf;
>>> +	params.Rt1 = (kvm_vcpu_get_hsr(vcpu) >> 5) & 0xf;
>>> +	params.is_write = ((kvm_vcpu_get_hsr(vcpu) & 1) == 0);
>>> +	params.is_64bit = false;
>>> +
>>> +	params.CRn = (kvm_vcpu_get_hsr(vcpu) >> 10) & 0xf;
>>> +	params.Op1 = (kvm_vcpu_get_hsr(vcpu) >> 14) & 0x7;
>>> +	params.Op2 = (kvm_vcpu_get_hsr(vcpu) >> 17) & 0x7;
>>> +	params.Rt2 = 0;
>>> +
>>> +	/* raz_wi */
>>> +	(void)pm_fake(vcpu, &params, NULL);
>>> +
>>> +	/* handled */
>>> +	kvm_skip_instr(vcpu, kvm_vcpu_trap_il_is32bit(vcpu));
>>> +	return 1;
>>> +}
>>> +
>>>  /******************************************************************************
>>>   * Userspace API
>>>   *****************************************************************************/
>>> diff --git a/arch/arm/kvm/handle_exit.c b/arch/arm/kvm/handle_exit.c
>>> index 95f12b2..357ad1b 100644
>>> --- a/arch/arm/kvm/handle_exit.c
>>> +++ b/arch/arm/kvm/handle_exit.c
>>> @@ -104,9 +104,9 @@ static exit_handle_fn arm_exit_handlers[] = {
>>>  	[HSR_EC_WFI]		= kvm_handle_wfx,
>>>  	[HSR_EC_CP15_32]	= kvm_handle_cp15_32,
>>>  	[HSR_EC_CP15_64]	= kvm_handle_cp15_64,
>>> -	[HSR_EC_CP14_MR]	= kvm_handle_cp14_access,
>>> +	[HSR_EC_CP14_MR]	= kvm_handle_cp14_32,
>>>  	[HSR_EC_CP14_LS]	= kvm_handle_cp14_load_store,
>>> -	[HSR_EC_CP14_64]	= kvm_handle_cp14_access,
>>> +	[HSR_EC_CP14_64]	= kvm_handle_cp14_64,
>>>  	[HSR_EC_CP_0_13]	= kvm_handle_cp_0_13_access,
>>>  	[HSR_EC_CP10_ID]	= kvm_handle_cp10_id,
>>>  	[HSR_EC_SVC_HYP]	= handle_svc_hyp,
>>> diff --git a/arch/arm/kvm/interrupts_head.S b/arch/arm/kvm/interrupts_head.S
>>> index 35e4a3a..a9f3a56 100644
>>> --- a/arch/arm/kvm/interrupts_head.S
>>> +++ b/arch/arm/kvm/interrupts_head.S
>>> @@ -607,7 +607,7 @@ ARM_BE8(rev	r6, r6  )
>>>   * (hardware reset value is 0) */
>>>  .macro set_hdcr operation
>>>  	mrc	p15, 4, r2, c1, c1, 1
>>> -	ldr	r3, =(HDCR_TPM|HDCR_TPMCR)
>>> +	ldr	r3, =(HDCR_TPM|HDCR_TPMCR|HDCR_TDRA|HDCR_TDOSA|HDCR_TDA)
>>
>> There is a small problem here. Imagine the host has programmed a
>> watchpoint on some VA. We switch to the guest, and then access the same
>> VA. At that stage, the guest will take the debug exception. That's
>> really not pretty.
>>
> 
> I've thought about it and I think it may be OK, because when we switch from the
> host to the guest, context_switch() in the host must be called first, and then
> the host will switch the debug registers, and the guest will not see the watchpoint
> the host programmed before.
> 
> Or am I missing some circumstances here?

I don't see anything in this patch that reprograms the debug registers.
You are simply trapping the guest access to these registers, but
whatever content the host has put there is still active.

So, assuming that the guest does not touch any debug register (and
legitimately assumes that they are inactive), a debug exception may fire
at PL1.

>> I think using HDCR_TDE as well should sort it, effectively preventing
>> the exception from being delivered to the guest, but you will need to
>> handle this on the HYP side. Performance wise, this is also really horrible.
>>
>> A better way would be to disable the host's BPs/WPs if any is enabled.

I still think you need to fix up the host's registers if they are
active when you enter the guest.

Thanks,

	M.
-- 
Jazz is not dead. It just smells funny...

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [PATCH v2 02/11] KVM: arm: rename pm_fake handler to trap_raz_wi
  2015-05-31  4:27 ` [PATCH v2 02/11] KVM: arm: rename pm_fake handler to trap_raz_wi Zhichao Huang
@ 2015-06-09 10:42   ` Alex Bennée
  0 siblings, 0 replies; 26+ messages in thread
From: Alex Bennée @ 2015-06-09 10:42 UTC (permalink / raw)
  To: Zhichao Huang
  Cc: kvm, linux-arm-kernel, kvmarm, christoffer.dall, marc.zyngier,
	will.deacon, huangzhichao


Zhichao Huang <zhichao.huang@linaro.org> writes:

> pm_fake doesn't quite describe what the handler does (ignoring writes
> and returning 0 for reads).
>
> As we're about to use it (a lot) in a different context, rename it
> with a (admitedly cryptic) name that make sense for all users.
>
> Signed-off-by: Zhichao Huang <zhichao.huang@linaro.org>

Reviewed-by: Alex Bennée <alex.bennee@linaro.org>

> ---
>  arch/arm/kvm/coproc.c | 34 ++++++++++++++++------------------
>  1 file changed, 16 insertions(+), 18 deletions(-)
>
> diff --git a/arch/arm/kvm/coproc.c b/arch/arm/kvm/coproc.c
> index 2e12760..9d283d9 100644
> --- a/arch/arm/kvm/coproc.c
> +++ b/arch/arm/kvm/coproc.c
> @@ -229,7 +229,7 @@ bool access_vm_reg(struct kvm_vcpu *vcpu,
>   * must always support PMCCNTR (the cycle counter): we just RAZ/WI for
>   * all PM registers, which doesn't crash the guest kernel at least.
>   */
> -static bool pm_fake(struct kvm_vcpu *vcpu,
> +static bool trap_raz_wi(struct kvm_vcpu *vcpu,
>  		    const struct coproc_params *p,
>  		    const struct coproc_reg *r)
>  {
> @@ -239,19 +239,19 @@ static bool pm_fake(struct kvm_vcpu *vcpu,
>  		return read_zero(vcpu, p);
>  }
>  
> -#define access_pmcr pm_fake
> -#define access_pmcntenset pm_fake
> -#define access_pmcntenclr pm_fake
> -#define access_pmovsr pm_fake
> -#define access_pmselr pm_fake
> -#define access_pmceid0 pm_fake
> -#define access_pmceid1 pm_fake
> -#define access_pmccntr pm_fake
> -#define access_pmxevtyper pm_fake
> -#define access_pmxevcntr pm_fake
> -#define access_pmuserenr pm_fake
> -#define access_pmintenset pm_fake
> -#define access_pmintenclr pm_fake
> +#define access_pmcr trap_raz_wi
> +#define access_pmcntenset trap_raz_wi
> +#define access_pmcntenclr trap_raz_wi
> +#define access_pmovsr trap_raz_wi
> +#define access_pmselr trap_raz_wi
> +#define access_pmceid0 trap_raz_wi
> +#define access_pmceid1 trap_raz_wi
> +#define access_pmccntr trap_raz_wi
> +#define access_pmxevtyper trap_raz_wi
> +#define access_pmxevcntr trap_raz_wi
> +#define access_pmuserenr trap_raz_wi
> +#define access_pmintenset trap_raz_wi
> +#define access_pmintenclr trap_raz_wi
>  
>  /* Architected CP15 registers.
>   * CRn denotes the primary register number, but is copied to the CRm in the
> @@ -532,8 +532,7 @@ int kvm_handle_cp14_64(struct kvm_vcpu *vcpu, struct kvm_run *run)
>  	params.Rt2 = (kvm_vcpu_get_hsr(vcpu) >> 10) & 0xf;
>  	params.CRm = 0;
>  
> -	/* raz_wi */
> -	(void)pm_fake(vcpu, &params, NULL);
> +	(void)trap_raz_wi(vcpu, &params, NULL);
>  
>  	/* handled */
>  	kvm_skip_instr(vcpu, kvm_vcpu_trap_il_is32bit(vcpu));
> @@ -559,8 +558,7 @@ int kvm_handle_cp14_32(struct kvm_vcpu *vcpu, struct kvm_run *run)
>  	params.Op2 = (kvm_vcpu_get_hsr(vcpu) >> 17) & 0x7;
>  	params.Rt2 = 0;
>  
> -	/* raz_wi */
> -	(void)pm_fake(vcpu, &params, NULL);
> +	(void)trap_raz_wi(vcpu, &params, NULL);
>  
>  	/* handled */
>  	kvm_skip_instr(vcpu, kvm_vcpu_trap_il_is32bit(vcpu));

-- 
Alex Bennée

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [PATCH v2 04/11] KVM: arm: common infrastructure for handling AArch32 CP14/CP15
  2015-05-31  4:27 ` [PATCH v2 04/11] KVM: arm: common infrastructure for handling AArch32 CP14/CP15 Zhichao Huang
@ 2015-06-09 10:45   ` Alex Bennée
  2015-06-14 16:17     ` zichao
  0 siblings, 1 reply; 26+ messages in thread
From: Alex Bennée @ 2015-06-09 10:45 UTC (permalink / raw)
  To: Zhichao Huang
  Cc: kvm, linux-arm-kernel, kvmarm, christoffer.dall, marc.zyngier,
	will.deacon, huangzhichao


Zhichao Huang <zhichao.huang@linaro.org> writes:

> As we're about to trap a bunch of CP14 registers, let's rework
> the CP15 handling so it can be generalized and work with multiple
> tables.
>
> Signed-off-by: Zhichao Huang <zhichao.huang@linaro.org>
> ---
>  arch/arm/kvm/coproc.c          | 176 ++++++++++++++++++++++++++---------------
>  arch/arm/kvm/interrupts_head.S |   2 +-
>  2 files changed, 112 insertions(+), 66 deletions(-)
>
> diff --git a/arch/arm/kvm/coproc.c b/arch/arm/kvm/coproc.c
> index 9d283d9..d23395b 100644
> --- a/arch/arm/kvm/coproc.c
> +++ b/arch/arm/kvm/coproc.c
> @@ -375,6 +375,9 @@ static const struct coproc_reg cp15_regs[] = {
>  	{ CRn(15), CRm( 0), Op1( 4), Op2( 0), is32, access_cbar},
>  };
>  
> +static const struct coproc_reg cp14_regs[] = {
> +};
> +
>  /* Target specific emulation tables */
>  static struct kvm_coproc_target_table *target_tables[KVM_ARM_NUM_TARGETS];
>  
> @@ -424,47 +427,75 @@ static const struct coproc_reg *find_reg(const struct coproc_params *params,
>  	return NULL;
>  }
>  
> -static int emulate_cp15(struct kvm_vcpu *vcpu,
> -			const struct coproc_params *params)
> +/*
> + * emulate_cp --  tries to match a cp14/cp15 access in a handling table,
> + *                and call the corresponding trap handler.
> + *
> + * @params: pointer to the descriptor of the access
> + * @table: array of trap descriptors
> + * @num: size of the trap descriptor array
> + *
> + * Return 0 if the access has been handled, and -1 if not.
> + */
> +static int emulate_cp(struct kvm_vcpu *vcpu,
> +			const struct coproc_params *params,
> +			const struct coproc_reg *table,
> +			size_t num)
>  {
> -	size_t num;
> -	const struct coproc_reg *table, *r;
> -
> -	trace_kvm_emulate_cp15_imp(params->Op1, params->Rt1, params->CRn,
> -				   params->CRm, params->Op2,
> params->is_write);

Where has this trace gone? We still want to be able to view register
traps when debugging.
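
For instance, keeping something like this in the CP15 entry points (or in
emulate_cp() itself, keyed on the coprocessor number) would preserve it:

	trace_kvm_emulate_cp15_imp(params.Op1, params.Rt1, params.CRn,
				   params.CRm, params.Op2, params.is_write);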

> +	const struct coproc_reg *r;
>  
> -	table = get_target_table(vcpu->arch.target, &num);
> +	if (!table)
> +		return -1;	/* Not handled */
>  
> -	/* Search target-specific then generic table. */
>  	r = find_reg(params, table, num);
> -	if (!r)
> -		r = find_reg(params, cp15_regs, ARRAY_SIZE(cp15_regs));
>  
> -	if (likely(r)) {
> +	if (r) {
>  		/* If we don't have an accessor, we should never get here! */
>  		BUG_ON(!r->access);
>  
>  		if (likely(r->access(vcpu, params, r))) {
>  			/* Skip instruction, since it was emulated */
>  			kvm_skip_instr(vcpu, kvm_vcpu_trap_il_is32bit(vcpu));
> -			return 1;
>  		}
> -		/* If access function fails, it should complain. */
> -	} else {
> -		kvm_err("Unsupported guest CP15 access at: %08lx\n",
> -			*vcpu_pc(vcpu));
> -		print_cp_instr(params);
> +
> +		/* Handled */
> +		return 0;
>  	}
> +
> +	/* Not handled */
> +	return -1;
> +}
> +
> +static void unhandled_cp_access(struct kvm_vcpu *vcpu,
> +				const struct coproc_params *params)
> +{
> +	u8 hsr_ec = kvm_vcpu_trap_get_class(vcpu);
> +	int cp;
> +
> +	switch (hsr_ec) {
> +	case HSR_EC_CP15_32:
> +	case HSR_EC_CP15_64:
> +		cp = 15;
> +		break;
> +	case HSR_EC_CP14_MR:
> +	case HSR_EC_CP14_64:
> +		cp = 14;
> +		break;
> +	default:
> +		WARN_ON((cp = -1));
> +	}
> +
> +	kvm_err("Unsupported guest CP%d access at: %08lx\n",
> +		cp, *vcpu_pc(vcpu));
> +	print_cp_instr(params);
>  	kvm_inject_undefined(vcpu);
> -	return 1;
>  }
>  
> -/**
> - * kvm_handle_cp15_64 -- handles a mrrc/mcrr trap on a guest CP15 access
> - * @vcpu: The VCPU pointer
> - * @run:  The kvm_run struct
> - */
> -int kvm_handle_cp15_64(struct kvm_vcpu *vcpu, struct kvm_run *run)
> +int kvm_handle_cp_64(struct kvm_vcpu *vcpu,
> +			const struct coproc_reg *global,
> +			size_t nr_global,
> +			const struct coproc_reg *target_specific,
> +			size_t nr_specific)
>  {
>  	struct coproc_params params;
>  
> @@ -478,7 +509,13 @@ int kvm_handle_cp15_64(struct kvm_vcpu *vcpu, struct kvm_run *run)
>  	params.Rt2 = (kvm_vcpu_get_hsr(vcpu) >> 10) & 0xf;
>  	params.CRm = 0;
>  
> -	return emulate_cp15(vcpu, &params);
> +	if (!emulate_cp(vcpu, &params, target_specific, nr_specific))
> +		return 1;
> +	if (!emulate_cp(vcpu, &params, global, nr_global))
> +		return 1;
> +
> +	unhandled_cp_access(vcpu, &params);
> +	return 1;
>  }
>  
>  static void reset_coproc_regs(struct kvm_vcpu *vcpu,
> @@ -491,12 +528,11 @@ static void reset_coproc_regs(struct kvm_vcpu *vcpu,
>  			table[i].reset(vcpu, &table[i]);
>  }
>  
> -/**
> - * kvm_handle_cp15_32 -- handles a mrc/mcr trap on a guest CP15 access
> - * @vcpu: The VCPU pointer
> - * @run:  The kvm_run struct
> - */
> -int kvm_handle_cp15_32(struct kvm_vcpu *vcpu, struct kvm_run *run)
> +int kvm_handle_cp_32(struct kvm_vcpu *vcpu,
> +			const struct coproc_reg *global,
> +			size_t nr_global,
> +			const struct coproc_reg *target_specific,
> +			size_t nr_specific)
>  {
>  	struct coproc_params params;
>  
> @@ -510,33 +546,57 @@ int kvm_handle_cp15_32(struct kvm_vcpu *vcpu, struct kvm_run *run)
>  	params.Op2 = (kvm_vcpu_get_hsr(vcpu) >> 17) & 0x7;
>  	params.Rt2 = 0;
>  
> -	return emulate_cp15(vcpu, &params);
> +	if (!emulate_cp(vcpu, &params, target_specific, nr_specific))
> +		return 1;
> +	if (!emulate_cp(vcpu, &params, global, nr_global))
> +		return 1;
> +
> +	unhandled_cp_access(vcpu, &params);
> +	return 1;
>  }
>  
>  /**
> - * kvm_handle_cp14_64 -- handles a mrrc/mcrr trap on a guest CP14 access
> + * kvm_handle_cp15_64 -- handles a mrrc/mcrr trap on a guest CP15 access
>   * @vcpu: The VCPU pointer
>   * @run:  The kvm_run struct
>   */
> -int kvm_handle_cp14_64(struct kvm_vcpu *vcpu, struct kvm_run *run)
> +int kvm_handle_cp15_64(struct kvm_vcpu *vcpu, struct kvm_run *run)
>  {
> -	struct coproc_params params;
> +	const struct coproc_reg *target_specific;
> +	size_t num;
>  
> -	params.CRn = (kvm_vcpu_get_hsr(vcpu) >> 1) & 0xf;
> -	params.Rt1 = (kvm_vcpu_get_hsr(vcpu) >> 5) & 0xf;
> -	params.is_write = ((kvm_vcpu_get_hsr(vcpu) & 1) == 0);
> -	params.is_64bit = true;
> +	target_specific = get_target_table(vcpu->arch.target, &num);
> +	return kvm_handle_cp_64(vcpu,
> +				cp15_regs, ARRAY_SIZE(cp15_regs),
> +				target_specific, num);
> +}
>  
> -	params.Op1 = (kvm_vcpu_get_hsr(vcpu) >> 16) & 0xf;
> -	params.Op2 = 0;
> -	params.Rt2 = (kvm_vcpu_get_hsr(vcpu) >> 10) & 0xf;
> -	params.CRm = 0;
> +/**
> + * kvm_handle_cp15_32 -- handles a mrc/mcr trap on a guest CP15 access
> + * @vcpu: The VCPU pointer
> + * @run:  The kvm_run struct
> + */
> +int kvm_handle_cp15_32(struct kvm_vcpu *vcpu, struct kvm_run *run)
> +{
> +	const struct coproc_reg *target_specific;
> +	size_t num;
>  
> -	(void)trap_raz_wi(vcpu, &params, NULL);
> +	target_specific = get_target_table(vcpu->arch.target, &num);
> +	return kvm_handle_cp_32(vcpu,
> +				cp15_regs, ARRAY_SIZE(cp15_regs),
> +				target_specific, num);
> +}
>  
> -	/* handled */
> -	kvm_skip_instr(vcpu, kvm_vcpu_trap_il_is32bit(vcpu));
> -	return 1;
> +/**
> + * kvm_handle_cp14_64 -- handles a mrrc/mcrr trap on a guest CP14 access
> + * @vcpu: The VCPU pointer
> + * @run:  The kvm_run struct
> + */
> +int kvm_handle_cp14_64(struct kvm_vcpu *vcpu, struct kvm_run *run)
> +{
> +	return kvm_handle_cp_64(vcpu,
> +				cp14_regs, ARRAY_SIZE(cp14_regs),
> +				NULL, 0);
>  }
>  
>  /**
> @@ -546,23 +606,9 @@ int kvm_handle_cp14_64(struct kvm_vcpu *vcpu, struct kvm_run *run)
>   */
>  int kvm_handle_cp14_32(struct kvm_vcpu *vcpu, struct kvm_run *run)
>  {
> -	struct coproc_params params;
> -
> -	params.CRm = (kvm_vcpu_get_hsr(vcpu) >> 1) & 0xf;
> -	params.Rt1 = (kvm_vcpu_get_hsr(vcpu) >> 5) & 0xf;
> -	params.is_write = ((kvm_vcpu_get_hsr(vcpu) & 1) == 0);
> -	params.is_64bit = false;
> -
> -	params.CRn = (kvm_vcpu_get_hsr(vcpu) >> 10) & 0xf;
> -	params.Op1 = (kvm_vcpu_get_hsr(vcpu) >> 14) & 0x7;
> -	params.Op2 = (kvm_vcpu_get_hsr(vcpu) >> 17) & 0x7;
> -	params.Rt2 = 0;
> -
> -	(void)trap_raz_wi(vcpu, &params, NULL);
> -
> -	/* handled */
> -	kvm_skip_instr(vcpu, kvm_vcpu_trap_il_is32bit(vcpu));
> -	return 1;
> +	return kvm_handle_cp_32(vcpu,
> +				cp14_regs, ARRAY_SIZE(cp14_regs),
> +				NULL, 0);
>  }
>  
>  /******************************************************************************
> diff --git a/arch/arm/kvm/interrupts_head.S b/arch/arm/kvm/interrupts_head.S
> index a9f3a56..35e4a3a 100644
> --- a/arch/arm/kvm/interrupts_head.S
> +++ b/arch/arm/kvm/interrupts_head.S
> @@ -607,7 +607,7 @@ ARM_BE8(rev	r6, r6  )
>   * (hardware reset value is 0) */
>  .macro set_hdcr operation
>  	mrc	p15, 4, r2, c1, c1, 1
> -	ldr	r3, =(HDCR_TPM|HDCR_TPMCR|HDCR_TDRA|HDCR_TDOSA|HDCR_TDA)
> +	ldr	r3, =(HDCR_TPM|HDCR_TPMCR)
>  	.if \operation == vmentry
>  	orr	r2, r2, r3		@ Trap some perfmon accesses
>  	.else

-- 
Alex Bennée

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [PATCH v2 03/11] KVM: arm: enable to use the ARM_DSCR_MDBGEN macro from KVM assembly code
  2015-05-31  4:27 ` [PATCH v2 03/11] KVM: arm: enable to use the ARM_DSCR_MDBGEN macro from KVM assembly code Zhichao Huang
@ 2015-06-09 13:42   ` Alex Bennée
  0 siblings, 0 replies; 26+ messages in thread
From: Alex Bennée @ 2015-06-09 13:42 UTC (permalink / raw)
  To: Zhichao Huang
  Cc: kvm, linux-arm-kernel, kvmarm, christoffer.dall, marc.zyngier,
	will.deacon, huangzhichao


Zhichao Huang <zhichao.huang@linaro.org> writes:

> Add #ifndef __ASSEMBLY__ in hw_breakpoint.h, in order to use
> the ARM_DSCR_MDBGEN macro from KVM assembly code.
>
> Signed-off-by: Zhichao Huang <zhichao.huang@linaro.org>

Reviewed-by: Alex Bennée <alex.bennee@linaro.org>

> ---
>  arch/arm/include/asm/hw_breakpoint.h | 54 +++++++++++++++++++-----------------
>  1 file changed, 29 insertions(+), 25 deletions(-)
>
> diff --git a/arch/arm/include/asm/hw_breakpoint.h b/arch/arm/include/asm/hw_breakpoint.h
> index 8e427c7..f2f4c61 100644
> --- a/arch/arm/include/asm/hw_breakpoint.h
> +++ b/arch/arm/include/asm/hw_breakpoint.h
> @@ -3,6 +3,8 @@
>  
>  #ifdef __KERNEL__
>  
> +#ifndef __ASSEMBLY__
> +
>  struct task_struct;
>  
>  #ifdef CONFIG_HAVE_HW_BREAKPOINT
> @@ -44,6 +46,33 @@ static inline void decode_ctrl_reg(u32 reg,
>  	ctrl->mismatch	= reg & 0x1;
>  }
>  
> +struct notifier_block;
> +struct perf_event;
> +struct pmu;
> +
> +extern struct pmu perf_ops_bp;
> +extern int arch_bp_generic_fields(struct arch_hw_breakpoint_ctrl ctrl,
> +				  int *gen_len, int *gen_type);
> +extern int arch_check_bp_in_kernelspace(struct perf_event *bp);
> +extern int arch_validate_hwbkpt_settings(struct perf_event *bp);
> +extern int hw_breakpoint_exceptions_notify(struct notifier_block *unused,
> +					   unsigned long val, void *data);
> +
> +extern u8 arch_get_debug_arch(void);
> +extern u8 arch_get_max_wp_len(void);
> +extern void clear_ptrace_hw_breakpoint(struct task_struct *tsk);
> +
> +int arch_install_hw_breakpoint(struct perf_event *bp);
> +void arch_uninstall_hw_breakpoint(struct perf_event *bp);
> +void hw_breakpoint_pmu_read(struct perf_event *bp);
> +int hw_breakpoint_slots(int type);
> +
> +#else
> +static inline void clear_ptrace_hw_breakpoint(struct task_struct *tsk) {}
> +
> +#endif	/* CONFIG_HAVE_HW_BREAKPOINT */
> +#endif  /* __ASSEMBLY */
> +
>  /* Debug architecture numbers. */
>  #define ARM_DEBUG_ARCH_RESERVED	0	/* In case of ptrace ABI updates. */
>  #define ARM_DEBUG_ARCH_V6	1
> @@ -110,30 +139,5 @@ static inline void decode_ctrl_reg(u32 reg,
>  	asm volatile("mcr p14, 0, %0, " #N "," #M ", " #OP2 : : "r" (VAL));\
>  } while (0)
>  
> -struct notifier_block;
> -struct perf_event;
> -struct pmu;
> -
> -extern struct pmu perf_ops_bp;
> -extern int arch_bp_generic_fields(struct arch_hw_breakpoint_ctrl ctrl,
> -				  int *gen_len, int *gen_type);
> -extern int arch_check_bp_in_kernelspace(struct perf_event *bp);
> -extern int arch_validate_hwbkpt_settings(struct perf_event *bp);
> -extern int hw_breakpoint_exceptions_notify(struct notifier_block *unused,
> -					   unsigned long val, void *data);
> -
> -extern u8 arch_get_debug_arch(void);
> -extern u8 arch_get_max_wp_len(void);
> -extern void clear_ptrace_hw_breakpoint(struct task_struct *tsk);
> -
> -int arch_install_hw_breakpoint(struct perf_event *bp);
> -void arch_uninstall_hw_breakpoint(struct perf_event *bp);
> -void hw_breakpoint_pmu_read(struct perf_event *bp);
> -int hw_breakpoint_slots(int type);
> -
> -#else
> -static inline void clear_ptrace_hw_breakpoint(struct task_struct *tsk) {}
> -
> -#endif	/* CONFIG_HAVE_HW_BREAKPOINT */
>  #endif	/* __KERNEL__ */
>  #endif	/* _ARM_HW_BREAKPOINT_H */

-- 
Alex Bennée

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [PATCH v2 05/11] KVM: arm: check ordering of all system register tables
  2015-05-31  4:27 ` [PATCH v2 05/11] KVM: arm: check ordering of all system register tables Zhichao Huang
@ 2015-06-10 13:52   ` Alex Bennée
  2015-06-14 16:18     ` zichao
  0 siblings, 1 reply; 26+ messages in thread
From: Alex Bennée @ 2015-06-10 13:52 UTC (permalink / raw)
  To: Zhichao Huang
  Cc: kvm, marc.zyngier, will.deacon, huangzhichao, linux-arm-kernel, kvmarm


Zhichao Huang <zhichao.huang@linaro.org> writes:

> We now have multiple tables for the various system registers
> we trap. Make sure we check the order of all of them, as it is
> critical that we get the order right (been there, done that...).
>
> Signed-off-by: Zhichao Huang <zhichao.huang@linaro.org>
> ---
>  arch/arm/kvm/coproc.c | 26 +++++++++++++++++++++-----
>  1 file changed, 21 insertions(+), 5 deletions(-)
>
> diff --git a/arch/arm/kvm/coproc.c b/arch/arm/kvm/coproc.c
> index d23395b..16d5f69 100644
> --- a/arch/arm/kvm/coproc.c
> +++ b/arch/arm/kvm/coproc.c
> @@ -737,6 +737,9 @@ static struct coproc_reg invariant_cp15[] = {
>  	{ CRn( 0), CRm( 0), Op1( 0), Op2( 3), is32, NULL, get_TLBTR },
>  	{ CRn( 0), CRm( 0), Op1( 0), Op2( 6), is32, NULL, get_REVIDR },
>  
> +	{ CRn( 0), CRm( 0), Op1( 1), Op2( 1), is32, NULL, get_CLIDR },
> +	{ CRn( 0), CRm( 0), Op1( 1), Op2( 7), is32, NULL, get_AIDR },
> +
>  	{ CRn( 0), CRm( 1), Op1( 0), Op2( 0), is32, NULL, get_ID_PFR0 },
>  	{ CRn( 0), CRm( 1), Op1( 0), Op2( 1), is32, NULL, get_ID_PFR1 },
>  	{ CRn( 0), CRm( 1), Op1( 0), Op2( 2), is32, NULL, get_ID_DFR0 },
> @@ -752,9 +755,6 @@ static struct coproc_reg invariant_cp15[] = {
>  	{ CRn( 0), CRm( 2), Op1( 0), Op2( 3), is32, NULL, get_ID_ISAR3 },
>  	{ CRn( 0), CRm( 2), Op1( 0), Op2( 4), is32, NULL, get_ID_ISAR4 },
>  	{ CRn( 0), CRm( 2), Op1( 0), Op2( 5), is32, NULL, get_ID_ISAR5 },
> -
> -	{ CRn( 0), CRm( 0), Op1( 1), Op2( 1), is32, NULL, get_CLIDR },
> -	{ CRn( 0), CRm( 0), Op1( 1), Op2( 7), is32, NULL, get_AIDR },
>  };
>  
>  /*
> @@ -1297,13 +1297,29 @@ int kvm_arm_copy_coproc_indices(struct kvm_vcpu *vcpu, u64 __user *uindices)
>  	return write_demux_regids(uindices);
>  }
>  
> +static int check_sysreg_table(const struct coproc_reg *table, unsigned int n)
> +{
> +	unsigned int i;
> +
> +	for (i = 1; i < n; i++) {
> +		if (cmp_reg(&table[i-1], &table[i]) >= 0) {
> +			kvm_err("sys_reg table %p out of order (%d)\n",
> +					table, i - 1);

Isn't a BUG_ON *and* a kvm_err() overkill?

> +			return 1;
> +		}
> +	}
> +
> +	return 0;
> +}
> +
>  void kvm_coproc_table_init(void)
>  {
>  	unsigned int i;
>  
>  	/* Make sure tables are unique and in order. */
> -	for (i = 1; i < ARRAY_SIZE(cp15_regs); i++)
> -		BUG_ON(cmp_reg(&cp15_regs[i-1], &cp15_regs[i]) >= 0);
> +	BUG_ON(check_sysreg_table(cp14_regs, ARRAY_SIZE(cp14_regs)));
> +	BUG_ON(check_sysreg_table(cp15_regs, ARRAY_SIZE(cp15_regs)));
> +	BUG_ON(check_sysreg_table(invariant_cp15, ARRAY_SIZE(invariant_cp15)));
>  
>  	/* We abuse the reset function to overwrite the table itself. */
>  	for (i = 0; i < ARRAY_SIZE(invariant_cp15); i++)

-- 
Alex Bennée
_______________________________________________
kvmarm mailing list
kvmarm@lists.cs.columbia.edu
https://lists.cs.columbia.edu/mailman/listinfo/kvmarm

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [PATCH v2 01/11] KVM: arm: plug guest debug exploit
  2015-06-09 10:29       ` Marc Zyngier
@ 2015-06-14 16:08         ` zichao
  2015-06-14 16:13           ` zichao
  0 siblings, 1 reply; 26+ messages in thread
From: zichao @ 2015-06-14 16:08 UTC (permalink / raw)
  To: Marc Zyngier, kvm, linux-arm-kernel, kvmarm, christoffer.dall,
	alex.bennee, Will Deacon
  Cc: huangzhichao, stable

Hi, Marc,

On 2015/6/9 18:29, Marc Zyngier wrote:
> On 07/06/15 14:40, zichao wrote:
>> Hi, Marc,
>>
>> On 2015/6/1 18:56, Marc Zyngier wrote:
>>> Hi Zhichao,
>>>
>>> On 31/05/15 05:27, Zhichao Huang wrote:
>>>> Hardware debugging in guests is not intercepted currently, it means
>>>> that a malicious guest can bring down the entire machine by writing
>>>> to the debug registers.
>>>>
>>>> This patch enable trapping of all debug registers, preventing the guests
>>>> to mess with the host state.
>>>>
>>>> However, it is a precursor for later patches which will need to do
>>>> more to world switch debug states while necessary.
>>>>
>>>> Cc: <stable@vger.kernel.org>
>>>> Signed-off-by: Zhichao Huang <zhichao.huang@linaro.org>
>>>> ---
>>>>  arch/arm/include/asm/kvm_coproc.h |  3 +-
>>>>  arch/arm/kvm/coproc.c             | 60 +++++++++++++++++++++++++++++++++++----
>>>>  arch/arm/kvm/handle_exit.c        |  4 +--
>>>>  arch/arm/kvm/interrupts_head.S    |  2 +-
>>>>  4 files changed, 59 insertions(+), 10 deletions(-)
>>>>
......
>>>
>>> There is a small problem here. Imagine the host has programmed a
>>> watchpoint on some VA. We switch to the guest, and then access the same
>>> VA. At that stage, the guest will take the debug exception. That's
>>> really not pretty.
>>>
>>
>> I've thought about it and I think it may be OK, because when we switch from the
>> host to the guest, context_switch() in the host must be called first, and then
>> the host will switch the debug registers, and the guest will not see the watchpoint
>> the host programmed before.
>>
>> Or am I missing some circumstances here?
> 
> I don't see anything in this patch that reprograms the debug registers.
> You are simply trapping the guest access to these registers, but
> whatever content the host has put there is still active.
> 
> So, assuming that the guest does not touch any debug register (and
> legitimately assumes that they are inactive), a debug exception may fire
> at PL1.
> 

I ran a test on the problem you mentioned. I programmed a watchpoint in the host,
and then observed the value of the debug registers in the guest.

The result is that in most cases, the guest would not be able to see the watchpoint because
when we switch from the host to the guest, the process schedule function(__schedule) would
be called, and it will uninstall the debug registers the host just programmed.

__schedule  ->  __perf_event_task_sched_out -> event_sched_out -> arch_uninstall_hw_breakpoint

However, there is one exception: if we programmed a watchpoint based on the QEMU process in
the host, there would be no process schedule between the QEMU process and the guest, and then
the problem you mentioned appears and the guest will see the value of the debug registers.

>>> I think using HDCR_TDE as well should sort it, effectively preventing
>>> the exception from being delivered to the guest, but you will need to
>>> handle this on the HYP side. Performance wise, this is also really horrible.
>>>
>>> A better way would be to disable the host's BPs/WPs if any is enabled.
> 
> I still think you need to fix up the host's registers if they are
> active when you enter the guest.

Compared to disabling the whole debug feature in the host, I think there is a lighter way
to plug the exploit.

We could save/restore only DBGDSCR on each switch between the guest and the host. The debug
monitor in the guest would then stay disabled (the guest cannot enable it without the
following patches), so the guest would never take any debug exceptions.
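
Roughly, I mean something like this (an untested sketch just to show the idea; the helper
names are made up, and the real code would live in the HYP world-switch path rather than
in C):

#include <linux/types.h>
#include <asm/barrier.h>
#include <asm/hw_breakpoint.h>

static u32 host_dbgdscr;

/* On guest entry: save the host's DBGDSCR and enter the guest with
 * monitor debug mode disabled, so the host's breakpoints/watchpoints
 * cannot raise debug exceptions while the guest runs (and the guest,
 * whose DBGDSCR accesses are trapped, can never re-enable it). */
static void debug_save_host_dscr(void)
{
	ARM_DBG_READ(c0, c1, 0, host_dbgdscr);				/* DBGDSCRint */
	ARM_DBG_WRITE(c0, c2, 2, host_dbgdscr & ~ARM_DSCR_MDBGEN);	/* DBGDSCRext */
	isb();
}

/* On guest exit: put back whatever the host had. */
static void debug_restore_host_dscr(void)
{
	ARM_DBG_WRITE(c0, c2, 2, host_dbgdscr);
	isb();
}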

What's your opinion?

> 
> Thanks,
> 
> 	M.
> 

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [PATCH v2 01/11] KVM: arm: plug guest debug exploit
  2015-06-14 16:08         ` zichao
@ 2015-06-14 16:13           ` zichao
  2015-06-16 16:49             ` Will Deacon
  0 siblings, 1 reply; 26+ messages in thread
From: zichao @ 2015-06-14 16:13 UTC (permalink / raw)
  To: Marc Zyngier, kvm, linux-arm-kernel, kvmarm, christoffer.dall,
	alex.bennee, Will Deacon
  Cc: huangzhichao, stable

Hi, Will,

Marc and I are discussing how to plug the guest debug exploit in an easier way.

I remember you mentioned that disabling monitor mode had proven extremely fragile in
practice on 32-bit ARM SoCs. What if I save/restore the debug monitor mode on each switch
between the guest and the host? Would that be acceptable?


On 2015/6/15 0:08, zichao wrote:
> Hi, Marc,
> 
> On 2015/6/9 18:29, Marc Zyngier wrote:
>> On 07/06/15 14:40, zichao wrote:
>>> Hi, Marc,
>>>
>>> On 2015/6/1 18:56, Marc Zyngier wrote:
>>>> Hi Zhichao,
>>>>
>>>> On 31/05/15 05:27, Zhichao Huang wrote:
>>>>> Hardware debugging in guests is not intercepted currently, it means
>>>>> that a malicious guest can bring down the entire machine by writing
>>>>> to the debug registers.
>>>>>
>>>>> This patch enable trapping of all debug registers, preventing the guests
>>>>> to mess with the host state.
>>>>>
>>>>> However, it is a precursor for later patches which will need to do
>>>>> more to world switch debug states while necessary.
>>>>>
>>>>> Cc: <stable@vger.kernel.org>
>>>>> Signed-off-by: Zhichao Huang <zhichao.huang@linaro.org>
>>>>> ---
>>>>>  arch/arm/include/asm/kvm_coproc.h |  3 +-
>>>>>  arch/arm/kvm/coproc.c             | 60 +++++++++++++++++++++++++++++++++++----
>>>>>  arch/arm/kvm/handle_exit.c        |  4 +--
>>>>>  arch/arm/kvm/interrupts_head.S    |  2 +-
>>>>>  4 files changed, 59 insertions(+), 10 deletions(-)
>>>>>
> ......
>>>>
>>>> There is a small problem here. Imagine the host has programmed a
>>>> watchpoint on some VA. We switch to the guest, and then access the same
>>>> VA. At that stage, the guest will take the debug exception. That's
>>>> really not pretty.
>>>>
>>>
>>> I've thought about it and I think there maybe OK, because when we switch from the
>>> host to the guest, the context_switch() in host must be called first, and then
>>> the host will switch debug registers, and the guest will not see the watchpoint
>>> the host programmed before.
>>>
>>> Or am I missing some circumstances here?
>>
>> I don't see anything in this patch that reprograms the debug registers.
>> You are simply trapping the guest access to these registers, but
>> whatever content the host has put there is still active.
>>
>> So, assuming that the guest does not touch any debug register (and
>> legitimately assumes that they are inactive), a debug exception may fire
>> at PL1.
>>
> 
> I have tested the problem you mentioned: I programmed a watchpoint in the host and then
> observed the values of the debug registers in the guest.
> 
> In most cases the guest cannot see the watchpoint, because when we switch from the host to
> the guest the scheduler (__schedule) is called first, and it uninstalls the debug registers
> the host has just programmed:
> 
> __schedule -> __perf_event_task_sched_out -> event_sched_out -> arch_uninstall_hw_breakpoint
> 
> However, there is one exception: if we program a watchpoint on the QEMU process itself,
> there is no context switch between the QEMU process and the guest, so the problem you
> mentioned appears and the guest sees the host's debug register values.
> 
>>>> I think using HDCR_TDE as well should sort it, effectively preventing
>>>> the exception from being delivered to the guest, but you will need to
>>>> handle this on the HYP side. Performance wise, this is also really horrible.
>>>>
>>>> A better way would be to disable the host's BPs/WPs if any is enabled.
>>
>> I still think you either need to fixup the host's registers if they are
>> active when you enter the guest.
> 
> Compared to disabling the whole debug feature in the host, I think there is a lighter way
> to plug the exploit.
> 
> We could save/restore only DBGDSCR on each switch between the guest and the host. The debug
> monitor in the guest would then stay disabled (the guest cannot enable it without the
> following patches), so the guest would never take any debug exceptions.
> 
> What's your opinion?
> 
>>
>> Thanks,
>>
>> 	M.
>>

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [PATCH v2 04/11] KVM: arm: common infrastructure for handling AArch32 CP14/CP15
  2015-06-09 10:45   ` Alex Bennée
@ 2015-06-14 16:17     ` zichao
  0 siblings, 0 replies; 26+ messages in thread
From: zichao @ 2015-06-14 16:17 UTC (permalink / raw)
  To: Alex Bennée
  Cc: kvm, marc.zyngier, will.deacon, huangzhichao, linux-arm-kernel, kvmarm



On 2015/6/9 18:45, Alex Bennée wrote:
> 
> Zhichao Huang <zhichao.huang@linaro.org> writes:
> 
>> As we're about to trap a bunch of CP14 registers, let's rework
>> the CP15 handling so it can be generalized and work with multiple
>> tables.
>>
>> Signed-off-by: Zhichao Huang <zhichao.huang@linaro.org>
>> ---
>>  arch/arm/kvm/coproc.c          | 176 ++++++++++++++++++++++++++---------------
>>  arch/arm/kvm/interrupts_head.S |   2 +-
>>  2 files changed, 112 insertions(+), 66 deletions(-)
>>
>> diff --git a/arch/arm/kvm/coproc.c b/arch/arm/kvm/coproc.c
>> index 9d283d9..d23395b 100644
>> --- a/arch/arm/kvm/coproc.c
>> +++ b/arch/arm/kvm/coproc.c
>> @@ -375,6 +375,9 @@ static const struct coproc_reg cp15_regs[] = {
>>  	{ CRn(15), CRm( 0), Op1( 4), Op2( 0), is32, access_cbar},
>>  };
>>  
>> +static const struct coproc_reg cp14_regs[] = {
>> +};
>> +
>>  /* Target specific emulation tables */
>>  static struct kvm_coproc_target_table *target_tables[KVM_ARM_NUM_TARGETS];
>>  
>> @@ -424,47 +427,75 @@ static const struct coproc_reg *find_reg(const struct coproc_params *params,
>>  	return NULL;
>>  }
>>  
>> -static int emulate_cp15(struct kvm_vcpu *vcpu,
>> -			const struct coproc_params *params)
>> +/*
>> + * emulate_cp --  tries to match a cp14/cp15 access in a handling table,
>> + *                and call the corresponding trap handler.
>> + *
>> + * @params: pointer to the descriptor of the access
>> + * @table: array of trap descriptors
>> + * @num: size of the trap descriptor array
>> + *
>> + * Return 0 if the access has been handled, and -1 if not.
>> + */
>> +static int emulate_cp(struct kvm_vcpu *vcpu,
>> +			const struct coproc_params *params,
>> +			const struct coproc_reg *table,
>> +			size_t num)
>>  {
>> -	size_t num;
>> -	const struct coproc_reg *table, *r;
>> -
>> -	trace_kvm_emulate_cp15_imp(params->Op1, params->Rt1, params->CRn,
>> -				   params->CRm, params->Op2,
>> params->is_write);
> 
> Where has this trace gone? We still want to be able to view register
> traps when debugging.
> 

OK, I will add it back in the v3 patches.
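
For example, something along these lines (a rough sketch only; the surrounding structure
is a guess, it just shows the tracepoint put back in front of the generic helper):

/* in arch/arm/kvm/coproc.c */
static int emulate_cp15(struct kvm_vcpu *vcpu,
			const struct coproc_params *params)
{
	/* Keep the old tracepoint so trapped register accesses stay
	 * visible when debugging. */
	trace_kvm_emulate_cp15_imp(params->Op1, params->Rt1, params->CRn,
				   params->CRm, params->Op2,
				   params->is_write);

	return emulate_cp(vcpu, params, cp15_regs, ARRAY_SIZE(cp15_regs));
}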

> 

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [PATCH v2 05/11] KVM: arm: check ordering of all system register tables
  2015-06-10 13:52   ` Alex Bennée
@ 2015-06-14 16:18     ` zichao
  0 siblings, 0 replies; 26+ messages in thread
From: zichao @ 2015-06-14 16:18 UTC (permalink / raw)
  To: Alex Bennée
  Cc: kvm, linux-arm-kernel, kvmarm, christoffer.dall, marc.zyngier,
	will.deacon, huangzhichao



On 2015/6/10 21:52, Alex Bennée wrote:
> 
> Zhichao Huang <zhichao.huang@linaro.org> writes:
> 
>> We now have multiple tables for the various system registers
>> we trap. Make sure we check the order of all of them, as it is
>> critical that we get the order right (been there, done that...).
>>
>> Signed-off-by: Zhichao Huang <zhichao.huang@linaro.org>
>> ---
>>  arch/arm/kvm/coproc.c | 26 +++++++++++++++++++++-----
>>  1 file changed, 21 insertions(+), 5 deletions(-)
>>
>> diff --git a/arch/arm/kvm/coproc.c b/arch/arm/kvm/coproc.c
>> index d23395b..16d5f69 100644
>> --- a/arch/arm/kvm/coproc.c
>> +++ b/arch/arm/kvm/coproc.c
>> @@ -737,6 +737,9 @@ static struct coproc_reg invariant_cp15[] = {
>>  	{ CRn( 0), CRm( 0), Op1( 0), Op2( 3), is32, NULL, get_TLBTR },
>>  	{ CRn( 0), CRm( 0), Op1( 0), Op2( 6), is32, NULL, get_REVIDR },
>>  
>> +	{ CRn( 0), CRm( 0), Op1( 1), Op2( 1), is32, NULL, get_CLIDR },
>> +	{ CRn( 0), CRm( 0), Op1( 1), Op2( 7), is32, NULL, get_AIDR },
>> +
>>  	{ CRn( 0), CRm( 1), Op1( 0), Op2( 0), is32, NULL, get_ID_PFR0 },
>>  	{ CRn( 0), CRm( 1), Op1( 0), Op2( 1), is32, NULL, get_ID_PFR1 },
>>  	{ CRn( 0), CRm( 1), Op1( 0), Op2( 2), is32, NULL, get_ID_DFR0 },
>> @@ -752,9 +755,6 @@ static struct coproc_reg invariant_cp15[] = {
>>  	{ CRn( 0), CRm( 2), Op1( 0), Op2( 3), is32, NULL, get_ID_ISAR3 },
>>  	{ CRn( 0), CRm( 2), Op1( 0), Op2( 4), is32, NULL, get_ID_ISAR4 },
>>  	{ CRn( 0), CRm( 2), Op1( 0), Op2( 5), is32, NULL, get_ID_ISAR5 },
>> -
>> -	{ CRn( 0), CRm( 0), Op1( 1), Op2( 1), is32, NULL, get_CLIDR },
>> -	{ CRn( 0), CRm( 0), Op1( 1), Op2( 7), is32, NULL, get_AIDR },
>>  };
>>  
>>  /*
>> @@ -1297,13 +1297,29 @@ int kvm_arm_copy_coproc_indices(struct kvm_vcpu *vcpu, u64 __user *uindices)
>>  	return write_demux_regids(uindices);
>>  }
>>  
>> +static int check_sysreg_table(const struct coproc_reg *table, unsigned int n)
>> +{
>> +	unsigned int i;
>> +
>> +	for (i = 1; i < n; i++) {
>> +		if (cmp_reg(&table[i-1], &table[i]) >= 0) {
>> +			kvm_err("sys_reg table %p out of order (%d)\n",
>> +					table, i - 1);
> 
> Isn't a BUG_ON *and* a kvm_err() overkill?
> 

Indeed, it should never happen, because the cp14_regs/cp15_regs tables are static.

I think the BUG_ON will make developers notice when they get the ordering wrong.

Another reason is to keep things consistent with the arm64 code.

>> +			return 1;
>> +		}
>> +	}
>> +
>> +	return 0;
>> +}
>> +
>>  void kvm_coproc_table_init(void)
>>  {
>>  	unsigned int i;
>>  
>>  	/* Make sure tables are unique and in order. */
>> -	for (i = 1; i < ARRAY_SIZE(cp15_regs); i++)
>> -		BUG_ON(cmp_reg(&cp15_regs[i-1], &cp15_regs[i]) >= 0);
>> +	BUG_ON(check_sysreg_table(cp14_regs, ARRAY_SIZE(cp14_regs)));
>> +	BUG_ON(check_sysreg_table(cp15_regs, ARRAY_SIZE(cp15_regs)));
>> +	BUG_ON(check_sysreg_table(invariant_cp15, ARRAY_SIZE(invariant_cp15)));
>>  
>>  	/* We abuse the reset function to overwrite the table itself. */
>>  	for (i = 0; i < ARRAY_SIZE(invariant_cp15); i++)
> 

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [PATCH v2 01/11] KVM: arm: plug guest debug exploit
  2015-06-14 16:13           ` zichao
@ 2015-06-16 16:49             ` Will Deacon
  0 siblings, 0 replies; 26+ messages in thread
From: Will Deacon @ 2015-06-16 16:49 UTC (permalink / raw)
  To: zichao
  Cc: Marc Zyngier, kvm, linux-arm-kernel, kvmarm, christoffer.dall,
	alex.bennee, huangzhichao, stable

On Sun, Jun 14, 2015 at 05:13:05PM +0100, zichao wrote:
> Marc and I are discussing how to plug the guest debug exploit in an
> easier way.
> 
> I remember you mentioned that disabling monitor mode had proven
> extremely fragile in practice on 32-bit ARM SoCs. What if I save/restore
> the debug monitor mode on each switch between the guest and the host?
> Would that be acceptable?

If you're just referring to DBGDSCRext, then you could give it a go, but
you'll certainly want to predicate any writes to that register on whether
or not hw_breakpoint managed to reset the debug regs on the host.
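
Something along these lines, perhaps (a rough sketch; the flag name is made up and just
stands in for "hw_breakpoint got the debug registers into a sane state on this CPU"):

#include <linux/types.h>
#include <asm/barrier.h>
#include <asm/hw_breakpoint.h>

/* Made-up flag, set by the host's hw_breakpoint code once it has
 * successfully reset the debug registers on this CPU. */
extern bool host_debug_regs_usable;

static void kvm_restore_host_dbgdscr(u32 saved_dscr)
{
	if (!host_debug_regs_usable)
		return;				/* leave fragile debug regs alone */

	ARM_DBG_WRITE(c0, c2, 2, saved_dscr);	/* DBGDSCRext */
	isb();
}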

Like I said, accessing these registers always worries me, so I'd really
avoid it in KVM if you can. If not, you'll need to do extensive testing
on a bunch of platforms with and without the presence of external debug.

Will

^ permalink raw reply	[flat|nested] 26+ messages in thread

end of thread, other threads:[~2015-06-16 16:49 UTC | newest]

Thread overview: 26+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2015-05-31  4:27 [PATCH v2 00/11] KVM: arm: debug infrastructure support Zhichao Huang
2015-05-31  4:27 ` [PATCH v2 01/11] KVM: arm: plug guest debug exploit Zhichao Huang
2015-06-01 10:56   ` Marc Zyngier
2015-06-07 13:40     ` zichao
2015-06-09 10:29       ` Marc Zyngier
2015-06-14 16:08         ` zichao
2015-06-14 16:13           ` zichao
2015-06-16 16:49             ` Will Deacon
2015-05-31  4:27 ` [PATCH v2 02/11] KVM: arm: rename pm_fake handler to trap_raz_wi Zhichao Huang
2015-06-09 10:42   ` Alex Bennée
2015-05-31  4:27 ` [PATCH v2 03/11] KVM: arm: enable to use the ARM_DSCR_MDBGEN macro from KVM assembly code Zhichao Huang
2015-06-09 13:42   ` Alex Bennée
2015-05-31  4:27 ` [PATCH v2 04/11] KVM: arm: common infrastructure for handling AArch32 CP14/CP15 Zhichao Huang
2015-06-09 10:45   ` Alex Bennée
2015-06-14 16:17     ` zichao
2015-05-31  4:27 ` [PATCH v2 05/11] KVM: arm: check ordering of all system register tables Zhichao Huang
2015-06-10 13:52   ` Alex Bennée
2015-06-14 16:18     ` zichao
2015-05-31  4:27 ` [PATCH v2 06/11] KVM: arm: add trap handlers for 32-bit debug registers Zhichao Huang
2015-05-31  4:27 ` [PATCH v2 07/11] KVM: arm: add trap handlers for 64-bit " Zhichao Huang
2015-05-31  4:27 ` [PATCH v2 08/11] KVM: arm: implement dirty bit mechanism for " Zhichao Huang
2015-05-31  4:27 ` [PATCH v2 09/11] KVM: arm: disable debug mode if we don't actually need it Zhichao Huang
2015-06-01 10:16   ` Will Deacon
2015-06-07 14:08     ` zichao
2015-05-31  4:27 ` [PATCH v2 10/11] KVM: arm: implement lazy world switch for debug registers Zhichao Huang
2015-05-31  4:27 ` [PATCH v2 11/11] KVM: arm: enable trapping of all " Zhichao Huang

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).