Subject: [Qemu-devel] [PATCH RFC 0/4] arm64: cross cpu support
From: Tushar Jagad @ 2015-09-09  8:38 UTC
To: linux-arm-kernel, kvmarm
Cc: peter.maydell, marc.zyngier, patches, qemu-devel, tushar.jagad,
	christoffer.dall

Running guests whose vcpus differ from the host cpu (for example, a
Cortex-A57 guest on an X-Gene host) is currently not supported in
arm64 KVM.

This patchset adds basic support for running guests in a cross-cpu
configuration. For now the cross-cpu functionality is limited to:
- A target-specific MIDR register value, i.e. /proc/cpuinfo reflects
  the cpu model requested by the user.
- Hardware debug capability information, i.e. the guest kernel sees
  the number of breakpoints and watchpoints requested by the user
  (see the usage sketch below).

These patches are based on top of kernel tag v4.2.
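
For illustration (editor's sketch, not part of the series): given the
feature-bit layout introduced in patch 2 (breakpoint count in bits
[6:3] and watchpoint count in bits [10:7] of features[0]), userspace
could request the debug capabilities roughly as follows, using the
standard KVM_ARM_PREFERRED_TARGET and KVM_ARM_VCPU_INIT ioctls; the
vm_fd/vcpu_fd names are assumed:

	struct kvm_vcpu_init init;

	/* Ask KVM for the preferred target; features[] comes back zeroed. */
	if (ioctl(vm_fd, KVM_ARM_PREFERRED_TARGET, &init) < 0)
		err(1, "KVM_ARM_PREFERRED_TARGET");

	/* Request 4 h/w breakpoints and 2 h/w watchpoints for the guest. */
	init.features[0] |= 4 << KVM_ARM_VCPU_NUM_BPTS;
	init.features[0] |= 2 << KVM_ARM_VCPU_NUM_WPTS;

	/* Fails with -EINVAL if the host supports fewer than requested. */
	if (ioctl(vcpu_fd, KVM_ARM_VCPU_INIT, &init) < 0)
		err(1, "KVM_ARM_VCPU_INIT");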

Marc Zyngier (1):
  arm64: KVM: add MIDR_EL1 switching

Tushar Jagad (3):
  arm64: kvm: enable trapping of read access to regs in TID3 group
  arm64: kvm: Setup MIDR as per target vcpu
  arm/arm64: kvm: Disable comparison of cpu and vcpu target

 Documentation/virtual/kvm/api.txt |    8 +
 arch/arm/kvm/arm.c                |   53 +++-
 arch/arm64/include/asm/kvm_arm.h  |    2 +-
 arch/arm64/include/asm/kvm_asm.h  |   40 ++-
 arch/arm64/include/asm/kvm_host.h |    4 +-
 arch/arm64/include/uapi/asm/kvm.h |    7 +
 arch/arm64/kvm/hyp.S              |    4 +
 arch/arm64/kvm/sys_regs.c         |  496 +++++++++++++++++++++++++++++++++----
 8 files changed, 556 insertions(+), 58 deletions(-)

--
1.7.9.5

Subject: [Qemu-devel] [PATCH RFC 1/4] arm64: KVM: add MIDR_EL1 switching
From: Tushar Jagad @ 2015-09-09  8:38 UTC
To: linux-arm-kernel, kvmarm
Cc: peter.maydell, marc.zyngier, patches, qemu-devel, tushar.jagad,
	christoffer.dall

From: Marc Zyngier <marc.zyngier@arm.com>

Move MIDR_EL1 to be a world-switched register, instead of being
unchanged from the host.

The behaviour is preserved by using the host's MIDR_EL1 as a
reset value for the guest's register.
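
Editor's note: on ARMv8, a non-secure EL1 read of MIDR_EL1 returns the
value held in VPIDR_EL2, which is why the world-switch code below saves
and restores vpidr_el2. The #696 offset in hyp.S appears to be the byte
offset of sysreg index 88 (MIDR_EL1) in the context array, i.e.
(88 - 1) * 8, given that MPIDR_EL1 (index 1) lives at [x3]. A minimal
sketch of the guest-visible effect (illustration, not part of the
patch; the MIDR value is an assumed example):

	u64 midr;

	/*
	 * Host, at EL2, before entering the guest: advertise a
	 * Cortex-A57 r0p0 MIDR (0x410fd070) to the guest.
	 */
	asm volatile("msr vpidr_el2, %0" : : "r" (0x410fd070UL));

	/*
	 * Guest, at EL1: this read returns VPIDR_EL2, not the host's
	 * real MIDR_EL1.
	 */
	asm volatile("mrs %0, midr_el1" : "=r" (midr));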

Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Tushar Jagad <tushar.jagad@linaro.org>
---
 arch/arm64/include/asm/kvm_asm.h |   16 +++++++++-------
 arch/arm64/kvm/hyp.S             |    4 ++++
 arch/arm64/kvm/sys_regs.c        |   18 +++++++++++++-----
 3 files changed, 26 insertions(+), 12 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_asm.h b/arch/arm64/include/asm/kvm_asm.h
index 3c5fe68..c1d5bde 100644
--- a/arch/arm64/include/asm/kvm_asm.h
+++ b/arch/arm64/include/asm/kvm_asm.h
@@ -55,17 +55,19 @@
 #define DBGWVR0_EL1	71	/* Debug Watchpoint Value Registers (0-15) */
 #define DBGWVR15_EL1	86
 #define MDCCINT_EL1	87	/* Monitor Debug Comms Channel Interrupt Enable Reg */
+#define MIDR_EL1	88	/* Main ID Register */
 
 /* 32bit specific registers. Keep them at the end of the range */
-#define	DACR32_EL2	88	/* Domain Access Control Register */
-#define	IFSR32_EL2	89	/* Instruction Fault Status Register */
-#define	FPEXC32_EL2	90	/* Floating-Point Exception Control Register */
-#define	DBGVCR32_EL2	91	/* Debug Vector Catch Register */
-#define	TEECR32_EL1	92	/* ThumbEE Configuration Register */
-#define	TEEHBR32_EL1	93	/* ThumbEE Handler Base Register */
-#define	NR_SYS_REGS	94
+#define	DACR32_EL2	89	/* Domain Access Control Register */
+#define	IFSR32_EL2	90	/* Instruction Fault Status Register */
+#define	FPEXC32_EL2	91	/* Floating-Point Exception Control Register */
+#define	DBGVCR32_EL2	92	/* Debug Vector Catch Register */
+#define	TEECR32_EL1	93	/* ThumbEE Configuration Register */
+#define	TEEHBR32_EL1	94	/* ThumbEE Handler Base Register */
+#define	NR_SYS_REGS	95
 
 /* 32bit mapping */
+#define c0_MIDR		(MIDR_EL1 * 2)	/* Main ID Register */
 #define c0_MPIDR	(MPIDR_EL1 * 2)	/* MultiProcessor ID Register */
 #define c0_CSSELR	(CSSELR_EL1 * 2)/* Cache Size Selection Register */
 #define c1_SCTLR	(SCTLR_EL1 * 2)	/* System Control Register */
diff --git a/arch/arm64/kvm/hyp.S b/arch/arm64/kvm/hyp.S
index 17a8fb1..6013347 100644
--- a/arch/arm64/kvm/hyp.S
+++ b/arch/arm64/kvm/hyp.S
@@ -216,6 +216,7 @@
 	mrs	x23, 	cntkctl_el1
 	mrs	x24,	par_el1
 	mrs	x25,	mdscr_el1
+	mrs	x26,	vpidr_el2
 
 	stp	x4, x5, [x3]
 	stp	x6, x7, [x3, #16]
@@ -228,6 +229,7 @@
 	stp	x20, x21, [x3, #128]
 	stp	x22, x23, [x3, #144]
 	stp	x24, x25, [x3, #160]
+	str	x26, [x3, #696]
 .endm
 
 .macro save_debug
@@ -442,6 +444,7 @@
 	ldp	x20, x21, [x3, #128]
 	ldp	x22, x23, [x3, #144]
 	ldp	x24, x25, [x3, #160]
+	ldr	x26, [x3, #696]
 
 	msr	vmpidr_el2,	x4
 	msr	csselr_el1,	x5
@@ -465,6 +468,7 @@
 	msr	cntkctl_el1,	x23
 	msr	par_el1,	x24
 	msr	mdscr_el1,	x25
+	msr	vpidr_el2,	x26
 .endm
 
 .macro restore_debug
diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index c370b40..7047292 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -170,17 +170,25 @@ static bool trap_dbgauthstatus_el1(struct kvm_vcpu *vcpu,
 	}
 }
 
+static void reset_midr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r)
+{
+	/*
+	 * We only export the host's MIDR_EL1 for now.
+	 */
+	vcpu_sys_reg(vcpu, MIDR_EL1) = read_cpuid_id();
+}
+
 /*
  * We want to avoid world-switching all the DBG registers all the
  * time:
- * 
+ *
  * - If we've touched any debug register, it is likely that we're
  *   going to touch more of them. It then makes sense to disable the
  *   traps and start doing the save/restore dance
  * - If debug is active (DBG_MDSCR_KDE or DBG_MDSCR_MDE set), it is
  *   then mandatory to save/restore the registers, as the guest
  *   depends on them.
- * 
+ *
  * For this, we use a DIRTY bit, indicating the guest has modified the
  * debug registers, used as follow:
  *
@@ -350,6 +358,9 @@ static const struct sys_reg_desc sys_reg_descs[] = {
 	{ Op0(0b10), Op1(0b100), CRn(0b0000), CRm(0b0111), Op2(0b000),
 	  NULL, reset_val, DBGVCR32_EL2, 0 },
 
+	/* MIDR_EL1 */
+	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0000), Op2(0b000),
+	  NULL, reset_midr, MIDR_EL1 },
 	/* MPIDR_EL1 */
 	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0000), Op2(0b101),
 	  NULL, reset_mpidr, MPIDR_EL1 },
@@ -1091,7 +1102,6 @@ static const struct sys_reg_desc *index_to_sys_reg_desc(struct kvm_vcpu *vcpu,
 		((struct sys_reg_desc *)r)->val = val;			\
 	}
 
-FUNCTION_INVARIANT(midr_el1)
 FUNCTION_INVARIANT(ctr_el0)
 FUNCTION_INVARIANT(revidr_el1)
 FUNCTION_INVARIANT(id_pfr0_el1)
@@ -1113,8 +1123,6 @@ FUNCTION_INVARIANT(aidr_el1)
 
 /* ->val is filled in by kvm_sys_reg_table_init() */
 static struct sys_reg_desc invariant_sys_regs[] = {
-	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0000), Op2(0b000),
-	  NULL, get_midr_el1 },
 	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0000), Op2(0b110),
 	  NULL, get_revidr_el1 },
 	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0001), Op2(0b000),
-- 
1.7.9.5

Subject: [Qemu-devel] [PATCH RFC 2/4] arm64: kvm: enable trapping of read access to regs in TID3 group
From: Tushar Jagad @ 2015-09-09  8:38 UTC
To: linux-arm-kernel, kvmarm
Cc: peter.maydell, marc.zyngier, patches, qemu-devel, tushar.jagad,
	christoffer.dall

This patch modifies HCR_GUEST_FLAGS so that non-secure reads of the
registers in the HCR_EL2.TID3 group trap to EL2.

We emulate accesses to the capability registers that list the number
of breakpoints, watchpoints, etc. These values are provided by the
user when starting the VM, and the emulated values are constructed at
runtime in the trap handler.
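
Editor's note: the counts live in ID_AA64DFR0_EL1 as the BRPs field
(bits [15:12]) and the WRPs field (bits [23:20]), both architecturally
encoded as "count minus one"; reset_aa64dfr() below patches exactly
those fields. A hedged sketch of the decode a guest kernel performs:

	u64 aa64dfr;
	unsigned int brps, wrps;

	/* Read the (possibly emulated) debug feature register. */
	asm volatile("mrs %0, ID_AA64DFR0_EL1" : "=r" (aa64dfr));

	/* Both fields are stored as count - 1. */
	brps = ((aa64dfr >> 12) & 0xf) + 1;	/* h/w breakpoints */
	wrps = ((aa64dfr >> 20) & 0xf) + 1;	/* h/w watchpoints */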

Signed-off-by: Tushar Jagad <tushar.jagad@linaro.org>
---
 Documentation/virtual/kvm/api.txt |    8 +
 arch/arm/kvm/arm.c                |   50 ++++-
 arch/arm64/include/asm/kvm_arm.h  |    2 +-
 arch/arm64/include/asm/kvm_asm.h  |   38 +++-
 arch/arm64/include/asm/kvm_host.h |    4 +-
 arch/arm64/include/uapi/asm/kvm.h |    7 +
 arch/arm64/kvm/sys_regs.c         |  443 +++++++++++++++++++++++++++++++++----
 7 files changed, 503 insertions(+), 49 deletions(-)

diff --git a/Documentation/virtual/kvm/api.txt b/Documentation/virtual/kvm/api.txt
index a7926a9..b06c104 100644
--- a/Documentation/virtual/kvm/api.txt
+++ b/Documentation/virtual/kvm/api.txt
@@ -2561,6 +2561,14 @@ Possible features:
 	  Depends on KVM_CAP_ARM_EL1_32BIT (arm64 only).
 	- KVM_ARM_VCPU_PSCI_0_2: Emulate PSCI v0.2 for the CPU.
 	  Depends on KVM_CAP_ARM_PSCI_0_2.
+	- KVM_ARM_VCPU_NUM_BPTS: Number of supported h/w breakpoints.
+	  This is a 4-bit value which defines the number of hardware
+	  breakpoints supported on the guest. If this is not specified or
+	  is set to zero then the guest sees the value as is from the host.
+	- KVM_ARM_VCPU_NUM_WPTS: Number of supported h/w watchpoints.
+	  This is a 4-bit value which defines the number of hardware
+	  watchpoints supported on the guest. If this is not specified or
+	  is set to zero then the guest sees the value as is from the host.
 
 
 4.83 KVM_ARM_PREFERRED_TARGET
diff --git a/arch/arm/kvm/arm.c b/arch/arm/kvm/arm.c
index bc738d2..8907d37 100644
--- a/arch/arm/kvm/arm.c
+++ b/arch/arm/kvm/arm.c
@@ -696,6 +696,8 @@ static int kvm_vcpu_set_target(struct kvm_vcpu *vcpu,
 			       const struct kvm_vcpu_init *init)
 {
 	unsigned int i;
+	u64 aa64dfr;
+
 	int phys_target = kvm_target_cpu();
 
 	if (init->target != phys_target)
@@ -708,6 +710,8 @@ static int kvm_vcpu_set_target(struct kvm_vcpu *vcpu,
 	if (vcpu->arch.target != -1 && vcpu->arch.target != init->target)
 		return -EINVAL;
 
+	asm volatile("mrs %0, ID_AA64DFR0_EL1\n" : "=r" (aa64dfr));
+
 	/* -ENOENT for unknown features, -EINVAL for invalid combinations. */
 	for (i = 0; i < sizeof(init->features) * 8; i++) {
 		bool set = (init->features[i / 32] & (1 << (i % 32)));
@@ -715,6 +719,50 @@ static int kvm_vcpu_set_target(struct kvm_vcpu *vcpu,
 		if (set && i >= KVM_VCPU_MAX_FEATURES)
 			return -ENOENT;
 
+		if (i == KVM_ARM_VCPU_NUM_BPTS) {
+			int h_bpts;
+			int g_bpts;
+
+			h_bpts = ((aa64dfr >> 12) & 0xf) + 1;
+			g_bpts = (init->features[KVM_ARM_VCPU_BPTS_FEATURES_IDX] &
+					KVM_ARM_VCPU_BPTS_MASK) >> KVM_ARM_VCPU_NUM_BPTS;
+
+			/*
+			 * We ensure that the host can support the requested
+			 * number of hardware breakpoints.
+			 */
+			if (g_bpts > h_bpts)
+				return -EINVAL;
+
+			vcpu->arch.bpts = g_bpts;
+
+			i += 3;
+
+			continue;
+		}
+
+		if (i == KVM_ARM_VCPU_NUM_WPTS) {
+			int h_wpts;
+			int g_wpts;
+
+			h_wpts = ((aa64dfr >> 20) & 0xf) + 1;
+			g_wpts = (init->features[KVM_ARM_VCPU_WPTS_FEATURES_IDX] &
+					KVM_ARM_VCPU_WPTS_MASK) >> KVM_ARM_VCPU_NUM_WPTS;
+
+			/*
+			 * We ensure that the host can support the requested
+			 * number of hardware watchpoints.
+			 */
+			if (g_wpts > h_wpts)
+				return -EINVAL;
+
+			vcpu->arch.wpts = g_wpts;
+
+			i += 3;
+
+			continue;
+		}
+
 		/*
 		 * Secondary and subsequent calls to KVM_ARM_VCPU_INIT must
 		 * use the same feature set.
@@ -727,7 +775,7 @@ static int kvm_vcpu_set_target(struct kvm_vcpu *vcpu,
 			set_bit(i, vcpu->arch.features);
 	}
 
-	vcpu->arch.target = phys_target;
+	vcpu->arch.target = init->target;
 
 	/* Now we know what it is, we can reset it. */
 	return kvm_reset_vcpu(vcpu);
diff --git a/arch/arm64/include/asm/kvm_arm.h b/arch/arm64/include/asm/kvm_arm.h
index ac6fafb..3b67051 100644
--- a/arch/arm64/include/asm/kvm_arm.h
+++ b/arch/arm64/include/asm/kvm_arm.h
@@ -78,7 +78,7 @@
  */
 #define HCR_GUEST_FLAGS (HCR_TSC | HCR_TSW | HCR_TWE | HCR_TWI | HCR_VM | \
 			 HCR_TVM | HCR_BSU_IS | HCR_FB | HCR_TAC | \
-			 HCR_AMO | HCR_SWIO | HCR_TIDCP | HCR_RW)
+			 HCR_AMO | HCR_SWIO | HCR_TIDCP | HCR_RW | HCR_TID3)
 #define HCR_VIRT_EXCP_MASK (HCR_VA | HCR_VI | HCR_VF)
 #define HCR_INT_OVERRIDE   (HCR_FMO | HCR_IMO)
 
diff --git a/arch/arm64/include/asm/kvm_asm.h b/arch/arm64/include/asm/kvm_asm.h
index c1d5bde..087d104 100644
--- a/arch/arm64/include/asm/kvm_asm.h
+++ b/arch/arm64/include/asm/kvm_asm.h
@@ -56,15 +56,39 @@
 #define DBGWVR15_EL1	86
 #define MDCCINT_EL1	87	/* Monitor Debug Comms Channel Interrupt Enable Reg */
 #define MIDR_EL1	88	/* Main ID Register */
+#define ID_AA64MMFR0_EL1	89	/* AArch64 Memory Model Feature Register 0 */
+#define ID_AA64MMFR1_EL1	90	/* AArch64 Memory Model Feature Register 1 */
+#define MVFR0_EL1	91	/* AArch32 Media and VFP Feature Register 0 */
+#define MVFR1_EL1	92	/* AArch32 Media and VFP Feature Register 1 */
+#define MVFR2_EL1	93	/* AArch32 Media and VFP Feature Register 2 */
+#define ID_AA64PFR0_EL1	94	/* AArch64 Processor Feature Register 0 */
+#define ID_AA64PFR1_EL1	95	/* AArch64 Processor Feature Register 1 */
+#define ID_AA64DFR0_EL1	96	/* AArch64 Debug Feature Register 0 */
+#define ID_AA64DFR1_EL1	97	/* AArch64 Debug Feature Register 1 */
+#define ID_AA64ISAR0_EL1	98	/* AArch64 Instruction Set Attribute Register 0 */
+#define ID_AA64ISAR1_EL1	99	/* AArch64 Instruction Set Attribute Register 1 */
+#define ID_PFR0_EL1	100	/* AArch32 Processor Feature Register 0 */
+#define ID_PFR1_EL1	101	/* AArch32 Processor Feature Register 1 */
+#define ID_DFR0_EL1	102	/* AArch32 Debug Feature Register 0 */
+#define ID_ISAR0_EL1	103	/* AArch32 Instruction Set Attribute Register 0 */
+#define ID_ISAR1_EL1	104	/* AArch32 Instruction Set Attribute Register 1 */
+#define ID_ISAR2_EL1	105	/* AArch32 Instruction Set Attribute Register 2 */
+#define ID_ISAR3_EL1	106	/* AArch32 Instruction Set Attribute Register 3 */
+#define ID_ISAR4_EL1	107	/* AArch32 Instruction Set Attribute Register 4 */
+#define ID_ISAR5_EL1	108	/* AArch32 Instruction Set Attribute Register 5 */
+#define ID_MMFR0_EL1	109	/* AArch32 Memory Model Feature Register 0 */
+#define ID_MMFR1_EL1	110	/* AArch32 Memory Model Feature Register 1 */
+#define ID_MMFR2_EL1	111	/* AArch32 Memory Model Feature Register 2 */
+#define ID_MMFR3_EL1	112	/* AArch32 Memory Model Feature Register 3 */
 
 /* 32bit specific registers. Keep them at the end of the range */
-#define	DACR32_EL2	89	/* Domain Access Control Register */
-#define	IFSR32_EL2	90	/* Instruction Fault Status Register */
-#define	FPEXC32_EL2	91	/* Floating-Point Exception Control Register */
-#define	DBGVCR32_EL2	92	/* Debug Vector Catch Register */
-#define	TEECR32_EL1	93	/* ThumbEE Configuration Register */
-#define	TEEHBR32_EL1	94	/* ThumbEE Handler Base Register */
-#define	NR_SYS_REGS	95
+#define	DACR32_EL2	113	/* Domain Access Control Register */
+#define	IFSR32_EL2	114	/* Instruction Fault Status Register */
+#define	FPEXC32_EL2	115	/* Floating-Point Exception Control Register */
+#define	DBGVCR32_EL2	116	/* Debug Vector Catch Register */
+#define	TEECR32_EL1	117	/* ThumbEE Configuration Register */
+#define	TEEHBR32_EL1	118	/* ThumbEE Handler Base Register */
+#define	NR_SYS_REGS	119
 
 /* 32bit mapping */
 #define c0_MIDR		(MIDR_EL1 * 2)	/* Main ID Register */
diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index 2709db2..c780227 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -43,7 +43,7 @@
 #include <kvm/arm_vgic.h>
 #include <kvm/arm_arch_timer.h>
 
-#define KVM_VCPU_MAX_FEATURES 3
+#define KVM_VCPU_MAX_FEATURES 12
 
 int __attribute_const__ kvm_target_cpu(void);
 int kvm_reset_vcpu(struct kvm_vcpu *vcpu);
@@ -137,6 +137,8 @@ struct kvm_vcpu_arch {
 	/* Target CPU and feature flags */
 	int target;
 	DECLARE_BITMAP(features, KVM_VCPU_MAX_FEATURES);
+	u32 bpts;
+	u32 wpts;
 
 	/* Detect first run of a vcpu */
 	bool has_run_once;
diff --git a/arch/arm64/include/uapi/asm/kvm.h b/arch/arm64/include/uapi/asm/kvm.h
index d268320..94d1fc9 100644
--- a/arch/arm64/include/uapi/asm/kvm.h
+++ b/arch/arm64/include/uapi/asm/kvm.h
@@ -88,6 +88,13 @@ struct kvm_regs {
 #define KVM_ARM_VCPU_POWER_OFF		0 /* CPU is started in OFF state */
 #define KVM_ARM_VCPU_EL1_32BIT		1 /* CPU running a 32bit VM */
 #define KVM_ARM_VCPU_PSCI_0_2		2 /* CPU uses PSCI v0.2 */
+#define KVM_ARM_VCPU_NUM_BPTS		3 /* Number of breakpoints supported */
+#define KVM_ARM_VCPU_NUM_WPTS		7 /* Number of watchpoints supported */
+
+#define KVM_ARM_VCPU_BPTS_FEATURES_IDX	0
+#define KVM_ARM_VCPU_WPTS_FEATURES_IDX	0
+#define KVM_ARM_VCPU_BPTS_MASK		0x00000078
+#define KVM_ARM_VCPU_WPTS_MASK		0x00000780
 
 struct kvm_vcpu_init {
 	__u32 target;
diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index 7047292..273eecd 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -244,6 +244,330 @@ static void reset_mpidr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r)
 	vcpu_sys_reg(vcpu, MPIDR_EL1) = (1ULL << 31) | mpidr;
 }
 
+static bool trap_tid3(struct kvm_vcpu *vcpu,
+		const struct sys_reg_params *p,
+		const struct sys_reg_desc *r)
+{
+	if (p->is_write) {
+		vcpu_sys_reg(vcpu, r->reg) = *vcpu_reg(vcpu, p->Rt);
+	} else {
+		*vcpu_reg(vcpu, p->Rt) = vcpu_sys_reg(vcpu, r->reg);
+	}
+
+	return true;
+}
+
+static bool trap_pfr(struct kvm_vcpu *vcpu,
+		const struct sys_reg_params *p,
+		const struct sys_reg_desc *r)
+{
+	return trap_tid3(vcpu, p, r);
+}
+
+static void reset_pfr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r)
+{
+	u32 prf;
+	u32 idx;
+
+	switch (r->Op2) {
+	case 0:
+		asm volatile("mrs %0, ID_PFR0_EL1\n" : "=r" (prf));
+		idx = ID_PFR0_EL1;
+		break;
+	case 1:
+		asm volatile("mrs %0, ID_PFR1_EL1\n" : "=r" (prf));
+		idx = ID_PFR1_EL1;
+		break;
+
+	default:
+		BUG();
+	}
+
+	vcpu_sys_reg(vcpu, idx) = prf;
+}
+
+static bool trap_dfr(struct kvm_vcpu *vcpu,
+		const struct sys_reg_params *p,
+		const struct sys_reg_desc *r)
+{
+	return trap_tid3(vcpu, p, r);
+}
+
+static void reset_dfr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r)
+{
+	u32 dfr;
+
+	asm volatile("mrs %0, ID_DFR0_EL1\n" : "=r" (dfr));
+	vcpu_sys_reg(vcpu, ID_DFR0_EL1) = dfr;
+}
+
+static bool trap_mmfr(struct kvm_vcpu *vcpu,
+		const struct sys_reg_params *p,
+		const struct sys_reg_desc *r)
+{
+	return trap_tid3(vcpu, p, r);
+}
+
+static void reset_mmfr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r)
+{
+	u32 mmfr;
+	u32 idx;
+
+	switch (r->CRm) {
+	case 1:
+		switch (r->Op2) {
+		case 4:
+			asm volatile("mrs %0, ID_MMFR0_EL1\n" : "=r" (mmfr));
+			idx = ID_MMFR0_EL1;
+			break;
+
+		case 5:
+			asm volatile("mrs %0, ID_MMFR1_EL1\n" : "=r" (mmfr));
+			idx = ID_MMFR1_EL1;
+			break;
+
+		case 6:
+			asm volatile("mrs %0, ID_MMFR2_EL1\n" : "=r" (mmfr));
+			idx = ID_MMFR2_EL1;
+			break;
+
+		case 7:
+			asm volatile("mrs %0, ID_MMFR3_EL1\n" : "=r" (mmfr));
+			idx = ID_MMFR3_EL1;
+			break;
+
+		default:
+			BUG();
+		}
+		break;
+
+#if 0
+	case 2:
+		asm volatile("mrs %0, ID_MMFR4_EL1\n" : "=r" (mmfr));
+		idx = ID_MMFR4_EL1;
+		break;
+#endif
+
+	default:
+		BUG();
+	}
+	vcpu_sys_reg(vcpu, idx) = mmfr;
+}
+
+static bool trap_isar(struct kvm_vcpu *vcpu,
+		const struct sys_reg_params *p,
+		const struct sys_reg_desc *r)
+{
+	return trap_tid3(vcpu, p, r);
+}
+
+static void reset_isar(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r)
+{
+	u32 isar;
+	u32 idx;
+
+	switch (r->Op2) {
+	case 0:
+		asm volatile("mrs %0, ID_ISAR0_EL1\n" : "=r" (isar));
+		idx = ID_ISAR0_EL1;
+		break;
+
+	case 1:
+		asm volatile("mrs %0, ID_ISAR1_EL1\n" : "=r" (isar));
+		idx = ID_ISAR1_EL1;
+		break;
+
+	case 2:
+		asm volatile("mrs %0, ID_ISAR2_EL1\n" : "=r" (isar));
+		idx = ID_ISAR2_EL1;
+		break;
+
+	case 3:
+		asm volatile("mrs %0, ID_ISAR3_EL1\n" : "=r" (isar));
+		idx = ID_ISAR3_EL1;
+		break;
+
+	case 4:
+		asm volatile("mrs %0, ID_ISAR4_EL1\n" : "=r" (isar));
+		idx = ID_ISAR4_EL1;
+		break;
+
+	case 5:
+		asm volatile("mrs %0, ID_ISAR5_EL1\n" : "=r" (isar));
+		idx = ID_ISAR5_EL1;
+		break;
+
+	default:
+		BUG();
+	}
+	vcpu_sys_reg(vcpu, idx) = isar;
+}
+
+static bool trap_mvfr(struct kvm_vcpu *vcpu,
+		const struct sys_reg_params *p,
+		const struct sys_reg_desc *r)
+{
+	return trap_tid3(vcpu, p, r);
+}
+
+static void reset_mvfr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r)
+{
+	u32 mvfr;
+	u32 idx;
+
+	switch (r->Op2) {
+	case 0:
+		asm volatile("mrs %0, MVFR0_EL1\n" : "=r" (mvfr));
+		idx = MVFR0_EL1;
+		break;
+	case 1:
+		asm volatile("mrs %0, MVFR1_EL1\n" : "=r" (mvfr));
+		idx = MVFR1_EL1;
+		break;
+
+	case 2:
+		asm volatile("mrs %0, MVFR2_EL1\n" : "=r" (mvfr));
+		idx = MVFR2_EL1;
+		break;
+
+	default:
+		BUG();
+	}
+
+	vcpu_sys_reg(vcpu, idx) = mvfr;
+}
+
+static bool trap_aa64pfr(struct kvm_vcpu *vcpu,
+		const struct sys_reg_params *p,
+		const struct sys_reg_desc *r)
+{
+	return trap_tid3(vcpu, p, r);
+}
+
+static void reset_aa64pfr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r)
+{
+	u64 aa64pfr;
+	u32 idx;
+
+	switch (r->Op2) {
+	case 0:
+		asm volatile("mrs %0, ID_AA64PFR0_EL1\n" : "=r" (aa64pfr));
+		idx = ID_AA64PFR0_EL1;
+		break;
+	case 1:
+		asm volatile("mrs %0, ID_AA64PFR1_EL1\n" : "=r" (aa64pfr));
+		idx = ID_AA64PFR1_EL1;
+		break;
+
+	default:
+		BUG();
+	}
+
+	vcpu_sys_reg(vcpu, idx) = aa64pfr;
+}
+
+static bool trap_aa64dfr(struct kvm_vcpu *vcpu,
+		const struct sys_reg_params *p,
+		const struct sys_reg_desc *r)
+{
+	return trap_tid3(vcpu, p, r);
+}
+
+static void reset_aa64dfr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r)
+{
+	u64 aa64dfr;
+	u32 idx;
+	u32 bpts;
+	u32 wpts;
+
+	bpts = vcpu->arch.bpts;
+	if (bpts)
+		bpts--;
+
+	wpts = vcpu->arch.wpts;
+	if (wpts)
+		wpts--;
+
+	switch (r->Op2) {
+	case 0:
+		asm volatile("mrs %0, ID_AA64DFR0_EL1\n" : "=r" (aa64dfr));
+		idx = ID_AA64DFR0_EL1;
+		if (bpts)
+			aa64dfr = (aa64dfr & ~(0xf << 12)) | (bpts << 12);
+		if (wpts)
+			aa64dfr = (aa64dfr & ~(0xf << 20)) | (wpts << 20);
+		break;
+	case 1:
+		asm volatile("mrs %0, ID_AA64DFR1_EL1\n" : "=r" (aa64dfr));
+		idx = ID_AA64DFR1_EL1;
+		break;
+
+	default:
+		BUG();
+	}
+
+	vcpu_sys_reg(vcpu, idx) = aa64dfr;
+}
+
+static bool trap_aa64isar(struct kvm_vcpu *vcpu,
+		const struct sys_reg_params *p,
+		const struct sys_reg_desc *r)
+{
+	return trap_tid3(vcpu, p, r);
+}
+
+static void reset_aa64isar(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r)
+{
+	u64 aa64isar;
+	u32 idx;
+
+	switch (r->Op2) {
+	case 0:
+		asm volatile("mrs %0, ID_AA64ISAR0_EL1\n" : "=r" (aa64isar));
+		idx = ID_AA64ISAR0_EL1;
+		break;
+
+	case 1:
+		asm volatile("mrs %0, ID_AA64ISAR1_EL1\n" : "=r" (aa64isar));
+		idx = ID_AA64ISAR1_EL1;
+		break;
+
+	default:
+		BUG();
+	}
+	vcpu_sys_reg(vcpu, idx) = aa64isar;
+}
+
+static bool trap_aa64mmfr(struct kvm_vcpu *vcpu,
+		const struct sys_reg_params *p,
+		const struct sys_reg_desc *r)
+{
+	return trap_tid3(vcpu, p, r);
+}
+
+static void reset_aa64mmfr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r)
+{
+	u64 aa64mmfr;
+	u32 idx;
+
+	switch (r->Op2) {
+	case 0:
+		asm volatile("mrs %0, ID_AA64MMFR0_EL1\n" : "=r" (aa64mmfr));
+		idx = ID_AA64MMFR0_EL1;
+		break;
+	case 1:
+		asm volatile("mrs %0, ID_AA64MMFR1_EL1\n" : "=r" (aa64mmfr));
+		idx = ID_AA64MMFR1_EL1;
+		break;
+
+	default:
+		BUG();
+	}
+
+	vcpu_sys_reg(vcpu, idx) = aa64mmfr;
+}
+
+
 /* Silly macro to expand the DBG{BCR,BVR,WVR,WCR}n_EL1 registers in one go */
 #define DBG_BCR_BVR_WCR_WVR_EL1(n)					\
 	/* DBGBVRn_EL1 */						\
@@ -364,6 +688,86 @@ static const struct sys_reg_desc sys_reg_descs[] = {
 	/* MPIDR_EL1 */
 	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0000), Op2(0b101),
 	  NULL, reset_mpidr, MPIDR_EL1 },
+
+	/* ID_PFR0_EL1 */
+	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0001), Op2(0b000),
+	  trap_pfr, reset_pfr, ID_PFR0_EL1 },
+	/* ID_PFR1_EL1 */
+	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0001), Op2(0b001),
+	  trap_pfr, reset_pfr, ID_PFR1_EL1 },
+	/* ID_DFR0_EL1 */
+	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0001), Op2(0b010),
+	  trap_dfr, reset_dfr, ID_DFR0_EL1 },
+	/* ID_MMFR0_EL1 */
+	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0001), Op2(0b100),
+	  trap_mmfr, reset_mmfr, ID_MMFR0_EL1 },
+	/* ID_MMFR1_EL1 */
+	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0001), Op2(0b101),
+	  trap_mmfr, reset_mmfr, ID_MMFR1_EL1 },
+	/* ID_MMFR2_EL1 */
+	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0001), Op2(0b110),
+	  trap_mmfr, reset_mmfr, ID_MMFR2_EL1 },
+	/* ID_MMFR3_EL1 */
+	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0001), Op2(0b111),
+	  trap_mmfr, reset_mmfr, ID_MMFR3_EL1 },
+
+	/* ID_ISAR0_EL1 */
+	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0010), Op2(0b000),
+	  trap_isar, reset_isar, ID_ISAR0_EL1 },
+	/* ID_ISAR1_EL1 */
+	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0010), Op2(0b001),
+	  trap_isar, reset_isar, ID_ISAR1_EL1 },
+	/* ID_ISAR2_EL1 */
+	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0010), Op2(0b010),
+	  trap_isar, reset_isar, ID_ISAR2_EL1 },
+	/* ID_ISAR3_EL1 */
+	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0010), Op2(0b011),
+	  trap_isar, reset_isar, ID_ISAR3_EL1 },
+	/* ID_ISAR4_EL1 */
+	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0010), Op2(0b100),
+	  trap_isar, reset_isar, ID_ISAR4_EL1 },
+	/* ID_ISAR5_EL1 */
+	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0010), Op2(0b101),
+	  trap_isar, reset_isar, ID_ISAR5_EL1 },
+
+	/* MVFR0_EL1 */
+	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0011), Op2(0b000),
+	  trap_mvfr, reset_mvfr, MVFR0_EL1 },
+	/* MVFR1_EL1 */
+	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0011), Op2(0b001),
+	  trap_mvfr, reset_mvfr, MVFR1_EL1 },
+	/* MVFR2_EL1 */
+	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0011), Op2(0b010),
+	  trap_mvfr, reset_mvfr, MVFR2_EL1 },
+
+	/* ID_AA64PFR0_EL1 */
+	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0100), Op2(0b000),
+	  trap_aa64pfr, reset_aa64pfr, ID_AA64PFR0_EL1 },
+	/* ID_AA64PFR1_EL1 */
+	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0100), Op2(0b001),
+	  trap_aa64pfr, reset_aa64pfr, ID_AA64PFR1_EL1 },
+
+	/* ID_AA64DFR0_EL1 */
+	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0101), Op2(0b000),
+	  trap_aa64dfr, reset_aa64dfr, ID_AA64DFR0_EL1 },
+	/* ID_AA64DFR1_EL1 */
+	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0101), Op2(0b001),
+	  trap_aa64dfr, reset_aa64dfr, ID_AA64DFR1_EL1 },
+
+	/* ID_AA64ISAR0_EL1 */
+	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0110), Op2(0b000),
+	  trap_aa64isar, reset_aa64isar, ID_AA64ISAR0_EL1 },
+	/* ID_AA64ISAR1_EL1 */
+	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0110), Op2(0b001),
+	  trap_aa64isar, reset_aa64isar, ID_AA64ISAR1_EL1 },
+
+	/* ID_AA64MMFR0_EL1 */
+	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0111), Op2(0b000),
+	  trap_aa64mmfr, reset_aa64mmfr, ID_AA64MMFR0_EL1 },
+	/* ID_AA64MMFR1_EL1 */
+	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0111), Op2(0b001),
+	  trap_aa64mmfr, reset_aa64mmfr, ID_AA64MMFR1_EL1 },
+
 	/* SCTLR_EL1 */
 	{ Op0(0b11), Op1(0b000), CRn(0b0001), CRm(0b0000), Op2(0b000),
 	  access_vm_reg, reset_val, SCTLR_EL1, 0x00C50078 },
@@ -1104,20 +1508,7 @@ static const struct sys_reg_desc *index_to_sys_reg_desc(struct kvm_vcpu *vcpu,
 
 FUNCTION_INVARIANT(ctr_el0)
 FUNCTION_INVARIANT(revidr_el1)
-FUNCTION_INVARIANT(id_pfr0_el1)
-FUNCTION_INVARIANT(id_pfr1_el1)
-FUNCTION_INVARIANT(id_dfr0_el1)
 FUNCTION_INVARIANT(id_afr0_el1)
-FUNCTION_INVARIANT(id_mmfr0_el1)
-FUNCTION_INVARIANT(id_mmfr1_el1)
-FUNCTION_INVARIANT(id_mmfr2_el1)
-FUNCTION_INVARIANT(id_mmfr3_el1)
-FUNCTION_INVARIANT(id_isar0_el1)
-FUNCTION_INVARIANT(id_isar1_el1)
-FUNCTION_INVARIANT(id_isar2_el1)
-FUNCTION_INVARIANT(id_isar3_el1)
-FUNCTION_INVARIANT(id_isar4_el1)
-FUNCTION_INVARIANT(id_isar5_el1)
 FUNCTION_INVARIANT(clidr_el1)
 FUNCTION_INVARIANT(aidr_el1)
 
@@ -1125,34 +1516,8 @@ FUNCTION_INVARIANT(aidr_el1)
 static struct sys_reg_desc invariant_sys_regs[] = {
 	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0000), Op2(0b110),
 	  NULL, get_revidr_el1 },
-	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0001), Op2(0b000),
-	  NULL, get_id_pfr0_el1 },
-	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0001), Op2(0b001),
-	  NULL, get_id_pfr1_el1 },
-	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0001), Op2(0b010),
-	  NULL, get_id_dfr0_el1 },
 	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0001), Op2(0b011),
 	  NULL, get_id_afr0_el1 },
-	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0001), Op2(0b100),
-	  NULL, get_id_mmfr0_el1 },
-	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0001), Op2(0b101),
-	  NULL, get_id_mmfr1_el1 },
-	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0001), Op2(0b110),
-	  NULL, get_id_mmfr2_el1 },
-	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0001), Op2(0b111),
-	  NULL, get_id_mmfr3_el1 },
-	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0010), Op2(0b000),
-	  NULL, get_id_isar0_el1 },
-	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0010), Op2(0b001),
-	  NULL, get_id_isar1_el1 },
-	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0010), Op2(0b010),
-	  NULL, get_id_isar2_el1 },
-	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0010), Op2(0b011),
-	  NULL, get_id_isar3_el1 },
-	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0010), Op2(0b100),
-	  NULL, get_id_isar4_el1 },
-	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0010), Op2(0b101),
-	  NULL, get_id_isar5_el1 },
 	{ Op0(0b11), Op1(0b001), CRn(0b0000), CRm(0b0000), Op2(0b001),
 	  NULL, get_clidr_el1 },
 	{ Op0(0b11), Op1(0b001), CRn(0b0000), CRm(0b0000), Op2(0b111),
-- 
1.7.9.5

^ permalink raw reply related	[flat|nested] 24+ messages in thread

* [PATCH RFC 2/4] arm64: kvm: enable trapping of read access to regs in TID3 group
@ 2015-09-09  8:38   ` Tushar Jagad
  0 siblings, 0 replies; 24+ messages in thread
From: Tushar Jagad @ 2015-09-09  8:38 UTC (permalink / raw)
  To: linux-arm-kernel, kvmarm
  Cc: peter.maydell, marc.zyngier, patches, qemu-devel, tushar.jagad,
	christoffer.dall

This patch modifies the HCR_GUEST_FLAGS to enable trapping of
non secure read to registers under the HCR_EL2.TID3 group to EL2.

We emulate the accesses to capability registers which list the number of
breakpoints, watchpoints, etc. These values are provided by the user when
starting the VM. The emulated values are constructed at runtime from the
trap handler.

Signed-off-by: Tushar Jagad <tushar.jagad@linaro.org>
---
 Documentation/virtual/kvm/api.txt |    8 +
 arch/arm/kvm/arm.c                |   50 ++++-
 arch/arm64/include/asm/kvm_arm.h  |    2 +-
 arch/arm64/include/asm/kvm_asm.h  |   38 +++-
 arch/arm64/include/asm/kvm_host.h |    4 +-
 arch/arm64/include/uapi/asm/kvm.h |    7 +
 arch/arm64/kvm/sys_regs.c         |  443 +++++++++++++++++++++++++++++++++----
 7 files changed, 503 insertions(+), 49 deletions(-)

diff --git a/Documentation/virtual/kvm/api.txt b/Documentation/virtual/kvm/api.txt
index a7926a9..b06c104 100644
--- a/Documentation/virtual/kvm/api.txt
+++ b/Documentation/virtual/kvm/api.txt
@@ -2561,6 +2561,14 @@ Possible features:
 	  Depends on KVM_CAP_ARM_EL1_32BIT (arm64 only).
 	- KVM_ARM_VCPU_PSCI_0_2: Emulate PSCI v0.2 for the CPU.
 	  Depends on KVM_CAP_ARM_PSCI_0_2.
+	- KVM_ARM_VCPU_NUM_BPTS: Number of supported h/w breakpoints
+	  This is a 4-bit value which defines number of hardware
+	  breakpoints supported on guest. If this is not sepecified or
+	  set to zero then the guest sees the value as is from the host.
+	- KVM_ARM_VCPU_NUM_WPTS: Number of supported h/w watchpoints
+	  This is a 4-bit value which defines number of hardware
+	  watchpoints supported on guest. If this is not sepecified or
+	  set to zero then the guest sees the value as is from the host.
 
 
 4.83 KVM_ARM_PREFERRED_TARGET
diff --git a/arch/arm/kvm/arm.c b/arch/arm/kvm/arm.c
index bc738d2..8907d37 100644
--- a/arch/arm/kvm/arm.c
+++ b/arch/arm/kvm/arm.c
@@ -696,6 +696,8 @@ static int kvm_vcpu_set_target(struct kvm_vcpu *vcpu,
 			       const struct kvm_vcpu_init *init)
 {
 	unsigned int i;
+	u64 aa64dfr;
+
 	int phys_target = kvm_target_cpu();
 
 	if (init->target != phys_target)
@@ -708,6 +710,8 @@ static int kvm_vcpu_set_target(struct kvm_vcpu *vcpu,
 	if (vcpu->arch.target != -1 && vcpu->arch.target != init->target)
 		return -EINVAL;
 
+	asm volatile("mrs %0, ID_AA64DFR0_EL1\n" : "=r" (aa64dfr));
+
 	/* -ENOENT for unknown features, -EINVAL for invalid combinations. */
 	for (i = 0; i < sizeof(init->features) * 8; i++) {
 		bool set = (init->features[i / 32] & (1 << (i % 32)));
@@ -715,6 +719,50 @@ static int kvm_vcpu_set_target(struct kvm_vcpu *vcpu,
 		if (set && i >= KVM_VCPU_MAX_FEATURES)
 			return -ENOENT;
 
+		if (i == KVM_ARM_VCPU_NUM_BPTS) {
+			int h_bpts;
+			int g_bpts;
+
+			h_bpts = ((aa64dfr >> 12) & 0xf) + 1;
+			g_bpts = (init->features[KVM_ARM_VCPU_BPTS_FEATURES_IDX] &
+					KVM_ARM_VCPU_BPTS_MASK) >> KVM_ARM_VCPU_NUM_BPTS;
+
+			/*
+			 * We ensure that the host can support the requested
+			 * number of hardware breakpoints.
+			 */
+			if (g_bpts > h_bpts)
+				return -EINVAL;
+
+			vcpu->arch.bpts = g_bpts;
+
+			i  += 3;
+
+			continue;
+		}
+
+		if (i == KVM_ARM_VCPU_NUM_WPTS) {
+			int h_wpts;
+			int g_wpts;
+
+			h_wpts = ((aa64dfr >> 20) & 0xf) + 1;
+			g_wpts = (init->features[KVM_ARM_VCPU_WPTS_FEATURES_IDX] &
+					KVM_ARM_VCPU_WPTS_MASK) >> KVM_ARM_VCPU_NUM_WPTS;
+
+			/*
+			 * We ensure that the host can support the requested
+			 * number of hardware watchpoints.
+			 */
+			if (g_wpts > h_wpts)
+				return -EINVAL;
+
+			vcpu->arch.wpts = g_wpts;
+
+			i += 3;
+
+			continue;
+		}
+
 		/*
 		 * Secondary and subsequent calls to KVM_ARM_VCPU_INIT must
 		 * use the same feature set.
@@ -727,7 +775,7 @@ static int kvm_vcpu_set_target(struct kvm_vcpu *vcpu,
 			set_bit(i, vcpu->arch.features);
 	}
 
-	vcpu->arch.target = phys_target;
+	vcpu->arch.target = init->target;
 
 	/* Now we know what it is, we can reset it. */
 	return kvm_reset_vcpu(vcpu);
diff --git a/arch/arm64/include/asm/kvm_arm.h b/arch/arm64/include/asm/kvm_arm.h
index ac6fafb..3b67051 100644
--- a/arch/arm64/include/asm/kvm_arm.h
+++ b/arch/arm64/include/asm/kvm_arm.h
@@ -78,7 +78,7 @@
  */
 #define HCR_GUEST_FLAGS (HCR_TSC | HCR_TSW | HCR_TWE | HCR_TWI | HCR_VM | \
 			 HCR_TVM | HCR_BSU_IS | HCR_FB | HCR_TAC | \
-			 HCR_AMO | HCR_SWIO | HCR_TIDCP | HCR_RW)
+			 HCR_AMO | HCR_SWIO | HCR_TIDCP | HCR_RW | HCR_TID3)
 #define HCR_VIRT_EXCP_MASK (HCR_VA | HCR_VI | HCR_VF)
 #define HCR_INT_OVERRIDE   (HCR_FMO | HCR_IMO)
 
diff --git a/arch/arm64/include/asm/kvm_asm.h b/arch/arm64/include/asm/kvm_asm.h
index c1d5bde..087d104 100644
--- a/arch/arm64/include/asm/kvm_asm.h
+++ b/arch/arm64/include/asm/kvm_asm.h
@@ -56,15 +56,39 @@
 #define DBGWVR15_EL1	86
 #define MDCCINT_EL1	87	/* Monitor Debug Comms Channel Interrupt Enable Reg */
 #define MIDR_EL1	88	/* Main ID Register */
+#define ID_AA64MMFR0_EL1	89	/* AArch64 Memory Model Feature Register 0 */
+#define ID_AA64MMFR1_EL1	90	/* AArch64 Memory Model Feature Register 1 */
+#define MVFR0_EL1	91	/* AArch32 Media and VFP Feature Register 0 */
+#define MVFR1_EL1	92	/* AArch32 Media and VFP Feature Register 1 */
+#define MVFR2_EL1	93	/* AArch32 Media and VFP Feature Register 2 */
+#define ID_AA64PFR0_EL1	94	/* AArch64 Processor Feature Register 0 */
+#define ID_AA64PFR1_EL1	95	/* AArch64 Processor Feature Register 1 */
+#define ID_AA64DFR0_EL1	96	/* AArch64 Debug Feature Register 0 */
+#define ID_AA64DFR1_EL1	97	/* AArch64 Debug Feature Register 1 */
+#define ID_AA64ISAR0_EL1	98	/* AArch64 Instruction Set Attribute Register 0 */
+#define ID_AA64ISAR1_EL1	99	/* AArch64 Instruction Set Attribute Register 1 */
+#define ID_PFR0_EL1	100	/* AArch32 Processor Feature Register 0 */
+#define ID_PFR1_EL1	101	/* AArch32 Processor Feature Register 1 */
+#define ID_DFR0_EL1	102	/* AArch32 Debug Feature Register 0 */
+#define ID_ISAR0_EL1	103	/* AArch32 Instruction Set Attribute Register 0 */
+#define ID_ISAR1_EL1	104	/* AArch32 Instruction Set Attribute Register 1 */
+#define ID_ISAR2_EL1	105	/* AArch32 Instruction Set Attribute Register 2 */
+#define ID_ISAR3_EL1	106	/* AArch32 Instruction Set Attribute Register 3 */
+#define ID_ISAR4_EL1	107	/* AArch32 Instruction Set Attribute Register 4 */
+#define ID_ISAR5_EL1	108	/* AArch32 Instruction Set Attribute Register 5 */
+#define ID_MMFR0_EL1	109	/* AArch32 Memory Model Feature Register 0 */
+#define ID_MMFR1_EL1	110	/* AArch32 Memory Model Feature Register 1 */
+#define ID_MMFR2_EL1	111	/* AArch32 Memory Model Feature Register 2 */
+#define ID_MMFR3_EL1	112	/* AArch32 Memory Model Feature Register 3 */
 
 /* 32bit specific registers. Keep them at the end of the range */
-#define	DACR32_EL2	89	/* Domain Access Control Register */
-#define	IFSR32_EL2	90	/* Instruction Fault Status Register */
-#define	FPEXC32_EL2	91	/* Floating-Point Exception Control Register */
-#define	DBGVCR32_EL2	92	/* Debug Vector Catch Register */
-#define	TEECR32_EL1	93	/* ThumbEE Configuration Register */
-#define	TEEHBR32_EL1	94	/* ThumbEE Handler Base Register */
-#define	NR_SYS_REGS	95
+#define	DACR32_EL2	113	/* Domain Access Control Register */
+#define	IFSR32_EL2	114	/* Instruction Fault Status Register */
+#define	FPEXC32_EL2	115	/* Floating-Point Exception Control Register */
+#define	DBGVCR32_EL2	116	/* Debug Vector Catch Register */
+#define	TEECR32_EL1	117	/* ThumbEE Configuration Register */
+#define	TEEHBR32_EL1	118	/* ThumbEE Handler Base Register */
+#define	NR_SYS_REGS	119
 
 /* 32bit mapping */
 #define c0_MIDR		(MIDR_EL1 * 2)	/* Main ID Register */
diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index 2709db2..c780227 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -43,7 +43,7 @@
 #include <kvm/arm_vgic.h>
 #include <kvm/arm_arch_timer.h>
 
-#define KVM_VCPU_MAX_FEATURES 3
+#define KVM_VCPU_MAX_FEATURES 12
 
 int __attribute_const__ kvm_target_cpu(void);
 int kvm_reset_vcpu(struct kvm_vcpu *vcpu);
@@ -137,6 +137,8 @@ struct kvm_vcpu_arch {
 	/* Target CPU and feature flags */
 	int target;
 	DECLARE_BITMAP(features, KVM_VCPU_MAX_FEATURES);
+	u32 bpts;
+	u32 wpts;
 
 	/* Detect first run of a vcpu */
 	bool has_run_once;
diff --git a/arch/arm64/include/uapi/asm/kvm.h b/arch/arm64/include/uapi/asm/kvm.h
index d268320..94d1fc9 100644
--- a/arch/arm64/include/uapi/asm/kvm.h
+++ b/arch/arm64/include/uapi/asm/kvm.h
@@ -88,6 +88,13 @@ struct kvm_regs {
 #define KVM_ARM_VCPU_POWER_OFF		0 /* CPU is started in OFF state */
 #define KVM_ARM_VCPU_EL1_32BIT		1 /* CPU running a 32bit VM */
 #define KVM_ARM_VCPU_PSCI_0_2		2 /* CPU uses PSCI v0.2 */
+#define KVM_ARM_VCPU_NUM_BPTS		3 /* Number of breakpoints supported */
+#define KVM_ARM_VCPU_NUM_WPTS		7 /* Number of watchpoints supported */
+
+#define KVM_ARM_VCPU_BPTS_FEATURES_IDX	0
+#define KVM_ARM_VCPU_WPTS_FEATURES_IDX	0
+#define KVM_ARM_VCPU_BPTS_MASK		0x00000078
+#define KVM_ARM_VCPU_WPTS_MASK		0x00000780
 
 struct kvm_vcpu_init {
 	__u32 target;
diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index 7047292..273eecd 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -244,6 +244,330 @@ static void reset_mpidr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r)
 	vcpu_sys_reg(vcpu, MPIDR_EL1) = (1ULL << 31) | mpidr;
 }
 
+static bool trap_tid3(struct kvm_vcpu *vcpu,
+		const struct sys_reg_params *p,
+		const struct sys_reg_desc *r)
+{
+	if (p->is_write) {
+		vcpu_sys_reg(vcpu, r->reg) = *vcpu_reg(vcpu, p->Rt);
+	} else {
+		*vcpu_reg(vcpu, p->Rt) = vcpu_sys_reg(vcpu, r->reg);
+	}
+
+	return true;
+}
+
+static bool trap_pfr(struct kvm_vcpu *vcpu,
+		const struct sys_reg_params *p,
+		const struct sys_reg_desc *r)
+{
+	return trap_tid3(vcpu, p, r);
+}
+
+static void reset_pfr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r)
+{
+	u32 prf;
+	u32 idx;
+
+	switch (r->Op2) {
+	case 0:
+		asm volatile("mrs %0, ID_PFR0_EL1\n" : "=r" (prf));
+		idx = ID_PFR0_EL1;
+		break;
+	case 1:
+		asm volatile("mrs %0, ID_PFR1_EL1\n" : "=r" (prf));
+		idx = ID_PFR1_EL1;
+		break;
+
+	default:
+		BUG();
+	}
+
+	vcpu_sys_reg(vcpu, idx) = prf;
+}
+
+static bool trap_dfr(struct kvm_vcpu *vcpu,
+		const struct sys_reg_params *p,
+		const struct sys_reg_desc *r)
+{
+	return trap_tid3(vcpu, p, r);
+}
+
+static void reset_dfr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r)
+{
+	u32 dfr;
+
+	asm volatile("mrs %0, ID_DFR0_EL1\n" : "=r" (dfr));
+	vcpu_sys_reg(vcpu, ID_DFR0_EL1) = dfr;
+}
+
+static bool trap_mmfr(struct kvm_vcpu *vcpu,
+		const struct sys_reg_params *p,
+		const struct sys_reg_desc *r)
+{
+	return trap_tid3(vcpu, p, r);
+}
+
+static void reset_mmfr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r)
+{
+	u32 mmfr;
+	u32 idx;
+
+	switch (r->CRm) {
+	case 1:
+		switch (r->Op2) {
+		case 4:
+			asm volatile("mrs %0, ID_MMFR0_EL1\n" : "=r" (mmfr));
+			idx = ID_MMFR0_EL1;
+			break;
+
+		case 5:
+			asm volatile("mrs %0, ID_MMFR1_EL1\n" : "=r" (mmfr));
+			idx = ID_MMFR1_EL1;
+			break;
+
+		case 6:
+			asm volatile("mrs %0, ID_MMFR2_EL1\n" : "=r" (mmfr));
+			idx = ID_MMFR2_EL1;
+			break;
+
+		case 7:
+			asm volatile("mrs %0, ID_MMFR3_EL1\n" : "=r" (mmfr));
+			idx = ID_MMFR3_EL1;
+			break;
+
+		default:
+			BUG();
+		}
+		break;
+
+#if 0
+	case 2:
+		asm volatile("mrs %0, ID_MMFR4_EL1\n" : "=r" (mmfr));
+		idx = ID_MMFR4_EL1;
+		break;
+#endif
+
+	default:
+		BUG();
+	}
+	vcpu_sys_reg(vcpu, idx) = mmfr;
+}
+
+static bool trap_isar(struct kvm_vcpu *vcpu,
+		const struct sys_reg_params *p,
+		const struct sys_reg_desc *r)
+{
+	return trap_tid3(vcpu, p, r);
+}
+
+static void reset_isar(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r)
+{
+	u32 isar;
+	u32 idx;
+
+	switch (r->Op2) {
+	case 0:
+		asm volatile("mrs %0, ID_ISAR0_EL1\n" : "=r" (isar));
+		idx = ID_ISAR0_EL1;
+		break;
+
+	case 1:
+		asm volatile("mrs %0, ID_ISAR1_EL1\n" : "=r" (isar));
+		idx = ID_ISAR1_EL1;
+		break;
+
+	case 2:
+		asm volatile("mrs %0, ID_ISAR2_EL1\n" : "=r" (isar));
+		idx = ID_ISAR2_EL1;
+		break;
+
+	case 3:
+		asm volatile("mrs %0, ID_ISAR3_EL1\n" : "=r" (isar));
+		idx = ID_ISAR3_EL1;
+		break;
+
+	case 4:
+		asm volatile("mrs %0, ID_ISAR4_EL1\n" : "=r" (isar));
+		idx = ID_ISAR4_EL1;
+		break;
+
+	case 5:
+		asm volatile("mrs %0, ID_ISAR5_EL1\n" : "=r" (isar));
+		idx = ID_ISAR5_EL1;
+		break;
+
+	default:
+		BUG();
+	}
+	vcpu_sys_reg(vcpu, idx) = isar;
+}
+
+static bool trap_mvfr(struct kvm_vcpu *vcpu,
+		const struct sys_reg_params *p,
+		const struct sys_reg_desc *r)
+{
+	return trap_tid3(vcpu, p, r);
+}
+
+static void reset_mvfr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r)
+{
+	u32 mvfr;
+	u32 idx;
+
+	switch (r->Op2) {
+	case 0:
+		asm volatile("mrs %0, MVFR0_EL1\n" : "=r" (mvfr));
+		idx = MVFR0_EL1;
+		break;
+	case 1:
+		asm volatile("mrs %0, MVFR1_EL1\n" : "=r" (mvfr));
+		idx = MVFR1_EL1;
+		break;
+
+	case 2:
+		asm volatile("mrs %0, MVFR2_EL1\n" : "=r" (mvfr));
+		idx = MVFR2_EL1;
+		break;
+
+	default:
+		BUG();
+	}
+
+	vcpu_sys_reg(vcpu, idx) = mvfr;
+}
+
+static bool trap_aa64pfr(struct kvm_vcpu *vcpu,
+		const struct sys_reg_params *p,
+		const struct sys_reg_desc *r)
+{
+	return trap_tid3(vcpu, p, r);
+}
+
+static void reset_aa64pfr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r)
+{
+	u64 aa64pfr;
+	u32 idx;
+
+	switch (r->Op2) {
+	case 0:
+		asm volatile("mrs %0, ID_AA64PFR0_EL1\n" : "=r" (aa64pfr));
+		idx = ID_AA64PFR0_EL1;
+		break;
+	case 1:
+		asm volatile("mrs %0, ID_AA64PFR1_EL1\n" : "=r" (aa64pfr));
+		idx = ID_AA64PFR1_EL1;
+		break;
+
+	default:
+		BUG();
+	}
+
+	vcpu_sys_reg(vcpu, idx) = aa64pfr;
+}
+
+static bool trap_aa64dfr(struct kvm_vcpu *vcpu,
+		const struct sys_reg_params *p,
+		const struct sys_reg_desc *r)
+{
+	return trap_tid3(vcpu, p, r);
+}
+
+static void reset_aa64dfr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r)
+{
+	u64 aa64dfr;
+	u32 idx;
+	u32 bpts;
+	u32 wpts;
+
+	bpts = vcpu->arch.bpts;
+	if (bpts)
+		bpts--;
+
+	wpts = vcpu->arch.wpts;
+	if (wpts)
+		wpts--;
+
+	switch (r->Op2) {
+	case 0:
+		asm volatile("mrs %0, ID_AA64DFR0_EL1\n" : "=r" (aa64dfr));
+		idx = ID_AA64DFR0_EL1;
+		if (vcpu->arch.bpts)
+			aa64dfr = (aa64dfr & ~(0xf << 12)) | (bpts << 12);
+		if (vcpu->arch.wpts)
+			aa64dfr = (aa64dfr & ~(0xf << 20)) | (wpts << 20);
+		break;
+	case 1:
+		asm volatile("mrs %0, ID_AA64DFR1_EL1\n" : "=r" (aa64dfr));
+		idx = ID_AA64DFR1_EL1;
+		break;
+
+	default:
+		BUG();
+	}
+
+	vcpu_sys_reg(vcpu, idx) = aa64dfr;
+}
+
+static bool trap_aa64isar(struct kvm_vcpu *vcpu,
+		const struct sys_reg_params *p,
+		const struct sys_reg_desc *r)
+{
+	return trap_tid3(vcpu, p, r);
+}
+
+static void reset_aa64isar(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r)
+{
+	u64 aa64isar;
+	u32 idx;
+
+	switch (r->Op2) {
+	case 0:
+		asm volatile("mrs %0, ID_AA64ISAR0_EL1\n" : "=r" (aa64isar));
+		idx = ID_AA64ISAR0_EL1;
+		break;
+
+	case 1:
+		asm volatile("mrs %0, ID_AA64ISAR1_EL1\n" : "=r" (aa64isar));
+		idx = ID_AA64ISAR1_EL1;
+		break;
+
+	default:
+		BUG();
+	}
+	vcpu_sys_reg(vcpu, idx) = aa64isar;
+}
+
+static bool trap_aa64mmfr(struct kvm_vcpu *vcpu,
+		const struct sys_reg_params *p,
+		const struct sys_reg_desc *r)
+{
+	return trap_tid3(vcpu, p, r);
+}
+
+static void reset_aa64mmfr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r)
+{
+	u64 aa64mmfr;
+	u32 idx;
+
+	switch (r->Op2) {
+	case 0:
+		asm volatile("mrs %0, ID_AA64MMFR0_EL1\n" : "=r" (aa64mmfr));
+		idx = ID_AA64MMFR0_EL1;
+		break;
+	case 1:
+		asm volatile("mrs %0, ID_AA64MMFR1_EL1\n" : "=r" (aa64mmfr));
+		idx = ID_AA64MMFR1_EL1;
+		break;
+
+	default:
+		BUG();
+	}
+
+	vcpu_sys_reg(vcpu, idx) = aa64mmfr;
+}
+
+
 /* Silly macro to expand the DBG{BCR,BVR,WVR,WCR}n_EL1 registers in one go */
 #define DBG_BCR_BVR_WCR_WVR_EL1(n)					\
 	/* DBGBVRn_EL1 */						\
@@ -364,6 +688,86 @@ static const struct sys_reg_desc sys_reg_descs[] = {
 	/* MPIDR_EL1 */
 	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0000), Op2(0b101),
 	  NULL, reset_mpidr, MPIDR_EL1 },
+
+	/* ID_PFR0_EL1 */
+	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0001), Op2(0b000),
+	  trap_pfr, reset_pfr, ID_PFR0_EL1 },
+	/* ID_PFR1_EL1 */
+	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0001), Op2(0b001),
+	  trap_pfr, reset_pfr, ID_PFR1_EL1 },
+	/* ID_DFR0_EL1 */
+	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0001), Op2(0b010),
+	  trap_dfr, reset_dfr, ID_DFR0_EL1 },
+	/* ID_MMFR0_EL1 */
+	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0001), Op2(0b100),
+	  trap_mmfr, reset_mmfr, ID_MMFR0_EL1 },
+	/* ID_MMFR1_EL1 */
+	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0001), Op2(0b101),
+	  trap_mmfr, reset_mmfr, ID_MMFR1_EL1 },
+	/* ID_MMFR2_EL1 */
+	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0001), Op2(0b110),
+	  trap_mmfr, reset_mmfr, ID_MMFR2_EL1 },
+	/* ID_MMFR3_EL1 */
+	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0001), Op2(0b111),
+	  trap_mmfr, reset_mmfr, ID_MMFR3_EL1 },
+
+	/* ID_ISAR0_EL1 */
+	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0010), Op2(0b000),
+	  trap_isar, reset_isar, ID_ISAR0_EL1 },
+	/* ID_ISAR1_EL1 */
+	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0010), Op2(0b001),
+	  trap_isar, reset_isar, ID_ISAR1_EL1 },
+	/* ID_ISAR2_EL1 */
+	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0010), Op2(0b010),
+	  trap_isar, reset_isar, ID_ISAR2_EL1 },
+	/* ID_ISAR3_EL1 */
+	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0010), Op2(0b011),
+	  trap_isar, reset_isar, ID_ISAR3_EL1 },
+	/* ID_ISAR4_EL1 */
+	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0010), Op2(0b100),
+	  trap_isar, reset_isar, ID_ISAR4_EL1 },
+	/* ID_ISAR5_EL1 */
+	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0010), Op2(0b101),
+	  trap_isar, reset_isar, ID_ISAR5_EL1 },
+
+	/* MVFR0_EL1 */
+	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0011), Op2(0b000),
+	  trap_mvfr, reset_mvfr, MVFR0_EL1 },
+	/* MVFR1_EL1 */
+	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0011), Op2(0b001),
+	  trap_mvfr, reset_mvfr, MVFR1_EL1 },
+	/* MVFR2_EL1 */
+	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0011), Op2(0b010),
+	  trap_mvfr, reset_mvfr, MVFR2_EL1 },
+
+	/* ID_AA64PFR0_EL1 */
+	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0100), Op2(0b000),
+	  trap_aa64pfr, reset_aa64pfr, ID_AA64PFR0_EL1 },
+	/* ID_AA64PFR1_EL1 */
+	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0100), Op2(0b001),
+	  trap_aa64pfr, reset_aa64pfr, ID_AA64PFR1_EL1 },
+
+	/* ID_AA64DFR0_EL1 */
+	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0101), Op2(0b000),
+	  trap_aa64dfr, reset_aa64dfr, ID_AA64DFR0_EL1 },
+	/* ID_AA64DFR1_EL1 */
+	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0101), Op2(0b001),
+	  trap_aa64dfr, reset_aa64dfr, ID_AA64DFR1_EL1 },
+
+	/* ID_AA64ISAR0_EL1 */
+	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0110), Op2(0b000),
+	  trap_aa64isar, reset_aa64isar, ID_AA64ISAR0_EL1 },
+	/* ID_AA64ISAR1_EL1 */
+	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0110), Op2(0b001),
+	  trap_aa64isar, reset_aa64isar, ID_AA64ISAR1_EL1 },
+
+	/* ID_AA64MMFR0_EL1 */
+	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0111), Op2(0b000),
+	  trap_aa64mmfr, reset_aa64mmfr, ID_AA64MMFR0_EL1 },
+	/* ID_AA64MMFR1_EL1 */
+	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0111), Op2(0b001),
+	  trap_aa64mmfr, reset_aa64mmfr, ID_AA64MMFR1_EL1 },
+
 	/* SCTLR_EL1 */
 	{ Op0(0b11), Op1(0b000), CRn(0b0001), CRm(0b0000), Op2(0b000),
 	  access_vm_reg, reset_val, SCTLR_EL1, 0x00C50078 },
@@ -1104,20 +1508,7 @@ static const struct sys_reg_desc *index_to_sys_reg_desc(struct kvm_vcpu *vcpu,
 
 FUNCTION_INVARIANT(ctr_el0)
 FUNCTION_INVARIANT(revidr_el1)
-FUNCTION_INVARIANT(id_pfr0_el1)
-FUNCTION_INVARIANT(id_pfr1_el1)
-FUNCTION_INVARIANT(id_dfr0_el1)
 FUNCTION_INVARIANT(id_afr0_el1)
-FUNCTION_INVARIANT(id_mmfr0_el1)
-FUNCTION_INVARIANT(id_mmfr1_el1)
-FUNCTION_INVARIANT(id_mmfr2_el1)
-FUNCTION_INVARIANT(id_mmfr3_el1)
-FUNCTION_INVARIANT(id_isar0_el1)
-FUNCTION_INVARIANT(id_isar1_el1)
-FUNCTION_INVARIANT(id_isar2_el1)
-FUNCTION_INVARIANT(id_isar3_el1)
-FUNCTION_INVARIANT(id_isar4_el1)
-FUNCTION_INVARIANT(id_isar5_el1)
 FUNCTION_INVARIANT(clidr_el1)
 FUNCTION_INVARIANT(aidr_el1)
 
@@ -1125,34 +1516,8 @@ FUNCTION_INVARIANT(aidr_el1)
 static struct sys_reg_desc invariant_sys_regs[] = {
 	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0000), Op2(0b110),
 	  NULL, get_revidr_el1 },
-	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0001), Op2(0b000),
-	  NULL, get_id_pfr0_el1 },
-	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0001), Op2(0b001),
-	  NULL, get_id_pfr1_el1 },
-	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0001), Op2(0b010),
-	  NULL, get_id_dfr0_el1 },
 	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0001), Op2(0b011),
 	  NULL, get_id_afr0_el1 },
-	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0001), Op2(0b100),
-	  NULL, get_id_mmfr0_el1 },
-	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0001), Op2(0b101),
-	  NULL, get_id_mmfr1_el1 },
-	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0001), Op2(0b110),
-	  NULL, get_id_mmfr2_el1 },
-	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0001), Op2(0b111),
-	  NULL, get_id_mmfr3_el1 },
-	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0010), Op2(0b000),
-	  NULL, get_id_isar0_el1 },
-	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0010), Op2(0b001),
-	  NULL, get_id_isar1_el1 },
-	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0010), Op2(0b010),
-	  NULL, get_id_isar2_el1 },
-	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0010), Op2(0b011),
-	  NULL, get_id_isar3_el1 },
-	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0010), Op2(0b100),
-	  NULL, get_id_isar4_el1 },
-	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0010), Op2(0b101),
-	  NULL, get_id_isar5_el1 },
 	{ Op0(0b11), Op1(0b001), CRn(0b0000), CRm(0b0000), Op2(0b001),
 	  NULL, get_clidr_el1 },
 	{ Op0(0b11), Op1(0b001), CRn(0b0000), CRm(0b0000), Op2(0b111),
-- 
1.7.9.5

^ permalink raw reply related	[flat|nested] 24+ messages in thread

* [Qemu-devel] [PATCH RFC 3/4] arm64: kvm: Setup MIDR as per target vcpu
@ 2015-09-09  8:38   ` Tushar Jagad
  0 siblings, 0 replies; 24+ messages in thread
From: Tushar Jagad @ 2015-09-09  8:38 UTC (permalink / raw)
  To: linux-arm-kernel, kvmarm
  Cc: peter.maydell, marc.zyngier, patches, qemu-devel, tushar.jagad,
	christoffer.dall

For cross-CPU targets, the guest kernel should see the MIDR value
corresponding to the target specified.

This patch adds support for constructing the MIDR register value
based on the target vcpu.

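As a rough illustration of how the value is composed (a standalone
sketch, not the kernel code -- the kernel's MIDR_CPU_PART additionally
sets the architecture field to 0xf, and variant/revision are left
zero here):

	#include <stdint.h>
	#include <stdio.h>

	/* MIDR_EL1 layout: Implementer [31:24], Variant [23:20],
	 * Architecture [19:16], PartNum [15:4], Revision [3:0]. */
	static uint32_t make_midr(uint32_t implementer, uint32_t partnum)
	{
		return (implementer << 24) | (partnum << 4);
	}

	int main(void)
	{
		/* ARM Ltd. (0x41), Cortex-A57 (0xd07) */
		printf("MIDR_EL1 = 0x%08x\n", make_midr(0x41, 0xd07));
		return 0;
	}
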
Signed-off-by: Tushar Jagad <tushar.jagad@linaro.org>
---
 arch/arm64/kvm/sys_regs.c |   43 +++++++++++++++++++++++++++++++++++++++----
 1 file changed, 39 insertions(+), 4 deletions(-)

diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index 273eecd..cb12783 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -172,10 +172,45 @@ static bool trap_dbgauthstatus_el1(struct kvm_vcpu *vcpu,
 
 static void reset_midr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r)
 {
-	/*
-	 * We only export the host's MPIDR_EL1 for now.
-	 */
-	vcpu_sys_reg(vcpu, MIDR_EL1) = read_cpuid_id();
+	u32 target;
+	unsigned long implementor;
+	unsigned long part_num;
+	u32 midr_el1;
+
+	target = vcpu->arch.target;
+	switch (target) {
+	case KVM_ARM_TARGET_AEM_V8:
+		part_num = ARM_CPU_PART_AEM_V8;
+		implementor = ARM_CPU_IMP_ARM;
+		break;
+	case KVM_ARM_TARGET_FOUNDATION_V8:
+		part_num = ARM_CPU_PART_FOUNDATION;
+		implementor = ARM_CPU_IMP_ARM;
+		break;
+	case KVM_ARM_TARGET_CORTEX_A53:
+		part_num = ARM_CPU_PART_CORTEX_A53;
+		implementor = ARM_CPU_IMP_ARM;
+		break;
+	case KVM_ARM_TARGET_CORTEX_A57:
+		part_num = ARM_CPU_PART_CORTEX_A57;
+		implementor = ARM_CPU_IMP_ARM;
+		break;
+	case KVM_ARM_TARGET_XGENE_POTENZA:
+		part_num = APM_CPU_PART_POTENZA;
+		implementor = ARM_CPU_IMP_APM;
+		break;
+
+	default:
+		implementor = 0;
+		part_num = 0;
+	}
+
+	if (implementor && part_num)
+		midr_el1 = MIDR_CPU_PART(implementor, part_num);
+	else
+		midr_el1 = read_cpuid_id();
+
+	vcpu_sys_reg(vcpu, MIDR_EL1) = midr_el1;
 }
 
 /*
-- 
1.7.9.5

^ permalink raw reply related	[flat|nested] 24+ messages in thread

* [Qemu-devel] [PATCH RFC 4/4] arm/arm64: kvm: Disable comparison of cpu and vcpu target
@ 2015-09-09  8:38   ` Tushar Jagad
  0 siblings, 0 replies; 24+ messages in thread
From: Tushar Jagad @ 2015-09-09  8:38 UTC (permalink / raw)
  To: linux-arm-kernel, kvmarm
  Cc: peter.maydell, marc.zyngier, patches, qemu-devel, tushar.jagad,
	christoffer.dall

This patch drops the comparison of the physical cpu with the requested
vcpu target so that cross-cpu guests can be created.

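In userspace terms, an init sequence like the following (a hypothetical
sketch; error handling is omitted and the vcpu fd is assumed to come
from KVM_CREATE_VCPU) no longer fails with -EINVAL when the host is
not a Cortex-A57:

	#include <string.h>
	#include <sys/ioctl.h>
	#include <linux/kvm.h>

	static int init_cross_vcpu(int vcpu_fd)
	{
		struct kvm_vcpu_init init;

		memset(&init, 0, sizeof(init));
		/* Request a target that differs from the host CPU. */
		init.target = KVM_ARM_TARGET_CORTEX_A57;
		return ioctl(vcpu_fd, KVM_ARM_VCPU_INIT, &init);
	}
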
Signed-off-by: Tushar Jagad <tushar.jagad@linaro.org>
---
 arch/arm/kvm/arm.c |    5 -----
 1 file changed, 5 deletions(-)

diff --git a/arch/arm/kvm/arm.c b/arch/arm/kvm/arm.c
index 8907d37..b3214b2 100644
--- a/arch/arm/kvm/arm.c
+++ b/arch/arm/kvm/arm.c
@@ -698,11 +698,6 @@ static int kvm_vcpu_set_target(struct kvm_vcpu *vcpu,
 	unsigned int i;
 	u64 aa64dfr;
 
-	int phys_target = kvm_target_cpu();
-
-	if (init->target != phys_target)
-		return -EINVAL;
-
 	/*
 	 * Secondary and subsequent calls to KVM_ARM_VCPU_INIT must
 	 * use the same target.
-- 
1.7.9.5

^ permalink raw reply related	[flat|nested] 24+ messages in thread

* Re: [Qemu-devel] [PATCH RFC 2/4] arm64: kvm: enable trapping of read access to regs in TID3 group
@ 2015-09-15  4:23     ` Shannon Zhao
  0 siblings, 0 replies; 24+ messages in thread
From: Shannon Zhao @ 2015-09-15  4:23 UTC (permalink / raw)
  To: Tushar Jagad, linux-arm-kernel, kvmarm
  Cc: peter.maydell, marc.zyngier, patches, qemu-devel,
	Huangpeng (Peter),
	christoffer.dall



On 2015/9/9 16:38, Tushar Jagad wrote:
> This patch modifies the HCR_GUEST_FLAGS to enable trapping of
> non-secure reads of registers in the HCR_EL2.TID3 group to EL2.
> 
> We emulate the accesses to capability registers which list the number of
> breakpoints, watchpoints, etc. These values are provided by the user when
> starting the VM. The emulated values are constructed at runtime by the
> trap handler.
> 
> Signed-off-by: Tushar Jagad <tushar.jagad@linaro.org>
> ---
>  Documentation/virtual/kvm/api.txt |    8 +
>  arch/arm/kvm/arm.c                |   50 ++++-
>  arch/arm64/include/asm/kvm_arm.h  |    2 +-
>  arch/arm64/include/asm/kvm_asm.h  |   38 +++-
>  arch/arm64/include/asm/kvm_host.h |    4 +-
>  arch/arm64/include/uapi/asm/kvm.h |    7 +
>  arch/arm64/kvm/sys_regs.c         |  443 +++++++++++++++++++++++++++++++++----
>  7 files changed, 503 insertions(+), 49 deletions(-)
> 
> diff --git a/Documentation/virtual/kvm/api.txt b/Documentation/virtual/kvm/api.txt
> index a7926a9..b06c104 100644
> --- a/Documentation/virtual/kvm/api.txt
> +++ b/Documentation/virtual/kvm/api.txt
> @@ -2561,6 +2561,14 @@ Possible features:
>  	  Depends on KVM_CAP_ARM_EL1_32BIT (arm64 only).
>  	- KVM_ARM_VCPU_PSCI_0_2: Emulate PSCI v0.2 for the CPU.
>  	  Depends on KVM_CAP_ARM_PSCI_0_2.
> +	- KVM_ARM_VCPU_NUM_BPTS: Number of supported h/w breakpoints
> +	  This is a 4-bit value which defines the number of hardware
> +	  breakpoints supported on the guest. If this is not specified or
> +	  set to zero then the guest sees the value as is from the host.
> +	- KVM_ARM_VCPU_NUM_WPTS: Number of supported h/w watchpoints
> +	  This is a 4-bit value which defines the number of hardware
> +	  watchpoints supported on the guest. If this is not specified or
> +	  set to zero then the guest sees the value as is from the host.
>  
>  
>  4.83 KVM_ARM_PREFERRED_TARGET
> diff --git a/arch/arm/kvm/arm.c b/arch/arm/kvm/arm.c
> index bc738d2..8907d37 100644
> --- a/arch/arm/kvm/arm.c
> +++ b/arch/arm/kvm/arm.c
> @@ -696,6 +696,8 @@ static int kvm_vcpu_set_target(struct kvm_vcpu *vcpu,
>  			       const struct kvm_vcpu_init *init)
>  {
>  	unsigned int i;
> +	u64 aa64dfr;
> +
>  	int phys_target = kvm_target_cpu();
>  
>  	if (init->target != phys_target)
> @@ -708,6 +710,8 @@ static int kvm_vcpu_set_target(struct kvm_vcpu *vcpu,
>  	if (vcpu->arch.target != -1 && vcpu->arch.target != init->target)
>  		return -EINVAL;
>  
> +	asm volatile("mrs %0, ID_AA64DFR0_EL1\n" : "=r" (aa64dfr));
> +
>  	/* -ENOENT for unknown features, -EINVAL for invalid combinations. */
>  	for (i = 0; i < sizeof(init->features) * 8; i++) {
>  		bool set = (init->features[i / 32] & (1 << (i % 32)));
> @@ -715,6 +719,50 @@ static int kvm_vcpu_set_target(struct kvm_vcpu *vcpu,
>  		if (set && i >= KVM_VCPU_MAX_FEATURES)
>  			return -ENOENT;
>  
> +		if (i == KVM_ARM_VCPU_NUM_BPTS) {
> +			int h_bpts;
> +			int g_bpts;
> +
> +			h_bpts = ((aa64dfr >> 12) & 0xf) + 1;
> +			g_bpts = (init->features[KVM_ARM_VCPU_BPTS_FEATURES_IDX] &
> +					KVM_ARM_VCPU_BPTS_MASK) >> KVM_ARM_VCPU_NUM_BPTS;
> +
> +			/*
> +			 * We ensure that the host can support the requested
> +			 * number of hardware breakpoints.
> +			 */
> +			if (g_bpts > h_bpts)
> +				return -EINVAL;
> +
This may not work. Assuming the number of source host hardware
breakpoints is 15 and userspace sets g_bpts to 15 as well, it's ok to
create the VM on the source host. But if the number of destination host
hardware breakpoints is less than 15 (e.g. 8), this will return -EINVAL,
the VM cannot be created on the destination host, and migration fails.

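For reference, my reading of how userspace would pack the count (just a
sketch based on the masks this patch adds, not tested):

	#include <linux/kvm.h>	/* struct kvm_vcpu_init */

	/* Bits [6:3] of features[0] carry bpts, bits [10:7] wpts. */
	static void set_debug_features(struct kvm_vcpu_init *init,
				       unsigned int bpts, unsigned int wpts)
	{
		init->features[KVM_ARM_VCPU_BPTS_FEATURES_IDX] |=
			(bpts << KVM_ARM_VCPU_NUM_BPTS) & KVM_ARM_VCPU_BPTS_MASK;
		init->features[KVM_ARM_VCPU_WPTS_FEATURES_IDX] |=
			(wpts << KVM_ARM_VCPU_NUM_WPTS) & KVM_ARM_VCPU_WPTS_MASK;
	}
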
(P.S. I'm considering the guest PMU for the cross-cpu case, so I have
had a look at this patch.)

> +			vcpu->arch.bpts = g_bpts;
> +
> +			i += 3;
> +
> +			continue;
> +		}
> +
> +		if (i == KVM_ARM_VCPU_NUM_WPTS) {
> +			int h_wpts;
> +			int g_wpts;
> +
> +			h_wpts = ((aa64dfr >> 20) & 0xf) + 1;
> +			g_wpts = (init->features[KVM_ARM_VCPU_WPTS_FEATURES_IDX] &
> +					KVM_ARM_VCPU_WPTS_MASK) >> KVM_ARM_VCPU_NUM_WPTS;
> +
> +			/*
> +			 * We ensure that the host can support the requested
> +			 * number of hardware watchpoints.
> +			 */
> +			if (g_wpts > h_wpts)
> +				return -EINVAL;
> +
> +			vcpu->arch.wpts = g_wpts;
> +
> +			i += 3;
> +
> +			continue;
> +		}
> +
>  		/*
>  		 * Secondary and subsequent calls to KVM_ARM_VCPU_INIT must
>  		 * use the same feature set.
> @@ -727,7 +775,7 @@ static int kvm_vcpu_set_target(struct kvm_vcpu *vcpu,
>  			set_bit(i, vcpu->arch.features);
>  	}
>  
> -	vcpu->arch.target = phys_target;
> +	vcpu->arch.target = init->target;
>  
>  	/* Now we know what it is, we can reset it. */
>  	return kvm_reset_vcpu(vcpu);
> diff --git a/arch/arm64/include/asm/kvm_arm.h b/arch/arm64/include/asm/kvm_arm.h
> index ac6fafb..3b67051 100644
> --- a/arch/arm64/include/asm/kvm_arm.h
> +++ b/arch/arm64/include/asm/kvm_arm.h
> @@ -78,7 +78,7 @@
>   */
>  #define HCR_GUEST_FLAGS (HCR_TSC | HCR_TSW | HCR_TWE | HCR_TWI | HCR_VM | \
>  			 HCR_TVM | HCR_BSU_IS | HCR_FB | HCR_TAC | \
> -			 HCR_AMO | HCR_SWIO | HCR_TIDCP | HCR_RW)
> +			 HCR_AMO | HCR_SWIO | HCR_TIDCP | HCR_RW | HCR_TID3)
>  #define HCR_VIRT_EXCP_MASK (HCR_VA | HCR_VI | HCR_VF)
>  #define HCR_INT_OVERRIDE   (HCR_FMO | HCR_IMO)
>  
> diff --git a/arch/arm64/include/asm/kvm_asm.h b/arch/arm64/include/asm/kvm_asm.h
> index c1d5bde..087d104 100644
> --- a/arch/arm64/include/asm/kvm_asm.h
> +++ b/arch/arm64/include/asm/kvm_asm.h
> @@ -56,15 +56,39 @@
>  #define DBGWVR15_EL1	86
>  #define MDCCINT_EL1	87	/* Monitor Debug Comms Channel Interrupt Enable Reg */
>  #define MIDR_EL1	88	/* Main ID Register */
> +#define ID_AA64MMFR0_EL1	89	/* AArch64 Memory Model Feature Register 0 */
> +#define ID_AA64MMFR1_EL1	90	/* AArch64 Memory Model Feature Register 1 */
> +#define MVFR0_EL1	91	/* AArch32 Media and VFP Feature Register 0 */
> +#define MVFR1_EL1	92	/* AArch32 Media and VFP Feature Register 1 */
> +#define MVFR2_EL1	93	/* AArch32 Media and VFP Feature Register 2 */
> +#define ID_AA64PFR0_EL1	94	/* AArch64 Processor Feature Register 0 */
> +#define ID_AA64PFR1_EL1	95	/* AArch64 Processor Feature Register 1 */
> +#define ID_AA64DFR0_EL1	96	/* AArch64 Debug Feature Register 0 */
> +#define ID_AA64DFR1_EL1	97	/* AArch64 Debug Feature Register 1 */
> +#define ID_AA64ISAR0_EL1	98	/* AArch64 Instruction Set Attribute Register 0 */
> +#define ID_AA64ISAR1_EL1	99	/* AArch64 Instruction Set Attribute Register 1 */
> +#define ID_PFR0_EL1	100	/* AArch32 Processor Feature Register 0 */
> +#define ID_PFR1_EL1	101	/* AArch32 Processor Feature Register 1 */
> +#define ID_DFR0_EL1	102	/* AArch32 Debug Feature Register 0 */
> +#define ID_ISAR0_EL1	103	/* AArch32 Instruction Set Attribute Register 0 */
> +#define ID_ISAR1_EL1	104	/* AArch32 Instruction Set Attribute Register 1 */
> +#define ID_ISAR2_EL1	105	/* AArch32 Instruction Set Attribute Register 2 */
> +#define ID_ISAR3_EL1	106	/* AArch32 Instruction Set Attribute Register 3 */
> +#define ID_ISAR4_EL1	107	/* AArch32 Instruction Set Attribute Register 4 */
> +#define ID_ISAR5_EL1	108	/* AArch32 Instruction Set Attribute Register 5 */
> +#define ID_MMFR0_EL1	109	/* AArch32 Memory Model Feature Register 0 */
> +#define ID_MMFR1_EL1	110	/* AArch32 Memory Model Feature Register 1 */
> +#define ID_MMFR2_EL1	111	/* AArch32 Memory Model Feature Register 2 */
> +#define ID_MMFR3_EL1	112	/* AArch32 Memory Model Feature Register 3 */
>  
>  /* 32bit specific registers. Keep them at the end of the range */
> -#define	DACR32_EL2	89	/* Domain Access Control Register */
> -#define	IFSR32_EL2	90	/* Instruction Fault Status Register */
> -#define	FPEXC32_EL2	91	/* Floating-Point Exception Control Register */
> -#define	DBGVCR32_EL2	92	/* Debug Vector Catch Register */
> -#define	TEECR32_EL1	93	/* ThumbEE Configuration Register */
> -#define	TEEHBR32_EL1	94	/* ThumbEE Handler Base Register */
> -#define	NR_SYS_REGS	95
> +#define	DACR32_EL2	113	/* Domain Access Control Register */
> +#define	IFSR32_EL2	114	/* Instruction Fault Status Register */
> +#define	FPEXC32_EL2	115	/* Floating-Point Exception Control Register */
> +#define	DBGVCR32_EL2	116	/* Debug Vector Catch Register */
> +#define	TEECR32_EL1	117	/* ThumbEE Configuration Register */
> +#define	TEEHBR32_EL1	118	/* ThumbEE Handler Base Register */
> +#define	NR_SYS_REGS	119
>  
>  /* 32bit mapping */
>  #define c0_MIDR		(MIDR_EL1 * 2)	/* Main ID Register */
> diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
> index 2709db2..c780227 100644
> --- a/arch/arm64/include/asm/kvm_host.h
> +++ b/arch/arm64/include/asm/kvm_host.h
> @@ -43,7 +43,7 @@
>  #include <kvm/arm_vgic.h>
>  #include <kvm/arm_arch_timer.h>
>  
> -#define KVM_VCPU_MAX_FEATURES 3
> +#define KVM_VCPU_MAX_FEATURES 12
>  
>  int __attribute_const__ kvm_target_cpu(void);
>  int kvm_reset_vcpu(struct kvm_vcpu *vcpu);
> @@ -137,6 +137,8 @@ struct kvm_vcpu_arch {
>  	/* Target CPU and feature flags */
>  	int target;
>  	DECLARE_BITMAP(features, KVM_VCPU_MAX_FEATURES);
> +	u32 bpts;
> +	u32 wpts;
>  
>  	/* Detect first run of a vcpu */
>  	bool has_run_once;
> diff --git a/arch/arm64/include/uapi/asm/kvm.h b/arch/arm64/include/uapi/asm/kvm.h
> index d268320..94d1fc9 100644
> --- a/arch/arm64/include/uapi/asm/kvm.h
> +++ b/arch/arm64/include/uapi/asm/kvm.h
> @@ -88,6 +88,13 @@ struct kvm_regs {
>  #define KVM_ARM_VCPU_POWER_OFF		0 /* CPU is started in OFF state */
>  #define KVM_ARM_VCPU_EL1_32BIT		1 /* CPU running a 32bit VM */
>  #define KVM_ARM_VCPU_PSCI_0_2		2 /* CPU uses PSCI v0.2 */
> +#define KVM_ARM_VCPU_NUM_BPTS		3 /* Number of breakpoints supported */
> +#define KVM_ARM_VCPU_NUM_WPTS		7 /* Number of watchpoints supported */
> +
> +#define KVM_ARM_VCPU_BPTS_FEATURES_IDX	0
> +#define KVM_ARM_VCPU_WPTS_FEATURES_IDX	0
> +#define KVM_ARM_VCPU_BPTS_MASK		0x00000078
> +#define KVM_ARM_VCPU_WPTS_MASK		0x00000780
>  
>  struct kvm_vcpu_init {
>  	__u32 target;
> diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
> index 7047292..273eecd 100644
> --- a/arch/arm64/kvm/sys_regs.c
> +++ b/arch/arm64/kvm/sys_regs.c
> @@ -244,6 +244,330 @@ static void reset_mpidr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r)
>  	vcpu_sys_reg(vcpu, MPIDR_EL1) = (1ULL << 31) | mpidr;
>  }
>  
> +static bool trap_tid3(struct kvm_vcpu *vcpu,
> +		const struct sys_reg_params *p,
> +		const struct sys_reg_desc *r)
> +{
> +	if (p->is_write) {
> +		vcpu_sys_reg(vcpu, r->reg) = *vcpu_reg(vcpu, p->Rt);
> +	} else {
> +		*vcpu_reg(vcpu, p->Rt) = vcpu_sys_reg(vcpu, r->reg);
> +	}
> +
> +	return true;
> +}
> +
> +static bool trap_pfr(struct kvm_vcpu *vcpu,
> +		const struct sys_reg_params *p,
> +		const struct sys_reg_desc *r)
> +{
> +	return trap_tid3(vcpu, p, r);
> +}
> +
> +static void reset_pfr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r)
> +{
> +	u64 prf;
> +	u32 idx;
> +
> +	switch (r->Op2) {
> +	case 0:
> +		asm volatile("mrs %0, ID_PFR0_EL1\n" : "=r" (prf));
> +		idx = ID_PFR0_EL1;
> +		break;
> +	case 1:
> +		asm volatile("mrs %0, ID_PFR1_EL1\n" : "=r" (prf));
> +		idx = ID_PFR1_EL1;
> +		break;
> +
> +	default:
> +		BUG();
> +	}
> +
> +	vcpu_sys_reg(vcpu, idx) = prf;
> +}
> +
> +static bool trap_dfr(struct kvm_vcpu *vcpu,
> +		const struct sys_reg_params *p,
> +		const struct sys_reg_desc *r)
> +{
> +	return trap_tid3(vcpu, p, r);
> +}
> +
> +static void reset_dfr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r)
> +{
> +	u64 dfr;
> +
> +	asm volatile("mrs %0, ID_DFR0_EL1\n" : "=r" (dfr));
> +	vcpu_sys_reg(vcpu, ID_DFR0_EL1) = dfr;
> +}
> +
> +static bool trap_mmfr(struct kvm_vcpu *vcpu,
> +		const struct sys_reg_params *p,
> +		const struct sys_reg_desc *r)
> +{
> +	return trap_tid3(vcpu, p, r);
> +}
> +
> +static void reset_mmfr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r)
> +{
> +	u64 mmfr;
> +	u32 idx;
> +
> +	switch (r->CRm) {
> +	case 1:
> +		switch (r->Op2) {
> +		case 4:
> +			asm volatile("mrs %0, ID_MMFR0_EL1\n" : "=r" (mmfr));
> +			idx = ID_MMFR0_EL1;
> +			break;
> +
> +		case 5:
> +			asm volatile("mrs %0, ID_MMFR1_EL1\n" : "=r" (mmfr));
> +			idx = ID_MMFR1_EL1;
> +			break;
> +
> +		case 6:
> +			asm volatile("mrs %0, ID_MMFR2_EL1\n" : "=r" (mmfr));
> +			idx = ID_MMFR2_EL1;
> +			break;
> +
> +		case 7:
> +			asm volatile("mrs %0, ID_MMFR3_EL1\n" : "=r" (mmfr));
> +			idx = ID_MMFR3_EL1;
> +			break;
> +
> +		default:
> +			BUG();
> +		}
> +		break;
> +
> +#if 0
> +	case 2:
> +		asm volatile("mrs %0, ID_MMFR4_EL1\n" : "=r" (mmfr));
> +		idx = ID_MMFR4_EL1;
> +		break;
> +#endif
> +
> +	default:
> +		BUG();
> +	}
> +	vcpu_sys_reg(vcpu, idx) = mmfr;
> +}
> +
> +static bool trap_isar(struct kvm_vcpu *vcpu,
> +		const struct sys_reg_params *p,
> +		const struct sys_reg_desc *r)
> +{
> +	return trap_tid3(vcpu, p, r);
> +}
> +
> +static void reset_isar(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r)
> +{
> +	u64 isar;
> +	u32 idx;
> +
> +	switch (r->Op2) {
> +	case 0:
> +		asm volatile("mrs %0, ID_ISAR0_EL1\n" : "=r" (isar));
> +		idx = ID_ISAR0_EL1;
> +		break;
> +
> +	case 1:
> +		asm volatile("mrs %0, ID_ISAR1_EL1\n" : "=r" (isar));
> +		idx = ID_ISAR1_EL1;
> +		break;
> +
> +	case 2:
> +		asm volatile("mrs %0, ID_ISAR2_EL1\n" : "=r" (isar));
> +		idx = ID_ISAR2_EL1;
> +		break;
> +
> +	case 3:
> +		asm volatile("mrs %0, ID_ISAR3_EL1\n" : "=r" (isar));
> +		idx = ID_ISAR3_EL1;
> +		break;
> +
> +	case 4:
> +		asm volatile("mrs %0, ID_ISAR4_EL1\n" : "=r" (isar));
> +		idx = ID_ISAR4_EL1;
> +		break;
> +
> +	case 5:
> +		asm volatile("mrs %0, ID_ISAR5_EL1\n" : "=r" (isar));
> +		idx = ID_ISAR5_EL1;
> +		break;
> +
> +	default:
> +		BUG();
> +	}
> +	vcpu_sys_reg(vcpu, idx) = isar;
> +}
> +
> +static bool trap_mvfr(struct kvm_vcpu *vcpu,
> +		const struct sys_reg_params *p,
> +		const struct sys_reg_desc *r)
> +{
> +	return trap_tid3(vcpu, p, r);
> +}
> +
> +static void reset_mvfr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r)
> +{
> +	u64 mvfr;
> +	u32 idx;
> +
> +	switch (r->Op2) {
> +	case 0:
> +		asm volatile("mrs %0, MVFR0_EL1\n" : "=r" (mvfr));
> +		idx = MVFR0_EL1;
> +		break;
> +	case 1:
> +		asm volatile("mrs %0, MVFR1_EL1\n" : "=r" (mvfr));
> +		idx = MVFR1_EL1;
> +		break;
> +
> +	case 2:
> +		asm volatile("mrs %0, MVFR2_EL1\n" : "=r" (mvfr));
> +		idx = MVFR2_EL1;
> +		break;
> +
> +	default:
> +		BUG();
> +	}
> +
> +	vcpu_sys_reg(vcpu, idx) = mvfr;
> +}
> +
> +static bool trap_aa64pfr(struct kvm_vcpu *vcpu,
> +		const struct sys_reg_params *p,
> +		const struct sys_reg_desc *r)
> +{
> +	return trap_tid3(vcpu, p, r);
> +}
> +
> +static void reset_aa64pfr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r)
> +{
> +	u64 aa64pfr;
> +	u32 idx;
> +
> +	switch (r->Op2) {
> +	case 0:
> +		asm volatile("mrs %0, ID_AA64PFR0_EL1\n" : "=r" (aa64pfr));
> +		idx = ID_AA64PFR0_EL1;
> +		break;
> +	case 1:
> +		asm volatile("mrs %0, ID_AA64PFR1_EL1\n" : "=r" (aa64pfr));
> +		idx = ID_AA64PFR1_EL1;
> +		break;
> +
> +	default:
> +		BUG();
> +	}
> +
> +	vcpu_sys_reg(vcpu, idx) = aa64pfr;
> +}
> +
> +static bool trap_aa64dfr(struct kvm_vcpu *vcpu,
> +		const struct sys_reg_params *p,
> +		const struct sys_reg_desc *r)
> +{
> +	return trap_tid3(vcpu, p, r);
> +}
> +
> +static void reset_aa64dfr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r)
> +{
> +	u64 aa64dfr;
> +	u32 idx;
> +	u32 bpts;
> +	u32 wpts;
> +
> +	bpts = vcpu->arch.bpts;
> +	if (bpts)
> +		bpts--;
> +
> +	wpts = vcpu->arch.wpts;
> +	if (wpts)
> +		wpts--;
> +
> +	switch (r->Op2) {
> +	case 0:
> +		asm volatile("mrs %0, ID_AA64DFR0_EL1\n" : "=r" (aa64dfr));
> +		idx = ID_AA64DFR0_EL1;
> +		if (vcpu->arch.bpts)
> +			aa64dfr = (aa64dfr & ~(0xf << 12)) | (bpts << 12);
> +		if (vcpu->arch.wpts)
> +			aa64dfr = (aa64dfr & ~(0xf << 20)) | (wpts << 20);
> +		break;
> +	case 1:
> +		asm volatile("mrs %0, ID_AA64DFR1_EL1\n" : "=r" (aa64dfr));
> +		idx = ID_AA64DFR1_EL1;
> +		break;
> +
> +	default:
> +		BUG();
> +	}
> +
> +	vcpu_sys_reg(vcpu, idx) = aa64dfr;
> +}
> +
> +static bool trap_aa64isar(struct kvm_vcpu *vcpu,
> +		const struct sys_reg_params *p,
> +		const struct sys_reg_desc *r)
> +{
> +	return trap_tid3(vcpu, p, r);
> +}
> +
> +static void reset_aa64isar(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r)
> +{
> +	u64 aa64isar;
> +	u32 idx;
> +
> +	switch (r->Op2) {
> +	case 0:
> +		asm volatile("mrs %0, ID_AA64ISAR0_EL1\n" : "=r" (aa64isar));
> +		idx = ID_AA64ISAR0_EL1;
> +		break;
> +
> +	case 1:
> +		asm volatile("mrs %0, ID_AA64ISAR1_EL1\n" : "=r" (aa64isar));
> +		idx = ID_AA64ISAR1_EL1;
> +		break;
> +
> +	default:
> +		BUG();
> +	}
> +	vcpu_sys_reg(vcpu, idx) = aa64isar;
> +}
> +
> +static bool trap_aa64mmfr(struct kvm_vcpu *vcpu,
> +		const struct sys_reg_params *p,
> +		const struct sys_reg_desc *r)
> +{
> +	return trap_tid3(vcpu, p, r);
> +}
> +
> +static void reset_aa64mmfr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r)
> +{
> +	u64 aa64mmfr;
> +	u32 idx;
> +
> +	switch (r->Op2) {
> +	case 0:
> +		asm volatile("mrs %0, ID_AA64MMFR0_EL1\n" : "=r" (aa64mmfr));
> +		idx = ID_AA64MMFR0_EL1;
> +		break;
> +	case 1:
> +		asm volatile("mrs %0, ID_AA64MMFR1_EL1\n" : "=r" (aa64mmfr));
> +		idx = ID_AA64MMFR1_EL1;
> +		break;
> +
> +	default:
> +		BUG();
> +	}
> +
> +	vcpu_sys_reg(vcpu, idx) = aa64mmfr;
> +}
> +
> +
>  /* Silly macro to expand the DBG{BCR,BVR,WVR,WCR}n_EL1 registers in one go */
>  #define DBG_BCR_BVR_WCR_WVR_EL1(n)					\
>  	/* DBGBVRn_EL1 */						\
> @@ -364,6 +688,86 @@ static const struct sys_reg_desc sys_reg_descs[] = {
>  	/* MPIDR_EL1 */
>  	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0000), Op2(0b101),
>  	  NULL, reset_mpidr, MPIDR_EL1 },
> +
> +	/* ID_PFR0_EL1 */
> +	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0001), Op2(0b000),
> +	  trap_pfr, reset_pfr, ID_PFR0_EL1 },
> +	/* ID_PFR1_EL1 */
> +	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0001), Op2(0b001),
> +	  trap_pfr, reset_pfr, ID_PFR1_EL1 },
> +	/* ID_DFR0_EL1 */
> +	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0001), Op2(0b010),
> +	  trap_dfr, reset_dfr, ID_DFR0_EL1 },
> +	/* ID_MMFR0_EL1 */
> +	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0001), Op2(0b100),
> +	  trap_mmfr, reset_mmfr, ID_MMFR0_EL1 },
> +	/* ID_MMFR1_EL1 */
> +	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0001), Op2(0b101),
> +	  trap_mmfr, reset_mmfr, ID_MMFR1_EL1 },
> +	/* ID_MMFR2_EL1 */
> +	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0001), Op2(0b110),
> +	  trap_mmfr, reset_mmfr, ID_MMFR2_EL1 },
> +	/* ID_MMFR3_EL1 */
> +	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0001), Op2(0b111),
> +	  trap_mmfr, reset_mmfr, ID_MMFR3_EL1 },
> +
> +	/* ID_ISAR0_EL1 */
> +	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0010), Op2(0b000),
> +	  trap_isar, reset_isar, ID_ISAR0_EL1 },
> +	/* ID_ISAR1_EL1 */
> +	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0010), Op2(0b001),
> +	  trap_isar, reset_isar, ID_ISAR1_EL1 },
> +	/* ID_ISAR2_EL1 */
> +	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0010), Op2(0b010),
> +	  trap_isar, reset_isar, ID_ISAR2_EL1 },
> +	/* ID_ISAR3_EL1 */
> +	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0010), Op2(0b011),
> +	  trap_isar, reset_isar, ID_ISAR3_EL1 },
> +	/* ID_ISAR4_EL1 */
> +	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0010), Op2(0b100),
> +	  trap_isar, reset_isar, ID_ISAR4_EL1 },
> +	/* ID_ISAR5_EL1 */
> +	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0010), Op2(0b101),
> +	  trap_isar, reset_isar, ID_ISAR5_EL1 },
> +
> +	/* MVFR0_EL1 */
> +	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0011), Op2(0b000),
> +	  trap_mvfr, reset_mvfr, MVFR0_EL1 },
> +	/* MVFR1_EL1 */
> +	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0011), Op2(0b001),
> +	  trap_mvfr, reset_mvfr, MVFR1_EL1 },
> +	/* MVFR2_EL1 */
> +	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0011), Op2(0b010),
> +	  trap_mvfr, reset_mvfr, MVFR2_EL1 },
> +
> +	/* ID_AA64PFR0_EL1 */
> +	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0100), Op2(0b000),
> +	  trap_aa64pfr, reset_aa64pfr, ID_AA64PFR0_EL1 },
> +	/* ID_AA64PFR1_EL1 */
> +	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0100), Op2(0b001),
> +	  trap_aa64pfr, reset_aa64pfr, ID_AA64PFR1_EL1 },
> +
> +	/* ID_AA64DFR0_EL1 */
> +	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0101), Op2(0b000),
> +	  trap_aa64dfr, reset_aa64dfr, ID_AA64DFR0_EL1 },
> +	/* ID_AA64DFR1_EL1 */
> +	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0101), Op2(0b001),
> +	  trap_aa64dfr, reset_aa64dfr, ID_AA64DFR1_EL1 },
> +
> +	/* ID_AA64ISAR0_EL1 */
> +	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0110), Op2(0b000),
> +	  trap_aa64isar, reset_aa64isar, ID_AA64ISAR0_EL1 },
> +	/* ID_AA64ISAR1_EL1 */
> +	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0110), Op2(0b001),
> +	  trap_aa64isar, reset_aa64isar, ID_AA64ISAR1_EL1 },
> +
> +	/* ID_AA64MMFR0_EL1 */
> +	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0111), Op2(0b000),
> +	  trap_aa64mmfr, reset_aa64mmfr, ID_AA64MMFR0_EL1 },
> +	/* ID_AA64MMFR1_EL1 */
> +	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0111), Op2(0b001),
> +	  trap_aa64mmfr, reset_aa64mmfr, ID_AA64MMFR1_EL1 },
> +
>  	/* SCTLR_EL1 */
>  	{ Op0(0b11), Op1(0b000), CRn(0b0001), CRm(0b0000), Op2(0b000),
>  	  access_vm_reg, reset_val, SCTLR_EL1, 0x00C50078 },
> @@ -1104,20 +1508,7 @@ static const struct sys_reg_desc *index_to_sys_reg_desc(struct kvm_vcpu *vcpu,
>  
>  FUNCTION_INVARIANT(ctr_el0)
>  FUNCTION_INVARIANT(revidr_el1)
> -FUNCTION_INVARIANT(id_pfr0_el1)
> -FUNCTION_INVARIANT(id_pfr1_el1)
> -FUNCTION_INVARIANT(id_dfr0_el1)
>  FUNCTION_INVARIANT(id_afr0_el1)
> -FUNCTION_INVARIANT(id_mmfr0_el1)
> -FUNCTION_INVARIANT(id_mmfr1_el1)
> -FUNCTION_INVARIANT(id_mmfr2_el1)
> -FUNCTION_INVARIANT(id_mmfr3_el1)
> -FUNCTION_INVARIANT(id_isar0_el1)
> -FUNCTION_INVARIANT(id_isar1_el1)
> -FUNCTION_INVARIANT(id_isar2_el1)
> -FUNCTION_INVARIANT(id_isar3_el1)
> -FUNCTION_INVARIANT(id_isar4_el1)
> -FUNCTION_INVARIANT(id_isar5_el1)
>  FUNCTION_INVARIANT(clidr_el1)
>  FUNCTION_INVARIANT(aidr_el1)
>  
> @@ -1125,34 +1516,8 @@ FUNCTION_INVARIANT(aidr_el1)
>  static struct sys_reg_desc invariant_sys_regs[] = {
>  	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0000), Op2(0b110),
>  	  NULL, get_revidr_el1 },
> -	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0001), Op2(0b000),
> -	  NULL, get_id_pfr0_el1 },
> -	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0001), Op2(0b001),
> -	  NULL, get_id_pfr1_el1 },
> -	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0001), Op2(0b010),
> -	  NULL, get_id_dfr0_el1 },
>  	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0001), Op2(0b011),
>  	  NULL, get_id_afr0_el1 },
> -	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0001), Op2(0b100),
> -	  NULL, get_id_mmfr0_el1 },
> -	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0001), Op2(0b101),
> -	  NULL, get_id_mmfr1_el1 },
> -	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0001), Op2(0b110),
> -	  NULL, get_id_mmfr2_el1 },
> -	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0001), Op2(0b111),
> -	  NULL, get_id_mmfr3_el1 },
> -	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0010), Op2(0b000),
> -	  NULL, get_id_isar0_el1 },
> -	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0010), Op2(0b001),
> -	  NULL, get_id_isar1_el1 },
> -	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0010), Op2(0b010),
> -	  NULL, get_id_isar2_el1 },
> -	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0010), Op2(0b011),
> -	  NULL, get_id_isar3_el1 },
> -	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0010), Op2(0b100),
> -	  NULL, get_id_isar4_el1 },
> -	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0010), Op2(0b101),
> -	  NULL, get_id_isar5_el1 },
>  	{ Op0(0b11), Op1(0b001), CRn(0b0000), CRm(0b0000), Op2(0b001),
>  	  NULL, get_clidr_el1 },
>  	{ Op0(0b11), Op1(0b001), CRn(0b0000), CRm(0b0000), Op2(0b111),
> 

-- 
Shannon

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [PATCH RFC 2/4] arm64: kvm: enable trapping of read access to regs in TID3 group
@ 2015-09-15  4:23     ` Shannon Zhao
  0 siblings, 0 replies; 24+ messages in thread
From: Shannon Zhao @ 2015-09-15  4:23 UTC (permalink / raw)
  To: Tushar Jagad, linux-arm-kernel, kvmarm
  Cc: peter.maydell, marc.zyngier, patches, qemu-devel,
	Huangpeng (Peter),
	christoffer.dall



On 2015/9/9 16:38, Tushar Jagad wrote:
> This patch modifies HCR_GUEST_FLAGS to enable trapping of non-secure
> reads of the registers in the HCR_EL2.TID3 group to EL2.
> 
> We emulate the accesses to capability registers which list the number of
> breakpoints, watchpoints, etc. These values are provided by the user when
> starting the VM. The emulated values are constructed at runtime from the
> trap handler.
> 
> Signed-off-by: Tushar Jagad <tushar.jagad@linaro.org>
> ---
>  Documentation/virtual/kvm/api.txt |    8 +
>  arch/arm/kvm/arm.c                |   50 ++++-
>  arch/arm64/include/asm/kvm_arm.h  |    2 +-
>  arch/arm64/include/asm/kvm_asm.h  |   38 +++-
>  arch/arm64/include/asm/kvm_host.h |    4 +-
>  arch/arm64/include/uapi/asm/kvm.h |    7 +
>  arch/arm64/kvm/sys_regs.c         |  443 +++++++++++++++++++++++++++++++++----
>  7 files changed, 503 insertions(+), 49 deletions(-)
> 
> diff --git a/Documentation/virtual/kvm/api.txt b/Documentation/virtual/kvm/api.txt
> index a7926a9..b06c104 100644
> --- a/Documentation/virtual/kvm/api.txt
> +++ b/Documentation/virtual/kvm/api.txt
> @@ -2561,6 +2561,14 @@ Possible features:
>  	  Depends on KVM_CAP_ARM_EL1_32BIT (arm64 only).
>  	- KVM_ARM_VCPU_PSCI_0_2: Emulate PSCI v0.2 for the CPU.
>  	  Depends on KVM_CAP_ARM_PSCI_0_2.
> +	- KVM_ARM_VCPU_NUM_BPTS: Number of supported h/w breakpoints
> +	  This is a 4-bit value which defines the number of hardware
> +	  breakpoints supported on the guest. If this is not specified
> +	  or set to zero, the guest sees the value as-is from the host.
> +	- KVM_ARM_VCPU_NUM_WPTS: Number of supported h/w watchpoints
> +	  This is a 4-bit value which defines the number of hardware
> +	  watchpoints supported on the guest. If this is not specified
> +	  or set to zero, the guest sees the value as-is from the host.
>  
>  
>  4.83 KVM_ARM_PREFERRED_TARGET
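
[Editorial illustration, not part of the patch: a minimal sketch of how
a VMM might encode these proposed feature bits when initialising a
vcpu, using the masks added below in uapi/asm/kvm.h. The vcpu_fd setup
is assumed and not shown, and the target value is just an example.]

	#include <stdio.h>
	#include <sys/ioctl.h>
	#include <linux/kvm.h>

	struct kvm_vcpu_init init = {
		.target = KVM_ARM_TARGET_CORTEX_A57,	/* example target */
	};

	/* Request 2 h/w breakpoints and 4 h/w watchpoints for the guest. */
	init.features[KVM_ARM_VCPU_BPTS_FEATURES_IDX] |=
		(2 << KVM_ARM_VCPU_NUM_BPTS) & KVM_ARM_VCPU_BPTS_MASK;
	init.features[KVM_ARM_VCPU_WPTS_FEATURES_IDX] |=
		(4 << KVM_ARM_VCPU_NUM_WPTS) & KVM_ARM_VCPU_WPTS_MASK;

	if (ioctl(vcpu_fd, KVM_ARM_VCPU_INIT, &init) < 0)
		perror("KVM_ARM_VCPU_INIT");
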
> diff --git a/arch/arm/kvm/arm.c b/arch/arm/kvm/arm.c
> index bc738d2..8907d37 100644
> --- a/arch/arm/kvm/arm.c
> +++ b/arch/arm/kvm/arm.c
> @@ -696,6 +696,8 @@ static int kvm_vcpu_set_target(struct kvm_vcpu *vcpu,
>  			       const struct kvm_vcpu_init *init)
>  {
>  	unsigned int i;
> +	u64 aa64dfr;
> +
>  	int phys_target = kvm_target_cpu();
>  
>  	if (init->target != phys_target)
> @@ -708,6 +710,8 @@ static int kvm_vcpu_set_target(struct kvm_vcpu *vcpu,
>  	if (vcpu->arch.target != -1 && vcpu->arch.target != init->target)
>  		return -EINVAL;
>  
> +	asm volatile("mrs %0, ID_AA64DFR0_EL1\n" : "=r" (aa64dfr));
> +
>  	/* -ENOENT for unknown features, -EINVAL for invalid combinations. */
>  	for (i = 0; i < sizeof(init->features) * 8; i++) {
>  		bool set = (init->features[i / 32] & (1 << (i % 32)));
> @@ -715,6 +719,50 @@ static int kvm_vcpu_set_target(struct kvm_vcpu *vcpu,
>  		if (set && i >= KVM_VCPU_MAX_FEATURES)
>  			return -ENOENT;
>  
> +		if (i == KVM_ARM_VCPU_NUM_BPTS) {
> +			int h_bpts;
> +			int g_bpts;
> +
> +			h_bpts = ((aa64dfr >> 12) & 0xf) + 1;
> +			g_bpts = (init->features[KVM_ARM_VCPU_BPTS_FEATURES_IDX] &
> +					KVM_ARM_VCPU_BPTS_MASK) >> KVM_ARM_VCPU_NUM_BPTS;
> +
> +			/*
> +			 * We ensure that the host can support the requested
> +			 * number of hardware breakpoints.
> +			 */
> +			if (g_bpts > h_bpts)
> +				return -EINVAL;
> +
This may not work. Assuming the source host has 15 hardware
breakpoints and userspace sets g_bpts to 15 as well, it's OK to create
the VM on the source host. But if the destination host has fewer
hardware breakpoints (e.g. 8), this check returns -EINVAL, the VM
cannot be created on the destination host, and the migration fails.

(P.S. I'm considering guest PMU support for the cross-cpu case, so I
have had a look at this patch)

> +			vcpu->arch.bpts = g_bpts;
> +
> +			i += 3;
> +
> +			continue;
> +		}
> +
> +		if (i == KVM_ARM_VCPU_NUM_WPTS) {
> +			int h_wpts;
> +			int g_wpts;
> +
> +			h_wpts = ((aa64dfr >> 20) & 0xf) + 1;
> +			g_wpts = (init->features[KVM_ARM_VCPU_WPTS_FEATURES_IDX] &
> +					KVM_ARM_VCPU_WPTS_MASK) >> KVM_ARM_VCPU_NUM_WPTS;
> +
> +			/*
> +			 * We ensure that the host can support the requested
> +			 * number of hardware watchpoints.
> +			 */
> +			if (g_wpts > h_wpts)
> +				return -EINVAL;
> +
> +			vcpu->arch.wpts = g_wpts;
> +
> +			i += 3;
> +
> +			continue;
> +		}
> +
>  		/*
>  		 * Secondary and subsequent calls to KVM_ARM_VCPU_INIT must
>  		 * use the same feature set.
> @@ -727,7 +775,7 @@ static int kvm_vcpu_set_target(struct kvm_vcpu *vcpu,
>  			set_bit(i, vcpu->arch.features);
>  	}
>  
> -	vcpu->arch.target = phys_target;
> +	vcpu->arch.target = init->target;
>  
>  	/* Now we know what it is, we can reset it. */
>  	return kvm_reset_vcpu(vcpu);
> diff --git a/arch/arm64/include/asm/kvm_arm.h b/arch/arm64/include/asm/kvm_arm.h
> index ac6fafb..3b67051 100644
> --- a/arch/arm64/include/asm/kvm_arm.h
> +++ b/arch/arm64/include/asm/kvm_arm.h
> @@ -78,7 +78,7 @@
>   */
>  #define HCR_GUEST_FLAGS (HCR_TSC | HCR_TSW | HCR_TWE | HCR_TWI | HCR_VM | \
>  			 HCR_TVM | HCR_BSU_IS | HCR_FB | HCR_TAC | \
> -			 HCR_AMO | HCR_SWIO | HCR_TIDCP | HCR_RW)
> +			 HCR_AMO | HCR_SWIO | HCR_TIDCP | HCR_RW | HCR_TID3)
>  #define HCR_VIRT_EXCP_MASK (HCR_VA | HCR_VI | HCR_VF)
>  #define HCR_INT_OVERRIDE   (HCR_FMO | HCR_IMO)
>  
> diff --git a/arch/arm64/include/asm/kvm_asm.h b/arch/arm64/include/asm/kvm_asm.h
> index c1d5bde..087d104 100644
> --- a/arch/arm64/include/asm/kvm_asm.h
> +++ b/arch/arm64/include/asm/kvm_asm.h
> @@ -56,15 +56,39 @@
>  #define DBGWVR15_EL1	86
>  #define MDCCINT_EL1	87	/* Monitor Debug Comms Channel Interrupt Enable Reg */
>  #define MIDR_EL1	88	/* Main ID Register */
> +#define ID_AA64MMFR0_EL1	89	/* AArch64 Memory Model Feature Register 0 */
> +#define ID_AA64MMFR1_EL1	90	/* AArch64 Memory Model Feature Register 1 */
> +#define MVFR0_EL1	91	/* AArch32 Media and VFP Feature Register 0 */
> +#define MVFR1_EL1	92	/* AArch32 Media and VFP Feature Register 1 */
> +#define MVFR2_EL1	93	/* AArch32 Media and VFP Feature Register 2 */
> +#define ID_AA64PFR0_EL1	94	/* AArch64 Processor Feature Register 0 */
> +#define ID_AA64PFR1_EL1	95	/* AArch64 Processor Feature Register 1 */
> +#define ID_AA64DFR0_EL1	96	/* AArch64 Debug Feature Register 0 */
> +#define ID_AA64DFR1_EL1	97	/* AArch64 Debug Feature Register 1 */
> +#define ID_AA64ISAR0_EL1	98	/* AArch64 Instruction Set Attribute Register 0 */
> +#define ID_AA64ISAR1_EL1	99	/* AArch64 Instruction Set Attribute Register 1 */
> +#define ID_PFR0_EL1	100	/* AArch32 Processor Feature Register 0 */
> +#define ID_PFR1_EL1	101	/* AArch32 Processor Feature Register 1 */
> +#define ID_DFR0_EL1	102	/* AArch32 Debug Feature Register 0 */
> +#define ID_ISAR0_EL1	103	/* AArch32 Instruction Set Attribute Register 0 */
> +#define ID_ISAR1_EL1	104	/* AArch32 Instruction Set Attribute Register 1 */
> +#define ID_ISAR2_EL1	105	/* AArch32 Instruction Set Attribute Register 2 */
> +#define ID_ISAR3_EL1	106	/* AArch32 Instruction Set Attribute Register 3 */
> +#define ID_ISAR4_EL1	107	/* AArch32 Instruction Set Attribute Register 4 */
> +#define ID_ISAR5_EL1	108	/* AArch32 Instruction Set Attribute Register 5 */
> +#define ID_MMFR0_EL1	109	/* AArch32 Memory Model Feature Register 0 */
> +#define ID_MMFR1_EL1	110	/* AArch32 Memory Model Feature Register 1 */
> +#define ID_MMFR2_EL1	111	/* AArch32 Memory Model Feature Register 2 */
> +#define ID_MMFR3_EL1	112	/* AArch32 Memory Model Feature Register 3 */
>  
>  /* 32bit specific registers. Keep them at the end of the range */
> -#define	DACR32_EL2	89	/* Domain Access Control Register */
> -#define	IFSR32_EL2	90	/* Instruction Fault Status Register */
> -#define	FPEXC32_EL2	91	/* Floating-Point Exception Control Register */
> -#define	DBGVCR32_EL2	92	/* Debug Vector Catch Register */
> -#define	TEECR32_EL1	93	/* ThumbEE Configuration Register */
> -#define	TEEHBR32_EL1	94	/* ThumbEE Handler Base Register */
> -#define	NR_SYS_REGS	95
> +#define	DACR32_EL2	113	/* Domain Access Control Register */
> +#define	IFSR32_EL2	114	/* Instruction Fault Status Register */
> +#define	FPEXC32_EL2	115	/* Floating-Point Exception Control Register */
> +#define	DBGVCR32_EL2	116	/* Debug Vector Catch Register */
> +#define	TEECR32_EL1	117	/* ThumbEE Configuration Register */
> +#define	TEEHBR32_EL1	118	/* ThumbEE Handler Base Register */
> +#define	NR_SYS_REGS	119
>  
>  /* 32bit mapping */
>  #define c0_MIDR		(MIDR_EL1 * 2)	/* Main ID Register */
> diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
> index 2709db2..c780227 100644
> --- a/arch/arm64/include/asm/kvm_host.h
> +++ b/arch/arm64/include/asm/kvm_host.h
> @@ -43,7 +43,7 @@
>  #include <kvm/arm_vgic.h>
>  #include <kvm/arm_arch_timer.h>
>  
> -#define KVM_VCPU_MAX_FEATURES 3
> +#define KVM_VCPU_MAX_FEATURES 12
>  
>  int __attribute_const__ kvm_target_cpu(void);
>  int kvm_reset_vcpu(struct kvm_vcpu *vcpu);
> @@ -137,6 +137,8 @@ struct kvm_vcpu_arch {
>  	/* Target CPU and feature flags */
>  	int target;
>  	DECLARE_BITMAP(features, KVM_VCPU_MAX_FEATURES);
> +	u32 bpts;
> +	u32 wpts;
>  
>  	/* Detect first run of a vcpu */
>  	bool has_run_once;
> diff --git a/arch/arm64/include/uapi/asm/kvm.h b/arch/arm64/include/uapi/asm/kvm.h
> index d268320..94d1fc9 100644
> --- a/arch/arm64/include/uapi/asm/kvm.h
> +++ b/arch/arm64/include/uapi/asm/kvm.h
> @@ -88,6 +88,13 @@ struct kvm_regs {
>  #define KVM_ARM_VCPU_POWER_OFF		0 /* CPU is started in OFF state */
>  #define KVM_ARM_VCPU_EL1_32BIT		1 /* CPU running a 32bit VM */
>  #define KVM_ARM_VCPU_PSCI_0_2		2 /* CPU uses PSCI v0.2 */
> +#define KVM_ARM_VCPU_NUM_BPTS		3 /* Number of breakpoints supported */
> +#define KVM_ARM_VCPU_NUM_WPTS		7 /* Number of watchpoints supported */
> +
> +#define KVM_ARM_VCPU_BPTS_FEATURES_IDX	0
> +#define KVM_ARM_VCPU_WPTS_FEATURES_IDX	0
> +#define KVM_ARM_VCPU_BPTS_MASK		0x00000078
> +#define KVM_ARM_VCPU_WPTS_MASK		0x00000780
>  
>  struct kvm_vcpu_init {
>  	__u32 target;
> diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
> index 7047292..273eecd 100644
> --- a/arch/arm64/kvm/sys_regs.c
> +++ b/arch/arm64/kvm/sys_regs.c
> @@ -244,6 +244,330 @@ static void reset_mpidr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r)
>  	vcpu_sys_reg(vcpu, MPIDR_EL1) = (1ULL << 31) | mpidr;
>  }
>  
> +static bool trap_tid3(struct kvm_vcpu *vcpu,
> +		const struct sys_reg_params *p,
> +		const struct sys_reg_desc *r)
> +{
> +	if (p->is_write) {
> +		vcpu_sys_reg(vcpu, r->reg) = *vcpu_reg(vcpu, p->Rt);
> +	} else {
> +		*vcpu_reg(vcpu, p->Rt) = vcpu_sys_reg(vcpu, r->reg);
> +	}
> +
> +	return true;
> +}
> +
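
[Editorial note: with HCR_EL2.TID3 set in HCR_GUEST_FLAGS, an ID-group
read from the guest no longer hits the hardware register; it traps to
EL2 and trap_tid3() above satisfies it from the vcpu's shadow copy. A
sketch of such a guest-side (EL1) read, in the same inline-asm style
the patch itself uses:]

	u64 val;

	/* Traps to EL2 under TID3; the value returned is whatever
	 * the reset handler (or userspace) put in the vcpu's shadow
	 * register, not the host's hardware value. */
	asm volatile("mrs %0, ID_AA64DFR0_EL1\n" : "=r" (val));
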
> +static bool trap_pfr(struct kvm_vcpu *vcpu,
> +		const struct sys_reg_params *p,
> +		const struct sys_reg_desc *r)
> +{
> +	return trap_tid3(vcpu, p, r);
> +}
> +
> +static void reset_pfr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r)
> +{
> +	u32 pfr;
> +	u32 idx;
> +
> +	switch (r->Op2) {
> +	case 0:
> +		asm volatile("mrs %0, ID_PFR0_EL1\n" : "=r" (pfr));
> +		idx = ID_PFR0_EL1;
> +		break;
> +	case 1:
> +		asm volatile("mrs %0, ID_PFR1_EL1\n" : "=r" (pfr));
> +		idx = ID_PFR1_EL1;
> +		break;
> +
> +	default:
> +		BUG();
> +	}
> +
> +	vcpu_sys_reg(vcpu, idx) = pfr;
> +}
> +
> +static bool trap_dfr(struct kvm_vcpu *vcpu,
> +		const struct sys_reg_params *p,
> +		const struct sys_reg_desc *r)
> +{
> +	return trap_tid3(vcpu, p, r);
> +}
> +
> +static void reset_dfr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r)
> +{
> +	u32 dfr;
> +
> +	asm volatile("mrs %0, ID_DFR0_EL1\n" : "=r" (dfr));
> +	vcpu_sys_reg(vcpu, ID_DFR0_EL1) = dfr;
> +}
> +
> +static bool trap_mmfr(struct kvm_vcpu *vcpu,
> +		const struct sys_reg_params *p,
> +		const struct sys_reg_desc *r)
> +{
> +	return trap_tid3(vcpu, p, r);
> +}
> +
> +static void reset_mmfr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r)
> +{
> +	u32 mmfr;
> +	u32 idx;
> +
> +	switch (r->CRm) {
> +	case 1:
> +		switch (r->Op2) {
> +		case 4:
> +			asm volatile("mrs %0, ID_MMFR0_EL1\n" : "=r" (mmfr));
> +			idx = ID_MMFR0_EL1;
> +			break;
> +
> +		case 5:
> +			asm volatile("mrs %0, ID_MMFR1_EL1\n" : "=r" (mmfr));
> +			idx = ID_MMFR1_EL1;
> +			break;
> +
> +		case 6:
> +			asm volatile("mrs %0, ID_MMFR2_EL1\n" : "=r" (mmfr));
> +			idx = ID_MMFR2_EL1;
> +			break;
> +
> +		case 7:
> +			asm volatile("mrs %0, ID_MMFR3_EL1\n" : "=r" (mmfr));
> +			idx = ID_MMFR3_EL1;
> +			break;
> +
> +		default:
> +			BUG();
> +		}
> +		break;
> +
> +#if 0
> +	case 2:
> +		asm volatile("mrs %0, ID_MMFR4_EL1\n" : "=r" (mmfr));
> +		idx = ID_MMFR4_EL1;
> +		break;
> +#endif
> +
> +	default:
> +		BUG();
> +	}
> +	vcpu_sys_reg(vcpu, idx) = mmfr;
> +}
> +
> +static bool trap_isar(struct kvm_vcpu *vcpu,
> +		const struct sys_reg_params *p,
> +		const struct sys_reg_desc *r)
> +{
> +	return trap_tid3(vcpu, p, r);
> +}
> +
> +static void reset_isar(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r)
> +{
> +	u32 isar;
> +	u32 idx;
> +
> +	switch (r->Op2) {
> +	case 0:
> +		asm volatile("mrs %0, ID_ISAR0_EL1\n" : "=r" (isar));
> +		idx = ID_ISAR0_EL1;
> +		break;
> +
> +	case 1:
> +		asm volatile("mrs %0, ID_ISAR1_EL1\n" : "=r" (isar));
> +		idx = ID_ISAR1_EL1;
> +		break;
> +
> +	case 2:
> +		asm volatile("mrs %0, ID_ISAR2_EL1\n" : "=r" (isar));
> +		idx = ID_ISAR2_EL1;
> +		break;
> +
> +	case 3:
> +		asm volatile("mrs %0, ID_ISAR3_EL1\n" : "=r" (isar));
> +		idx = ID_ISAR3_EL1;
> +		break;
> +
> +	case 4:
> +		asm volatile("mrs %0, ID_ISAR4_EL1\n" : "=r" (isar));
> +		idx = ID_ISAR4_EL1;
> +		break;
> +
> +	case 5:
> +		asm volatile("mrs %0, ID_ISAR5_EL1\n" : "=r" (isar));
> +		idx = ID_ISAR5_EL1;
> +		break;
> +
> +	default:
> +		BUG();
> +	}
> +	vcpu_sys_reg(vcpu, idx) = isar;
> +}
> +
> +static bool trap_mvfr(struct kvm_vcpu *vcpu,
> +		const struct sys_reg_params *p,
> +		const struct sys_reg_desc *r)
> +{
> +	return trap_tid3(vcpu, p, r);
> +}
> +
> +static void reset_mvfr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r)
> +{
> +	u32 mvfr;
> +	u32 idx;
> +
> +	switch (r->Op2) {
> +	case 0:
> +		asm volatile("mrs %0, MVFR0_EL1\n" : "=r" (mvfr));
> +		idx = MVFR0_EL1;
> +		break;
> +	case 1:
> +		asm volatile("mrs %0, MVFR1_EL1\n" : "=r" (mvfr));
> +		idx = MVFR1_EL1;
> +		break;
> +
> +	case 2:
> +		asm volatile("mrs %0, MVFR2_EL1\n" : "=r" (mvfr));
> +		idx = MVFR2_EL1;
> +		break;
> +
> +	default:
> +		BUG();
> +	}
> +
> +	vcpu_sys_reg(vcpu, idx) = mvfr;
> +}
> +
> +static bool trap_aa64pfr(struct kvm_vcpu *vcpu,
> +		const struct sys_reg_params *p,
> +		const struct sys_reg_desc *r)
> +{
> +	return trap_tid3(vcpu, p, r);
> +}
> +
> +static void reset_aa64pfr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r)
> +{
> +	u64 aa64pfr;
> +	u32 idx;
> +
> +	switch (r->Op2) {
> +	case 0:
> +		asm volatile("mrs %0, ID_AA64PFR0_EL1\n" : "=r" (aa64pfr));
> +		idx = ID_AA64PFR0_EL1;
> +		break;
> +	case 1:
> +		asm volatile("mrs %0, ID_AA64PFR1_EL1\n" : "=r" (aa64pfr));
> +		idx = ID_AA64PFR1_EL1;
> +		break;
> +
> +	default:
> +		BUG();
> +	}
> +
> +	vcpu_sys_reg(vcpu, idx) = aa64pfr;
> +}
> +
> +static bool trap_aa64dfr(struct kvm_vcpu *vcpu,
> +		const struct sys_reg_params *p,
> +		const struct sys_reg_desc *r)
> +{
> +	return trap_tid3(vcpu, p, r);
> +}
> +
> +static void reset_aa64dfr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r)
> +{
> +	u64 aa64dfr;
> +	u32 idx;
> +	u32 bpts;
> +	u32 wpts;
> +
> +	bpts = vcpu->arch.bpts;
> +	if (bpts)
> +		bpts--;
> +
> +	wpts = vcpu->arch.wpts;
> +	if (wpts)
> +		wpts--;
> +
> +	switch (r->Op2) {
> +	case 0:
> +		asm volatile("mrs %0, ID_AA64DFR0_EL1\n" : "=r" (aa64dfr));
> +		idx = ID_AA64DFR0_EL1;
> +		if (bpts)
> +			aa64dfr = (aa64dfr & ~(0xfULL << 12)) | ((u64)bpts << 12);
> +		if (wpts)
> +			aa64dfr = (aa64dfr & ~(0xfULL << 20)) | ((u64)wpts << 20);
> +		break;
> +	case 1:
> +		asm volatile("mrs %0, ID_AA64DFR1_EL1\n" : "=r" (aa64dfr));
> +		idx = ID_AA64DFR1_EL1;
> +		break;
> +
> +	default:
> +		BUG();
> +	}
> +
> +	vcpu_sys_reg(vcpu, idx) = aa64dfr;
> +}
> +
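
[Editorial note: ID_AA64DFR0_EL1 encodes BRPs in bits [15:12] and WRPs
in bits [23:20] as "number implemented minus one", which is why
vcpu->arch.bpts/wpts are decremented above before being merged in, and
why the arm.c hunk earlier adds 1 when decoding. Note the corner case:
a request for exactly one breakpoint decrements to zero and is then
skipped by the if (bpts) test, leaving the host's value in place. A
sketch of the decode, assuming that field layout:]

	/* Implemented counts are field value + 1. */
	unsigned int brps = ((aa64dfr >> 12) & 0xf) + 1;
	unsigned int wrps = ((aa64dfr >> 20) & 0xf) + 1;
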
> +static bool trap_aa64isar(struct kvm_vcpu *vcpu,
> +		const struct sys_reg_params *p,
> +		const struct sys_reg_desc *r)
> +{
> +	return trap_tid3(vcpu, p, r);
> +}
> +
> +static void reset_aa64isar(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r)
> +{
> +	u64 aa64isar;
> +	u32 idx;
> +
> +	switch (r->Op2) {
> +	case 0:
> +		asm volatile("mrs %0, ID_AA64ISAR0_EL1\n" : "=r" (aa64isar));
> +		idx = ID_AA64ISAR0_EL1;
> +		break;
> +
> +	case 1:
> +		asm volatile("mrs %0, ID_AA64ISAR1_EL1\n" : "=r" (aa64isar));
> +		idx = ID_AA64ISAR1_EL1;
> +		break;
> +
> +	default:
> +		BUG();
> +	}
> +	vcpu_sys_reg(vcpu, idx) = aa64isar;
> +}
> +
> +static bool trap_aa64mmfr(struct kvm_vcpu *vcpu,
> +		const struct sys_reg_params *p,
> +		const struct sys_reg_desc *r)
> +{
> +	return trap_tid3(vcpu, p, r);
> +}
> +
> +static void reset_aa64mmfr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r)
> +{
> +	u64 aa64mmfr;
> +	u32 idx;
> +
> +	switch (r->Op2) {
> +	case 0:
> +		asm volatile("mrs %0, ID_AA64MMFR0_EL1\n" : "=r" (aa64mmfr));
> +		idx = ID_AA64MMFR0_EL1;
> +		break;
> +	case 1:
> +		asm volatile("mrs %0, ID_AA64MMFR1_EL1\n" : "=r" (aa64mmfr));
> +		idx = ID_AA64MMFR1_EL1;
> +		break;
> +
> +	default:
> +		BUG();
> +	}
> +
> +	vcpu_sys_reg(vcpu, idx) = aa64mmfr;
> +}
> +
> +
>  /* Silly macro to expand the DBG{BCR,BVR,WVR,WCR}n_EL1 registers in one go */
>  #define DBG_BCR_BVR_WCR_WVR_EL1(n)					\
>  	/* DBGBVRn_EL1 */						\
> @@ -364,6 +688,86 @@ static const struct sys_reg_desc sys_reg_descs[] = {
>  	/* MPIDR_EL1 */
>  	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0000), Op2(0b101),
>  	  NULL, reset_mpidr, MPIDR_EL1 },
> +
> +	/* ID_PFR0_EL1 */
> +	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0001), Op2(0b000),
> +	  trap_pfr, reset_pfr, ID_PFR0_EL1 },
> +	/* ID_PFR1_EL1 */
> +	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0001), Op2(0b001),
> +	  trap_pfr, reset_pfr, ID_PFR1_EL1 },
> +	/* ID_DFR0_EL1 */
> +	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0001), Op2(0b010),
> +	  trap_dfr, reset_dfr, ID_DFR0_EL1 },
> +	/* ID_MMFR0_EL1 */
> +	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0001), Op2(0b100),
> +	  trap_mmfr, reset_mmfr, ID_MMFR0_EL1 },
> +	/* ID_MMFR1_EL1 */
> +	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0001), Op2(0b101),
> +	  trap_mmfr, reset_mmfr, ID_MMFR1_EL1 },
> +	/* ID_MMFR2_EL1 */
> +	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0001), Op2(0b110),
> +	  trap_mmfr, reset_mmfr, ID_MMFR2_EL1 },
> +	/* ID_MMFR3_EL1 */
> +	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0001), Op2(0b111),
> +	  trap_mmfr, reset_mmfr, ID_MMFR3_EL1 },
> +
> +	/* ID_ISAR0_EL1 */
> +	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0010), Op2(0b000),
> +	  trap_isar, reset_isar, ID_ISAR0_EL1 },
> +	/* ID_ISAR1_EL1 */
> +	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0010), Op2(0b001),
> +	  trap_isar, reset_isar, ID_ISAR1_EL1 },
> +	/* ID_ISAR2_EL1 */
> +	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0010), Op2(0b010),
> +	  trap_isar, reset_isar, ID_ISAR2_EL1 },
> +	/* ID_ISAR3_EL1 */
> +	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0010), Op2(0b011),
> +	  trap_isar, reset_isar, ID_ISAR3_EL1 },
> +	/* ID_ISAR4_EL1 */
> +	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0010), Op2(0b100),
> +	  trap_isar, reset_isar, ID_ISAR4_EL1 },
> +	/* ID_ISAR5_EL1 */
> +	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0010), Op2(0b101),
> +	  trap_isar, reset_isar, ID_ISAR5_EL1 },
> +
> +	/* MVFR0_EL1 */
> +	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0011), Op2(0b000),
> +	  trap_mvfr, reset_mvfr, MVFR0_EL1 },
> +	/* MVFR1_EL1 */
> +	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0011), Op2(0b001),
> +	  trap_mvfr, reset_mvfr, MVFR1_EL1 },
> +	/* MVFR2_EL1 */
> +	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0011), Op2(0b010),
> +	  trap_mvfr, reset_mvfr, MVFR2_EL1 },
> +
> +	/* ID_AA64PFR0_EL1 */
> +	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0100), Op2(0b000),
> +	  trap_aa64pfr, reset_aa64pfr, ID_AA64PFR0_EL1 },
> +	/* ID_AA64PFR1_EL1 */
> +	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0100), Op2(0b001),
> +	  trap_aa64pfr, reset_aa64pfr, ID_AA64PFR1_EL1 },
> +
> +	/* ID_AA64DFR0_EL1 */
> +	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0101), Op2(0b000),
> +	  trap_aa64dfr, reset_aa64dfr, ID_AA64DFR0_EL1 },
> +	/* ID_AA64DFR1_EL1 */
> +	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0101), Op2(0b001),
> +	  trap_aa64dfr, reset_aa64dfr, ID_AA64DFR1_EL1 },
> +
> +	/* ID_AA64ISAR0_EL1 */
> +	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0110), Op2(0b000),
> +	  trap_aa64isar, reset_aa64isar, ID_AA64ISAR0_EL1 },
> +	/* ID_AA64ISAR1_EL1 */
> +	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0110), Op2(0b001),
> +	  trap_aa64isar, reset_aa64isar, ID_AA64ISAR1_EL1 },
> +
> +	/* ID_AA64MMFR0_EL1 */
> +	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0111), Op2(0b000),
> +	  trap_aa64mmfr, reset_aa64mmfr, ID_AA64MMFR0_EL1 },
> +	/* ID_AA64MMFR1_EL1 */
> +	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0111), Op2(0b001),
> +	  trap_aa64mmfr, reset_aa64mmfr, ID_AA64MMFR1_EL1 },
> +
>  	/* SCTLR_EL1 */
>  	{ Op0(0b11), Op1(0b000), CRn(0b0001), CRm(0b0000), Op2(0b000),
>  	  access_vm_reg, reset_val, SCTLR_EL1, 0x00C50078 },
> @@ -1104,20 +1508,7 @@ static const struct sys_reg_desc *index_to_sys_reg_desc(struct kvm_vcpu *vcpu,
>  
>  FUNCTION_INVARIANT(ctr_el0)
>  FUNCTION_INVARIANT(revidr_el1)
> -FUNCTION_INVARIANT(id_pfr0_el1)
> -FUNCTION_INVARIANT(id_pfr1_el1)
> -FUNCTION_INVARIANT(id_dfr0_el1)
>  FUNCTION_INVARIANT(id_afr0_el1)
> -FUNCTION_INVARIANT(id_mmfr0_el1)
> -FUNCTION_INVARIANT(id_mmfr1_el1)
> -FUNCTION_INVARIANT(id_mmfr2_el1)
> -FUNCTION_INVARIANT(id_mmfr3_el1)
> -FUNCTION_INVARIANT(id_isar0_el1)
> -FUNCTION_INVARIANT(id_isar1_el1)
> -FUNCTION_INVARIANT(id_isar2_el1)
> -FUNCTION_INVARIANT(id_isar3_el1)
> -FUNCTION_INVARIANT(id_isar4_el1)
> -FUNCTION_INVARIANT(id_isar5_el1)
>  FUNCTION_INVARIANT(clidr_el1)
>  FUNCTION_INVARIANT(aidr_el1)
>  
> @@ -1125,34 +1516,8 @@ FUNCTION_INVARIANT(aidr_el1)
>  static struct sys_reg_desc invariant_sys_regs[] = {
>  	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0000), Op2(0b110),
>  	  NULL, get_revidr_el1 },
> -	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0001), Op2(0b000),
> -	  NULL, get_id_pfr0_el1 },
> -	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0001), Op2(0b001),
> -	  NULL, get_id_pfr1_el1 },
> -	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0001), Op2(0b010),
> -	  NULL, get_id_dfr0_el1 },
>  	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0001), Op2(0b011),
>  	  NULL, get_id_afr0_el1 },
> -	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0001), Op2(0b100),
> -	  NULL, get_id_mmfr0_el1 },
> -	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0001), Op2(0b101),
> -	  NULL, get_id_mmfr1_el1 },
> -	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0001), Op2(0b110),
> -	  NULL, get_id_mmfr2_el1 },
> -	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0001), Op2(0b111),
> -	  NULL, get_id_mmfr3_el1 },
> -	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0010), Op2(0b000),
> -	  NULL, get_id_isar0_el1 },
> -	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0010), Op2(0b001),
> -	  NULL, get_id_isar1_el1 },
> -	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0010), Op2(0b010),
> -	  NULL, get_id_isar2_el1 },
> -	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0010), Op2(0b011),
> -	  NULL, get_id_isar3_el1 },
> -	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0010), Op2(0b100),
> -	  NULL, get_id_isar4_el1 },
> -	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0010), Op2(0b101),
> -	  NULL, get_id_isar5_el1 },
>  	{ Op0(0b11), Op1(0b001), CRn(0b0000), CRm(0b0000), Op2(0b001),
>  	  NULL, get_clidr_el1 },
>  	{ Op0(0b11), Op1(0b001), CRn(0b0000), CRm(0b0000), Op2(0b111),
> 

-- 
Shannon

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [Qemu-devel] [PATCH RFC 2/4] arm64: kvm: enable trapping of read access to regs in TID3 group
  2015-09-15  4:23     ` Shannon Zhao
@ 2015-09-15  7:18       ` Tushar Jagad
  -1 siblings, 0 replies; 24+ messages in thread
From: Tushar Jagad @ 2015-09-15  7:18 UTC (permalink / raw)
  To: Shannon Zhao
  Cc: peter.maydell, christoffer.dall, marc.zyngier, patches,
	qemu-devel, Huangpeng (Peter),
	Tushar Jagad, kvmarm, linux-arm-kernel


Hi Shannon,

On Tue, Sep 15, 2015 at 12:23:57PM +0800, Shannon Zhao wrote:
>
>
> On 2015/9/9 16:38, Tushar Jagad wrote:
> > This patch modifies HCR_GUEST_FLAGS to enable trapping of non-secure
> > reads of the registers in the HCR_EL2.TID3 group to EL2.
> >
> > We emulate the accesses to capability registers which list the number of
> > breakpoints, watchpoints, etc. These values are provided by the user when
> > starting the VM. The emulated values are constructed at runtime from the
> > trap handler.
> >
> > Signed-off-by: Tushar Jagad <tushar.jagad@linaro.org>
> > ---
> >  Documentation/virtual/kvm/api.txt |    8 +
> >  arch/arm/kvm/arm.c                |   50 ++++-
> >  arch/arm64/include/asm/kvm_arm.h  |    2 +-
> >  arch/arm64/include/asm/kvm_asm.h  |   38 +++-
> >  arch/arm64/include/asm/kvm_host.h |    4 +-
> >  arch/arm64/include/uapi/asm/kvm.h |    7 +
> >  arch/arm64/kvm/sys_regs.c         |  443 +++++++++++++++++++++++++++++++++----
> >  7 files changed, 503 insertions(+), 49 deletions(-)
> >
> > diff --git a/Documentation/virtual/kvm/api.txt b/Documentation/virtual/kvm/api.txt
> > index a7926a9..b06c104 100644
> > --- a/Documentation/virtual/kvm/api.txt
> > +++ b/Documentation/virtual/kvm/api.txt
> > @@ -2561,6 +2561,14 @@ Possible features:
> >  	  Depends on KVM_CAP_ARM_EL1_32BIT (arm64 only).
> >  	- KVM_ARM_VCPU_PSCI_0_2: Emulate PSCI v0.2 for the CPU.
> >  	  Depends on KVM_CAP_ARM_PSCI_0_2.
> > +	- KVM_ARM_VCPU_NUM_BPTS: Number of supported h/w breakpoints
> > +	  This is a 4-bit value which defines the number of hardware
> > +	  breakpoints supported on the guest. If this is not specified, or
> > +	  is set to zero, the guest sees the value as is from the host.
> > +	- KVM_ARM_VCPU_NUM_WPTS: Number of supported h/w watchpoints
> > +	  This is a 4-bit value which defines the number of hardware
> > +	  watchpoints supported on the guest. If this is not specified, or
> > +	  is set to zero, the guest sees the value as is from the host.
> >
> >
> >  4.83 KVM_ARM_PREFERRED_TARGET
> > diff --git a/arch/arm/kvm/arm.c b/arch/arm/kvm/arm.c
> > index bc738d2..8907d37 100644
> > --- a/arch/arm/kvm/arm.c
> > +++ b/arch/arm/kvm/arm.c
> > @@ -696,6 +696,8 @@ static int kvm_vcpu_set_target(struct kvm_vcpu *vcpu,
> >  			       const struct kvm_vcpu_init *init)
> >  {
> >  	unsigned int i;
> > +	u64 aa64dfr;
> > +
> >  	int phys_target = kvm_target_cpu();
> >
> >  	if (init->target != phys_target)
> > @@ -708,6 +710,8 @@ static int kvm_vcpu_set_target(struct kvm_vcpu *vcpu,
> >  	if (vcpu->arch.target != -1 && vcpu->arch.target != init->target)
> >  		return -EINVAL;
> >
> > +	asm volatile("mrs %0, ID_AA64DFR0_EL1\n" : "=r" (aa64dfr));
> > +
> >  	/* -ENOENT for unknown features, -EINVAL for invalid combinations. */
> >  	for (i = 0; i < sizeof(init->features) * 8; i++) {
> >  		bool set = (init->features[i / 32] & (1 << (i % 32)));
> > @@ -715,6 +719,50 @@ static int kvm_vcpu_set_target(struct kvm_vcpu *vcpu,
> >  		if (set && i >= KVM_VCPU_MAX_FEATURES)
> >  			return -ENOENT;
> >
> > +		if (i == KVM_ARM_VCPU_NUM_BPTS) {
> > +			int h_bpts;
> > +			int g_bpts;
> > +
> > +			h_bpts = ((aa64dfr >> 12) & 0xf) + 1;
> > +			g_bpts = (init->features[KVM_ARM_VCPU_BPTS_FEATURES_IDX] &
> > +					KVM_ARM_VCPU_BPTS_MASK) >> KVM_ARM_VCPU_NUM_BPTS;
> > +
> > +			/*
> > +			 * We ensure that the host can support the requested
> > +			 * number of hardware breakpoints.
> > +			 */
> > +			if (g_bpts > h_bpts)
> > +				return -EINVAL;
> > +
> This may not work. Assuming the number of source host hardware
> breakpoints is 15 and userspace sets g_bpts to 15 as well, it's ok to
> create the VM on the source host. But if the number of destination host
> hardware breakpoints is less than 15 (e.g. 8), this will return -EINVAL,
> the VM will fail to be created on the destination host, and migration fails.
>
> (P.S. I'm considering the guest PMU for the cross-cpu case, so I have
> taken a look at this patch)

We basically want to avoid migrating a guest to a host which lacks the
necessary support in the hardware. Consider a case where there are
different platforms (with different CPU implementation capabilities) in a
cluster, i.e. a few platforms support 2 h/w breakpoints/watchpoints, some
platforms support 4 h/w breakpoints/watchpoints, etc. In this case the
least common denominator of these implementation capabilities should be
determined before starting a VM. So in the given scenario we would
configure all VMs with 2 h/w breakpoints/watchpoints, which avoids
crashing the guest after migration (a toy illustration of that selection
follows below).
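
To make that concrete, a purely hypothetical userspace helper (nothing in
these patches provides it) could derive the cluster-wide value like this:

	/*
	 * Hypothetical: pick the smallest per-host capability count so
	 * that every host in the cluster can honour the guest's view.
	 */
	static unsigned int cluster_min(const unsigned int *counts, int nr_hosts)
	{
		unsigned int min = counts[0];
		int i;

		for (i = 1; i < nr_hosts; i++)
			if (counts[i] < min)
				min = counts[i];

		return min;	/* e.g. counts of {2, 4, 6} -> 2 for every VM */
	}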

For now these patches cover h/w breakpoints and watchpoints but need to be
expanded to include PMU support. A sketch of how the chosen counts would
be handed to KVM_ARM_VCPU_INIT follows below.
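
Again for illustration only, and assuming the uapi additions from this
patch, userspace might pack the chosen counts as follows (the helper name
and the example target are invented here; error handling is omitted):

	#include <string.h>
	#include <linux/kvm.h>

	/* Request e.g. the cluster-wide minimum counts for this vcpu. */
	static void cross_cpu_vcpu_init(struct kvm_vcpu_init *init,
					unsigned int bpts, unsigned int wpts)
	{
		memset(init, 0, sizeof(*init));
		init->target = KVM_ARM_TARGET_CORTEX_A57;	/* example target */
		init->features[KVM_ARM_VCPU_BPTS_FEATURES_IDX] |=
			(bpts << KVM_ARM_VCPU_NUM_BPTS) & KVM_ARM_VCPU_BPTS_MASK;
		init->features[KVM_ARM_VCPU_WPTS_FEATURES_IDX] |=
			(wpts << KVM_ARM_VCPU_NUM_WPTS) & KVM_ARM_VCPU_WPTS_MASK;
		/* The caller then issues ioctl(vcpu_fd, KVM_ARM_VCPU_INIT, init). */
	}
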
--
Thanks,
Tushar

>
> > +			vcpu->arch.bpts = g_bpts;
> > +
> > +			i += 3;
> > +
> > +			continue;
> > +		}
> > +
> > +		if (i == KVM_ARM_VCPU_NUM_WPTS) {
> > +			int h_wpts;
> > +			int g_wpts;
> > +
> > +			h_wpts = ((aa64dfr >> 20) & 0xf) + 1;
> > +			g_wpts = (init->features[KVM_ARM_VCPU_WPTS_FEATURES_IDX] &
> > +					KVM_ARM_VCPU_WPTS_MASK) >> KVM_ARM_VCPU_NUM_WPTS;
> > +
> > +			/*
> > +			 * We ensure that the host can support the requested
> > +			 * number of hardware watchpoints.
> > +			 */
> > +			if (g_wpts > h_wpts)
> > +				return -EINVAL;
> > +
> > +			vcpu->arch.wpts = g_wpts;
> > +
> > +			i += 3;
> > +
> > +			continue;
> > +		}
> > +
> >  		/*
> >  		 * Secondary and subsequent calls to KVM_ARM_VCPU_INIT must
> >  		 * use the same feature set.
> > @@ -727,7 +775,7 @@ static int kvm_vcpu_set_target(struct kvm_vcpu *vcpu,
> >  			set_bit(i, vcpu->arch.features);
> >  	}
> >
> > -	vcpu->arch.target = phys_target;
> > +	vcpu->arch.target = init->target;
> >
> >  	/* Now we know what it is, we can reset it. */
> >  	return kvm_reset_vcpu(vcpu);
> > diff --git a/arch/arm64/include/asm/kvm_arm.h b/arch/arm64/include/asm/kvm_arm.h
> > index ac6fafb..3b67051 100644
> > --- a/arch/arm64/include/asm/kvm_arm.h
> > +++ b/arch/arm64/include/asm/kvm_arm.h
> > @@ -78,7 +78,7 @@
> >   */
> >  #define HCR_GUEST_FLAGS (HCR_TSC | HCR_TSW | HCR_TWE | HCR_TWI | HCR_VM | \
> >  			 HCR_TVM | HCR_BSU_IS | HCR_FB | HCR_TAC | \
> > -			 HCR_AMO | HCR_SWIO | HCR_TIDCP | HCR_RW)
> > +			 HCR_AMO | HCR_SWIO | HCR_TIDCP | HCR_RW | HCR_TID3)
> >  #define HCR_VIRT_EXCP_MASK (HCR_VA | HCR_VI | HCR_VF)
> >  #define HCR_INT_OVERRIDE   (HCR_FMO | HCR_IMO)
> >
> > diff --git a/arch/arm64/include/asm/kvm_asm.h b/arch/arm64/include/asm/kvm_asm.h
> > index c1d5bde..087d104 100644
> > --- a/arch/arm64/include/asm/kvm_asm.h
> > +++ b/arch/arm64/include/asm/kvm_asm.h
> > @@ -56,15 +56,39 @@
> >  #define DBGWVR15_EL1	86
> >  #define MDCCINT_EL1	87	/* Monitor Debug Comms Channel Interrupt Enable Reg */
> >  #define MIDR_EL1	88	/* Main ID Register */
> > +#define ID_AA64MMFR0_EL1	89	/* AArch64 Memory Model Feature Register 0 */
> > +#define ID_AA64MMFR1_EL1	90	/* AArch64 Memory Model Feature Register 1 */
> > +#define MVFR0_EL1	91	/* AArch32 Media and VFP Feature Register 0 */
> > +#define MVFR1_EL1	92	/* AArch32 Media and VFP Feature Register 1 */
> > +#define MVFR2_EL1	93	/* AArch32 Media and VFP Feature Register 2 */
> > +#define ID_AA64PFR0_EL1	94	/* AArch64 Processor Feature Register 0 */
> > +#define ID_AA64PFR1_EL1	95	/* AArch64 Processor Feature Register 1 */
> > +#define ID_AA64DFR0_EL1	96	/* AArch64 Debug Feature Register 0 */
> > +#define ID_AA64DFR1_EL1	97	/* AArch64 Debug Feature Register 1 */
> > +#define ID_AA64ISAR0_EL1	98	/* AArch64 Instruction Set Attribute Register 0 */
> > +#define ID_AA64ISAR1_EL1	99	/* AArch64 Instruction Set Attribute Register 1 */
> > +#define ID_PFR0_EL1	100	/* AArch32 Processor Feature Register 0 */
> > +#define ID_PFR1_EL1	101	/* AArch32 Processor Feature Register 1 */
> > +#define ID_DFR0_EL1	102	/* AArch32 Debug Feature Register 0 */
> > +#define ID_ISAR0_EL1	103	/* AArch32 Instruction Set Attribute Register 0 */
> > +#define ID_ISAR1_EL1	104	/* AArch32 Instruction Set Attribute Register 1 */
> > +#define ID_ISAR2_EL1	105	/* AArch32 Instruction Set Attribute Register 2 */
> > +#define ID_ISAR3_EL1	106	/* AArch32 Instruction Set Attribute Register 3 */
> > +#define ID_ISAR4_EL1	107	/* AArch32 Instruction Set Attribute Register 4 */
> > +#define ID_ISAR5_EL1	108	/* AArch32 Instruction Set Attribute Register 5 */
> > +#define ID_MMFR0_EL1	109	/* AArch32 Memory Model Feature Register 0 */
> > +#define ID_MMFR1_EL1	110	/* AArch32 Memory Model Feature Register 1 */
> > +#define ID_MMFR2_EL1	111	/* AArch32 Memory Model Feature Register 2 */
> > +#define ID_MMFR3_EL1	112	/* AArch32 Memory Model Feature Register 3 */
> >
> >  /* 32bit specific registers. Keep them at the end of the range */
> > -#define	DACR32_EL2	89	/* Domain Access Control Register */
> > -#define	IFSR32_EL2	90	/* Instruction Fault Status Register */
> > -#define	FPEXC32_EL2	91	/* Floating-Point Exception Control Register */
> > -#define	DBGVCR32_EL2	92	/* Debug Vector Catch Register */
> > -#define	TEECR32_EL1	93	/* ThumbEE Configuration Register */
> > -#define	TEEHBR32_EL1	94	/* ThumbEE Handler Base Register */
> > -#define	NR_SYS_REGS	95
> > +#define	DACR32_EL2	113	/* Domain Access Control Register */
> > +#define	IFSR32_EL2	114	/* Instruction Fault Status Register */
> > +#define	FPEXC32_EL2	115	/* Floating-Point Exception Control Register */
> > +#define	DBGVCR32_EL2	116	/* Debug Vector Catch Register */
> > +#define	TEECR32_EL1	117	/* ThumbEE Configuration Register */
> > +#define	TEEHBR32_EL1	118	/* ThumbEE Handler Base Register */
> > +#define	NR_SYS_REGS	119
> >
> >  /* 32bit mapping */
> >  #define c0_MIDR		(MIDR_EL1 * 2)	/* Main ID Register */
> > diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
> > index 2709db2..c780227 100644
> > --- a/arch/arm64/include/asm/kvm_host.h
> > +++ b/arch/arm64/include/asm/kvm_host.h
> > @@ -43,7 +43,7 @@
> >  #include <kvm/arm_vgic.h>
> >  #include <kvm/arm_arch_timer.h>
> >
> > -#define KVM_VCPU_MAX_FEATURES 3
> > +#define KVM_VCPU_MAX_FEATURES 12
> >
> >  int __attribute_const__ kvm_target_cpu(void);
> >  int kvm_reset_vcpu(struct kvm_vcpu *vcpu);
> > @@ -137,6 +137,8 @@ struct kvm_vcpu_arch {
> >  	/* Target CPU and feature flags */
> >  	int target;
> >  	DECLARE_BITMAP(features, KVM_VCPU_MAX_FEATURES);
> > +	u32 bpts;
> > +	u32 wpts;
> >
> >  	/* Detect first run of a vcpu */
> >  	bool has_run_once;
> > diff --git a/arch/arm64/include/uapi/asm/kvm.h b/arch/arm64/include/uapi/asm/kvm.h
> > index d268320..94d1fc9 100644
> > --- a/arch/arm64/include/uapi/asm/kvm.h
> > +++ b/arch/arm64/include/uapi/asm/kvm.h
> > @@ -88,6 +88,13 @@ struct kvm_regs {
> >  #define KVM_ARM_VCPU_POWER_OFF		0 /* CPU is started in OFF state */
> >  #define KVM_ARM_VCPU_EL1_32BIT		1 /* CPU running a 32bit VM */
> >  #define KVM_ARM_VCPU_PSCI_0_2		2 /* CPU uses PSCI v0.2 */
> > +#define KVM_ARM_VCPU_NUM_BPTS		3 /* Number of breakpoints supported */
> > +#define KVM_ARM_VCPU_NUM_WPTS		7 /* Number of watchpoints supported */
> > +
> > +#define KVM_ARM_VCPU_BPTS_FEATURES_IDX	0
> > +#define KVM_ARM_VCPU_WPTS_FEATURES_IDX	0
> > +#define KVM_ARM_VCPU_BPTS_MASK		0x00000078
> > +#define KVM_ARM_VCPU_WPTS_MASK		0x00000780
> >
> >  struct kvm_vcpu_init {
> >  	__u32 target;
> > diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
> > index 7047292..273eecd 100644
> > --- a/arch/arm64/kvm/sys_regs.c
> > +++ b/arch/arm64/kvm/sys_regs.c
> > @@ -244,6 +244,330 @@ static void reset_mpidr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r)
> >  	vcpu_sys_reg(vcpu, MPIDR_EL1) = (1ULL << 31) | mpidr;
> >  }
> >
> > +static bool trap_tid3(struct kvm_vcpu *vcpu,
> > +		const struct sys_reg_params *p,
> > +		const struct sys_reg_desc *r)
> > +{
> > +	if (p->is_write) {
> > +		vcpu_sys_reg(vcpu, r->reg) = *vcpu_reg(vcpu, p->Rt);
> > +	} else {
> > +		*vcpu_reg(vcpu, p->Rt) = vcpu_sys_reg(vcpu, r->reg);
> > +	}
> > +
> > +	return true;
> > +}
> > +
> > +static bool trap_pfr(struct kvm_vcpu *vcpu,
> > +		const struct sys_reg_params *p,
> > +		const struct sys_reg_desc *r)
> > +{
> > +	return trap_tid3(vcpu, p, r);
> > +}
> > +
> > +static void reset_pfr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r)
> > +{
> > +	u32 pfr;
> > +	u32 idx;
> > +
> > +	switch (r->Op2) {
> > +	case 0:
> > +		asm volatile("mrs %0, ID_PFR0_EL1\n" : "=r" (pfr));
> > +		idx = ID_PFR0_EL1;
> > +		break;
> > +	case 1:
> > +		asm volatile("mrs %0, ID_PFR1_EL1\n" : "=r" (pfr));
> > +		idx = ID_PFR1_EL1;
> > +		break;
> > +
> > +	default:
> > +		BUG();
> > +	}
> > +
> > +	vcpu_sys_reg(vcpu, idx) = pfr;
> > +}
> > +
> > +static bool trap_dfr(struct kvm_vcpu *vcpu,
> > +		const struct sys_reg_params *p,
> > +		const struct sys_reg_desc *r)
> > +{
> > +	return trap_tid3(vcpu, p, r);
> > +}
> > +
> > +static void reset_dfr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r)
> > +{
> > +	u32 dfr;
> > +
> > +	asm volatile("mrs %0, ID_DFR0_EL1\n" : "=r" (dfr));
> > +	vcpu_sys_reg(vcpu, ID_DFR0_EL1) = dfr;
> > +}
> > +
> > +static bool trap_mmfr(struct kvm_vcpu *vcpu,
> > +		const struct sys_reg_params *p,
> > +		const struct sys_reg_desc *r)
> > +{
> > +	return trap_tid3(vcpu, p, r);
> > +}
> > +
> > +static void reset_mmfr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r)
> > +{
> > +	u32 mmfr;
> > +	u32 idx;
> > +
> > +	switch (r->CRm) {
> > +	case 1:
> > +		switch (r->Op2) {
> > +		case 4:
> > +			asm volatile("mrs %0, ID_MMFR0_EL1\n" : "=r" (mmfr));
> > +			idx = ID_MMFR0_EL1;
> > +			break;
> > +
> > +		case 5:
> > +			asm volatile("mrs %0, ID_MMFR1_EL1\n" : "=r" (mmfr));
> > +			idx = ID_MMFR1_EL1;
> > +			break;
> > +
> > +		case 6:
> > +			asm volatile("mrs %0, ID_MMFR2_EL1\n" : "=r" (mmfr));
> > +			idx = ID_MMFR2_EL1;
> > +			break;
> > +
> > +		case 7:
> > +			asm volatile("mrs %0, ID_MMFR3_EL1\n" : "=r" (mmfr));
> > +			idx = ID_MMFR3_EL1;
> > +			break;
> > +
> > +		default:
> > +			BUG();
> > +		}
> > +		break;
> > +
> > +#if 0
> > +	case 2:
> > +		asm volatile("mrs %0, ID_MMFR4_EL1\n" : "=r" (mmfr));
> > +		idx = ID_MMFR4_EL1;
> > +		break;
> > +#endif
> > +
> > +	default:
> > +		BUG();
> > +	}
> > +	vcpu_sys_reg(vcpu, idx) = mmfr;
> > +}
> > +
> > +static bool trap_isar(struct kvm_vcpu *vcpu,
> > +		const struct sys_reg_params *p,
> > +		const struct sys_reg_desc *r)
> > +{
> > +	return trap_tid3(vcpu, p, r);
> > +}
> > +
> > +static void reset_isar(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r)
> > +{
> > +	u32 isar;
> > +	u32 idx;
> > +
> > +	switch (r->Op2) {
> > +	case 0:
> > +		asm volatile("mrs %0, ID_ISAR0_EL1\n" : "=r" (isar));
> > +		idx = ID_ISAR0_EL1;
> > +		break;
> > +
> > +	case 1:
> > +		asm volatile("mrs %0, ID_ISAR1_EL1\n" : "=r" (isar));
> > +		idx = ID_ISAR1_EL1;
> > +		break;
> > +
> > +	case 2:
> > +		asm volatile("mrs %0, ID_ISAR2_EL1\n" : "=r" (isar));
> > +		idx = ID_ISAR2_EL1;
> > +		break;
> > +
> > +	case 3:
> > +		asm volatile("mrs %0, ID_ISAR3_EL1\n" : "=r" (isar));
> > +		idx = ID_ISAR3_EL1;
> > +		break;
> > +
> > +	case 4:
> > +		asm volatile("mrs %0, ID_ISAR4_EL1\n" : "=r" (isar));
> > +		idx = ID_ISAR4_EL1;
> > +		break;
> > +
> > +	case 5:
> > +		asm volatile("mrs %0, ID_ISAR5_EL1\n" : "=r" (isar));
> > +		idx = ID_ISAR5_EL1;
> > +		break;
> > +
> > +	default:
> > +		BUG();
> > +	}
> > +	vcpu_sys_reg(vcpu, idx) = isar;
> > +}
> > +
> > +static bool trap_mvfr(struct kvm_vcpu *vcpu,
> > +		const struct sys_reg_params *p,
> > +		const struct sys_reg_desc *r)
> > +{
> > +	return trap_tid3(vcpu, p, r);
> > +}
> > +
> > +static void reset_mvfr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r)
> > +{
> > +	u32 mvfr;
> > +	u32 idx;
> > +
> > +	switch (r->Op2) {
> > +	case 0:
> > +		asm volatile("mrs %0, MVFR0_EL1\n" : "=r" (mvfr));
> > +		idx = MVFR0_EL1;
> > +		break;
> > +	case 1:
> > +		asm volatile("mrs %0, MVFR1_EL1\n" : "=r" (mvfr));
> > +		idx = MVFR1_EL1;
> > +		break;
> > +
> > +	case 2:
> > +		asm volatile("mrs %0, MVFR2_EL1\n" : "=r" (mvfr));
> > +		idx = MVFR2_EL1;
> > +		break;
> > +
> > +	default:
> > +		BUG();
> > +	}
> > +
> > +	vcpu_sys_reg(vcpu, idx) = mvfr;
> > +}
> > +
> > +static bool trap_aa64pfr(struct kvm_vcpu *vcpu,
> > +		const struct sys_reg_params *p,
> > +		const struct sys_reg_desc *r)
> > +{
> > +	return trap_tid3(vcpu, p, r);
> > +}
> > +
> > +static void reset_aa64pfr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r)
> > +{
> > +	u64 aa64pfr;
> > +	u32 idx;
> > +
> > +	switch (r->Op2) {
> > +	case 0:
> > +		asm volatile("mrs %0, ID_AA64PFR0_EL1\n" : "=r" (aa64pfr));
> > +		idx = ID_AA64PFR0_EL1;
> > +		break;
> > +	case 1:
> > +		asm volatile("mrs %0, ID_AA64PFR1_EL1\n" : "=r" (aa64pfr));
> > +		idx = ID_AA64PFR1_EL1;
> > +		break;
> > +
> > +	default:
> > +		BUG();
> > +	}
> > +
> > +	vcpu_sys_reg(vcpu, idx) = aa64pfr;
> > +}
> > +
> > +static bool trap_aa64dfr(struct kvm_vcpu *vcpu,
> > +		const struct sys_reg_params *p,
> > +		const struct sys_reg_desc *r)
> > +{
> > +	return trap_tid3(vcpu, p, r);
> > +}
> > +
> > +static void reset_aa64dfr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r)
> > +{
> > +	u64 aa64dfr;
> > +	u32 idx;
> > +	u32 bpts;
> > +	u32 wpts;
> > +
> > +	bpts = vcpu->arch.bpts;
> > +	if (bpts)
> > +		bpts--;
> > +
> > +	wpts = vcpu->arch.wpts;
> > +	if (wpts)
> > +		wpts--;
> > +
> > +	switch (r->Op2) {
> > +	case 0:
> > +		asm volatile("mrs %0, ID_AA64DFR0_EL1\n" : "=r" (aa64dfr));
> > +		idx = ID_AA64DFR0_EL1;
> > +		if (bpts)
> > +			aa64dfr = (aa64dfr & ~(0xf << 12)) | (bpts << 12);
> > +		if (wpts)
> > +			aa64dfr = (aa64dfr & ~(0xf << 20)) | (wpts << 20);
> > +		break;
> > +	case 1:
> > +		asm volatile("mrs %0, ID_AA64DFR1_EL1\n" : "=r" (aa64dfr));
> > +		idx = ID_AA64DFR1_EL1;
> > +		break;
> > +
> > +	default:
> > +		BUG();
> > +	}
> > +
> > +	vcpu_sys_reg(vcpu, idx) = aa64dfr;
> > +}
> > +
> > +static bool trap_aa64isar(struct kvm_vcpu *vcpu,
> > +		const struct sys_reg_params *p,
> > +		const struct sys_reg_desc *r)
> > +{
> > +	return trap_tid3(vcpu, p, r);
> > +}
> > +
> > +static void reset_aa64isar(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r)
> > +{
> > +	u64 aa64isar;
> > +	u32 idx;
> > +
> > +	switch (r->Op2) {
> > +	case 0:
> > +		asm volatile("mrs %0, ID_AA64ISAR0_EL1\n" : "=r" (aa64isar));
> > +		idx = ID_AA64ISAR0_EL1;
> > +		break;
> > +
> > +	case 1:
> > +		asm volatile("mrs %0, ID_AA64ISAR1_EL1\n" : "=r" (aa64isar));
> > +		idx = ID_AA64ISAR1_EL1;
> > +		break;
> > +
> > +	default:
> > +		BUG();
> > +	}
> > +	vcpu_sys_reg(vcpu, idx) = aa64isar;
> > +}
> > +
> > +static bool trap_aa64mmfr(struct kvm_vcpu *vcpu,
> > +		const struct sys_reg_params *p,
> > +		const struct sys_reg_desc *r)
> > +{
> > +	return trap_tid3(vcpu, p, r);
> > +}
> > +
> > +static void reset_aa64mmfr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r)
> > +{
> > +	u64 aa64mmfr;
> > +	u32 idx;
> > +
> > +	switch (r->Op2) {
> > +	case 0:
> > +		asm volatile("mrs %0, ID_AA64MMFR0_EL1\n" : "=r" (aa64mmfr));
> > +		idx = ID_AA64MMFR0_EL1;
> > +		break;
> > +	case 1:
> > +		asm volatile("mrs %0, ID_AA64MMFR1_EL1\n" : "=r" (aa64mmfr));
> > +		idx = ID_AA64MMFR1_EL1;
> > +		break;
> > +
> > +	default:
> > +		BUG();
> > +	}
> > +
> > +	vcpu_sys_reg(vcpu, idx) = aa64mmfr;
> > +}
> > +
> > +
> >  /* Silly macro to expand the DBG{BCR,BVR,WVR,WCR}n_EL1 registers in one go */
> >  #define DBG_BCR_BVR_WCR_WVR_EL1(n)					\
> >  	/* DBGBVRn_EL1 */						\
> > @@ -364,6 +688,86 @@ static const struct sys_reg_desc sys_reg_descs[] = {
> >  	/* MPIDR_EL1 */
> >  	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0000), Op2(0b101),
> >  	  NULL, reset_mpidr, MPIDR_EL1 },
> > +
> > +	/* ID_PFR0_EL1 */
> > +	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0001), Op2(0b000),
> > +	  trap_pfr, reset_pfr, ID_PFR0_EL1 },
> > +	/* ID_PFR1_EL1 */
> > +	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0001), Op2(0b001),
> > +	  trap_pfr, reset_pfr, ID_PFR1_EL1 },
> > +	/* ID_DFR0_EL1 */
> > +	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0001), Op2(0b010),
> > +	  trap_dfr, reset_dfr, ID_DFR0_EL1 },
> > +	/* ID_MMFR0_EL1 */
> > +	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0001), Op2(0b100),
> > +	  trap_mmfr, reset_mmfr, ID_MMFR0_EL1 },
> > +	/* ID_MMFR1_EL1 */
> > +	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0001), Op2(0b101),
> > +	  trap_mmfr, reset_mmfr, ID_MMFR1_EL1 },
> > +	/* ID_MMFR2_EL1 */
> > +	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0001), Op2(0b110),
> > +	  trap_mmfr, reset_mmfr, ID_MMFR2_EL1 },
> > +	/* ID_MMFR3_EL1 */
> > +	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0001), Op2(0b111),
> > +	  trap_mmfr, reset_mmfr, ID_MMFR3_EL1 },
> > +
> > +	/* ID_ISAR0_EL1 */
> > +	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0010), Op2(0b000),
> > +	  trap_isar, reset_isar, ID_ISAR0_EL1 },
> > +	/* ID_ISAR1_EL1 */
> > +	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0010), Op2(0b001),
> > +	  trap_isar, reset_isar, ID_ISAR1_EL1 },
> > +	/* ID_ISAR2_EL1 */
> > +	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0010), Op2(0b010),
> > +	  trap_isar, reset_isar, ID_ISAR2_EL1 },
> > +	/* ID_ISAR3_EL1 */
> > +	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0010), Op2(0b011),
> > +	  trap_isar, reset_isar, ID_ISAR3_EL1 },
> > +	/* ID_ISAR4_EL1 */
> > +	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0010), Op2(0b100),
> > +	  trap_isar, reset_isar, ID_ISAR4_EL1 },
> > +	/* ID_ISAR5_EL1 */
> > +	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0010), Op2(0b101),
> > +	  trap_isar, reset_isar, ID_ISAR5_EL1 },
> > +
> > +	/* MVFR0_EL1 */
> > +	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0011), Op2(0b000),
> > +	  trap_mvfr, reset_mvfr, MVFR0_EL1 },
> > +	/* MVFR1_EL1 */
> > +	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0011), Op2(0b001),
> > +	  trap_mvfr, reset_mvfr, MVFR1_EL1 },
> > +	/* MVFR2_EL1 */
> > +	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0011), Op2(0b010),
> > +	  trap_mvfr, reset_mvfr, MVFR2_EL1 },
> > +
> > +	/* ID_AA64PFR0_EL1 */
> > +	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0100), Op2(0b000),
> > +	  trap_aa64pfr, reset_aa64pfr, ID_AA64PFR0_EL1 },
> > +	/* ID_AA64PFR1_EL1 */
> > +	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0100), Op2(0b001),
> > +	  trap_aa64pfr, reset_aa64pfr, ID_AA64PFR1_EL1 },
> > +
> > +	/* ID_AA64DFR0_EL1 */
> > +	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0101), Op2(0b000),
> > +	  trap_aa64dfr, reset_aa64dfr, ID_AA64DFR0_EL1 },
> > +	/* ID_AA64DFR1_EL1 */
> > +	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0101), Op2(0b001),
> > +	  trap_aa64dfr, reset_aa64dfr, ID_AA64DFR1_EL1 },
> > +
> > +	/* ID_AA64ISAR0_EL1 */
> > +	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0110), Op2(0b000),
> > +	  trap_aa64isar, reset_aa64isar, ID_AA64ISAR0_EL1 },
> > +	/* ID_AA64ISAR1_EL1 */
> > +	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0110), Op2(0b001),
> > +	  trap_aa64isar, reset_aa64isar, ID_AA64ISAR1_EL1 },
> > +
> > +	/* ID_AA64MMFR0_EL1 */
> > +	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0111), Op2(0b000),
> > +	  trap_aa64mmfr, reset_aa64mmfr, ID_AA64MMFR0_EL1 },
> > +	/* ID_AA64MMFR1_EL1 */
> > +	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0111), Op2(0b001),
> > +	  trap_aa64mmfr, reset_aa64mmfr, ID_AA64MMFR1_EL1 },
> > +
> >  	/* SCTLR_EL1 */
> >  	{ Op0(0b11), Op1(0b000), CRn(0b0001), CRm(0b0000), Op2(0b000),
> >  	  access_vm_reg, reset_val, SCTLR_EL1, 0x00C50078 },
> > @@ -1104,20 +1508,7 @@ static const struct sys_reg_desc *index_to_sys_reg_desc(struct kvm_vcpu *vcpu,
> >
> >  FUNCTION_INVARIANT(ctr_el0)
> >  FUNCTION_INVARIANT(revidr_el1)
> > -FUNCTION_INVARIANT(id_pfr0_el1)
> > -FUNCTION_INVARIANT(id_pfr1_el1)
> > -FUNCTION_INVARIANT(id_dfr0_el1)
> >  FUNCTION_INVARIANT(id_afr0_el1)
> > -FUNCTION_INVARIANT(id_mmfr0_el1)
> > -FUNCTION_INVARIANT(id_mmfr1_el1)
> > -FUNCTION_INVARIANT(id_mmfr2_el1)
> > -FUNCTION_INVARIANT(id_mmfr3_el1)
> > -FUNCTION_INVARIANT(id_isar0_el1)
> > -FUNCTION_INVARIANT(id_isar1_el1)
> > -FUNCTION_INVARIANT(id_isar2_el1)
> > -FUNCTION_INVARIANT(id_isar3_el1)
> > -FUNCTION_INVARIANT(id_isar4_el1)
> > -FUNCTION_INVARIANT(id_isar5_el1)
> >  FUNCTION_INVARIANT(clidr_el1)
> >  FUNCTION_INVARIANT(aidr_el1)
> >
> > @@ -1125,34 +1516,8 @@ FUNCTION_INVARIANT(aidr_el1)
> >  static struct sys_reg_desc invariant_sys_regs[] = {
> >  	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0000), Op2(0b110),
> >  	  NULL, get_revidr_el1 },
> > -	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0001), Op2(0b000),
> > -	  NULL, get_id_pfr0_el1 },
> > -	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0001), Op2(0b001),
> > -	  NULL, get_id_pfr1_el1 },
> > -	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0001), Op2(0b010),
> > -	  NULL, get_id_dfr0_el1 },
> >  	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0001), Op2(0b011),
> >  	  NULL, get_id_afr0_el1 },
> > -	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0001), Op2(0b100),
> > -	  NULL, get_id_mmfr0_el1 },
> > -	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0001), Op2(0b101),
> > -	  NULL, get_id_mmfr1_el1 },
> > -	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0001), Op2(0b110),
> > -	  NULL, get_id_mmfr2_el1 },
> > -	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0001), Op2(0b111),
> > -	  NULL, get_id_mmfr3_el1 },
> > -	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0010), Op2(0b000),
> > -	  NULL, get_id_isar0_el1 },
> > -	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0010), Op2(0b001),
> > -	  NULL, get_id_isar1_el1 },
> > -	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0010), Op2(0b010),
> > -	  NULL, get_id_isar2_el1 },
> > -	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0010), Op2(0b011),
> > -	  NULL, get_id_isar3_el1 },
> > -	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0010), Op2(0b100),
> > -	  NULL, get_id_isar4_el1 },
> > -	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0010), Op2(0b101),
> > -	  NULL, get_id_isar5_el1 },
> >  	{ Op0(0b11), Op1(0b001), CRn(0b0000), CRm(0b0000), Op2(0b001),
> >  	  NULL, get_clidr_el1 },
> >  	{ Op0(0b11), Op1(0b001), CRn(0b0000), CRm(0b0000), Op2(0b111),
> >
>
> --
> Shannon
>

^ permalink raw reply	[flat|nested] 24+ messages in thread

* [Qemu-devel] [PATCH RFC 2/4] arm64: kvm: enable trapping of read access to regs in TID3 group
@ 2015-09-15  7:18       ` Tushar Jagad
  0 siblings, 0 replies; 24+ messages in thread
From: Tushar Jagad @ 2015-09-15  7:18 UTC (permalink / raw)
  To: linux-arm-kernel


Hi Shannon,

On Tue, Sep 15, 2015 at 12:23:57PM +0800, Shannon Zhao wrote:
>
>
> On 2015/9/9 16:38, Tushar Jagad wrote:
> > This patch modifies HCR_GUEST_FLAGS to enable trapping of
> > non-secure reads of registers in the HCR_EL2.TID3 group to EL2.
> >
> > We emulate accesses to the capability registers which list the number of
> > breakpoints, watchpoints, etc. These values are provided by the user when
> > starting the VM. The emulated values are constructed at runtime in the
> > trap handler.
> >
> > Signed-off-by: Tushar Jagad <tushar.jagad@linaro.org>
> > ---
> >  Documentation/virtual/kvm/api.txt |    8 +
> >  arch/arm/kvm/arm.c                |   50 ++++-
> >  arch/arm64/include/asm/kvm_arm.h  |    2 +-
> >  arch/arm64/include/asm/kvm_asm.h  |   38 +++-
> >  arch/arm64/include/asm/kvm_host.h |    4 +-
> >  arch/arm64/include/uapi/asm/kvm.h |    7 +
> >  arch/arm64/kvm/sys_regs.c         |  443 +++++++++++++++++++++++++++++++++----
> >  7 files changed, 503 insertions(+), 49 deletions(-)
> >
> > diff --git a/Documentation/virtual/kvm/api.txt b/Documentation/virtual/kvm/api.txt
> > index a7926a9..b06c104 100644
> > --- a/Documentation/virtual/kvm/api.txt
> > +++ b/Documentation/virtual/kvm/api.txt
> > @@ -2561,6 +2561,14 @@ Possible features:
> >  	  Depends on KVM_CAP_ARM_EL1_32BIT (arm64 only).
> >  	- KVM_ARM_VCPU_PSCI_0_2: Emulate PSCI v0.2 for the CPU.
> >  	  Depends on KVM_CAP_ARM_PSCI_0_2.
> > +	- KVM_ARM_VCPU_NUM_BPTS: Number of supported h/w breakpoints
> > +	  This is a 4-bit value which defines the number of hardware
> > +	  breakpoints supported on the guest. If this is not specified, or
> > +	  is set to zero, the guest sees the value as is from the host.
> > +	- KVM_ARM_VCPU_NUM_WPTS: Number of supported h/w watchpoints
> > +	  This is a 4-bit value which defines the number of hardware
> > +	  watchpoints supported on the guest. If this is not specified, or
> > +	  is set to zero, the guest sees the value as is from the host.
> >
> >
> >  4.83 KVM_ARM_PREFERRED_TARGET
> > diff --git a/arch/arm/kvm/arm.c b/arch/arm/kvm/arm.c
> > index bc738d2..8907d37 100644
> > --- a/arch/arm/kvm/arm.c
> > +++ b/arch/arm/kvm/arm.c
> > @@ -696,6 +696,8 @@ static int kvm_vcpu_set_target(struct kvm_vcpu *vcpu,
> >  			       const struct kvm_vcpu_init *init)
> >  {
> >  	unsigned int i;
> > +	u64 aa64dfr;
> > +
> >  	int phys_target = kvm_target_cpu();
> >
> >  	if (init->target != phys_target)
> > @@ -708,6 +710,8 @@ static int kvm_vcpu_set_target(struct kvm_vcpu *vcpu,
> >  	if (vcpu->arch.target != -1 && vcpu->arch.target != init->target)
> >  		return -EINVAL;
> >
> > +	asm volatile("mrs %0, ID_AA64DFR0_EL1\n" : "=r" (aa64dfr));
> > +
> >  	/* -ENOENT for unknown features, -EINVAL for invalid combinations. */
> >  	for (i = 0; i < sizeof(init->features) * 8; i++) {
> >  		bool set = (init->features[i / 32] & (1 << (i % 32)));
> > @@ -715,6 +719,50 @@ static int kvm_vcpu_set_target(struct kvm_vcpu *vcpu,
> >  		if (set && i >= KVM_VCPU_MAX_FEATURES)
> >  			return -ENOENT;
> >
> > +		if (i == KVM_ARM_VCPU_NUM_BPTS) {
> > +			int h_bpts;
> > +			int g_bpts;
> > +
> > +			h_bpts = ((aa64dfr >> 12) & 0xf) + 1;
> > +			g_bpts = (init->features[KVM_ARM_VCPU_BPTS_FEATURES_IDX] &
> > +					KVM_ARM_VCPU_BPTS_MASK) >> KVM_ARM_VCPU_NUM_BPTS;
> > +
> > +			/*
> > +			 * We ensure that the host can support the requested
> > +			 * number of hardware breakpoints.
> > +			 */
> > +			if (g_bpts > h_bpts)
> > +				return -EINVAL;
> > +
> This may not work. Assuming the number of source host hardware
> breakpoints is 15 and userspace sets g_bpts to 15 as well, it's ok to
> create the VM on the source host. But if the number of destination host
> hardware breakpoints is less than 15 (e.g. 8), this will return -EINVAL,
> the VM will fail to be created on the destination host, and migration fails.
>
> (P.S. I'm considering the guest PMU for the cross-cpu case, so I have
> taken a look at this patch)

We basically want to avoid migrating a guest to a host which lacks the
necessary support in the hardware. Consider a case where there are
different platforms (with different CPU implementation capabilities) in a
cluster, i.e. a few platforms support 2 h/w breakpoints/watchpoints, some
platforms support 4 h/w breakpoints/watchpoints, etc. In this case the
least common denominator of these implementation capabilities should be
determined before starting a VM. So in the given scenario we would
configure all VMs with 2 h/w breakpoints/watchpoints, which avoids
crashing the guest after migration.

For now these patches cover h/w breakpoints and watchpoints but need to be
expanded to include PMU support.
--
Thanks,
Tushar

>
> > +			vcpu->arch.bpts = g_bpts;
> > +
> > +			i += 3;
> > +
> > +			continue;
> > +		}
> > +
> > +		if (i == KVM_ARM_VCPU_NUM_WPTS) {
> > +			int h_wpts;
> > +			int g_wpts;
> > +
> > +			h_wpts = ((aa64dfr >> 20) & 0xf) + 1;
> > +			g_wpts = (init->features[KVM_ARM_VCPU_WPTS_FEATURES_IDX] &
> > +					KVM_ARM_VCPU_WPTS_MASK) >> KVM_ARM_VCPU_NUM_WPTS;
> > +
> > +			/*
> > +			 * We ensure that the host can support the requested
> > +			 * number of hardware watchpoints.
> > +			 */
> > +			if (g_wpts > h_wpts)
> > +				return -EINVAL;
> > +
> > +			vcpu->arch.wpts = g_wpts;
> > +
> > +			i += 3;
> > +
> > +			continue;
> > +		}
> > +
> >  		/*
> >  		 * Secondary and subsequent calls to KVM_ARM_VCPU_INIT must
> >  		 * use the same feature set.
> > @@ -727,7 +775,7 @@ static int kvm_vcpu_set_target(struct kvm_vcpu *vcpu,
> >  			set_bit(i, vcpu->arch.features);
> >  	}
> >
> > -	vcpu->arch.target = phys_target;
> > +	vcpu->arch.target = init->target;
> >
> >  	/* Now we know what it is, we can reset it. */
> >  	return kvm_reset_vcpu(vcpu);
> > diff --git a/arch/arm64/include/asm/kvm_arm.h b/arch/arm64/include/asm/kvm_arm.h
> > index ac6fafb..3b67051 100644
> > --- a/arch/arm64/include/asm/kvm_arm.h
> > +++ b/arch/arm64/include/asm/kvm_arm.h
> > @@ -78,7 +78,7 @@
> >   */
> >  #define HCR_GUEST_FLAGS (HCR_TSC | HCR_TSW | HCR_TWE | HCR_TWI | HCR_VM | \
> >  			 HCR_TVM | HCR_BSU_IS | HCR_FB | HCR_TAC | \
> > -			 HCR_AMO | HCR_SWIO | HCR_TIDCP | HCR_RW)
> > +			 HCR_AMO | HCR_SWIO | HCR_TIDCP | HCR_RW | HCR_TID3)
> >  #define HCR_VIRT_EXCP_MASK (HCR_VA | HCR_VI | HCR_VF)
> >  #define HCR_INT_OVERRIDE   (HCR_FMO | HCR_IMO)
> >
> > diff --git a/arch/arm64/include/asm/kvm_asm.h b/arch/arm64/include/asm/kvm_asm.h
> > index c1d5bde..087d104 100644
> > --- a/arch/arm64/include/asm/kvm_asm.h
> > +++ b/arch/arm64/include/asm/kvm_asm.h
> > @@ -56,15 +56,39 @@
> >  #define DBGWVR15_EL1	86
> >  #define MDCCINT_EL1	87	/* Monitor Debug Comms Channel Interrupt Enable Reg */
> >  #define MIDR_EL1	88	/* Main ID Register */
> > +#define ID_AA64MMFR0_EL1	89	/* AArch64 Memory Model Feature Register 0 */
> > +#define ID_AA64MMFR1_EL1	90	/* AArch64 Memory Model Feature Register 1 */
> > +#define MVFR0_EL1	91	/* AArch32 Media and VFP Feature Register 0 */
> > +#define MVFR1_EL1	92	/* AArch32 Media and VFP Feature Register 1 */
> > +#define MVFR2_EL1	93	/* AArch32 Media and VFP Feature Register 2 */
> > +#define ID_AA64PFR0_EL1	94	/* AArch64 Processor Feature Register 0 */
> > +#define ID_AA64PFR1_EL1	95	/* AArch64 Processor Feature Register 1 */
> > +#define ID_AA64DFR0_EL1	96	/* AArch64 Debug Feature Register 0 */
> > +#define ID_AA64DFR1_EL1	97	/* AArch64 Debug Feature Register 1 */
> > +#define ID_AA64ISAR0_EL1	98	/* AArch64 Instruction Set Attribute Register 0 */
> > +#define ID_AA64ISAR1_EL1	99	/* AArch64 Instruction Set Attribute Register 1 */
> > +#define ID_PFR0_EL1	100	/* AArch32 Processor Feature Register 0 */
> > +#define ID_PFR1_EL1	101	/* AArch32 Processor Feature Register 1 */
> > +#define ID_DFR0_EL1	102	/* AArch32 Debug Feature Register 0 */
> > +#define ID_ISAR0_EL1	103	/* AArch32 Instruction Set Attribute Register 0 */
> > +#define ID_ISAR1_EL1	104	/* AArch32 Instruction Set Attribute Register 1 */
> > +#define ID_ISAR2_EL1	105	/* AArch32 Instruction Set Attribute Register 2 */
> > +#define ID_ISAR3_EL1	106	/* AArch32 Instruction Set Attribute Register 3 */
> > +#define ID_ISAR4_EL1	107	/* AArch32 Instruction Set Attribute Register 4 */
> > +#define ID_ISAR5_EL1	108	/* AArch32 Instruction Set Attribute Register 5 */
> > +#define ID_MMFR0_EL1	109	/* AArch32 Memory Model Feature Register 0 */
> > +#define ID_MMFR1_EL1	110	/* AArch32 Memory Model Feature Register 1 */
> > +#define ID_MMFR2_EL1	111	/* AArch32 Memory Model Feature Register 2 */
> > +#define ID_MMFR3_EL1	112	/* AArch32 Memory Model Feature Register 3 */
> >
> >  /* 32bit specific registers. Keep them at the end of the range */
> > -#define	DACR32_EL2	89	/* Domain Access Control Register */
> > -#define	IFSR32_EL2	90	/* Instruction Fault Status Register */
> > -#define	FPEXC32_EL2	91	/* Floating-Point Exception Control Register */
> > -#define	DBGVCR32_EL2	92	/* Debug Vector Catch Register */
> > -#define	TEECR32_EL1	93	/* ThumbEE Configuration Register */
> > -#define	TEEHBR32_EL1	94	/* ThumbEE Handler Base Register */
> > -#define	NR_SYS_REGS	95
> > +#define	DACR32_EL2	113	/* Domain Access Control Register */
> > +#define	IFSR32_EL2	114	/* Instruction Fault Status Register */
> > +#define	FPEXC32_EL2	115	/* Floating-Point Exception Control Register */
> > +#define	DBGVCR32_EL2	116	/* Debug Vector Catch Register */
> > +#define	TEECR32_EL1	117	/* ThumbEE Configuration Register */
> > +#define	TEEHBR32_EL1	118	/* ThumbEE Handler Base Register */
> > +#define	NR_SYS_REGS	119
> >
> >  /* 32bit mapping */
> >  #define c0_MIDR		(MIDR_EL1 * 2)	/* Main ID Register */
> > diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
> > index 2709db2..c780227 100644
> > --- a/arch/arm64/include/asm/kvm_host.h
> > +++ b/arch/arm64/include/asm/kvm_host.h
> > @@ -43,7 +43,7 @@
> >  #include <kvm/arm_vgic.h>
> >  #include <kvm/arm_arch_timer.h>
> >
> > -#define KVM_VCPU_MAX_FEATURES 3
> > +#define KVM_VCPU_MAX_FEATURES 12
> >
> >  int __attribute_const__ kvm_target_cpu(void);
> >  int kvm_reset_vcpu(struct kvm_vcpu *vcpu);
> > @@ -137,6 +137,8 @@ struct kvm_vcpu_arch {
> >  	/* Target CPU and feature flags */
> >  	int target;
> >  	DECLARE_BITMAP(features, KVM_VCPU_MAX_FEATURES);
> > +	u32 bpts;
> > +	u32 wpts;
> >
> >  	/* Detect first run of a vcpu */
> >  	bool has_run_once;
> > diff --git a/arch/arm64/include/uapi/asm/kvm.h b/arch/arm64/include/uapi/asm/kvm.h
> > index d268320..94d1fc9 100644
> > --- a/arch/arm64/include/uapi/asm/kvm.h
> > +++ b/arch/arm64/include/uapi/asm/kvm.h
> > @@ -88,6 +88,13 @@ struct kvm_regs {
> >  #define KVM_ARM_VCPU_POWER_OFF		0 /* CPU is started in OFF state */
> >  #define KVM_ARM_VCPU_EL1_32BIT		1 /* CPU running a 32bit VM */
> >  #define KVM_ARM_VCPU_PSCI_0_2		2 /* CPU uses PSCI v0.2 */
> > +#define KVM_ARM_VCPU_NUM_BPTS		3 /* Number of breakpoints supported */
> > +#define KVM_ARM_VCPU_NUM_WPTS		7 /* Number of watchpoints supported */
> > +
> > +#define KVM_ARM_VCPU_BPTS_FEATURES_IDX	0
> > +#define KVM_ARM_VCPU_WPTS_FEATURES_IDX	0
> > +#define KVM_ARM_VCPU_BPTS_MASK		0x00000078
> > +#define KVM_ARM_VCPU_WPTS_MASK		0x00000780
> >
> >  struct kvm_vcpu_init {
> >  	__u32 target;
> > diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
> > index 7047292..273eecd 100644
> > --- a/arch/arm64/kvm/sys_regs.c
> > +++ b/arch/arm64/kvm/sys_regs.c
> > @@ -244,6 +244,330 @@ static void reset_mpidr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r)
> >  	vcpu_sys_reg(vcpu, MPIDR_EL1) = (1ULL << 31) | mpidr;
> >  }
> >
> > +static bool trap_tid3(struct kvm_vcpu *vcpu,
> > +		const struct sys_reg_params *p,
> > +		const struct sys_reg_desc *r)
> > +{
> > +	if (p->is_write) {
> > +		vcpu_sys_reg(vcpu, r->reg) = *vcpu_reg(vcpu, p->Rt);
> > +	} else {
> > +		*vcpu_reg(vcpu, p->Rt) = vcpu_sys_reg(vcpu, r->reg);
> > +	}
> > +
> > +	return true;
> > +}
> > +
> > +static bool trap_pfr(struct kvm_vcpu *vcpu,
> > +		const struct sys_reg_params *p,
> > +		const struct sys_reg_desc *r)
> > +{
> > +	return trap_tid3(vcpu, p, r);
> > +}
> > +
> > +static void reset_pfr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r)
> > +{
> > +	u64 pfr;
> > +	u32 idx;
> > +
> > +	switch (r->Op2) {
> > +	case 0:
> > +		asm volatile("mrs %0, ID_PFR0_EL1\n" : "=r" (pfr));
> > +		idx = ID_PFR0_EL1;
> > +		break;
> > +	case 1:
> > +		asm volatile("mrs %0, ID_PFR1_EL1\n" : "=r" (pfr));
> > +		idx = ID_PFR1_EL1;
> > +		break;
> > +
> > +	default:
> > +		BUG();
> > +	}
> > +
> > +	vcpu_sys_reg(vcpu, idx) = pfr;
> > +}
> > +
> > +static bool trap_dfr(struct kvm_vcpu *vcpu,
> > +		const struct sys_reg_params *p,
> > +		const struct sys_reg_desc *r)
> > +{
> > +	return trap_tid3(vcpu, p, r);
> > +}
> > +
> > +static void reset_dfr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r)
> > +{
> > +	u64 dfr;
> > +
> > +	asm volatile("mrs %0, ID_DFR0_EL1\n" : "=r" (dfr));
> > +	vcpu_sys_reg(vcpu, ID_DFR0_EL1) = dfr;
> > +}
> > +
> > +static bool trap_mmfr(struct kvm_vcpu *vcpu,
> > +		const struct sys_reg_params *p,
> > +		const struct sys_reg_desc *r)
> > +{
> > +	return trap_tid3(vcpu, p, r);
> > +}
> > +
> > +static void reset_mmfr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r)
> > +{
> > +	u64 mmfr;
> > +	u32 idx;
> > +
> > +	switch (r->CRm) {
> > +	case 1:
> > +		switch (r->Op2) {
> > +		case 4:
> > +			asm volatile("mrs %0, ID_MMFR0_EL1\n" : "=r" (mmfr));
> > +			idx = ID_MMFR0_EL1;
> > +			break;
> > +
> > +		case 5:
> > +			asm volatile("mrs %0, ID_MMFR1_EL1\n" : "=r" (mmfr));
> > +			idx = ID_MMFR1_EL1;
> > +			break;
> > +
> > +		case 6:
> > +			asm volatile("mrs %0, ID_MMFR2_EL1\n" : "=r" (mmfr));
> > +			idx = ID_MMFR2_EL1;
> > +			break;
> > +
> > +		case 7:
> > +			asm volatile("mrs %0, ID_MMFR3_EL1\n" : "=r" (mmfr));
> > +			idx = ID_MMFR3_EL1;
> > +			break;
> > +
> > +		default:
> > +			BUG();
> > +		}
> > +		break;
> > +
> > +#if 0
> > +	case 2:
> > +		asm volatile("mrs %0, ID_MMFR4_EL1\n" : "=r" (mmfr));
> > +		idx = ID_MMFR4_EL1;
> > +		break;
> > +#endif
> > +
> > +	default:
> > +		BUG();
> > +	}
> > +	vcpu_sys_reg(vcpu, idx) = mmfr;
> > +}
> > +
> > +static bool trap_isar(struct kvm_vcpu *vcpu,
> > +		const struct sys_reg_params *p,
> > +		const struct sys_reg_desc *r)
> > +{
> > +	return trap_tid3(vcpu, p, r);
> > +}
> > +
> > +static void reset_isar(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r)
> > +{
> > +	u64 isar;
> > +	u32 idx;
> > +
> > +	switch (r->Op2) {
> > +	case 0:
> > +		asm volatile("mrs %0, ID_ISAR0_EL1\n" : "=r" (isar));
> > +		idx = ID_ISAR0_EL1;
> > +		break;
> > +
> > +	case 1:
> > +		asm volatile("mrs %0, ID_ISAR1_EL1\n" : "=r" (isar));
> > +		idx = ID_ISAR1_EL1;
> > +		break;
> > +
> > +	case 2:
> > +		asm volatile("mrs %0, ID_ISAR2_EL1\n" : "=r" (isar));
> > +		idx = ID_ISAR2_EL1;
> > +		break;
> > +
> > +	case 3:
> > +		asm volatile("mrs %0, ID_ISAR3_EL1\n" : "=r" (isar));
> > +		idx = ID_ISAR3_EL1;
> > +		break;
> > +
> > +	case 4:
> > +		asm volatile("mrs %0, ID_ISAR4_EL1\n" : "=r" (isar));
> > +		idx = ID_ISAR4_EL1;
> > +		break;
> > +
> > +	case 5:
> > +		asm volatile("mrs %0, ID_ISAR5_EL1\n" : "=r" (isar));
> > +		idx = ID_ISAR5_EL1;
> > +		break;
> > +
> > +	default:
> > +		BUG();
> > +	}
> > +	vcpu_sys_reg(vcpu, idx) = isar;
> > +}
> > +
> > +static bool trap_mvfr(struct kvm_vcpu *vcpu,
> > +		const struct sys_reg_params *p,
> > +		const struct sys_reg_desc *r)
> > +{
> > +	return trap_tid3(vcpu, p, r);
> > +}
> > +
> > +static void reset_mvfr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r)
> > +{
> > +	u64 mvfr;
> > +	u32 idx;
> > +
> > +	switch (r->Op2) {
> > +	case 0:
> > +		asm volatile("mrs %0, MVFR0_EL1\n" : "=r" (mvfr));
> > +		idx = MVFR0_EL1;
> > +		break;
> > +	case 1:
> > +		asm volatile("mrs %0, MVFR1_EL1\n" : "=r" (mvfr));
> > +		idx = MVFR1_EL1;
> > +		break;
> > +
> > +	case 2:
> > +		asm volatile("mrs %0, MVFR2_EL1\n" : "=r" (mvfr));
> > +		idx = MVFR2_EL1;
> > +		break;
> > +
> > +	default:
> > +		BUG();
> > +	}
> > +
> > +	vcpu_sys_reg(vcpu, idx) = mvfr;
> > +}
> > +
> > +static bool trap_aa64pfr(struct kvm_vcpu *vcpu,
> > +		const struct sys_reg_params *p,
> > +		const struct sys_reg_desc *r)
> > +{
> > +	return trap_tid3(vcpu, p, r);
> > +}
> > +
> > +static void reset_aa64pfr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r)
> > +{
> > +	u64 aa64pfr;
> > +	u32 idx;
> > +
> > +	switch (r->Op2) {
> > +	case 0:
> > +		asm volatile("mrs %0, ID_AA64PFR0_EL1\n" : "=r" (aa64pfr));
> > +		idx = ID_AA64PFR0_EL1;
> > +		break;
> > +	case 1:
> > +		asm volatile("mrs %0, ID_AA64PFR1_EL1\n" : "=r" (aa64pfr));
> > +		idx = ID_AA64PFR1_EL1;
> > +		break;
> > +
> > +	default:
> > +		BUG();
> > +	}
> > +
> > +	vcpu_sys_reg(vcpu, idx) = aa64pfr;
> > +}
> > +
> > +static bool trap_aa64dfr(struct kvm_vcpu *vcpu,
> > +		const struct sys_reg_params *p,
> > +		const struct sys_reg_desc *r)
> > +{
> > +	return trap_tid3(vcpu, p, r);
> > +}
> > +
> > +static void reset_aa64dfr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r)
> > +{
> > +	u64 aa64dfr;
> > +	u32 idx;
> > +	u32 bpts;
> > +	u32 wpts;
> > +
> > +	/*
> > +	 * vcpu->arch.{bpts,wpts} == 0 means "not specified":
> > +	 * keep the host values in that case.
> > +	 */
> > +	bpts = vcpu->arch.bpts;
> > +	wpts = vcpu->arch.wpts;
> > +
> > +	switch (r->Op2) {
> > +	case 0:
> > +		asm volatile("mrs %0, ID_AA64DFR0_EL1\n" : "=r" (aa64dfr));
> > +		idx = ID_AA64DFR0_EL1;
> > +		/* The BRPs and WRPs fields encode the count minus one. */
> > +		if (bpts)
> > +			aa64dfr = (aa64dfr & ~(0xfULL << 12)) | ((u64)(bpts - 1) << 12);
> > +		if (wpts)
> > +			aa64dfr = (aa64dfr & ~(0xfULL << 20)) | ((u64)(wpts - 1) << 20);
> > +		break;
> > +	case 1:
> > +		asm volatile("mrs %0, ID_AA64DFR1_EL1\n" : "=r" (aa64dfr));
> > +		idx = ID_AA64DFR1_EL1;
> > +		break;
> > +
> > +	default:
> > +		BUG();
> > +	}
> > +
> > +	vcpu_sys_reg(vcpu, idx) = aa64dfr;
> > +}
> > +
> > +static bool trap_aa64isar(struct kvm_vcpu *vcpu,
> > +		const struct sys_reg_params *p,
> > +		const struct sys_reg_desc *r)
> > +{
> > +	return trap_tid3(vcpu, p, r);
> > +}
> > +
> > +static void reset_aa64isar(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r)
> > +{
> > +	u64 aa64isar;
> > +	u32 idx;
> > +
> > +	switch (r->Op2) {
> > +	case 0:
> > +		asm volatile("mrs %0, ID_AA64ISAR0_EL1\n" : "=r" (aa64isar));
> > +		idx = ID_AA64ISAR0_EL1;
> > +		break;
> > +
> > +	case 1:
> > +		asm volatile("mrs %0, ID_AA64ISAR1_EL1\n" : "=r" (aa64isar));
> > +		idx = ID_AA64ISAR1_EL1;
> > +		break;
> > +
> > +	default:
> > +		BUG();
> > +	}
> > +	vcpu_sys_reg(vcpu, idx) = aa64isar;
> > +}
> > +
> > +static bool trap_aa64mmfr(struct kvm_vcpu *vcpu,
> > +		const struct sys_reg_params *p,
> > +		const struct sys_reg_desc *r)
> > +{
> > +	return trap_tid3(vcpu, p, r);
> > +}
> > +
> > +static void reset_aa64mmfr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r)
> > +{
> > +	u64 aa64mmfr;
> > +	u32 idx;
> > +
> > +	switch (r->Op2) {
> > +	case 0:
> > +		asm volatile("mrs %0, ID_AA64MMFR0_EL1\n" : "=r" (aa64mmfr));
> > +		idx = ID_AA64MMFR0_EL1;
> > +		break;
> > +	case 1:
> > +		asm volatile("mrs %0, ID_AA64MMFR1_EL1\n" : "=r" (aa64mmfr));
> > +		idx = ID_AA64MMFR1_EL1;
> > +		break;
> > +
> > +	default:
> > +		BUG();
> > +	}
> > +
> > +	vcpu_sys_reg(vcpu, idx) = aa64mmfr;
> > +}
> > +
> > +
> >  /* Silly macro to expand the DBG{BCR,BVR,WVR,WCR}n_EL1 registers in one go */
> >  #define DBG_BCR_BVR_WCR_WVR_EL1(n)					\
> >  	/* DBGBVRn_EL1 */						\
> > @@ -364,6 +688,86 @@ static const struct sys_reg_desc sys_reg_descs[] = {
> >  	/* MPIDR_EL1 */
> >  	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0000), Op2(0b101),
> >  	  NULL, reset_mpidr, MPIDR_EL1 },
> > +
> > +	/* ID_PFR0_EL1 */
> > +	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0001), Op2(0b000),
> > +	  trap_pfr, reset_pfr, ID_PFR0_EL1 },
> > +	/* ID_PFR1_EL1 */
> > +	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0001), Op2(0b001),
> > +	  trap_pfr, reset_pfr, ID_PFR1_EL1 },
> > +	/* ID_DFR0_EL1 */
> > +	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0001), Op2(0b010),
> > +	  trap_dfr, reset_dfr, ID_DFR0_EL1 },
> > +	/* ID_MMFR0_EL1 */
> > +	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0001), Op2(0b100),
> > +	  trap_mmfr, reset_mmfr, ID_MMFR0_EL1 },
> > +	/* ID_MMFR1_EL1 */
> > +	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0001), Op2(0b101),
> > +	  trap_mmfr, reset_mmfr, ID_MMFR1_EL1 },
> > +	/* ID_MMFR2_EL1 */
> > +	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0001), Op2(0b110),
> > +	  trap_mmfr, reset_mmfr, ID_MMFR2_EL1 },
> > +	/* ID_MMFR3_EL1 */
> > +	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0001), Op2(0b111),
> > +	  trap_mmfr, reset_mmfr, ID_MMFR3_EL1 },
> > +
> > +	/* ID_ISAR0_EL1 */
> > +	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0010), Op2(0b000),
> > +	  trap_isar, reset_isar, ID_ISAR0_EL1 },
> > +	/* ID_ISAR1_EL1 */
> > +	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0010), Op2(0b001),
> > +	  trap_isar, reset_isar, ID_ISAR1_EL1 },
> > +	/* ID_ISAR2_EL1 */
> > +	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0010), Op2(0b010),
> > +	  trap_isar, reset_isar, ID_ISAR2_EL1 },
> > +	/* ID_ISAR3_EL1 */
> > +	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0010), Op2(0b011),
> > +	  trap_isar, reset_isar, ID_ISAR3_EL1 },
> > +	/* ID_ISAR4_EL1 */
> > +	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0010), Op2(0b100),
> > +	  trap_isar, reset_isar, ID_ISAR4_EL1 },
> > +	/* ID_ISAR5_EL1 */
> > +	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0010), Op2(0b101),
> > +	  trap_isar, reset_isar, ID_ISAR5_EL1 },
> > +
> > +	/* MVFR0_EL1 */
> > +	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0011), Op2(0b000),
> > +	  trap_mvfr, reset_mvfr, MVFR0_EL1 },
> > +	/* MVFR1_EL1 */
> > +	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0011), Op2(0b001),
> > +	  trap_mvfr, reset_mvfr, MVFR1_EL1 },
> > +	/* MVFR2_EL1 */
> > +	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0011), Op2(0b010),
> > +	  trap_mvfr, reset_mvfr, MVFR2_EL1 },
> > +
> > +	/* ID_AA64PFR0_EL1 */
> > +	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0100), Op2(0b000),
> > +	  trap_aa64pfr, reset_aa64pfr, ID_AA64PFR0_EL1 },
> > +	/* ID_AA64PFR1_EL1 */
> > +	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0100), Op2(0b001),
> > +	  trap_aa64pfr, reset_aa64pfr, ID_AA64PFR1_EL1 },
> > +
> > +	/* ID_AA64DFR0_EL1 */
> > +	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0101), Op2(0b000),
> > +	  trap_aa64dfr, reset_aa64dfr, ID_AA64DFR0_EL1 },
> > +	/* ID_AA64DFR1_EL1 */
> > +	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0101), Op2(0b001),
> > +	  trap_aa64dfr, reset_aa64dfr, ID_AA64DFR1_EL1 },
> > +
> > +	/* ID_AA64ISAR0_EL1 */
> > +	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0110), Op2(0b000),
> > +	  trap_aa64isar, reset_aa64isar, ID_AA64ISAR0_EL1 },
> > +	/* ID_AA64ISAR1_EL1 */
> > +	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0110), Op2(0b001),
> > +	  trap_aa64isar, reset_aa64isar, ID_AA64ISAR1_EL1 },
> > +
> > +	/* ID_AA64MMFR0_EL1 */
> > +	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0111), Op2(0b000),
> > +	  trap_aa64mmfr, reset_aa64mmfr, ID_AA64MMFR0_EL1 },
> > +	/* ID_AA64MMFR1_EL1 */
> > +	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0111), Op2(0b001),
> > +	  trap_aa64mmfr, reset_aa64mmfr, ID_AA64MMFR1_EL1 },
> > +
> >  	/* SCTLR_EL1 */
> >  	{ Op0(0b11), Op1(0b000), CRn(0b0001), CRm(0b0000), Op2(0b000),
> >  	  access_vm_reg, reset_val, SCTLR_EL1, 0x00C50078 },
> > @@ -1104,20 +1508,7 @@ static const struct sys_reg_desc *index_to_sys_reg_desc(struct kvm_vcpu *vcpu,
> >
> >  FUNCTION_INVARIANT(ctr_el0)
> >  FUNCTION_INVARIANT(revidr_el1)
> > -FUNCTION_INVARIANT(id_pfr0_el1)
> > -FUNCTION_INVARIANT(id_pfr1_el1)
> > -FUNCTION_INVARIANT(id_dfr0_el1)
> >  FUNCTION_INVARIANT(id_afr0_el1)
> > -FUNCTION_INVARIANT(id_mmfr0_el1)
> > -FUNCTION_INVARIANT(id_mmfr1_el1)
> > -FUNCTION_INVARIANT(id_mmfr2_el1)
> > -FUNCTION_INVARIANT(id_mmfr3_el1)
> > -FUNCTION_INVARIANT(id_isar0_el1)
> > -FUNCTION_INVARIANT(id_isar1_el1)
> > -FUNCTION_INVARIANT(id_isar2_el1)
> > -FUNCTION_INVARIANT(id_isar3_el1)
> > -FUNCTION_INVARIANT(id_isar4_el1)
> > -FUNCTION_INVARIANT(id_isar5_el1)
> >  FUNCTION_INVARIANT(clidr_el1)
> >  FUNCTION_INVARIANT(aidr_el1)
> >
> > @@ -1125,34 +1516,8 @@ FUNCTION_INVARIANT(aidr_el1)
> >  static struct sys_reg_desc invariant_sys_regs[] = {
> >  	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0000), Op2(0b110),
> >  	  NULL, get_revidr_el1 },
> > -	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0001), Op2(0b000),
> > -	  NULL, get_id_pfr0_el1 },
> > -	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0001), Op2(0b001),
> > -	  NULL, get_id_pfr1_el1 },
> > -	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0001), Op2(0b010),
> > -	  NULL, get_id_dfr0_el1 },
> >  	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0001), Op2(0b011),
> >  	  NULL, get_id_afr0_el1 },
> > -	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0001), Op2(0b100),
> > -	  NULL, get_id_mmfr0_el1 },
> > -	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0001), Op2(0b101),
> > -	  NULL, get_id_mmfr1_el1 },
> > -	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0001), Op2(0b110),
> > -	  NULL, get_id_mmfr2_el1 },
> > -	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0001), Op2(0b111),
> > -	  NULL, get_id_mmfr3_el1 },
> > -	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0010), Op2(0b000),
> > -	  NULL, get_id_isar0_el1 },
> > -	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0010), Op2(0b001),
> > -	  NULL, get_id_isar1_el1 },
> > -	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0010), Op2(0b010),
> > -	  NULL, get_id_isar2_el1 },
> > -	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0010), Op2(0b011),
> > -	  NULL, get_id_isar3_el1 },
> > -	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0010), Op2(0b100),
> > -	  NULL, get_id_isar4_el1 },
> > -	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0010), Op2(0b101),
> > -	  NULL, get_id_isar5_el1 },
> >  	{ Op0(0b11), Op1(0b001), CRn(0b0000), CRm(0b0000), Op2(0b001),
> >  	  NULL, get_clidr_el1 },
> >  	{ Op0(0b11), Op1(0b001), CRn(0b0000), CRm(0b0000), Op2(0b111),
> >
>
> --
> Shannon
>

^ permalink raw reply	[flat|nested] 24+ messages in thread

* Re: [Qemu-devel] [PATCH RFC 2/4] arm64: kvm: enable trapping of read access to regs in TID3 group
  2015-09-15  7:18       ` Tushar Jagad
@ 2015-09-15  7:51         ` Shannon Zhao
  -1 siblings, 0 replies; 24+ messages in thread
From: Shannon Zhao @ 2015-09-15  7:51 UTC (permalink / raw)
  To: Tushar Jagad, Shannon Zhao
  Cc: peter.maydell, marc.zyngier, patches, qemu-devel,
	Huangpeng (Peter),
	linux-arm-kernel, kvmarm, christoffer.dall



On 2015/9/15 15:18, Tushar Jagad wrote:
> 
> Hi Shannon,
> 
> On Tue, Sep 15, 2015 at 12:23:57PM +0800, Shannon Zhao wrote:
>>
>>
>> On 2015/9/9 16:38, Tushar Jagad wrote:
>>> This patch modifies the HCR_GUEST_FLAGS to enable trapping of
>>> non secure read to registers under the HCR_EL2.TID3 group to EL2.
>>>
>>> We emulate the accesses to capability registers which list the number of
>>> breakpoints, watchpoints, etc. These values are provided by the user when
>>> starting the VM. The emulated values are constructed at runtime from the
>>> trap handler.
>>>
>>> Signed-off-by: Tushar Jagad <tushar.jagad@linaro.org>
>>> ---
>>>  Documentation/virtual/kvm/api.txt |    8 +
>>>  arch/arm/kvm/arm.c                |   50 ++++-
>>>  arch/arm64/include/asm/kvm_arm.h  |    2 +-
>>>  arch/arm64/include/asm/kvm_asm.h  |   38 +++-
>>>  arch/arm64/include/asm/kvm_host.h |    4 +-
>>>  arch/arm64/include/uapi/asm/kvm.h |    7 +
>>>  arch/arm64/kvm/sys_regs.c         |  443 +++++++++++++++++++++++++++++++++----
>>>  7 files changed, 503 insertions(+), 49 deletions(-)
>>>
>>> diff --git a/Documentation/virtual/kvm/api.txt b/Documentation/virtual/kvm/api.txt
>>> index a7926a9..b06c104 100644
>>> --- a/Documentation/virtual/kvm/api.txt
>>> +++ b/Documentation/virtual/kvm/api.txt
>>> @@ -2561,6 +2561,14 @@ Possible features:
>>>  	  Depends on KVM_CAP_ARM_EL1_32BIT (arm64 only).
>>>  	- KVM_ARM_VCPU_PSCI_0_2: Emulate PSCI v0.2 for the CPU.
>>>  	  Depends on KVM_CAP_ARM_PSCI_0_2.
>>> +	- KVM_ARM_VCPU_NUM_BPTS: Number of supported h/w breakpoints
>>> +	  This is a 4-bit value which defines the number of hardware
>>> +	  breakpoints supported by the guest. If this is not specified
>>> +	  or is set to zero, the guest sees the host's value as is.
>>> +	- KVM_ARM_VCPU_NUM_WPTS: Number of supported h/w watchpoints
>>> +	  This is a 4-bit value which defines the number of hardware
>>> +	  watchpoints supported by the guest. If this is not specified
>>> +	  or is set to zero, the guest sees the host's value as is.
>>>
>>>
>>>  4.83 KVM_ARM_PREFERRED_TARGET
>>> diff --git a/arch/arm/kvm/arm.c b/arch/arm/kvm/arm.c
>>> index bc738d2..8907d37 100644
>>> --- a/arch/arm/kvm/arm.c
>>> +++ b/arch/arm/kvm/arm.c
>>> @@ -696,6 +696,8 @@ static int kvm_vcpu_set_target(struct kvm_vcpu *vcpu,
>>>  			       const struct kvm_vcpu_init *init)
>>>  {
>>>  	unsigned int i;
>>> +	u64 aa64dfr;
>>> +
>>>  	int phys_target = kvm_target_cpu();
>>>
>>>  	if (init->target != phys_target)
>>> @@ -708,6 +710,8 @@ static int kvm_vcpu_set_target(struct kvm_vcpu *vcpu,
>>>  	if (vcpu->arch.target != -1 && vcpu->arch.target != init->target)
>>>  		return -EINVAL;
>>>
>>> +	asm volatile("mrs %0, ID_AA64DFR0_EL1\n" : "=r" (aa64dfr));
>>> +
>>>  	/* -ENOENT for unknown features, -EINVAL for invalid combinations. */
>>>  	for (i = 0; i < sizeof(init->features) * 8; i++) {
>>>  		bool set = (init->features[i / 32] & (1 << (i % 32)));
>>> @@ -715,6 +719,50 @@ static int kvm_vcpu_set_target(struct kvm_vcpu *vcpu,
>>>  		if (set && i >= KVM_VCPU_MAX_FEATURES)
>>>  			return -ENOENT;
>>>
>>> +		if (i == KVM_ARM_VCPU_NUM_BPTS) {
>>> +			int h_bpts;
>>> +			int g_bpts;
>>> +
>>> +			h_bpts = ((aa64dfr >> 12) & 0xf) + 1;
>>> +			g_bpts = (init->features[KVM_ARM_VCPU_BPTS_FEATURES_IDX] &
>>> +					KVM_ARM_VCPU_BPTS_MASK) >> KVM_ARM_VCPU_NUM_BPTS;
>>> +
>>> +			/*
>>> +			 * We ensure that the host can support the requested
>>> +			 * number of hardware breakpoints.
>>> +			 */
>>> +			if (g_bpts > h_bpts)
>>> +				return -EINVAL;
>>> +
>> This may not work. Assuming that the number of source host hardware
>> breakpoints is 15 and userspace sets g_bpts to 15 as well, it's ok to
>> create the VM on the source host. But if the number of destination host
>> hardware breakpoints is less than 15 (e.g. 8), this will return -EINVAL,
>> VM creation on the destination host will fail, and the migration fails.
>>
>> (P.S. I'm considering the guest PMU for the cross-cpu case, so I have
>> taken a look at this patch)
> 
> We basically want to avoid migrating a guest to a host which lacks the
> necessary support in the hardware. Consider a case where there are
> different platforms (with different CPU implementation capabilities) in a
> cluster, i.e. a few platforms support 2 h/w breakpoints/watchpoints, some
> platforms support 4 h/w breakpoints/watchpoints, etc. In this case the
> least common denominator of these implementation capabilities should be
> determined before starting a VM. So in the given scenario we would
> configure all VMs with 2 h/w breakpoints/watchpoints, which avoids
> crashing the guest after migration.
> 

Oh, I see. Using the minimum number of hardware bpts/wpts across all the
hosts.

> For now these patches cover h/w breakpoints and h/w watchpoints, but they
> need to be expanded to include PMU support.

Yeah, I've been thinking about how to do it. It could work the same way
you do it here, i.e. using cpu features to allow userspace to set the
number of PMU counters and storing it at vcpu->arch.pmcs. Then when
resetting vcpus, sync vcpu->arch.pmcs into the emulated PMCR_EL0.N
register bits in reset_pmcr, and if vcpu->arch.pmcs == 0, use the host
value. A rough sketch:
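
(Illustrative only: vcpu->arch.pmcs and the PMCR_EL0 sys-reg index are
assumed here, not something this series defines. PMCR_EL0.N lives in
bits [15:11] and directly holds the number of event counters.)

static void reset_pmcr(struct kvm_vcpu *vcpu, const struct sys_reg_desc *r)
{
	u64 pmcr;

	/* Start from the host value ... */
	asm volatile("mrs %0, pmcr_el0" : "=r" (pmcr));
	/* ... and override N with the userspace-requested count, if any. */
	if (vcpu->arch.pmcs)
		pmcr = (pmcr & ~(0x1fULL << 11)) |
		       (((u64)vcpu->arch.pmcs & 0x1f) << 11);
	vcpu_sys_reg(vcpu, PMCR_EL0) = pmcr;
}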

What do you think about this?

You can have a look at the guest PMU patchset from [1]
"[PATCH v2 04/22] KVM: ARM64: Add reset and access handlers for PMCR_EL0
register"

[1]https://lists.cs.columbia.edu/pipermail/kvmarm/2015-September/016393.html

-- 
Shannon

^ permalink raw reply	[flat|nested] 24+ messages in thread

end of thread, other threads:[~2015-09-15  7:52 UTC | newest]

Thread overview: 24+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2015-09-09  8:38 [Qemu-devel] [PATCH RFC 0/4] arm64: cross cpu support Tushar Jagad
2015-09-09  8:38 ` Tushar Jagad
2015-09-09  8:38 ` Tushar Jagad
2015-09-09  8:38 ` [Qemu-devel] [PATCH RFC 1/4] arm64: KVM: add MIDR_EL1 switching Tushar Jagad
2015-09-09  8:38   ` Tushar Jagad
2015-09-09  8:38   ` Tushar Jagad
2015-09-09  8:38 ` [Qemu-devel] [PATCH RFC 2/4] arm64: kvm: enable trapping of read access to regs in TID3 group Tushar Jagad
2015-09-09  8:38   ` Tushar Jagad
2015-09-09  8:38   ` Tushar Jagad
2015-09-15  4:23   ` [Qemu-devel] " Shannon Zhao
2015-09-15  4:23     ` Shannon Zhao
2015-09-15  4:23     ` Shannon Zhao
2015-09-15  7:18     ` [Qemu-devel] " Tushar Jagad
2015-09-15  7:18       ` Tushar Jagad
2015-09-15  7:18       ` Tushar Jagad
2015-09-15  7:51       ` Shannon Zhao
2015-09-15  7:51         ` Shannon Zhao
2015-09-15  7:51         ` Shannon Zhao
2015-09-09  8:38 ` [Qemu-devel] [PATCH RFC 3/4] arm64: kvm: Setup MIDR as per target vcpu Tushar Jagad
2015-09-09  8:38   ` Tushar Jagad
2015-09-09  8:38   ` Tushar Jagad
2015-09-09  8:38 ` [Qemu-devel] [PATCH RFC 4/4] arm/arm64: kvm: Disable comparision of cpu and vcpu target Tushar Jagad
2015-09-09  8:38   ` Tushar Jagad
2015-09-09  8:38   ` Tushar Jagad
