* [PATCH RFC 0/7] ARM64: KVM: Cross type vCPU support
@ 2017-01-16  9:33 Shannon Zhao
  2017-01-16  9:33 ` [PATCH RFC 1/7] ARM64: KVM: Add the definition of ID registers Shannon Zhao
                   ` (6 more replies)
  0 siblings, 7 replies; 30+ messages in thread
From: Shannon Zhao @ 2017-01-16  9:33 UTC (permalink / raw)
  To: kvmarm; +Cc: marc.zyngier, qemu-arm, wu.wubin

From: Shannon Zhao <shannon.zhao@linaro.org>

This patch set adds support for cross type vCPUs in KVM-ARM64. It
allows userspace to request a vCPU type different from the physical
one; KVM checks whether the physical CPUs can support that specific
vCPU type and, if so, traps the ID registers and returns the values
provided by userspace to the guest.

This patch set is not complete: CPU errata are not considered yet, and
currently it only checks whether the id_aa64mmfr0_el1 register value is
legal. It is meant as an example, and I would like feedback from folks
on whether this approach is right or proper.

You can test this patch set with QEMU using
-cpu cortex-a53/cortex-a57/generic/cortex-a72
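
For reference, a minimal sketch of the userspace flow these patches
assume is below. Error handling and the ordinary non-cross path are
simplified, vm_fd/vcpu_fd are assumed to be set up in the usual way
(it needs <sys/ioctl.h> plus the uapi headers carrying the new
definitions), KVM_CAP_ARM_CROSS_VCPU and KVM_ARM_VCPU_CROSS come from
patches 5 and 6, and the KVM_ARM_PREFERRED_TARGET behaviour is the one
described in patch 5:

static int setup_cross_vcpu(int vm_fd, int vcpu_fd, __u32 wanted_midr)
{
	struct kvm_vcpu_init probe = {}, init = {};

	if (!ioctl(vm_fd, KVM_CHECK_EXTENSION, KVM_CAP_ARM_CROSS_VCPU))
		return -1;	/* kernel without this series */

	/* Per patch 5 this reports the host MIDR when a cross type
	 * vCPU is being negotiated. */
	probe.features[KVM_ARM_VCPU_CROSS / 32] |=
		1U << (KVM_ARM_VCPU_CROSS % 32);
	if (ioctl(vm_fd, KVM_ARM_PREFERRED_TARGET, &probe) < 0)
		return -1;

	if (probe.target == wanted_midr)
		return 0;	/* host already matches; init the vCPU the
				 * usual non-cross way in the caller */

	/* Cross type: use the generic target, keep the feature bit, then
	 * program MIDR_EL1 and the other ID registers through
	 * KVM_SET_ONE_REG once the vCPU is initialized. */
	init.target = KVM_ARM_TARGET_GENERIC_V8;
	init.features[KVM_ARM_VCPU_CROSS / 32] |=
		1U << (KVM_ARM_VCPU_CROSS % 32);
	return ioctl(vcpu_fd, KVM_ARM_VCPU_INIT, &init);
}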

These patches can be fetched from:
https://git.linaro.org/people/shannon.zhao/linux-mainline.git cross_vcpu_rfc

The corresponding QEMU patches can be fetched from:
https://git.linaro.org/people/shannon.zhao/qemu.git cross_vcpu_rfc

Thanks,
Shannon

Shannon Zhao (7):
  ARM64: KVM: Add the definition of ID registers
  ARM64: KVM: Add reset handlers for all ID registers
  ARM64: KVM: Reset ID registers when creating the VCPUs
  ARM64: KVM: emulate accessing ID registers
  ARM64: KVM: Support cross type vCPU
  ARM64: KVM: Support heterogeneous system
  ARM64: KVM: Add user set handler for id_aa64mmfr0_el1

 arch/arm/kvm/arm.c                   |  36 ++++-
 arch/arm64/include/asm/kvm_coproc.h  |   1 +
 arch/arm64/include/asm/kvm_emulate.h |   3 +
 arch/arm64/include/asm/kvm_host.h    |  49 +++++-
 arch/arm64/include/uapi/asm/kvm.h    |   1 +
 arch/arm64/kvm/guest.c               |  18 ++-
 arch/arm64/kvm/hyp/sysreg-sr.c       |   2 +
 arch/arm64/kvm/sys_regs.c            | 290 +++++++++++++++++++++++------------
 include/uapi/linux/kvm.h             |   2 +
 9 files changed, 296 insertions(+), 106 deletions(-)

-- 
2.0.4


* [PATCH RFC 1/7] ARM64: KVM: Add the definition of ID registers
  2017-01-16  9:33 [PATCH RFC 0/7] ARM64: KVM: Cross type vCPU support Shannon Zhao
@ 2017-01-16  9:33 ` Shannon Zhao
  2017-01-28 12:07   ` Andrew Jones
  2017-01-16  9:33 ` [PATCH RFC 2/7] ARM64: KVM: Add reset handlers for all " Shannon Zhao
                   ` (5 subsequent siblings)
  6 siblings, 1 reply; 30+ messages in thread
From: Shannon Zhao @ 2017-01-16  9:33 UTC (permalink / raw)
  To: kvmarm; +Cc: marc.zyngier, qemu-arm, wu.wubin

From: Shannon Zhao <shannon.zhao@linaro.org>

Add a new member to kvm_cpu_context to save the values of the ID registers.

Signed-off-by: Shannon Zhao <shannon.zhao@linaro.org>
---
 arch/arm64/include/asm/kvm_host.h | 46 +++++++++++++++++++++++++++++++++++++++
 1 file changed, 46 insertions(+)

diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index e505038..6034f92 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -187,12 +187,57 @@ enum vcpu_sysreg {
 
 #define NR_COPRO_REGS	(NR_SYS_REGS * 2)
 
+enum id_vcpu_sysreg {
+	MIDR_EL1,
+	/* ID group 1 registers */
+	REVIDR_EL1,
+	AIDR_EL1,
+
+	/* ID group 2 registers */
+	CTR_EL0,
+	CCSIDR_EL1,
+	CLIDR_EL1,
+
+	/* ID group 3 registers */
+	ID_PFR0_EL1,
+	ID_PFR1_EL1,
+	ID_DFR0_EL1,
+	ID_AFR0_EL1,
+	ID_MMFR0_EL1,
+	ID_MMFR1_EL1,
+	ID_MMFR2_EL1,
+	ID_MMFR3_EL1,
+	ID_ISAR0_EL1,
+	ID_ISAR1_EL1,
+	ID_ISAR2_EL1,
+	ID_ISAR3_EL1,
+	ID_ISAR4_EL1,
+	ID_ISAR5_EL1,
+	MVFR0_EL1,
+	MVFR1_EL1,
+	MVFR2_EL1,
+	ID_AA64PFR0_EL1,
+	ID_AA64PFR1_EL1,
+	ID_AA64DFR0_EL1,
+	ID_AA64DFR1_EL1,
+	ID_AA64ISAR0_EL1,
+	ID_AA64ISAR1_EL1,
+	ID_AA64MMFR0_EL1,
+	ID_AA64MMFR1_EL1,
+	ID_AA64AFR0_EL1,
+	ID_AA64AFR1_EL1,
+	ID_MMFR4_EL1,
+
+	NR_ID_SYS_REGS
+};
+
 struct kvm_cpu_context {
 	struct kvm_regs	gp_regs;
 	union {
 		u64 sys_regs[NR_SYS_REGS];
 		u32 copro[NR_COPRO_REGS];
 	};
+	u64 id_sys_regs[NR_ID_SYS_REGS];
 };
 
 typedef struct kvm_cpu_context kvm_cpu_context_t;
@@ -277,6 +322,7 @@ struct kvm_vcpu_arch {
 
 #define vcpu_gp_regs(v)		(&(v)->arch.ctxt.gp_regs)
 #define vcpu_sys_reg(v,r)	((v)->arch.ctxt.sys_regs[(r)])
+#define vcpu_id_sys_reg(v,r)	((v)->arch.ctxt.id_sys_regs[(r)])
 /*
  * CP14 and CP15 live in the same array, as they are backed by the
  * same system registers.
-- 
2.0.4


* [PATCH RFC 2/7] ARM64: KVM: Add reset handlers for all ID registers
  2017-01-16  9:33 [PATCH RFC 0/7] ARM64: KVM: Cross type vCPU support Shannon Zhao
  2017-01-16  9:33 ` [PATCH RFC 1/7] ARM64: KVM: Add the definition of ID registers Shannon Zhao
@ 2017-01-16  9:33 ` Shannon Zhao
  2017-01-28 12:36   ` Andrew Jones
  2017-03-09 10:19   ` Christoffer Dall
  2017-01-16  9:33 ` [PATCH RFC 3/7] ARM64: KVM: Reset ID registers when creating the VCPUs Shannon Zhao
                   ` (4 subsequent siblings)
  6 siblings, 2 replies; 30+ messages in thread
From: Shannon Zhao @ 2017-01-16  9:33 UTC (permalink / raw)
  To: kvmarm; +Cc: marc.zyngier, qemu-arm, wu.wubin

From: Shannon Zhao <shannon.zhao@linaro.org>

Move invariant_sys_regs before emulate_sys_reg so that it can be used
later.

Signed-off-by: Shannon Zhao <shannon.zhao@linaro.org>
---
 arch/arm64/kvm/sys_regs.c | 193 ++++++++++++++++++++++++++++------------------
 1 file changed, 116 insertions(+), 77 deletions(-)

diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index 87e7e66..bf71eb4 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -1432,6 +1432,122 @@ static const struct sys_reg_desc cp15_64_regs[] = {
 	{ Op1( 1), CRn( 0), CRm( 2), Op2( 0), access_vm_reg, NULL, c2_TTBR1 },
 };
 
+/*
+ * These are the invariant sys_reg registers: we let the guest see the
+ * host versions of these, so they're part of the guest state.
+ *
+ * A future CPU may provide a mechanism to present different values to
+ * the guest, or a future kvm may trap them.
+ */
+
+#define FUNCTION_INVARIANT(reg)						\
+	static void get_##reg(struct kvm_vcpu *v,			\
+			      const struct sys_reg_desc *r)		\
+	{								\
+		((struct sys_reg_desc *)r)->val = read_sysreg(reg);	\
+	}
+
+FUNCTION_INVARIANT(midr_el1)
+FUNCTION_INVARIANT(ctr_el0)
+FUNCTION_INVARIANT(revidr_el1)
+FUNCTION_INVARIANT(id_pfr0_el1)
+FUNCTION_INVARIANT(id_pfr1_el1)
+FUNCTION_INVARIANT(id_dfr0_el1)
+FUNCTION_INVARIANT(id_afr0_el1)
+FUNCTION_INVARIANT(id_mmfr0_el1)
+FUNCTION_INVARIANT(id_mmfr1_el1)
+FUNCTION_INVARIANT(id_mmfr2_el1)
+FUNCTION_INVARIANT(id_mmfr3_el1)
+FUNCTION_INVARIANT(id_isar0_el1)
+FUNCTION_INVARIANT(id_isar1_el1)
+FUNCTION_INVARIANT(id_isar2_el1)
+FUNCTION_INVARIANT(id_isar3_el1)
+FUNCTION_INVARIANT(id_isar4_el1)
+FUNCTION_INVARIANT(id_isar5_el1)
+FUNCTION_INVARIANT(mvfr0_el1)
+FUNCTION_INVARIANT(mvfr1_el1)
+FUNCTION_INVARIANT(mvfr2_el1)
+FUNCTION_INVARIANT(id_aa64pfr0_el1)
+FUNCTION_INVARIANT(id_aa64pfr1_el1)
+FUNCTION_INVARIANT(id_aa64dfr0_el1)
+FUNCTION_INVARIANT(id_aa64dfr1_el1)
+FUNCTION_INVARIANT(id_aa64afr0_el1)
+FUNCTION_INVARIANT(id_aa64afr1_el1)
+FUNCTION_INVARIANT(id_aa64isar0_el1)
+FUNCTION_INVARIANT(id_aa64isar1_el1)
+FUNCTION_INVARIANT(id_aa64mmfr0_el1)
+FUNCTION_INVARIANT(id_aa64mmfr1_el1)
+FUNCTION_INVARIANT(clidr_el1)
+FUNCTION_INVARIANT(aidr_el1)
+
+/* ->val is filled in by kvm_sys_reg_table_init() */
+static struct sys_reg_desc invariant_sys_regs[] = {
+	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0000), Op2(0b000),
+	  NULL, get_midr_el1, MIDR_EL1 },
+	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0000), Op2(0b110),
+	  NULL, get_revidr_el1, REVIDR_EL1 },
+	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0001), Op2(0b000),
+	  NULL, get_id_pfr0_el1, ID_PFR0_EL1 },
+	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0001), Op2(0b001),
+	  NULL, get_id_pfr1_el1, ID_PFR1_EL1 },
+	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0001), Op2(0b010),
+	  NULL, get_id_dfr0_el1, ID_DFR0_EL1 },
+	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0001), Op2(0b011),
+	  NULL, get_id_afr0_el1, ID_AFR0_EL1 },
+	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0001), Op2(0b100),
+	  NULL, get_id_mmfr0_el1, ID_MMFR0_EL1 },
+	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0001), Op2(0b101),
+	  NULL, get_id_mmfr1_el1, ID_MMFR1_EL1 },
+	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0001), Op2(0b110),
+	  NULL, get_id_mmfr2_el1, ID_MMFR2_EL1 },
+	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0001), Op2(0b111),
+	  NULL, get_id_mmfr3_el1, ID_MMFR3_EL1 },
+	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0010), Op2(0b000),
+	  NULL, get_id_isar0_el1, ID_ISAR0_EL1 },
+	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0010), Op2(0b001),
+	  NULL, get_id_isar1_el1, ID_ISAR1_EL1 },
+	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0010), Op2(0b010),
+	  NULL, get_id_isar2_el1, ID_ISAR2_EL1 },
+	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0010), Op2(0b011),
+	  NULL, get_id_isar3_el1, ID_ISAR3_EL1 },
+	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0010), Op2(0b100),
+	  NULL, get_id_isar4_el1, ID_ISAR4_EL1 },
+	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0010), Op2(0b101),
+	  NULL, get_id_isar5_el1, ID_ISAR5_EL1 },
+	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0011), Op2(0b000),
+	  NULL, get_mvfr0_el1, MVFR0_EL1 },
+	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0011), Op2(0b001),
+	  NULL, get_mvfr1_el1, MVFR1_EL1 },
+	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0011), Op2(0b010),
+	  NULL, get_mvfr2_el1, MVFR2_EL1 },
+	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0100), Op2(0b000),
+	  NULL, get_id_aa64pfr0_el1, ID_AA64PFR0_EL1 },
+	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0100), Op2(0b001),
+	  NULL, get_id_aa64pfr1_el1, ID_AA64PFR1_EL1 },
+	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0101), Op2(0b000),
+	  NULL, get_id_aa64dfr0_el1, ID_AA64DFR0_EL1 },
+	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0101), Op2(0b001),
+	  NULL, get_id_aa64dfr1_el1, ID_AA64DFR1_EL1 },
+	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0101), Op2(0b100),
+	  NULL, get_id_aa64afr0_el1, ID_AA64AFR0_EL1 },
+	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0101), Op2(0b101),
+	  NULL, get_id_aa64afr1_el1, ID_AA64AFR1_EL1 },
+	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0110), Op2(0b000),
+	  NULL, get_id_aa64isar0_el1, ID_AA64ISAR0_EL1 },
+	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0110), Op2(0b001),
+	  NULL, get_id_aa64isar1_el1, ID_AA64ISAR1_EL1 },
+	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0111), Op2(0b000),
+	  NULL, get_id_aa64mmfr0_el1, ID_AA64MMFR0_EL1 },
+	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0111), Op2(0b001),
+	  NULL, get_id_aa64mmfr1_el1, ID_AA64MMFR1_EL1 },
+	{ Op0(0b11), Op1(0b001), CRn(0b0000), CRm(0b0000), Op2(0b001),
+	  NULL, get_clidr_el1, CLIDR_EL1 },
+	{ Op0(0b11), Op1(0b001), CRn(0b0000), CRm(0b0000), Op2(0b111),
+	  NULL, get_aidr_el1, AIDR_EL1 },
+	{ Op0(0b11), Op1(0b011), CRn(0b0000), CRm(0b0000), Op2(0b001),
+	  NULL, get_ctr_el0, CTR_EL0 },
+};
+
 /* Target specific emulation tables */
 static struct kvm_sys_reg_target_table *target_tables[KVM_ARM_NUM_TARGETS];
 
@@ -1822,83 +1938,6 @@ static const struct sys_reg_desc *index_to_sys_reg_desc(struct kvm_vcpu *vcpu,
 	return r;
 }
 
-/*
- * These are the invariant sys_reg registers: we let the guest see the
- * host versions of these, so they're part of the guest state.
- *
- * A future CPU may provide a mechanism to present different values to
- * the guest, or a future kvm may trap them.
- */
-
-#define FUNCTION_INVARIANT(reg)						\
-	static void get_##reg(struct kvm_vcpu *v,			\
-			      const struct sys_reg_desc *r)		\
-	{								\
-		((struct sys_reg_desc *)r)->val = read_sysreg(reg);	\
-	}
-
-FUNCTION_INVARIANT(midr_el1)
-FUNCTION_INVARIANT(ctr_el0)
-FUNCTION_INVARIANT(revidr_el1)
-FUNCTION_INVARIANT(id_pfr0_el1)
-FUNCTION_INVARIANT(id_pfr1_el1)
-FUNCTION_INVARIANT(id_dfr0_el1)
-FUNCTION_INVARIANT(id_afr0_el1)
-FUNCTION_INVARIANT(id_mmfr0_el1)
-FUNCTION_INVARIANT(id_mmfr1_el1)
-FUNCTION_INVARIANT(id_mmfr2_el1)
-FUNCTION_INVARIANT(id_mmfr3_el1)
-FUNCTION_INVARIANT(id_isar0_el1)
-FUNCTION_INVARIANT(id_isar1_el1)
-FUNCTION_INVARIANT(id_isar2_el1)
-FUNCTION_INVARIANT(id_isar3_el1)
-FUNCTION_INVARIANT(id_isar4_el1)
-FUNCTION_INVARIANT(id_isar5_el1)
-FUNCTION_INVARIANT(clidr_el1)
-FUNCTION_INVARIANT(aidr_el1)
-
-/* ->val is filled in by kvm_sys_reg_table_init() */
-static struct sys_reg_desc invariant_sys_regs[] = {
-	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0000), Op2(0b000),
-	  NULL, get_midr_el1 },
-	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0000), Op2(0b110),
-	  NULL, get_revidr_el1 },
-	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0001), Op2(0b000),
-	  NULL, get_id_pfr0_el1 },
-	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0001), Op2(0b001),
-	  NULL, get_id_pfr1_el1 },
-	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0001), Op2(0b010),
-	  NULL, get_id_dfr0_el1 },
-	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0001), Op2(0b011),
-	  NULL, get_id_afr0_el1 },
-	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0001), Op2(0b100),
-	  NULL, get_id_mmfr0_el1 },
-	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0001), Op2(0b101),
-	  NULL, get_id_mmfr1_el1 },
-	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0001), Op2(0b110),
-	  NULL, get_id_mmfr2_el1 },
-	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0001), Op2(0b111),
-	  NULL, get_id_mmfr3_el1 },
-	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0010), Op2(0b000),
-	  NULL, get_id_isar0_el1 },
-	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0010), Op2(0b001),
-	  NULL, get_id_isar1_el1 },
-	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0010), Op2(0b010),
-	  NULL, get_id_isar2_el1 },
-	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0010), Op2(0b011),
-	  NULL, get_id_isar3_el1 },
-	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0010), Op2(0b100),
-	  NULL, get_id_isar4_el1 },
-	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0010), Op2(0b101),
-	  NULL, get_id_isar5_el1 },
-	{ Op0(0b11), Op1(0b001), CRn(0b0000), CRm(0b0000), Op2(0b001),
-	  NULL, get_clidr_el1 },
-	{ Op0(0b11), Op1(0b001), CRn(0b0000), CRm(0b0000), Op2(0b111),
-	  NULL, get_aidr_el1 },
-	{ Op0(0b11), Op1(0b011), CRn(0b0000), CRm(0b0000), Op2(0b001),
-	  NULL, get_ctr_el0 },
-};
-
 static int reg_from_user(u64 *val, const void __user *uaddr, u64 id)
 {
 	if (copy_from_user(val, uaddr, KVM_REG_SIZE(id)) != 0)
-- 
2.0.4


* [PATCH RFC 3/7] ARM64: KVM: Reset ID registers when creating the VCPUs
  2017-01-16  9:33 [PATCH RFC 0/7] ARM64: KVM: Cross type vCPU support Shannon Zhao
  2017-01-16  9:33 ` [PATCH RFC 1/7] ARM64: KVM: Add the definition of ID registers Shannon Zhao
  2017-01-16  9:33 ` [PATCH RFC 2/7] ARM64: KVM: Add reset handlers for all " Shannon Zhao
@ 2017-01-16  9:33 ` Shannon Zhao
  2017-01-28 13:32   ` Andrew Jones
  2017-01-16  9:33 ` [PATCH RFC 4/7] ARM64: KVM: emulate accessing ID registers Shannon Zhao
                   ` (3 subsequent siblings)
  6 siblings, 1 reply; 30+ messages in thread
From: Shannon Zhao @ 2017-01-16  9:33 UTC (permalink / raw)
  To: kvmarm; +Cc: marc.zyngier, qemu-arm, wu.wubin

From: Shannon Zhao <shannon.zhao@linaro.org>

Reset ID registers when creating the VCPUs and store the values per
VCPU. Also modify the get_invariant_sys_reg and set_invariant_sys_reg
to get/set the ID register from vcpu context.

Signed-off-by: Shannon Zhao <shannon.zhao@linaro.org>
---
 arch/arm64/include/asm/kvm_coproc.h |  1 +
 arch/arm64/kvm/guest.c              |  1 +
 arch/arm64/kvm/sys_regs.c           | 58 ++++++++++++++++++-------------------
 3 files changed, 31 insertions(+), 29 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_coproc.h b/arch/arm64/include/asm/kvm_coproc.h
index 0b52377..0801b66 100644
--- a/arch/arm64/include/asm/kvm_coproc.h
+++ b/arch/arm64/include/asm/kvm_coproc.h
@@ -24,6 +24,7 @@
 #include <linux/kvm_host.h>
 
 void kvm_reset_sys_regs(struct kvm_vcpu *vcpu);
+void kvm_reset_id_sys_regs(struct kvm_vcpu *vcpu);
 
 struct kvm_sys_reg_table {
 	const struct sys_reg_desc *table;
diff --git a/arch/arm64/kvm/guest.c b/arch/arm64/kvm/guest.c
index b37446a..92abe2b 100644
--- a/arch/arm64/kvm/guest.c
+++ b/arch/arm64/kvm/guest.c
@@ -48,6 +48,7 @@ struct kvm_stats_debugfs_item debugfs_entries[] = {
 
 int kvm_arch_vcpu_setup(struct kvm_vcpu *vcpu)
 {
+	kvm_reset_id_sys_regs(vcpu);
 	return 0;
 }
 
diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index bf71eb4..7c5fa03 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -1440,11 +1440,11 @@ static const struct sys_reg_desc cp15_64_regs[] = {
  * the guest, or a future kvm may trap them.
  */
 
-#define FUNCTION_INVARIANT(reg)						\
-	static void get_##reg(struct kvm_vcpu *v,			\
-			      const struct sys_reg_desc *r)		\
+#define FUNCTION_INVARIANT(register)					\
+	static void get_##register(struct kvm_vcpu *v,			\
+				   const struct sys_reg_desc *r)	\
 	{								\
-		((struct sys_reg_desc *)r)->val = read_sysreg(reg);	\
+		vcpu_id_sys_reg(v, r->reg) = read_sysreg(register);	\
 	}
 
 FUNCTION_INVARIANT(midr_el1)
@@ -1480,7 +1480,6 @@ FUNCTION_INVARIANT(id_aa64mmfr1_el1)
 FUNCTION_INVARIANT(clidr_el1)
 FUNCTION_INVARIANT(aidr_el1)
 
-/* ->val is filled in by kvm_sys_reg_table_init() */
 static struct sys_reg_desc invariant_sys_regs[] = {
 	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0000), Op2(0b000),
 	  NULL, get_midr_el1, MIDR_EL1 },
@@ -1952,43 +1951,43 @@ static int reg_to_user(void __user *uaddr, const u64 *val, u64 id)
 	return 0;
 }
 
-static int get_invariant_sys_reg(u64 id, void __user *uaddr)
+static int get_invariant_sys_reg(struct kvm_vcpu *vcpu,
+				 const struct kvm_one_reg *reg)
 {
 	struct sys_reg_params params;
 	const struct sys_reg_desc *r;
+	void __user *uaddr = (void __user *)(unsigned long)reg->addr;
 
-	if (!index_to_params(id, &params))
+	if (!index_to_params(reg->id, &params))
 		return -ENOENT;
 
 	r = find_reg(&params, invariant_sys_regs, ARRAY_SIZE(invariant_sys_regs));
 	if (!r)
 		return -ENOENT;
 
-	return reg_to_user(uaddr, &r->val, id);
+	if (r->get_user)
+		return (r->get_user)(vcpu, r, reg, uaddr);
+
+	return reg_to_user(uaddr, &vcpu_id_sys_reg(vcpu, r->reg), reg->id);
 }
 
-static int set_invariant_sys_reg(u64 id, void __user *uaddr)
+static int set_invariant_sys_reg(struct kvm_vcpu *vcpu,
+				 const struct kvm_one_reg *reg)
 {
 	struct sys_reg_params params;
 	const struct sys_reg_desc *r;
-	int err;
-	u64 val = 0; /* Make sure high bits are 0 for 32-bit regs */
+	void __user *uaddr = (void __user *)(unsigned long)reg->addr;
 
-	if (!index_to_params(id, &params))
+	if (!index_to_params(reg->id, &params))
 		return -ENOENT;
 	r = find_reg(&params, invariant_sys_regs, ARRAY_SIZE(invariant_sys_regs));
 	if (!r)
 		return -ENOENT;
 
-	err = reg_from_user(&val, uaddr, id);
-	if (err)
-		return err;
-
-	/* This is what we mean by invariant: you can't change it. */
-	if (r->val != val)
-		return -EINVAL;
+	if (r->set_user)
+		return (r->set_user)(vcpu, r, reg, uaddr);
 
-	return 0;
+	return reg_from_user(&vcpu_id_sys_reg(vcpu, r->reg), uaddr, reg->id);
 }
 
 static bool is_valid_cache(u32 val)
@@ -2086,7 +2085,7 @@ int kvm_arm_sys_reg_get_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg
 
 	r = index_to_sys_reg_desc(vcpu, reg->id);
 	if (!r)
-		return get_invariant_sys_reg(reg->id, uaddr);
+		return get_invariant_sys_reg(vcpu, reg);
 
 	if (r->get_user)
 		return (r->get_user)(vcpu, r, reg, uaddr);
@@ -2107,7 +2106,7 @@ int kvm_arm_sys_reg_set_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg
 
 	r = index_to_sys_reg_desc(vcpu, reg->id);
 	if (!r)
-		return set_invariant_sys_reg(reg->id, uaddr);
+		return set_invariant_sys_reg(vcpu, reg);
 
 	if (r->set_user)
 		return (r->set_user)(vcpu, r, reg, uaddr);
@@ -2252,7 +2251,6 @@ static int check_sysreg_table(const struct sys_reg_desc *table, unsigned int n)
 void kvm_sys_reg_table_init(void)
 {
 	unsigned int i;
-	struct sys_reg_desc clidr;
 
 	/* Make sure tables are unique and in order. */
 	BUG_ON(check_sysreg_table(sys_reg_descs, ARRAY_SIZE(sys_reg_descs)));
@@ -2262,10 +2260,6 @@ void kvm_sys_reg_table_init(void)
 	BUG_ON(check_sysreg_table(cp15_64_regs, ARRAY_SIZE(cp15_64_regs)));
 	BUG_ON(check_sysreg_table(invariant_sys_regs, ARRAY_SIZE(invariant_sys_regs)));
 
-	/* We abuse the reset function to overwrite the table itself. */
-	for (i = 0; i < ARRAY_SIZE(invariant_sys_regs); i++)
-		invariant_sys_regs[i].reset(NULL, &invariant_sys_regs[i]);
-
 	/*
 	 * CLIDR format is awkward, so clean it up.  See ARM B4.1.20:
 	 *
@@ -2276,8 +2270,7 @@ void kvm_sys_reg_table_init(void)
 	 *   value of 0b000, the values of Ctype4 to Ctype7 must be
 	 *   ignored.
 	 */
-	get_clidr_el1(NULL, &clidr); /* Ugly... */
-	cache_levels = clidr.val;
+	cache_levels = read_sysreg(clidr_el1);
 	for (i = 0; i < 7; i++)
 		if (((cache_levels >> (i*3)) & 7) == 0)
 			break;
@@ -2310,3 +2303,10 @@ void kvm_reset_sys_regs(struct kvm_vcpu *vcpu)
 		if (vcpu_sys_reg(vcpu, num) == 0x4242424242424242)
 			panic("Didn't reset vcpu_sys_reg(%zi)", num);
 }
+
+void kvm_reset_id_sys_regs(struct kvm_vcpu *vcpu)
+{
+	unsigned int i;
+	for (i = 0; i < ARRAY_SIZE(invariant_sys_regs); i++)
+		invariant_sys_regs[i].reset(vcpu, &invariant_sys_regs[i]);
+}
-- 
2.0.4


* [PATCH RFC 4/7] ARM64: KVM: emulate accessing ID registers
  2017-01-16  9:33 [PATCH RFC 0/7] ARM64: KVM: Cross type vCPU support Shannon Zhao
                   ` (2 preceding siblings ...)
  2017-01-16  9:33 ` [PATCH RFC 3/7] ARM64: KVM: Reset ID registers when creating the VCPUs Shannon Zhao
@ 2017-01-16  9:33 ` Shannon Zhao
  2017-01-28 13:49   ` Andrew Jones
  2017-03-09 10:28   ` Christoffer Dall
  2017-01-16  9:33 ` [PATCH RFC 5/7] ARM64: KVM: Support cross type vCPU Shannon Zhao
                   ` (2 subsequent siblings)
  6 siblings, 2 replies; 30+ messages in thread
From: Shannon Zhao @ 2017-01-16  9:33 UTC (permalink / raw)
  To: kvmarm; +Cc: marc.zyngier, qemu-arm, wu.wubin

From: Shannon Zhao <shannon.zhao@linaro.org>

Signed-off-by: Shannon Zhao <shannon.zhao@linaro.org>
---
 arch/arm64/kvm/sys_regs.c | 83 ++++++++++++++++++++++++++++-------------------
 1 file changed, 50 insertions(+), 33 deletions(-)

diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index 7c5fa03..f613e29 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -1480,71 +1480,84 @@ FUNCTION_INVARIANT(id_aa64mmfr1_el1)
 FUNCTION_INVARIANT(clidr_el1)
 FUNCTION_INVARIANT(aidr_el1)
 
+static bool access_id_reg(struct kvm_vcpu *vcpu,
+			  struct sys_reg_params *p,
+			  const struct sys_reg_desc *r)
+{
+	if (p->is_write) {
+		vcpu_id_sys_reg(vcpu, r->reg) = p->regval;
+	} else {
+		p->regval = vcpu_id_sys_reg(vcpu, r->reg);
+	}
+
+	return true;
+}
+
 static struct sys_reg_desc invariant_sys_regs[] = {
 	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0000), Op2(0b000),
-	  NULL, get_midr_el1, MIDR_EL1 },
+	  access_id_reg, get_midr_el1, MIDR_EL1 },
 	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0000), Op2(0b110),
-	  NULL, get_revidr_el1, REVIDR_EL1 },
+	  access_id_reg, get_revidr_el1, REVIDR_EL1 },
 	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0001), Op2(0b000),
-	  NULL, get_id_pfr0_el1, ID_PFR0_EL1 },
+	  access_id_reg, get_id_pfr0_el1, ID_PFR0_EL1 },
 	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0001), Op2(0b001),
-	  NULL, get_id_pfr1_el1, ID_PFR1_EL1 },
+	  access_id_reg, get_id_pfr1_el1, ID_PFR1_EL1 },
 	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0001), Op2(0b010),
-	  NULL, get_id_dfr0_el1, ID_DFR0_EL1 },
+	  access_id_reg, get_id_dfr0_el1, ID_DFR0_EL1 },
 	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0001), Op2(0b011),
-	  NULL, get_id_afr0_el1, ID_AFR0_EL1 },
+	  access_id_reg, get_id_afr0_el1, ID_AFR0_EL1 },
 	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0001), Op2(0b100),
-	  NULL, get_id_mmfr0_el1, ID_MMFR0_EL1 },
+	  access_id_reg, get_id_mmfr0_el1, ID_MMFR0_EL1 },
 	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0001), Op2(0b101),
-	  NULL, get_id_mmfr1_el1, ID_MMFR1_EL1 },
+	  access_id_reg, get_id_mmfr1_el1, ID_MMFR1_EL1 },
 	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0001), Op2(0b110),
-	  NULL, get_id_mmfr2_el1, ID_MMFR2_EL1 },
+	  access_id_reg, get_id_mmfr2_el1, ID_MMFR2_EL1 },
 	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0001), Op2(0b111),
-	  NULL, get_id_mmfr3_el1, ID_MMFR3_EL1 },
+	  access_id_reg, get_id_mmfr3_el1, ID_MMFR3_EL1 },
 	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0010), Op2(0b000),
-	  NULL, get_id_isar0_el1, ID_ISAR0_EL1 },
+	  access_id_reg, get_id_isar0_el1, ID_ISAR0_EL1 },
 	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0010), Op2(0b001),
-	  NULL, get_id_isar1_el1, ID_ISAR1_EL1 },
+	  access_id_reg, get_id_isar1_el1, ID_ISAR1_EL1 },
 	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0010), Op2(0b010),
-	  NULL, get_id_isar2_el1, ID_ISAR2_EL1 },
+	  access_id_reg, get_id_isar2_el1, ID_ISAR2_EL1 },
 	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0010), Op2(0b011),
-	  NULL, get_id_isar3_el1, ID_ISAR3_EL1 },
+	  access_id_reg, get_id_isar3_el1, ID_ISAR3_EL1 },
 	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0010), Op2(0b100),
-	  NULL, get_id_isar4_el1, ID_ISAR4_EL1 },
+	  access_id_reg, get_id_isar4_el1, ID_ISAR4_EL1 },
 	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0010), Op2(0b101),
-	  NULL, get_id_isar5_el1, ID_ISAR5_EL1 },
+	  access_id_reg, get_id_isar5_el1, ID_ISAR5_EL1 },
 	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0011), Op2(0b000),
-	  NULL, get_mvfr0_el1, MVFR0_EL1 },
+	  access_id_reg, get_mvfr0_el1, MVFR0_EL1 },
 	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0011), Op2(0b001),
-	  NULL, get_mvfr1_el1, MVFR1_EL1 },
+	  access_id_reg, get_mvfr1_el1, MVFR1_EL1 },
 	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0011), Op2(0b010),
-	  NULL, get_mvfr2_el1, MVFR2_EL1 },
+	  access_id_reg, get_mvfr2_el1, MVFR2_EL1 },
 	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0100), Op2(0b000),
-	  NULL, get_id_aa64pfr0_el1, ID_AA64PFR0_EL1 },
+	  access_id_reg, get_id_aa64pfr0_el1, ID_AA64PFR0_EL1 },
 	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0100), Op2(0b001),
-	  NULL, get_id_aa64pfr1_el1, ID_AA64PFR1_EL1 },
+	  access_id_reg, get_id_aa64pfr1_el1, ID_AA64PFR1_EL1 },
 	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0101), Op2(0b000),
-	  NULL, get_id_aa64dfr0_el1, ID_AA64DFR0_EL1 },
+	  access_id_reg, get_id_aa64dfr0_el1, ID_AA64DFR0_EL1 },
 	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0101), Op2(0b001),
-	  NULL, get_id_aa64dfr1_el1, ID_AA64DFR1_EL1 },
+	  access_id_reg, get_id_aa64dfr1_el1, ID_AA64DFR1_EL1 },
 	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0101), Op2(0b100),
-	  NULL, get_id_aa64afr0_el1, ID_AA64AFR0_EL1 },
+	  access_id_reg, get_id_aa64afr0_el1, ID_AA64AFR0_EL1 },
 	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0101), Op2(0b101),
-	  NULL, get_id_aa64afr1_el1, ID_AA64AFR1_EL1 },
+	  access_id_reg, get_id_aa64afr1_el1, ID_AA64AFR1_EL1 },
 	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0110), Op2(0b000),
-	  NULL, get_id_aa64isar0_el1, ID_AA64ISAR0_EL1 },
+	  access_id_reg, get_id_aa64isar0_el1, ID_AA64ISAR0_EL1 },
 	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0110), Op2(0b001),
-	  NULL, get_id_aa64isar1_el1, ID_AA64ISAR1_EL1 },
+	  access_id_reg, get_id_aa64isar1_el1, ID_AA64ISAR1_EL1 },
 	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0111), Op2(0b000),
-	  NULL, get_id_aa64mmfr0_el1, ID_AA64MMFR0_EL1 },
+	  access_id_reg, get_id_aa64mmfr0_el1, ID_AA64MMFR0_EL1 },
 	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0111), Op2(0b001),
-	  NULL, get_id_aa64mmfr1_el1, ID_AA64MMFR1_EL1 },
+	  access_id_reg, get_id_aa64mmfr1_el1, ID_AA64MMFR1_EL1 },
 	{ Op0(0b11), Op1(0b001), CRn(0b0000), CRm(0b0000), Op2(0b001),
-	  NULL, get_clidr_el1, CLIDR_EL1 },
+	  access_id_reg, get_clidr_el1, CLIDR_EL1 },
 	{ Op0(0b11), Op1(0b001), CRn(0b0000), CRm(0b0000), Op2(0b111),
-	  NULL, get_aidr_el1, AIDR_EL1 },
+	  access_id_reg, get_aidr_el1, AIDR_EL1 },
 	{ Op0(0b11), Op1(0b011), CRn(0b0000), CRm(0b0000), Op2(0b001),
-	  NULL, get_ctr_el0, CTR_EL0 },
+	  access_id_reg, get_ctr_el0, CTR_EL0 },
 };
 
 /* Target specific emulation tables */
@@ -1809,8 +1822,12 @@ static int emulate_sys_reg(struct kvm_vcpu *vcpu,
 
 	/* Search target-specific then generic table. */
 	r = find_reg(params, table, num);
-	if (!r)
+	if (!r) {
 		r = find_reg(params, sys_reg_descs, ARRAY_SIZE(sys_reg_descs));
+		if (!r)
+			r = find_reg(params, invariant_sys_regs,
+				     ARRAY_SIZE(invariant_sys_regs));
+	}
 
 	if (likely(r)) {
 		/*
-- 
2.0.4


* [PATCH RFC 5/7] ARM64: KVM: Support cross type vCPU
  2017-01-16  9:33 [PATCH RFC 0/7] ARM64: KVM: Cross type vCPU support Shannon Zhao
                   ` (3 preceding siblings ...)
  2017-01-16  9:33 ` [PATCH RFC 4/7] ARM64: KVM: emulate accessing ID registers Shannon Zhao
@ 2017-01-16  9:33 ` Shannon Zhao
  2017-01-28 14:47   ` Andrew Jones
  2017-01-16  9:33 ` [PATCH RFC 6/7] ARM64: KVM: Support heterogeneous system Shannon Zhao
  2017-01-16  9:33 ` [PATCH RFC 7/7] ARM64: KVM: Add user set handler for id_aa64mmfr0_el1 Shannon Zhao
  6 siblings, 1 reply; 30+ messages in thread
From: Shannon Zhao @ 2017-01-16  9:33 UTC (permalink / raw)
  To: kvmarm; +Cc: marc.zyngier, qemu-arm, wu.wubin

From: Shannon Zhao <shannon.zhao@linaro.org>

Add a capability to tell userspace that KVM supports cross type vCPUs.
Add a CPU feature for userspace to set when it doesn't use a host type
vCPU, and make kvm_vcpu_preferred_target return the host MIDR register
value so that userspace can check whether its requested vCPU type
matches that of the physical CPU; if they match, KVM will not trap the
ID registers even though userspace doesn't specify -cpu host.
The guest accesses MIDR through VPIDR_EL2, so we save/restore it
regardless of whether it's a cross type vCPU.
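
For context, a rough summary of what the HCR_EL2 trap bits used in the
kvm_emulate.h hunk below cover (this is my reading of the ARM ARM and
is worth double-checking against the manual):

/*
 * HCR_EL2.TID0 - ID group 0: the AArch32 FPSID and JIDR
 * HCR_EL2.TID1 - ID group 1: REVIDR_EL1, AIDR_EL1 (plus AArch32 TCMTR, TLBTR)
 * HCR_EL2.TID2 - ID group 2: CTR, CCSIDR, CLIDR, CSSELR -- left as a TODO
 *                here because the cache registers need extra handling
 * HCR_EL2.TID3 - ID group 3: the ID_*_EL1, ID_AA64*_EL1 and MVFR*_EL1 regs
 */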

Signed-off-by: Shannon Zhao <shannon.zhao@linaro.org>
---
 arch/arm/kvm/arm.c                   | 10 ++++++++--
 arch/arm64/include/asm/kvm_emulate.h |  3 +++
 arch/arm64/include/asm/kvm_host.h    |  3 ++-
 arch/arm64/include/uapi/asm/kvm.h    |  1 +
 arch/arm64/kvm/guest.c               | 17 ++++++++++++++++-
 arch/arm64/kvm/hyp/sysreg-sr.c       |  2 ++
 include/uapi/linux/kvm.h             |  1 +
 7 files changed, 33 insertions(+), 4 deletions(-)

diff --git a/arch/arm/kvm/arm.c b/arch/arm/kvm/arm.c
index 1167678..bdceb19 100644
--- a/arch/arm/kvm/arm.c
+++ b/arch/arm/kvm/arm.c
@@ -207,6 +207,7 @@ int kvm_vm_ioctl_check_extension(struct kvm *kvm, long ext)
 	case KVM_CAP_ARM_PSCI_0_2:
 	case KVM_CAP_READONLY_MEM:
 	case KVM_CAP_MP_STATE:
+	case KVM_CAP_ARM_CROSS_VCPU:
 		r = 1;
 		break;
 	case KVM_CAP_COALESCED_MMIO:
@@ -809,8 +810,9 @@ static int kvm_vcpu_set_target(struct kvm_vcpu *vcpu,
 {
 	unsigned int i;
 	int phys_target = kvm_target_cpu();
+	bool cross_vcpu = kvm_vcpu_has_feature_cross_cpu(init);
 
-	if (init->target != phys_target)
+	if (!cross_vcpu && init->target != phys_target)
 		return -EINVAL;
 
 	/*
@@ -839,7 +841,11 @@ static int kvm_vcpu_set_target(struct kvm_vcpu *vcpu,
 			set_bit(i, vcpu->arch.features);
 	}
 
-	vcpu->arch.target = phys_target;
+	if (!cross_vcpu)
+		vcpu->arch.target = phys_target;
+	else
+		/* Use generic ARMv8 target for cross type vcpu. */
+		vcpu->arch.target = KVM_ARM_TARGET_GENERIC_V8;
 
 	/* Now we know what it is, we can reset it. */
 	return kvm_reset_vcpu(vcpu);
diff --git a/arch/arm64/include/asm/kvm_emulate.h b/arch/arm64/include/asm/kvm_emulate.h
index f5ea0ba..bca7d3a 100644
--- a/arch/arm64/include/asm/kvm_emulate.h
+++ b/arch/arm64/include/asm/kvm_emulate.h
@@ -49,6 +49,9 @@ static inline void vcpu_reset_hcr(struct kvm_vcpu *vcpu)
 		vcpu->arch.hcr_el2 |= HCR_E2H;
 	if (test_bit(KVM_ARM_VCPU_EL1_32BIT, vcpu->arch.features))
 		vcpu->arch.hcr_el2 &= ~HCR_RW;
+	if (test_bit(KVM_ARM_VCPU_CROSS, vcpu->arch.features))
+		/* TODO: Set HCR_TID2 and trap cache registers */
+		vcpu->arch.hcr_el2 |= HCR_TID3 | HCR_TID1 | HCR_TID0;
 }
 
 static inline unsigned long vcpu_get_hcr(struct kvm_vcpu *vcpu)
diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index 6034f92..d0073d7 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -41,10 +41,11 @@
 
 #define KVM_MAX_VCPUS VGIC_V3_MAX_CPUS
 
-#define KVM_VCPU_MAX_FEATURES 4
+#define KVM_VCPU_MAX_FEATURES 5
 
 #define KVM_REQ_VCPU_EXIT	8
 
+bool kvm_vcpu_has_feature_cross_cpu(const struct kvm_vcpu_init *init);
 int __attribute_const__ kvm_target_cpu(void);
 int kvm_reset_vcpu(struct kvm_vcpu *vcpu);
 int kvm_arch_dev_ioctl_check_extension(struct kvm *kvm, long ext);
diff --git a/arch/arm64/include/uapi/asm/kvm.h b/arch/arm64/include/uapi/asm/kvm.h
index 3051f86..7ba7117 100644
--- a/arch/arm64/include/uapi/asm/kvm.h
+++ b/arch/arm64/include/uapi/asm/kvm.h
@@ -97,6 +97,7 @@ struct kvm_regs {
 #define KVM_ARM_VCPU_EL1_32BIT		1 /* CPU running a 32bit VM */
 #define KVM_ARM_VCPU_PSCI_0_2		2 /* CPU uses PSCI v0.2 */
 #define KVM_ARM_VCPU_PMU_V3		3 /* Support guest PMUv3 */
+#define KVM_ARM_VCPU_CROSS		4 /* Support cross type vCPU */
 
 struct kvm_vcpu_init {
 	__u32 target;
diff --git a/arch/arm64/kvm/guest.c b/arch/arm64/kvm/guest.c
index 92abe2b..4a5ccab 100644
--- a/arch/arm64/kvm/guest.c
+++ b/arch/arm64/kvm/guest.c
@@ -308,8 +308,15 @@ int __attribute_const__ kvm_target_cpu(void)
 	return KVM_ARM_TARGET_GENERIC_V8;
 }
 
+bool kvm_vcpu_has_feature_cross_cpu(const struct kvm_vcpu_init *init)
+{
+	return init->features[KVM_ARM_VCPU_CROSS / 32] &
+	       (1 << (KVM_ARM_VCPU_CROSS % 32));
+}
+
 int kvm_vcpu_preferred_target(struct kvm_vcpu_init *init)
 {
+	bool cross_vcpu = kvm_vcpu_has_feature_cross_cpu(init);
 	int target = kvm_target_cpu();
 
 	if (target < 0)
@@ -323,7 +330,15 @@ int kvm_vcpu_preferred_target(struct kvm_vcpu_init *init)
 	 * specific features available for the preferred
 	 * target type.
 	 */
-	init->target = (__u32)target;
+	/* If userspace wants a cross type vcpu other than the host, here we
+	 * return the midr for userspace to check whether the midr value of the
+	 * requested vcpu matches that of the host. If they match, we will not
+	 * trap CPU ID registers even though userspace doesn't specify -cpu host.
+	 */
+	if (cross_vcpu)
+		init->target = read_cpuid_id();
+	else
+		init->target = (__u32)target;
 
 	return 0;
 }
diff --git a/arch/arm64/kvm/hyp/sysreg-sr.c b/arch/arm64/kvm/hyp/sysreg-sr.c
index 9341376..9461829 100644
--- a/arch/arm64/kvm/hyp/sysreg-sr.c
+++ b/arch/arm64/kvm/hyp/sysreg-sr.c
@@ -45,6 +45,7 @@ static void __hyp_text __sysreg_save_common_state(struct kvm_cpu_context *ctxt)
 
 static void __hyp_text __sysreg_save_state(struct kvm_cpu_context *ctxt)
 {
+	ctxt->id_sys_regs[MIDR_EL1]     = read_sysreg(vpidr_el2);
 	ctxt->sys_regs[MPIDR_EL1]	= read_sysreg(vmpidr_el2);
 	ctxt->sys_regs[CSSELR_EL1]	= read_sysreg(csselr_el1);
 	ctxt->sys_regs[SCTLR_EL1]	= read_sysreg_el1(sctlr);
@@ -98,6 +99,7 @@ static void __hyp_text __sysreg_restore_common_state(struct kvm_cpu_context *ctx
 
 static void __hyp_text __sysreg_restore_state(struct kvm_cpu_context *ctxt)
 {
+	write_sysreg(ctxt->id_sys_regs[MIDR_EL1],       vpidr_el2);
 	write_sysreg(ctxt->sys_regs[MPIDR_EL1],		vmpidr_el2);
 	write_sysreg(ctxt->sys_regs[CSSELR_EL1],	csselr_el1);
 	write_sysreg_el1(ctxt->sys_regs[SCTLR_EL1],	sctlr);
diff --git a/include/uapi/linux/kvm.h b/include/uapi/linux/kvm.h
index cac48ed..46115a2 100644
--- a/include/uapi/linux/kvm.h
+++ b/include/uapi/linux/kvm.h
@@ -871,6 +871,7 @@ struct kvm_ppc_smmu_info {
 #define KVM_CAP_S390_USER_INSTR0 130
 #define KVM_CAP_MSI_DEVID 131
 #define KVM_CAP_PPC_HTM 132
+#define KVM_CAP_ARM_CROSS_VCPU 133
 
 #ifdef KVM_CAP_IRQ_ROUTING
 
-- 
2.0.4


* [PATCH RFC 6/7] ARM64: KVM: Support heterogeneous system
  2017-01-16  9:33 [PATCH RFC 0/7] ARM64: KVM: Cross type vCPU support Shannon Zhao
                   ` (4 preceding siblings ...)
  2017-01-16  9:33 ` [PATCH RFC 5/7] ARM64: KVM: Support cross type vCPU Shannon Zhao
@ 2017-01-16  9:33 ` Shannon Zhao
  2017-01-28 14:55   ` Andrew Jones
  2017-01-16  9:33 ` [PATCH RFC 7/7] ARM64: KVM: Add user set handler for id_aa64mmfr0_el1 Shannon Zhao
  6 siblings, 1 reply; 30+ messages in thread
From: Shannon Zhao @ 2017-01-16  9:33 UTC (permalink / raw)
  To: kvmarm; +Cc: marc.zyngier, qemu-arm, wu.wubin

From: Shannon Zhao <shannon.zhao@linaro.org>

When initializing KVM, check whether the physical hardware is a
heterogeneous system by comparing the MIDR values of the online CPUs.
If so, require userspace to set the KVM_ARM_VCPU_CROSS feature bit;
without it, VCPU initialization fails.
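
A minimal sketch of how userspace could react to the new capability,
extending the sketch from the cover letter (KVM_CAP_ARM_HETEROGENEOUS
comes from this patch, KVM_ARM_VCPU_CROSS from patch 5; everything else
is the existing KVM API):

	/* On a heterogeneous host cross type vCPUs are mandatory, so
	 * request the feature unconditionally in that case. */
	if (ioctl(vm_fd, KVM_CHECK_EXTENSION, KVM_CAP_ARM_HETEROGENEOUS))
		init.features[KVM_ARM_VCPU_CROSS / 32] |=
			1U << (KVM_ARM_VCPU_CROSS % 32);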

Signed-off-by: Shannon Zhao <shannon.zhao@linaro.org>
---
 arch/arm/kvm/arm.c       | 26 ++++++++++++++++++++++++++
 include/uapi/linux/kvm.h |  1 +
 2 files changed, 27 insertions(+)

diff --git a/arch/arm/kvm/arm.c b/arch/arm/kvm/arm.c
index bdceb19..21ec070 100644
--- a/arch/arm/kvm/arm.c
+++ b/arch/arm/kvm/arm.c
@@ -46,6 +46,7 @@
 #include <asm/kvm_coproc.h>
 #include <asm/kvm_psci.h>
 #include <asm/sections.h>
+#include <asm/cputype.h>
 
 #ifdef REQUIRES_VIRT
 __asm__(".arch_extension	virt");
@@ -65,6 +66,7 @@ static unsigned int kvm_vmid_bits __read_mostly;
 static DEFINE_SPINLOCK(kvm_vmid_lock);
 
 static bool vgic_present;
+static bool heterogeneous_system;
 
 static DEFINE_PER_CPU(unsigned char, kvm_arm_hardware_enabled);
 
@@ -210,6 +212,9 @@ int kvm_vm_ioctl_check_extension(struct kvm *kvm, long ext)
 	case KVM_CAP_ARM_CROSS_VCPU:
 		r = 1;
 		break;
+	case KVM_CAP_ARM_HETEROGENEOUS:
+		r = heterogeneous_system;
+		break;
 	case KVM_CAP_COALESCED_MMIO:
 		r = KVM_COALESCED_MMIO_PAGE_OFFSET;
 		break;
@@ -812,6 +817,12 @@ static int kvm_vcpu_set_target(struct kvm_vcpu *vcpu,
 	int phys_target = kvm_target_cpu();
 	bool cross_vcpu = kvm_vcpu_has_feature_cross_cpu(init);
 
+	if (heterogeneous_system && !cross_vcpu) {
+		kvm_err("%s:Host is a heterogeneous system, set KVM_ARM_VCPU_CROSS bit\n",
+			__func__);
+		return -EINVAL;
+	}
+
 	if (!cross_vcpu && init->target != phys_target)
 		return -EINVAL;
 
@@ -1397,6 +1408,11 @@ static void check_kvm_target_cpu(void *ret)
 	*(int *)ret = kvm_target_cpu();
 }
 
+static void get_physical_cpu_midr(void *midr)
+{
+	*(u32 *)midr = read_cpuid_id();
+}
+
 struct kvm_vcpu *kvm_mpidr_to_vcpu(struct kvm *kvm, unsigned long mpidr)
 {
 	struct kvm_vcpu *vcpu;
@@ -1417,6 +1433,7 @@ int kvm_arch_init(void *opaque)
 {
 	int err;
 	int ret, cpu;
+	u32 current_midr, midr;
 
 	if (!is_hyp_mode_available()) {
 		kvm_err("HYP mode not available\n");
@@ -1431,6 +1448,15 @@ int kvm_arch_init(void *opaque)
 		}
 	}
 
+	current_midr = read_cpuid_id();
+	for_each_online_cpu(cpu) {
+		smp_call_function_single(cpu, get_physical_cpu_midr, &midr, 1);
+		if (current_midr != midr) {
+			heterogeneous_system = true;
+			break;
+		}
+	}
+
 	err = init_common_resources();
 	if (err)
 		return err;
diff --git a/include/uapi/linux/kvm.h b/include/uapi/linux/kvm.h
index 46115a2..cc2b63d 100644
--- a/include/uapi/linux/kvm.h
+++ b/include/uapi/linux/kvm.h
@@ -872,6 +872,7 @@ struct kvm_ppc_smmu_info {
 #define KVM_CAP_MSI_DEVID 131
 #define KVM_CAP_PPC_HTM 132
 #define KVM_CAP_ARM_CROSS_VCPU 133
+#define KVM_CAP_ARM_HETEROGENEOUS 134
 
 #ifdef KVM_CAP_IRQ_ROUTING
 
-- 
2.0.4


* [PATCH RFC 7/7] ARM64: KVM: Add user set handler for id_aa64mmfr0_el1
  2017-01-16  9:33 [PATCH RFC 0/7] ARM64: KVM: Cross type vCPU support Shannon Zhao
                   ` (5 preceding siblings ...)
  2017-01-16  9:33 ` [PATCH RFC 6/7] ARM64: KVM: Support heterogeneous system Shannon Zhao
@ 2017-01-16  9:33 ` Shannon Zhao
  2017-01-28 15:22   ` Andrew Jones
  2017-03-09 12:52   ` Christoffer Dall
  6 siblings, 2 replies; 30+ messages in thread
From: Shannon Zhao @ 2017-01-16  9:33 UTC (permalink / raw)
  To: kvmarm; +Cc: marc.zyngier, qemu-arm, wu.wubin

From: Shannon Zhao <shannon.zhao@linaro.org>

Add a user set handler for id_aa64mmfr0_el1 that checks whether the
configuration requested by userspace (in particular the memory
translation granule sizes and the physical address range) is supported
by the physical CPU.
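
For readers cross-checking the bit ranges in the handler below, the
ID_AA64MMFR0_EL1 layout assumed here is roughly the following (please
verify against the ARM ARM), followed by a hypothetical userspace write
that would exercise the new set handler:

/*
 * Assumed ID_AA64MMFR0_EL1 field layout:
 *   PARange   [3:0]     BigEndEL0 [19:16]
 *   ASIDBits  [7:4]     TGran16   [23:20]
 *   BigEnd    [11:8]    TGran64   [27:24]
 *   SNSMem    [15:12]   TGran4    [31:28]
 *
 * TGran64/TGran4 use an inverted encoding (0b0000 = supported,
 * 0b1111 = not supported), which is why those two fields are compared
 * with '<' below while the other fields use '>'.
 */

	/* Hypothetical userspace write that ends up in set_id_aa64mmfr0_el1();
	 * the value is for illustration only. */
	__u64 mmfr0_val = 0x00001122;
	struct kvm_one_reg one = {
		.id   = ARM64_SYS_REG(3, 0, 0, 7, 0),	/* ID_AA64MMFR0_EL1 */
		.addr = (__u64)&mmfr0_val,
	};
	ioctl(vcpu_fd, KVM_SET_ONE_REG, &one);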

Signed-off-by: Shannon Zhao <shannon.zhao@linaro.org>
---
 arch/arm64/kvm/sys_regs.c | 32 +++++++++++++++++++++++++++++++-
 1 file changed, 31 insertions(+), 1 deletion(-)

diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
index f613e29..9763b79 100644
--- a/arch/arm64/kvm/sys_regs.c
+++ b/arch/arm64/kvm/sys_regs.c
@@ -1493,6 +1493,35 @@ static bool access_id_reg(struct kvm_vcpu *vcpu,
 	return true;
 }
 
+static int set_id_aa64mmfr0_el1(struct kvm_vcpu *vcpu,
+				const struct sys_reg_desc *rd,
+				const struct kvm_one_reg *reg,
+				void __user *uaddr)
+{
+	u64 val, id_aa64mmfr0;
+
+	if (copy_from_user(&val, uaddr, KVM_REG_SIZE(reg->id)) != 0)
+		return -EFAULT;
+
+	asm volatile("mrs %0, id_aa64mmfr0_el1\n" : "=r" (id_aa64mmfr0));
+
+	if ((val & GENMASK(3, 0)) > (id_aa64mmfr0 & GENMASK(3, 0)) ||
+	    (val & GENMASK(7, 4)) > (id_aa64mmfr0 & GENMASK(7, 4)) ||
+	    (val & GENMASK(11, 8)) > (id_aa64mmfr0 & GENMASK(11, 8)) ||
+	    (val & GENMASK(15, 12)) > (id_aa64mmfr0 & GENMASK(15, 12)) ||
+	    (val & GENMASK(19, 16)) > (id_aa64mmfr0 & GENMASK(19, 16)) ||
+	    (val & GENMASK(23, 20)) > (id_aa64mmfr0 & GENMASK(23, 20)) ||
+	    (val & GENMASK(27, 24)) < (id_aa64mmfr0 & GENMASK(27, 24)) ||
+	    (val & GENMASK(31, 28)) < (id_aa64mmfr0 & GENMASK(31, 28))) {
+		kvm_err("Wrong memory translation granule size/Physical Address range\n");
+		return -EINVAL;
+	}
+
+	vcpu_id_sys_reg(vcpu, rd->reg) = val & GENMASK(31, 0);
+
+	return 0;
+}
+
 static struct sys_reg_desc invariant_sys_regs[] = {
 	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0000), Op2(0b000),
 	  access_id_reg, get_midr_el1, MIDR_EL1 },
@@ -1549,7 +1578,8 @@ static struct sys_reg_desc invariant_sys_regs[] = {
 	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0110), Op2(0b001),
 	  access_id_reg, get_id_aa64isar1_el1, ID_AA64ISAR1_EL1 },
 	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0111), Op2(0b000),
-	  access_id_reg, get_id_aa64mmfr0_el1, ID_AA64MMFR0_EL1 },
+	  access_id_reg, get_id_aa64mmfr0_el1, ID_AA64MMFR0_EL1,
+	  0, NULL, set_id_aa64mmfr0_el1 },
 	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0111), Op2(0b001),
 	  access_id_reg, get_id_aa64mmfr1_el1, ID_AA64MMFR1_EL1 },
 	{ Op0(0b11), Op1(0b001), CRn(0b0000), CRm(0b0000), Op2(0b001),
-- 
2.0.4


* Re: [PATCH RFC 1/7] ARM64: KVM: Add the definition of ID registers
  2017-01-16  9:33 ` [PATCH RFC 1/7] ARM64: KVM: Add the definition of ID registers Shannon Zhao
@ 2017-01-28 12:07   ` Andrew Jones
  0 siblings, 0 replies; 30+ messages in thread
From: Andrew Jones @ 2017-01-28 12:07 UTC (permalink / raw)
  To: Shannon Zhao; +Cc: marc.zyngier, qemu-arm, wu.wubin, kvmarm

On Mon, Jan 16, 2017 at 05:33:28PM +0800, Shannon Zhao wrote:
> From: Shannon Zhao <shannon.zhao@linaro.org>
> 
> Add a new member to kvm_cpu_context to save the values of the ID registers.

Currently all the sysregs that need to be saved/restored are in a single
array, sys_regs. This commit message needs to provide the rationale for
introducing the id_sys_regs array. If there is no good reason, then the
ID registers should be integrated with the rest. Also, what about the
ARMv7/AArch32 equivalent registers?

Thanks,
drew

> 
> Signed-off-by: Shannon Zhao <shannon.zhao@linaro.org>
> ---
>  arch/arm64/include/asm/kvm_host.h | 46 +++++++++++++++++++++++++++++++++++++++
>  1 file changed, 46 insertions(+)
> 
> diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
> index e505038..6034f92 100644
> --- a/arch/arm64/include/asm/kvm_host.h
> +++ b/arch/arm64/include/asm/kvm_host.h
> @@ -187,12 +187,57 @@ enum vcpu_sysreg {
>  
>  #define NR_COPRO_REGS	(NR_SYS_REGS * 2)
>  
> +enum id_vcpu_sysreg {
> +	MIDR_EL1,
> +	/* ID group 1 registers */
> +	REVIDR_EL1,
> +	AIDR_EL1,
> +
> +	/* ID group 2 registers */
> +	CTR_EL0,
> +	CCSIDR_EL1,
> +	CLIDR_EL1,
> +
> +	/* ID group 3 registers */
> +	ID_PFR0_EL1,
> +	ID_PFR1_EL1,
> +	ID_DFR0_EL1,
> +	ID_AFR0_EL1,
> +	ID_MMFR0_EL1,
> +	ID_MMFR1_EL1,
> +	ID_MMFR2_EL1,
> +	ID_MMFR3_EL1,
> +	ID_ISAR0_EL1,
> +	ID_ISAR1_EL1,
> +	ID_ISAR2_EL1,
> +	ID_ISAR3_EL1,
> +	ID_ISAR4_EL1,
> +	ID_ISAR5_EL1,
> +	MVFR0_EL1,
> +	MVFR1_EL1,
> +	MVFR2_EL1,
> +	ID_AA64PFR0_EL1,
> +	ID_AA64PFR1_EL1,
> +	ID_AA64DFR0_EL1,
> +	ID_AA64DFR1_EL1,
> +	ID_AA64ISAR0_EL1,
> +	ID_AA64ISAR1_EL1,
> +	ID_AA64MMFR0_EL1,
> +	ID_AA64MMFR1_EL1,
> +	ID_AA64AFR0_EL1,
> +	ID_AA64AFR1_EL1,
> +	ID_MMFR4_EL1,
> +
> +	NR_ID_SYS_REGS
> +};
> +
>  struct kvm_cpu_context {
>  	struct kvm_regs	gp_regs;
>  	union {
>  		u64 sys_regs[NR_SYS_REGS];
>  		u32 copro[NR_COPRO_REGS];
>  	};
> +	u64 id_sys_regs[NR_ID_SYS_REGS];
>  };
>  
>  typedef struct kvm_cpu_context kvm_cpu_context_t;
> @@ -277,6 +322,7 @@ struct kvm_vcpu_arch {
>  
>  #define vcpu_gp_regs(v)		(&(v)->arch.ctxt.gp_regs)
>  #define vcpu_sys_reg(v,r)	((v)->arch.ctxt.sys_regs[(r)])
> +#define vcpu_id_sys_reg(v,r)	((v)->arch.ctxt.id_sys_regs[(r)])
>  /*
>   * CP14 and CP15 live in the same array, as they are backed by the
>   * same system registers.
> -- 
> 2.0.4
> 
> 


* Re: [PATCH RFC 2/7] ARM64: KVM: Add reset handlers for all ID registers
  2017-01-16  9:33 ` [PATCH RFC 2/7] ARM64: KVM: Add reset handlers for all " Shannon Zhao
@ 2017-01-28 12:36   ` Andrew Jones
  2017-03-09 10:19   ` Christoffer Dall
  1 sibling, 0 replies; 30+ messages in thread
From: Andrew Jones @ 2017-01-28 12:36 UTC (permalink / raw)
  To: Shannon Zhao; +Cc: marc.zyngier, qemu-arm, wu.wubin, kvmarm

On Mon, Jan 16, 2017 at 05:33:29PM +0800, Shannon Zhao wrote:
> From: Shannon Zhao <shannon.zhao@linaro.org>
> 
> Move invariant_sys_regs before emulate_sys_reg so that it can be used
> later.

This patch says it's adding reset handlers, but it's not, because the
handlers were already there. This patch is adding register indices,
which, at this point, appear to still be unused. The patch also adds new
registers to the table without mentioning that in the commit message.
The new registers should be added with a separate patch first, providing
justification in its commit message. The code movement called out in the
commit message is fine, but yet to serve a purpose, so I guess I'll just
have to stay tuned.

drew

> 
> Signed-off-by: Shannon Zhao <shannon.zhao@linaro.org>
> ---
>  arch/arm64/kvm/sys_regs.c | 193 ++++++++++++++++++++++++++++------------------
>  1 file changed, 116 insertions(+), 77 deletions(-)
> 
> diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
> index 87e7e66..bf71eb4 100644
> --- a/arch/arm64/kvm/sys_regs.c
> +++ b/arch/arm64/kvm/sys_regs.c
> @@ -1432,6 +1432,122 @@ static const struct sys_reg_desc cp15_64_regs[] = {
>  	{ Op1( 1), CRn( 0), CRm( 2), Op2( 0), access_vm_reg, NULL, c2_TTBR1 },
>  };
>  
> +/*
> + * These are the invariant sys_reg registers: we let the guest see the
> + * host versions of these, so they're part of the guest state.
> + *
> + * A future CPU may provide a mechanism to present different values to
> + * the guest, or a future kvm may trap them.
> + */
> +
> +#define FUNCTION_INVARIANT(reg)						\
> +	static void get_##reg(struct kvm_vcpu *v,			\
> +			      const struct sys_reg_desc *r)		\
> +	{								\
> +		((struct sys_reg_desc *)r)->val = read_sysreg(reg);	\
> +	}
> +
> +FUNCTION_INVARIANT(midr_el1)
> +FUNCTION_INVARIANT(ctr_el0)
> +FUNCTION_INVARIANT(revidr_el1)
> +FUNCTION_INVARIANT(id_pfr0_el1)
> +FUNCTION_INVARIANT(id_pfr1_el1)
> +FUNCTION_INVARIANT(id_dfr0_el1)
> +FUNCTION_INVARIANT(id_afr0_el1)
> +FUNCTION_INVARIANT(id_mmfr0_el1)
> +FUNCTION_INVARIANT(id_mmfr1_el1)
> +FUNCTION_INVARIANT(id_mmfr2_el1)
> +FUNCTION_INVARIANT(id_mmfr3_el1)
> +FUNCTION_INVARIANT(id_isar0_el1)
> +FUNCTION_INVARIANT(id_isar1_el1)
> +FUNCTION_INVARIANT(id_isar2_el1)
> +FUNCTION_INVARIANT(id_isar3_el1)
> +FUNCTION_INVARIANT(id_isar4_el1)
> +FUNCTION_INVARIANT(id_isar5_el1)
> +FUNCTION_INVARIANT(mvfr0_el1)
> +FUNCTION_INVARIANT(mvfr1_el1)
> +FUNCTION_INVARIANT(mvfr2_el1)
> +FUNCTION_INVARIANT(id_aa64pfr0_el1)
> +FUNCTION_INVARIANT(id_aa64pfr1_el1)
> +FUNCTION_INVARIANT(id_aa64dfr0_el1)
> +FUNCTION_INVARIANT(id_aa64dfr1_el1)
> +FUNCTION_INVARIANT(id_aa64afr0_el1)
> +FUNCTION_INVARIANT(id_aa64afr1_el1)
> +FUNCTION_INVARIANT(id_aa64isar0_el1)
> +FUNCTION_INVARIANT(id_aa64isar1_el1)
> +FUNCTION_INVARIANT(id_aa64mmfr0_el1)
> +FUNCTION_INVARIANT(id_aa64mmfr1_el1)
> +FUNCTION_INVARIANT(clidr_el1)
> +FUNCTION_INVARIANT(aidr_el1)
> +
> +/* ->val is filled in by kvm_sys_reg_table_init() */
> +static struct sys_reg_desc invariant_sys_regs[] = {
> +	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0000), Op2(0b000),
> +	  NULL, get_midr_el1, MIDR_EL1 },
> +	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0000), Op2(0b110),
> +	  NULL, get_revidr_el1, REVIDR_EL1 },
> +	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0001), Op2(0b000),
> +	  NULL, get_id_pfr0_el1, ID_PFR0_EL1 },
> +	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0001), Op2(0b001),
> +	  NULL, get_id_pfr1_el1, ID_PFR1_EL1 },
> +	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0001), Op2(0b010),
> +	  NULL, get_id_dfr0_el1, ID_DFR0_EL1 },
> +	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0001), Op2(0b011),
> +	  NULL, get_id_afr0_el1, ID_AFR0_EL1 },
> +	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0001), Op2(0b100),
> +	  NULL, get_id_mmfr0_el1, ID_MMFR0_EL1 },
> +	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0001), Op2(0b101),
> +	  NULL, get_id_mmfr1_el1, ID_MMFR1_EL1 },
> +	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0001), Op2(0b110),
> +	  NULL, get_id_mmfr2_el1, ID_MMFR2_EL1 },
> +	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0001), Op2(0b111),
> +	  NULL, get_id_mmfr3_el1, ID_MMFR3_EL1 },
> +	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0010), Op2(0b000),
> +	  NULL, get_id_isar0_el1, ID_ISAR0_EL1 },
> +	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0010), Op2(0b001),
> +	  NULL, get_id_isar1_el1, ID_ISAR1_EL1 },
> +	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0010), Op2(0b010),
> +	  NULL, get_id_isar2_el1, ID_ISAR2_EL1 },
> +	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0010), Op2(0b011),
> +	  NULL, get_id_isar3_el1, ID_ISAR3_EL1 },
> +	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0010), Op2(0b100),
> +	  NULL, get_id_isar4_el1, ID_ISAR4_EL1 },
> +	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0010), Op2(0b101),
> +	  NULL, get_id_isar5_el1, ID_ISAR5_EL1 },
> +	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0011), Op2(0b000),
> +	  NULL, get_mvfr0_el1, MVFR0_EL1 },
> +	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0011), Op2(0b001),
> +	  NULL, get_mvfr1_el1, MVFR1_EL1 },
> +	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0011), Op2(0b010),
> +	  NULL, get_mvfr2_el1, MVFR2_EL1 },
> +	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0100), Op2(0b000),
> +	  NULL, get_id_aa64pfr0_el1, ID_AA64PFR0_EL1 },
> +	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0100), Op2(0b001),
> +	  NULL, get_id_aa64pfr1_el1, ID_AA64PFR1_EL1 },
> +	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0101), Op2(0b000),
> +	  NULL, get_id_aa64dfr0_el1, ID_AA64DFR0_EL1 },
> +	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0101), Op2(0b001),
> +	  NULL, get_id_aa64dfr1_el1, ID_AA64DFR1_EL1 },
> +	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0101), Op2(0b100),
> +	  NULL, get_id_aa64afr0_el1, ID_AA64AFR0_EL1 },
> +	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0101), Op2(0b101),
> +	  NULL, get_id_aa64afr1_el1, ID_AA64AFR1_EL1 },
> +	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0110), Op2(0b000),
> +	  NULL, get_id_aa64isar0_el1, ID_AA64ISAR0_EL1 },
> +	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0110), Op2(0b001),
> +	  NULL, get_id_aa64isar1_el1, ID_AA64ISAR1_EL1 },
> +	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0111), Op2(0b000),
> +	  NULL, get_id_aa64mmfr0_el1, ID_AA64MMFR0_EL1 },
> +	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0111), Op2(0b001),
> +	  NULL, get_id_aa64mmfr1_el1, ID_AA64MMFR1_EL1 },
> +	{ Op0(0b11), Op1(0b001), CRn(0b0000), CRm(0b0000), Op2(0b001),
> +	  NULL, get_clidr_el1, CLIDR_EL1 },
> +	{ Op0(0b11), Op1(0b001), CRn(0b0000), CRm(0b0000), Op2(0b111),
> +	  NULL, get_aidr_el1, AIDR_EL1 },
> +	{ Op0(0b11), Op1(0b011), CRn(0b0000), CRm(0b0000), Op2(0b001),
> +	  NULL, get_ctr_el0, CTR_EL0 },
> +};
> +
>  /* Target specific emulation tables */
>  static struct kvm_sys_reg_target_table *target_tables[KVM_ARM_NUM_TARGETS];
>  
> @@ -1822,83 +1938,6 @@ static const struct sys_reg_desc *index_to_sys_reg_desc(struct kvm_vcpu *vcpu,
>  	return r;
>  }
>  
> -/*
> - * These are the invariant sys_reg registers: we let the guest see the
> - * host versions of these, so they're part of the guest state.
> - *
> - * A future CPU may provide a mechanism to present different values to
> - * the guest, or a future kvm may trap them.
> - */
> -
> -#define FUNCTION_INVARIANT(reg)						\
> -	static void get_##reg(struct kvm_vcpu *v,			\
> -			      const struct sys_reg_desc *r)		\
> -	{								\
> -		((struct sys_reg_desc *)r)->val = read_sysreg(reg);	\
> -	}
> -
> -FUNCTION_INVARIANT(midr_el1)
> -FUNCTION_INVARIANT(ctr_el0)
> -FUNCTION_INVARIANT(revidr_el1)
> -FUNCTION_INVARIANT(id_pfr0_el1)
> -FUNCTION_INVARIANT(id_pfr1_el1)
> -FUNCTION_INVARIANT(id_dfr0_el1)
> -FUNCTION_INVARIANT(id_afr0_el1)
> -FUNCTION_INVARIANT(id_mmfr0_el1)
> -FUNCTION_INVARIANT(id_mmfr1_el1)
> -FUNCTION_INVARIANT(id_mmfr2_el1)
> -FUNCTION_INVARIANT(id_mmfr3_el1)
> -FUNCTION_INVARIANT(id_isar0_el1)
> -FUNCTION_INVARIANT(id_isar1_el1)
> -FUNCTION_INVARIANT(id_isar2_el1)
> -FUNCTION_INVARIANT(id_isar3_el1)
> -FUNCTION_INVARIANT(id_isar4_el1)
> -FUNCTION_INVARIANT(id_isar5_el1)
> -FUNCTION_INVARIANT(clidr_el1)
> -FUNCTION_INVARIANT(aidr_el1)
> -
> -/* ->val is filled in by kvm_sys_reg_table_init() */
> -static struct sys_reg_desc invariant_sys_regs[] = {
> -	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0000), Op2(0b000),
> -	  NULL, get_midr_el1 },
> -	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0000), Op2(0b110),
> -	  NULL, get_revidr_el1 },
> -	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0001), Op2(0b000),
> -	  NULL, get_id_pfr0_el1 },
> -	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0001), Op2(0b001),
> -	  NULL, get_id_pfr1_el1 },
> -	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0001), Op2(0b010),
> -	  NULL, get_id_dfr0_el1 },
> -	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0001), Op2(0b011),
> -	  NULL, get_id_afr0_el1 },
> -	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0001), Op2(0b100),
> -	  NULL, get_id_mmfr0_el1 },
> -	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0001), Op2(0b101),
> -	  NULL, get_id_mmfr1_el1 },
> -	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0001), Op2(0b110),
> -	  NULL, get_id_mmfr2_el1 },
> -	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0001), Op2(0b111),
> -	  NULL, get_id_mmfr3_el1 },
> -	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0010), Op2(0b000),
> -	  NULL, get_id_isar0_el1 },
> -	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0010), Op2(0b001),
> -	  NULL, get_id_isar1_el1 },
> -	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0010), Op2(0b010),
> -	  NULL, get_id_isar2_el1 },
> -	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0010), Op2(0b011),
> -	  NULL, get_id_isar3_el1 },
> -	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0010), Op2(0b100),
> -	  NULL, get_id_isar4_el1 },
> -	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0010), Op2(0b101),
> -	  NULL, get_id_isar5_el1 },
> -	{ Op0(0b11), Op1(0b001), CRn(0b0000), CRm(0b0000), Op2(0b001),
> -	  NULL, get_clidr_el1 },
> -	{ Op0(0b11), Op1(0b001), CRn(0b0000), CRm(0b0000), Op2(0b111),
> -	  NULL, get_aidr_el1 },
> -	{ Op0(0b11), Op1(0b011), CRn(0b0000), CRm(0b0000), Op2(0b001),
> -	  NULL, get_ctr_el0 },
> -};
> -
>  static int reg_from_user(u64 *val, const void __user *uaddr, u64 id)
>  {
>  	if (copy_from_user(val, uaddr, KVM_REG_SIZE(id)) != 0)
> -- 
> 2.0.4
> 
> 


* Re: [PATCH RFC 3/7] ARM64: KVM: Reset ID registers when creating the VCPUs
  2017-01-16  9:33 ` [PATCH RFC 3/7] ARM64: KVM: Reset ID registers when creating the VCPUs Shannon Zhao
@ 2017-01-28 13:32   ` Andrew Jones
  0 siblings, 0 replies; 30+ messages in thread
From: Andrew Jones @ 2017-01-28 13:32 UTC (permalink / raw)
  To: Shannon Zhao; +Cc: marc.zyngier, qemu-arm, wu.wubin, kvmarm

On Mon, Jan 16, 2017 at 05:33:30PM +0800, Shannon Zhao wrote:
> From: Shannon Zhao <shannon.zhao@linaro.org>
> 
> Reset ID registers when creating the VCPUs and store the values per
> VCPU. Also modify the get_invariant_sys_reg and set_invariant_sys_reg
> to get/set the ID register from vcpu context.

The patch does more than that. It also prepares the table to be
used with get/set_one_reg. The name 'invariant' is less and less
fitting; it should probably be changed to 'id' in a patch preceding
this one, or we should just integrate these ID registers into the
sys_reg table. I still haven't seen the motivation for not doing
that.

> 
> Signed-off-by: Shannon Zhao <shannon.zhao@linaro.org>
> ---
>  arch/arm64/include/asm/kvm_coproc.h |  1 +
>  arch/arm64/kvm/guest.c              |  1 +
>  arch/arm64/kvm/sys_regs.c           | 58 ++++++++++++++++++-------------------
>  3 files changed, 31 insertions(+), 29 deletions(-)
> 
> diff --git a/arch/arm64/include/asm/kvm_coproc.h b/arch/arm64/include/asm/kvm_coproc.h
> index 0b52377..0801b66 100644
> --- a/arch/arm64/include/asm/kvm_coproc.h
> +++ b/arch/arm64/include/asm/kvm_coproc.h
> @@ -24,6 +24,7 @@
>  #include <linux/kvm_host.h>
>  
>  void kvm_reset_sys_regs(struct kvm_vcpu *vcpu);
> +void kvm_reset_id_sys_regs(struct kvm_vcpu *vcpu);
>  
>  struct kvm_sys_reg_table {
>  	const struct sys_reg_desc *table;
> diff --git a/arch/arm64/kvm/guest.c b/arch/arm64/kvm/guest.c
> index b37446a..92abe2b 100644
> --- a/arch/arm64/kvm/guest.c
> +++ b/arch/arm64/kvm/guest.c
> @@ -48,6 +48,7 @@ struct kvm_stats_debugfs_item debugfs_entries[] = {
>  
>  int kvm_arch_vcpu_setup(struct kvm_vcpu *vcpu)
>  {
> +	kvm_reset_id_sys_regs(vcpu);

This call should go in kvm_reset_vcpu

>  	return 0;
>  }
>  
> diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
> index bf71eb4..7c5fa03 100644
> --- a/arch/arm64/kvm/sys_regs.c
> +++ b/arch/arm64/kvm/sys_regs.c
> @@ -1440,11 +1440,11 @@ static const struct sys_reg_desc cp15_64_regs[] = {
>   * the guest, or a future kvm may trap them.
>   */
>  
> -#define FUNCTION_INVARIANT(reg)						\
> -	static void get_##reg(struct kvm_vcpu *v,			\
> -			      const struct sys_reg_desc *r)		\
> +#define FUNCTION_INVARIANT(register)					\
> +	static void get_##register(struct kvm_vcpu *v,			\
> +				   const struct sys_reg_desc *r)	\
>  	{								\
> -		((struct sys_reg_desc *)r)->val = read_sysreg(reg);	\
> +		vcpu_id_sys_reg(v, r->reg) = read_sysreg(register);	\
>  	}
>  
>  FUNCTION_INVARIANT(midr_el1)
> @@ -1480,7 +1480,6 @@ FUNCTION_INVARIANT(id_aa64mmfr1_el1)
>  FUNCTION_INVARIANT(clidr_el1)
>  FUNCTION_INVARIANT(aidr_el1)
>  
> -/* ->val is filled in by kvm_sys_reg_table_init() */
>  static struct sys_reg_desc invariant_sys_regs[] = {
>  	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0000), Op2(0b000),
>  	  NULL, get_midr_el1, MIDR_EL1 },
> @@ -1952,43 +1951,43 @@ static int reg_to_user(void __user *uaddr, const u64 *val, u64 id)
>  	return 0;
>  }
>  
> -static int get_invariant_sys_reg(u64 id, void __user *uaddr)
> +static int get_invariant_sys_reg(struct kvm_vcpu *vcpu,
> +				 const struct kvm_one_reg *reg)
>  {
>  	struct sys_reg_params params;
>  	const struct sys_reg_desc *r;
> +	void __user *uaddr = (void __user *)(unsigned long)reg->addr;
>  
> -	if (!index_to_params(id, &params))
> +	if (!index_to_params(reg->id, &params))
>  		return -ENOENT;
>  
>  	r = find_reg(&params, invariant_sys_regs, ARRAY_SIZE(invariant_sys_regs));
>  	if (!r)
>  		return -ENOENT;
>  
> -	return reg_to_user(uaddr, &r->val, id);
> +	if (r->get_user)
> +		return (r->get_user)(vcpu, r, reg, uaddr);
> +
> +	return reg_to_user(uaddr, &vcpu_id_sys_reg(vcpu, r->reg), reg->id);
>  }
>  
> -static int set_invariant_sys_reg(u64 id, void __user *uaddr)
> +static int set_invariant_sys_reg(struct kvm_vcpu *vcpu,
> +				 const struct kvm_one_reg *reg)
>  {
>  	struct sys_reg_params params;
>  	const struct sys_reg_desc *r;
> -	int err;
> -	u64 val = 0; /* Make sure high bits are 0 for 32-bit regs */
> +	void __user *uaddr = (void __user *)(unsigned long)reg->addr;
>  
> -	if (!index_to_params(id, &params))
> +	if (!index_to_params(reg->id, &params))
>  		return -ENOENT;
>  	r = find_reg(&params, invariant_sys_regs, ARRAY_SIZE(invariant_sys_regs));
>  	if (!r)
>  		return -ENOENT;
>  
> -	err = reg_from_user(&val, uaddr, id);
> -	if (err)
> -		return err;
> -
> -	/* This is what we mean by invariant: you can't change it. */
> -	if (r->val != val)
> -		return -EINVAL;
> +	if (r->set_user)
> +		return (r->set_user)(vcpu, r, reg, uaddr);
>  
> -	return 0;
> +	return reg_from_user(&vcpu_id_sys_reg(vcpu, r->reg), uaddr, reg->id);
>  }
>  
>  static bool is_valid_cache(u32 val)
> @@ -2086,7 +2085,7 @@ int kvm_arm_sys_reg_get_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg
>  
>  	r = index_to_sys_reg_desc(vcpu, reg->id);
>  	if (!r)
> -		return get_invariant_sys_reg(reg->id, uaddr);
> +		return get_invariant_sys_reg(vcpu, reg);
>  
>  	if (r->get_user)
>  		return (r->get_user)(vcpu, r, reg, uaddr);
> @@ -2107,7 +2106,7 @@ int kvm_arm_sys_reg_set_reg(struct kvm_vcpu *vcpu, const struct kvm_one_reg *reg
>  
>  	r = index_to_sys_reg_desc(vcpu, reg->id);
>  	if (!r)
> -		return set_invariant_sys_reg(reg->id, uaddr);
> +		return set_invariant_sys_reg(vcpu, reg);
>  
>  	if (r->set_user)
>  		return (r->set_user)(vcpu, r, reg, uaddr);
> @@ -2252,7 +2251,6 @@ static int check_sysreg_table(const struct sys_reg_desc *table, unsigned int n)
>  void kvm_sys_reg_table_init(void)
>  {
>  	unsigned int i;
> -	struct sys_reg_desc clidr;
>  
>  	/* Make sure tables are unique and in order. */
>  	BUG_ON(check_sysreg_table(sys_reg_descs, ARRAY_SIZE(sys_reg_descs)));
> @@ -2262,10 +2260,6 @@ void kvm_sys_reg_table_init(void)
>  	BUG_ON(check_sysreg_table(cp15_64_regs, ARRAY_SIZE(cp15_64_regs)));
>  	BUG_ON(check_sysreg_table(invariant_sys_regs, ARRAY_SIZE(invariant_sys_regs)));
>  
> -	/* We abuse the reset function to overwrite the table itself. */
> -	for (i = 0; i < ARRAY_SIZE(invariant_sys_regs); i++)
> -		invariant_sys_regs[i].reset(NULL, &invariant_sys_regs[i]);
> -
>  	/*
>  	 * CLIDR format is awkward, so clean it up.  See ARM B4.1.20:
>  	 *
> @@ -2276,8 +2270,7 @@ void kvm_sys_reg_table_init(void)
>  	 *   value of 0b000, the values of Ctype4 to Ctype7 must be
>  	 *   ignored.
>  	 */
> -	get_clidr_el1(NULL, &clidr); /* Ugly... */
> -	cache_levels = clidr.val;
> +	cache_levels = read_sysreg(clidr_el1);

Hmm... I think this ugly code was here to ensure cache_levels was
derived from the guest view of clidr_el1. With this series that
may no longer be the case, unless we can and do enforce it. We need
Marc to advise here.

>  	for (i = 0; i < 7; i++)
>  		if (((cache_levels >> (i*3)) & 7) == 0)
>  			break;
> @@ -2310,3 +2303,10 @@ void kvm_reset_sys_regs(struct kvm_vcpu *vcpu)
>  		if (vcpu_sys_reg(vcpu, num) == 0x4242424242424242)
>  			panic("Didn't reset vcpu_sys_reg(%zi)", num);
>  }
> +
> +void kvm_reset_id_sys_regs(struct kvm_vcpu *vcpu)
> +{
> +	unsigned int i;
> +	for (i = 0; i < ARRAY_SIZE(invariant_sys_regs); i++)
> +		invariant_sys_regs[i].reset(vcpu, &invariant_sys_regs[i]);

This function should be dropped and the calls to it replaced with

reset_sys_reg_descs(vcpu, invariant_sys_regs, ARRAY_SIZE(invariant_sys_regs))

or whatever the table gets renamed to, now that it's not invariant. Or
all this just goes away if we move the registers into the sys_reg
table...

> +}
> -- 
> 2.0.4
> 
>

Thanks,
drew 

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [PATCH RFC 4/7] ARM64: KVM: emulate accessing ID registers
  2017-01-16  9:33 ` [PATCH RFC 4/7] ARM64: KVM: emulate accessing ID registers Shannon Zhao
@ 2017-01-28 13:49   ` Andrew Jones
  2017-03-09 10:28   ` Christoffer Dall
  1 sibling, 0 replies; 30+ messages in thread
From: Andrew Jones @ 2017-01-28 13:49 UTC (permalink / raw)
  To: Shannon Zhao; +Cc: marc.zyngier, qemu-arm, wu.wubin, kvmarm

On Mon, Jan 16, 2017 at 05:33:31PM +0800, Shannon Zhao wrote:
> From: Shannon Zhao <shannon.zhao@linaro.org>
> 
> Signed-off-by: Shannon Zhao <shannon.zhao@linaro.org>
> ---
>  arch/arm64/kvm/sys_regs.c | 83 ++++++++++++++++++++++++++++-------------------
>  1 file changed, 50 insertions(+), 33 deletions(-)
> 
> diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
> index 7c5fa03..f613e29 100644
> --- a/arch/arm64/kvm/sys_regs.c
> +++ b/arch/arm64/kvm/sys_regs.c
> @@ -1480,71 +1480,84 @@ FUNCTION_INVARIANT(id_aa64mmfr1_el1)
>  FUNCTION_INVARIANT(clidr_el1)
>  FUNCTION_INVARIANT(aidr_el1)
>  
> +static bool access_id_reg(struct kvm_vcpu *vcpu,
> +			  struct sys_reg_params *p,
> +			  const struct sys_reg_desc *r)
> +{
> +	if (p->is_write) {
> +		vcpu_id_sys_reg(vcpu, r->reg) = p->regval;

Hmm, most/all id registers are write-ignore, right? If there
are some that are not, then they should get their own handler.
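
I.e., for the common case the handler could presumably just ignore
writes. A minimal sketch (same signature as in the patch):

 static bool access_id_reg(struct kvm_vcpu *vcpu,
                           struct sys_reg_params *p,
                           const struct sys_reg_desc *r)
 {
         if (p->is_write)
                 return true;    /* write-ignore */

         p->regval = vcpu_id_sys_reg(vcpu, r->reg);
         return true;
 }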

> +	} else {
> +		p->regval = vcpu_id_sys_reg(vcpu, r->reg);
> +	}
> +
> +	return true;
> +}
> +
>  static struct sys_reg_desc invariant_sys_regs[] = {
>  	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0000), Op2(0b000),
> -	  NULL, get_midr_el1, MIDR_EL1 },
> +	  access_id_reg, get_midr_el1, MIDR_EL1 },
>  	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0000), Op2(0b110),
> -	  NULL, get_revidr_el1, REVIDR_EL1 },
> +	  access_id_reg, get_revidr_el1, REVIDR_EL1 },
>  	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0001), Op2(0b000),
> -	  NULL, get_id_pfr0_el1, ID_PFR0_EL1 },
> +	  access_id_reg, get_id_pfr0_el1, ID_PFR0_EL1 },
>  	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0001), Op2(0b001),
> -	  NULL, get_id_pfr1_el1, ID_PFR1_EL1 },
> +	  access_id_reg, get_id_pfr1_el1, ID_PFR1_EL1 },
>  	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0001), Op2(0b010),
> -	  NULL, get_id_dfr0_el1, ID_DFR0_EL1 },
> +	  access_id_reg, get_id_dfr0_el1, ID_DFR0_EL1 },
>  	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0001), Op2(0b011),
> -	  NULL, get_id_afr0_el1, ID_AFR0_EL1 },
> +	  access_id_reg, get_id_afr0_el1, ID_AFR0_EL1 },
>  	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0001), Op2(0b100),
> -	  NULL, get_id_mmfr0_el1, ID_MMFR0_EL1 },
> +	  access_id_reg, get_id_mmfr0_el1, ID_MMFR0_EL1 },
>  	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0001), Op2(0b101),
> -	  NULL, get_id_mmfr1_el1, ID_MMFR1_EL1 },
> +	  access_id_reg, get_id_mmfr1_el1, ID_MMFR1_EL1 },
>  	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0001), Op2(0b110),
> -	  NULL, get_id_mmfr2_el1, ID_MMFR2_EL1 },
> +	  access_id_reg, get_id_mmfr2_el1, ID_MMFR2_EL1 },
>  	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0001), Op2(0b111),
> -	  NULL, get_id_mmfr3_el1, ID_MMFR3_EL1 },
> +	  access_id_reg, get_id_mmfr3_el1, ID_MMFR3_EL1 },
>  	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0010), Op2(0b000),
> -	  NULL, get_id_isar0_el1, ID_ISAR0_EL1 },
> +	  access_id_reg, get_id_isar0_el1, ID_ISAR0_EL1 },
>  	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0010), Op2(0b001),
> -	  NULL, get_id_isar1_el1, ID_ISAR1_EL1 },
> +	  access_id_reg, get_id_isar1_el1, ID_ISAR1_EL1 },
>  	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0010), Op2(0b010),
> -	  NULL, get_id_isar2_el1, ID_ISAR2_EL1 },
> +	  access_id_reg, get_id_isar2_el1, ID_ISAR2_EL1 },
>  	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0010), Op2(0b011),
> -	  NULL, get_id_isar3_el1, ID_ISAR3_EL1 },
> +	  access_id_reg, get_id_isar3_el1, ID_ISAR3_EL1 },
>  	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0010), Op2(0b100),
> -	  NULL, get_id_isar4_el1, ID_ISAR4_EL1 },
> +	  access_id_reg, get_id_isar4_el1, ID_ISAR4_EL1 },
>  	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0010), Op2(0b101),
> -	  NULL, get_id_isar5_el1, ID_ISAR5_EL1 },
> +	  access_id_reg, get_id_isar5_el1, ID_ISAR5_EL1 },
>  	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0011), Op2(0b000),
> -	  NULL, get_mvfr0_el1, MVFR0_EL1 },
> +	  access_id_reg, get_mvfr0_el1, MVFR0_EL1 },
>  	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0011), Op2(0b001),
> -	  NULL, get_mvfr1_el1, MVFR1_EL1 },
> +	  access_id_reg, get_mvfr1_el1, MVFR1_EL1 },
>  	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0011), Op2(0b010),
> -	  NULL, get_mvfr2_el1, MVFR2_EL1 },
> +	  access_id_reg, get_mvfr2_el1, MVFR2_EL1 },
>  	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0100), Op2(0b000),
> -	  NULL, get_id_aa64pfr0_el1, ID_AA64PFR0_EL1 },
> +	  access_id_reg, get_id_aa64pfr0_el1, ID_AA64PFR0_EL1 },
>  	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0100), Op2(0b001),
> -	  NULL, get_id_aa64pfr1_el1, ID_AA64PFR1_EL1 },
> +	  access_id_reg, get_id_aa64pfr1_el1, ID_AA64PFR1_EL1 },
>  	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0101), Op2(0b000),
> -	  NULL, get_id_aa64dfr0_el1, ID_AA64DFR0_EL1 },
> +	  access_id_reg, get_id_aa64dfr0_el1, ID_AA64DFR0_EL1 },
>  	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0101), Op2(0b001),
> -	  NULL, get_id_aa64dfr1_el1, ID_AA64DFR1_EL1 },
> +	  access_id_reg, get_id_aa64dfr1_el1, ID_AA64DFR1_EL1 },
>  	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0101), Op2(0b100),
> -	  NULL, get_id_aa64afr0_el1, ID_AA64AFR0_EL1 },
> +	  access_id_reg, get_id_aa64afr0_el1, ID_AA64AFR0_EL1 },
>  	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0101), Op2(0b101),
> -	  NULL, get_id_aa64afr1_el1, ID_AA64AFR1_EL1 },
> +	  access_id_reg, get_id_aa64afr1_el1, ID_AA64AFR1_EL1 },
>  	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0110), Op2(0b000),
> -	  NULL, get_id_aa64isar0_el1, ID_AA64ISAR0_EL1 },
> +	  access_id_reg, get_id_aa64isar0_el1, ID_AA64ISAR0_EL1 },
>  	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0110), Op2(0b001),
> -	  NULL, get_id_aa64isar1_el1, ID_AA64ISAR1_EL1 },
> +	  access_id_reg, get_id_aa64isar1_el1, ID_AA64ISAR1_EL1 },
>  	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0111), Op2(0b000),
> -	  NULL, get_id_aa64mmfr0_el1, ID_AA64MMFR0_EL1 },
> +	  access_id_reg, get_id_aa64mmfr0_el1, ID_AA64MMFR0_EL1 },
>  	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0111), Op2(0b001),
> -	  NULL, get_id_aa64mmfr1_el1, ID_AA64MMFR1_EL1 },
> +	  access_id_reg, get_id_aa64mmfr1_el1, ID_AA64MMFR1_EL1 },
>  	{ Op0(0b11), Op1(0b001), CRn(0b0000), CRm(0b0000), Op2(0b001),
> -	  NULL, get_clidr_el1, CLIDR_EL1 },
> +	  access_id_reg, get_clidr_el1, CLIDR_EL1 },
>  	{ Op0(0b11), Op1(0b001), CRn(0b0000), CRm(0b0000), Op2(0b111),
> -	  NULL, get_aidr_el1, AIDR_EL1 },
> +	  access_id_reg, get_aidr_el1, AIDR_EL1 },
>  	{ Op0(0b11), Op1(0b011), CRn(0b0000), CRm(0b0000), Op2(0b001),
> -	  NULL, get_ctr_el0, CTR_EL0 },
> +	  access_id_reg, get_ctr_el0, CTR_EL0 },
>  };
>  
>  /* Target specific emulation tables */
> @@ -1809,8 +1822,12 @@ static int emulate_sys_reg(struct kvm_vcpu *vcpu,
>  
>  	/* Search target-specific then generic table. */
>  	r = find_reg(params, table, num);
> -	if (!r)
> +	if (!r) {
>  		r = find_reg(params, sys_reg_descs, ARRAY_SIZE(sys_reg_descs));
> +		if (!r)
> +			r = find_reg(params, invariant_sys_regs,
> +				     ARRAY_SIZE(invariant_sys_regs));
> +	}
>  
>  	if (likely(r)) {
>  		/*
> -- 
> 2.0.4
> 
> 

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [PATCH RFC 5/7] ARM64: KVM: Support cross type vCPU
  2017-01-16  9:33 ` [PATCH RFC 5/7] ARM64: KVM: Support cross type vCPU Shannon Zhao
@ 2017-01-28 14:47   ` Andrew Jones
  2017-03-09 10:56     ` Christoffer Dall
  0 siblings, 1 reply; 30+ messages in thread
From: Andrew Jones @ 2017-01-28 14:47 UTC (permalink / raw)
  To: Shannon Zhao; +Cc: marc.zyngier, qemu-arm, wu.wubin, kvmarm

On Mon, Jan 16, 2017 at 05:33:32PM +0800, Shannon Zhao wrote:
> From: Shannon Zhao <shannon.zhao@linaro.org>
> 
> Add a capability to tell userspace that KVM supports cross type vCPU.
> Add a cpu feature for userspace to set when it doesn't use host type
> vCPU and kvm_vcpu_preferred_target return the host MIDR register value
> so that userspace can check whether its requested vCPU type matches the
> one of physical CPU and if so, KVM will not trap ID registers even
> though userspace doesn't specify -cpu host.
> Guest accesses MIDR through VPIDR_EL2 so we save/restore it no matter
> it's a cross type vCPU.
> 
> Signed-off-by: Shannon Zhao <shannon.zhao@linaro.org>
> ---
>  arch/arm/kvm/arm.c                   | 10 ++++++++--
>  arch/arm64/include/asm/kvm_emulate.h |  3 +++
>  arch/arm64/include/asm/kvm_host.h    |  3 ++-
>  arch/arm64/include/uapi/asm/kvm.h    |  1 +
>  arch/arm64/kvm/guest.c               | 17 ++++++++++++++++-
>  arch/arm64/kvm/hyp/sysreg-sr.c       |  2 ++
>  include/uapi/linux/kvm.h             |  1 +
>  7 files changed, 33 insertions(+), 4 deletions(-)
> 
> diff --git a/arch/arm/kvm/arm.c b/arch/arm/kvm/arm.c
> index 1167678..bdceb19 100644
> --- a/arch/arm/kvm/arm.c
> +++ b/arch/arm/kvm/arm.c
> @@ -207,6 +207,7 @@ int kvm_vm_ioctl_check_extension(struct kvm *kvm, long ext)
>  	case KVM_CAP_ARM_PSCI_0_2:
>  	case KVM_CAP_READONLY_MEM:
>  	case KVM_CAP_MP_STATE:
> +	case KVM_CAP_ARM_CROSS_VCPU:
>  		r = 1;
>  		break;
>  	case KVM_CAP_COALESCED_MMIO:
> @@ -809,8 +810,9 @@ static int kvm_vcpu_set_target(struct kvm_vcpu *vcpu,
>  {
>  	unsigned int i;
>  	int phys_target = kvm_target_cpu();
> +	bool cross_vcpu = kvm_vcpu_has_feature_cross_cpu(init);
>  
> -	if (init->target != phys_target)
> +	if (!cross_vcpu && init->target != phys_target)
>  		return -EINVAL;

I'm not sure we need the vcpu feature bit. I think qemu should be
allowed to try any target (if using -cpu host it will try the
kvm preferred target). kvm should check that the input target is
a known target and that it is compatible with the phys_target,
otherwise -EINVAL.

>  
>  	/*
> @@ -839,7 +841,11 @@ static int kvm_vcpu_set_target(struct kvm_vcpu *vcpu,
>  			set_bit(i, vcpu->arch.features);
>  	}
>  
> -	vcpu->arch.target = phys_target;
> +	if (!cross_vcpu)
> +		vcpu->arch.target = phys_target;
> +	else
> +		/* Use generic ARMv8 target for cross type vcpu. */
> +		vcpu->arch.target = KVM_ARM_TARGET_GENERIC_V8;

We want to be able to select a specific target type. So, if
init->target is approved by kvm's checking, then this should
just be 
 vcpu->arch.target = init->target

If, for starters, we only want to support preferred and generic,
then kvm's check is easy
 init->target == phys_target || init->target == KVM_ARM_TARGET_GENERIC_V8

>  
>  	/* Now we know what it is, we can reset it. */
>  	return kvm_reset_vcpu(vcpu);
> diff --git a/arch/arm64/include/asm/kvm_emulate.h b/arch/arm64/include/asm/kvm_emulate.h
> index f5ea0ba..bca7d3a 100644
> --- a/arch/arm64/include/asm/kvm_emulate.h
> +++ b/arch/arm64/include/asm/kvm_emulate.h
> @@ -49,6 +49,9 @@ static inline void vcpu_reset_hcr(struct kvm_vcpu *vcpu)
>  		vcpu->arch.hcr_el2 |= HCR_E2H;
>  	if (test_bit(KVM_ARM_VCPU_EL1_32BIT, vcpu->arch.features))
>  		vcpu->arch.hcr_el2 &= ~HCR_RW;
> +	if (test_bit(KVM_ARM_VCPU_CROSS, vcpu->arch.features))
> +		/* TODO: Set HCR_TID2 and trap cache registers */
> +		vcpu->arch.hcr_el2 |= HCR_TID3 | HCR_TID1 | HCR_TID0;

We could optimize this a bit. For each set_one_reg of an ID register
we can check whether it matches the host value. If it doesn't, we flag
that that register's trap group needs to be enabled (without touching
the other two groups). Here we would then only enable traps for the
groups that actually need them. Then again, chances are we'll almost
always need all three anyway...
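
Roughly (sketch; the id_trap_groups tracking and the ID_TRAP_* flags
are made up, HCR_TID* are the existing trap bits):

 /* in the ID register set_user handler */
 if (val != host_val)
         vcpu->arch.id_trap_groups |= id_reg_to_trap_group(rd);

 /* in vcpu_reset_hcr() */
 if (vcpu->arch.id_trap_groups & ID_TRAP_TID0)
         vcpu->arch.hcr_el2 |= HCR_TID0;
 if (vcpu->arch.id_trap_groups & ID_TRAP_TID1)
         vcpu->arch.hcr_el2 |= HCR_TID1;
 if (vcpu->arch.id_trap_groups & ID_TRAP_TID3)
         vcpu->arch.hcr_el2 |= HCR_TID3;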

>  }
>  
>  static inline unsigned long vcpu_get_hcr(struct kvm_vcpu *vcpu)
> diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
> index 6034f92..d0073d7 100644
> --- a/arch/arm64/include/asm/kvm_host.h
> +++ b/arch/arm64/include/asm/kvm_host.h
> @@ -41,10 +41,11 @@
>  
>  #define KVM_MAX_VCPUS VGIC_V3_MAX_CPUS
>  
> -#define KVM_VCPU_MAX_FEATURES 4
> +#define KVM_VCPU_MAX_FEATURES 5
>  
>  #define KVM_REQ_VCPU_EXIT	8
>  
> +bool kvm_vcpu_has_feature_cross_cpu(const struct kvm_vcpu_init *init);
>  int __attribute_const__ kvm_target_cpu(void);
>  int kvm_reset_vcpu(struct kvm_vcpu *vcpu);
>  int kvm_arch_dev_ioctl_check_extension(struct kvm *kvm, long ext);
> diff --git a/arch/arm64/include/uapi/asm/kvm.h b/arch/arm64/include/uapi/asm/kvm.h
> index 3051f86..7ba7117 100644
> --- a/arch/arm64/include/uapi/asm/kvm.h
> +++ b/arch/arm64/include/uapi/asm/kvm.h
> @@ -97,6 +97,7 @@ struct kvm_regs {
>  #define KVM_ARM_VCPU_EL1_32BIT		1 /* CPU running a 32bit VM */
>  #define KVM_ARM_VCPU_PSCI_0_2		2 /* CPU uses PSCI v0.2 */
>  #define KVM_ARM_VCPU_PMU_V3		3 /* Support guest PMUv3 */
> +#define KVM_ARM_VCPU_CROSS		4 /* Support cross type vCPU */
>  
>  struct kvm_vcpu_init {
>  	__u32 target;
> diff --git a/arch/arm64/kvm/guest.c b/arch/arm64/kvm/guest.c
> index 92abe2b..4a5ccab 100644
> --- a/arch/arm64/kvm/guest.c
> +++ b/arch/arm64/kvm/guest.c
> @@ -308,8 +308,15 @@ int __attribute_const__ kvm_target_cpu(void)
>  	return KVM_ARM_TARGET_GENERIC_V8;
>  }
>  
> +bool kvm_vcpu_has_feature_cross_cpu(const struct kvm_vcpu_init *init)
> +{
> +	return init->features[KVM_ARM_VCPU_CROSS / 32] &
> +	       (1 << (KVM_ARM_VCPU_CROSS % 32));
> +}
> +
>  int kvm_vcpu_preferred_target(struct kvm_vcpu_init *init)
>  {
> +	bool cross_vcpu = kvm_vcpu_has_feature_cross_cpu(init);
>  	int target = kvm_target_cpu();
>  
>  	if (target < 0)
> @@ -323,7 +330,15 @@ int kvm_vcpu_preferred_target(struct kvm_vcpu_init *init)
>  	 * specific features available for the preferred
>  	 * target type.
>  	 */
> -	init->target = (__u32)target;
> +	/* If userspace wants a cross type vcpu other that host, here we return
> +	 * the midr for userspace to check whether the midr value of requeseted
> +	 * vcpu matches the one of host. If they match, we will not trap CPU ID
> +	 * registers even though userspace doesn't specify -cpu host.
> +	 */
> +	if (cross_vcpu)
> +		init->target = read_cpuid_id();
> +	else
> +		init->target = (__u32)target;

This is an API change, so it would need a doc update too, but I don't
think it's necessary. kvm can do the checking itself; it has both the
requested target and the phys target available for comparison. Why
make userspace do it?

>  
>  	return 0;
>  }
> diff --git a/arch/arm64/kvm/hyp/sysreg-sr.c b/arch/arm64/kvm/hyp/sysreg-sr.c
> index 9341376..9461829 100644
> --- a/arch/arm64/kvm/hyp/sysreg-sr.c
> +++ b/arch/arm64/kvm/hyp/sysreg-sr.c
> @@ -45,6 +45,7 @@ static void __hyp_text __sysreg_save_common_state(struct kvm_cpu_context *ctxt)
>  
>  static void __hyp_text __sysreg_save_state(struct kvm_cpu_context *ctxt)
>  {
> +	ctxt->id_sys_regs[MIDR_EL1]     = read_sysreg(vpidr_el2);
>  	ctxt->sys_regs[MPIDR_EL1]	= read_sysreg(vmpidr_el2);
>  	ctxt->sys_regs[CSSELR_EL1]	= read_sysreg(csselr_el1);
>  	ctxt->sys_regs[SCTLR_EL1]	= read_sysreg_el1(sctlr);
> @@ -98,6 +99,7 @@ static void __hyp_text __sysreg_restore_common_state(struct kvm_cpu_context *ctx
>  
>  static void __hyp_text __sysreg_restore_state(struct kvm_cpu_context *ctxt)
>  {
> +	write_sysreg(ctxt->id_sys_regs[MIDR_EL1],       vpidr_el2);
>  	write_sysreg(ctxt->sys_regs[MPIDR_EL1],		vmpidr_el2);
>  	write_sysreg(ctxt->sys_regs[CSSELR_EL1],	csselr_el1);
>  	write_sysreg_el1(ctxt->sys_regs[SCTLR_EL1],	sctlr);
> diff --git a/include/uapi/linux/kvm.h b/include/uapi/linux/kvm.h
> index cac48ed..46115a2 100644
> --- a/include/uapi/linux/kvm.h
> +++ b/include/uapi/linux/kvm.h
> @@ -871,6 +871,7 @@ struct kvm_ppc_smmu_info {
>  #define KVM_CAP_S390_USER_INSTR0 130
>  #define KVM_CAP_MSI_DEVID 131
>  #define KVM_CAP_PPC_HTM 132
> +#define KVM_CAP_ARM_CROSS_VCPU 133
>  
>  #ifdef KVM_CAP_IRQ_ROUTING
>  
> -- 
> 2.0.4
> 
> 

Thanks,
drew

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [PATCH RFC 6/7] ARM64: KVM: Support heterogeneous system
  2017-01-16  9:33 ` [PATCH RFC 6/7] ARM64: KVM: Support heterogeneous system Shannon Zhao
@ 2017-01-28 14:55   ` Andrew Jones
  2017-03-09 15:21     ` Suzuki K Poulose
  2017-03-15 11:50     ` Christoffer Dall
  0 siblings, 2 replies; 30+ messages in thread
From: Andrew Jones @ 2017-01-28 14:55 UTC (permalink / raw)
  To: Shannon Zhao; +Cc: marc.zyngier, qemu-arm, wu.wubin, kvmarm

On Mon, Jan 16, 2017 at 05:33:33PM +0800, Shannon Zhao wrote:
> From: Shannon Zhao <shannon.zhao@linaro.org>
> 
> When initializing KVM, check whether physical hardware is a
> heterogeneous system through the MIDR values. If so, force userspace to
> set the KVM_ARM_VCPU_CROSS feature bit. Otherwise, it should fail to
> initialize VCPUs.
> 
> Signed-off-by: Shannon Zhao <shannon.zhao@linaro.org>
> ---
>  arch/arm/kvm/arm.c       | 26 ++++++++++++++++++++++++++
>  include/uapi/linux/kvm.h |  1 +
>  2 files changed, 27 insertions(+)
> 
> diff --git a/arch/arm/kvm/arm.c b/arch/arm/kvm/arm.c
> index bdceb19..21ec070 100644
> --- a/arch/arm/kvm/arm.c
> +++ b/arch/arm/kvm/arm.c
> @@ -46,6 +46,7 @@
>  #include <asm/kvm_coproc.h>
>  #include <asm/kvm_psci.h>
>  #include <asm/sections.h>
> +#include <asm/cputype.h>
>  
>  #ifdef REQUIRES_VIRT
>  __asm__(".arch_extension	virt");
> @@ -65,6 +66,7 @@ static unsigned int kvm_vmid_bits __read_mostly;
>  static DEFINE_SPINLOCK(kvm_vmid_lock);
>  
>  static bool vgic_present;
> +static bool heterogeneous_system;
>  
>  static DEFINE_PER_CPU(unsigned char, kvm_arm_hardware_enabled);
>  
> @@ -210,6 +212,9 @@ int kvm_vm_ioctl_check_extension(struct kvm *kvm, long ext)
>  	case KVM_CAP_ARM_CROSS_VCPU:
>  		r = 1;
>  		break;
> +	case KVM_CAP_ARM_HETEROGENEOUS:
> +		r = heterogeneous_system;
> +		break;

What's this for? When/why would userspace check it?

>  	case KVM_CAP_COALESCED_MMIO:
>  		r = KVM_COALESCED_MMIO_PAGE_OFFSET;
>  		break;
> @@ -812,6 +817,12 @@ static int kvm_vcpu_set_target(struct kvm_vcpu *vcpu,
>  	int phys_target = kvm_target_cpu();
>  	bool cross_vcpu = kvm_vcpu_has_feature_cross_cpu(init);
>  
> +	if (heterogeneous_system && !cross_vcpu) {
> +		kvm_err("%s:Host is a heterogeneous system, set KVM_ARM_VCPU_CROSS bit\n",
> +			__func__);
> +		return -EINVAL;
> +	}

Instead of forcing userspace to set a bit, why not just confirm the
target selected will work? E.g. if only generic works on a heterogeneous
system then just 

 if (heterogeneous_system && init->target != GENERIC)
    return -EINVAL

should work

> +
>  	if (!cross_vcpu && init->target != phys_target)
>  		return -EINVAL;
>  
> @@ -1397,6 +1408,11 @@ static void check_kvm_target_cpu(void *ret)
>  	*(int *)ret = kvm_target_cpu();
>  }
>  
> +static void get_physical_cpu_midr(void *midr)
> +{
> +	*(u32 *)midr = read_cpuid_id();
> +}
> +
>  struct kvm_vcpu *kvm_mpidr_to_vcpu(struct kvm *kvm, unsigned long mpidr)
>  {
>  	struct kvm_vcpu *vcpu;
> @@ -1417,6 +1433,7 @@ int kvm_arch_init(void *opaque)
>  {
>  	int err;
>  	int ret, cpu;
> +	u32 current_midr, midr;
>  
>  	if (!is_hyp_mode_available()) {
>  		kvm_err("HYP mode not available\n");
> @@ -1431,6 +1448,15 @@ int kvm_arch_init(void *opaque)
>  		}
>  	}
>  
> +	current_midr = read_cpuid_id();
> +	for_each_online_cpu(cpu) {
> +		smp_call_function_single(cpu, get_physical_cpu_midr, &midr, 1);
> +		if (current_midr != midr) {
> +			heterogeneous_system = true;
> +			break;
> +		}
> +	}

Is there no core kernel API that provides this?

> +
>  	err = init_common_resources();
>  	if (err)
>  		return err;
> diff --git a/include/uapi/linux/kvm.h b/include/uapi/linux/kvm.h
> index 46115a2..cc2b63d 100644
> --- a/include/uapi/linux/kvm.h
> +++ b/include/uapi/linux/kvm.h
> @@ -872,6 +872,7 @@ struct kvm_ppc_smmu_info {
>  #define KVM_CAP_MSI_DEVID 131
>  #define KVM_CAP_PPC_HTM 132
>  #define KVM_CAP_ARM_CROSS_VCPU 133
> +#define KVM_CAP_ARM_HETEROGENEOUS 134
>  
>  #ifdef KVM_CAP_IRQ_ROUTING
>  
> -- 
> 2.0.4
> 
>

drew 

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [PATCH RFC 7/7] ARM64: KVM: Add user set handler for id_aa64mmfr0_el1
  2017-01-16  9:33 ` [PATCH RFC 7/7] ARM64: KVM: Add user set handler for id_aa64mmfr0_el1 Shannon Zhao
@ 2017-01-28 15:22   ` Andrew Jones
  2017-03-09 12:52   ` Christoffer Dall
  1 sibling, 0 replies; 30+ messages in thread
From: Andrew Jones @ 2017-01-28 15:22 UTC (permalink / raw)
  To: Shannon Zhao; +Cc: marc.zyngier, qemu-arm, wu.wubin, kvmarm

On Mon, Jan 16, 2017 at 05:33:34PM +0800, Shannon Zhao wrote:
> From: Shannon Zhao <shannon.zhao@linaro.org>
> 
> Check if the configuration is fine.

I don't think we need this patch. Instead, when a target id register
needs to be sanity checked, userspace should do a get_one_reg first.
The value it gets will be the host value at that point, so it can do
the sanity checking itself before calling set_one_reg.
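
E.g., roughly, on the userspace side (sketch only, error handling
omitted; the check itself is whatever policy userspace wants to apply):

 __u64 val;
 struct kvm_one_reg reg = {
         .id   = ARM64_SYS_REG(3, 0, 0, 7, 0), /* ID_AA64MMFR0_EL1 */
         .addr = (__u64)&val,
 };

 ioctl(vcpu_fd, KVM_GET_ONE_REG, &reg);  /* returns the host value */
 /* ... apply userspace's own sanity checks / clamping to val ... */
 ioctl(vcpu_fd, KVM_SET_ONE_REG, &reg);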

drew

> 
> Signed-off-by: Shannon Zhao <shannon.zhao@linaro.org>
> ---
>  arch/arm64/kvm/sys_regs.c | 32 +++++++++++++++++++++++++++++++-
>  1 file changed, 31 insertions(+), 1 deletion(-)
> 
> diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
> index f613e29..9763b79 100644
> --- a/arch/arm64/kvm/sys_regs.c
> +++ b/arch/arm64/kvm/sys_regs.c
> @@ -1493,6 +1493,35 @@ static bool access_id_reg(struct kvm_vcpu *vcpu,
>  	return true;
>  }
>  
> +static int set_id_aa64mmfr0_el1(struct kvm_vcpu *vcpu,
> +				const struct sys_reg_desc *rd,
> +				const struct kvm_one_reg *reg,
> +				void __user *uaddr)
> +{
> +	u64 val, id_aa64mmfr0;
> +
> +	if (copy_from_user(&val, uaddr, KVM_REG_SIZE(reg->id)) != 0)
> +		return -EFAULT;
> +
> +	asm volatile("mrs %0, id_aa64mmfr0_el1\n" : "=r" (id_aa64mmfr0));
> +
> +	if ((val & GENMASK(3, 0)) > (id_aa64mmfr0 & GENMASK(3, 0)) ||
> +	    (val & GENMASK(7, 4)) > (id_aa64mmfr0 & GENMASK(7, 4)) ||
> +	    (val & GENMASK(11, 8)) > (id_aa64mmfr0 & GENMASK(11, 8)) ||
> +	    (val & GENMASK(15, 12)) > (id_aa64mmfr0 & GENMASK(15, 12)) ||
> +	    (val & GENMASK(19, 16)) > (id_aa64mmfr0 & GENMASK(19, 16)) ||
> +	    (val & GENMASK(23, 20)) > (id_aa64mmfr0 & GENMASK(23, 20)) ||
> +	    (val & GENMASK(27, 24)) < (id_aa64mmfr0 & GENMASK(27, 24)) ||
> +	    (val & GENMASK(31, 28)) < (id_aa64mmfr0 & GENMASK(31, 28))) {
> +		kvm_err("Wrong memory translation granule size/Physical Address range\n");
> +		return -EINVAL;
> +	}
> +
> +	vcpu_id_sys_reg(vcpu, rd->reg) = val & GENMASK(31, 0);
> +
> +	return 0;
> +}
> +
>  static struct sys_reg_desc invariant_sys_regs[] = {
>  	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0000), Op2(0b000),
>  	  access_id_reg, get_midr_el1, MIDR_EL1 },
> @@ -1549,7 +1578,8 @@ static struct sys_reg_desc invariant_sys_regs[] = {
>  	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0110), Op2(0b001),
>  	  access_id_reg, get_id_aa64isar1_el1, ID_AA64ISAR1_EL1 },
>  	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0111), Op2(0b000),
> -	  access_id_reg, get_id_aa64mmfr0_el1, ID_AA64MMFR0_EL1 },
> +	  access_id_reg, get_id_aa64mmfr0_el1, ID_AA64MMFR0_EL1,
> +	  0, NULL, set_id_aa64mmfr0_el1 },
>  	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0111), Op2(0b001),
>  	  access_id_reg, get_id_aa64mmfr1_el1, ID_AA64MMFR1_EL1 },
>  	{ Op0(0b11), Op1(0b001), CRn(0b0000), CRm(0b0000), Op2(0b001),
> -- 
> 2.0.4
> 
> 

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [PATCH RFC 2/7] ARM64: KVM: Add reset handlers for all ID registers
  2017-01-16  9:33 ` [PATCH RFC 2/7] ARM64: KVM: Add reset handlers for all " Shannon Zhao
  2017-01-28 12:36   ` Andrew Jones
@ 2017-03-09 10:19   ` Christoffer Dall
  1 sibling, 0 replies; 30+ messages in thread
From: Christoffer Dall @ 2017-03-09 10:19 UTC (permalink / raw)
  To: Shannon Zhao; +Cc: marc.zyngier, qemu-arm, kvmarm, wu.wubin

On Mon, Jan 16, 2017 at 05:33:29PM +0800, Shannon Zhao wrote:
> From: Shannon Zhao <shannon.zhao@linaro.org>
> 
> Move invariant_sys_regs before emulate_sys_reg so that it can be used
> later.
> 
> Signed-off-by: Shannon Zhao <shannon.zhao@linaro.org>
> ---
>  arch/arm64/kvm/sys_regs.c | 193 ++++++++++++++++++++++++++++------------------
>  1 file changed, 116 insertions(+), 77 deletions(-)
> 
> diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
> index 87e7e66..bf71eb4 100644
> --- a/arch/arm64/kvm/sys_regs.c
> +++ b/arch/arm64/kvm/sys_regs.c
> @@ -1432,6 +1432,122 @@ static const struct sys_reg_desc cp15_64_regs[] = {
>  	{ Op1( 1), CRn( 0), CRm( 2), Op2( 0), access_vm_reg, NULL, c2_TTBR1 },
>  };
>  
> +/*
> + * These are the invariant sys_reg registers: we let the guest see the
> + * host versions of these, so they're part of the guest state.
> + *
> + * A future CPU may provide a mechanism to present different values to
> + * the guest, or a future kvm may trap them.
> + */
> +
> +#define FUNCTION_INVARIANT(reg)						\
> +	static void get_##reg(struct kvm_vcpu *v,			\
> +			      const struct sys_reg_desc *r)		\
> +	{								\
> +		((struct sys_reg_desc *)r)->val = read_sysreg(reg);	\
> +	}
> +
> +FUNCTION_INVARIANT(midr_el1)
> +FUNCTION_INVARIANT(ctr_el0)
> +FUNCTION_INVARIANT(revidr_el1)
> +FUNCTION_INVARIANT(id_pfr0_el1)
> +FUNCTION_INVARIANT(id_pfr1_el1)
> +FUNCTION_INVARIANT(id_dfr0_el1)
> +FUNCTION_INVARIANT(id_afr0_el1)
> +FUNCTION_INVARIANT(id_mmfr0_el1)
> +FUNCTION_INVARIANT(id_mmfr1_el1)
> +FUNCTION_INVARIANT(id_mmfr2_el1)
> +FUNCTION_INVARIANT(id_mmfr3_el1)
> +FUNCTION_INVARIANT(id_isar0_el1)
> +FUNCTION_INVARIANT(id_isar1_el1)
> +FUNCTION_INVARIANT(id_isar2_el1)
> +FUNCTION_INVARIANT(id_isar3_el1)
> +FUNCTION_INVARIANT(id_isar4_el1)
> +FUNCTION_INVARIANT(id_isar5_el1)
> +FUNCTION_INVARIANT(mvfr0_el1)
> +FUNCTION_INVARIANT(mvfr1_el1)
> +FUNCTION_INVARIANT(mvfr2_el1)
> +FUNCTION_INVARIANT(id_aa64pfr0_el1)
> +FUNCTION_INVARIANT(id_aa64pfr1_el1)
> +FUNCTION_INVARIANT(id_aa64dfr0_el1)
> +FUNCTION_INVARIANT(id_aa64dfr1_el1)
> +FUNCTION_INVARIANT(id_aa64afr0_el1)
> +FUNCTION_INVARIANT(id_aa64afr1_el1)
> +FUNCTION_INVARIANT(id_aa64isar0_el1)
> +FUNCTION_INVARIANT(id_aa64isar1_el1)
> +FUNCTION_INVARIANT(id_aa64mmfr0_el1)
> +FUNCTION_INVARIANT(id_aa64mmfr1_el1)
> +FUNCTION_INVARIANT(clidr_el1)
> +FUNCTION_INVARIANT(aidr_el1)
> +
> +/* ->val is filled in by kvm_sys_reg_table_init() */
> +static struct sys_reg_desc invariant_sys_regs[] = {
> +	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0000), Op2(0b000),
> +	  NULL, get_midr_el1, MIDR_EL1 },
> +	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0000), Op2(0b110),
> +	  NULL, get_revidr_el1, REVIDR_EL1 },
> +	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0001), Op2(0b000),
> +	  NULL, get_id_pfr0_el1, ID_PFR0_EL1 },
> +	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0001), Op2(0b001),
> +	  NULL, get_id_pfr1_el1, ID_PFR1_EL1 },
> +	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0001), Op2(0b010),
> +	  NULL, get_id_dfr0_el1, ID_DFR0_EL1 },
> +	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0001), Op2(0b011),
> +	  NULL, get_id_afr0_el1, ID_AFR0_EL1 },
> +	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0001), Op2(0b100),
> +	  NULL, get_id_mmfr0_el1, ID_MMFR0_EL1 },
> +	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0001), Op2(0b101),
> +	  NULL, get_id_mmfr1_el1, ID_MMFR1_EL1 },
> +	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0001), Op2(0b110),
> +	  NULL, get_id_mmfr2_el1, ID_MMFR2_EL1 },
> +	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0001), Op2(0b111),
> +	  NULL, get_id_mmfr3_el1, ID_MMFR3_EL1 },
> +	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0010), Op2(0b000),
> +	  NULL, get_id_isar0_el1, ID_ISAR0_EL1 },
> +	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0010), Op2(0b001),
> +	  NULL, get_id_isar1_el1, ID_ISAR1_EL1 },
> +	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0010), Op2(0b010),
> +	  NULL, get_id_isar2_el1, ID_ISAR2_EL1 },
> +	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0010), Op2(0b011),
> +	  NULL, get_id_isar3_el1, ID_ISAR3_EL1 },
> +	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0010), Op2(0b100),
> +	  NULL, get_id_isar4_el1, ID_ISAR4_EL1 },
> +	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0010), Op2(0b101),
> +	  NULL, get_id_isar5_el1, ID_ISAR5_EL1 },
> +	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0011), Op2(0b000),
> +	  NULL, get_mvfr0_el1, MVFR0_EL1 },
> +	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0011), Op2(0b001),
> +	  NULL, get_mvfr1_el1, MVFR1_EL1 },
> +	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0011), Op2(0b010),
> +	  NULL, get_mvfr2_el1, MVFR2_EL1 },
> +	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0100), Op2(0b000),
> +	  NULL, get_id_aa64pfr0_el1, ID_AA64PFR0_EL1 },
> +	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0100), Op2(0b001),
> +	  NULL, get_id_aa64pfr1_el1, ID_AA64PFR1_EL1 },
> +	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0101), Op2(0b000),
> +	  NULL, get_id_aa64dfr0_el1, ID_AA64DFR0_EL1 },
> +	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0101), Op2(0b001),
> +	  NULL, get_id_aa64dfr1_el1, ID_AA64DFR1_EL1 },
> +	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0101), Op2(0b100),
> +	  NULL, get_id_aa64afr0_el1, ID_AA64AFR0_EL1 },
> +	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0101), Op2(0b101),
> +	  NULL, get_id_aa64afr1_el1, ID_AA64AFR1_EL1 },
> +	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0110), Op2(0b000),
> +	  NULL, get_id_aa64isar0_el1, ID_AA64ISAR0_EL1 },
> +	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0110), Op2(0b001),
> +	  NULL, get_id_aa64isar1_el1, ID_AA64ISAR1_EL1 },
> +	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0111), Op2(0b000),
> +	  NULL, get_id_aa64mmfr0_el1, ID_AA64MMFR0_EL1 },
> +	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0111), Op2(0b001),
> +	  NULL, get_id_aa64mmfr1_el1, ID_AA64MMFR1_EL1 },
> +	{ Op0(0b11), Op1(0b001), CRn(0b0000), CRm(0b0000), Op2(0b001),
> +	  NULL, get_clidr_el1, CLIDR_EL1 },
> +	{ Op0(0b11), Op1(0b001), CRn(0b0000), CRm(0b0000), Op2(0b111),
> +	  NULL, get_aidr_el1, AIDR_EL1 },
> +	{ Op0(0b11), Op1(0b011), CRn(0b0000), CRm(0b0000), Op2(0b001),
> +	  NULL, get_ctr_el0, CTR_EL0 },
> +};
> +
>  /* Target specific emulation tables */
>  static struct kvm_sys_reg_target_table *target_tables[KVM_ARM_NUM_TARGETS];
>  
> @@ -1822,83 +1938,6 @@ static const struct sys_reg_desc *index_to_sys_reg_desc(struct kvm_vcpu *vcpu,
>  	return r;
>  }
>  
> -/*
> - * These are the invariant sys_reg registers: we let the guest see the
> - * host versions of these, so they're part of the guest state.
> - *
> - * A future CPU may provide a mechanism to present different values to
> - * the guest, or a future kvm may trap them.
> - */
> -
> -#define FUNCTION_INVARIANT(reg)						\
> -	static void get_##reg(struct kvm_vcpu *v,			\
> -			      const struct sys_reg_desc *r)		\
> -	{								\
> -		((struct sys_reg_desc *)r)->val = read_sysreg(reg);	\
> -	}
> -
> -FUNCTION_INVARIANT(midr_el1)
> -FUNCTION_INVARIANT(ctr_el0)
> -FUNCTION_INVARIANT(revidr_el1)
> -FUNCTION_INVARIANT(id_pfr0_el1)
> -FUNCTION_INVARIANT(id_pfr1_el1)
> -FUNCTION_INVARIANT(id_dfr0_el1)
> -FUNCTION_INVARIANT(id_afr0_el1)
> -FUNCTION_INVARIANT(id_mmfr0_el1)
> -FUNCTION_INVARIANT(id_mmfr1_el1)
> -FUNCTION_INVARIANT(id_mmfr2_el1)
> -FUNCTION_INVARIANT(id_mmfr3_el1)
> -FUNCTION_INVARIANT(id_isar0_el1)
> -FUNCTION_INVARIANT(id_isar1_el1)
> -FUNCTION_INVARIANT(id_isar2_el1)
> -FUNCTION_INVARIANT(id_isar3_el1)
> -FUNCTION_INVARIANT(id_isar4_el1)
> -FUNCTION_INVARIANT(id_isar5_el1)
> -FUNCTION_INVARIANT(clidr_el1)
> -FUNCTION_INVARIANT(aidr_el1)
> -
> -/* ->val is filled in by kvm_sys_reg_table_init() */
> -static struct sys_reg_desc invariant_sys_regs[] = {
> -	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0000), Op2(0b000),
> -	  NULL, get_midr_el1 },
> -	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0000), Op2(0b110),
> -	  NULL, get_revidr_el1 },
> -	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0001), Op2(0b000),
> -	  NULL, get_id_pfr0_el1 },
> -	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0001), Op2(0b001),
> -	  NULL, get_id_pfr1_el1 },
> -	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0001), Op2(0b010),
> -	  NULL, get_id_dfr0_el1 },
> -	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0001), Op2(0b011),
> -	  NULL, get_id_afr0_el1 },
> -	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0001), Op2(0b100),
> -	  NULL, get_id_mmfr0_el1 },
> -	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0001), Op2(0b101),
> -	  NULL, get_id_mmfr1_el1 },
> -	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0001), Op2(0b110),
> -	  NULL, get_id_mmfr2_el1 },
> -	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0001), Op2(0b111),
> -	  NULL, get_id_mmfr3_el1 },
> -	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0010), Op2(0b000),
> -	  NULL, get_id_isar0_el1 },
> -	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0010), Op2(0b001),
> -	  NULL, get_id_isar1_el1 },
> -	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0010), Op2(0b010),
> -	  NULL, get_id_isar2_el1 },
> -	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0010), Op2(0b011),
> -	  NULL, get_id_isar3_el1 },
> -	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0010), Op2(0b100),
> -	  NULL, get_id_isar4_el1 },
> -	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0010), Op2(0b101),
> -	  NULL, get_id_isar5_el1 },
> -	{ Op0(0b11), Op1(0b001), CRn(0b0000), CRm(0b0000), Op2(0b001),
> -	  NULL, get_clidr_el1 },
> -	{ Op0(0b11), Op1(0b001), CRn(0b0000), CRm(0b0000), Op2(0b111),
> -	  NULL, get_aidr_el1 },
> -	{ Op0(0b11), Op1(0b011), CRn(0b0000), CRm(0b0000), Op2(0b001),
> -	  NULL, get_ctr_el0 },
> -};
> -
>  static int reg_from_user(u64 *val, const void __user *uaddr, u64 id)
>  {
>  	if (copy_from_user(val, uaddr, KVM_REG_SIZE(id)) != 0)
> -- 
> 2.0.4
> 


Can we avoid this amount of churn by just adding a prototype above, or
by doing something else along those lines?

Thanks,
-Christoffer

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [PATCH RFC 4/7] ARM64: KVM: emulate accessing ID registers
  2017-01-16  9:33 ` [PATCH RFC 4/7] ARM64: KVM: emulate accessing ID registers Shannon Zhao
  2017-01-28 13:49   ` Andrew Jones
@ 2017-03-09 10:28   ` Christoffer Dall
  1 sibling, 0 replies; 30+ messages in thread
From: Christoffer Dall @ 2017-03-09 10:28 UTC (permalink / raw)
  To: Shannon Zhao; +Cc: marc.zyngier, qemu-arm, kvmarm, wu.wubin

On Mon, Jan 16, 2017 at 05:33:31PM +0800, Shannon Zhao wrote:
> From: Shannon Zhao <shannon.zhao@linaro.org>

Please provide a commit message.

Thanks,
-Christoffer

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [PATCH RFC 5/7] ARM64: KVM: Support cross type vCPU
  2017-01-28 14:47   ` Andrew Jones
@ 2017-03-09 10:56     ` Christoffer Dall
  0 siblings, 0 replies; 30+ messages in thread
From: Christoffer Dall @ 2017-03-09 10:56 UTC (permalink / raw)
  To: Andrew Jones; +Cc: marc.zyngier, qemu-arm, kvmarm, wu.wubin

On Sat, Jan 28, 2017 at 03:47:54PM +0100, Andrew Jones wrote:
> On Mon, Jan 16, 2017 at 05:33:32PM +0800, Shannon Zhao wrote:
> > From: Shannon Zhao <shannon.zhao@linaro.org>
> > 
> > Add a capability to tell userspace that KVM supports cross type vCPU.
> > Add a cpu feature for userspace to set when it doesn't use host type
> > vCPU and kvm_vcpu_preferred_target return the host MIDR register value
> > so that userspace can check whether its requested vCPU type matches the
> > one of physical CPU and if so, KVM will not trap ID registers even
> > though userspace doesn't specify -cpu host.
> > Guest accesses MIDR through VPIDR_EL2 so we save/restore it no matter
> > it's a cross type vCPU.
> > 
> > Signed-off-by: Shannon Zhao <shannon.zhao@linaro.org>
> > ---
> >  arch/arm/kvm/arm.c                   | 10 ++++++++--
> >  arch/arm64/include/asm/kvm_emulate.h |  3 +++
> >  arch/arm64/include/asm/kvm_host.h    |  3 ++-
> >  arch/arm64/include/uapi/asm/kvm.h    |  1 +
> >  arch/arm64/kvm/guest.c               | 17 ++++++++++++++++-
> >  arch/arm64/kvm/hyp/sysreg-sr.c       |  2 ++
> >  include/uapi/linux/kvm.h             |  1 +
> >  7 files changed, 33 insertions(+), 4 deletions(-)
> > 
> > diff --git a/arch/arm/kvm/arm.c b/arch/arm/kvm/arm.c
> > index 1167678..bdceb19 100644
> > --- a/arch/arm/kvm/arm.c
> > +++ b/arch/arm/kvm/arm.c
> > @@ -207,6 +207,7 @@ int kvm_vm_ioctl_check_extension(struct kvm *kvm, long ext)
> >  	case KVM_CAP_ARM_PSCI_0_2:
> >  	case KVM_CAP_READONLY_MEM:
> >  	case KVM_CAP_MP_STATE:
> > +	case KVM_CAP_ARM_CROSS_VCPU:
> >  		r = 1;
> >  		break;
> >  	case KVM_CAP_COALESCED_MMIO:
> > @@ -809,8 +810,9 @@ static int kvm_vcpu_set_target(struct kvm_vcpu *vcpu,
> >  {
> >  	unsigned int i;
> >  	int phys_target = kvm_target_cpu();
> > +	bool cross_vcpu = kvm_vcpu_has_feature_cross_cpu(init);
> >  
> > -	if (init->target != phys_target)
> > +	if (!cross_vcpu && init->target != phys_target)
> >  		return -EINVAL;
> 
> I'm not sure we need the vcpu feature bit. I think qemu should be
> allowed to try any target (if using -cpu host it will try the
> kvm preferred target). kvm should check that the input target is
> a known target and that it is compatible with the phys_target,
> otherwise -EINVAL.
> 

I agree. I think we just need to advertise the capability to user space
instead.

Thanks,
-Christoffer

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [PATCH RFC 7/7] ARM64: KVM: Add user set handler for id_aa64mmfr0_el1
  2017-01-16  9:33 ` [PATCH RFC 7/7] ARM64: KVM: Add user set handler for id_aa64mmfr0_el1 Shannon Zhao
  2017-01-28 15:22   ` Andrew Jones
@ 2017-03-09 12:52   ` Christoffer Dall
  2017-03-09 15:03     ` Mark Rutland
  1 sibling, 1 reply; 30+ messages in thread
From: Christoffer Dall @ 2017-03-09 12:52 UTC (permalink / raw)
  To: Shannon Zhao; +Cc: marc.zyngier, qemu-arm, kvmarm, wu.wubin

On Mon, Jan 16, 2017 at 05:33:34PM +0800, Shannon Zhao wrote:
> From: Shannon Zhao <shannon.zhao@linaro.org>
> 
> Check if the configuration is fine.

This commit message really needs some love and attention.

> 
> Signed-off-by: Shannon Zhao <shannon.zhao@linaro.org>
> ---
>  arch/arm64/kvm/sys_regs.c | 32 +++++++++++++++++++++++++++++++-
>  1 file changed, 31 insertions(+), 1 deletion(-)
> 
> diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
> index f613e29..9763b79 100644
> --- a/arch/arm64/kvm/sys_regs.c
> +++ b/arch/arm64/kvm/sys_regs.c
> @@ -1493,6 +1493,35 @@ static bool access_id_reg(struct kvm_vcpu *vcpu,
>  	return true;
>  }
>  
> +static int set_id_aa64mmfr0_el1(struct kvm_vcpu *vcpu,
> +				const struct sys_reg_desc *rd,
> +				const struct kvm_one_reg *reg,
> +				void __user *uaddr)
> +{
> +	u64 val, id_aa64mmfr0;
> +
> +	if (copy_from_user(&val, uaddr, KVM_REG_SIZE(reg->id)) != 0)
> +		return -EFAULT;
> +
> +	asm volatile("mrs %0, id_aa64mmfr0_el1\n" : "=r" (id_aa64mmfr0));

Doesn't the kernel have an abstraction for this already or a cached
value?

> +
> +	if ((val & GENMASK(3, 0)) > (id_aa64mmfr0 & GENMASK(3, 0)) ||
> +	    (val & GENMASK(7, 4)) > (id_aa64mmfr0 & GENMASK(7, 4)) ||
> +	    (val & GENMASK(11, 8)) > (id_aa64mmfr0 & GENMASK(11, 8)) ||
> +	    (val & GENMASK(15, 12)) > (id_aa64mmfr0 & GENMASK(15, 12)) ||
> +	    (val & GENMASK(19, 16)) > (id_aa64mmfr0 & GENMASK(19, 16)) ||
> +	    (val & GENMASK(23, 20)) > (id_aa64mmfr0 & GENMASK(23, 20)) ||
> +	    (val & GENMASK(27, 24)) < (id_aa64mmfr0 & GENMASK(27, 24)) ||
> +	    (val & GENMASK(31, 28)) < (id_aa64mmfr0 & GENMASK(31, 28))) {
> +		kvm_err("Wrong memory translation granule size/Physical Address range\n");
> +		return -EINVAL;
> +	}

This really needs some explanation as to what it's checking and what the
logic is.

> +
> +	vcpu_id_sys_reg(vcpu, rd->reg) = val & GENMASK(31, 0);
> +
> +	return 0;
> +}
> +
>  static struct sys_reg_desc invariant_sys_regs[] = {
>  	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0000), Op2(0b000),
>  	  access_id_reg, get_midr_el1, MIDR_EL1 },
> @@ -1549,7 +1578,8 @@ static struct sys_reg_desc invariant_sys_regs[] = {
>  	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0110), Op2(0b001),
>  	  access_id_reg, get_id_aa64isar1_el1, ID_AA64ISAR1_EL1 },
>  	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0111), Op2(0b000),
> -	  access_id_reg, get_id_aa64mmfr0_el1, ID_AA64MMFR0_EL1 },
> +	  access_id_reg, get_id_aa64mmfr0_el1, ID_AA64MMFR0_EL1,
> +	  0, NULL, set_id_aa64mmfr0_el1 },
>  	{ Op0(0b11), Op1(0b000), CRn(0b0000), CRm(0b0111), Op2(0b001),
>  	  access_id_reg, get_id_aa64mmfr1_el1, ID_AA64MMFR1_EL1 },
>  	{ Op0(0b11), Op1(0b001), CRn(0b0000), CRm(0b0000), Op2(0b001),
> -- 
> 2.0.4
> 
> 

Thanks,
-Christoffer

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [PATCH RFC 7/7] ARM64: KVM: Add user set handler for id_aa64mmfr0_el1
  2017-03-09 12:52   ` Christoffer Dall
@ 2017-03-09 15:03     ` Mark Rutland
  0 siblings, 0 replies; 30+ messages in thread
From: Mark Rutland @ 2017-03-09 15:03 UTC (permalink / raw)
  To: Christoffer Dall; +Cc: marc.zyngier, qemu-arm, kvmarm, wu.wubin

On Thu, Mar 09, 2017 at 04:52:18AM -0800, Christoffer Dall wrote:
> On Mon, Jan 16, 2017 at 05:33:34PM +0800, Shannon Zhao wrote:
> > From: Shannon Zhao <shannon.zhao@linaro.org>
> > 
> > Check if the configuration is fine.
> 
> This commit message really needs some love and attention.
> 
> > 
> > Signed-off-by: Shannon Zhao <shannon.zhao@linaro.org>
> > ---
> >  arch/arm64/kvm/sys_regs.c | 32 +++++++++++++++++++++++++++++++-
> >  1 file changed, 31 insertions(+), 1 deletion(-)
> > 
> > diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
> > index f613e29..9763b79 100644
> > --- a/arch/arm64/kvm/sys_regs.c
> > +++ b/arch/arm64/kvm/sys_regs.c
> > @@ -1493,6 +1493,35 @@ static bool access_id_reg(struct kvm_vcpu *vcpu,
> >  	return true;
> >  }
> >  
> > +static int set_id_aa64mmfr0_el1(struct kvm_vcpu *vcpu,
> > +				const struct sys_reg_desc *rd,
> > +				const struct kvm_one_reg *reg,
> > +				void __user *uaddr)
> > +{
> > +	u64 val, id_aa64mmfr0;
> > +
> > +	if (copy_from_user(&val, uaddr, KVM_REG_SIZE(reg->id)) != 0)
> > +		return -EFAULT;
> > +
> > +	asm volatile("mrs %0, id_aa64mmfr0_el1\n" : "=r" (id_aa64mmfr0));
> 
> Doesn't the kernel have an abstraction for this already or a cached
> value?

Certainly we shouldn't be using a raw mrs instruction. We have
read_sysreg() or read_cpuid() for that.
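
E.g. presumably just

 id_aa64mmfr0 = read_sysreg(id_aa64mmfr0_el1);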

The cpufeature code caches a system-wide safe value for each system
register. The cpuid_feature_extract_field() helper uses that.

> > +	if ((val & GENMASK(3, 0)) > (id_aa64mmfr0 & GENMASK(3, 0)) ||
> > +	    (val & GENMASK(7, 4)) > (id_aa64mmfr0 & GENMASK(7, 4)) ||
> > +	    (val & GENMASK(11, 8)) > (id_aa64mmfr0 & GENMASK(11, 8)) ||
> > +	    (val & GENMASK(15, 12)) > (id_aa64mmfr0 & GENMASK(15, 12)) ||
> > +	    (val & GENMASK(19, 16)) > (id_aa64mmfr0 & GENMASK(19, 16)) ||
> > +	    (val & GENMASK(23, 20)) > (id_aa64mmfr0 & GENMASK(23, 20)) ||
> > +	    (val & GENMASK(27, 24)) < (id_aa64mmfr0 & GENMASK(27, 24)) ||
> > +	    (val & GENMASK(31, 28)) < (id_aa64mmfr0 & GENMASK(31, 28))) {

Please use mnemonics. For example, we have ID_AA64MMFR0_TGRAN*_SHIFT
defined in <asm/sysreg.h>.

We also have extraction helpers, see
cpuid_feature_extract_unsigned_field(), as used in
id_aa64mmfr0_mixed_endian_el0().
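
E.g. the PARange comparison could become something like this (rough,
untested sketch; using the sanitised system-wide value as the
reference, via read_system_reg() or whatever the right helper is):

 u64 host = read_system_reg(SYS_ID_AA64MMFR0_EL1);

 if (cpuid_feature_extract_unsigned_field(val, ID_AA64MMFR0_PARANGE_SHIFT) >
     cpuid_feature_extract_unsigned_field(host, ID_AA64MMFR0_PARANGE_SHIFT))
         return -EINVAL; /* guest claims a larger PA range than the host */

with a similar comparison per field (minding the TGran fields, whose
encodings don't all grow in the same direction).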

Thanks,
Mark.

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [PATCH RFC 6/7] ARM64: KVM: Support heterogeneous system
  2017-01-28 14:55   ` Andrew Jones
@ 2017-03-09 15:21     ` Suzuki K Poulose
  2017-03-15 11:50     ` Christoffer Dall
  1 sibling, 0 replies; 30+ messages in thread
From: Suzuki K Poulose @ 2017-03-09 15:21 UTC (permalink / raw)
  To: Andrew Jones, Shannon Zhao; +Cc: marc.zyngier, qemu-arm, kvmarm, wu.wubin

On 28/01/17 14:55, Andrew Jones wrote:
> On Mon, Jan 16, 2017 at 05:33:33PM +0800, Shannon Zhao wrote:
>> From: Shannon Zhao <shannon.zhao@linaro.org>
>>
>> When initializing KVM, check whether physical hardware is a
>> heterogeneous system through the MIDR values. If so, force userspace to
>> set the KVM_ARM_VCPU_CROSS feature bit. Otherwise, it should fail to
>> initialize VCPUs.
>>
>> Signed-off-by: Shannon Zhao <shannon.zhao@linaro.org>
>> ---
>>  arch/arm/kvm/arm.c       | 26 ++++++++++++++++++++++++++
>>  include/uapi/linux/kvm.h |  1 +
>>  2 files changed, 27 insertions(+)
>>
>> diff --git a/arch/arm/kvm/arm.c b/arch/arm/kvm/arm.c
>> index bdceb19..21ec070 100644
>> --- a/arch/arm/kvm/arm.c
>> +++ b/arch/arm/kvm/arm.c
>> @@ -46,6 +46,7 @@
>>  #include <asm/kvm_coproc.h>
>>  #include <asm/kvm_psci.h>
>>  #include <asm/sections.h>
>> +#include <asm/cputype.h>
>>
>>  #ifdef REQUIRES_VIRT
>>  __asm__(".arch_extension	virt");
>> @@ -65,6 +66,7 @@ static unsigned int kvm_vmid_bits __read_mostly;
>>  static DEFINE_SPINLOCK(kvm_vmid_lock);
>>
>>  static bool vgic_present;
>> +static bool heterogeneous_system;
>>
>>  static DEFINE_PER_CPU(unsigned char, kvm_arm_hardware_enabled);
>>
>> @@ -210,6 +212,9 @@ int kvm_vm_ioctl_check_extension(struct kvm *kvm, long ext)
>>  	case KVM_CAP_ARM_CROSS_VCPU:
>>  		r = 1;
>>  		break;
>> +	case KVM_CAP_ARM_HETEROGENEOUS:
>> +		r = heterogeneous_system;
>> +		break;
>
> What's this for? When/why would userspace check it?
>
>>  	case KVM_CAP_COALESCED_MMIO:
>>  		r = KVM_COALESCED_MMIO_PAGE_OFFSET;
>>  		break;
>> @@ -812,6 +817,12 @@ static int kvm_vcpu_set_target(struct kvm_vcpu *vcpu,
>>  	int phys_target = kvm_target_cpu();
>>  	bool cross_vcpu = kvm_vcpu_has_feature_cross_cpu(init);
>>
>> +	if (heterogeneous_system && !cross_vcpu) {
>> +		kvm_err("%s:Host is a heterogeneous system, set KVM_ARM_VCPU_CROSS bit\n",
>> +			__func__);
>> +		return -EINVAL;
>> +	}
>
> Instead of forcing userspace to set a bit, why not just confirm the
> target selected will work? E.g. if only generic works on a heterogeneous
> system then just
>
>  if (heterogeneous_system && init->target != GENERIC)
>     return -EINVAL
>
> should work
>
>> +
>>  	if (!cross_vcpu && init->target != phys_target)
>>  		return -EINVAL;
>>
>> @@ -1397,6 +1408,11 @@ static void check_kvm_target_cpu(void *ret)
>>  	*(int *)ret = kvm_target_cpu();
>>  }
>>
>> +static void get_physical_cpu_midr(void *midr)
>> +{
>> +	*(u32 *)midr = read_cpuid_id();
>> +}
>> +
>>  struct kvm_vcpu *kvm_mpidr_to_vcpu(struct kvm *kvm, unsigned long mpidr)
>>  {
>>  	struct kvm_vcpu *vcpu;
>> @@ -1417,6 +1433,7 @@ int kvm_arch_init(void *opaque)
>>  {
>>  	int err;
>>  	int ret, cpu;
>> +	u32 current_midr, midr;
>>
>>  	if (!is_hyp_mode_available()) {
>>  		kvm_err("HYP mode not available\n");
>> @@ -1431,6 +1448,15 @@ int kvm_arch_init(void *opaque)
>>  		}
>>  	}
>>
>> +	current_midr = read_cpuid_id();
>> +	for_each_online_cpu(cpu) {
>> +		smp_call_function_single(cpu, get_physical_cpu_midr, &midr, 1);
>> +		if (current_midr != midr) {
>> +			heterogeneous_system = true;
>> +			break;
>> +		}
>> +	}
>
> Is there no core kernel API that provides this?

On arm64, there is a per-cpu cpuinfo structure for each online CPU which
caches the values of the ID registers. Maybe we could use that here.
See arch/arm64/kernel/cpuinfo.c::__cpuinfo_store_cpu()
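
Something along these lines, roughly (sketch; assuming cpu_data and
its reg_midr field are usable from kvm_arch_init(), which would also
avoid the cross calls):

 u32 boot_midr = read_cpuid_id();

 for_each_online_cpu(cpu) {
         if (per_cpu(cpu_data, cpu).reg_midr != boot_midr) {
                 heterogeneous_system = true;
                 break;
         }
 }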

Suzuki

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [PATCH RFC 6/7] ARM64: KVM: Support heterogeneous system
  2017-01-28 14:55   ` Andrew Jones
  2017-03-09 15:21     ` Suzuki K Poulose
@ 2017-03-15 11:50     ` Christoffer Dall
  2017-03-15 12:51       ` Andrew Jones
  1 sibling, 1 reply; 30+ messages in thread
From: Christoffer Dall @ 2017-03-15 11:50 UTC (permalink / raw)
  To: Andrew Jones; +Cc: marc.zyngier, qemu-arm, wu.wubin, kvmarm

Hi Drew,

[Replying here to try to capture the discussion about this patch we had
at connect].

On Sat, Jan 28, 2017 at 03:55:51PM +0100, Andrew Jones wrote:
> On Mon, Jan 16, 2017 at 05:33:33PM +0800, Shannon Zhao wrote:
> > From: Shannon Zhao <shannon.zhao@linaro.org>
> > 
> > When initializing KVM, check whether physical hardware is a
> > heterogeneous system through the MIDR values. If so, force userspace to
> > set the KVM_ARM_VCPU_CROSS feature bit. Otherwise, it should fail to
> > initialize VCPUs.
> > 
> > Signed-off-by: Shannon Zhao <shannon.zhao@linaro.org>
> > ---
> >  arch/arm/kvm/arm.c       | 26 ++++++++++++++++++++++++++
> >  include/uapi/linux/kvm.h |  1 +
> >  2 files changed, 27 insertions(+)
> > 
> > diff --git a/arch/arm/kvm/arm.c b/arch/arm/kvm/arm.c
> > index bdceb19..21ec070 100644
> > --- a/arch/arm/kvm/arm.c
> > +++ b/arch/arm/kvm/arm.c
> > @@ -46,6 +46,7 @@
> >  #include <asm/kvm_coproc.h>
> >  #include <asm/kvm_psci.h>
> >  #include <asm/sections.h>
> > +#include <asm/cputype.h>
> >  
> >  #ifdef REQUIRES_VIRT
> >  __asm__(".arch_extension	virt");
> > @@ -65,6 +66,7 @@ static unsigned int kvm_vmid_bits __read_mostly;
> >  static DEFINE_SPINLOCK(kvm_vmid_lock);
> >  
> >  static bool vgic_present;
> > +static bool heterogeneous_system;
> >  
> >  static DEFINE_PER_CPU(unsigned char, kvm_arm_hardware_enabled);
> >  
> > @@ -210,6 +212,9 @@ int kvm_vm_ioctl_check_extension(struct kvm *kvm, long ext)
> >  	case KVM_CAP_ARM_CROSS_VCPU:
> >  		r = 1;
> >  		break;
> > +	case KVM_CAP_ARM_HETEROGENEOUS:
> > +		r = heterogeneous_system;
> > +		break;
> 
> > What's this for? When/why would userspace check it?
> 

Without a capability, how can userspace tell the difference between "I
got -EINVAL because I'm on an old kernel" and "I asked for something
that no kernel can emulate"?

Do we need to distinguish between these cases?

> >  	case KVM_CAP_COALESCED_MMIO:
> >  		r = KVM_COALESCED_MMIO_PAGE_OFFSET;
> >  		break;
> > @@ -812,6 +817,12 @@ static int kvm_vcpu_set_target(struct kvm_vcpu *vcpu,
> >  	int phys_target = kvm_target_cpu();
> >  	bool cross_vcpu = kvm_vcpu_has_feature_cross_cpu(init);
> >  
> > +	if (heterogeneous_system && !cross_vcpu) {
> > +		kvm_err("%s:Host is a heterogeneous system, set KVM_ARM_VCPU_CROSS bit\n",
> > +			__func__);
> > +		return -EINVAL;
> > +	}
> 
> Instead of forcing userspace to set a bit, why not just confirm the
> target selected will work? E.g. if only generic works on a heterogeneous
> system then just 
> 
>  if (heterogeneous_system && init->target != GENERIC)
>     return -EINVAL
> 
> should work
> 

Yes, I think we concluded that we should advertise whether or not we
can do the cross type emulation, and that when we can, we should just
do it, for the best user experience.

I'm sure I missed some aspect of this discussion though.

Thanks,
-Christoffer

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [PATCH RFC 6/7] ARM64: KVM: Support heterogeneous system
  2017-03-15 11:50     ` Christoffer Dall
@ 2017-03-15 12:51       ` Andrew Jones
  2017-03-15 13:36         ` Christoffer Dall
  0 siblings, 1 reply; 30+ messages in thread
From: Andrew Jones @ 2017-03-15 12:51 UTC (permalink / raw)
  To: Christoffer Dall; +Cc: marc.zyngier, qemu-arm, wu.wubin, kvmarm

On Wed, Mar 15, 2017 at 12:50:44PM +0100, Christoffer Dall wrote:
> Hi Drew,
> 
> [Replying here to try to capture the discussion about this patch we had
> at connect].
> 
> On Sat, Jan 28, 2017 at 03:55:51PM +0100, Andrew Jones wrote:
> > On Mon, Jan 16, 2017 at 05:33:33PM +0800, Shannon Zhao wrote:
> > > From: Shannon Zhao <shannon.zhao@linaro.org>
> > > 
> > > When initializing KVM, check whether physical hardware is a
> > > heterogeneous system through the MIDR values. If so, force userspace to
> > > set the KVM_ARM_VCPU_CROSS feature bit. Otherwise, it should fail to
> > > initialize VCPUs.
> > > 
> > > Signed-off-by: Shannon Zhao <shannon.zhao@linaro.org>
> > > ---
> > >  arch/arm/kvm/arm.c       | 26 ++++++++++++++++++++++++++
> > >  include/uapi/linux/kvm.h |  1 +
> > >  2 files changed, 27 insertions(+)
> > > 
> > > diff --git a/arch/arm/kvm/arm.c b/arch/arm/kvm/arm.c
> > > index bdceb19..21ec070 100644
> > > --- a/arch/arm/kvm/arm.c
> > > +++ b/arch/arm/kvm/arm.c
> > > @@ -46,6 +46,7 @@
> > >  #include <asm/kvm_coproc.h>
> > >  #include <asm/kvm_psci.h>
> > >  #include <asm/sections.h>
> > > +#include <asm/cputype.h>
> > >  
> > >  #ifdef REQUIRES_VIRT
> > >  __asm__(".arch_extension	virt");
> > > @@ -65,6 +66,7 @@ static unsigned int kvm_vmid_bits __read_mostly;
> > >  static DEFINE_SPINLOCK(kvm_vmid_lock);
> > >  
> > >  static bool vgic_present;
> > > +static bool heterogeneous_system;
> > >  
> > >  static DEFINE_PER_CPU(unsigned char, kvm_arm_hardware_enabled);
> > >  
> > > @@ -210,6 +212,9 @@ int kvm_vm_ioctl_check_extension(struct kvm *kvm, long ext)
> > >  	case KVM_CAP_ARM_CROSS_VCPU:
> > >  		r = 1;
> > >  		break;
> > > +	case KVM_CAP_ARM_HETEROGENEOUS:
> > > +		r = heterogeneous_system;
> > > +		break;
> > 
> > What's this for? When/why would userspace check it?
> > 
> 
> Without a capability, how can userspace tell the difference between "I
> got -EINVAL because I'm on an old kernel" and "I asked for something that
> no kernel can emulate"?
> 
> Do we need to distinguish between these cases?

Yup, I'm in full agreement that we need a capability for the
cross-vcpu support. Above this heterogeneous one there's the
CROSS_VCPU one though. Do we need both? If QEMU wants to know
whether or not the host it's running on is heterogeneous, then
it can just query sysfs, rather than ask KVM.
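
To illustrate why one capability should be enough: KVM_CHECK_EXTENSION
returns 0 for capabilities a kernel has never heard of, so a single
capability already lets userspace tell "this kernel can't do cross-vCPU
at all" apart from "my specific request can't be emulated" (the latter
would still fail KVM_ARM_VCPU_INIT with -EINVAL). Roughly, as a sketch,
with KVM_CAP_ARM_CROSS_VCPU being the number from this series rather
than an upstream define:

#include <stdbool.h>
#include <sys/ioctl.h>
#include <linux/kvm.h>

/*
 * Probe the system fd (open("/dev/kvm")) for cross-vCPU support.
 * KVM_CHECK_EXTENSION returns 0 for unknown capabilities, so an old
 * kernel and a kernel without cross-vCPU support give the same answer,
 * which is all userspace needs before attempting the vcpu init.
 */
static bool kvm_has_cross_vcpu(int kvm_fd)
{
	return ioctl(kvm_fd, KVM_CHECK_EXTENSION, KVM_CAP_ARM_CROSS_VCPU) > 0;
}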

> 
> > >  	case KVM_CAP_COALESCED_MMIO:
> > >  		r = KVM_COALESCED_MMIO_PAGE_OFFSET;
> > >  		break;
> > > @@ -812,6 +817,12 @@ static int kvm_vcpu_set_target(struct kvm_vcpu *vcpu,
> > >  	int phys_target = kvm_target_cpu();
> > >  	bool cross_vcpu = kvm_vcpu_has_feature_cross_cpu(init);
> > >  
> > > +	if (heterogeneous_system && !cross_vcpu) {
> > > +		kvm_err("%s:Host is a heterogeneous system, set KVM_ARM_VCPU_CROSS bit\n",
> > > +			__func__);
> > > +		return -EINVAL;
> > > +	}
> > 
> > Instead of forcing userspace to set a bit, why not just confirm the
> > target selected will work? E.g. if only generic works on a heterogeneous
> > system then just 
> > 
> >  if (heterogeneous_system && init->target != GENERIC)
> >     return -EINVAL
> > 
> > should work
> > 
> 
> Yes, I think we concluded that if we advertise whether we can do the
> cross-type emulation or not, then whenever we can do the emulation we
> should just do it, for maximum user experience.

Your agreement here implies to me that we only need the one capability.

> 
> I'm sure I missed some aspect of this discussion though.

Me too. As we discussed, it's probably time to try and hash out a fresh
design doc. It'd be good to get a clear design agreed upon before
returning to the patches.

Thanks,
drew

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [PATCH RFC 6/7] ARM64: KVM: Support heterogeneous system
  2017-03-15 12:51       ` Andrew Jones
@ 2017-03-15 13:36         ` Christoffer Dall
  2017-03-15 14:06           ` Andrew Jones
  0 siblings, 1 reply; 30+ messages in thread
From: Christoffer Dall @ 2017-03-15 13:36 UTC (permalink / raw)
  To: Andrew Jones; +Cc: marc.zyngier, qemu-arm, wu.wubin, kvmarm

On Wed, Mar 15, 2017 at 01:51:13PM +0100, Andrew Jones wrote:
> On Wed, Mar 15, 2017 at 12:50:44PM +0100, Christoffer Dall wrote:
> > Hi Drew,
> > 
> > [Replying here to try to capture the discussion about this patch we had
> > at connect].
> > 
> > On Sat, Jan 28, 2017 at 03:55:51PM +0100, Andrew Jones wrote:
> > > On Mon, Jan 16, 2017 at 05:33:33PM +0800, Shannon Zhao wrote:
> > > > From: Shannon Zhao <shannon.zhao@linaro.org>
> > > > 
> > > > When initializing KVM, check whether physical hardware is a
> > > > heterogeneous system through the MIDR values. If so, force userspace to
> > > > set the KVM_ARM_VCPU_CROSS feature bit. Otherwise, it should fail to
> > > > initialize VCPUs.
> > > > 
> > > > Signed-off-by: Shannon Zhao <shannon.zhao@linaro.org>
> > > > ---
> > > >  arch/arm/kvm/arm.c       | 26 ++++++++++++++++++++++++++
> > > >  include/uapi/linux/kvm.h |  1 +
> > > >  2 files changed, 27 insertions(+)
> > > > 
> > > > diff --git a/arch/arm/kvm/arm.c b/arch/arm/kvm/arm.c
> > > > index bdceb19..21ec070 100644
> > > > --- a/arch/arm/kvm/arm.c
> > > > +++ b/arch/arm/kvm/arm.c
> > > > @@ -46,6 +46,7 @@
> > > >  #include <asm/kvm_coproc.h>
> > > >  #include <asm/kvm_psci.h>
> > > >  #include <asm/sections.h>
> > > > +#include <asm/cputype.h>
> > > >  
> > > >  #ifdef REQUIRES_VIRT
> > > >  __asm__(".arch_extension	virt");
> > > > @@ -65,6 +66,7 @@ static unsigned int kvm_vmid_bits __read_mostly;
> > > >  static DEFINE_SPINLOCK(kvm_vmid_lock);
> > > >  
> > > >  static bool vgic_present;
> > > > +static bool heterogeneous_system;
> > > >  
> > > >  static DEFINE_PER_CPU(unsigned char, kvm_arm_hardware_enabled);
> > > >  
> > > > @@ -210,6 +212,9 @@ int kvm_vm_ioctl_check_extension(struct kvm *kvm, long ext)
> > > >  	case KVM_CAP_ARM_CROSS_VCPU:
> > > >  		r = 1;
> > > >  		break;
> > > > +	case KVM_CAP_ARM_HETEROGENEOUS:
> > > > +		r = heterogeneous_system;
> > > > +		break;
> > > 
> > > What's this for? When/why would userspace check it?
> > > 
> > 
> > Without a capability, how can userspace tell the difference between "I
> > got -EINVAL because I'm on an old kernel" and "I asked for something that
> > no kernel can emulate"?
> > 
> > Do we need to distinguish between these cases?
> 
> Yup, I'm in full agreement that we need a capability for the
> cross-vcpu support. Above this heterogeneous one there's the
> CROSS_VCPU one though. Do we need both? 

Probably not.

> If QEMU wants to know
> whether or not the host it's running on is heterogeneous, then
> it can just query sysfs, rather than ask KVM.
> 

Can it?  Is this information available in a reliable way from userspace?

> > 
> > > >  	case KVM_CAP_COALESCED_MMIO:
> > > >  		r = KVM_COALESCED_MMIO_PAGE_OFFSET;
> > > >  		break;
> > > > @@ -812,6 +817,12 @@ static int kvm_vcpu_set_target(struct kvm_vcpu *vcpu,
> > > >  	int phys_target = kvm_target_cpu();
> > > >  	bool cross_vcpu = kvm_vcpu_has_feature_cross_cpu(init);
> > > >  
> > > > +	if (heterogeneous_system && !cross_vcpu) {
> > > > +		kvm_err("%s:Host is a heterogeneous system, set KVM_ARM_VCPU_CROSS bit\n",
> > > > +			__func__);
> > > > +		return -EINVAL;
> > > > +	}
> > > 
> > > Instead of forcing userspace to set a bit, why not just confirm the
> > > target selected will work? E.g. if only generic works on a heterogeneous
> > > system then just 
> > > 
> > >  if (heterogeneous_system && init->target != GENERIC)
> > >     return -EINVAL
> > > 
> > > should work
> > > 
> > 
> > Yes, I think we concluded that if we advertise whether we can do the
> > cross-type emulation or not, then whenever we can do the emulation we
> > should just do it, for maximum user experience.
> 
> Your agreement here implies to me that we only need the one capability.
> 

Yes.

> > 
> > I'm sure I missed some aspect of this discussion though.
> 
> Me too. As we discussed, it's probably time to try and hash out a fresh
> design doc. It'd be good to get a clear design agreed upon before
> returning to the patches.
> 

Yes, it's on my list.

Thanks,
-Christoffer

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [PATCH RFC 6/7] ARM64: KVM: Support heterogeneous system
  2017-03-15 13:36         ` Christoffer Dall
@ 2017-03-15 14:06           ` Andrew Jones
  2017-03-15 14:21             ` Peter Maydell
                               ` (2 more replies)
  0 siblings, 3 replies; 30+ messages in thread
From: Andrew Jones @ 2017-03-15 14:06 UTC (permalink / raw)
  To: Christoffer Dall; +Cc: marc.zyngier, qemu-arm, wu.wubin, kvmarm

On Wed, Mar 15, 2017 at 02:36:45PM +0100, Christoffer Dall wrote:
> > If QEMU wants to know
> > whether or not the host it's running on is heterogeneous, then
> > it can just query sysfs, rather than ask KVM.
> > 
> 
> Can it?  Is this information available in a reliable way from userspace?

I don't know much (anything) about it, but, afaict, yes. See
https://lkml.org/lkml/2017/1/19/380

> > > I'm sure I missed some aspect of this discussion though.
> > 
> > Me too. As we discussed, it's probably time to try and hash out a fresh
> > design doc. It'd be good to get a clear design agreed upon before
> > returning to the patches.
> > 
> 
> Yes, it's on my list.

I'm happy to help with this, even to take a stab at the first draft. Just
let me know if you or Shannon prefer to do the draft yourselves. If you
don't have a preference, then I can start working on it pretty soon.

Thanks,
drew

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [PATCH RFC 6/7] ARM64: KVM: Support heterogeneous system
  2017-03-15 14:06           ` Andrew Jones
@ 2017-03-15 14:21             ` Peter Maydell
  2017-03-15 14:42               ` Andrew Jones
  2017-03-15 14:49             ` Mark Rutland
  2017-03-15 15:22             ` Christoffer Dall
  2 siblings, 1 reply; 30+ messages in thread
From: Peter Maydell @ 2017-03-15 14:21 UTC (permalink / raw)
  To: Andrew Jones; +Cc: Christoffer Dall, Marc Zyngier, qemu-arm, wu.wubin, kvmarm

On 15 March 2017 at 14:06, Andrew Jones <drjones@redhat.com> wrote:
> On Wed, Mar 15, 2017 at 02:36:45PM +0100, Christoffer Dall wrote:
>> > If QEMU wants to know
>> > whether or not the host it's running on is heterogeneous, then
>> > it can just query sysfs, rather than ask KVM.
>> >
>>
>> Can it?  Is this information available in a reliable way from userspace?
>
> I don't know much (anything) about it, but, afaict, yes. See
> https://lkml.org/lkml/2017/1/19/380

AFAICT that won't work if the CPU you're trying to
look at the ID registers for happens to be offline at
the time you're looking at it.

thanks
-- PMM

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [PATCH RFC 6/7] ARM64: KVM: Support heterogeneous system
  2017-03-15 14:21             ` Peter Maydell
@ 2017-03-15 14:42               ` Andrew Jones
  0 siblings, 0 replies; 30+ messages in thread
From: Andrew Jones @ 2017-03-15 14:42 UTC (permalink / raw)
  To: Peter Maydell; +Cc: Christoffer Dall, Marc Zyngier, qemu-arm, wu.wubin, kvmarm

On Wed, Mar 15, 2017 at 02:21:48PM +0000, Peter Maydell wrote:
> On 15 March 2017 at 14:06, Andrew Jones <drjones@redhat.com> wrote:
> > On Wed, Mar 15, 2017 at 02:36:45PM +0100, Christoffer Dall wrote:
> >> > If QEMU wants to know
> >> > whether or not the host it's running on is heterogeneous, then
> >> > it can just query sysfs, rather than ask KVM.
> >> >
> >>
> >> Can it?  Is this information available in a reliable way from userspace?
> >
> > I don't know much (anything) about it, but, afaict, yes. See
> > https://lkml.org/lkml/2017/1/19/380
> 
> AFAICT that won't work if the CPU you're trying to
> look at the ID registers for happens to be offline at
> the time you're looking at it.
>

Hmm, OK, but I think the jury is still out on whether QEMU even
needs to know this information. If we push the burden up to the
user/libvirt, then this information can be obtained [somehow] at
the higher level. The higher level can then choose to use generic,
if no vcpu affinity is desired, or, if one of the heterogeneous
cpu types is preferred for the vcpu's model, then it can also
ensure the vcpu is affined to the appropriate cpuset.
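
(Mechanically, the "affine the vcpu" part could be as simple as the
sketch below, run in the vcpu thread; which host cpus go in the list is
policy for the user/libvirt to decide. Purely an illustration of the
mechanism, not part of this series.)

#define _GNU_SOURCE
#include <sched.h>
#include <stddef.h>

/*
 * Pin the calling (vCPU) thread to a caller-supplied set of host CPUs,
 * e.g. the cluster whose CPU type matches the requested vcpu model.
 */
static int affine_self_to_cpus(const int *cpus, size_t ncpus)
{
	cpu_set_t set;
	size_t i;

	CPU_ZERO(&set);
	for (i = 0; i < ncpus; i++)
		CPU_SET(cpus[i], &set);

	/* pid 0 means "the calling thread" */
	return sched_setaffinity(0, sizeof(set), &set);
}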

Thanks,
drew

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [PATCH RFC 6/7] ARM64: KVM: Support heterogeneous system
  2017-03-15 14:06           ` Andrew Jones
  2017-03-15 14:21             ` Peter Maydell
@ 2017-03-15 14:49             ` Mark Rutland
  2017-03-15 15:22             ` Christoffer Dall
  2 siblings, 0 replies; 30+ messages in thread
From: Mark Rutland @ 2017-03-15 14:49 UTC (permalink / raw)
  To: Andrew Jones; +Cc: marc.zyngier, Christoffer Dall, qemu-arm, kvmarm, wu.wubin

On Wed, Mar 15, 2017 at 03:06:33PM +0100, Andrew Jones wrote:
> On Wed, Mar 15, 2017 at 02:36:45PM +0100, Christoffer Dall wrote:
> > > If QEMU wants to know
> > > whether or not the host it's running on is heterogeneous, then
> > > it can just query sysfs, rather than ask KVM.
> > > 
> > 
> > Can it?  Is this information available in a reliable way from userspace?
> 
> I don't know much (anything) about it, but, afaict, yes. See
> https://lkml.org/lkml/2017/1/19/380

The "capacity" of a CPU does *not* tell you if your system is
heterogeneous. Two vastly different CPU implementations can stumble
upon the same capacity, and two identical implementations could be
assigned close but not identical capacities.

The "capacity" is purely a scheduler heuristic, and should not be relied
upon for functional correctness.

We have a sysfs interface to see the MIDR and REVIDR of (online) CPUs,
which can tell you. See Documentation/arm64/cpu-feature-registers.txt.

Whether a system is heterogeneous can change at runtime, as CPUs can be
brought online very late (e.g. if booted with maxcpus capped, or if we
get "real" hotplug in future).

Thanks,
Mark.

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [PATCH RFC 6/7] ARM64: KVM: Support heterogeneous system
  2017-03-15 14:06           ` Andrew Jones
  2017-03-15 14:21             ` Peter Maydell
  2017-03-15 14:49             ` Mark Rutland
@ 2017-03-15 15:22             ` Christoffer Dall
  2017-03-15 15:32               ` Andrew Jones
  2 siblings, 1 reply; 30+ messages in thread
From: Christoffer Dall @ 2017-03-15 15:22 UTC (permalink / raw)
  To: Andrew Jones; +Cc: marc.zyngier, qemu-arm, wu.wubin, kvmarm

On Wed, Mar 15, 2017 at 03:06:33PM +0100, Andrew Jones wrote:
> On Wed, Mar 15, 2017 at 02:36:45PM +0100, Christoffer Dall wrote:
> > > If QEMU wants to know
> > > whether or not the host it's running on is heterogeneous, then
> > > it can just query sysfs, rather than ask KVM.
> > > 
> > 
> > Can it?  Is this information available in a reliable way from userspace?
> 
> I don't know much (anything) about it, but, afaict, yes. See
> https://lkml.org/lkml/2017/1/19/380
> 
> > > > I'm sure I missed some aspect of this discussion though.
> > > 
> > > Me too. As we discussed, it's probably time to try and hash out a fresh
> > > design doc. It'd be good to get a clear design agreed upon before
> > > returning to the patches.
> > > 
> > 
> > Yes, it's on my list.
> 
> I'm happy to help with this, even to take a stab at the first draft. Just
> let me know if you or Shannon prefer to do the draft yourselves. If you
> don't have a preference, then I can start working on it pretty soon.
> 
I'd be really happy for you to draft a new design doc, and I can review
in private or publicly first as you prefer.

Thanks,
-Christoffer

^ permalink raw reply	[flat|nested] 30+ messages in thread

* Re: [PATCH RFC 6/7] ARM64: KVM: Support heterogeneous system
  2017-03-15 15:22             ` Christoffer Dall
@ 2017-03-15 15:32               ` Andrew Jones
  0 siblings, 0 replies; 30+ messages in thread
From: Andrew Jones @ 2017-03-15 15:32 UTC (permalink / raw)
  To: Christoffer Dall; +Cc: marc.zyngier, qemu-arm, wu.wubin, kvmarm

On Wed, Mar 15, 2017 at 04:22:37PM +0100, Christoffer Dall wrote:
> On Wed, Mar 15, 2017 at 03:06:33PM +0100, Andrew Jones wrote:
> > On Wed, Mar 15, 2017 at 02:36:45PM +0100, Christoffer Dall wrote:
> > > > If QEMU wants to know
> > > > whether or not the host it's running on is heterogeneous, then
> > > > it can just query sysfs, rather than ask KVM.
> > > > 
> > > 
> > > Can it?  Is this information available in a reliable way from userspace?
> > 
> > I don't know much (anything) about it, but, afaict, yes. See
> > https://lkml.org/lkml/2017/1/19/380
> > 
> > > > > I'm sure I missed some aspect of this discussion though.
> > > > 
> > > > Me too. As we discussed, it's probably time to try and hash out a fresh
> > > > design doc. It'd be good to get a clear design agreed upon before
> > > > returning to the patches.
> > > > 
> > > 
> > > Yes, it's on my list.
> > 
> > I'm happy to help with this, even to take a stab at the first draft. Just
> > let me know if you or Shannon prefer to do the draft yourselves. If you
> > don't have a preference, then I can start working on it pretty soon.
> > 
> I'd be really happy for you to draft a new design doc, and I can review
> in private or publicly first as you prefer.
>

OK, I'll hopefully have something pulled together by the end of next week.

Thanks,
drew

^ permalink raw reply	[flat|nested] 30+ messages in thread

end of thread, other threads:[~2017-03-15 15:31 UTC | newest]

Thread overview: 30+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2017-01-16  9:33 [PATCH RFC 0/7] ARM64: KVM: Cross type vCPU support Shannon Zhao
2017-01-16  9:33 ` [PATCH RFC 1/7] ARM64: KVM: Add the definition of ID registers Shannon Zhao
2017-01-28 12:07   ` Andrew Jones
2017-01-16  9:33 ` [PATCH RFC 2/7] ARM64: KVM: Add reset handlers for all " Shannon Zhao
2017-01-28 12:36   ` Andrew Jones
2017-03-09 10:19   ` Christoffer Dall
2017-01-16  9:33 ` [PATCH RFC 3/7] ARM64: KVM: Reset ID registers when creating the VCPUs Shannon Zhao
2017-01-28 13:32   ` Andrew Jones
2017-01-16  9:33 ` [PATCH RFC 4/7] ARM64: KVM: emulate accessing ID registers Shannon Zhao
2017-01-28 13:49   ` Andrew Jones
2017-03-09 10:28   ` Christoffer Dall
2017-01-16  9:33 ` [PATCH RFC 5/7] ARM64: KVM: Support cross type vCPU Shannon Zhao
2017-01-28 14:47   ` Andrew Jones
2017-03-09 10:56     ` Christoffer Dall
2017-01-16  9:33 ` [PATCH RFC 6/7] ARM64: KVM: Support heterogeneous system Shannon Zhao
2017-01-28 14:55   ` Andrew Jones
2017-03-09 15:21     ` Suzuki K Poulose
2017-03-15 11:50     ` Christoffer Dall
2017-03-15 12:51       ` Andrew Jones
2017-03-15 13:36         ` Christoffer Dall
2017-03-15 14:06           ` Andrew Jones
2017-03-15 14:21             ` Peter Maydell
2017-03-15 14:42               ` Andrew Jones
2017-03-15 14:49             ` Mark Rutland
2017-03-15 15:22             ` Christoffer Dall
2017-03-15 15:32               ` Andrew Jones
2017-01-16  9:33 ` [PATCH RFC 7/7] ARM64: KVM: Add user set handler for id_aa64mmfr0_el1 Shannon Zhao
2017-01-28 15:22   ` Andrew Jones
2017-03-09 12:52   ` Christoffer Dall
2017-03-09 15:03     ` Mark Rutland
