* [PATCH 0/5] arm64: KVM: vgic-v2: Allow unsafe GICV accesses
From: Marc Zyngier @ 2016-08-19 12:38 UTC
  To: Christoffer Dall; +Cc: linux-arm-kernel, kvm, kvmarm

In a number of cases, KVM cannot give the guest direct access to the
GICv2 GICV region, either because GICV is not page aligned, or because
its size is not a multiple of the page size. This is especially visible
with 16kB/64kB pages and the original GIC-400 layout, where each region
is only 4kB aligned.

Instead of disabling KVM altogether (which is the current behaviour),
there is some value in trapping each guest GICV access, performing the
access as quickly as possible at EL2, and resuming the guest. This
allows us to keep KVM enabled on this HW.

Implementation-wise, this is done with a static key controlling whether
the workaround is enabled, so it comes at zero cost (well, an extra
nop on the exit hot path) for unaffected platforms. On the affected
HW, I've measured a 10 to 15% overhead for a self-IPI test, which is
pretty bad, but still much better than not having a GIC at all.
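
For reference, here's a minimal sketch of the static key pattern used
(the names below are the ones this series introduces):

	DEFINE_STATIC_KEY_FALSE(vgic_v2_cpuif_trap);

	/* Exit hot path: compiled down to a nop while the key is off */
	if (static_branch_unlikely(&vgic_v2_cpuif_trap)) {
		/* Slow path: emulate the GICV access at EL2 */
	}

	/* At probe time, once the GICV region is found to be unsafe */
	static_branch_enable(&vgic_v2_cpuif_trap);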

Tested on Juno.

Marc Zyngier (5):
  arm/arm64: KVM: Don't BUG_ON if IT bits are set in ARM mode
  arm64: KVM: Allow kvm_skip_instr32 to be shared between kernel and HYP
    code
  arm64: KVM: vgic-v2: Add the GICV emulation infrastructure
  arm64: KVM: vgic-v2: Add GICV access from HYP
  arm64: KVM: vgic-v2: Enable GICV access from HYP if access from guest
    is unsafe

 arch/arm/kvm/emulate.c               |  4 +-
 arch/arm64/include/asm/kvm_emulate.h | 49 +++++++++++++++++++++++++
 arch/arm64/include/asm/kvm_hyp.h     |  1 +
 arch/arm64/kvm/emulate.c             | 47 +-----------------------
 arch/arm64/kvm/hyp/switch.c          | 32 ++++++++++++++++
 include/kvm/arm_vgic.h               |  6 +++
 virt/kvm/arm/hyp/vgic-v2-sr.c        | 46 +++++++++++++++++++++++
 virt/kvm/arm/vgic/vgic-v2.c          | 71 ++++++++++++++++++++++--------------
 8 files changed, 180 insertions(+), 76 deletions(-)

-- 
2.1.4



* [PATCH 1/5] arm/arm64: KVM: Don't BUG_ON if IT bits are set in ARM mode
From: Marc Zyngier @ 2016-08-19 12:38 UTC
  To: Christoffer Dall; +Cc: kvm, linux-arm-kernel, kvmarm

The ARM ARM asserts that having the IT bits set in ARM mode is
unpredictable. But should a CPU do that (and still be able to
run correctly), we shouldn't kill the system.

So let's drop the BUG_ON that checks for this, as it is not
very helpful.

Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
---
 arch/arm/kvm/emulate.c   | 4 +---
 arch/arm64/kvm/emulate.c | 4 +---
 2 files changed, 2 insertions(+), 6 deletions(-)

diff --git a/arch/arm/kvm/emulate.c b/arch/arm/kvm/emulate.c
index af93e3f..eda9ddd 100644
--- a/arch/arm/kvm/emulate.c
+++ b/arch/arm/kvm/emulate.c
@@ -221,9 +221,7 @@ static void kvm_adjust_itstate(struct kvm_vcpu *vcpu)
 	unsigned long cpsr = *vcpu_cpsr(vcpu);
 	bool is_arm = !(cpsr & PSR_T_BIT);
 
-	BUG_ON(is_arm && (cpsr & PSR_IT_MASK));
-
-	if (!(cpsr & PSR_IT_MASK))
+	if (is_arm || !(cpsr & PSR_IT_MASK))
 		return;
 
 	cond = (cpsr & 0xe000) >> 13;
diff --git a/arch/arm64/kvm/emulate.c b/arch/arm64/kvm/emulate.c
index f87d8fb..df76590 100644
--- a/arch/arm64/kvm/emulate.c
+++ b/arch/arm64/kvm/emulate.c
@@ -120,9 +120,7 @@ static void kvm_adjust_itstate(struct kvm_vcpu *vcpu)
 	unsigned long cpsr = *vcpu_cpsr(vcpu);
 	bool is_arm = !(cpsr & COMPAT_PSR_T_BIT);
 
-	BUG_ON(is_arm && (cpsr & COMPAT_PSR_IT_MASK));
-
-	if (!(cpsr & COMPAT_PSR_IT_MASK))
+	if (is_arm || !(cpsr & COMPAT_PSR_IT_MASK))
 		return;
 
 	cond = (cpsr & 0xe000) >> 13;
-- 
2.1.4


* [PATCH 2/5] arm64: KVM: Allow kvm_skip_instr32 to be shared between kernel and HYP code
From: Marc Zyngier @ 2016-08-19 12:38 UTC
  To: Christoffer Dall; +Cc: kvm, linux-arm-kernel, kvmarm

As we're going to start emulating some instructions while in HYP,
we need to be able to move the PC forward. This is pretty easy for
AArch64, but quite fiddly for AArch32 (think Thumb2 and the IT state).

In order to be able to reuse the existing code in HYP, move the bulk
of it to kvm_emulate.h, and let the implementation located in
emulate.c use it. HYP will be able to use it at the expense of an
additional copy in the object file, but we can at least share the
source code.
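
As an illustration, a HYP-side caller can now simply do something like
this (hypothetical helper, sketch only; the actual user is introduced
in a later patch):

	#include <asm/kvm_emulate.h>

	static void __hyp_text skip_guest_instr(struct kvm_vcpu *vcpu)
	{
		/* Advances PC by 2 or 4 and fixes up the IT state */
		kvm_skip_aarch32_instr(vcpu, kvm_vcpu_trap_il_is32bit(vcpu));
	}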

Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
---
 arch/arm64/include/asm/kvm_emulate.h | 49 ++++++++++++++++++++++++++++++++++++
 arch/arm64/kvm/emulate.c             | 45 +--------------------------------
 2 files changed, 50 insertions(+), 44 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_emulate.h b/arch/arm64/include/asm/kvm_emulate.h
index 4cdeae3..60db363 100644
--- a/arch/arm64/include/asm/kvm_emulate.h
+++ b/arch/arm64/include/asm/kvm_emulate.h
@@ -311,4 +311,53 @@ static inline unsigned long vcpu_data_host_to_guest(struct kvm_vcpu *vcpu,
 	return data;		/* Leave LE untouched */
 }
 
+/**
+ * kvm_adjust_itstate - adjust ITSTATE when emulating instructions in IT-block
+ * @vcpu:	The VCPU pointer
+ *
+ * When exceptions occur while instructions are executed in Thumb IF-THEN
+ * blocks, the ITSTATE field of the CPSR is not advanced (updated), so we have
+ * to do this little bit of work manually. The fields map like this:
+ *
+ * IT[7:0] -> CPSR[26:25],CPSR[15:10]
+ */
+static inline void kvm_adjust_itstate(struct kvm_vcpu *vcpu)
+{
+	unsigned long itbits, cond;
+	unsigned long cpsr = *vcpu_cpsr(vcpu);
+	bool is_arm = !(cpsr & COMPAT_PSR_T_BIT);
+
+	if (is_arm || !(cpsr & COMPAT_PSR_IT_MASK))
+		return;
+
+	cond = (cpsr & 0xe000) >> 13;
+	itbits = (cpsr & 0x1c00) >> (10 - 2);
+	itbits |= (cpsr & (0x3 << 25)) >> 25;
+
+	/* Perform ITAdvance (see page A2-52 in ARM DDI 0406C) */
+	if ((itbits & 0x7) == 0)
+		itbits = cond = 0;
+	else
+		itbits = (itbits << 1) & 0x1f;
+
+	cpsr &= ~COMPAT_PSR_IT_MASK;
+	cpsr |= cond << 13;
+	cpsr |= (itbits & 0x1c) << (10 - 2);
+	cpsr |= (itbits & 0x3) << 25;
+	*vcpu_cpsr(vcpu) = cpsr;
+}
+
+static inline void kvm_skip_aarch32_instr(struct kvm_vcpu *vcpu,
+					  bool is_wide_instr)
+{
+	bool is_thumb;
+
+	is_thumb = !!(*vcpu_cpsr(vcpu) & COMPAT_PSR_T_BIT);
+	if (is_thumb && !is_wide_instr)
+		*vcpu_pc(vcpu) += 2;
+	else
+		*vcpu_pc(vcpu) += 4;
+	kvm_adjust_itstate(vcpu);
+}
+
 #endif /* __ARM64_KVM_EMULATE_H__ */
diff --git a/arch/arm64/kvm/emulate.c b/arch/arm64/kvm/emulate.c
index df76590..d5f6a29 100644
--- a/arch/arm64/kvm/emulate.c
+++ b/arch/arm64/kvm/emulate.c
@@ -105,53 +105,10 @@ bool kvm_condition_valid32(const struct kvm_vcpu *vcpu)
 }
 
 /**
- * adjust_itstate - adjust ITSTATE when emulating instructions in IT-block
- * @vcpu:	The VCPU pointer
- *
- * When exceptions occur while instructions are executed in Thumb IF-THEN
- * blocks, the ITSTATE field of the CPSR is not advanced (updated), so we have
- * to do this little bit of work manually. The fields map like this:
- *
- * IT[7:0] -> CPSR[26:25],CPSR[15:10]
- */
-static void kvm_adjust_itstate(struct kvm_vcpu *vcpu)
-{
-	unsigned long itbits, cond;
-	unsigned long cpsr = *vcpu_cpsr(vcpu);
-	bool is_arm = !(cpsr & COMPAT_PSR_T_BIT);
-
-	if (is_arm || !(cpsr & COMPAT_PSR_IT_MASK))
-		return;
-
-	cond = (cpsr & 0xe000) >> 13;
-	itbits = (cpsr & 0x1c00) >> (10 - 2);
-	itbits |= (cpsr & (0x3 << 25)) >> 25;
-
-	/* Perform ITAdvance (see page A2-52 in ARM DDI 0406C) */
-	if ((itbits & 0x7) == 0)
-		itbits = cond = 0;
-	else
-		itbits = (itbits << 1) & 0x1f;
-
-	cpsr &= ~COMPAT_PSR_IT_MASK;
-	cpsr |= cond << 13;
-	cpsr |= (itbits & 0x1c) << (10 - 2);
-	cpsr |= (itbits & 0x3) << 25;
-	*vcpu_cpsr(vcpu) = cpsr;
-}
-
-/**
  * kvm_skip_instr - skip a trapped instruction and proceed to the next
  * @vcpu: The vcpu pointer
  */
 void kvm_skip_instr32(struct kvm_vcpu *vcpu, bool is_wide_instr)
 {
-	bool is_thumb;
-
-	is_thumb = !!(*vcpu_cpsr(vcpu) & COMPAT_PSR_T_BIT);
-	if (is_thumb && !is_wide_instr)
-		*vcpu_pc(vcpu) += 2;
-	else
-		*vcpu_pc(vcpu) += 4;
-	kvm_adjust_itstate(vcpu);
+	kvm_skip_aarch32_instr(vcpu, is_wide_instr);
 }
-- 
2.1.4


* [PATCH 3/5] arm64: KVM: vgic-v2: Add the GICV emulation infrastructure
From: Marc Zyngier @ 2016-08-19 12:38 UTC
  To: Christoffer Dall; +Cc: kvm, linux-arm-kernel, kvmarm

In order to efficiently perform the GICV access on behalf of the
guest, we need to avoid going back all the way to the host
kernel.

For this, we introduce a new hook in the world switch code,
conveniently placed just after populating the fault info.
At that point, we have only saved/restored the GP registers,
and we can quickly perform all the required checks (data abort,
translation fault, valid faulting syndrome, not an external
abort, not a PTW).

Coming back from the emulation code, we need to skip the emulated
instruction. This involves an additional bit of save/restore in
order to be able to access the guest's PC (and possibly CPSR if
this is a 32bit guest).

At this stage, no emulation code is provided.

Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
---
 arch/arm64/include/asm/kvm_hyp.h |  1 +
 arch/arm64/kvm/hyp/switch.c      | 32 ++++++++++++++++++++++++++++++++
 include/kvm/arm_vgic.h           |  3 +++
 virt/kvm/arm/hyp/vgic-v2-sr.c    |  7 +++++++
 virt/kvm/arm/vgic/vgic-v2.c      |  2 ++
 5 files changed, 45 insertions(+)

diff --git a/arch/arm64/include/asm/kvm_hyp.h b/arch/arm64/include/asm/kvm_hyp.h
index cff5105..88ec3ac 100644
--- a/arch/arm64/include/asm/kvm_hyp.h
+++ b/arch/arm64/include/asm/kvm_hyp.h
@@ -123,6 +123,7 @@ typeof(orig) * __hyp_text fname(void)					\
 
 void __vgic_v2_save_state(struct kvm_vcpu *vcpu);
 void __vgic_v2_restore_state(struct kvm_vcpu *vcpu);
+bool __vgic_v2_perform_cpuif_access(struct kvm_vcpu *vcpu);
 
 void __vgic_v3_save_state(struct kvm_vcpu *vcpu);
 void __vgic_v3_restore_state(struct kvm_vcpu *vcpu);
diff --git a/arch/arm64/kvm/hyp/switch.c b/arch/arm64/kvm/hyp/switch.c
index ae7855f..0be1594 100644
--- a/arch/arm64/kvm/hyp/switch.c
+++ b/arch/arm64/kvm/hyp/switch.c
@@ -17,6 +17,7 @@
 
 #include <linux/types.h>
 #include <asm/kvm_asm.h>
+#include <asm/kvm_emulate.h>
 #include <asm/kvm_hyp.h>
 
 static bool __hyp_text __fpsimd_enabled_nvhe(void)
@@ -232,6 +233,21 @@ static bool __hyp_text __populate_fault_info(struct kvm_vcpu *vcpu)
 	return true;
 }
 
+static void __hyp_text __skip_instr(struct kvm_vcpu *vcpu)
+{
+	vcpu->arch.ctxt.gp_regs.regs.pc	= read_sysreg_el2(elr);
+
+	if (vcpu_mode_is_32bit(vcpu)) {
+		vcpu->arch.ctxt.gp_regs.regs.pstate = read_sysreg_el2(spsr);
+		kvm_skip_aarch32_instr(vcpu, kvm_vcpu_trap_il_is32bit(vcpu));
+		write_sysreg_el2(vcpu->arch.ctxt.gp_regs.regs.pstate, spsr);
+	} else {
+		*vcpu_pc(vcpu) += 4;
+	}
+
+	write_sysreg_el2(vcpu->arch.ctxt.gp_regs.regs.pc, elr);
+}
+
 static int __hyp_text __guest_run(struct kvm_vcpu *vcpu)
 {
 	struct kvm_cpu_context *host_ctxt;
@@ -270,6 +286,22 @@ again:
 	if (exit_code == ARM_EXCEPTION_TRAP && !__populate_fault_info(vcpu))
 		goto again;
 
+	if (static_branch_unlikely(&vgic_v2_cpuif_trap) &&
+	    exit_code == ARM_EXCEPTION_TRAP) {
+		bool valid;
+
+		valid = kvm_vcpu_trap_get_class(vcpu) == ESR_ELx_EC_DABT_LOW &&
+			kvm_vcpu_trap_get_fault_type(vcpu) == FSC_FAULT &&
+			kvm_vcpu_dabt_isvalid(vcpu) &&
+			!kvm_vcpu_dabt_isextabt(vcpu) &&
+			!kvm_vcpu_dabt_iss1tw(vcpu);
+
+		if (valid &&  __vgic_v2_perform_cpuif_access(vcpu)) {
+			__skip_instr(vcpu);
+			goto again;
+		}
+	}
+
 	fp_enabled = __fpsimd_enabled();
 
 	__sysreg_save_guest_state(guest_ctxt);
diff --git a/include/kvm/arm_vgic.h b/include/kvm/arm_vgic.h
index 19b698e..8eb1501 100644
--- a/include/kvm/arm_vgic.h
+++ b/include/kvm/arm_vgic.h
@@ -20,6 +20,7 @@
 #include <linux/kvm.h>
 #include <linux/irqreturn.h>
 #include <linux/spinlock.h>
+#include <linux/static_key.h>
 #include <linux/types.h>
 #include <kvm/iodev.h>
 #include <linux/list.h>
@@ -265,6 +266,8 @@ struct vgic_cpu {
 	bool lpis_enabled;
 };
 
+extern struct static_key_false vgic_v2_cpuif_trap;
+
 int kvm_vgic_addr(struct kvm *kvm, unsigned long type, u64 *addr, bool write);
 void kvm_vgic_early_init(struct kvm *kvm);
 int kvm_vgic_create(struct kvm *kvm, u32 type);
diff --git a/virt/kvm/arm/hyp/vgic-v2-sr.c b/virt/kvm/arm/hyp/vgic-v2-sr.c
index 7cffd93..3e2a62e 100644
--- a/virt/kvm/arm/hyp/vgic-v2-sr.c
+++ b/virt/kvm/arm/hyp/vgic-v2-sr.c
@@ -167,3 +167,10 @@ void __hyp_text __vgic_v2_restore_state(struct kvm_vcpu *vcpu)
 	writel_relaxed(cpu_if->vgic_vmcr, base + GICH_VMCR);
 	vcpu->arch.vgic_cpu.live_lrs = live_lrs;
 }
+
+#ifdef CONFIG_ARM64
+bool __hyp_text __vgic_v2_perform_cpuif_access(struct kvm_vcpu *vcpu)
+{
+		return false;
+}
+#endif
diff --git a/virt/kvm/arm/vgic/vgic-v2.c b/virt/kvm/arm/vgic/vgic-v2.c
index 0bf6709..b8da901 100644
--- a/virt/kvm/arm/vgic/vgic-v2.c
+++ b/virt/kvm/arm/vgic/vgic-v2.c
@@ -294,6 +294,8 @@ out:
 	return ret;
 }
 
+DEFINE_STATIC_KEY_FALSE(vgic_v2_cpuif_trap);
+
 /**
  * vgic_v2_probe - probe for a GICv2 compatible interrupt controller in DT
  * @node:	pointer to the DT node
-- 
2.1.4


* [PATCH 4/5] arm64: KVM: vgic-v2: Add GICV access from HYP
From: Marc Zyngier @ 2016-08-19 12:38 UTC
  To: Christoffer Dall; +Cc: linux-arm-kernel, kvm, kvmarm

Now that we have the necessary infrastructure to handle MMIO accesses
in HYP, perform the GICV access on behalf of the guest. This requires
checking that the access is strictly 32bit, properly aligned, and
falls within the expected range.

When all conditions are satisfied, we perform the access and tell
the rest of the HYP code that the instruction has been correctly
emulated.

Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
---
 include/kvm/arm_vgic.h        |  3 +++
 virt/kvm/arm/hyp/vgic-v2-sr.c | 39 +++++++++++++++++++++++++++++++++++++++
 2 files changed, 42 insertions(+)

diff --git a/include/kvm/arm_vgic.h b/include/kvm/arm_vgic.h
index 8eb1501..bb46c03 100644
--- a/include/kvm/arm_vgic.h
+++ b/include/kvm/arm_vgic.h
@@ -50,6 +50,9 @@ struct vgic_global {
 	/* Physical address of vgic virtual cpu interface */
 	phys_addr_t		vcpu_base;
 
+	/* GICV mapping */
+	void __iomem		*vcpu_base_va;
+
 	/* virtual control interface mapping */
 	void __iomem		*vctrl_base;
 
diff --git a/virt/kvm/arm/hyp/vgic-v2-sr.c b/virt/kvm/arm/hyp/vgic-v2-sr.c
index 3e2a62e..860fdc2 100644
--- a/virt/kvm/arm/hyp/vgic-v2-sr.c
+++ b/virt/kvm/arm/hyp/vgic-v2-sr.c
@@ -19,6 +19,7 @@
 #include <linux/irqchip/arm-gic.h>
 #include <linux/kvm_host.h>
 
+#include <asm/kvm_emulate.h>
 #include <asm/kvm_hyp.h>
 
 static void __hyp_text save_maint_int_state(struct kvm_vcpu *vcpu,
@@ -171,6 +172,44 @@ void __hyp_text __vgic_v2_restore_state(struct kvm_vcpu *vcpu)
 #ifdef CONFIG_ARM64
 bool __hyp_text __vgic_v2_perform_cpuif_access(struct kvm_vcpu *vcpu)
 {
+	struct kvm *kvm = kern_hyp_va(vcpu->kvm);
+	struct vgic_dist *vgic = &kvm->arch.vgic;
+	phys_addr_t fault_ipa;
+	void __iomem *addr;
+	int rd;
+
+	/* Reject anything but a 32bit access */
+	if (kvm_vcpu_dabt_get_as(vcpu) != sizeof(u32))
 		return false;
+
+	/* Not aligned? Don't bother */
+	fault_ipa = kvm_vcpu_get_hfar(vcpu) & GENMASK(11, 0);
+	if (fault_ipa & 3)
+		return false;
+
+	/* Build the full address */
+	fault_ipa |= kvm_vcpu_get_fault_ipa(vcpu);
+
+	/* If not for GICV, move on */
+	if (fault_ipa <  vgic->vgic_cpu_base ||
+	    fault_ipa >= (vgic->vgic_cpu_base + KVM_VGIC_V2_CPU_SIZE))
+		return false;
+
+	rd = kvm_vcpu_dabt_get_rd(vcpu);
+	addr  = kern_hyp_va((kern_hyp_va(&kvm_vgic_global_state))->vcpu_base_va);
+	addr += fault_ipa - vgic->vgic_cpu_base;
+
+	if (kvm_vcpu_dabt_iswrite(vcpu)) {
+		u32 data = vcpu_data_guest_to_host(vcpu,
+						   vcpu_get_reg(vcpu, rd),
+						   sizeof(u32));
+		writel_relaxed(data, addr);
+	} else {
+		u32 data = readl_relaxed(addr);
+		vcpu_set_reg(vcpu, rd, vcpu_data_host_to_guest(vcpu, data,
+							       sizeof(u32)));
+	}
+
+	return true;
 }
 #endif
-- 
2.1.4



* [PATCH 5/5] arm64: KVM: vgic-v2: Enable GICV access from HYP if access from guest is unsafe
From: Marc Zyngier @ 2016-08-19 12:38 UTC
  To: Christoffer Dall; +Cc: kvm, linux-arm-kernel, kvmarm

So far, we've been disabling KVM on systems where the GICV region couldn't
be safely given to a guest. Now that we're able to handle this access
safely by emulating it in HYP, we can enable this feature when we detect
an unsafe configuration.

Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
---
 virt/kvm/arm/vgic/vgic-v2.c | 69 +++++++++++++++++++++++++++------------------
 1 file changed, 42 insertions(+), 27 deletions(-)

diff --git a/virt/kvm/arm/vgic/vgic-v2.c b/virt/kvm/arm/vgic/vgic-v2.c
index b8da901..d1dcfc76 100644
--- a/virt/kvm/arm/vgic/vgic-v2.c
+++ b/virt/kvm/arm/vgic/vgic-v2.c
@@ -278,12 +278,14 @@ int vgic_v2_map_resources(struct kvm *kvm)
 		goto out;
 	}
 
-	ret = kvm_phys_addr_ioremap(kvm, dist->vgic_cpu_base,
-				    kvm_vgic_global_state.vcpu_base,
-				    KVM_VGIC_V2_CPU_SIZE, true);
-	if (ret) {
-		kvm_err("Unable to remap VGIC CPU to VCPU\n");
-		goto out;
+	if (!static_branch_unlikely(&vgic_v2_cpuif_trap)) {
+		ret = kvm_phys_addr_ioremap(kvm, dist->vgic_cpu_base,
+					    kvm_vgic_global_state.vcpu_base,
+					    KVM_VGIC_V2_CPU_SIZE, true);
+		if (ret) {
+			kvm_err("Unable to remap VGIC CPU to VCPU\n");
+			goto out;
+		}
 	}
 
 	dist->ready = true;
@@ -312,45 +314,51 @@ int vgic_v2_probe(const struct gic_kvm_info *info)
 		return -ENXIO;
 	}
 
-	if (!PAGE_ALIGNED(info->vcpu.start)) {
-		kvm_err("GICV physical address 0x%llx not page aligned\n",
-			(unsigned long long)info->vcpu.start);
-		return -ENXIO;
-	}
+	if (!PAGE_ALIGNED(info->vcpu.start) ||
+	    !PAGE_ALIGNED(resource_size(&info->vcpu))) {
+		kvm_info("GICV region size/alignement is unsafe, using trapping\n");
+		kvm_vgic_global_state.vcpu_base_va = ioremap(info->vcpu.start,
+							     resource_size(&info->vcpu));
+		if (!kvm_vgic_global_state.vcpu_base_va) {
+			kvm_err("Cannot ioremap GICV\n");
+			return -ENOMEM;
+		}
 
-	if (!PAGE_ALIGNED(resource_size(&info->vcpu))) {
-		kvm_err("GICV size 0x%llx not a multiple of page size 0x%lx\n",
-			(unsigned long long)resource_size(&info->vcpu),
-			PAGE_SIZE);
-		return -ENXIO;
+		ret = create_hyp_io_mappings(kvm_vgic_global_state.vcpu_base_va,
+					     kvm_vgic_global_state.vcpu_base_va + resource_size(&info->vcpu),
+					     info->vcpu.start);
+		if (ret) {
+			kvm_err("Cannot map GICV into hyp\n");
+			goto out;
+		}
+
+		static_branch_enable(&vgic_v2_cpuif_trap);
 	}
 
 	kvm_vgic_global_state.vctrl_base = ioremap(info->vctrl.start,
 						   resource_size(&info->vctrl));
 	if (!kvm_vgic_global_state.vctrl_base) {
 		kvm_err("Cannot ioremap GICH\n");
-		return -ENOMEM;
+		ret = -ENOMEM;
+		goto out;
 	}
 
 	vtr = readl_relaxed(kvm_vgic_global_state.vctrl_base + GICH_VTR);
 	kvm_vgic_global_state.nr_lr = (vtr & 0x3f) + 1;
 
-	ret = kvm_register_vgic_device(KVM_DEV_TYPE_ARM_VGIC_V2);
-	if (ret) {
-		kvm_err("Cannot register GICv2 KVM device\n");
-		iounmap(kvm_vgic_global_state.vctrl_base);
-		return ret;
-	}
-
 	ret = create_hyp_io_mappings(kvm_vgic_global_state.vctrl_base,
 				     kvm_vgic_global_state.vctrl_base +
 					 resource_size(&info->vctrl),
 				     info->vctrl.start);
 	if (ret) {
 		kvm_err("Cannot map VCTRL into hyp\n");
-		kvm_unregister_device_ops(KVM_DEV_TYPE_ARM_VGIC_V2);
-		iounmap(kvm_vgic_global_state.vctrl_base);
-		return ret;
+		goto out;
+	}
+
+	ret = kvm_register_vgic_device(KVM_DEV_TYPE_ARM_VGIC_V2);
+	if (ret) {
+		kvm_err("Cannot register GICv2 KVM device\n");
+		goto out;
 	}
 
 	kvm_vgic_global_state.can_emulate_gicv2 = true;
@@ -361,4 +369,11 @@ int vgic_v2_probe(const struct gic_kvm_info *info)
 	kvm_info("vgic-v2@%llx\n", info->vctrl.start);
 
 	return 0;
+out:
+	if (kvm_vgic_global_state.vctrl_base)
+		iounmap(kvm_vgic_global_state.vctrl_base);
+	if (kvm_vgic_global_state.vcpu_base_va)
+		iounmap(kvm_vgic_global_state.vcpu_base_va);
+
+	return ret;
 }
-- 
2.1.4


* Re: [PATCH 5/5] arm64: KVM: vgic-v2: Enable GICV access from HYP if access from guest is unsafe
From: Peter Maydell @ 2016-08-19 12:53 UTC
  To: Marc Zyngier; +Cc: arm-mail-list, kvm-devel, kvmarm

On 19 August 2016 at 13:38, Marc Zyngier <marc.zyngier@arm.com> wrote:
> So far, we've been disabling KVM on systems where the GICV region couldn't
> be safely given to a guest. Now that we're able to handle this access
> safely by emulating it in HYP, we can enable this feature when we detect
> an unsafe configuration.
>
> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
> ---
>  virt/kvm/arm/vgic/vgic-v2.c | 69 +++++++++++++++++++++++++++------------------
>  1 file changed, 42 insertions(+), 27 deletions(-)
>
> diff --git a/virt/kvm/arm/vgic/vgic-v2.c b/virt/kvm/arm/vgic/vgic-v2.c
> index b8da901..d1dcfc76 100644
> --- a/virt/kvm/arm/vgic/vgic-v2.c
> +++ b/virt/kvm/arm/vgic/vgic-v2.c
> @@ -278,12 +278,14 @@ int vgic_v2_map_resources(struct kvm *kvm)
>                 goto out;
>         }
>
> -       ret = kvm_phys_addr_ioremap(kvm, dist->vgic_cpu_base,
> -                                   kvm_vgic_global_state.vcpu_base,
> -                                   KVM_VGIC_V2_CPU_SIZE, true);
> -       if (ret) {
> -               kvm_err("Unable to remap VGIC CPU to VCPU\n");
> -               goto out;
> +       if (!static_branch_unlikely(&vgic_v2_cpuif_trap)) {
> +               ret = kvm_phys_addr_ioremap(kvm, dist->vgic_cpu_base,
> +                                           kvm_vgic_global_state.vcpu_base,
> +                                           KVM_VGIC_V2_CPU_SIZE, true);
> +               if (ret) {
> +                       kvm_err("Unable to remap VGIC CPU to VCPU\n");
> +                       goto out;
> +               }
>         }
>
>         dist->ready = true;
> @@ -312,45 +314,51 @@ int vgic_v2_probe(const struct gic_kvm_info *info)
>                 return -ENXIO;
>         }
>
> -       if (!PAGE_ALIGNED(info->vcpu.start)) {
> -               kvm_err("GICV physical address 0x%llx not page aligned\n",
> -                       (unsigned long long)info->vcpu.start);
> -               return -ENXIO;
> -       }
> +       if (!PAGE_ALIGNED(info->vcpu.start) ||
> +           !PAGE_ALIGNED(resource_size(&info->vcpu))) {
> +               kvm_info("GICV region size/alignement is unsafe, using trapping\n");

"alignment".

Is it worth specifically saying "performance will be worse", or do we
expect this to only happen on systems where the h/w can't permit direct
access (as opposed to those with bad dt info) ?

thanks
-- PMM


* Re: [PATCH 5/5] arm64: KVM: vgic-v2: Enable GICV access from HYP if access from guest is unsafe
From: Marc Zyngier @ 2016-08-19 13:05 UTC
  To: Peter Maydell; +Cc: Christoffer Dall, kvm-devel, arm-mail-list, kvmarm

Hi Peter,

On 19/08/16 13:53, Peter Maydell wrote:
> On 19 August 2016 at 13:38, Marc Zyngier <marc.zyngier@arm.com> wrote:
>> So far, we've been disabling KVM on systems where the GICV region couldn't
>> be safely given to a guest. Now that we're able to handle this access
>> safely by emulating it in HYP, we can enable this feature when we detect
>> an unsafe configuration.
>>
>> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
>> ---
>>  virt/kvm/arm/vgic/vgic-v2.c | 69 +++++++++++++++++++++++++++------------------
>>  1 file changed, 42 insertions(+), 27 deletions(-)
>>
>> diff --git a/virt/kvm/arm/vgic/vgic-v2.c b/virt/kvm/arm/vgic/vgic-v2.c
>> index b8da901..d1dcfc76 100644
>> --- a/virt/kvm/arm/vgic/vgic-v2.c
>> +++ b/virt/kvm/arm/vgic/vgic-v2.c
>> @@ -278,12 +278,14 @@ int vgic_v2_map_resources(struct kvm *kvm)
>>                 goto out;
>>         }
>>
>> -       ret = kvm_phys_addr_ioremap(kvm, dist->vgic_cpu_base,
>> -                                   kvm_vgic_global_state.vcpu_base,
>> -                                   KVM_VGIC_V2_CPU_SIZE, true);
>> -       if (ret) {
>> -               kvm_err("Unable to remap VGIC CPU to VCPU\n");
>> -               goto out;
>> +       if (!static_branch_unlikely(&vgic_v2_cpuif_trap)) {
>> +               ret = kvm_phys_addr_ioremap(kvm, dist->vgic_cpu_base,
>> +                                           kvm_vgic_global_state.vcpu_base,
>> +                                           KVM_VGIC_V2_CPU_SIZE, true);
>> +               if (ret) {
>> +                       kvm_err("Unable to remap VGIC CPU to VCPU\n");
>> +                       goto out;
>> +               }
>>         }
>>
>>         dist->ready = true;
>> @@ -312,45 +314,51 @@ int vgic_v2_probe(const struct gic_kvm_info *info)
>>                 return -ENXIO;
>>         }
>>
>> -       if (!PAGE_ALIGNED(info->vcpu.start)) {
>> -               kvm_err("GICV physical address 0x%llx not page aligned\n",
>> -                       (unsigned long long)info->vcpu.start);
>> -               return -ENXIO;
>> -       }
>> +       if (!PAGE_ALIGNED(info->vcpu.start) ||
>> +           !PAGE_ALIGNED(resource_size(&info->vcpu))) {
>> +               kvm_info("GICV region size/alignement is unsafe, using trapping\n");
> 
> "alignment".

Ah, thanks. There's always a bit of French being stuck somewhere... ;-)

> 
> Is it worth specifically saying "performance will be worse", or do we
> expect this to only happen on systems where the h/w can't permit direct
> access (as opposed to those with bad dt info) ?

We cannot distinguish between the two, unfortunately. Even worse, ACPI
only gives us a base address, and not the size of the region. So even if
the HW was perfectly compliant with SBSA, we have to assume the worst case.

I guess that a slightly more alarming message is in order indeed.

Thanks,

	M.
-- 
Jazz is not dead. It just smells funny...


* Re: [PATCH 5/5] arm64: KVM: vgic-v2: Enable GICV access from HYP if access from guest is unsafe
From: Peter Maydell @ 2016-08-19 13:31 UTC
  To: Marc Zyngier; +Cc: arm-mail-list, kvm-devel, kvmarm

On 19 August 2016 at 14:05, Marc Zyngier <marc.zyngier@arm.com> wrote:
> On 19/08/16 13:53, Peter Maydell wrote:
>> Is it worth specifically saying "performance will be worse", or do we
>> expect this to only happen on systems where the h/w can't permit direct
>> access (as opposed to those with bad dt info) ?
>
> We cannot distinguish between the two, unfortunately. Even worse, ACPI
> only gives us a base address, and not the size of the region. So even if
> the HW was perfectly compliant with SBSA, we have to assume the worse case.

Right, but if we expect this is mostly going to be "you just have
to live with it on this hardware" there's less point in printing
an alarming message, whereas if there's a significant subset of
"dt is just wrong" cases then the alarm might help in getting them
fixed, maybe...

thanks
-- PMM


* Re: [PATCH 5/5] arm64: KVM: vgic-v2: Enable GICV access from HYP if access from guest is unsafe
From: Marc Zyngier @ 2016-08-19 14:54 UTC
  To: Peter Maydell; +Cc: Christoffer Dall, kvm-devel, arm-mail-list, kvmarm

On Fri, 19 Aug 2016 14:31:12 +0100
Peter Maydell <peter.maydell@linaro.org> wrote:

> On 19 August 2016 at 14:05, Marc Zyngier <marc.zyngier@arm.com> wrote:
> > On 19/08/16 13:53, Peter Maydell wrote:  
> >> Is it worth specifically saying "performance will be worse", or do we
> >> expect this to only happen on systems where the h/w can't permit direct
> >> access (as opposed to those with bad dt info) ?  
> >
> > We cannot distinguish between the two, unfortunately. Even worse, ACPI
> > only gives us a base address, and not the size of the region. So even if
> > the HW was perfectly compliant with SBSA, we have to assume the worse case.  
> 
> Right, but if we expect this is mostly going to be "you just have
> to live with it on this hardware" there's less point in printing
> an alarming message, whereas if there's a significant subset of
> "dt is just wrong" cases then the alarm might help in getting them
> fixed, maybe...

That'd require some more infrastructure from the kernel's GIC driver
(which now provides the various base addresses), but I guess that we
can have a look as well.

Thanks,

	M.
-- 
Jazz is not dead. It just smells funny.

^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: [PATCH 1/5] arm/arm64: KVM: Don't BUG_ON if IT bits are set in ARM mode
  2016-08-19 12:38   ` Marc Zyngier
@ 2016-09-01 11:56     ` Christoffer Dall
  -1 siblings, 0 replies; 42+ messages in thread
From: Christoffer Dall @ 2016-09-01 11:56 UTC (permalink / raw)
  To: Marc Zyngier; +Cc: linux-arm-kernel, kvm, kvmarm

On Fri, Aug 19, 2016 at 01:38:11PM +0100, Marc Zyngier wrote:
> The ARM ARM asserts that having the IT bits set in ARM mode is
> unpredictable. But should a CPU do that (and still be able to
> run correctly), we shouldn't kill the system.
> 
> So let's drop the BUG_ON that checks for this, as it is not
> very helpful.
> 
> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>

not sure how this relates to the series, but looks fine otherwise:

Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org>

> ---
>  arch/arm/kvm/emulate.c   | 4 +---
>  arch/arm64/kvm/emulate.c | 4 +---
>  2 files changed, 2 insertions(+), 6 deletions(-)
> 
> diff --git a/arch/arm/kvm/emulate.c b/arch/arm/kvm/emulate.c
> index af93e3f..eda9ddd 100644
> --- a/arch/arm/kvm/emulate.c
> +++ b/arch/arm/kvm/emulate.c
> @@ -221,9 +221,7 @@ static void kvm_adjust_itstate(struct kvm_vcpu *vcpu)
>  	unsigned long cpsr = *vcpu_cpsr(vcpu);
>  	bool is_arm = !(cpsr & PSR_T_BIT);
>  
> -	BUG_ON(is_arm && (cpsr & PSR_IT_MASK));
> -
> -	if (!(cpsr & PSR_IT_MASK))
> +	if (is_arm || !(cpsr & PSR_IT_MASK))
>  		return;
>  
>  	cond = (cpsr & 0xe000) >> 13;
> diff --git a/arch/arm64/kvm/emulate.c b/arch/arm64/kvm/emulate.c
> index f87d8fb..df76590 100644
> --- a/arch/arm64/kvm/emulate.c
> +++ b/arch/arm64/kvm/emulate.c
> @@ -120,9 +120,7 @@ static void kvm_adjust_itstate(struct kvm_vcpu *vcpu)
>  	unsigned long cpsr = *vcpu_cpsr(vcpu);
>  	bool is_arm = !(cpsr & COMPAT_PSR_T_BIT);
>  
> -	BUG_ON(is_arm && (cpsr & COMPAT_PSR_IT_MASK));
> -
> -	if (!(cpsr & COMPAT_PSR_IT_MASK))
> +	if (is_arm || !(cpsr & COMPAT_PSR_IT_MASK))
>  		return;
>  
>  	cond = (cpsr & 0xe000) >> 13;
> -- 
> 2.1.4
> 

^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: [PATCH 2/5] arm64: KVM: Allow kvm_skip_instr32 to be shared between kernel and HYP code
  2016-08-19 12:38   ` Marc Zyngier
@ 2016-09-01 12:09     ` Christoffer Dall
  -1 siblings, 0 replies; 42+ messages in thread
From: Christoffer Dall @ 2016-09-01 12:09 UTC (permalink / raw)
  To: Marc Zyngier; +Cc: linux-arm-kernel, kvm, kvmarm

On Fri, Aug 19, 2016 at 01:38:12PM +0100, Marc Zyngier wrote:
> As we're going to start emulating some instructions while in HYP,
> we need to be able to move the PC forward. Pretty easy for AArch64,
> but quite fiddly for AArch32 (think Thumb2 and the IT state).
> 
> In order to be able to reuse the existing code in HYP, move the bulk
> of it to kvm_emulate.h, and let the implementation located in
> emulate.c use it. HYP will be able to use it at the expense of an
> additional copy in the object file, but we can at least share the
> source code.
> 
> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
> ---
>  arch/arm64/include/asm/kvm_emulate.h | 49 ++++++++++++++++++++++++++++++++++++
>  arch/arm64/kvm/emulate.c             | 45 +--------------------------------
>  2 files changed, 50 insertions(+), 44 deletions(-)
> 
> diff --git a/arch/arm64/include/asm/kvm_emulate.h b/arch/arm64/include/asm/kvm_emulate.h
> index 4cdeae3..60db363 100644
> --- a/arch/arm64/include/asm/kvm_emulate.h
> +++ b/arch/arm64/include/asm/kvm_emulate.h
> @@ -311,4 +311,53 @@ static inline unsigned long vcpu_data_host_to_guest(struct kvm_vcpu *vcpu,
>  	return data;		/* Leave LE untouched */
>  }
>  
> +/**
> + * kvm_adjust_itstate - adjust ITSTATE when emulating instructions in IT-block
> + * @vcpu:	The VCPU pointer
> + *
> + * When exceptions occur while instructions are executed in Thumb IF-THEN
> + * blocks, the ITSTATE field of the CPSR is not advanced (updated), so we have
> + * to do this little bit of work manually. The fields map like this:
> + *
> + * IT[7:0] -> CPSR[26:25],CPSR[15:10]
> + */
> +static inline void kvm_adjust_itstate(struct kvm_vcpu *vcpu)
> +{
> +	unsigned long itbits, cond;
> +	unsigned long cpsr = *vcpu_cpsr(vcpu);
> +	bool is_arm = !(cpsr & COMPAT_PSR_T_BIT);
> +
> +	if (is_arm || !(cpsr & COMPAT_PSR_IT_MASK))
> +		return;
> +
> +	cond = (cpsr & 0xe000) >> 13;
> +	itbits = (cpsr & 0x1c00) >> (10 - 2);
> +	itbits |= (cpsr & (0x3 << 25)) >> 25;
> +
> +	/* Perform ITAdvance (see page A2-52 in ARM DDI 0406C) */
> +	if ((itbits & 0x7) == 0)
> +		itbits = cond = 0;
> +	else
> +		itbits = (itbits << 1) & 0x1f;
> +
> +	cpsr &= ~COMPAT_PSR_IT_MASK;
> +	cpsr |= cond << 13;
> +	cpsr |= (itbits & 0x1c) << (10 - 2);
> +	cpsr |= (itbits & 0x3) << 25;
> +	*vcpu_cpsr(vcpu) = cpsr;
> +}
> +
> +static void inline kvm_skip_aarch32_instr(struct kvm_vcpu *vcpu,
> +					  bool is_wide_instr)
> +{
> +	bool is_thumb;
> +
> +	is_thumb = !!(*vcpu_cpsr(vcpu) & COMPAT_PSR_T_BIT);
> +	if (is_thumb && !is_wide_instr)
> +		*vcpu_pc(vcpu) += 2;
> +	else
> +		*vcpu_pc(vcpu) += 4;
> +	kvm_adjust_itstate(vcpu);
> +}
> +
>  #endif /* __ARM64_KVM_EMULATE_H__ */
> diff --git a/arch/arm64/kvm/emulate.c b/arch/arm64/kvm/emulate.c
> index df76590..d5f6a29 100644
> --- a/arch/arm64/kvm/emulate.c
> +++ b/arch/arm64/kvm/emulate.c
> @@ -105,53 +105,10 @@ bool kvm_condition_valid32(const struct kvm_vcpu *vcpu)
>  }
>  
>  /**
> - * adjust_itstate - adjust ITSTATE when emulating instructions in IT-block
> - * @vcpu:	The VCPU pointer
> - *
> - * When exceptions occur while instructions are executed in Thumb IF-THEN
> - * blocks, the ITSTATE field of the CPSR is not advanced (updated), so we have
> - * to do this little bit of work manually. The fields map like this:
> - *
> - * IT[7:0] -> CPSR[26:25],CPSR[15:10]
> - */
> -static void kvm_adjust_itstate(struct kvm_vcpu *vcpu)
> -{
> -	unsigned long itbits, cond;
> -	unsigned long cpsr = *vcpu_cpsr(vcpu);
> -	bool is_arm = !(cpsr & COMPAT_PSR_T_BIT);
> -
> -	if (is_arm || !(cpsr & COMPAT_PSR_IT_MASK))
> -		return;
> -
> -	cond = (cpsr & 0xe000) >> 13;
> -	itbits = (cpsr & 0x1c00) >> (10 - 2);
> -	itbits |= (cpsr & (0x3 << 25)) >> 25;
> -
> -	/* Perform ITAdvance (see page A2-52 in ARM DDI 0406C) */
> -	if ((itbits & 0x7) == 0)
> -		itbits = cond = 0;
> -	else
> -		itbits = (itbits << 1) & 0x1f;
> -
> -	cpsr &= ~COMPAT_PSR_IT_MASK;
> -	cpsr |= cond << 13;
> -	cpsr |= (itbits & 0x1c) << (10 - 2);
> -	cpsr |= (itbits & 0x3) << 25;
> -	*vcpu_cpsr(vcpu) = cpsr;
> -}
> -
> -/**

This is completely duplicated in arch/arm/kvm/emulate.c (with the same
useless BUG_ON from the previous patch still around), and this is a
pretty long static inline.

How about adding virt/kvm/arm/emulate.c and move these functions in
there?

Making them available in hyp mode should just be a matter of annotating
them with __hyp_text, right?
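
A sketch of what that could look like (file name assumed, and the
COMPAT_PSR_* vs PSR_* naming would still need unifying between the two
architectures):

	/* virt/kvm/arm/emulate.c: one shared copy, mapped at EL2 too */
	void __hyp_text kvm_skip_instr32(struct kvm_vcpu *vcpu,
					 bool is_wide_instr)
	{
		bool is_thumb = !!(*vcpu_cpsr(vcpu) & COMPAT_PSR_T_BIT);

		if (is_thumb && !is_wide_instr)
			*vcpu_pc(vcpu) += 2;
		else
			*vcpu_pc(vcpu) += 4;
		kvm_adjust_itstate(vcpu);	/* likewise __hyp_text */
	}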


Thanks,
-Christoffer

>   * kvm_skip_instr - skip a trapped instruction and proceed to the next
>   * @vcpu: The vcpu pointer
>   */
>  void kvm_skip_instr32(struct kvm_vcpu *vcpu, bool is_wide_instr)
>  {
> -	bool is_thumb;
> -
> -	is_thumb = !!(*vcpu_cpsr(vcpu) & COMPAT_PSR_T_BIT);
> -	if (is_thumb && !is_wide_instr)
> -		*vcpu_pc(vcpu) += 2;
> -	else
> -		*vcpu_pc(vcpu) += 4;
> -	kvm_adjust_itstate(vcpu);
> +	kvm_skip_aarch32_instr(vcpu, is_wide_instr);
>  }
> -- 
> 2.1.4
> 


^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: [PATCH 1/5] arm/arm64: KVM: Don't BUG_ON if IT bits are set in ARM mode
  2016-09-01 11:56     ` Christoffer Dall
@ 2016-09-01 12:21       ` Marc Zyngier
  -1 siblings, 0 replies; 42+ messages in thread
From: Marc Zyngier @ 2016-09-01 12:21 UTC (permalink / raw)
  To: Christoffer Dall; +Cc: kvm, linux-arm-kernel, kvmarm

On 01/09/16 12:56, Christoffer Dall wrote:
> On Fri, Aug 19, 2016 at 01:38:11PM +0100, Marc Zyngier wrote:
>> The ARM ARM asserts that having the IT bits set in ARM mode is
>> unpredictable. But should a CPU do that (and still be able to
>> run correctly), we shouldn't kill the system.
>>
>> So let's drop the BUG_ON that checks for this, as it is not
>> very helpful.
>>
>> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
> 
> not sure how this relates to the series, but looks fine otherwise:

In the following patch, I'm moving the whole function to be an inline
that will be used in HYP as well. A BUG_ON in HYP is a pretty tasteless
thing to do, so I opted for removing it here.

But your remark on the following patch makes more sense overall. I'll
follow up there.

> Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org>

Thanks,

	M.
-- 
Jazz is not dead. It just smells funny...

^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: [PATCH 2/5] arm64: KVM: Allow kvm_skip_instr32 to be shared between kernel and HYP code
  2016-09-01 12:09     ` Christoffer Dall
@ 2016-09-01 12:23       ` Marc Zyngier
  -1 siblings, 0 replies; 42+ messages in thread
From: Marc Zyngier @ 2016-09-01 12:23 UTC (permalink / raw)
  To: Christoffer Dall; +Cc: kvm, linux-arm-kernel, kvmarm

On 01/09/16 13:09, Christoffer Dall wrote:
> On Fri, Aug 19, 2016 at 01:38:12PM +0100, Marc Zyngier wrote:
>> As we're going to start emulating some instructions while in HYP,
>> we need to be able to move the PC forward. Pretty easy for AArch64,
>> but quite fiddly for AArch32 (think Thumb2 and the IT state).
>>
>> In order to be able to reuse the existing code in HYP, move the bulk
>> of it to kvm_emulate.h, and let the implementation located in
>> emulate.c use it. HYP will be able to use it at the expense of an
>> additional copy in the object file, but we can at least share the
>> source code.
>>
>> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
>> ---
>>  arch/arm64/include/asm/kvm_emulate.h | 49 ++++++++++++++++++++++++++++++++++++
>>  arch/arm64/kvm/emulate.c             | 45 +--------------------------------
>>  2 files changed, 50 insertions(+), 44 deletions(-)
>>
>> diff --git a/arch/arm64/include/asm/kvm_emulate.h b/arch/arm64/include/asm/kvm_emulate.h
>> index 4cdeae3..60db363 100644
>> --- a/arch/arm64/include/asm/kvm_emulate.h
>> +++ b/arch/arm64/include/asm/kvm_emulate.h
>> @@ -311,4 +311,53 @@ static inline unsigned long vcpu_data_host_to_guest(struct kvm_vcpu *vcpu,
>>  	return data;		/* Leave LE untouched */
>>  }
>>  
>> +/**
>> + * kvm_adjust_itstate - adjust ITSTATE when emulating instructions in IT-block
>> + * @vcpu:	The VCPU pointer
>> + *
>> + * When exceptions occur while instructions are executed in Thumb IF-THEN
>> + * blocks, the ITSTATE field of the CPSR is not advanced (updated), so we have
>> + * to do this little bit of work manually. The fields map like this:
>> + *
>> + * IT[7:0] -> CPSR[26:25],CPSR[15:10]
>> + */
>> +static inline void kvm_adjust_itstate(struct kvm_vcpu *vcpu)
>> +{
>> +	unsigned long itbits, cond;
>> +	unsigned long cpsr = *vcpu_cpsr(vcpu);
>> +	bool is_arm = !(cpsr & COMPAT_PSR_T_BIT);
>> +
>> +	if (is_arm || !(cpsr & COMPAT_PSR_IT_MASK))
>> +		return;
>> +
>> +	cond = (cpsr & 0xe000) >> 13;
>> +	itbits = (cpsr & 0x1c00) >> (10 - 2);
>> +	itbits |= (cpsr & (0x3 << 25)) >> 25;
>> +
>> +	/* Perform ITAdvance (see page A2-52 in ARM DDI 0406C) */
>> +	if ((itbits & 0x7) == 0)
>> +		itbits = cond = 0;
>> +	else
>> +		itbits = (itbits << 1) & 0x1f;
>> +
>> +	cpsr &= ~COMPAT_PSR_IT_MASK;
>> +	cpsr |= cond << 13;
>> +	cpsr |= (itbits & 0x1c) << (10 - 2);
>> +	cpsr |= (itbits & 0x3) << 25;
>> +	*vcpu_cpsr(vcpu) = cpsr;
>> +}
>> +
>> +static void inline kvm_skip_aarch32_instr(struct kvm_vcpu *vcpu,
>> +					  bool is_wide_instr)
>> +{
>> +	bool is_thumb;
>> +
>> +	is_thumb = !!(*vcpu_cpsr(vcpu) & COMPAT_PSR_T_BIT);
>> +	if (is_thumb && !is_wide_instr)
>> +		*vcpu_pc(vcpu) += 2;
>> +	else
>> +		*vcpu_pc(vcpu) += 4;
>> +	kvm_adjust_itstate(vcpu);
>> +}
>> +
>>  #endif /* __ARM64_KVM_EMULATE_H__ */
>> diff --git a/arch/arm64/kvm/emulate.c b/arch/arm64/kvm/emulate.c
>> index df76590..d5f6a29 100644
>> --- a/arch/arm64/kvm/emulate.c
>> +++ b/arch/arm64/kvm/emulate.c
>> @@ -105,53 +105,10 @@ bool kvm_condition_valid32(const struct kvm_vcpu *vcpu)
>>  }
>>  
>>  /**
>> - * adjust_itstate - adjust ITSTATE when emulating instructions in IT-block
>> - * @vcpu:	The VCPU pointer
>> - *
>> - * When exceptions occur while instructions are executed in Thumb IF-THEN
>> - * blocks, the ITSTATE field of the CPSR is not advanced (updated), so we have
>> - * to do this little bit of work manually. The fields map like this:
>> - *
>> - * IT[7:0] -> CPSR[26:25],CPSR[15:10]
>> - */
>> -static void kvm_adjust_itstate(struct kvm_vcpu *vcpu)
>> -{
>> -	unsigned long itbits, cond;
>> -	unsigned long cpsr = *vcpu_cpsr(vcpu);
>> -	bool is_arm = !(cpsr & COMPAT_PSR_T_BIT);
>> -
>> -	if (is_arm || !(cpsr & COMPAT_PSR_IT_MASK))
>> -		return;
>> -
>> -	cond = (cpsr & 0xe000) >> 13;
>> -	itbits = (cpsr & 0x1c00) >> (10 - 2);
>> -	itbits |= (cpsr & (0x3 << 25)) >> 25;
>> -
>> -	/* Perform ITAdvance (see page A2-52 in ARM DDI 0406C) */
>> -	if ((itbits & 0x7) == 0)
>> -		itbits = cond = 0;
>> -	else
>> -		itbits = (itbits << 1) & 0x1f;
>> -
>> -	cpsr &= ~COMPAT_PSR_IT_MASK;
>> -	cpsr |= cond << 13;
>> -	cpsr |= (itbits & 0x1c) << (10 - 2);
>> -	cpsr |= (itbits & 0x3) << 25;
>> -	*vcpu_cpsr(vcpu) = cpsr;
>> -}
>> -
>> -/**
> 
> This is completely duplicated in arch/arm/kvm/emulate.c (with the same
> useless BUG_ON from the previous patch still around), and this is a
> pretty long static inline.
> 
> How about adding virt/kvm/arm/emulate.c and move these functions in
> there?
> 
> Making them available in hyp mode should just be a matter of annotating
> them with __hyp_text, right?

That's pretty cunning. I'll give it a go.

Thanks,

	M.
-- 
Jazz is not dead. It just smells funny...

^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: [PATCH 2/5] arm64: KVM: Allow kvm_skip_instr32 to be shared between kernel and HYP code
  2016-08-19 12:38   ` Marc Zyngier
@ 2016-09-01 12:45     ` Peter Maydell
  -1 siblings, 0 replies; 42+ messages in thread
From: Peter Maydell @ 2016-09-01 12:45 UTC (permalink / raw)
  To: Marc Zyngier; +Cc: Christoffer Dall, kvm-devel, arm-mail-list, kvmarm

On 19 August 2016 at 13:38, Marc Zyngier <marc.zyngier@arm.com> wrote:
> As we're going to start emulating some instructions while in HYP,
> we need to be able to move the PC forward. Pretty easy for AArch64,
> but quite fiddly for AArch32 (think Thumb2 and the IT state).

> +/**
> + * kvm_adjust_itstate - adjust ITSTATE when emulating instructions in IT-block
> + * @vcpu:      The VCPU pointer
> + *
> + * When exceptions occur while instructions are executed in Thumb IF-THEN
> + * blocks, the ITSTATE field of the CPSR is not advanced (updated), so we have
> + * to do this little bit of work manually. The fields map like this:
> + *
> + * IT[7:0] -> CPSR[26:25],CPSR[15:10]
> + */
> +static inline void kvm_adjust_itstate(struct kvm_vcpu *vcpu)
> +{
> +       unsigned long itbits, cond;
> +       unsigned long cpsr = *vcpu_cpsr(vcpu);
> +       bool is_arm = !(cpsr & COMPAT_PSR_T_BIT);
> +
> +       if (is_arm || !(cpsr & COMPAT_PSR_IT_MASK))
> +               return;
> +
> +       cond = (cpsr & 0xe000) >> 13;
> +       itbits = (cpsr & 0x1c00) >> (10 - 2);
> +       itbits |= (cpsr & (0x3 << 25)) >> 25;
> +
> +       /* Perform ITAdvance (see page A2-52 in ARM DDI 0406C) */
> +       if ((itbits & 0x7) == 0)
> +               itbits = cond = 0;
> +       else
> +               itbits = (itbits << 1) & 0x1f;
> +
> +       cpsr &= ~COMPAT_PSR_IT_MASK;
> +       cpsr |= cond << 13;
> +       cpsr |= (itbits & 0x1c) << (10 - 2);
> +       cpsr |= (itbits & 0x3) << 25;
> +       *vcpu_cpsr(vcpu) = cpsr;
> +}

Does this happen often enough to be worth micro-optimising?
With the code as written we read and then write back the cond
field into the cpsr most of the time, which you could avoid by
doing the "done with the IT block?" check early:
    if (!(cpsr & 0x06000c00)) {
        cpsr &= ~COMPAT_PSR_IT_MASK;
        return;
    }

I guess the compiler is smart enough to notice that the "& 0x1f"
can be dropped.
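
Folded in, the whole function could end up as something like this
(untested sketch; I've made the early-out mask test IT[2:0] as per the
ITAdvance pseudocode, and kept the shift constants from the patch):

    static inline void kvm_adjust_itstate(struct kvm_vcpu *vcpu)
    {
        unsigned long itbits, cond;
        unsigned long cpsr = *vcpu_cpsr(vcpu);

        if (!(cpsr & COMPAT_PSR_T_BIT) || !(cpsr & COMPAT_PSR_IT_MASK))
            return;

        /* IT[2:0] == 0: last instruction of the block, clear ITSTATE */
        if (!(cpsr & 0x06000400)) {
            *vcpu_cpsr(vcpu) = cpsr & ~COMPAT_PSR_IT_MASK;
            return;
        }

        /* Otherwise advance IT[4:0], leaving the cond field in place */
        cond = cpsr & 0xe000;
        itbits = (cpsr & 0x1c00) >> (10 - 2);
        itbits |= (cpsr & (0x3 << 25)) >> 25;
        itbits <<= 1;

        cpsr &= ~COMPAT_PSR_IT_MASK;
        cpsr |= cond;
        cpsr |= (itbits & 0x1c) << (10 - 2);
        cpsr |= (itbits & 0x3) << 25;
        *vcpu_cpsr(vcpu) = cpsr;
    }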

thanks
-- PMM

^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: [PATCH 3/5] arm64: KVM: vgic-v2: Add the GICV emulation infrastructure
  2016-08-19 12:38   ` Marc Zyngier
@ 2016-09-01 12:46     ` Christoffer Dall
  -1 siblings, 0 replies; 42+ messages in thread
From: Christoffer Dall @ 2016-09-01 12:46 UTC (permalink / raw)
  To: Marc Zyngier; +Cc: kvm, linux-arm-kernel, kvmarm

On Fri, Aug 19, 2016 at 01:38:13PM +0100, Marc Zyngier wrote:
> In order to efficiently perform the GICV access on behalf of the
> guest, we need to be able to do avoid going back all the way to

s/do//

> the host kernel.
> 
> For this, we introduce a new hook in the world switch code,
> conveniently placed just after populating the fault info.
> At that point, we only have saved/restored the GP registers,
> and we can quickly perform all the required checks (data abort,
> translation fault, valid faulting syndrome, not an external
> abort, not a PTW).
> 
> Coming back from the emulation code, we need to skip the emulated
> instruction. This involves an additional bit of save/restore in
> order to be able to access the guest's PC (and possibly CPSR if
> this is a 32bit guest).
> 
> At this stage, no emulation code is provided.
> 
> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
> ---
>  arch/arm64/include/asm/kvm_hyp.h |  1 +
>  arch/arm64/kvm/hyp/switch.c      | 32 ++++++++++++++++++++++++++++++++
>  include/kvm/arm_vgic.h           |  3 +++
>  virt/kvm/arm/hyp/vgic-v2-sr.c    |  7 +++++++
>  virt/kvm/arm/vgic/vgic-v2.c      |  2 ++
>  5 files changed, 45 insertions(+)
> 
> diff --git a/arch/arm64/include/asm/kvm_hyp.h b/arch/arm64/include/asm/kvm_hyp.h
> index cff5105..88ec3ac 100644
> --- a/arch/arm64/include/asm/kvm_hyp.h
> +++ b/arch/arm64/include/asm/kvm_hyp.h
> @@ -123,6 +123,7 @@ typeof(orig) * __hyp_text fname(void)					\
>  
>  void __vgic_v2_save_state(struct kvm_vcpu *vcpu);
>  void __vgic_v2_restore_state(struct kvm_vcpu *vcpu);
> +bool __vgic_v2_perform_cpuif_access(struct kvm_vcpu *vcpu);
>  
>  void __vgic_v3_save_state(struct kvm_vcpu *vcpu);
>  void __vgic_v3_restore_state(struct kvm_vcpu *vcpu);
> diff --git a/arch/arm64/kvm/hyp/switch.c b/arch/arm64/kvm/hyp/switch.c
> index ae7855f..0be1594 100644
> --- a/arch/arm64/kvm/hyp/switch.c
> +++ b/arch/arm64/kvm/hyp/switch.c
> @@ -17,6 +17,7 @@
>  
>  #include <linux/types.h>
>  #include <asm/kvm_asm.h>
> +#include <asm/kvm_emulate.h>
>  #include <asm/kvm_hyp.h>
>  
>  static bool __hyp_text __fpsimd_enabled_nvhe(void)
> @@ -232,6 +233,21 @@ static bool __hyp_text __populate_fault_info(struct kvm_vcpu *vcpu)
>  	return true;
>  }
>  
> +static void __hyp_text __skip_instr(struct kvm_vcpu *vcpu)
> +{
> +	vcpu->arch.ctxt.gp_regs.regs.pc	= read_sysreg_el2(elr);
> +
> +	if (vcpu_mode_is_32bit(vcpu)) {
> +		vcpu->arch.ctxt.gp_regs.regs.pstate = read_sysreg_el2(spsr);
> +		kvm_skip_aarch32_instr(vcpu, kvm_vcpu_trap_il_is32bit(vcpu));
> +		write_sysreg_el2(vcpu->arch.ctxt.gp_regs.regs.pstate, spsr);
> +	} else {
> +		*vcpu_pc(vcpu) += 4;
> +	}
> +
> +	write_sysreg_el2(vcpu->arch.ctxt.gp_regs.regs.pc, elr);
> +}
> +
>  static int __hyp_text __guest_run(struct kvm_vcpu *vcpu)
>  {
>  	struct kvm_cpu_context *host_ctxt;
> @@ -270,6 +286,22 @@ again:
>  	if (exit_code == ARM_EXCEPTION_TRAP && !__populate_fault_info(vcpu))
>  		goto again;
>  
> +	if (static_branch_unlikely(&vgic_v2_cpuif_trap) &&
> +	    exit_code == ARM_EXCEPTION_TRAP) {

do you get the static branch benefit when the test contains an &&
clause? (I haven't looked at the generated assembly.)

Also, if you flip this static branch for code both mapped in EL1 and
EL2, would you potentially need some form of additional icache
maintenance here?

Or are you relying on the static branch being set at boot time and
holding forever true/false?

> +		bool valid;
> +
> +		valid = kvm_vcpu_trap_get_class(vcpu) == ESR_ELx_EC_DABT_LOW &&
> +			kvm_vcpu_trap_get_fault_type(vcpu) == FSC_FAULT &&
> +			kvm_vcpu_dabt_isvalid(vcpu) &&
> +			!kvm_vcpu_dabt_isextabt(vcpu) &&
> +			!kvm_vcpu_dabt_iss1tw(vcpu);
> +
> +		if (valid &&  __vgic_v2_perform_cpuif_access(vcpu)) {

extra whitespace

> +			__skip_instr(vcpu);

does this interact in any amusing way with single-step guest debugging?

> +			goto again;
> +		}
> +	}
> +
>  	fp_enabled = __fpsimd_enabled();
>  
>  	__sysreg_save_guest_state(guest_ctxt);
> diff --git a/include/kvm/arm_vgic.h b/include/kvm/arm_vgic.h
> index 19b698e..8eb1501 100644
> --- a/include/kvm/arm_vgic.h
> +++ b/include/kvm/arm_vgic.h
> @@ -20,6 +20,7 @@
>  #include <linux/kvm.h>
>  #include <linux/irqreturn.h>
>  #include <linux/spinlock.h>
> +#include <linux/static_key.h>
>  #include <linux/types.h>
>  #include <kvm/iodev.h>
>  #include <linux/list.h>
> @@ -265,6 +266,8 @@ struct vgic_cpu {
>  	bool lpis_enabled;
>  };
>  
> +extern struct static_key_false vgic_v2_cpuif_trap;
> +
>  int kvm_vgic_addr(struct kvm *kvm, unsigned long type, u64 *addr, bool write);
>  void kvm_vgic_early_init(struct kvm *kvm);
>  int kvm_vgic_create(struct kvm *kvm, u32 type);
> diff --git a/virt/kvm/arm/hyp/vgic-v2-sr.c b/virt/kvm/arm/hyp/vgic-v2-sr.c
> index 7cffd93..3e2a62e 100644
> --- a/virt/kvm/arm/hyp/vgic-v2-sr.c
> +++ b/virt/kvm/arm/hyp/vgic-v2-sr.c
> @@ -167,3 +167,10 @@ void __hyp_text __vgic_v2_restore_state(struct kvm_vcpu *vcpu)
>  	writel_relaxed(cpu_if->vgic_vmcr, base + GICH_VMCR);
>  	vcpu->arch.vgic_cpu.live_lrs = live_lrs;
>  }
> +
> +#ifdef CONFIG_ARM64
> +bool __hyp_text __vgic_v2_perform_cpuif_access(struct kvm_vcpu *vcpu)
> +{
> +		return false;
> +}
> +#endif
> diff --git a/virt/kvm/arm/vgic/vgic-v2.c b/virt/kvm/arm/vgic/vgic-v2.c
> index 0bf6709..b8da901 100644
> --- a/virt/kvm/arm/vgic/vgic-v2.c
> +++ b/virt/kvm/arm/vgic/vgic-v2.c
> @@ -294,6 +294,8 @@ out:
>  	return ret;
>  }
>  
> +DEFINE_STATIC_KEY_FALSE(vgic_v2_cpuif_trap);
> +
>  /**
>   * vgic_v2_probe - probe for a GICv2 compatible interrupt controller in DT
>   * @node:	pointer to the DT node
> -- 
> 2.1.4
> 

Thanks,
-Christoffer

^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: [PATCH 4/5] arm64: KVM: vgic-v2: Add GICV access from HYP
  2016-08-19 12:38   ` Marc Zyngier
@ 2016-09-01 13:28     ` Christoffer Dall
  -1 siblings, 0 replies; 42+ messages in thread
From: Christoffer Dall @ 2016-09-01 13:28 UTC (permalink / raw)
  To: Marc Zyngier; +Cc: linux-arm-kernel, kvm, kvmarm

On Fri, Aug 19, 2016 at 01:38:14PM +0100, Marc Zyngier wrote:
> Now that we have the necessary infrastructure to handle MMIO accesses
> in HYP, perform the GICV access on behalf of the guest. This requires
> checking that the access is strictly 32bit, properly aligned, and
> falls within the expected range.
> 
> When all conditions are satisfied, we perform the access and tell
> the rest of the HYP code that the instruction has been correctly
> emulated.
> 
> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
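
For the record, the shape being described is roughly this (a sketch,
not the actual diff):

	bool __hyp_text __vgic_v2_perform_cpuif_access(struct kvm_vcpu *vcpu)
	{
		struct kvm *kvm = kern_hyp_va(vcpu->kvm);
		struct vgic_dist *vgic = &kvm->arch.vgic;
		phys_addr_t fault_ipa;
		void __iomem *addr;
		int rd;

		/* Rebuild the full faulting IPA (HPFAR only carries
		 * the page-aligned part) */
		fault_ipa  = kvm_vcpu_get_fault_ipa(vcpu);
		fault_ipa |= kvm_vcpu_get_hfar(vcpu) & GENMASK(11, 0);

		/* Not aimed at GICV? Let the host sort it out */
		if (fault_ipa <  vgic->vgic_cpu_base ||
		    fault_ipa >= vgic->vgic_cpu_base + KVM_VGIC_V2_CPU_SIZE)
			return false;

		/* Strictly 32bit, naturally aligned accesses only */
		if (kvm_vcpu_dabt_get_as(vcpu) != sizeof(u32) ||
		    (fault_ipa & 3))
			return false;

		rd = kvm_vcpu_dabt_get_rd(vcpu);
		addr  = kern_hyp_va((kern_hyp_va(&kvm_vgic_global_state))->vcpu_base_va);
		addr += fault_ipa - vgic->vgic_cpu_base;

		if (kvm_vcpu_dabt_iswrite(vcpu)) {
			u32 data = vcpu_data_guest_to_host(vcpu,
							   vcpu_get_reg(vcpu, rd),
							   sizeof(u32));
			writel_relaxed(data, addr);
		} else {
			u32 data = readl_relaxed(addr);
			vcpu_set_reg(vcpu, rd,
				     vcpu_data_host_to_guest(vcpu, data,
							     sizeof(u32)));
		}

		return true;
	}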

Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org>

^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: [PATCH 5/5] arm64: KVM: vgic-v2: Enable GICV access from HYP if access from guest is unsafe
  2016-08-19 12:38   ` Marc Zyngier
@ 2016-09-01 13:30     ` Christoffer Dall
  -1 siblings, 0 replies; 42+ messages in thread
From: Christoffer Dall @ 2016-09-01 13:30 UTC (permalink / raw)
  To: Marc Zyngier; +Cc: linux-arm-kernel, kvm, kvmarm

On Fri, Aug 19, 2016 at 01:38:15PM +0100, Marc Zyngier wrote:
> So far, we've been disabling KVM on systems where the GICV region couldn't
> be safely given to a guest. Now that we're able to handle this access
> safely by emulating it in HYP, we can enable this feature when we detect
> an unsafe configuration.
> 
> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
> ---
>  virt/kvm/arm/vgic/vgic-v2.c | 69 +++++++++++++++++++++++++++------------------
>  1 file changed, 42 insertions(+), 27 deletions(-)
> 
> diff --git a/virt/kvm/arm/vgic/vgic-v2.c b/virt/kvm/arm/vgic/vgic-v2.c
> index b8da901..d1dcfc76 100644
> --- a/virt/kvm/arm/vgic/vgic-v2.c
> +++ b/virt/kvm/arm/vgic/vgic-v2.c
> @@ -278,12 +278,14 @@ int vgic_v2_map_resources(struct kvm *kvm)
>  		goto out;
>  	}
>  
> -	ret = kvm_phys_addr_ioremap(kvm, dist->vgic_cpu_base,
> -				    kvm_vgic_global_state.vcpu_base,
> -				    KVM_VGIC_V2_CPU_SIZE, true);
> -	if (ret) {
> -		kvm_err("Unable to remap VGIC CPU to VCPU\n");
> -		goto out;
> +	if (!static_branch_unlikely(&vgic_v2_cpuif_trap)) {
> +		ret = kvm_phys_addr_ioremap(kvm, dist->vgic_cpu_base,
> +					    kvm_vgic_global_state.vcpu_base,
> +					    KVM_VGIC_V2_CPU_SIZE, true);
> +		if (ret) {
> +			kvm_err("Unable to remap VGIC CPU to VCPU\n");
> +			goto out;
> +		}
>  	}
>  
>  	dist->ready = true;
> @@ -312,45 +314,51 @@ int vgic_v2_probe(const struct gic_kvm_info *info)
>  		return -ENXIO;
>  	}
>  
> -	if (!PAGE_ALIGNED(info->vcpu.start)) {
> -		kvm_err("GICV physical address 0x%llx not page aligned\n",
> -			(unsigned long long)info->vcpu.start);
> -		return -ENXIO;
> -	}
> +	if (!PAGE_ALIGNED(info->vcpu.start) ||
> +	    !PAGE_ALIGNED(resource_size(&info->vcpu))) {
> +		kvm_info("GICV region size/alignement is unsafe, using trapping\n");
> +		kvm_vgic_global_state.vcpu_base_va = ioremap(info->vcpu.start,
> +							     resource_size(&info->vcpu));
> +		if (!kvm_vgic_global_state.vcpu_base_va) {
> +			kvm_err("Cannot ioremap GICV\n");
> +			return -ENOMEM;
> +		}
>  
> -	if (!PAGE_ALIGNED(resource_size(&info->vcpu))) {
> -		kvm_err("GICV size 0x%llx not a multiple of page size 0x%lx\n",
> -			(unsigned long long)resource_size(&info->vcpu),
> -			PAGE_SIZE);
> -		return -ENXIO;
> +		ret = create_hyp_io_mappings(kvm_vgic_global_state.vcpu_base_va,
> +					     kvm_vgic_global_state.vcpu_base_va + resource_size(&info->vcpu),
> +					     info->vcpu.start);
> +		if (ret) {
> +			kvm_err("Cannot map GICV into hyp\n");
> +			goto out;
> +		}
> +
> +		static_branch_enable(&vgic_v2_cpuif_trap);
>  	}
>  
>  	kvm_vgic_global_state.vctrl_base = ioremap(info->vctrl.start,
>  						   resource_size(&info->vctrl));
>  	if (!kvm_vgic_global_state.vctrl_base) {
>  		kvm_err("Cannot ioremap GICH\n");
> -		return -ENOMEM;
> +		ret = -ENOMEM;
> +		goto out;
>  	}
>  
>  	vtr = readl_relaxed(kvm_vgic_global_state.vctrl_base + GICH_VTR);
>  	kvm_vgic_global_state.nr_lr = (vtr & 0x3f) + 1;
>  
> -	ret = kvm_register_vgic_device(KVM_DEV_TYPE_ARM_VGIC_V2);
> -	if (ret) {
> -		kvm_err("Cannot register GICv2 KVM device\n");
> -		iounmap(kvm_vgic_global_state.vctrl_base);
> -		return ret;
> -	}
> -
>  	ret = create_hyp_io_mappings(kvm_vgic_global_state.vctrl_base,
>  				     kvm_vgic_global_state.vctrl_base +
>  					 resource_size(&info->vctrl),
>  				     info->vctrl.start);
>  	if (ret) {
>  		kvm_err("Cannot map VCTRL into hyp\n");
> -		kvm_unregister_device_ops(KVM_DEV_TYPE_ARM_VGIC_V2);
> -		iounmap(kvm_vgic_global_state.vctrl_base);
> -		return ret;
> +		goto out;
> +	}
> +
> +	ret = kvm_register_vgic_device(KVM_DEV_TYPE_ARM_VGIC_V2);
> +	if (ret) {
> +		kvm_err("Cannot register GICv2 KVM device\n");
> +		goto out;
>  	}
>  
>  	kvm_vgic_global_state.can_emulate_gicv2 = true;
> @@ -361,4 +369,11 @@ int vgic_v2_probe(const struct gic_kvm_info *info)
>  	kvm_info("vgic-v2@%llx\n", info->vctrl.start);
>  
>  	return 0;
> +out:
> +	if (kvm_vgic_global_state.vctrl_base)
> +		iounmap(kvm_vgic_global_state.vctrl_base);
> +	if (kvm_vgic_global_state.vcpu_base_va)
> +		iounmap(kvm_vgic_global_state.vcpu_base_va);
> +
> +	return ret;
>  }
> -- 
> 2.1.4

With the spelling fix from Peter, and the slightly more alarming message
(shouldn't this be a kvm_warn("...using trapping") as well?) then:

Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org>

^ permalink raw reply	[flat|nested] 42+ messages in thread

* [PATCH 5/5] arm64: KVM: vgic-v2: Enable GICV access from HYP if access from guest is unsafe
@ 2016-09-01 13:30     ` Christoffer Dall
  0 siblings, 0 replies; 42+ messages in thread
From: Christoffer Dall @ 2016-09-01 13:30 UTC (permalink / raw)
  To: linux-arm-kernel

On Fri, Aug 19, 2016 at 01:38:15PM +0100, Marc Zyngier wrote:
> So far, we've been disabling KVM on systems where the GICV region couldn't
> be safely given to a guest. Now that we're able to handle this access
> safely by emulating it in HYP, we can enable this feature when we detect
> an unsafe configuration.
> 
> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
> ---
>  virt/kvm/arm/vgic/vgic-v2.c | 69 +++++++++++++++++++++++++++------------------
>  1 file changed, 42 insertions(+), 27 deletions(-)
> 
> diff --git a/virt/kvm/arm/vgic/vgic-v2.c b/virt/kvm/arm/vgic/vgic-v2.c
> index b8da901..d1dcfc76 100644
> --- a/virt/kvm/arm/vgic/vgic-v2.c
> +++ b/virt/kvm/arm/vgic/vgic-v2.c
> @@ -278,12 +278,14 @@ int vgic_v2_map_resources(struct kvm *kvm)
>  		goto out;
>  	}
>  
> -	ret = kvm_phys_addr_ioremap(kvm, dist->vgic_cpu_base,
> -				    kvm_vgic_global_state.vcpu_base,
> -				    KVM_VGIC_V2_CPU_SIZE, true);
> -	if (ret) {
> -		kvm_err("Unable to remap VGIC CPU to VCPU\n");
> -		goto out;
> +	if (!static_branch_unlikely(&vgic_v2_cpuif_trap)) {
> +		ret = kvm_phys_addr_ioremap(kvm, dist->vgic_cpu_base,
> +					    kvm_vgic_global_state.vcpu_base,
> +					    KVM_VGIC_V2_CPU_SIZE, true);
> +		if (ret) {
> +			kvm_err("Unable to remap VGIC CPU to VCPU\n");
> +			goto out;
> +		}
>  	}
>  
>  	dist->ready = true;
> @@ -312,45 +314,51 @@ int vgic_v2_probe(const struct gic_kvm_info *info)
>  		return -ENXIO;
>  	}
>  
> -	if (!PAGE_ALIGNED(info->vcpu.start)) {
> -		kvm_err("GICV physical address 0x%llx not page aligned\n",
> -			(unsigned long long)info->vcpu.start);
> -		return -ENXIO;
> -	}
> +	if (!PAGE_ALIGNED(info->vcpu.start) ||
> +	    !PAGE_ALIGNED(resource_size(&info->vcpu))) {
> +		kvm_info("GICV region size/alignement is unsafe, using trapping\n");
> +		kvm_vgic_global_state.vcpu_base_va = ioremap(info->vcpu.start,
> +							     resource_size(&info->vcpu));
> +		if (!kvm_vgic_global_state.vcpu_base_va) {
> +			kvm_err("Cannot ioremap GICV\n");
> +			return -ENOMEM;
> +		}
>  
> -	if (!PAGE_ALIGNED(resource_size(&info->vcpu))) {
> -		kvm_err("GICV size 0x%llx not a multiple of page size 0x%lx\n",
> -			(unsigned long long)resource_size(&info->vcpu),
> -			PAGE_SIZE);
> -		return -ENXIO;
> +		ret = create_hyp_io_mappings(kvm_vgic_global_state.vcpu_base_va,
> +					     kvm_vgic_global_state.vcpu_base_va + resource_size(&info->vcpu),
> +					     info->vcpu.start);
> +		if (ret) {
> +			kvm_err("Cannot map GICV into hyp\n");
> +			goto out;
> +		}
> +
> +		static_branch_enable(&vgic_v2_cpuif_trap);
>  	}
>  
>  	kvm_vgic_global_state.vctrl_base = ioremap(info->vctrl.start,
>  						   resource_size(&info->vctrl));
>  	if (!kvm_vgic_global_state.vctrl_base) {
>  		kvm_err("Cannot ioremap GICH\n");
> -		return -ENOMEM;
> +		ret = -ENOMEM;
> +		goto out;
>  	}
>  
>  	vtr = readl_relaxed(kvm_vgic_global_state.vctrl_base + GICH_VTR);
>  	kvm_vgic_global_state.nr_lr = (vtr & 0x3f) + 1;
>  
> -	ret = kvm_register_vgic_device(KVM_DEV_TYPE_ARM_VGIC_V2);
> -	if (ret) {
> -		kvm_err("Cannot register GICv2 KVM device\n");
> -		iounmap(kvm_vgic_global_state.vctrl_base);
> -		return ret;
> -	}
> -
>  	ret = create_hyp_io_mappings(kvm_vgic_global_state.vctrl_base,
>  				     kvm_vgic_global_state.vctrl_base +
>  					 resource_size(&info->vctrl),
>  				     info->vctrl.start);
>  	if (ret) {
>  		kvm_err("Cannot map VCTRL into hyp\n");
> -		kvm_unregister_device_ops(KVM_DEV_TYPE_ARM_VGIC_V2);
> -		iounmap(kvm_vgic_global_state.vctrl_base);
> -		return ret;
> +		goto out;
> +	}
> +
> +	ret = kvm_register_vgic_device(KVM_DEV_TYPE_ARM_VGIC_V2);
> +	if (ret) {
> +		kvm_err("Cannot register GICv2 KVM device\n");
> +		goto out;
>  	}
>  
>  	kvm_vgic_global_state.can_emulate_gicv2 = true;
> @@ -361,4 +369,11 @@ int vgic_v2_probe(const struct gic_kvm_info *info)
>  	kvm_info("vgic-v2@%llx\n", info->vctrl.start);
>  
>  	return 0;
> +out:
> +	if (kvm_vgic_global_state.vctrl_base)
> +		iounmap(kvm_vgic_global_state.vctrl_base);
> +	if (kvm_vgic_global_state.vcpu_base_va)
> +		iounmap(kvm_vgic_global_state.vcpu_base_va);
> +
> +	return ret;
>  }
> -- 
> 2.1.4

With the spelling fix from Peter, and the slightly more alarming message
(shouldn't this be a kvm_warn("...using trapping") as well?) then:

Reviewed-by: Christoffer Dall <christoffer.dall@linaro.org>

* Re: [PATCH 3/5] arm64: KVM: vgic-v2: Add the GICV emulation infrastructure
  2016-09-01 12:46     ` Christoffer Dall
@ 2016-09-01 14:28       ` Marc Zyngier
  -1 siblings, 0 replies; 42+ messages in thread
From: Marc Zyngier @ 2016-09-01 14:28 UTC (permalink / raw)
  To: Christoffer Dall; +Cc: linux-arm-kernel, kvm, kvmarm, Alex Bennee

On 01/09/16 13:46, Christoffer Dall wrote:
> On Fri, Aug 19, 2016 at 01:38:13PM +0100, Marc Zyngier wrote:
>> In order to efficiently perform the GICV access on behalf of the
>> guest, we need to be able to do avoid going back all the way to
> 
> s/do//
> 
>> the host kernel.
>>
>> For this, we introduce a new hook in the world switch code,
>> conveniently placed just after populating the fault info.
>> At that point, we only have saved/restored the GP registers,
>> and we can quickly perform all the required checks (data abort,
>> translation fault, valid faulting syndrome, not an external
>> abort, not a PTW).
>>
>> Coming back from the emulation code, we need to skip the emulated
>> instruction. This involves an additional bit of save/restore in
>> order to be able to access the guest's PC (and possibly CPSR if
>> this is a 32bit guest).
>>
>> At this stage, no emulation code is provided.
>>
>> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
>> ---
>>  arch/arm64/include/asm/kvm_hyp.h |  1 +
>>  arch/arm64/kvm/hyp/switch.c      | 32 ++++++++++++++++++++++++++++++++
>>  include/kvm/arm_vgic.h           |  3 +++
>>  virt/kvm/arm/hyp/vgic-v2-sr.c    |  7 +++++++
>>  virt/kvm/arm/vgic/vgic-v2.c      |  2 ++
>>  5 files changed, 45 insertions(+)
>>
>> diff --git a/arch/arm64/include/asm/kvm_hyp.h b/arch/arm64/include/asm/kvm_hyp.h
>> index cff5105..88ec3ac 100644
>> --- a/arch/arm64/include/asm/kvm_hyp.h
>> +++ b/arch/arm64/include/asm/kvm_hyp.h
>> @@ -123,6 +123,7 @@ typeof(orig) * __hyp_text fname(void)					\
>>  
>>  void __vgic_v2_save_state(struct kvm_vcpu *vcpu);
>>  void __vgic_v2_restore_state(struct kvm_vcpu *vcpu);
>> +bool __vgic_v2_perform_cpuif_access(struct kvm_vcpu *vcpu);
>>  
>>  void __vgic_v3_save_state(struct kvm_vcpu *vcpu);
>>  void __vgic_v3_restore_state(struct kvm_vcpu *vcpu);
>> diff --git a/arch/arm64/kvm/hyp/switch.c b/arch/arm64/kvm/hyp/switch.c
>> index ae7855f..0be1594 100644
>> --- a/arch/arm64/kvm/hyp/switch.c
>> +++ b/arch/arm64/kvm/hyp/switch.c
>> @@ -17,6 +17,7 @@
>>  
>>  #include <linux/types.h>
>>  #include <asm/kvm_asm.h>
>> +#include <asm/kvm_emulate.h>
>>  #include <asm/kvm_hyp.h>
>>  
>>  static bool __hyp_text __fpsimd_enabled_nvhe(void)
>> @@ -232,6 +233,21 @@ static bool __hyp_text __populate_fault_info(struct kvm_vcpu *vcpu)
>>  	return true;
>>  }
>>  
>> +static void __hyp_text __skip_instr(struct kvm_vcpu *vcpu)
>> +{
>> +	vcpu->arch.ctxt.gp_regs.regs.pc	= read_sysreg_el2(elr);
>> +
>> +	if (vcpu_mode_is_32bit(vcpu)) {
>> +		vcpu->arch.ctxt.gp_regs.regs.pstate = read_sysreg_el2(spsr);
>> +		kvm_skip_aarch32_instr(vcpu, kvm_vcpu_trap_il_is32bit(vcpu));
>> +		write_sysreg_el2(vcpu->arch.ctxt.gp_regs.regs.pstate, spsr);
>> +	} else {
>> +		*vcpu_pc(vcpu) += 4;
>> +	}
>> +
>> +	write_sysreg_el2(vcpu->arch.ctxt.gp_regs.regs.pc, elr);
>> +}
>> +
>>  static int __hyp_text __guest_run(struct kvm_vcpu *vcpu)
>>  {
>>  	struct kvm_cpu_context *host_ctxt;
>> @@ -270,6 +286,22 @@ again:
>>  	if (exit_code == ARM_EXCEPTION_TRAP && !__populate_fault_info(vcpu))
>>  		goto again;
>>  
>> +	if (static_branch_unlikely(&vgic_v2_cpuif_trap) &&
>> +	    exit_code == ARM_EXCEPTION_TRAP) {
> 
> do you get the static branch benefit when the test contains an &&
> clause?  (I haven't looked at the generated assembly, no)

You do, otherwise the C semantics would be broken. This is strictly
equivalent to:

	if (static_branch_unlikely()) {
		if (exit_code == ...) {
			[...]
		}
	}

> Also, if you flip this static branch for code both mapped in EL1 and
> EL2, would you potentially need some form of additional icache
> maintenance here?
> 
> Or are you relying on the static branch being set at boot time and hold
> forever true/false?

I asked myself this exact question when I did this, and became convinced
that this was OK for two reasons:
- we do it only once
- when we do it, we haven't executed that code yet, so it cannot be in
the cache yet
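
To make this concrete, the pattern boils down to the following (a
minimal, self-contained sketch with made-up names, not the actual patch
code):

	#include <linux/jump_label.h>
	#include <asm/kvm_asm.h>	/* for ARM_EXCEPTION_TRAP */

	static DEFINE_STATIC_KEY_FALSE(cpuif_trap_demo);

	/* probe time: flipped at most once, before any guest has run */
	static void probe_demo(bool gicv_unsafe)
	{
		if (gicv_unsafe)
			static_branch_enable(&cpuif_trap_demo);
	}

	/* exit hot path: the key compiles to a single nop that jump-label
	 * patching turns into a branch; the exit_code comparison is only
	 * emitted on the out-of-line slow path */
	static bool exit_demo(int exit_code)
	{
		return static_branch_unlikely(&cpuif_trap_demo) &&
		       exit_code == ARM_EXCEPTION_TRAP;
	}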

> 
>> +		bool valid;
>> +
>> +		valid = kvm_vcpu_trap_get_class(vcpu) == ESR_ELx_EC_DABT_LOW &&
>> +			kvm_vcpu_trap_get_fault_type(vcpu) == FSC_FAULT &&
>> +			kvm_vcpu_dabt_isvalid(vcpu) &&
>> +			!kvm_vcpu_dabt_isextabt(vcpu) &&
>> +			!kvm_vcpu_dabt_iss1tw(vcpu);
>> +
>> +		if (valid &&  __vgic_v2_perform_cpuif_access(vcpu)) {
> 
> extra whitespace
> 
>> +			__skip_instr(vcpu);
> 
> does this interact in any amusing way with single-step guest debugging?

Ouch. Good point. Actually, nothing that uses kvm_skip_instr() works for
singlestep/watchpoint either (trapped sysreg, WFx, spurious traps,
MMIO). I guess that's something we need to fix overall.
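
An obvious shape for such a fix (purely a sketch at this point, nothing
I have actually written) would be to refuse the fast path whenever
userspace has single-step active, so the trap falls back to the slow,
debug-aware exit path:

	/* hypothetical guard before performing the access at EL2 */
	if (vcpu->guest_debug & KVM_GUESTDBG_SINGLESTEP)
		return false;	/* take the normal exit to the host */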

Thanks,

	M.
-- 
Jazz is not dead. It just smells funny...

* Re: [PATCH 3/5] arm64: KVM: vgic-v2: Add the GICV emulation infrastructure
  2016-09-01 14:28       ` Marc Zyngier
@ 2016-09-01 14:39         ` Peter Maydell
  -1 siblings, 0 replies; 42+ messages in thread
From: Peter Maydell @ 2016-09-01 14:39 UTC (permalink / raw)
  To: Marc Zyngier; +Cc: arm-mail-list, kvm-devel, kvmarm

On 1 September 2016 at 15:28, Marc Zyngier <marc.zyngier@arm.com> wrote:
> On 01/09/16 13:46, Christoffer Dall wrote:
>> On Fri, Aug 19, 2016 at 01:38:13PM +0100, Marc Zyngier wrote:
>>> +                    __skip_instr(vcpu);
>>
>> does this interact in any amusing way with single-step guest debugging?
>
> Ouch. Good point. Actually, nothing that uses kvm_skip_instr() works for
> singlestep/watchpoint either (trapped sysreg, WFx, spurious traps,
> MMIO). I guess that's something we need to fix overall.

I vaguely remember having conversations about this way way back.
All the "advance one insn" stuff (including how the singlestep
state machine gets advanced) is probably on the wrong side of
"return to userspace".

thanks
-- PMM

* Re: [PATCH 3/5] arm64: KVM: vgic-v2: Add the GICV emulation infrastructure
  2016-09-01 14:28       ` Marc Zyngier
@ 2016-09-01 14:55         ` Christoffer Dall
  -1 siblings, 0 replies; 42+ messages in thread
From: Christoffer Dall @ 2016-09-01 14:55 UTC (permalink / raw)
  To: Marc Zyngier; +Cc: linux-arm-kernel, kvm, kvmarm, Alex Bennee

On Thu, Sep 01, 2016 at 03:28:36PM +0100, Marc Zyngier wrote:
> On 01/09/16 13:46, Christoffer Dall wrote:
> > On Fri, Aug 19, 2016 at 01:38:13PM +0100, Marc Zyngier wrote:
> >> In order to efficiently perform the GICV access on behalf of the
> >> guest, we need to be able to do avoid going back all the way to
> > 
> > s/do//
> > 
> >> the host kernel.
> >>
> >> For this, we introduce a new hook in the world switch code,
> >> conveniently placed just after populating the fault info.
> >> At that point, we only have saved/restored the GP registers,
> >> and we can quickly perform all the required checks (data abort,
> >> translation fault, valid faulting syndrome, not an external
> >> abort, not a PTW).
> >>
> >> Coming back from the emulation code, we need to skip the emulated
> >> instruction. This involves an additional bit of save/restore in
> >> order to be able to access the guest's PC (and possibly CPSR if
> >> this is a 32bit guest).
> >>
> >> At this stage, no emulation code is provided.
> >>
> >> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
> >> ---
> >>  arch/arm64/include/asm/kvm_hyp.h |  1 +
> >>  arch/arm64/kvm/hyp/switch.c      | 32 ++++++++++++++++++++++++++++++++
> >>  include/kvm/arm_vgic.h           |  3 +++
> >>  virt/kvm/arm/hyp/vgic-v2-sr.c    |  7 +++++++
> >>  virt/kvm/arm/vgic/vgic-v2.c      |  2 ++
> >>  5 files changed, 45 insertions(+)
> >>
> >> diff --git a/arch/arm64/include/asm/kvm_hyp.h b/arch/arm64/include/asm/kvm_hyp.h
> >> index cff5105..88ec3ac 100644
> >> --- a/arch/arm64/include/asm/kvm_hyp.h
> >> +++ b/arch/arm64/include/asm/kvm_hyp.h
> >> @@ -123,6 +123,7 @@ typeof(orig) * __hyp_text fname(void)					\
> >>  
> >>  void __vgic_v2_save_state(struct kvm_vcpu *vcpu);
> >>  void __vgic_v2_restore_state(struct kvm_vcpu *vcpu);
> >> +bool __vgic_v2_perform_cpuif_access(struct kvm_vcpu *vcpu);
> >>  
> >>  void __vgic_v3_save_state(struct kvm_vcpu *vcpu);
> >>  void __vgic_v3_restore_state(struct kvm_vcpu *vcpu);
> >> diff --git a/arch/arm64/kvm/hyp/switch.c b/arch/arm64/kvm/hyp/switch.c
> >> index ae7855f..0be1594 100644
> >> --- a/arch/arm64/kvm/hyp/switch.c
> >> +++ b/arch/arm64/kvm/hyp/switch.c
> >> @@ -17,6 +17,7 @@
> >>  
> >>  #include <linux/types.h>
> >>  #include <asm/kvm_asm.h>
> >> +#include <asm/kvm_emulate.h>
> >>  #include <asm/kvm_hyp.h>
> >>  
> >>  static bool __hyp_text __fpsimd_enabled_nvhe(void)
> >> @@ -232,6 +233,21 @@ static bool __hyp_text __populate_fault_info(struct kvm_vcpu *vcpu)
> >>  	return true;
> >>  }
> >>  
> >> +static void __hyp_text __skip_instr(struct kvm_vcpu *vcpu)
> >> +{
> >> +	vcpu->arch.ctxt.gp_regs.regs.pc	= read_sysreg_el2(elr);
> >> +
> >> +	if (vcpu_mode_is_32bit(vcpu)) {
> >> +		vcpu->arch.ctxt.gp_regs.regs.pstate = read_sysreg_el2(spsr);
> >> +		kvm_skip_aarch32_instr(vcpu, kvm_vcpu_trap_il_is32bit(vcpu));
> >> +		write_sysreg_el2(vcpu->arch.ctxt.gp_regs.regs.pstate, spsr);
> >> +	} else {
> >> +		*vcpu_pc(vcpu) += 4;
> >> +	}
> >> +
> >> +	write_sysreg_el2(vcpu->arch.ctxt.gp_regs.regs.pc, elr);
> >> +}
> >> +
> >>  static int __hyp_text __guest_run(struct kvm_vcpu *vcpu)
> >>  {
> >>  	struct kvm_cpu_context *host_ctxt;
> >> @@ -270,6 +286,22 @@ again:
> >>  	if (exit_code == ARM_EXCEPTION_TRAP && !__populate_fault_info(vcpu))
> >>  		goto again;
> >>  
> >> +	if (static_branch_unlikely(&vgic_v2_cpuif_trap) &&
> >> +	    exit_code == ARM_EXCEPTION_TRAP) {
> > 
> > do you get the static branch benefit when the test contains an &&
> > clause?  (I haven't looked at the generated assembly, no)
> 
> You do, otherwise the C semantics would be broken. This is strictly
> equivalent to:
> 
> 	if (static_branch_unlikely()) {
> 		if (exit_code == ...) {
> 			[...]
> 		}
> 	}
> 
> > Also, if you flip this static branch for code both mapped in EL1 and
> > EL2, would you potentially need some form of additional icache
> > maintenance here?
> > 
> > Or are you relying on the static branch being set at boot time and hold
> > forever true/false?
> 
> I asked myself this exact question when I did this, and became convinced
> that this was OK for two reasons:
> - we do it only once
> - when we do it, we haven't executed that code yet, so it cannot be in
> the cache yet
> 
> > 
> >> +		bool valid;
> >> +
> >> +		valid = kvm_vcpu_trap_get_class(vcpu) == ESR_ELx_EC_DABT_LOW &&
> >> +			kvm_vcpu_trap_get_fault_type(vcpu) == FSC_FAULT &&
> >> +			kvm_vcpu_dabt_isvalid(vcpu) &&
> >> +			!kvm_vcpu_dabt_isextabt(vcpu) &&
> >> +			!kvm_vcpu_dabt_iss1tw(vcpu);
> >> +
> >> +		if (valid &&  __vgic_v2_perform_cpuif_access(vcpu)) {
> > 
> > extra whitespace
> > 
> >> +			__skip_instr(vcpu);
> > 
> > does this interact in any amusing way with single-step guest debugging?
> 
> Ouch. Good point. Actually, nothing that uses kvm_skip_instr() works for
> singlestep/watchpoint either (trapped sysreg, WFx, spurious traps,
> MMIO). I guess that's something we need to fix overall.
> 
I suppose, yes.  I discreetly cc'ed this e-mail to Alex :)

-Christoffer

end of thread [newest: 2016-09-01 14:55 UTC]

Thread overview:
2016-08-19 12:38 [PATCH 0/5] arm64: KVM: vgic-v2: Allow unsafe GICV accesses Marc Zyngier
2016-08-19 12:38 ` [PATCH 1/5] arm/arm64: KVM: Don't BUG_ON if IT bits are set in ARM mode Marc Zyngier
2016-09-01 11:56   ` Christoffer Dall
2016-09-01 12:21     ` Marc Zyngier
2016-08-19 12:38 ` [PATCH 2/5] arm64: KVM: Allow kvm_skip_instr32 to be shared between kernel and HYP code Marc Zyngier
2016-09-01 12:09   ` Christoffer Dall
2016-09-01 12:23     ` Marc Zyngier
2016-09-01 12:45   ` Peter Maydell
2016-08-19 12:38 ` [PATCH 3/5] arm64: KVM: vgic-v2: Add the GICV emulation infrastructure Marc Zyngier
2016-09-01 12:46   ` Christoffer Dall
2016-09-01 14:28     ` Marc Zyngier
2016-09-01 14:39       ` Peter Maydell
2016-09-01 14:55       ` Christoffer Dall
2016-08-19 12:38 ` [PATCH 4/5] arm64: KVM: vgic-v2: Add GICV access from HYP Marc Zyngier
2016-09-01 13:28   ` Christoffer Dall
2016-08-19 12:38 ` [PATCH 5/5] arm64: KVM: vgic-v2: Enable GICV access from HYP if access from guest is unsafe Marc Zyngier
2016-08-19 12:53   ` Peter Maydell
2016-08-19 13:05     ` Marc Zyngier
2016-08-19 13:31       ` Peter Maydell
2016-08-19 14:54         ` Marc Zyngier
2016-09-01 13:30   ` Christoffer Dall
