kvm.vger.kernel.org archive mirror
* [PATCH 0/2] KVM: arm64: Additional 32bit fixes
@ 2020-06-09  8:49 Marc Zyngier
  2020-06-09  8:49 ` [PATCH 1/2] KVM: arm64: Make vcpu_cp1x() work on Big Endian hosts Marc Zyngier
  2020-06-09  8:49 ` [PATCH 2/2] KVM: arm64: Synchronize sysreg state on injecting an AArch32 exception Marc Zyngier
  0 siblings, 2 replies; 6+ messages in thread
From: Marc Zyngier @ 2020-06-09  8:49 UTC (permalink / raw)
  To: kvmarm, kvm, linux-arm-kernel
  Cc: James Morse, Julien Thierry, Suzuki K Poulose, kernel-team

Here's a couple of patches that address the issues that James
mentioned in [1], affecting 32bit guests.

I lack a BE-capable host to properly test the first patch, but it is
obviously correct (ha! ;-).

[1] https://lore.kernel.org/r/20200526161834.29165-1-james.morse@arm.com

Marc Zyngier (2):
  KVM: arm64: Make vcpu_cp1x() work on Big Endian hosts
  KVM: arm64: Synchronize sysreg state on injecting an AArch32 exception

 arch/arm64/include/asm/kvm_host.h | 10 ++++++++--
 arch/arm64/kvm/aarch32.c          | 28 ++++++++++++++++++++++++++++
 2 files changed, 36 insertions(+), 2 deletions(-)

-- 
2.26.2



* [PATCH 1/2] KVM: arm64: Make vcpu_cp1x() work on Big Endian hosts
  2020-06-09  8:49 [PATCH 0/2] KVM: arm64: Additional 32bit fixes Marc Zyngier
@ 2020-06-09  8:49 ` Marc Zyngier
  2020-06-09 11:41   ` Robin Murphy
  2020-06-09  8:49 ` [PATCH 2/2] KVM: arm64: Synchronize sysreg state on injecting an AArch32 exception Marc Zyngier
  1 sibling, 1 reply; 6+ messages in thread
From: Marc Zyngier @ 2020-06-09  8:49 UTC (permalink / raw)
  To: kvmarm, kvm, linux-arm-kernel
  Cc: James Morse, Julien Thierry, Suzuki K Poulose, kernel-team, stable

AArch32 CP1x registers are overlaid on their AArch64 counterparts
in the vcpu struct. This leads to an interesting problem as they
are stored in their CPU-local format, and thus a CP1x register
doesn't "hit" the lower 32bit portion of the AArch64 register on
a BE host.

To work around this unfortunate situation, introduce a bias trick
in the vcpu_cp1x() accessors which picks the correct half of the
64bit register.

Cc: stable@vger.kernel.org
Reported-by: James Morse <james.morse@arm.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
---
 arch/arm64/include/asm/kvm_host.h | 10 ++++++++--
 1 file changed, 8 insertions(+), 2 deletions(-)

diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index 59029e90b557..e80c0e06f235 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -404,8 +404,14 @@ void vcpu_write_sys_reg(struct kvm_vcpu *vcpu, u64 val, int reg);
  * CP14 and CP15 live in the same array, as they are backed by the
  * same system registers.
  */
-#define vcpu_cp14(v,r)		((v)->arch.ctxt.copro[(r)])
-#define vcpu_cp15(v,r)		((v)->arch.ctxt.copro[(r)])
+#ifdef CPU_BIG_ENDIAN
+#define CPx_OFFSET	1
+#else
+#define CPx_OFFSET	0
+#endif
+
+#define vcpu_cp14(v,r)		((v)->arch.ctxt.copro[(r) ^ CPx_OFFSET])
+#define vcpu_cp15(v,r)		((v)->arch.ctxt.copro[(r) ^ CPx_OFFSET])
 
 struct kvm_vm_stat {
 	ulong remote_tlb_flush;
-- 
2.26.2
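
As a minimal standalone illustration of the layout problem described
above (plain C, reduced to a single two-word union; this is not kernel
code, and "union reg" is a made-up name):

#include <stdint.h>
#include <stdio.h>

/*
 * Simplified model of the vcpu overlay: one 64-bit AArch64 system
 * register aliased by two 32-bit AArch32 coprocessor slots.
 */
union reg {
	uint64_t sysreg;	/* AArch64 view */
	uint32_t copro[2];	/* AArch32 view */
};

int main(void)
{
	union reg r = { .sysreg = 0 };

	/* Intended: update the lower 32 bits of the 64-bit register. */
	r.copro[0] = 0xdeadbeef;

	/*
	 * LE host: prints 00000000deadbeef (the write hit the low half).
	 * BE host: prints deadbeef00000000 (the write landed in the high
	 * half), which is why the accessors need the index bias.
	 */
	printf("sysreg = %016llx\n", (unsigned long long)r.sysreg);
	return 0;
}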



* [PATCH 2/2] KVM: arm64: Synchronize sysreg state on injecting an AArch32 exception
  2020-06-09  8:49 [PATCH 0/2] KVM: arm64: Additional 32bit fixes Marc Zyngier
  2020-06-09  8:49 ` [PATCH 1/2] KVM: arm64: Make vcpu_cp1x() work on Big Endian hosts Marc Zyngier
@ 2020-06-09  8:49 ` Marc Zyngier
  1 sibling, 0 replies; 6+ messages in thread
From: Marc Zyngier @ 2020-06-09  8:49 UTC (permalink / raw)
  To: kvmarm, kvm, linux-arm-kernel
  Cc: James Morse, Julien Thierry, Suzuki K Poulose, kernel-team, stable

On a VHE system, the EL1 state is left in the CPU most of the time,
and only synchronized back to memory when vcpu_put() is called
(usually on preemption).

This means that when injecting an exception, we'd better have a way
to either:
(1) write directly to the EL1 sysregs
(2) synchronize the state back to memory, and do the changes there

For AArch64, we already do (1), so we are safe. Unfortunately,
doing the same thing for AArch32 would be pretty invasive. Instead,
we can easily implement (2) by calling the put/load architectural
backends while keeping preemption disabled, and then reload the
state back into EL1.

Cc: stable@vger.kernel.org
Reported-by: James Morse <james.morse@arm.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
---
 arch/arm64/kvm/aarch32.c | 28 ++++++++++++++++++++++++++++
 1 file changed, 28 insertions(+)

diff --git a/arch/arm64/kvm/aarch32.c b/arch/arm64/kvm/aarch32.c
index 0a356aa91aa1..40a62a99fbf8 100644
--- a/arch/arm64/kvm/aarch32.c
+++ b/arch/arm64/kvm/aarch32.c
@@ -33,6 +33,26 @@ static const u8 return_offsets[8][2] = {
 	[7] = { 4, 4 },		/* FIQ, unused */
 };
 
+static bool pre_fault_synchronize(struct kvm_vcpu *vcpu)
+{
+	preempt_disable();
+	if (vcpu->arch.sysregs_loaded_on_cpu) {
+		kvm_arch_vcpu_put(vcpu);
+		return true;
+	}
+
+	preempt_enable();
+	return false;
+}
+
+static void post_fault_synchronize(struct kvm_vcpu *vcpu, bool loaded)
+{
+	if (loaded) {
+		kvm_arch_vcpu_load(vcpu, smp_processor_id());
+		preempt_enable();
+	}
+}
+
 /*
  * When an exception is taken, most CPSR fields are left unchanged in the
  * handler. However, some are explicitly overridden (e.g. M[4:0]).
@@ -155,7 +175,10 @@ static void prepare_fault32(struct kvm_vcpu *vcpu, u32 mode, u32 vect_offset)
 
 void kvm_inject_undef32(struct kvm_vcpu *vcpu)
 {
+	bool loaded = pre_fault_synchronize(vcpu);
+
 	prepare_fault32(vcpu, PSR_AA32_MODE_UND, 4);
+	post_fault_synchronize(vcpu, loaded);
 }
 
 /*
@@ -168,6 +191,9 @@ static void inject_abt32(struct kvm_vcpu *vcpu, bool is_pabt,
 	u32 vect_offset;
 	u32 *far, *fsr;
 	bool is_lpae;
+	bool loaded;
+
+	loaded = pre_fault_synchronize(vcpu);
 
 	if (is_pabt) {
 		vect_offset = 12;
@@ -191,6 +217,8 @@ static void inject_abt32(struct kvm_vcpu *vcpu, bool is_pabt,
 		/* no need to shuffle FS[4] into DFSR[10] as its 0 */
 		*fsr = DFSR_FSC_EXTABT_nLPAE;
 	}
+
+	post_fault_synchronize(vcpu, loaded);
 }
 
 void kvm_inject_dabt32(struct kvm_vcpu *vcpu, unsigned long addr)
-- 
2.26.2
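
In condensed form, the pattern the diff above wraps around each AArch32
injection path looks like this (a sketch only; inject_example32 is a
made-up name, the real callers are kvm_inject_undef32() and
inject_abt32() shown in the patch):

static void inject_example32(struct kvm_vcpu *vcpu)
{
	/*
	 * If the EL1 state is currently live on the CPU (VHE), write it
	 * back to memory and keep preemption disabled so it cannot be
	 * reloaded behind our back.
	 */
	bool loaded = pre_fault_synchronize(vcpu);

	/* Edit the in-memory copies: mode, SPSR, return address, ... */
	prepare_fault32(vcpu, PSR_AA32_MODE_UND, 4);

	/* Reload the modified state into EL1 and re-enable preemption. */
	post_fault_synchronize(vcpu, loaded);
}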



* Re: [PATCH 1/2] KVM: arm64: Make vcpu_cp1x() work on Big Endian hosts
  2020-06-09  8:49 ` [PATCH 1/2] KVM: arm64: Make vcpu_cp1x() work on Big Endian hosts Marc Zyngier
@ 2020-06-09 11:41   ` Robin Murphy
  2020-06-09 11:48     ` Marc Zyngier
  0 siblings, 1 reply; 6+ messages in thread
From: Robin Murphy @ 2020-06-09 11:41 UTC (permalink / raw)
  To: Marc Zyngier, kvmarm, kvm, linux-arm-kernel
  Cc: kernel-team, James Morse, stable, Julien Thierry, Suzuki K Poulose

On 2020-06-09 09:49, Marc Zyngier wrote:
> AArch32 CP1x registers are overlaid on their AArch64 counterparts
> in the vcpu struct. This leads to an interesting problem as they
> are stored in their CPU-local format, and thus a CP1x register
> doesn't "hit" the lower 32bit portion of the AArch64 register on
> a BE host.
> 
> To work around this unfortunate situation, introduce a bias trick
> in the vcpu_cp1x() accessors which picks the correct half of the
> 64bit register.
> 
> Cc: stable@vger.kernel.org
> Reported-by: James Morse <james.morse@arm.com>
> Signed-off-by: Marc Zyngier <maz@kernel.org>
> ---
>   arch/arm64/include/asm/kvm_host.h | 10 ++++++++--
>   1 file changed, 8 insertions(+), 2 deletions(-)
> 
> diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
> index 59029e90b557..e80c0e06f235 100644
> --- a/arch/arm64/include/asm/kvm_host.h
> +++ b/arch/arm64/include/asm/kvm_host.h
> @@ -404,8 +404,14 @@ void vcpu_write_sys_reg(struct kvm_vcpu *vcpu, u64 val, int reg);
>    * CP14 and CP15 live in the same array, as they are backed by the
>    * same system registers.
>    */
> -#define vcpu_cp14(v,r)		((v)->arch.ctxt.copro[(r)])
> -#define vcpu_cp15(v,r)		((v)->arch.ctxt.copro[(r)])
> +#ifdef CPU_BIG_ENDIAN

Ahem... I think you're missing a "CONFIG_" there ;)

Bonus trickery - for a 0 or 1 value you can simply use IS_ENABLED().

Robin.

> +#define CPx_OFFSET	1
> +#else
> +#define CPx_OFFSET	0
> +#endif
> +
> +#define vcpu_cp14(v,r)		((v)->arch.ctxt.copro[(r) ^ CPx_OFFSET])
> +#define vcpu_cp15(v,r)		((v)->arch.ctxt.copro[(r) ^ CPx_OFFSET])
>   
>   struct kvm_vm_stat {
>   	ulong remote_tlb_flush;
> 


* Re: [PATCH 1/2] KVM: arm64: Make vcpu_cp1x() work on Big Endian hosts
  2020-06-09 11:41   ` Robin Murphy
@ 2020-06-09 11:48     ` Marc Zyngier
  2020-06-10 14:12       ` James Morse
  0 siblings, 1 reply; 6+ messages in thread
From: Marc Zyngier @ 2020-06-09 11:48 UTC (permalink / raw)
  To: Robin Murphy
  Cc: kvmarm, kvm, linux-arm-kernel, kernel-team, James Morse, stable,
	Julien Thierry, Suzuki K Poulose

Hi Robin,

On 2020-06-09 12:41, Robin Murphy wrote:
> On 2020-06-09 09:49, Marc Zyngier wrote:
>> AArch32 CP1x registers are overlaid on their AArch64 counterparts
>> in the vcpu struct. This leads to an interesting problem as they
>> are stored in their CPU-local format, and thus a CP1x register
>> doesn't "hit" the lower 32bit portion of the AArch64 register on
>> a BE host.
>> 
>> To work around this unfortunate situation, introduce a bias trick
>> in the vcpu_cp1x() accessors which picks the correct half of the
>> 64bit register.
>> 
>> Cc: stable@vger.kernel.org
>> Reported-by: James Morse <james.morse@arm.com>
>> Signed-off-by: Marc Zyngier <maz@kernel.org>
>> ---
>>   arch/arm64/include/asm/kvm_host.h | 10 ++++++++--
>>   1 file changed, 8 insertions(+), 2 deletions(-)
>> 
>> diff --git a/arch/arm64/include/asm/kvm_host.h 
>> b/arch/arm64/include/asm/kvm_host.h
>> index 59029e90b557..e80c0e06f235 100644
>> --- a/arch/arm64/include/asm/kvm_host.h
>> +++ b/arch/arm64/include/asm/kvm_host.h
>> @@ -404,8 +404,14 @@ void vcpu_write_sys_reg(struct kvm_vcpu *vcpu, 
>> u64 val, int reg);
>>    * CP14 and CP15 live in the same array, as they are backed by the
>>    * same system registers.
>>    */
>> -#define vcpu_cp14(v,r)		((v)->arch.ctxt.copro[(r)])
>> -#define vcpu_cp15(v,r)		((v)->arch.ctxt.copro[(r)])
>> +#ifdef CPU_BIG_ENDIAN
> 
> Ahem... I think you're missing a "CONFIG_" there ;)

Duh! As I said, I didn't test the thing at all! ;-)

> Bonus trickery - for a 0 or 1 value you can simply use IS_ENABLED().

Beautiful! Definitely a must! :D

Thanks,

         M.
-- 
Jazz is not dead. It just smells funny...


* Re: [PATCH 1/2] KVM: arm64: Make vcpu_cp1x() work on Big Endian hosts
  2020-06-09 11:48     ` Marc Zyngier
@ 2020-06-10 14:12       ` James Morse
  0 siblings, 0 replies; 6+ messages in thread
From: James Morse @ 2020-06-10 14:12 UTC (permalink / raw)
  To: Marc Zyngier, Robin Murphy
  Cc: kvm, Suzuki K Poulose, stable, Julien Thierry, kernel-team,
	kvmarm, linux-arm-kernel

Hi Marc, Robin,

On 09/06/2020 12:48, Marc Zyngier wrote:
> On 2020-06-09 12:41, Robin Murphy wrote:
>> On 2020-06-09 09:49, Marc Zyngier wrote:
>>> AArch32 CP1x registers are overlaid on their AArch64 counterparts
>>> in the vcpu struct. This leads to an interesting problem as they
>>> are stored in their CPU-local format, and thus a CP1x register
>>> doesn't "hit" the lower 32bit portion of the AArch64 register on
>>> a BE host.
>>>
>>> To work around this unfortunate situation, introduce a bias trick
>>> in the vcpu_cp1x() accessors which picks the correct half of the
>>> 64bit register.

>>> diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
>>> index 59029e90b557..e80c0e06f235 100644
>>> --- a/arch/arm64/include/asm/kvm_host.h
>>> +++ b/arch/arm64/include/asm/kvm_host.h
>>> @@ -404,8 +404,14 @@ void vcpu_write_sys_reg(struct kvm_vcpu *vcpu, u64 val, int reg);
>>>    * CP14 and CP15 live in the same array, as they are backed by the
>>>    * same system registers.
>>>    */
>>> -#define vcpu_cp14(v,r)        ((v)->arch.ctxt.copro[(r)])
>>> -#define vcpu_cp15(v,r)        ((v)->arch.ctxt.copro[(r)])
>>> +#ifdef CPU_BIG_ENDIAN
>>
>> Ahem... I think you're missing a "CONFIG_" there ;)
> 
> Duh! As I said, I didn't test the thing at all! ;-)
> 
>> Bonus trickery - for a 0 or 1 value you can simply use IS_ENABLED().
> 
> Beautiful! Definitely a must! :D

With Robin's suggestion of:
---------------%<---------------
diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index 2a935457712b..54e9c7eb3596 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -405,11 +405,7 @@ void vcpu_write_sys_reg(struct kvm_vcpu *vcpu, u64 val, int reg);
  * CP14 and CP15 live in the same array, as they are backed by the
  * same system registers.
  */
-#ifdef CPU_BIG_ENDIAN
-#define CPx_OFFSET     1
-#else
-#define CPx_OFFSET     0
-#endif
+#define CPx_OFFSET     IS_ENABLED(CONFIG_CPU_BIG_ENDIAN)

 #define vcpu_cp14(v,r)         ((v)->arch.ctxt.copro[(r) ^ CPx_OFFSET])
 #define vcpu_cp15(v,r)         ((v)->arch.ctxt.copro[(r) ^ CPx_OFFSET])
---------------%<---------------

Tested-by: James Morse <james.morse@arm.com>
Acked-by: James Morse <james.morse@arm.com>


Thanks,

James

-----
Before this patch, an aarch32 guest of a BE host reading sysregs that KVM is
trap-and-undeffing gets:
| Bad mode in prefetch abort handler detected
| Internal error: Oops - bad mode: 0 [#1] SMP THUMB2
| Modules linked in:
| CPU: 0 PID: 1 Comm: swapper/0 Not tainted 5.7.0+ #260
| Hardware name: Generic DT based system
| PC is at 0x4
| LR is at smp_cpus_done+0x85/0x98
| pc : [<00000004>]    lr : [<808035cb>]    psr: 6000009b
| sp : 9f4a1f08  ip : 00000003  fp : 00000000
| r10: 00000000  r9 : 00000000  r8 : 00000000
| r7 : 80904ea8  r6 : 80904f6c  r5 : 00000002  r4 : 000f4240
| r3 : bc605c12  r2 : bc605c12  r1 : 1f38c000  r0 : 0000c348
| Flags: nZCv  IRQs off  FIQs on  Mode UND_32  ISA ARM  Segment none
| Control: 50c5383d  Table: 8000406a  DAC: bc605c12
| Process swapper/0 (pid: 1, stack limit = 0x(ptrval))
[...]
| [<808035cb>] (smp_cpus_done) from [<00000002>] (0x2)
| Code: bad PC value
| ---[ end trace b37275bf489ca225 ]---


instead of the undef it so richly deserved:
| Internal error: Oops - undefined instruction: 0 [#1] SMP THUMB2
| Modules linked in:
| CPU: 0 PID: 1 Comm: swapper/0 Not tainted 5.7.0+ #260
| Hardware name: Generic DT based system
| PC is at smp_cpus_done+0x88/0x98
| LR is at smp_cpus_done+0x85/0x98
| pc : [<808035ce>]    lr : [<808035cb>]    psr: 60000073
| sp : 9f495f50  ip : 00000001  fp : 00000000
| r10: 00000000  r9 : 00000000  r8 : 00000000
| r7 : 80904ea8  r6 : 80904f6c  r5 : 00000001  r4 : 0007a120
| r3 : 7f3828d2  r2 : 7f3828d2  r1 : 1f39f000  r0 : 0000c348
| Flags: nZCv  IRQs on  FIQs off  Mode SVC_32  ISA Thumb  Segment none
| Control: 50c5383d  Table: 8000406a  DAC: 00000051
| Process swapper/0 (pid: 1, stack limit = 0x(ptrval))
[...]
| [<808035ce>] (smp_cpus_done) from [<80800f73>] (kernel_init_freeable+0xdf/0x204)
| [<80800f73>] (kernel_init_freeable) from [<805aa2a7>] (kernel_init+0x7/0xc8)
| [<805aa2a7>] (kernel_init) from [<80100159>] (ret_from_fork+0x11/0x38)
| Code: f7ff f8b9 f24c 3048 (ee11) 1f30
| ---[ end trace 4c78dcd8460e6041 ]---




