From: Marc Zyngier <maz@kernel.org>
To: James Morse <james.morse@arm.com>
Cc: kvmarm@lists.cs.columbia.edu, linux-arm-kernel@lists.infradead.org
Subject: Re: [PATCH 0/3] KVM: arm64: aarch32 ACTLR accesses
Date: Sun, 31 May 2020 14:37:50 +0100	[thread overview]
Message-ID: <8bdb1f67fb142f3547ebcbbe4e6158c0@kernel.org> (raw)
In-Reply-To: <20200526161834.29165-1-james.morse@arm.com>

Hi James,

On 2020-05-26 17:18, James Morse wrote:

[...]

> 1. How does this copro[] thing work with a big-endian host?
> The cp15_regs emulation looks fine as nothing uses vcpu_cp15() to read
> the register, but wouldn't prepare_fault32() read the wrong end of the
> register when using vcpu_cp15()?

This seems pretty broken indeed. How about something like this:

diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index 59029e90b557..e80c0e06f235 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -404,8 +404,14 @@ void vcpu_write_sys_reg(struct kvm_vcpu *vcpu, u64 val, int reg);
   * CP14 and CP15 live in the same array, as they are backed by the
   * same system registers.
   */
-#define vcpu_cp14(v,r)		((v)->arch.ctxt.copro[(r)])
-#define vcpu_cp15(v,r)		((v)->arch.ctxt.copro[(r)])
+#ifdef CONFIG_CPU_BIG_ENDIAN
+#define CPx_OFFSET	1
+#else
+#define CPx_OFFSET	0
+#endif
+
+#define vcpu_cp14(v,r)		((v)->arch.ctxt.copro[(r) ^ CPx_OFFSET])
+#define vcpu_cp15(v,r)		((v)->arch.ctxt.copro[(r) ^ CPx_OFFSET])

  struct kvm_vm_stat {
  	ulong remote_tlb_flush;

Yes, it's ugly.
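
If it helps, here's a stand-alone toy model of why the index needs
flipping. This is purely an illustration of the sys_regs/copro
aliasing, not the actual kvm_host.h layout: each 64-bit sysreg is
overlaid by two 32-bit slots, and on a big-endian host slot 2*i maps
onto the *upper* half of sys_regs[i], so the accessor has to pick the
other slot, which is exactly what the XOR does.

#include <stdint.h>
#include <stdio.h>

#define NR_SYS_REGS	4

/* Toy stand-in for the sys_regs/copro union in struct kvm_cpu_context. */
union toy_ctxt {
	uint64_t sys_regs[NR_SYS_REGS];
	uint32_t copro[NR_SYS_REGS * 2];
};

/* The kernel checks CONFIG_CPU_BIG_ENDIAN; use the compiler's view here. */
#if defined(__BYTE_ORDER__) && (__BYTE_ORDER__ == __ORDER_BIG_ENDIAN__)
#define CPx_OFFSET	1
#else
#define CPx_OFFSET	0
#endif

#define toy_cp15(c, r)	((c)->copro[(r) ^ CPx_OFFSET])

int main(void)
{
	union toy_ctxt c = { .sys_regs = { 0 } };

	/* 64-bit view: "system register 1" holds a 32-bit payload. */
	c.sys_regs[1] = 0xdeadbeef;

	/*
	 * cp15 index 2 aliases sys_regs[1]. Without the XOR, a big-endian
	 * host would read the upper (all-zero) half here.
	 */
	printf("cp15 view: %#x\n", (unsigned int)toy_cp15(&c, 2));

	return 0;
}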

> 2. How does the 32bit fault injection code work with VHE?
> vcpu_cp15() modifies the in-memory copy, surely a vcpu_put() will
> clobber everything it did, or fail to restore it when entering the guest.

Wow, you're really wading into dangerous territory! ;-)

Again, you are absolutely right. I guess nobody really ever ran
32bit guests on VHE systems, as such systems are both rare and mostly
come with a 64-bit-only EL1. This code is also hardly ever exercised
(we run well-behaved guests at all times!).
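
To make the failure mode concrete, here is a toy model of the
load/put dance (plain C, purely illustrative, nothing to do with the
real vcpu_load/vcpu_put code): while the sysregs are loaded on the
CPU, the in-memory copy is stale, so whatever the fault injection
writes there gets overwritten by the next save.

#include <stdbool.h>
#include <stdio.h>

/* Toy vcpu: one "cp15" register, kept both in memory and "in hardware". */
struct toy_vcpu {
	unsigned int mem_spsr;	/* in-memory copy, what vcpu_cp15() touches */
	unsigned int hw_spsr;	/* value currently live on the CPU */
	bool loaded;		/* models sysregs_loaded_on_cpu */
};

static void toy_load(struct toy_vcpu *v)
{
	v->hw_spsr = v->mem_spsr;	/* memory -> hardware */
	v->loaded = true;
}

static void toy_put(struct toy_vcpu *v)
{
	v->mem_spsr = v->hw_spsr;	/* hardware -> memory */
	v->loaded = false;
}

int main(void)
{
	struct toy_vcpu v = { .mem_spsr = 0x10 };	/* guest in USR mode */

	toy_load(&v);		/* VHE: guest state now lives in hardware */

	/* Fault injection only updates the memory copy... */
	v.mem_spsr = 0x1b;	/* pretend we switched the guest to UND mode */

	/* ...and the next put clobbers it with the stale hardware value. */
	toy_put(&v);
	printf("mem_spsr after put: %#x (the injection is lost)\n", v.mem_spsr);

	return 0;
}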

Here's a hack that should do the right thing at all times:

diff --git a/arch/arm64/kvm/aarch32.c b/arch/arm64/kvm/aarch32.c
index 0a356aa91aa1..40a62a99fbf8 100644
--- a/arch/arm64/kvm/aarch32.c
+++ b/arch/arm64/kvm/aarch32.c
@@ -33,6 +33,26 @@ static const u8 return_offsets[8][2] = {
  	[7] = { 4, 4 },		/* FIQ, unused */
  };

+static bool pre_fault_synchronize(struct kvm_vcpu *vcpu)
+{
+	preempt_disable();
+	if (vcpu->arch.sysregs_loaded_on_cpu) {
+		kvm_arch_vcpu_put(vcpu);
+		return true;
+	}
+
+	preempt_enable();
+	return false;
+}
+
+static void post_fault_synchronize(struct kvm_vcpu *vcpu, bool loaded)
+{
+	if (loaded) {
+		kvm_arch_vcpu_load(vcpu, smp_processor_id());
+		preempt_enable();
+	}
+}
+
  /*
   * When an exception is taken, most CPSR fields are left unchanged in the
   * handler. However, some are explicitly overridden (e.g. M[4:0]).
@@ -155,7 +175,10 @@ static void prepare_fault32(struct kvm_vcpu *vcpu, u32 mode, u32 vect_offset)

  void kvm_inject_undef32(struct kvm_vcpu *vcpu)
  {
+	bool loaded = pre_fault_synchronize(vcpu);
+
  	prepare_fault32(vcpu, PSR_AA32_MODE_UND, 4);
+	post_fault_synchronize(vcpu, loaded);
  }

  /*
@@ -168,6 +191,9 @@ static void inject_abt32(struct kvm_vcpu *vcpu, bool is_pabt,
  	u32 vect_offset;
  	u32 *far, *fsr;
  	bool is_lpae;
+	bool loaded;
+
+	loaded = pre_fault_synchronize(vcpu);

  	if (is_pabt) {
  		vect_offset = 12;
@@ -191,6 +217,8 @@ static void inject_abt32(struct kvm_vcpu *vcpu, bool is_pabt,
  		/* no need to shuffle FS[4] into DFSR[10] as its 0 */
  		*fsr = DFSR_FSC_EXTABT_nLPAE;
  	}
+
+	post_fault_synchronize(vcpu, loaded);
  }

  void kvm_inject_dabt32(struct kvm_vcpu *vcpu, unsigned long addr)

Of course, none of this is tested. We really should find ways to
trigger these corner cases... :-/

Thanks,

         M.
-- 
Jazz is not dead. It just smells funny...