From: Marc Zyngier <marc.zyngier@arm.com>
To: "Paolo Bonzini" <pbonzini@redhat.com>,
	"Radim Krčmář" <rkrcmar@redhat.com>
Cc: Punit Agrawal <punit.agrawal@arm.com>,
	kvm@vger.kernel.org,
	"Gustavo A . R . Silva" <gustavo@embeddedor.com>,
	Will Deacon <will.deacon@arm.com>,
	linux-arm-kernel@lists.infradead.org,
	kvmarm@lists.cs.columbia.edu, Lukas Braun <koomi@moshbit.net>
Subject: [PATCH 03/28] KVM: arm/arm64: Fix VMID alloc race by reverting to lock-less
Date: Wed, 19 Dec 2018 18:03:24 +0000	[thread overview]
Message-ID: <20181219180349.242681-4-marc.zyngier@arm.com> (raw)
In-Reply-To: <20181219180349.242681-1-marc.zyngier@arm.com>

From: Christoffer Dall <christoffer.dall@arm.com>

We recently addressed a VMID generation race by introducing a read/write
lock around accesses and updates to the vmid generation values.

However, kvm_arch_vcpu_ioctl_run() also calls need_new_vmid_gen() but
does so without taking the read lock.

As far as I can tell, this can lead to the same kind of race:

  VM 0, VCPU 0			VM 0, VCPU 1
  ------------			------------
  update_vttbr (vmid 254)
  				update_vttbr (vmid 1) // roll over
				read_lock(kvm_vmid_lock);
				force_vm_exit()
  local_irq_disable
  need_new_vmid_gen == false //because vmid gen matches

  enter_guest (vmid 254)
  				kvm_arch.vttbr = <PGD>:<VMID 1>
				read_unlock(kvm_vmid_lock);

  				enter_guest (vmid 1)

This results in two VCPUs of the same VM running with different VMIDs,
and (even worse) other VCPUs from other VMs could now allocate the
clashing VMID 254 from the new generation as long as VCPU 0 has not exited.

Attempt to solve this by making sure vttbr is updated before another CPU
can observe the updated VMID generation.
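
For readers not steeped in the barrier idioms, the lock-less scheme is the
classic write-then-publish pairing: the updater writes the new vttbr, issues
smp_wmb(), and only then publishes the matching generation, while the checker
reads the generation, issues smp_rmb(), and may then consume the data written
before it. A minimal, stand-alone sketch of that pattern follows; the
payload/generation names are placeholders for illustration only and are not
the actual kvm->arch fields.

  /*
   * Illustrative sketch of the smp_wmb()/smp_rmb() publication pattern
   * relied on by the fix. "payload" and "generation" are placeholder
   * names, not the real kvm->arch fields.
   */
  #include <linux/types.h>
  #include <linux/compiler.h>   /* READ_ONCE()/WRITE_ONCE() */
  #include <asm/barrier.h>      /* smp_wmb()/smp_rmb() */

  static u64 payload;           /* plays the role of kvm->arch.vttbr */
  static u64 generation;        /* plays the role of kvm->arch.vmid_gen */

  static void publish(u64 new_payload, u64 new_gen)
  {
          payload = new_payload;                  /* 1: update the payload */
          smp_wmb();                              /* 2: order payload store before generation store */
          WRITE_ONCE(generation, new_gen);        /* 3: publish the new generation */
  }

  static bool payload_is_stale(u64 expected_gen)
  {
          u64 gen = READ_ONCE(generation);        /* 1: read the published generation */

          smp_rmb();                              /* 2: pairs with smp_wmb() in publish() */
          return gen != expected_gen;             /* 3: if current, later payload reads see the new payload */
  }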

Cc: stable@vger.kernel.org
Fixes: f0cf47d939d0 ("KVM: arm/arm64: Close VMID generation race")
Reviewed-by: Julien Thierry <julien.thierry@arm.com>
Signed-off-by: Christoffer Dall <christoffer.dall@arm.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
---
 virt/kvm/arm/arm.c | 23 +++++++++++------------
 1 file changed, 11 insertions(+), 12 deletions(-)

diff --git a/virt/kvm/arm/arm.c b/virt/kvm/arm/arm.c
index 4adcee5fc126..d9273f972828 100644
--- a/virt/kvm/arm/arm.c
+++ b/virt/kvm/arm/arm.c
@@ -66,7 +66,7 @@ static DEFINE_PER_CPU(struct kvm_vcpu *, kvm_arm_running_vcpu);
 static atomic64_t kvm_vmid_gen = ATOMIC64_INIT(1);
 static u32 kvm_next_vmid;
 static unsigned int kvm_vmid_bits __read_mostly;
-static DEFINE_RWLOCK(kvm_vmid_lock);
+static DEFINE_SPINLOCK(kvm_vmid_lock);
 
 static bool vgic_present;
 
@@ -484,7 +484,9 @@ void force_vm_exit(const cpumask_t *mask)
  */
 static bool need_new_vmid_gen(struct kvm *kvm)
 {
-	return unlikely(kvm->arch.vmid_gen != atomic64_read(&kvm_vmid_gen));
+	u64 current_vmid_gen = atomic64_read(&kvm_vmid_gen);
+	smp_rmb(); /* Orders read of kvm_vmid_gen and kvm->arch.vmid */
+	return unlikely(READ_ONCE(kvm->arch.vmid_gen) != current_vmid_gen);
 }
 
 /**
@@ -499,16 +501,11 @@ static void update_vttbr(struct kvm *kvm)
 {
 	phys_addr_t pgd_phys;
 	u64 vmid, cnp = kvm_cpu_has_cnp() ? VTTBR_CNP_BIT : 0;
-	bool new_gen;
 
-	read_lock(&kvm_vmid_lock);
-	new_gen = need_new_vmid_gen(kvm);
-	read_unlock(&kvm_vmid_lock);
-
-	if (!new_gen)
+	if (!need_new_vmid_gen(kvm))
 		return;
 
-	write_lock(&kvm_vmid_lock);
+	spin_lock(&kvm_vmid_lock);
 
 	/*
 	 * We need to re-check the vmid_gen here to ensure that if another vcpu
@@ -516,7 +513,7 @@ static void update_vttbr(struct kvm *kvm)
 	 * use the same vmid.
 	 */
 	if (!need_new_vmid_gen(kvm)) {
-		write_unlock(&kvm_vmid_lock);
+		spin_unlock(&kvm_vmid_lock);
 		return;
 	}
 
@@ -539,7 +536,6 @@ static void update_vttbr(struct kvm *kvm)
 		kvm_call_hyp(__kvm_flush_vm_context);
 	}
 
-	kvm->arch.vmid_gen = atomic64_read(&kvm_vmid_gen);
 	kvm->arch.vmid = kvm_next_vmid;
 	kvm_next_vmid++;
 	kvm_next_vmid &= (1 << kvm_vmid_bits) - 1;
@@ -550,7 +546,10 @@ static void update_vttbr(struct kvm *kvm)
 	vmid = ((u64)(kvm->arch.vmid) << VTTBR_VMID_SHIFT) & VTTBR_VMID_MASK(kvm_vmid_bits);
 	kvm->arch.vttbr = kvm_phys_to_vttbr(pgd_phys) | vmid | cnp;
 
-	write_unlock(&kvm_vmid_lock);
+	smp_wmb();
+	WRITE_ONCE(kvm->arch.vmid_gen, atomic64_read(&kvm_vmid_gen));
+
+	spin_unlock(&kvm_vmid_lock);
 }
 
 static int kvm_vcpu_first_run_init(struct kvm_vcpu *vcpu)
-- 
2.19.2
