From: Christoffer Dall <christoffer.dall@linaro.org>
To: kvmarm@lists.cs.columbia.edu, linux-arm-kernel@lists.infradead.org
Cc: Marc Zyngier <marc.zyngier@arm.com>,
	hemk976@gmail.com, Jintack Lim <jintack@cs.columbia.edu>,
	kvm@vger.kernel.org, Christoffer Dall <cdall@linaro.org>,
	stable@vger.kernel.org
Subject: [PATCH 2/2] KVM: arm/arm64: Fix vgic_mmio_change_active with PREEMPT_RT
Date: Fri,  8 Sep 2017 07:52:19 -0700	[thread overview]
Message-ID: <1504882339-42520-3-git-send-email-christoffer.dall@linaro.org> (raw)
In-Reply-To: <1504882339-42520-1-git-send-email-christoffer.dall@linaro.org>

From: Christoffer Dall <cdall@linaro.org>

Getting a per-CPU variable requires a non-preemptible context, and we
were relying on a normal spinlock to disable preemption as well.  This
assumption breaks with PREEMPT_RT, where spinlocks are sleeping locks
that do not disable preemption, and the resulting breakage was
observed on v4.9 using PREEMPT_RT.

This change moves the spinlock tighter around the critical section
that accesses the IRQ structure protected by the lock, and uses a
separate preemption-disabled section for determining the requesting
VCPU.  There should be no change in functionality or performance
degradation on non-RT.
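
In other words (an illustrative before/after sketch distilled from the
diff below, not a separate change; under PREEMPT_RT, spinlock_t is
backed by a sleeping rt_mutex, so spin_lock() no longer implies
preempt_disable()):

	/* Before: the per-CPU read hid inside the spinlock section */
	spin_lock(&irq->irq_lock);
	requester_vcpu = kvm_arm_get_running_vcpu();	/* per-CPU read */

	/* After: an explicit, minimal preemption-disabled section */
	preempt_disable();
	requester_vcpu = kvm_arm_get_running_vcpu();
	preempt_enable();

	spin_lock(&irq->irq_lock);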

Fixes: 370a0ec18199 ("KVM: arm/arm64: Let vcpu thread modify its own active state")
Cc: stable@vger.kernel.org
Cc: Jintack Lim <jintack@cs.columbia.edu>
Reported-by: Hemanth Kumar <hemk976@gmail.com>
Signed-off-by: Christoffer Dall <cdall@linaro.org>
---
 virt/kvm/arm/vgic/vgic-mmio.c | 12 +++++++++++-
 1 file changed, 11 insertions(+), 1 deletion(-)
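
For reference (not part of the commit message):
kvm_arm_get_running_vcpu() boils down to a read of a per-CPU pointer,
which is why it must only be called with preemption disabled.  A rough
sketch of its shape, assuming the usual per-CPU helpers (the actual
definition lives in virt/kvm/arm/arm.c and may differ in detail):

	static DEFINE_PER_CPU(struct kvm_vcpu *, kvm_arm_running_vcpu);

	struct kvm_vcpu *kvm_arm_get_running_vcpu(void)
	{
		/* __this_cpu_read() expects preemption to be disabled */
		return __this_cpu_read(kvm_arm_running_vcpu);
	}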

diff --git a/virt/kvm/arm/vgic/vgic-mmio.c b/virt/kvm/arm/vgic/vgic-mmio.c
index c1e4bdd..7377f97 100644
--- a/virt/kvm/arm/vgic/vgic-mmio.c
+++ b/virt/kvm/arm/vgic/vgic-mmio.c
@@ -181,7 +181,6 @@ static void vgic_mmio_change_active(struct kvm_vcpu *vcpu, struct vgic_irq *irq,
 				    bool new_active_state)
 {
 	struct kvm_vcpu *requester_vcpu;
-	spin_lock(&irq->irq_lock);
 
 	/*
 	 * The vcpu parameter here can mean multiple things depending on how
@@ -195,8 +194,19 @@ static void vgic_mmio_change_active(struct kvm_vcpu *vcpu, struct vgic_irq *irq,
 	 * NULL, which is fine, because we guarantee that no VCPUs are running
 	 * when accessing VGIC state from user space so irq->vcpu->cpu is
 	 * always -1.
+	 *
+	 * We have to temporarily disable preemption to read the per-CPU
+	 * variable.  It doesn't matter if we actually get preempted
+	 * after enabling preemption because we only need to figure out if
+	 * this thread is a running VCPU thread, and in that case for which
+	 * VCPU.  If we're migrated the preempt notifiers will migrate the
+	 * running VCPU pointer with us.
 	 */
+	preempt_disable();
 	requester_vcpu = kvm_arm_get_running_vcpu();
+	preempt_enable();
+
+	spin_lock(&irq->irq_lock);
 
 	/*
 	 * If this virtual IRQ was written into a list register, we
-- 
2.7.4
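
(A closing note on the "preempt notifiers" remark in the comment above:
the per-CPU running-VCPU pointer is maintained from the vcpu_load and
vcpu_put arch hooks, which the preempt notifiers invoke on every
migration.  A rough sketch of that mechanism, reusing the hypothetical
kvm_arm_running_vcpu variable from the sketch above; the real hooks do
considerably more work:

	void kvm_arch_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
	{
		/* runs with preemption disabled on the destination CPU */
		__this_cpu_write(kvm_arm_running_vcpu, vcpu);
	}

	void kvm_arch_vcpu_put(struct kvm_vcpu *vcpu)
	{
		/* runs with preemption disabled on the source CPU */
		__this_cpu_write(kvm_arm_running_vcpu, NULL);
	}

so a VCPU thread always finds its own pointer on whichever CPU it is
currently running.)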

Thread overview: 8+ messages
2017-09-08 14:52 [PATCH 0/2] Fix preemption issues with kvm_arm_get_running_vcpu Christoffer Dall
2017-09-08 14:52 ` [PATCH 1/2] KVM: arm/arm64: Remove redundant preemptible checks Christoffer Dall
2017-09-08 14:52 ` [PATCH 2/2] KVM: arm/arm64: Fix vgic_mmio_change_active with PREEMPT_RT Christoffer Dall [this message]
2017-09-11 11:29   ` Hemanth Kumar
