* [PATCH 0/2] Fix preemption issues with kvm_arm_get_running_vcpu
@ 2017-09-08 14:52 ` Christoffer Dall
  0 siblings, 0 replies; 8+ messages in thread
From: Christoffer Dall @ 2017-09-08 14:52 UTC (permalink / raw)
  To: kvmarm, linux-arm-kernel; +Cc: Marc Zyngier, Christoffer Dall, hemk976, kvm

From: Christoffer Dall <cdall@linaro.org>

These two patches slightly improve our preemption handling when
accessing the per-CPU variables that hold the currently running VCPU,
by removing a redundant check and by rearranging the calling code so
that the per-CPU access happens in an explicitly non-preemptible
section, which matters on RT systems.

I'm not entirely sure whether it's appropriate to fix issues that only
affect PREEMPT_RT in mainline and also cc these to stable.  Looking at
the git log, there seems to be some precedent for this.  If not, I'm
happy to remove the cc tag and/or drop the second patch.

Thanks,
-Christoffer

Christoffer Dall (2):
  KVM: arm/arm64: Remove redundant preemptible checks
  KVM: arm/arm64: Fix vgic_mmio_change_active with PREEMPT_RT

 virt/kvm/arm/arm.c            |  2 --
 virt/kvm/arm/vgic/vgic-mmio.c | 12 +++++++++++-
 2 files changed, 11 insertions(+), 3 deletions(-)

-- 
2.7.4

* [PATCH 1/2] KVM: arm/arm64: Remove redundant preemptible checks
  2017-09-08 14:52 ` Christoffer Dall
@ 2017-09-08 14:52   ` Christoffer Dall
  -1 siblings, 0 replies; 8+ messages in thread
From: Christoffer Dall @ 2017-09-08 14:52 UTC (permalink / raw)
  To: kvmarm, linux-arm-kernel
  Cc: Marc Zyngier, hemk976, Jintack Lim, kvm, Christoffer Dall

From: Christoffer Dall <cdall@linaro.org>

The __this_cpu_read() and __this_cpu_write() functions already implement
checks for the required preemption level when CONFIG_DEBUG_PREEMPT is
enabled, and those checks produce nice error messages with full context.
There is therefore no need to explicitly check this with a BUG_ON() in
the code (which we don't do for other uses of per-CPU variables either).

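For illustration, here is a minimal sketch of the kind of check that
CONFIG_DEBUG_PREEMPT wires into the __this_cpu_*() accessors.  This is
simplified, not verbatim kernel code; the real check lives in
lib/smp_processor_id.c and also reports the preempt count and the
offending task:

	/*
	 * Sketch of the debug check implied by __this_cpu_read() and
	 * __this_cpu_write() under CONFIG_DEBUG_PREEMPT: if the context
	 * is preemptible, the access may race with migration to another
	 * CPU, so complain loudly instead of relying on an explicit
	 * BUG_ON(preemptible()) at every call site.
	 */
	static void this_cpu_preempt_check_sketch(const char *op)
	{
		if (likely(!preemptible()))
			return;	/* preemption or IRQs off: access is safe */

		printk(KERN_ERR "BUG: using __this_cpu_%s() in preemptible code\n",
		       op);
		dump_stack();
	}
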
Signed-off-by: Christoffer Dall <cdall@linaro.org>
---
 virt/kvm/arm/arm.c | 2 --
 1 file changed, 2 deletions(-)

diff --git a/virt/kvm/arm/arm.c b/virt/kvm/arm/arm.c
index a39a1e1..04313a2 100644
--- a/virt/kvm/arm/arm.c
+++ b/virt/kvm/arm/arm.c
@@ -69,7 +69,6 @@ static DEFINE_PER_CPU(unsigned char, kvm_arm_hardware_enabled);
 
 static void kvm_arm_set_running_vcpu(struct kvm_vcpu *vcpu)
 {
-	BUG_ON(preemptible());
 	__this_cpu_write(kvm_arm_running_vcpu, vcpu);
 }
 
@@ -79,7 +78,6 @@ static void kvm_arm_set_running_vcpu(struct kvm_vcpu *vcpu)
  */
 struct kvm_vcpu *kvm_arm_get_running_vcpu(void)
 {
-	BUG_ON(preemptible());
 	return __this_cpu_read(kvm_arm_running_vcpu);
 }
 
-- 
2.7.4

* [PATCH 2/2] KVM: arm/arm64: Fix vgic_mmio_change_active with PREEMPT_RT
  2017-09-08 14:52 ` Christoffer Dall
@ 2017-09-08 14:52   ` Christoffer Dall
  -1 siblings, 0 replies; 8+ messages in thread
From: Christoffer Dall @ 2017-09-08 14:52 UTC (permalink / raw)
  To: kvmarm, linux-arm-kernel
  Cc: Marc Zyngier, hemk976, Jintack Lim, kvm, Christoffer Dall, stable

From: Christoffer Dall <cdall@linaro.org>

Reading a per-CPU variable requires a non-preemptible context, and we
were relying on a normal spinlock to disable preemption as well.  This
assumption breaks with PREEMPT_RT and was observed on v4.9 with
PREEMPT_RT applied.

This change moves the spinlock tighter around the critical section that
accesses the IRQ structure protected by the lock, and uses a separate
preemption-disabled section for determining the requesting VCPU.  There
should be no functional change or performance degradation on non-RT.

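To make the failure mode concrete, a hedged sketch of why the old code
was only accidentally safe (lock internals simplified; on PREEMPT_RT,
spin_lock() is backed by a sleeping rt_mutex and does not disable
preemption):

	spin_lock(&irq->irq_lock);
	/*
	 * non-RT: spin_lock() has bumped preempt_count(), so the per-CPU
	 *         read below cannot race with migration.
	 * RT:     the lock is a sleeping rt_mutex, the task is still
	 *         preemptible() here, and the read can race with being
	 *         migrated to another CPU mid-access.
	 */
	requester_vcpu = kvm_arm_get_running_vcpu();
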
Fixes: 370a0ec18199 ("KVM: arm/arm64: Let vcpu thread modify its own active state")
Cc: stable@vger.kernel.org
Cc: Jintack Lim <jintack@cs.columbia.edu>
Reported-by: Hemanth Kumar <hemk976@gmail.com>
Signed-off-by: Christoffer Dall <cdall@linaro.org>
---
 virt/kvm/arm/vgic/vgic-mmio.c | 12 +++++++++++-
 1 file changed, 11 insertions(+), 1 deletion(-)

diff --git a/virt/kvm/arm/vgic/vgic-mmio.c b/virt/kvm/arm/vgic/vgic-mmio.c
index c1e4bdd..7377f97 100644
--- a/virt/kvm/arm/vgic/vgic-mmio.c
+++ b/virt/kvm/arm/vgic/vgic-mmio.c
@@ -181,7 +181,6 @@ static void vgic_mmio_change_active(struct kvm_vcpu *vcpu, struct vgic_irq *irq,
 				    bool new_active_state)
 {
 	struct kvm_vcpu *requester_vcpu;
-	spin_lock(&irq->irq_lock);
 
 	/*
 	 * The vcpu parameter here can mean multiple things depending on how
@@ -195,8 +194,19 @@ static void vgic_mmio_change_active(struct kvm_vcpu *vcpu, struct vgic_irq *irq,
 	 * NULL, which is fine, because we guarantee that no VCPUs are running
 	 * when accessing VGIC state from user space so irq->vcpu->cpu is
 	 * always -1.
+	 *
+	 * We have to temporarily disable preemption to read the per-CPU
+	 * variable.  It doesn't matter if we actually get preempted
+	 * after enabling preemption because we only need to figure out if
+	 * this thread is a running VCPU thread, and in that case for which
+	 * VCPU.  If we're migrated the preempt notifiers will migrate the
+	 * running VCPU pointer with us.
 	 */
+	preempt_disable();
 	requester_vcpu = kvm_arm_get_running_vcpu();
+	preempt_enable();
+
+	spin_lock(&irq->irq_lock);
 
 	/*
 	 * If this virtual IRQ was written into a list register, we
-- 
2.7.4

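A usage note on the comment introduced above: the claim that a
migration after preempt_enable() is harmless rests on KVM's preempt
notifiers.  A hedged sketch of that mechanism, simplified from the arch
hooks in virt/kvm/arm/arm.c of this era (unrelated load/put work
omitted):

	/* Invoked via the sched-in preempt notifier whenever the VCPU
	 * thread lands on a CPU: re-publish it as that CPU's running
	 * VCPU. */
	void kvm_arch_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
	{
		kvm_arm_set_running_vcpu(vcpu);
	}

	/* Sched-out: clear the pointer on the CPU being left, so a
	 * stale value is never observed by another task on that CPU. */
	void kvm_arch_vcpu_put(struct kvm_vcpu *vcpu)
	{
		kvm_arm_set_running_vcpu(NULL);
	}

So if the requesting thread is a VCPU thread, the pointer it read stays
its own VCPU even if it migrates immediately afterwards; if it is not a
VCPU thread, the value read is NULL either way.
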
* Re: [PATCH 2/2] KVM: arm/arm64: Fix vgic_mmio_change_active with PREEMPT_RT
  2017-09-08 14:52   ` Christoffer Dall
@ 2017-09-11 11:29   ` Hemanth Kumar
  -1 siblings, 0 replies; 8+ messages in thread
From: Hemanth Kumar @ 2017-09-11 11:29 UTC (permalink / raw)
  To: Christoffer Dall
  Cc: Christoffer Dall, kvm, Marc Zyngier, stable, kvmarm, linux-arm-kernel


Tested, works fine.

On Fri, Sep 8, 2017 at 8:22 PM, Christoffer Dall <christoffer.dall@linaro.org> wrote:
> [snip]

Thread overview:
2017-09-08 14:52 [PATCH 0/2] Fix preemption issues with kvm_arm_get_running_vcpu Christoffer Dall
2017-09-08 14:52 ` [PATCH 1/2] KVM: arm/arm64: Remove redundant preemptible checks Christoffer Dall
2017-09-08 14:52 ` [PATCH 2/2] KVM: arm/arm64: Fix vgic_mmio_change_active with PREEMPT_RT Christoffer Dall
2017-09-11 11:29   ` Hemanth Kumar
