* [PATCH] KVM: arm64: vgic: Resample HW pending state on deactivation
@ 2021-08-18 18:14 ` Marc Zyngier
  0 siblings, 0 replies; 12+ messages in thread
From: Marc Zyngier @ 2021-08-18 18:14 UTC (permalink / raw)
  To: linux-arm-kernel, kvmarm, kvm
  Cc: James Morse, Suzuki K Poulose, Alexandru Elisei, Andre Przywara,
	Eric Auger, Oliver Upton, Ricardo Koller, kernel-team,
	Raghavendra Rao Ananta, stable

When a mapped level interrupt (a timer, for example) is deactivated
by the guest, the corresponding host interrupt is also deactivated.
However, the fate of the pending state still needs to be dealt
with in SW.

This is especially true when the interrupt was in the active+pending
state in the virtual distributor at the point where the guest
was entered. On exit, the pending state is potentially stale
(the guest may have put the interrupt in a non-pending state).

If we don't do anything, the interrupt will be spuriously injected
into the guest. Although this shouldn't have any ill effect (spurious
interrupts are always possible), we can improve the emulation by
detecting the deactivation-while-pending case and resampling the
interrupt.
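
For illustration, the mapped-level fold-back logic roughly becomes the
following (a simplified sketch of the diff below, with the
vgic_irq_needs_resampling() special case left out):

	/* sketch only -- see the actual diff for the real code */
	deactivated = irq->active && !(val & ICH_LR_ACTIVE_BIT);
	irq->active = !!(val & ICH_LR_ACTIVE_BIT);
	...
	if (vgic_irq_is_mapped_level(irq)) {
		if ((val & ICH_LR_PENDING_BIT) ||	/* still pending in the LR */
		    (deactivated && irq->line_level)) {	/* possibly stale A+P state */
			irq->line_level = vgic_get_phys_line_level(irq);
			/* if the line is now low, the pending state was stale */
			resample = !irq->line_level;
		}
	}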

Fixes: e40cc57bac79 ("KVM: arm/arm64: vgic: Support level-triggered mapped interrupts")
Reported-by: Raghavendra Rao Ananta <rananta@google.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Cc: stable@vger.kernel.org
---
 arch/arm64/kvm/vgic/vgic-v2.c | 25 ++++++++++++++++++-------
 arch/arm64/kvm/vgic/vgic-v3.c | 25 ++++++++++++++++++-------
 2 files changed, 36 insertions(+), 14 deletions(-)

diff --git a/arch/arm64/kvm/vgic/vgic-v2.c b/arch/arm64/kvm/vgic/vgic-v2.c
index 2c580204f1dc..3e52ea86a87f 100644
--- a/arch/arm64/kvm/vgic/vgic-v2.c
+++ b/arch/arm64/kvm/vgic/vgic-v2.c
@@ -60,6 +60,7 @@ void vgic_v2_fold_lr_state(struct kvm_vcpu *vcpu)
 		u32 val = cpuif->vgic_lr[lr];
 		u32 cpuid, intid = val & GICH_LR_VIRTUALID;
 		struct vgic_irq *irq;
+		bool deactivated;
 
 		/* Extract the source vCPU id from the LR */
 		cpuid = val & GICH_LR_PHYSID_CPUID;
@@ -75,7 +76,8 @@ void vgic_v2_fold_lr_state(struct kvm_vcpu *vcpu)
 
 		raw_spin_lock(&irq->irq_lock);
 
-		/* Always preserve the active bit */
+		/* Always preserve the active bit, note deactivation */
+		deactivated = irq->active && !(val & GICH_LR_ACTIVE_BIT);
 		irq->active = !!(val & GICH_LR_ACTIVE_BIT);
 
 		if (irq->active && vgic_irq_is_sgi(intid))
@@ -105,6 +107,12 @@ void vgic_v2_fold_lr_state(struct kvm_vcpu *vcpu)
 		 * device state could have changed or we simply need to
 		 * process the still pending interrupt later.
 		 *
+		 * We could also have entered the guest with the interrupt
+		 * active+pending. On the next exit, we need to re-evaluate
+		 * the pending state, as it could otherwise result in a
+		 * spurious interrupt by injecting a now potentially stale
+		 * pending state.
+		 *
 		 * If this causes us to lower the level, we have to also clear
 		 * the physical active state, since we will otherwise never be
 		 * told when the interrupt becomes asserted again.
@@ -115,12 +123,15 @@ void vgic_v2_fold_lr_state(struct kvm_vcpu *vcpu)
 		if (vgic_irq_is_mapped_level(irq)) {
 			bool resample = false;
 
-			if (val & GICH_LR_PENDING_BIT) {
-				irq->line_level = vgic_get_phys_line_level(irq);
-				resample = !irq->line_level;
-			} else if (vgic_irq_needs_resampling(irq) &&
-				   !(irq->active || irq->pending_latch)) {
-				resample = true;
+			if (unlikely(vgic_irq_needs_resampling(irq))) {
+				if (!(irq->active || irq->pending_latch))
+					resample = true;
+			} else {
+				if ((val & GICH_LR_PENDING_BIT) ||
+				    (deactivated && irq->line_level)) {
+					irq->line_level = vgic_get_phys_line_level(irq);
+					resample = !irq->line_level;
+				}
 			}
 
 			if (resample)
diff --git a/arch/arm64/kvm/vgic/vgic-v3.c b/arch/arm64/kvm/vgic/vgic-v3.c
index 66004f61cd83..74f9aefffd5e 100644
--- a/arch/arm64/kvm/vgic/vgic-v3.c
+++ b/arch/arm64/kvm/vgic/vgic-v3.c
@@ -46,6 +46,7 @@ void vgic_v3_fold_lr_state(struct kvm_vcpu *vcpu)
 		u32 intid, cpuid;
 		struct vgic_irq *irq;
 		bool is_v2_sgi = false;
+		bool deactivated;
 
 		cpuid = val & GICH_LR_PHYSID_CPUID;
 		cpuid >>= GICH_LR_PHYSID_CPUID_SHIFT;
@@ -68,7 +69,8 @@ void vgic_v3_fold_lr_state(struct kvm_vcpu *vcpu)
 
 		raw_spin_lock(&irq->irq_lock);
 
-		/* Always preserve the active bit */
+		/* Always preserve the active bit, note deactivation */
+		deactivated = irq->active && !(val & ICH_LR_ACTIVE_BIT);
 		irq->active = !!(val & ICH_LR_ACTIVE_BIT);
 
 		if (irq->active && is_v2_sgi)
@@ -98,6 +100,12 @@ void vgic_v3_fold_lr_state(struct kvm_vcpu *vcpu)
 		 * device state could have changed or we simply need to
 		 * process the still pending interrupt later.
 		 *
+		 * We could also have entered the guest with the interrupt
+		 * active+pending. On the next exit, we need to re-evaluate
+		 * the pending state, as it could otherwise result in a
+		 * spurious interrupt by injecting a now potentially stale
+		 * pending state.
+		 *
 		 * If this causes us to lower the level, we have to also clear
 		 * the physical active state, since we will otherwise never be
 		 * told when the interrupt becomes asserted again.
@@ -108,12 +116,15 @@ void vgic_v3_fold_lr_state(struct kvm_vcpu *vcpu)
 		if (vgic_irq_is_mapped_level(irq)) {
 			bool resample = false;
 
-			if (val & ICH_LR_PENDING_BIT) {
-				irq->line_level = vgic_get_phys_line_level(irq);
-				resample = !irq->line_level;
-			} else if (vgic_irq_needs_resampling(irq) &&
-				   !(irq->active || irq->pending_latch)) {
-				resample = true;
+			if (unlikely(vgic_irq_needs_resampling(irq))) {
+				if (!(irq->active || irq->pending_latch))
+					resample = true;
+			} else {
+				if ((val & ICH_LR_PENDING_BIT) ||
+				    (deactivated && irq->line_level)) {
+					irq->line_level = vgic_get_phys_line_level(irq);
+					resample = !irq->line_level;
+				}
 			}
 
 			if (resample)
-- 
2.30.2


^ permalink raw reply related	[flat|nested] 12+ messages in thread

* Re: [PATCH] KVM: arm64: vgic: Resample HW pending state on deactivation
  2021-08-18 18:14 ` Marc Zyngier
@ 2021-08-18 19:05   ` Raghavendra Rao Ananta
  0 siblings, 0 replies; 12+ messages in thread
From: Raghavendra Rao Ananta @ 2021-08-18 19:05 UTC (permalink / raw)
  To: Marc Zyngier, linux-arm-kernel, kvmarm, kvm
  Cc: James Morse, Suzuki K Poulose, Alexandru Elisei, Andre Przywara,
	Eric Auger, Oliver Upton, Ricardo Koller, kernel-team, stable

On Wed, Aug 18, 2021 at 11:14 AM Marc Zyngier <maz@kernel.org> wrote:
>
> When a mapped level interrupt (a timer, for example) is deactivated
> by the guest, the corresponding host interrupt is equally deactivated.
> However, the fate of the pending state still needs to be dealt
> with in SW.
>
> This is specially true when the interrupt was in the active+pending
> state in the virtual distributor at the point where the guest
> was entered. On exit, the pending state is potentially stale
> (the guest may have put the interrupt in a non-pending state).
>
> If we don't do anything, the interrupt will be spuriously injected
> in the guest. Although this shouldn't have any ill effect (spurious
> interrupts are always possible), we can improve the emulation by
> detecting the deactivation-while-pending case and resample the
> interrupt.
>
> Fixes: e40cc57bac79 ("KVM: arm/arm64: vgic: Support level-triggered mapped interrupts")
> Reported-by: Raghavendra Rao Ananta <rananta@google.com>
> Signed-off-by: Marc Zyngier <maz@kernel.org>
> Cc: stable@vger.kernel.org
> ---
>  arch/arm64/kvm/vgic/vgic-v2.c | 25 ++++++++++++++++++-------
>  arch/arm64/kvm/vgic/vgic-v3.c | 25 ++++++++++++++++++-------
>  2 files changed, 36 insertions(+), 14 deletions(-)
>
Tested-by: Raghavendra Rao Ananta <rananta@google.com>

Thanks,
Raghavendra
> diff --git a/arch/arm64/kvm/vgic/vgic-v2.c b/arch/arm64/kvm/vgic/vgic-v2.c
> index 2c580204f1dc..3e52ea86a87f 100644
> --- a/arch/arm64/kvm/vgic/vgic-v2.c
> +++ b/arch/arm64/kvm/vgic/vgic-v2.c
> @@ -60,6 +60,7 @@ void vgic_v2_fold_lr_state(struct kvm_vcpu *vcpu)
>                 u32 val = cpuif->vgic_lr[lr];
>                 u32 cpuid, intid = val & GICH_LR_VIRTUALID;
>                 struct vgic_irq *irq;
> +               bool deactivated;
>
>                 /* Extract the source vCPU id from the LR */
>                 cpuid = val & GICH_LR_PHYSID_CPUID;
> @@ -75,7 +76,8 @@ void vgic_v2_fold_lr_state(struct kvm_vcpu *vcpu)
>
>                 raw_spin_lock(&irq->irq_lock);
>
> -               /* Always preserve the active bit */
> +               /* Always preserve the active bit, note deactivation */
> +               deactivated = irq->active && !(val & GICH_LR_ACTIVE_BIT);
>                 irq->active = !!(val & GICH_LR_ACTIVE_BIT);
>
>                 if (irq->active && vgic_irq_is_sgi(intid))
> @@ -105,6 +107,12 @@ void vgic_v2_fold_lr_state(struct kvm_vcpu *vcpu)
>                  * device state could have changed or we simply need to
>                  * process the still pending interrupt later.
>                  *
> +                * We could also have entered the guest with the interrupt
> +                * active+pending. On the next exit, we need to re-evaluate
> +                * the pending state, as it could otherwise result in a
> +                * spurious interrupt by injecting a now potentially stale
> +                * pending state.
> +                *
>                  * If this causes us to lower the level, we have to also clear
>                  * the physical active state, since we will otherwise never be
>                  * told when the interrupt becomes asserted again.
> @@ -115,12 +123,15 @@ void vgic_v2_fold_lr_state(struct kvm_vcpu *vcpu)
>                 if (vgic_irq_is_mapped_level(irq)) {
>                         bool resample = false;
>
> -                       if (val & GICH_LR_PENDING_BIT) {
> -                               irq->line_level = vgic_get_phys_line_level(irq);
> -                               resample = !irq->line_level;
> -                       } else if (vgic_irq_needs_resampling(irq) &&
> -                                  !(irq->active || irq->pending_latch)) {
> -                               resample = true;
> +                       if (unlikely(vgic_irq_needs_resampling(irq))) {
> +                               if (!(irq->active || irq->pending_latch))
> +                                       resample = true;
> +                       } else {
> +                               if ((val & GICH_LR_PENDING_BIT) ||
> +                                   (deactivated && irq->line_level)) {
> +                                       irq->line_level = vgic_get_phys_line_level(irq);
> +                                       resample = !irq->line_level;
> +                               }
>                         }
>
>                         if (resample)
> diff --git a/arch/arm64/kvm/vgic/vgic-v3.c b/arch/arm64/kvm/vgic/vgic-v3.c
> index 66004f61cd83..74f9aefffd5e 100644
> --- a/arch/arm64/kvm/vgic/vgic-v3.c
> +++ b/arch/arm64/kvm/vgic/vgic-v3.c
> @@ -46,6 +46,7 @@ void vgic_v3_fold_lr_state(struct kvm_vcpu *vcpu)
>                 u32 intid, cpuid;
>                 struct vgic_irq *irq;
>                 bool is_v2_sgi = false;
> +               bool deactivated;
>
>                 cpuid = val & GICH_LR_PHYSID_CPUID;
>                 cpuid >>= GICH_LR_PHYSID_CPUID_SHIFT;
> @@ -68,7 +69,8 @@ void vgic_v3_fold_lr_state(struct kvm_vcpu *vcpu)
>
>                 raw_spin_lock(&irq->irq_lock);
>
> -               /* Always preserve the active bit */
> +               /* Always preserve the active bit, note deactivation */
> +               deactivated = irq->active && !(val & ICH_LR_ACTIVE_BIT);
>                 irq->active = !!(val & ICH_LR_ACTIVE_BIT);
>
>                 if (irq->active && is_v2_sgi)
> @@ -98,6 +100,12 @@ void vgic_v3_fold_lr_state(struct kvm_vcpu *vcpu)
>                  * device state could have changed or we simply need to
>                  * process the still pending interrupt later.
>                  *
> +                * We could also have entered the guest with the interrupt
> +                * active+pending. On the next exit, we need to re-evaluate
> +                * the pending state, as it could otherwise result in a
> +                * spurious interrupt by injecting a now potentially stale
> +                * pending state.
> +                *
>                  * If this causes us to lower the level, we have to also clear
>                  * the physical active state, since we will otherwise never be
>                  * told when the interrupt becomes asserted again.
> @@ -108,12 +116,15 @@ void vgic_v3_fold_lr_state(struct kvm_vcpu *vcpu)
>                 if (vgic_irq_is_mapped_level(irq)) {
>                         bool resample = false;
>
> -                       if (val & ICH_LR_PENDING_BIT) {
> -                               irq->line_level = vgic_get_phys_line_level(irq);
> -                               resample = !irq->line_level;
> -                       } else if (vgic_irq_needs_resampling(irq) &&
> -                                  !(irq->active || irq->pending_latch)) {
> -                               resample = true;
> +                       if (unlikely(vgic_irq_needs_resampling(irq))) {
> +                               if (!(irq->active || irq->pending_latch))
> +                                       resample = true;
> +                       } else {
> +                               if ((val & ICH_LR_PENDING_BIT) ||
> +                                   (deactivated && irq->line_level)) {
> +                                       irq->line_level = vgic_get_phys_line_level(irq);
> +                                       resample = !irq->line_level;
> +                               }
>                         }
>
>                         if (resample)
> --
> 2.30.2
>

^ permalink raw reply	[flat|nested] 12+ messages in thread

* Re: [PATCH] KVM: arm64: vgic: Resample HW pending state on deactivation
  2021-08-18 19:05   ` Raghavendra Rao Ananta
@ 2021-08-18 19:40     ` Oliver Upton
  0 siblings, 0 replies; 12+ messages in thread
From: Oliver Upton @ 2021-08-18 19:40 UTC (permalink / raw)
  To: Raghavendra Rao Ananta
  Cc: kernel-team, kvm, Marc Zyngier, stable, Andre Przywara, kvmarm,
	linux-arm-kernel

Hey Marc,

On Wed, Aug 18, 2021 at 12:05 PM Raghavendra Rao Ananta
<rananta@google.com> wrote:
>
> On Wed, Aug 18, 2021 at 11:14 AM Marc Zyngier <maz@kernel.org> wrote:
> >
> > When a mapped level interrupt (a timer, for example) is deactivated
> > by the guest, the corresponding host interrupt is equally deactivated.
> > However, the fate of the pending state still needs to be dealt
> > with in SW.
> >
> > This is specially true when the interrupt was in the active+pending
> > state in the virtual distributor at the point where the guest
> > was entered. On exit, the pending state is potentially stale
> > (the guest may have put the interrupt in a non-pending state).
> >
> > If we don't do anything, the interrupt will be spuriously injected
> > in the guest. Although this shouldn't have any ill effect (spurious
> > interrupts are always possible), we can improve the emulation by
> > detecting the deactivation-while-pending case and resample the
> > interrupt.
> >
> > Fixes: e40cc57bac79 ("KVM: arm/arm64: vgic: Support level-triggered mapped interrupts")
> > Reported-by: Raghavendra Rao Ananta <rananta@google.com>
> > Signed-off-by: Marc Zyngier <maz@kernel.org>
> > Cc: stable@vger.kernel.org
> > ---
> >  arch/arm64/kvm/vgic/vgic-v2.c | 25 ++++++++++++++++++-------
> >  arch/arm64/kvm/vgic/vgic-v3.c | 25 ++++++++++++++++++-------
> >  2 files changed, 36 insertions(+), 14 deletions(-)
> >
> Tested-by: Raghavendra Rao Ananta <rananta@google.com>
>
> Thanks,
> Raghavendra
> > diff --git a/arch/arm64/kvm/vgic/vgic-v2.c b/arch/arm64/kvm/vgic/vgic-v2.c
> > index 2c580204f1dc..3e52ea86a87f 100644
> > --- a/arch/arm64/kvm/vgic/vgic-v2.c
> > +++ b/arch/arm64/kvm/vgic/vgic-v2.c
> > @@ -60,6 +60,7 @@ void vgic_v2_fold_lr_state(struct kvm_vcpu *vcpu)
> >                 u32 val = cpuif->vgic_lr[lr];
> >                 u32 cpuid, intid = val & GICH_LR_VIRTUALID;
> >                 struct vgic_irq *irq;
> > +               bool deactivated;
> >
> >                 /* Extract the source vCPU id from the LR */
> >                 cpuid = val & GICH_LR_PHYSID_CPUID;
> > @@ -75,7 +76,8 @@ void vgic_v2_fold_lr_state(struct kvm_vcpu *vcpu)
> >
> >                 raw_spin_lock(&irq->irq_lock);
> >
> > -               /* Always preserve the active bit */
> > +               /* Always preserve the active bit, note deactivation */
> > +               deactivated = irq->active && !(val & GICH_LR_ACTIVE_BIT);
> >                 irq->active = !!(val & GICH_LR_ACTIVE_BIT);
> >
> >                 if (irq->active && vgic_irq_is_sgi(intid))
> > @@ -105,6 +107,12 @@ void vgic_v2_fold_lr_state(struct kvm_vcpu *vcpu)
> >                  * device state could have changed or we simply need to
> >                  * process the still pending interrupt later.
> >                  *
> > +                * We could also have entered the guest with the interrupt
> > +                * active+pending. On the next exit, we need to re-evaluate
> > +                * the pending state, as it could otherwise result in a
> > +                * spurious interrupt by injecting a now potentially stale
> > +                * pending state.
> > +                *
> >                  * If this causes us to lower the level, we have to also clear
> >                  * the physical active state, since we will otherwise never be
> >                  * told when the interrupt becomes asserted again.
> > @@ -115,12 +123,15 @@ void vgic_v2_fold_lr_state(struct kvm_vcpu *vcpu)
> >                 if (vgic_irq_is_mapped_level(irq)) {
> >                         bool resample = false;
> >
> > -                       if (val & GICH_LR_PENDING_BIT) {
> > -                               irq->line_level = vgic_get_phys_line_level(irq);
> > -                               resample = !irq->line_level;
> > -                       } else if (vgic_irq_needs_resampling(irq) &&
> > -                                  !(irq->active || irq->pending_latch)) {
> > -                               resample = true;
> > +                       if (unlikely(vgic_irq_needs_resampling(irq))) {
> > +                               if (!(irq->active || irq->pending_latch))
> > +                                       resample = true;
> > +                       } else {
> > +                               if ((val & GICH_LR_PENDING_BIT) ||
> > +                                   (deactivated && irq->line_level)) {
> > +                                       irq->line_level = vgic_get_phys_line_level(irq);
> > +                                       resample = !irq->line_level;
> > +                               }
> >                         }
> >
> >                         if (resample)
> > diff --git a/arch/arm64/kvm/vgic/vgic-v3.c b/arch/arm64/kvm/vgic/vgic-v3.c
> > index 66004f61cd83..74f9aefffd5e 100644
> > --- a/arch/arm64/kvm/vgic/vgic-v3.c
> > +++ b/arch/arm64/kvm/vgic/vgic-v3.c
> > @@ -46,6 +46,7 @@ void vgic_v3_fold_lr_state(struct kvm_vcpu *vcpu)
> >                 u32 intid, cpuid;
> >                 struct vgic_irq *irq;
> >                 bool is_v2_sgi = false;
> > +               bool deactivated;
> >
> >                 cpuid = val & GICH_LR_PHYSID_CPUID;
> >                 cpuid >>= GICH_LR_PHYSID_CPUID_SHIFT;
> > @@ -68,7 +69,8 @@ void vgic_v3_fold_lr_state(struct kvm_vcpu *vcpu)
> >
> >                 raw_spin_lock(&irq->irq_lock);
> >
> > -               /* Always preserve the active bit */
> > +               /* Always preserve the active bit, note deactivation */
> > +               deactivated = irq->active && !(val & ICH_LR_ACTIVE_BIT);
> >                 irq->active = !!(val & ICH_LR_ACTIVE_BIT);
> >
> >                 if (irq->active && is_v2_sgi)
> > @@ -98,6 +100,12 @@ void vgic_v3_fold_lr_state(struct kvm_vcpu *vcpu)
> >                  * device state could have changed or we simply need to
> >                  * process the still pending interrupt later.
> >                  *
> > +                * We could also have entered the guest with the interrupt
> > +                * active+pending. On the next exit, we need to re-evaluate
> > +                * the pending state, as it could otherwise result in a
> > +                * spurious interrupt by injecting a now potentially stale
> > +                * pending state.
> > +                *
> >                  * If this causes us to lower the level, we have to also clear
> >                  * the physical active state, since we will otherwise never be
> >                  * told when the interrupt becomes asserted again.
> > @@ -108,12 +116,15 @@ void vgic_v3_fold_lr_state(struct kvm_vcpu *vcpu)
> >                 if (vgic_irq_is_mapped_level(irq)) {
> >                         bool resample = false;
> >
> > -                       if (val & ICH_LR_PENDING_BIT) {
> > -                               irq->line_level = vgic_get_phys_line_level(irq);
> > -                               resample = !irq->line_level;
> > -                       } else if (vgic_irq_needs_resampling(irq) &&
> > -                                  !(irq->active || irq->pending_latch)) {
> > -                               resample = true;
> > +                       if (unlikely(vgic_irq_needs_resampling(irq))) {
> > +                               if (!(irq->active || irq->pending_latch))
> > +                                       resample = true;
> > +                       } else {
> > +                               if ((val & ICH_LR_PENDING_BIT) ||
> > +                                   (deactivated && irq->line_level)) {
> > +                                       irq->line_level = vgic_get_phys_line_level(irq);
> > +                                       resample = !irq->line_level;
> > +                               }

The vGICv3 and vGICv2 implementations look identical here, should we
have a helper that keeps the code common between the two?
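
For illustration, such a helper might look roughly like this (purely a
hypothetical sketch, not code from this patch -- the name, signature and
placement are made up):

	/* hypothetical common helper for the GICv2/GICv3 fold-back paths */
	static bool vgic_fold_mapped_level(struct vgic_irq *irq, bool lr_pending,
					   bool deactivated)
	{
		if (unlikely(vgic_irq_needs_resampling(irq)))
			return !(irq->active || irq->pending_latch);

		if (lr_pending || (deactivated && irq->line_level)) {
			irq->line_level = vgic_get_phys_line_level(irq);
			return !irq->line_level;
		}

		return false;
	}

with each caller passing "val & GICH_LR_PENDING_BIT" (or the ICH_LR
equivalent) and keeping its existing "if (resample)" handling.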

Otherwise, the functional change LGTM, so:

Reviewed-by: Oliver Upton <oupton@google.com>

> >                         }
> >
> >                         if (resample)
> > --
> > 2.30.2
> >

^ permalink raw reply	[flat|nested] 12+ messages in thread

* Re: [PATCH] KVM: arm64: vgic: Resample HW pending state on deactivation
@ 2021-08-18 19:40     ` Oliver Upton
  0 siblings, 0 replies; 12+ messages in thread
From: Oliver Upton @ 2021-08-18 19:40 UTC (permalink / raw)
  To: Raghavendra Rao Ananta
  Cc: Marc Zyngier, linux-arm-kernel, kvmarm, kvm, James Morse,
	Suzuki K Poulose, Alexandru Elisei, Andre Przywara, Eric Auger,
	Ricardo Koller, kernel-team, stable

Hey Marc,

On Wed, Aug 18, 2021 at 12:05 PM Raghavendra Rao Ananta
<rananta@google.com> wrote:
>
> On Wed, Aug 18, 2021 at 11:14 AM Marc Zyngier <maz@kernel.org> wrote:
> >
> > When a mapped level interrupt (a timer, for example) is deactivated
> > by the guest, the corresponding host interrupt is equally deactivated.
> > However, the fate of the pending state still needs to be dealt
> > with in SW.
> >
> > This is specially true when the interrupt was in the active+pending
> > state in the virtual distributor at the point where the guest
> > was entered. On exit, the pending state is potentially stale
> > (the guest may have put the interrupt in a non-pending state).
> >
> > If we don't do anything, the interrupt will be spuriously injected
> > in the guest. Although this shouldn't have any ill effect (spurious
> > interrupts are always possible), we can improve the emulation by
> > detecting the deactivation-while-pending case and resample the
> > interrupt.
> >
> > Fixes: e40cc57bac79 ("KVM: arm/arm64: vgic: Support level-triggered mapped interrupts")
> > Reported-by: Raghavendra Rao Ananta <rananta@google.com>
> > Signed-off-by: Marc Zyngier <maz@kernel.org>
> > Cc: stable@vger.kernel.org
> > ---
> >  arch/arm64/kvm/vgic/vgic-v2.c | 25 ++++++++++++++++++-------
> >  arch/arm64/kvm/vgic/vgic-v3.c | 25 ++++++++++++++++++-------
> >  2 files changed, 36 insertions(+), 14 deletions(-)
> >
> Tested-by: Raghavendra Rao Ananta <rananta@google.com>
>
> Thanks,
> Raghavendra
> > diff --git a/arch/arm64/kvm/vgic/vgic-v2.c b/arch/arm64/kvm/vgic/vgic-v2.c
> > index 2c580204f1dc..3e52ea86a87f 100644
> > --- a/arch/arm64/kvm/vgic/vgic-v2.c
> > +++ b/arch/arm64/kvm/vgic/vgic-v2.c
> > @@ -60,6 +60,7 @@ void vgic_v2_fold_lr_state(struct kvm_vcpu *vcpu)
> >                 u32 val = cpuif->vgic_lr[lr];
> >                 u32 cpuid, intid = val & GICH_LR_VIRTUALID;
> >                 struct vgic_irq *irq;
> > +               bool deactivated;
> >
> >                 /* Extract the source vCPU id from the LR */
> >                 cpuid = val & GICH_LR_PHYSID_CPUID;
> > @@ -75,7 +76,8 @@ void vgic_v2_fold_lr_state(struct kvm_vcpu *vcpu)
> >
> >                 raw_spin_lock(&irq->irq_lock);
> >
> > -               /* Always preserve the active bit */
> > +               /* Always preserve the active bit, note deactivation */
> > +               deactivated = irq->active && !(val & GICH_LR_ACTIVE_BIT);
> >                 irq->active = !!(val & GICH_LR_ACTIVE_BIT);
> >
> >                 if (irq->active && vgic_irq_is_sgi(intid))
> > @@ -105,6 +107,12 @@ void vgic_v2_fold_lr_state(struct kvm_vcpu *vcpu)
> >                  * device state could have changed or we simply need to
> >                  * process the still pending interrupt later.
> >                  *
> > +                * We could also have entered the guest with the interrupt
> > +                * active+pending. On the next exit, we need to re-evaluate
> > +                * the pending state, as it could otherwise result in a
> > +                * spurious interrupt by injecting a now potentially stale
> > +                * pending state.
> > +                *
> >                  * If this causes us to lower the level, we have to also clear
> >                  * the physical active state, since we will otherwise never be
> >                  * told when the interrupt becomes asserted again.
> > @@ -115,12 +123,15 @@ void vgic_v2_fold_lr_state(struct kvm_vcpu *vcpu)
> >                 if (vgic_irq_is_mapped_level(irq)) {
> >                         bool resample = false;
> >
> > -                       if (val & GICH_LR_PENDING_BIT) {
> > -                               irq->line_level = vgic_get_phys_line_level(irq);
> > -                               resample = !irq->line_level;
> > -                       } else if (vgic_irq_needs_resampling(irq) &&
> > -                                  !(irq->active || irq->pending_latch)) {
> > -                               resample = true;
> > +                       if (unlikely(vgic_irq_needs_resampling(irq))) {
> > +                               if (!(irq->active || irq->pending_latch))
> > +                                       resample = true;
> > +                       } else {
> > +                               if ((val & GICH_LR_PENDING_BIT) ||
> > +                                   (deactivated && irq->line_level)) {
> > +                                       irq->line_level = vgic_get_phys_line_level(irq);
> > +                                       resample = !irq->line_level;
> > +                               }
> >                         }
> >
> >                         if (resample)
> > diff --git a/arch/arm64/kvm/vgic/vgic-v3.c b/arch/arm64/kvm/vgic/vgic-v3.c
> > index 66004f61cd83..74f9aefffd5e 100644
> > --- a/arch/arm64/kvm/vgic/vgic-v3.c
> > +++ b/arch/arm64/kvm/vgic/vgic-v3.c
> > @@ -46,6 +46,7 @@ void vgic_v3_fold_lr_state(struct kvm_vcpu *vcpu)
> >                 u32 intid, cpuid;
> >                 struct vgic_irq *irq;
> >                 bool is_v2_sgi = false;
> > +               bool deactivated;
> >
> >                 cpuid = val & GICH_LR_PHYSID_CPUID;
> >                 cpuid >>= GICH_LR_PHYSID_CPUID_SHIFT;
> > @@ -68,7 +69,8 @@ void vgic_v3_fold_lr_state(struct kvm_vcpu *vcpu)
> >
> >                 raw_spin_lock(&irq->irq_lock);
> >
> > -               /* Always preserve the active bit */
> > +               /* Always preserve the active bit, note deactivation */
> > +               deactivated = irq->active && !(val & ICH_LR_ACTIVE_BIT);
> >                 irq->active = !!(val & ICH_LR_ACTIVE_BIT);
> >
> >                 if (irq->active && is_v2_sgi)
> > @@ -98,6 +100,12 @@ void vgic_v3_fold_lr_state(struct kvm_vcpu *vcpu)
> >                  * device state could have changed or we simply need to
> >                  * process the still pending interrupt later.
> >                  *
> > +                * We could also have entered the guest with the interrupt
> > +                * active+pending. On the next exit, we need to re-evaluate
> > +                * the pending state, as it could otherwise result in a
> > +                * spurious interrupt by injecting a now potentially stale
> > +                * pending state.
> > +                *
> >                  * If this causes us to lower the level, we have to also clear
> >                  * the physical active state, since we will otherwise never be
> >                  * told when the interrupt becomes asserted again.
> > @@ -108,12 +116,15 @@ void vgic_v3_fold_lr_state(struct kvm_vcpu *vcpu)
> >                 if (vgic_irq_is_mapped_level(irq)) {
> >                         bool resample = false;
> >
> > -                       if (val & ICH_LR_PENDING_BIT) {
> > -                               irq->line_level = vgic_get_phys_line_level(irq);
> > -                               resample = !irq->line_level;
> > -                       } else if (vgic_irq_needs_resampling(irq) &&
> > -                                  !(irq->active || irq->pending_latch)) {
> > -                               resample = true;
> > +                       if (unlikely(vgic_irq_needs_resampling(irq))) {
> > +                               if (!(irq->active || irq->pending_latch))
> > +                                       resample = true;
> > +                       } else {
> > +                               if ((val & ICH_LR_PENDING_BIT) ||
> > +                                   (deactivated && irq->line_level)) {
> > +                                       irq->line_level = vgic_get_phys_line_level(irq);
> > +                                       resample = !irq->line_level;
> > +                               }

The vGICv3 and vGICv2 implementations look identical here; should we
have a helper that keeps the code common between the two?
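
Just to illustrate (the helper name here is made up, not a concrete
proposal), both fold_lr_state() loops could then shrink to something
like:

	vgic_irq_handle_resampling(irq, deactivated,
				   val & GICH_LR_PENDING_BIT);

with only the GICH_LR_*/ICH_LR_* bit definitions differing between the
two callers.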

Otherwise, the functional change LGTM, so:

Reviewed-by: Oliver Upton <oupton@google.com>

> >                         }
> >
> >                         if (resample)
> > --
> > 2.30.2
> >

* Re: [PATCH] KVM: arm64: vgic: Resample HW pending state on deactivation
  2021-08-18 19:40     ` Oliver Upton
@ 2021-08-18 21:24       ` Marc Zyngier
  -1 siblings, 0 replies; 12+ messages in thread
From: Marc Zyngier @ 2021-08-18 21:24 UTC (permalink / raw)
  To: Oliver Upton
  Cc: Raghavendra Rao Ananta, linux-arm-kernel, kvmarm, kvm,
	James Morse, Suzuki K Poulose, Alexandru Elisei, Andre Przywara,
	Eric Auger, Ricardo Koller, kernel-team, stable

On Wed, 18 Aug 2021 20:40:42 +0100,
Oliver Upton <oupton@google.com> wrote:
> 
> Hey Marc,
> 
> On Wed, Aug 18, 2021 at 12:05 PM Raghavendra Rao Ananta
> <rananta@google.com> wrote:
> >
> > On Wed, Aug 18, 2021 at 11:14 AM Marc Zyngier <maz@kernel.org> wrote:
> > >
> > > When a mapped level interrupt (a timer, for example) is deactivated
> > > by the guest, the corresponding host interrupt is equally deactivated.
> > > However, the fate of the pending state still needs to be dealt
> > > with in SW.
> > >
> > > This is specially true when the interrupt was in the active+pending
> > > state in the virtual distributor at the point where the guest
> > > was entered. On exit, the pending state is potentially stale
> > > (the guest may have put the interrupt in a non-pending state).
> > >
> > > If we don't do anything, the interrupt will be spuriously injected
> > > in the guest. Although this shouldn't have any ill effect (spurious
> > > interrupts are always possible), we can improve the emulation by
> > > detecting the deactivation-while-pending case and resample the
> > > interrupt.
> > >
> > > Fixes: e40cc57bac79 ("KVM: arm/arm64: vgic: Support level-triggered mapped interrupts")
> > > Reported-by: Raghavendra Rao Ananta <rananta@google.com>
> > > Signed-off-by: Marc Zyngier <maz@kernel.org>
> > > Cc: stable@vger.kernel.org
> > > ---
> > >  arch/arm64/kvm/vgic/vgic-v2.c | 25 ++++++++++++++++++-------
> > >  arch/arm64/kvm/vgic/vgic-v3.c | 25 ++++++++++++++++++-------
> > >  2 files changed, 36 insertions(+), 14 deletions(-)
> > >
> > Tested-by: Raghavendra Rao Ananta <rananta@google.com>
> >
> > Thanks,
> > Raghavendra
> > > diff --git a/arch/arm64/kvm/vgic/vgic-v2.c b/arch/arm64/kvm/vgic/vgic-v2.c
> > > index 2c580204f1dc..3e52ea86a87f 100644
> > > --- a/arch/arm64/kvm/vgic/vgic-v2.c
> > > +++ b/arch/arm64/kvm/vgic/vgic-v2.c
> > > @@ -60,6 +60,7 @@ void vgic_v2_fold_lr_state(struct kvm_vcpu *vcpu)
> > >                 u32 val = cpuif->vgic_lr[lr];
> > >                 u32 cpuid, intid = val & GICH_LR_VIRTUALID;
> > >                 struct vgic_irq *irq;
> > > +               bool deactivated;
> > >
> > >                 /* Extract the source vCPU id from the LR */
> > >                 cpuid = val & GICH_LR_PHYSID_CPUID;
> > > @@ -75,7 +76,8 @@ void vgic_v2_fold_lr_state(struct kvm_vcpu *vcpu)
> > >
> > >                 raw_spin_lock(&irq->irq_lock);
> > >
> > > -               /* Always preserve the active bit */
> > > +               /* Always preserve the active bit, note deactivation */
> > > +               deactivated = irq->active && !(val & GICH_LR_ACTIVE_BIT);
> > >                 irq->active = !!(val & GICH_LR_ACTIVE_BIT);
> > >
> > >                 if (irq->active && vgic_irq_is_sgi(intid))
> > > @@ -105,6 +107,12 @@ void vgic_v2_fold_lr_state(struct kvm_vcpu *vcpu)
> > >                  * device state could have changed or we simply need to
> > >                  * process the still pending interrupt later.
> > >                  *
> > > +                * We could also have entered the guest with the interrupt
> > > +                * active+pending. On the next exit, we need to re-evaluate
> > > +                * the pending state, as it could otherwise result in a
> > > +                * spurious interrupt by injecting a now potentially stale
> > > +                * pending state.
> > > +                *
> > >                  * If this causes us to lower the level, we have to also clear
> > >                  * the physical active state, since we will otherwise never be
> > >                  * told when the interrupt becomes asserted again.
> > > @@ -115,12 +123,15 @@ void vgic_v2_fold_lr_state(struct kvm_vcpu *vcpu)
> > >                 if (vgic_irq_is_mapped_level(irq)) {
> > >                         bool resample = false;
> > >
> > > -                       if (val & GICH_LR_PENDING_BIT) {
> > > -                               irq->line_level = vgic_get_phys_line_level(irq);
> > > -                               resample = !irq->line_level;
> > > -                       } else if (vgic_irq_needs_resampling(irq) &&
> > > -                                  !(irq->active || irq->pending_latch)) {
> > > -                               resample = true;
> > > +                       if (unlikely(vgic_irq_needs_resampling(irq))) {
> > > +                               if (!(irq->active || irq->pending_latch))
> > > +                                       resample = true;
> > > +                       } else {
> > > +                               if ((val & GICH_LR_PENDING_BIT) ||
> > > +                                   (deactivated && irq->line_level)) {
> > > +                                       irq->line_level = vgic_get_phys_line_level(irq);
> > > +                                       resample = !irq->line_level;
> > > +                               }
> > >                         }
> > >
> > >                         if (resample)
> > > diff --git a/arch/arm64/kvm/vgic/vgic-v3.c b/arch/arm64/kvm/vgic/vgic-v3.c
> > > index 66004f61cd83..74f9aefffd5e 100644
> > > --- a/arch/arm64/kvm/vgic/vgic-v3.c
> > > +++ b/arch/arm64/kvm/vgic/vgic-v3.c
> > > @@ -46,6 +46,7 @@ void vgic_v3_fold_lr_state(struct kvm_vcpu *vcpu)
> > >                 u32 intid, cpuid;
> > >                 struct vgic_irq *irq;
> > >                 bool is_v2_sgi = false;
> > > +               bool deactivated;
> > >
> > >                 cpuid = val & GICH_LR_PHYSID_CPUID;
> > >                 cpuid >>= GICH_LR_PHYSID_CPUID_SHIFT;
> > > @@ -68,7 +69,8 @@ void vgic_v3_fold_lr_state(struct kvm_vcpu *vcpu)
> > >
> > >                 raw_spin_lock(&irq->irq_lock);
> > >
> > > -               /* Always preserve the active bit */
> > > +               /* Always preserve the active bit, note deactivation */
> > > +               deactivated = irq->active && !(val & ICH_LR_ACTIVE_BIT);
> > >                 irq->active = !!(val & ICH_LR_ACTIVE_BIT);
> > >
> > >                 if (irq->active && is_v2_sgi)
> > > @@ -98,6 +100,12 @@ void vgic_v3_fold_lr_state(struct kvm_vcpu *vcpu)
> > >                  * device state could have changed or we simply need to
> > >                  * process the still pending interrupt later.
> > >                  *
> > > +                * We could also have entered the guest with the interrupt
> > > +                * active+pending. On the next exit, we need to re-evaluate
> > > +                * the pending state, as it could otherwise result in a
> > > +                * spurious interrupt by injecting a now potentially stale
> > > +                * pending state.
> > > +                *
> > >                  * If this causes us to lower the level, we have to also clear
> > >                  * the physical active state, since we will otherwise never be
> > >                  * told when the interrupt becomes asserted again.
> > > @@ -108,12 +116,15 @@ void vgic_v3_fold_lr_state(struct kvm_vcpu *vcpu)
> > >                 if (vgic_irq_is_mapped_level(irq)) {
> > >                         bool resample = false;
> > >
> > > -                       if (val & ICH_LR_PENDING_BIT) {
> > > -                               irq->line_level = vgic_get_phys_line_level(irq);
> > > -                               resample = !irq->line_level;
> > > -                       } else if (vgic_irq_needs_resampling(irq) &&
> > > -                                  !(irq->active || irq->pending_latch)) {
> > > -                               resample = true;
> > > +                       if (unlikely(vgic_irq_needs_resampling(irq))) {
> > > +                               if (!(irq->active || irq->pending_latch))
> > > +                                       resample = true;
> > > +                       } else {
> > > +                               if ((val & ICH_LR_PENDING_BIT) ||
> > > +                                   (deactivated && irq->line_level)) {
> > > +                                       irq->line_level = vgic_get_phys_line_level(irq);
> > > +                                       resample = !irq->line_level;
> > > +                               }
> 
> The vGICv3 and vGICv2 implementations look identical here; should we
> have a helper that keeps the code common between the two?

Probably. This code used to be much simpler, but it has grown a bit
unwieldy since I added the M1 support hack. This change doesn't make
it look any better, so it is probably time for a minor refactor.

I've pushed out an updated patch, but I'll wait a bit more for
additional feedback before posting it again.
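
Something along these lines, perhaps (only a sketch of the direction,
not the updated patch itself; the helper name and where it lives are
placeholders, and the final vgic_irq_set_phys_active() call simply
mirrors what the existing "if (resample)" branch already does):

	void vgic_irq_handle_resampling(struct vgic_irq *irq,
					bool lr_deactivated, bool lr_pending)
	{
		if (!vgic_irq_is_mapped_level(irq))
			return;

		if (unlikely(vgic_irq_needs_resampling(irq))) {
			/*
			 * SW-resampled interrupts: clear the physical
			 * active state once the vIRQ is neither active
			 * nor pending.
			 */
			if (!(irq->active || irq->pending_latch))
				vgic_irq_set_phys_active(irq, false);
			return;
		}

		/*
		 * Pending in the LR, or deactivated while we thought the
		 * line was still high: re-read the physical line and drop
		 * the physical active state if the level went away.
		 */
		if (lr_pending || (lr_deactivated && irq->line_level)) {
			irq->line_level = vgic_get_phys_line_level(irq);
			if (!irq->line_level)
				vgic_irq_set_phys_active(irq, false);
		}
	}

Both fold_lr_state() loops would then only compute 'deactivated' and
pass in their respective LR pending bit.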

> 
> Otherwise, the functional change LGTM, so:
> 
> Reviewed-by: Oliver Upton <oupton@google.com>

Thanks,

	M.

-- 
Without deviation from the norm, progress is not possible.

Thread overview:
2021-08-18 18:14 [PATCH] KVM: arm64: vgic: Resample HW pending state on deactivation Marc Zyngier
2021-08-18 19:05 ` Raghavendra Rao Ananta
2021-08-18 19:40   ` Oliver Upton
2021-08-18 21:24     ` Marc Zyngier
