From: Marc Zyngier <maz@kernel.org>
To: Oliver Upton <oupton@google.com>
Cc: Raghavendra Rao Ananta <rananta@google.com>,
	linux-arm-kernel@lists.infradead.org,
	kvmarm@lists.cs.columbia.edu, kvm@vger.kernel.org,
	James Morse <james.morse@arm.com>,
	Suzuki K Poulose <suzuki.poulose@arm.com>,
	Alexandru Elisei <Alexandru.Elisei@arm.com>,
	Andre Przywara <andre.przywara@arm.com>,
	Eric Auger <eric.auger@redhat.com>,
	Ricardo Koller <ricarkol@google.com>,
	kernel-team@android.com, stable@vger.kernel.org
Subject: Re: [PATCH] KVM: arm64: vgic: Resample HW pending state on deactivation
Date: Wed, 18 Aug 2021 22:24:38 +0100	[thread overview]
Message-ID: <87o89uzbs9.wl-maz@kernel.org> (raw)
In-Reply-To: <CAOQ_QsgGiGSfEq1QGfePiRF-=spCuR6XZ2QXfUsZ1zWds0ftag@mail.gmail.com>

On Wed, 18 Aug 2021 20:40:42 +0100,
Oliver Upton <oupton@google.com> wrote:
> 
> Hey Marc,
> 
> On Wed, Aug 18, 2021 at 12:05 PM Raghavendra Rao Ananta
> <rananta@google.com> wrote:
> >
> > On Wed, Aug 18, 2021 at 11:14 AM Marc Zyngier <maz@kernel.org> wrote:
> > >
> > > When a mapped level interrupt (a timer, for example) is deactivated
> > > by the guest, the corresponding host interrupt is equally deactivated.
> > > However, the fate of the pending state still needs to be dealt
> > > with in SW.
> > >
> > > This is especially true when the interrupt was in the active+pending
> > > state in the virtual distributor at the point where the guest
> > > was entered. On exit, the pending state is potentially stale
> > > (the guest may have put the interrupt in a non-pending state).
> > >
> > > If we don't do anything, the interrupt will be spuriously injected
> > > into the guest. Although this shouldn't have any ill effect (spurious
> > > interrupts are always possible), we can improve the emulation by
> > > detecting the deactivation-while-pending case and resampling the
> > > interrupt.
> > >
> > > Fixes: e40cc57bac79 ("KVM: arm/arm64: vgic: Support level-triggered mapped interrupts")
> > > Reported-by: Raghavendra Rao Ananta <rananta@google.com>
> > > Signed-off-by: Marc Zyngier <maz@kernel.org>
> > > Cc: stable@vger.kernel.org
> > > ---
> > >  arch/arm64/kvm/vgic/vgic-v2.c | 25 ++++++++++++++++++-------
> > >  arch/arm64/kvm/vgic/vgic-v3.c | 25 ++++++++++++++++++-------
> > >  2 files changed, 36 insertions(+), 14 deletions(-)
> > >
> > Tested-by: Raghavendra Rao Ananta <rananta@google.com>
> >
> > Thanks,
> > Raghavendra
> > > diff --git a/arch/arm64/kvm/vgic/vgic-v2.c b/arch/arm64/kvm/vgic/vgic-v2.c
> > > index 2c580204f1dc..3e52ea86a87f 100644
> > > --- a/arch/arm64/kvm/vgic/vgic-v2.c
> > > +++ b/arch/arm64/kvm/vgic/vgic-v2.c
> > > @@ -60,6 +60,7 @@ void vgic_v2_fold_lr_state(struct kvm_vcpu *vcpu)
> > >                 u32 val = cpuif->vgic_lr[lr];
> > >                 u32 cpuid, intid = val & GICH_LR_VIRTUALID;
> > >                 struct vgic_irq *irq;
> > > +               bool deactivated;
> > >
> > >                 /* Extract the source vCPU id from the LR */
> > >                 cpuid = val & GICH_LR_PHYSID_CPUID;
> > > @@ -75,7 +76,8 @@ void vgic_v2_fold_lr_state(struct kvm_vcpu *vcpu)
> > >
> > >                 raw_spin_lock(&irq->irq_lock);
> > >
> > > -               /* Always preserve the active bit */
> > > +               /* Always preserve the active bit, note deactivation */
> > > +               deactivated = irq->active && !(val & GICH_LR_ACTIVE_BIT);
> > >                 irq->active = !!(val & GICH_LR_ACTIVE_BIT);
> > >
> > >                 if (irq->active && vgic_irq_is_sgi(intid))
> > > @@ -105,6 +107,12 @@ void vgic_v2_fold_lr_state(struct kvm_vcpu *vcpu)
> > >                  * device state could have changed or we simply need to
> > >                  * process the still pending interrupt later.
> > >                  *
> > > +                * We could also have entered the guest with the interrupt
> > > +                * active+pending. On the next exit, we need to re-evaluate
> > > +                * the pending state, as it could otherwise result in a
> > > +                * spurious interrupt by injecting a now potentially stale
> > > +                * pending state.
> > > +                *
> > >                  * If this causes us to lower the level, we have to also clear
> > >                  * the physical active state, since we will otherwise never be
> > >                  * told when the interrupt becomes asserted again.
> > > @@ -115,12 +123,15 @@ void vgic_v2_fold_lr_state(struct kvm_vcpu *vcpu)
> > >                 if (vgic_irq_is_mapped_level(irq)) {
> > >                         bool resample = false;
> > >
> > > -                       if (val & GICH_LR_PENDING_BIT) {
> > > -                               irq->line_level = vgic_get_phys_line_level(irq);
> > > -                               resample = !irq->line_level;
> > > -                       } else if (vgic_irq_needs_resampling(irq) &&
> > > -                                  !(irq->active || irq->pending_latch)) {
> > > -                               resample = true;
> > > +                       if (unlikely(vgic_irq_needs_resampling(irq))) {
> > > +                               if (!(irq->active || irq->pending_latch))
> > > +                                       resample = true;
> > > +                       } else {
> > > +                               if ((val & GICH_LR_PENDING_BIT) ||
> > > +                                   (deactivated && irq->line_level)) {
> > > +                                       irq->line_level = vgic_get_phys_line_level(irq);
> > > +                                       resample = !irq->line_level;
> > > +                               }
> > >                         }
> > >
> > >                         if (resample)
> > > diff --git a/arch/arm64/kvm/vgic/vgic-v3.c b/arch/arm64/kvm/vgic/vgic-v3.c
> > > index 66004f61cd83..74f9aefffd5e 100644
> > > --- a/arch/arm64/kvm/vgic/vgic-v3.c
> > > +++ b/arch/arm64/kvm/vgic/vgic-v3.c
> > > @@ -46,6 +46,7 @@ void vgic_v3_fold_lr_state(struct kvm_vcpu *vcpu)
> > >                 u32 intid, cpuid;
> > >                 struct vgic_irq *irq;
> > >                 bool is_v2_sgi = false;
> > > +               bool deactivated;
> > >
> > >                 cpuid = val & GICH_LR_PHYSID_CPUID;
> > >                 cpuid >>= GICH_LR_PHYSID_CPUID_SHIFT;
> > > @@ -68,7 +69,8 @@ void vgic_v3_fold_lr_state(struct kvm_vcpu *vcpu)
> > >
> > >                 raw_spin_lock(&irq->irq_lock);
> > >
> > > -               /* Always preserve the active bit */
> > > +               /* Always preserve the active bit, note deactivation */
> > > +               deactivated = irq->active && !(val & ICH_LR_ACTIVE_BIT);
> > >                 irq->active = !!(val & ICH_LR_ACTIVE_BIT);
> > >
> > >                 if (irq->active && is_v2_sgi)
> > > @@ -98,6 +100,12 @@ void vgic_v3_fold_lr_state(struct kvm_vcpu *vcpu)
> > >                  * device state could have changed or we simply need to
> > >                  * process the still pending interrupt later.
> > >                  *
> > > +                * We could also have entered the guest with the interrupt
> > > +                * active+pending. On the next exit, we need to re-evaluate
> > > +                * the pending state, as it could otherwise result in a
> > > +                * spurious interrupt by injecting a now potentially stale
> > > +                * pending state.
> > > +                *
> > >                  * If this causes us to lower the level, we have to also clear
> > >                  * the physical active state, since we will otherwise never be
> > >                  * told when the interrupt becomes asserted again.
> > > @@ -108,12 +116,15 @@ void vgic_v3_fold_lr_state(struct kvm_vcpu *vcpu)
> > >                 if (vgic_irq_is_mapped_level(irq)) {
> > >                         bool resample = false;
> > >
> > > -                       if (val & ICH_LR_PENDING_BIT) {
> > > -                               irq->line_level = vgic_get_phys_line_level(irq);
> > > -                               resample = !irq->line_level;
> > > -                       } else if (vgic_irq_needs_resampling(irq) &&
> > > -                                  !(irq->active || irq->pending_latch)) {
> > > -                               resample = true;
> > > +                       if (unlikely(vgic_irq_needs_resampling(irq))) {
> > > +                               if (!(irq->active || irq->pending_latch))
> > > +                                       resample = true;
> > > +                       } else {
> > > +                               if ((val & ICH_LR_PENDING_BIT) ||
> > > +                                   (deactivated && irq->line_level)) {
> > > +                                       irq->line_level = vgic_get_phys_line_level(irq);
> > > +                                       resample = !irq->line_level;
> > > +                               }
> 
> The vGICv3 and vGICv2 implementations look identical here; should we
> have a helper that keeps the code common between the two?

Probably. This code used to be much simpler, but it has grown a bit
unwieldy since I added the M1 support hack. This change doesn't make it
look any better, so it is probably time for a minor refactor.
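
For illustration only, here is a minimal sketch of what such a common
helper could look like. The name and signature are hypothetical, it only
uses helpers already visible in the diff above, and it is not necessarily
what the updated patch does:

/*
 * Illustrative sketch of a GICv2/v3-agnostic resampling helper.
 * The caller extracts the per-flavour LR pending bit and the
 * deactivation flag, and clears the physical active state when
 * this returns true, as the existing per-flavour code does.
 */
static bool vgic_fold_mapped_level(struct vgic_irq *irq,
				   bool lr_pending, bool deactivated)
{
	bool resample = false;

	if (!vgic_irq_is_mapped_level(irq))
		return false;

	if (unlikely(vgic_irq_needs_resampling(irq))) {
		/* Forwarded interrupts: resample only once fully idle */
		if (!(irq->active || irq->pending_latch))
			resample = true;
	} else if (lr_pending || (deactivated && irq->line_level)) {
		/* Re-read the physical line; resample if it dropped */
		irq->line_level = vgic_get_phys_line_level(irq);
		resample = !irq->line_level;
	}

	return resample;
}

The v2 caller would pass (val & GICH_LR_PENDING_BIT) and the v3 caller
(val & ICH_LR_PENDING_BIT) for lr_pending.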

I've pushed out an updated patch, but I'll wait a bit more for
additional feedback before posting it again.

> 
> Otherwise, the functional change LGTM, so:
> 
> Reviewed-by: Oliver Upton <oupton@google.com>

Thanks,

	M.

-- 
Without deviation from the norm, progress is not possible.
