From: Christoffer Dall <christoffer.dall@linaro.org>
To: Yury Norov <ynorov@caviumnetworks.com>
Cc: Christoffer Dall <cdall@kernel.org>,
	kvm@vger.kernel.org, Marc Zyngier <marc.zyngier@arm.com>,
	Andre Przywara <andre.przywara@arm.com>,
	kvmarm@lists.cs.columbia.edu,
	linux-arm-kernel@lists.infradead.org
Subject: Re: [PATCH v6 2/8] KVM: arm/arm64: Factor out functionality to get vgic mmio requester_vcpu
Date: Wed, 6 Dec 2017 11:54:21 +0100	[thread overview]
Message-ID: <20171206105421.GL32397@cbox> (raw)
In-Reply-To: <20171205134608.msg6wvh7px273mud@yury-thinkpad>

On Tue, Dec 05, 2017 at 04:46:08PM +0300, Yury Norov wrote:
> On Mon, Dec 04, 2017 at 09:05:00PM +0100, Christoffer Dall wrote:
> > From: Christoffer Dall <christoffer.dall@linaro.org>
> > 
> > We are about to distinguish between userspace accesses and mmio traps
> > for a number of the mmio handlers.  When the requester vcpu is NULL, it
> > mens we are handling a userspace acccess.
> 
> Typo: means?
> 

yes

> > Factor out the functionality to get the requester vcpu into its own
> > function, mostly so we have a common place to document the semantics of
> > the return value.
> > 
> > Also take the chance to move the functionality outside of holding a
> > spinlock and instead explicitly disable and enable preemption.  This
> > supports PREEMPT_RT kernels as well.
> > 
> > Acked-by: Marc Zyngier <marc.zyngier@arm.com>
> > Reviewed-by: Andre Przywara <andre.przywara@arm.com>
> > Signed-off-by: Christoffer Dall <christoffer.dall@linaro.org>
> > ---
> >  virt/kvm/arm/vgic/vgic-mmio.c | 44 +++++++++++++++++++++++++++----------------
> >  1 file changed, 28 insertions(+), 16 deletions(-)
> > 
> > diff --git a/virt/kvm/arm/vgic/vgic-mmio.c b/virt/kvm/arm/vgic/vgic-mmio.c
> > index deb51ee16a3d..747b0a3b4784 100644
> > --- a/virt/kvm/arm/vgic/vgic-mmio.c
> > +++ b/virt/kvm/arm/vgic/vgic-mmio.c
> > @@ -122,6 +122,27 @@ unsigned long vgic_mmio_read_pending(struct kvm_vcpu *vcpu,
> >  	return value;
> >  }
> >  
> > +/*
> > + * This function will return the VCPU that performed the MMIO access and
> > + * trapped from twithin the VM, and will return NULL if this is a userspace
> 
> Typo: from within?
> 

yes

> > + * access.
> > + *
> > + * We can disable preemption locally around accessing the per-CPU variable,
> > + * and use the resolved vcpu pointer after enabling preemption again, because
> > + * even if the current thread is migrated to another CPU, reading the per-CPU
> > + * value later will give us the same value as we update the per-CPU variable
> > + * in the preempt notifier handlers.
> > + */
> > +static struct kvm_vcpu *vgic_get_mmio_requester_vcpu(void)
> > +{
> > +	struct kvm_vcpu *vcpu;
> > +
> > +	preempt_disable();
> > +	vcpu = kvm_arm_get_running_vcpu();
> > +	preempt_enable();
> > +	return vcpu;
> > +}
> > +
> >  void vgic_mmio_write_spending(struct kvm_vcpu *vcpu,
> >  			      gpa_t addr, unsigned int len,
> >  			      unsigned long val)
> > @@ -184,24 +205,10 @@ unsigned long vgic_mmio_read_active(struct kvm_vcpu *vcpu,
> >  static void vgic_mmio_change_active(struct kvm_vcpu *vcpu, struct vgic_irq *irq,
> >  				    bool new_active_state)
> >  {
> > -	struct kvm_vcpu *requester_vcpu;
> >  	unsigned long flags;
> > -	spin_lock_irqsave(&irq->irq_lock, flags);
> > +	struct kvm_vcpu *requester_vcpu = vgic_get_mmio_requester_vcpu();
> >  
> > -	/*
> > -	 * The vcpu parameter here can mean multiple things depending on how
> > -	 * this function is called; when handling a trap from the kernel it
> > -	 * depends on the GIC version, and these functions are also called as
> > -	 * part of save/restore from userspace.
> > -	 *
> > -	 * Therefore, we have to figure out the requester in a reliable way.
> > -	 *
> > -	 * When accessing VGIC state from user space, the requester_vcpu is
> > -	 * NULL, which is fine, because we guarantee that no VCPUs are running
> > -	 * when accessing VGIC state from user space so irq->vcpu->cpu is
> > -	 * always -1.
> > -	 */
> > -	requester_vcpu = kvm_arm_get_running_vcpu();
> > +	spin_lock_irqsave(&irq->irq_lock, flags);
> >  
> >  	/*
> >  	 * If this virtual IRQ was written into a list register, we
> > @@ -213,6 +220,11 @@ static void vgic_mmio_change_active(struct kvm_vcpu *vcpu, struct vgic_irq *irq,
> >  	 * vgic_change_active_prepare)  and still has to sync back this IRQ,
> >  	 * so we release and re-acquire the spin_lock to let the other thread
> >  	 * sync back the IRQ.
> > +	 *
> > +	 * When accessing VGIC state from user space, requester_vcpu is
> > +	 * NULL, which is fine, because we guarantee that no VCPUs are running
> > +	 * when accessing VGIC state from user space so irq->vcpu->cpu is
> > +	 * always -1.
> >  	 */
> >  	while (irq->vcpu && /* IRQ may have state in an LR somewhere */
> >  	       irq->vcpu != requester_vcpu && /* Current thread is not the VCPU thread */
> > -- 
> > 2.14.2

Thanks,
-Christoffer

