From: "Jan Beulich" <JBeulich@suse.com>
To: Feng Wu <feng.wu@intel.com>
Cc: kevin.tian@intel.com, keir@xen.org, george.dunlap@eu.citrix.com,
	andrew.cooper3@citrix.com, dario.faggioli@citrix.com,
	xen-devel@lists.xen.org
Subject: Re: [PATCH v2 1/4] VMX: Properly handle pi when all the assigned devices are removed
Date: Fri, 27 May 2016 07:43:28 -0600
Message-ID: <57486B2002000078000EF3D0@prv-mh.provo.novell.com>
In-Reply-To: <1464269954-8056-2-git-send-email-feng.wu@intel.com>

>>> On 26.05.16 at 15:39, <feng.wu@intel.com> wrote:
> --- a/xen/arch/x86/hvm/vmx/vmx.c
> +++ b/xen/arch/x86/hvm/vmx/vmx.c
> @@ -113,7 +113,19 @@ static void vmx_vcpu_block(struct vcpu *v)
>  		&per_cpu(vmx_pi_blocking, v->processor).lock;
>      struct pi_desc *pi_desc = &v->arch.hvm_vmx.pi_desc;
>  
> -    spin_lock_irqsave(pi_blocking_list_lock, flags);
> +    spin_lock_irqsave(&v->arch.hvm_vmx.pi_hotplug_lock, flags);
> +    if ( unlikely(v->arch.hvm_vmx.pi_blocking_cleaned_up) )
> +    {
> +        /*
> +         * The vcpu is to be destroyed and it has already been removed
> +         * from the per-CPU list if it is blocking, we shouldn't add
> +         * new vCPU to the list.
> +         */

This comment appears to be stale wrt the implementation further
down. But see there for whether it's the comment or the code that
needs to change.

> +        spin_unlock_irqrestore(&v->arch.hvm_vmx.pi_hotplug_lock, flags);
> +        return;
> +    }
> +
> +    spin_lock(pi_blocking_list_lock);

There doesn't appear to be an active problem with this, but
taking a global lock inside a per-vCPU one feels bad. Both here
and in vmx_pi_blocking_cleanup() I can't see why the global
lock alone won't do - you acquire it anyway.
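
I.e. something along these lines would seem to do (a sketch only,
re-using your flag for the purpose; whether this stays race free
against the cleanup path is of course what would need checking):

    spin_lock_irqsave(pi_blocking_list_lock, flags);
    if ( unlikely(v->arch.hvm_vmx.pi_blocking_cleaned_up) )
    {
        /* Cleanup already ran for this vCPU - don't put it on the list. */
        spin_unlock_irqrestore(pi_blocking_list_lock, flags);
        return;
    }

with vmx_pi_blocking_cleanup() then setting the flag and doing the
list_del() under just that same list lock.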

> +static void vmx_pi_blocking_cleanup(struct vcpu *v)
> +{
> +    unsigned long flags;
> +    spinlock_t *pi_blocking_list_lock;
> +
> +    if ( !iommu_intpost )
> +        return;

If the function is only ever going to be called from the body of a
loop over all vCPU-s, please make that loop conditional upon
iommu_intpost instead of checking it (and bailing) here for every
vCPU.
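
I.e. (sketch only) the call site would become

    if ( iommu_intpost )
        for_each_vcpu ( d, v )
            vmx_pi_blocking_cleanup(v);

though since vmx_pi_hooks_deassign() already returns early when
!iommu_intpost, simply dropping the check from this function would
seem to suffice.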

> +    spin_lock_irqsave(&v->arch.hvm_vmx.pi_hotplug_lock, flags);
> +    v->arch.hvm_vmx.pi_blocking_cleaned_up = 1;
> +
> +    pi_blocking_list_lock = v->arch.hvm_vmx.pi_blocking.lock;
> +    if (pi_blocking_list_lock == NULL)

Coding style.
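I.e. the blanks immediately inside the parentheses are missing; this
wants to be

    if ( pi_blocking_list_lock == NULL )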

> +    {
> +        spin_unlock_irqrestore(&v->arch.hvm_vmx.pi_hotplug_lock, flags);
> +        return;
> +    }
> +
> +    spin_lock(pi_blocking_list_lock);
> +    if ( v->arch.hvm_vmx.pi_blocking.lock != NULL )
> +    {
> +        ASSERT(v->arch.hvm_vmx.pi_blocking.lock == pi_blocking_list_lock);
> +        list_del(&v->arch.hvm_vmx.pi_blocking.list);
> +        v->arch.hvm_vmx.pi_blocking.lock = NULL;
> +    }
> +    spin_unlock(pi_blocking_list_lock);
> +    spin_unlock_irqrestore(&v->arch.hvm_vmx.pi_hotplug_lock, flags);
> +}
> +
>  /* This function is called when pcidevs_lock is held */
>  void vmx_pi_hooks_assign(struct domain *d)
>  {
> +    struct vcpu *v;
> +
>      if ( !iommu_intpost || !has_hvm_container_domain(d) )
>          return;
>  
> -    ASSERT(!d->arch.hvm_domain.vmx.vcpu_block);
> +    for_each_vcpu ( d, v )
> +        v->arch.hvm_vmx.pi_blocking_cleaned_up = 0;
>  
> -    d->arch.hvm_domain.vmx.vcpu_block = vmx_vcpu_block;
> -    d->arch.hvm_domain.vmx.pi_switch_from = vmx_pi_switch_from;
> -    d->arch.hvm_domain.vmx.pi_switch_to = vmx_pi_switch_to;
> -    d->arch.hvm_domain.vmx.pi_do_resume = vmx_pi_do_resume;
> +    if ( !d->arch.hvm_domain.vmx.vcpu_block )
> +    {
> +        d->arch.hvm_domain.vmx.vcpu_block = vmx_vcpu_block;
> +        d->arch.hvm_domain.vmx.pi_switch_from = vmx_pi_switch_from;
> +        d->arch.hvm_domain.vmx.pi_switch_to = vmx_pi_switch_to;
> +        d->arch.hvm_domain.vmx.pi_do_resume = vmx_pi_do_resume;
> +    }
>  }
>  
>  /* This function is called when pcidevs_lock is held */
>  void vmx_pi_hooks_deassign(struct domain *d)
>  {
> +    struct vcpu *v;
> +
>      if ( !iommu_intpost || !has_hvm_container_domain(d) )
>          return;
>  
> -    ASSERT(d->arch.hvm_domain.vmx.vcpu_block);
> -
> -    d->arch.hvm_domain.vmx.vcpu_block = NULL;
> -    d->arch.hvm_domain.vmx.pi_switch_from = NULL;
> -    d->arch.hvm_domain.vmx.pi_switch_to = NULL;
> -    d->arch.hvm_domain.vmx.pi_do_resume = NULL;
> +    for_each_vcpu ( d, v )
> +        vmx_pi_blocking_cleanup(v);

If you keep the hooks in place, why is it relevant to do the cleanup
here instead of just at domain death? As suggested before, if you
did it there, you'd likely get away without adding the new per-vCPU
flag.

Furthermore, if things remain the way they are now, a per-domain
flag would suffice.

And finally - do you really need to retain all four hooks? If not,
one that gets zapped here could be used in place of such a
per-domain flag.
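
E.g. (a sketch only - which of the four, if any, can safely be zapped
again on de-assign is of course the question to settle first; I'm
using pi_do_resume merely as an example):

    /* vmx_pi_hooks_deassign(): */
    d->arch.hvm_domain.vmx.pi_do_resume = NULL;

    /* vmx_vcpu_block(), under the blocking list lock: */
    if ( !v->domain->arch.hvm_domain.vmx.pi_do_resume )
    {
        /* PI was de-assigned for this domain - don't queue the vCPU. */
        spin_unlock_irqrestore(pi_blocking_list_lock, flags);
        return;
    }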

> --- a/xen/include/asm-x86/hvm/vmx/vmcs.h
> +++ b/xen/include/asm-x86/hvm/vmx/vmcs.h
> @@ -231,6 +231,9 @@ struct arch_vmx_struct {
>       * pCPU and wakeup the related vCPU.
>       */
>      struct pi_blocking_vcpu pi_blocking;
> +
> +    spinlock_t            pi_hotplug_lock;

Having gone through all of the patch, I have trouble seeing the
hotplug nature of this (i.e. why the new lock is named
pi_hotplug_lock).

> +    bool_t                pi_blocking_cleaned_up;

If this flag is to remain, it should move into its designated structure
(right above your addition).
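
I.e. (a sketch, assuming the structure currently has just the list
and lock members used earlier in the patch):

    struct pi_blocking_vcpu {
        struct list_head     list;
        spinlock_t           *lock;
        bool_t               cleaned_up;
    };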

Jan

