From: "Wu, Feng" <feng.wu@intel.com>
To: Jan Beulich <JBeulich@suse.com>
Cc: "george.dunlap@eu.citrix.com" <george.dunlap@eu.citrix.com>,
	"andrew.cooper3@citrix.com" <andrew.cooper3@citrix.com>,
	"dario.faggioli@citrix.com" <dario.faggioli@citrix.com>,
	"Wu, Feng" <feng.wu@intel.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [PATCH v4 2/6] VMX: Properly handle pi when all the assigned devices are removed
Date: Wed, 28 Sep 2016 06:50:01 +0000
Message-ID: <E959C4978C3B6342920538CF579893F01988049B@SHSMSX104.ccr.corp.intel.com>
In-Reply-To: <57E926B90200007800112666@prv-mh.provo.novell.com>



> -----Original Message-----
> From: Jan Beulich [mailto:JBeulich@suse.com]
> Sent: Monday, September 26, 2016 7:47 PM
> To: Wu, Feng <feng.wu@intel.com>
> Cc: andrew.cooper3@citrix.com; dario.faggioli@citrix.com;
> george.dunlap@eu.citrix.com; Tian, Kevin <kevin.tian@intel.com>; xen-
> devel@lists.xen.org
> Subject: Re: [PATCH v4 2/6] VMX: Properly handle pi when all the assigned
> devices are removed
> 
> >>> On 21.09.16 at 04:37, <feng.wu@intel.com> wrote:
> > +static void vmx_pi_list_cleanup(struct vcpu *v)
> > +{
> > +    vmx_pi_list_remove(v);
> > +}
> 
> Please avoid such a no-op wrapper - the caller can easily call
> vmx_pi_list_remove() directly.
> 
> > @@ -215,13 +225,28 @@ void vmx_pi_hooks_assign(struct domain *d)
> >  /* This function is called when pcidevs_lock is held */
> >  void vmx_pi_hooks_deassign(struct domain *d)
> >  {
> > +    struct vcpu *v;
> > +
> >      if ( !iommu_intpost || !has_hvm_container_domain(d) )
> >          return;
> >
> >      ASSERT(d->arch.hvm_domain.vmx.vcpu_block);
> >
> > +    /*
> > +     * Pausing the domain can make sure the vCPU is not
> > +     * running and hence calling the hooks simultaneously
> > +     * when deassigning the PI hooks and removing the vCPU
> > +     * from the blocking list.
> > +     */
> > +    domain_pause(d);
> > +
> >      d->arch.hvm_domain.vmx.vcpu_block = NULL;
> >      d->arch.hvm_domain.vmx.pi_do_resume = NULL;
> > +
> > +    for_each_vcpu ( d, v )
> > +        vmx_pi_list_cleanup(v);
> > +
> > +    domain_unpause(d);
> >  }
> 
> So you continue using pausing, and I continue to miss the argumentation
> of why you can't do without (even if previously the discussion was for
> patch 4, but it obviously applies here as well).

I think this case is slightly different. Here we need to call vmx_pi_list_cleanup()
to remove the vCPU from the blocking list if it is on the list. However, this can
race with vmx_vcpu_block(), so we might incorrectly add the vCPU back to the
blocking list while the last device is being detached from the domain. In fact,
v2 used some tricky methods to handle this, which were considered hard to
maintain, so George suggested using pause/unpause for this case. I also think
that is simple and acceptable, considering that detaching devices is not a
frequent operation.
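
Putting both points together (keeping the pause, and calling vmx_pi_list_remove()
directly instead of through the wrapper, as you suggested), the deassign path
would look roughly like this - just a sketch derived from the quoted hunk, not a
tested patch:

void vmx_pi_hooks_deassign(struct domain *d)
{
    struct vcpu *v;

    if ( !iommu_intpost || !has_hvm_container_domain(d) )
        return;

    ASSERT(d->arch.hvm_domain.vmx.vcpu_block);

    /*
     * With the domain paused, no vCPU can be inside vmx_vcpu_block(),
     * so nothing can put a vCPU back on the per-CPU blocking list
     * while the hooks are cleared and the list is drained below.
     */
    domain_pause(d);

    d->arch.hvm_domain.vmx.vcpu_block = NULL;
    d->arch.hvm_domain.vmx.pi_do_resume = NULL;

    for_each_vcpu ( d, v )
        vmx_pi_list_remove(v);      /* direct call, no wrapper */

    domain_unpause(d);
}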

Thanks,
Feng

> 
> Jan


Thread overview: 27+ messages
2016-09-21  2:37 [PATCH v4 0/6] VMX: Properly handle pi descriptor and per-cpu blocking list Feng Wu
2016-09-21  2:37 ` [PATCH v4 1/6] VMX: Statically assign two PI hooks Feng Wu
2016-09-26 11:37   ` Jan Beulich
2016-09-26 12:09     ` Jan Beulich
2016-09-28  6:48       ` Wu, Feng
2016-09-28  9:38         ` Jan Beulich
2016-10-09  8:30           ` Wu, Feng
2016-10-10  7:26             ` Jan Beulich
2016-09-21  2:37 ` [PATCH v4 2/6] VMX: Properly handle pi when all the assigned devices are removed Feng Wu
2016-09-26 11:46   ` Jan Beulich
2016-09-28  6:50     ` Wu, Feng [this message]
2016-09-28  9:52       ` Jan Beulich
2016-09-29  3:08         ` Wu, Feng
2016-09-21  2:37 ` [PATCH v4 3/6] VMX: Cleanup PI per-cpu blocking list when vcpu is destroyed Feng Wu
2016-09-21  2:37 ` [PATCH v4 4/6] VMX: Make sure PI is in proper state before install the hooks Feng Wu
2016-09-26 12:45   ` Jan Beulich
2016-09-28  6:50     ` Wu, Feng
2016-09-21  2:37 ` [PATCH v4 5/6] VT-d: No need to set irq affinity for posted format IRTE Feng Wu
2016-09-26 12:58   ` Jan Beulich
2016-09-28  6:51     ` Wu, Feng
2016-09-28  9:58       ` Jan Beulich
2016-10-09  5:35         ` Wu, Feng
2016-10-10  7:02           ` Jan Beulich
2016-10-10 22:55             ` Wu, Feng
2016-09-21  2:37 ` [PATCH v4 6/6] VMX: Fixup PI descritpor when cpu is offline Feng Wu
2016-09-26 13:03   ` Jan Beulich
2016-09-28  6:53     ` Wu, Feng
