From: "Wu, Feng" <feng.wu@intel.com>
To: Dario Faggioli <dario.faggioli@citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Cc: "Tian, Kevin" <kevin.tian@intel.com>,
	"keir@xen.org" <keir@xen.org>,
	"george.dunlap@eu.citrix.com" <george.dunlap@eu.citrix.com>,
	"andrew.cooper3@citrix.com" <andrew.cooper3@citrix.com>,
	"jbeulich@suse.com" <jbeulich@suse.com>,
	"Wu, Feng" <feng.wu@intel.com>
Subject: Re: [PATCH v2 0/4] VMX: Properly handle pi descriptor and per-cpu blocking list
Date: Fri, 24 Jun 2016 06:33:07 +0000	[thread overview]
Message-ID: <E959C4978C3B6342920538CF579893F01972E2A2@SHSMSX103.ccr.corp.intel.com> (raw)
In-Reply-To: <1466631187.18398.55.camel@citrix.com>



> -----Original Message-----
> From: Dario Faggioli [mailto:dario.faggioli@citrix.com]
> Sent: Thursday, June 23, 2016 5:33 AM
> To: Wu, Feng <feng.wu@intel.com>; xen-devel@lists.xen.org
> Cc: keir@xen.org; Tian, Kevin <kevin.tian@intel.com>; jbeulich@suse.com;
> andrew.cooper3@citrix.com; george.dunlap@eu.citrix.com;
> konrad.wilk@oracle.com
> Subject: Re: [PATCH v2 0/4] VMX: Properly handle pi descriptor and per-cpu
> blocking list
> 
> On Tue, 2016-05-31 at 10:19 +0000, Wu, Feng wrote:
> > > -----Original Message-----
> > > From: Dario Faggioli [mailto:dario.faggioli@citrix.com]
> > >
> > > So, if you want to try explaining a bit what your line of
> > > reasoning was when doing things this way, that would be
> > > helpful (at least to me).
> > 'pi_hotplug_lock' is trying to protect the following scenario:
> > vmx_pi_blocking_cleanup() gets called either when the last assigned
> > device is detached or when the vCPU is about to be destroyed, while
> > vmx_vcpu_block() is running at the same time. We need to make sure
> > that, after all the blocking vCPUs have been cleaned up, we do not
> > add new vCPUs to the per-cpu blocking list. That is why I introduced
> > 'pi_blocking_cleaned_up' for each vCPU: setting it to 1 means that
> > we cannot add the vCPU to the blocking list in vmx_vcpu_block().
> >
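To illustrate, here is a much-simplified toy model of the guard described above (the names follow the patch, but the struct, the locking, and the list handling are all stand-ins, not the real Xen code):

```c
#include <assert.h>
#include <stdbool.h>

/* Simplified per-vCPU PI state; the real fields live elsewhere in Xen. */
struct vcpu_pi {
    bool pi_blocking_cleaned_up;  /* set once cleanup has run */
    bool on_blocking_list;        /* vCPU sits on a per-cpu blocking list */
};

/* Cleanup path: after this returns, the vCPU must never be added to a
 * per-cpu blocking list again. */
static void vmx_pi_blocking_cleanup(struct vcpu_pi *v)
{
    v->pi_blocking_cleaned_up = true;
    v->on_blocking_list = false;  /* drop it from the list if present */
}

/* Block path: refuse to queue a vCPU that has already been cleaned up. */
static bool vmx_vcpu_block(struct vcpu_pi *v)
{
    if (v->pi_blocking_cleaned_up)
        return false;             /* caller must not block via PI */
    v->on_blocking_list = true;
    return true;
}
```

The point of the flag is purely the ordering: once cleanup has run, any concurrent or later call of the block path becomes a no-op.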
> By "the vCPU is going to be destroyed", to what code path do you refer?
> Because, for instance, there's this:
> 
>   domain_kill() --> domain_destroy() --> complete_domain_destroy() --
>   --> vcpu_destroy() --> hvm_vcpu_destroy()

Yes, this is the case I was thinking of.

> 
> in which case, the vCPUs are not running --and hence can't block--
> during their own destruction.

Thanks for the clarification. It's good to know that I don't need to
consider this case. :)

> 
> I take it that you've found a path where that does not hold, and which
> hence requires this kind of protection?
> 
> For the other race (last device being unassigned), I'll look more into
> it, but, in general, I share George's and Jan's views that we need
> simpler, more consistent and easier to maintain solutions.

Thanks for your time and looking forward to your comments!

> 
> > For the other flag, 'down', it is used in the following scenario:
> > when a pCPU is going away and meanwhile vmx_vcpu_block() is
> > called, we should not put the vCPU on a per-cpu blocking list
> > that is itself going away.
> >
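Again purely as a sketch of the intended behavior (hypothetical names; the real state is the per-cpu `vmx_pi_blocking` structure with a spinlock-protected list):

```c
#include <assert.h>
#include <stdbool.h>

/* Simplified per-pCPU blocking-list state. 'down' marks a pCPU that is
 * being offlined and whose list must not accept new entries. */
struct pi_blocking_list {
    bool down;
    int  nr_blocked;  /* stand-in for the real linked list of vCPUs */
};

/* Offline path: mark the list dead and flush anything already queued
 * (in reality the queued vCPUs would be woken or migrated). */
static void pi_desc_cpu_offline(struct pi_blocking_list *l)
{
    l->down = true;
    l->nr_blocked = 0;
}

/* Block path: refuse to queue onto a list that is going away. */
static bool pi_list_add(struct pi_blocking_list *l)
{
    if (l->down)
        return false;
    l->nr_blocked++;
    return true;
}
```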
> But, in this case, as George basically said already, if a pCPU is being
> destroyed, there should be no vCPU running on it, and hence no vCPU
> that, if blocking, would need to be added to that pCPU's blocking list.
> 
> > > For instance, now arch_vcpu_block() returns a value and, as you say
> > > yourself in a comment, that is for (potentially) preventing a vCPU
> > > from blocking. So the behavior of schedule.c:vcpu_block() now
> > > depends on your new flag,
> > > per_cpu(vmx_pi_blocking, v->processor).down. Again, I'll look
> > > more closely, but this has little chance of being right (at least
> > > logically).
> > Just as vcpu_block() checks for pending events before actually
> > blocking the vCPU, here we simply introduce another case in which
> > the vCPU cannot be blocked.
> > Why do you think this is problematic?
> >
> Well, but, right now, it's like this:
>  - the vCPU should block, waiting for an event
>  - it turns out the event has already arrived
>  - we can avoid blocking
> 
> In your case, AFAIUI, it's:
>  - the vCPU should block, waiting for an event
>  - the event has _not_ arrived, so we indeed should block
>  - we do *not* block, due to some other reason
> 
> That does not look right to me... what about the fact that we wanted to
> actually wait for the event? :-O

I understand your point. As I understand it, vcpu_block() currently
serves the guest's HLT operation, which means the guest has nothing
to do. In that case, even if we return without blocking, the function
does not seem to be broken.
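To make the two sequences above concrete, here is a toy model of the check order being discussed (simplified: the real vcpu_block() tests pending events via local_events_need_delivery(), and the "veto" stands in for the arch_vcpu_block() return value):

```c
#include <assert.h>
#include <stdbool.h>

enum block_result { BLOCKED, EVENT_PENDING, VETOED };

/* Toy model of schedule.c:vcpu_block(): check for pending events first,
 * then give the architecture a chance to veto blocking. */
static enum block_result vcpu_block(bool event_pending, bool arch_veto)
{
    if (event_pending)
        return EVENT_PENDING;  /* safe: the awaited event is already here */
    if (arch_veto)
        return VETOED;  /* the contested case: no event, yet no block */
    return BLOCKED;     /* normal path: wait for the event */
}
```

Dario's objection is about the VETOED outcome: the vCPU wanted to wait for an event that has not arrived, yet it does not block; my point is that for HLT this merely returns control to a guest that had nothing to do anyway.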

Thanks,
Feng

> 
> Regards,
> Dario
> --
> <<This happens because I choose it to happen!>> (Raistlin Majere)
> -----------------------------------------------------------------
> Dario Faggioli, Ph.D, http://about.me/dario.faggioli
> Senior Software Engineer, Citrix Systems R&D Ltd., Cambridge (UK)

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

