From: "Jan Beulich" <JBeulich@suse.com>
To: Feng Wu <feng.wu@intel.com>
Cc: kevin.tian@intel.com, keir@xen.org, george.dunlap@eu.citrix.com,
	andrew.cooper3@citrix.com, dario.faggioli@citrix.com,
	xen-devel@lists.xen.org
Subject: Re: [PATCH v2 4/4] VMX: fixup PI descritpor when cpu is offline
Date: Fri, 27 May 2016 08:56:53 -0600
Message-ID: <57487C5502000078000EF457@prv-mh.provo.novell.com>
In-Reply-To: <1464269954-8056-5-git-send-email-feng.wu@intel.com>

>>> On 26.05.16 at 15:39, <feng.wu@intel.com> wrote:
> @@ -102,9 +103,10 @@ void vmx_pi_per_cpu_init(unsigned int cpu)
>  {
>      INIT_LIST_HEAD(&per_cpu(vmx_pi_blocking, cpu).list);
>      spin_lock_init(&per_cpu(vmx_pi_blocking, cpu).lock);
> +    per_cpu(vmx_pi_blocking, cpu).down = 0;

This seems pointless - per-CPU data starts out all zero (and various
places already rely on that).
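
I.e. (just a sketch) I'd expect the function to simply remain

    void vmx_pi_per_cpu_init(unsigned int cpu)
    {
        INIT_LIST_HEAD(&per_cpu(vmx_pi_blocking, cpu).list);
        spin_lock_init(&per_cpu(vmx_pi_blocking, cpu).lock);
    }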

> @@ -122,10 +124,25 @@ static void vmx_vcpu_block(struct vcpu *v)
>           * new vCPU to the list.
>           */
>          spin_unlock_irqrestore(&v->arch.hvm_vmx.pi_hotplug_lock, flags);
> -        return;
> +        return 1;
>      }
>  
>      spin_lock(pi_blocking_list_lock);
> +    if ( unlikely(per_cpu(vmx_pi_blocking, v->processor).down) )

Is this something that can actually happen? vmx_pi_desc_fixup()
runs in stop-machine context, i.e. no CPU can actively be here (or
anywhere near the arch_vcpu_block() call sites).

> +    {
> +        /*
> +         * Being here means that v->processor is going away, and all
> +         * the vCPUs on its blocking list have been removed from it. Hence
> +         * we cannot add a new vCPU to it. Besides that, we return -1 to
> +         * prevent the vCPU from being blocked. This is needed because,
> +         * if the vCPU continued to block and we didn't put it on a
> +         * per-cpu blocking list here, it might not be woken up by the
> +         * notification event.
> +         */
> +        spin_unlock(pi_blocking_list_lock);
> +        spin_unlock_irqrestore(&v->arch.hvm_vmx.pi_hotplug_lock, flags);
> +        return 0;

The comment says you mean to return -1 here...

> +void vmx_pi_desc_fixup(int cpu)

unsigned int

> +{
> +    unsigned int new_cpu, dest;
> +    unsigned long flags;
> +    struct arch_vmx_struct *vmx, *tmp;
> +    spinlock_t *new_lock, *old_lock = &per_cpu(vmx_pi_blocking, cpu).lock;
> +    struct list_head *blocked_vcpus = &per_cpu(vmx_pi_blocking, cpu).list;
> +
> +    if ( !iommu_intpost )
> +        return;
> +
> +    spin_lock_irqsave(old_lock, flags);
> +    per_cpu(vmx_pi_blocking, cpu).down = 1;
> +
> +    list_for_each_entry_safe(vmx, tmp, blocked_vcpus, pi_blocking.list)
> +    {
> +        /*
> +         * We need to find an online cpu as the NDST of the PI descriptor; it
> +         * doesn't matter whether it is within the cpupool of the domain or
> +         * not. As long as it is online, the vCPU will be woken up once the
> +         * notification event arrives.
> +         */
> +        new_cpu = cpu;
> +restart:

Labels should be indented by at least one blank, please. Or even
better, get things done without the goto altogether.

> +        while ( 1 )
> +        {
> +            new_cpu = (new_cpu + 1) % nr_cpu_ids;
> +            if ( cpu_online(new_cpu) )
> +                break;
> +        }

Please don't open-code things like cpumask_cycle(). But with the
restart logic likely being unnecessary (see below), this would then
probably better become cpumask_any().
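
Just as an illustration (a sketch only, not tested), the whole
while/goto construct could then shrink to

        new_cpu = cpumask_any(&cpu_online_map);

or, if cycling onwards from the offlined CPU really is wanted,

        new_cpu = cpumask_cycle(cpu, &cpu_online_map);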

> +        new_lock = &per_cpu(vmx_pi_blocking, cpu).lock;

DYM new_cpu here? In fact with ...

> +        spin_lock(new_lock);

... this I can't see how you could have successfully tested this
new code path, as it should end in nothing other than a deadlock
(you already hold this very lock).
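
Presumably (just spelling out what I suspect was meant)

        new_lock = &per_cpu(vmx_pi_blocking, new_cpu).lock;
        spin_lock(new_lock);

which would also avoid trying to re-acquire the lock you're already
holding.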

> +        /*
> +         * After acquiring the blocking list lock for the new cpu, we need
> +         * to check whether new_cpu is still online.

How could it have gone offline? As mentioned, CPUs get brought
down in stop-machine context (and btw for the very reason of
avoiding complexity like this).

> +         * If '.down' is true, it means 'new_cpu' is also going to be offline,
> +         * so just go back to find another one, otherwise, there are two
> +         * possibilities:
> +         *   case 1 - 'new_cpu' is online.
> +         *   case 2 - 'new_cpu' is about to be offline, but doesn't get to
> +         *            the point where '.down' is set.
> +         * In either case above, we can just set 'new_cpu' to 'NDST' field.
> +         * For case 2 the 'NDST' field will be set to another online cpu when
> +         * we get to this function for 'new_cpu' some time later.
> +         */
> +        if ( per_cpu(vmx_pi_blocking, cpu).down )

And again I suspect you mean new_cpu here.
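
If the check is needed here at all (see above), presumably it would
want to be

        if ( per_cpu(vmx_pi_blocking, new_cpu).down )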

> --- a/xen/common/schedule.c
> +++ b/xen/common/schedule.c
> @@ -833,10 +833,8 @@ void vcpu_block(void)
>  
>      set_bit(_VPF_blocked, &v->pause_flags);
>  
> -    arch_vcpu_block(v);
> -
>      /* Check for events /after/ blocking: avoids wakeup waiting race. */
> -    if ( local_events_need_delivery() )
> +    if ( arch_vcpu_block(v) || local_events_need_delivery() )

Here as well as below I'm getting the impression that you have things
backwards: vmx_vcpu_block() returns true for the two pre-existing
return paths (in which case you previously did not enter this if()'s
body), and false on the one new return path. Plus ...

> --- a/xen/include/asm-x86/hvm/hvm.h
> +++ b/xen/include/asm-x86/hvm/hvm.h
> @@ -608,11 +608,13 @@ unsigned long hvm_cr4_guest_reserved_bits(const struct vcpu *v, bool_t restore);
>   * not been defined yet.
>   */
>  #define arch_vcpu_block(v) ({                                   \
> +    bool_t rc = 0;                                              \
>      struct vcpu *v_ = (v);                                      \
>      struct domain *d_ = v_->domain;                             \
>      if ( has_hvm_container_domain(d_) &&                        \
>           d_->arch.hvm_domain.vmx.vcpu_block )                   \
> -        d_->arch.hvm_domain.vmx.vcpu_block(v_);                 \
> +        rc = d_->arch.hvm_domain.vmx.vcpu_block(v_);            \
> +    rc;                                                         \
>  })

... rc defaulting to zero here supports my suspicion of something
having got mixed up.
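
Just to spell out the polarity I'd have expected (a sketch only, going
by the comment quoted further up, which says blocking is to be
prevented): the new "blocking list is going down" path in
vmx_vcpu_block() would need to return a non-zero value, so that
vcpu_block() takes the if() branch and doesn't actually block, e.g.

        if ( unlikely(per_cpu(vmx_pi_blocking, v->processor).down) )
        {
            ...
            spin_unlock(pi_blocking_list_lock);
            spin_unlock_irqrestore(&v->arch.hvm_vmx.pi_hotplug_lock, flags);
            /* List is being torn down - prevent the vCPU from blocking. */
            return 1;
        }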

Jan
