xen-devel.lists.xenproject.org archive mirror
From: Juergen Gross <jgross@suse.com>
To: Dario Faggioli <dario.faggioli@citrix.com>
Cc: George Dunlap <george.dunlap@eu.citrix.com>,
	xen-devel@lists.xenproject.org
Subject: Re: [PATCH v2 1/2] xen: sched: reorganize cpu_disable_scheduler()
Date: Mon, 20 Jul 2015 14:06:13 +0200	[thread overview]
Message-ID: <55ACE435.6040308@suse.com> (raw)
In-Reply-To: <1437393585.5036.8.camel@citrix.com>

On 07/20/2015 01:59 PM, Dario Faggioli wrote:
> On Mon, 2015-07-20 at 13:41 +0200, Juergen Gross wrote:
>> On 07/17/2015 03:35 PM, Dario Faggioli wrote:
>
>>> @@ -644,25 +673,66 @@ int cpu_disable_scheduler(unsigned int cpu)
>>>                    cpumask_setall(v->cpu_hard_affinity);
>>>                }
>>>
>>> -            if ( v->processor == cpu )
>>> +            if ( v->processor != cpu )
>>>                {
>>> -                set_bit(_VPF_migrating, &v->pause_flags);
>>> +                /* This vcpu is not on cpu, so we can move on. */
>>>                    vcpu_schedule_unlock_irqrestore(lock, flags, v);
>>> -                vcpu_sleep_nosync(v);
>>> -                vcpu_migrate(v);
>>> +                continue;
>>> +            }
>>> +
>>> +            /* If it is on cpu, we must send it away. */
>>> +            if ( unlikely(system_state == SYS_STATE_suspend) )
>>> +            {
>>> +                /*
>>> +                 * If we are doing a shutdown/suspend, it is not necessary to
>>> +                 * ask the scheduler to chime in. In fact:
>>> +                 *  * there is no reason for it: the end result we are after
>>> +                 *    is just 'all the vcpus on the boot pcpu, and no vcpu
>>> +                 *    anywhere else', so let's just go for it;
>>> +                 *  * it's wrong, for cpupools with only non-boot pcpus, as
>>> +                 *    the scheduler would always fail to send the vcpus away
>>> +                 *    from the last online (non boot) pcpu!
>>> +                 *
>>> +                 * Therefore, in the shutdown/suspend case, we just pick up
>>> +                 * one (still) online pcpu. Note that, at this stage, all
>>> +                 * domains (including dom0) have been paused already, so we
>>> +                 * do not expect any vcpu activity at all.
>>> +                 */
>>> +                cpumask_andnot(&online_affinity, &cpu_online_map,
>>> +                               cpumask_of(cpu));
>>> +                BUG_ON(cpumask_empty(&online_affinity));
>>> +                /*
>>> +                 * As boot cpu is, usually, pcpu #0, using cpumask_first()
>>> +                 * will make us converge quicker.
>>> +                 */
>>> +                new_cpu = cpumask_first(&online_affinity);
>>> +                vcpu_move_nosched(v, new_cpu);
>>
>> Shouldn't there be a vcpu_schedule_unlock_irqrestore() ?
>>
> I'm sure I put one there, as I was sure that it was there the last time
> I inspected the patch before hitting send.
>
> But I see that it's not there now, so I must have messed up when
> formatting the patch, or something like that. :-(
>
> It's really really weird, as I forgot it during development, and then
> the system was hanging, and then I added it, and that's why I'm sure I
> did have it in place... but perhaps I fat fingered some stgit command
> which made it disappear.

Or you forgot stg refresh? I just managed to do so. :-(


Juergen


Thread overview: 6+ messages
2015-07-17 13:35 [PATCH v2 0/2] xen: sched/cpupool: more fixing of (corner?) cases Dario Faggioli
2015-07-17 13:35 ` [PATCH v2 1/2] xen: sched: reorganize cpu_disable_scheduler() Dario Faggioli
2015-07-20 11:41   ` Juergen Gross
2015-07-20 11:59     ` Dario Faggioli
2015-07-20 12:06       ` Juergen Gross [this message]
2015-07-17 13:36 ` [PATCH v2 2/2] xen: sched/cpupool: properly update affinity when removing a cpu from a cpupool Dario Faggioli
