xen-devel.lists.xenproject.org archive mirror
From: "Jürgen Groß" <jgross@suse.com>
To: Dario Faggioli <dfaggioli@suse.com>, xen-devel@lists.xenproject.org
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Jan Beulich <jbeulich@suse.com>
Subject: Re: [PATCH 2/2] xen: credit2: limit the max number of CPUs in a runqueue
Date: Thu, 30 Apr 2020 14:52:22 +0200	[thread overview]
Message-ID: <7e039c65-4532-c3ea-8707-72a86cf48e0e@suse.com> (raw)
In-Reply-To: <d60d5b917d517b1dfa8292cfb456639c736ec173.camel@suse.com>

On 30.04.20 14:28, Dario Faggioli wrote:
> On Thu, 2020-04-30 at 09:35 +0200, Jürgen Groß wrote:
>> On 29.04.20 19:36, Dario Faggioli wrote:
>>>
>>> Therefore, let's set a limit to the max number of CPUs that can
>>> share a Credit2 runqueue. The actual value is configurable (at boot
>>> time), the default being 16. If, for instance, there are more than
>>> 16 CPUs in a socket, they'll be split among two (or more) runqueues.
>>
>> Did you think about balancing the runqueues regarding the number of
>> cpus? E.g., in case of max being 16 and having 20 cpus, to put 10 in
>> each runqueue? I know this will need more logic, as cpus are added
>> one by one, but the result would be much better IMO.
>>
> I know. The point is, not only are CPUs added one by one, but they
> can, once the system is running, be offlined/onlined or moved among
> cpupools.
> 
> Therefore, if we have 20 CPUs, even if we put 10 in each runqueue at
> boot, if the admin removes 4 CPUs that happened to be all in the same
> runqueue, we end up in an unbalanced (6 vs 10) situation again. So we'd
> indeed need full runqueue online rebalancing logic, which will probably
> end up being quite complex and I'm not sure it's worth it.
> 
> That being said, I can try to make things a bit more fair when CPUs
> come up and are added to the pool. Something along the lines of adding
> them to the runqueue with the least number of CPUs in it (among the
> suitable ones, of course).
> 
> With that, when the user removes 4 CPUs, we will have the 6 vs 10
> situation. But we would make sure that, when she adds them back, we
> will go back to 10 vs. 10, instead of, say, 6 vs 14 or something like
> that.
> 
> Was it something like this that you had in mind? And in any case, what
> you think about it?

Yes, this would be better already.
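
FWIW, a tiny standalone sketch of that idea, just to make sure we mean
the same thing (all names here are made up for illustration, not taken
from your patch):

#include <stdbool.h>
#include <stddef.h>

/* A made-up, simplified runqueue descriptor, just for this sketch. */
struct runqueue {
    unsigned int nr_cpus;   /* CPUs currently assigned to this runqueue */
    bool suitable;          /* e.g., same socket and SMT checks passed */
};

/*
 * Pick, among the suitable runqueues with spare room, the one with the
 * fewest CPUs. NULL means no runqueue fits and a new one is needed.
 */
static struct runqueue *least_loaded_rq(struct runqueue *rqs,
                                        unsigned int nr_rqs,
                                        unsigned int max_cpus_rq)
{
    struct runqueue *best = NULL;
    unsigned int i;

    for ( i = 0; i < nr_rqs; i++ )
    {
        if ( !rqs[i].suitable || rqs[i].nr_cpus >= max_cpus_rq )
            continue;
        if ( !best || rqs[i].nr_cpus < best->nr_cpus )
            best = &rqs[i];
    }

    return best;
}

Assigning each onlined CPU via such a pick would naturally converge
back to 10 vs. 10 in the example above.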

> 
>>> --- a/xen/common/sched/cpupool.c
>>> +++ b/xen/common/sched/cpupool.c
>>> @@ -37,7 +37,7 @@ static cpumask_t cpupool_locked_cpus;
>>>    
>>>    static DEFINE_SPINLOCK(cpupool_lock);
>>>    
>>> -static enum sched_gran __read_mostly opt_sched_granularity = SCHED_GRAN_cpu;
>>> +enum sched_gran __read_mostly opt_sched_granularity = SCHED_GRAN_cpu;
>>
>> Please don't use the global option value, but the per-cpupool one.
>>
> Yep, you're right. Will do.
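
For reference, a minimal sketch of what "use the per-cpupool value"
could look like (struct layout and helper name are made up for
illustration, not the actual Xen API):

enum sched_gran { SCHED_GRAN_cpu, SCHED_GRAN_core, SCHED_GRAN_socket };

struct cpupool {
    enum sched_gran gran;   /* granularity configured for this pool */
    /* ... */
};

/* Prefer the pool's own setting over the global boot-time default. */
static enum sched_gran pool_gran(const struct cpupool *c,
                                 enum sched_gran opt_default)
{
    return c ? c->gran : opt_default;
}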
> 
>>> +/* Additional checks, to avoid separating siblings in different runqueues. */
>>> +static bool
>>> +cpu_runqueue_smt_match(const struct csched2_runqueue_data *rqd, unsigned int cpu)
>>> +{
>>> +    unsigned int nr_sibl = cpumask_weight(per_cpu(cpu_sibling_mask, cpu));
>>
>> Shouldn't you mask away siblings not in the cpupool?
>>
> So, the point here is: if I have Pool-0 and Pool-1, each with 2
> runqueues, and CPU 1 is in Pool-1, when I add CPU 0 --which is CPU 1's
> sibling--
> to Pool-0, I always want to make sure that there is room for both CPUs
> 0 and 1 in the runqueue of Pool-0 where I'm putting it (CPU 0). Even if
> CPU 1 is currently in another pool.
> 
> This way if, in future, CPU 1 is removed from Pool-1 and added to
> Pool-0, I am sure it can go in the same runqueue where CPU 0 is. If I
> don't consider CPUs which currently are in another pool, we risk that
> when/if they're added to this very pool, they'll end up in a different
> runqueue.
> 
> And we don't want that.
> 
> Makes sense?

Yes.

You should add a comment in this regard.

And you should either reject the case of fewer cpus per queue than
siblings per core, or you should handle this situation. Otherwise you
won't ever find a suitable runqueue. :-)
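
Something like this standalone sketch, with made-up names (the real
code would count siblings via cpu_sibling_mask, as in the hunk above):

#include <stdbool.h>

static unsigned int max_cpus_runqueue = 16;   /* boot-time option */

/*
 * A runqueue matches only if it has room for the CPU *and* all of its
 * hyperthread siblings, including siblings currently in another
 * cpupool: if those are moved here later, they must fit in the same
 * runqueue. nr_siblings counts the CPU itself, as cpu_sibling_mask does.
 */
static bool rq_smt_match(unsigned int rq_nr_cpus, unsigned int nr_siblings)
{
    return rq_nr_cpus + nr_siblings <= max_cpus_runqueue;
}

/*
 * Handle, rather than reject, a limit smaller than the number of
 * siblings per core, so that a suitable runqueue always exists.
 */
static void sanitize_max_cpus_runqueue(unsigned int siblings_per_core)
{
    if ( max_cpus_runqueue < siblings_per_core )
        max_cpus_runqueue = siblings_per_core;
}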


Juergen



Thread overview: 21+ messages
2020-04-29 17:36 [PATCH 0/2] xen: credit2: limit the number of CPUs per runqueue Dario Faggioli
2020-04-29 17:36 ` [PATCH 1/2] xen: credit2: factor cpu to runqueue matching in a function Dario Faggioli
2020-04-29 17:36 ` [PATCH 2/2] xen: credit2: limit the max number of CPUs in a runqueue Dario Faggioli
2020-04-30  6:45   ` Jan Beulich
2020-05-26 22:00     ` Dario Faggioli
2020-05-27  4:26       ` Jürgen Groß
2020-05-28  9:32         ` Dario Faggioli
2020-05-27  6:17       ` Jan Beulich
2020-05-28 14:55         ` Dario Faggioli
2020-05-29  9:58           ` Jan Beulich
2020-05-29 10:19             ` Dario Faggioli
2020-04-30  7:35   ` Jürgen Groß
2020-04-30 12:28     ` Dario Faggioli
2020-04-30 12:52       ` Jürgen Groß [this message]
2020-04-30 14:01         ` Dario Faggioli
2020-05-26 21:18         ` Dario Faggioli
2020-05-27  6:22           ` Jan Beulich
2020-05-28  9:36             ` Dario Faggioli
2020-05-29 11:46 ` [PATCH 0/2] xen: credit2: limit the number of CPUs per runqueue Dario Faggioli
2020-05-29 15:06   ` Dario Faggioli
2020-05-29 16:15   ` George Dunlap
