From: Jan Beulich <jbeulich@suse.com>
To: Dario Faggioli <dfaggioli@suse.com>
Cc: "Jürgen Groß" <jgross@suse.com>,
xen-devel@lists.xenproject.org, paul@xen.org,
"George Dunlap" <george.dunlap@citrix.com>,
"Andrew Cooper" <andrew.cooper3@citrix.com>
Subject: Re: [PATCH 2/2] xen: credit2: limit the max number of CPUs in a runqueue
Date: Wed, 27 May 2020 08:22:47 +0200 [thread overview]
Message-ID: <9948ac59-af64-d77d-57df-38a771a472b4@suse.com> (raw)
In-Reply-To: <8bf86f0c2bcce449cf7643aa9b98aa26ea558c2c.camel@suse.com>
On 26.05.2020 23:18, Dario Faggioli wrote:
> On Thu, 2020-04-30 at 14:52 +0200, Jürgen Groß wrote:
>> On 30.04.20 14:28, Dario Faggioli wrote:
>>> That being said, I can try to make things a bit more fair when
>>> CPUs come up and are added to the pool. Something along the lines
>>> of adding them to the runqueue with the least number of CPUs in it
>>> (among the suitable ones, of course).
>>>
>>> With that, when the user removes 4 CPUs, we will have the 6 vs 10
>>> situation. But we would make sure that, when she adds them back,
>>> we will go back to 10 vs. 10, instead of, say, 6 vs. 14 or
>>> something like that.
>>>
>>> Was it something like this that you had in mind? And in any case,
>>> what do you think about it?
>>
>> Yes, this would be better already.
>>
> So, a couple of thoughts. Doing something like what I tried to describe
> above is not too bad, and I have it pretty much ready.
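
[Editor's note: the "least populated runqueue" assignment described above
could be sketched roughly as follows. All names (struct runqueue, the
field names, pick_least_loaded) are illustrative, not the actual Xen
credit2 data structures or functions.]

```c
#include <limits.h>

struct runqueue {
    int id;
    unsigned int nr_cpus;   /* CPUs currently assigned to this runqueue */
    unsigned int socket;    /* socket this runqueue belongs to */
};

/*
 * Pick, among the runqueues suitable for `socket`, the one with the
 * fewest CPUs, so that CPUs coming (back) online spread out evenly
 * instead of piling onto whichever runqueue happens to come first.
 * Returns -1 if every suitable runqueue is already at the cap,
 * meaning a new runqueue would have to be created.
 */
static int pick_least_loaded(const struct runqueue *rqs, int nr_rqs,
                             unsigned int socket,
                             unsigned int max_cpus_per_runqueue)
{
    int best = -1;
    unsigned int best_cpus = UINT_MAX;

    for (int i = 0; i < nr_rqs; i++) {
        if (rqs[i].socket != socket)
            continue;                    /* not a suitable runqueue */
        if (rqs[i].nr_cpus >= max_cpus_per_runqueue)
            continue;                    /* already full */
        if (rqs[i].nr_cpus < best_cpus) {
            best_cpus = rqs[i].nr_cpus;
            best = i;
        }
    }
    return best;
}
```

[With this, removing 4 CPUs and adding them back converges to the
balanced 10 vs. 10 split rather than 6 vs. 14.]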
>
> With that, on an Intel system with 96 CPUs on two sockets, and
> max_cpus_per_runqueue set to 16, I got, after boot, instead of just 2
> giant runqueues with 48 CPUs in each:
>
> - 96 CPUs online, split into 6 runqueues (3 for each socket), with 16
> CPUs in each of them
>
> I can also "tweak" it in such a way that if one, for instance, boots
> with "smt=no", we get to a point where we have:
>
> - 48 CPUs online, split into 6 runqueues, with 8 CPUs in each
>
> Now, I think this is good, and actually better than the current
> situation where on such a system, we only have two very big runqueues
> (and let me repeat that introducing a per-LLC runqueue arrangement, on
> which I'm also working, won't help in this case, as NUMA node == LLC).
>
> So, the problem is that if one starts to fiddle with cpupools and CPU
> on- and offlining, things can get pretty unbalanced. E.g., I can end
> up with 2 runqueues on a socket, one with 16 CPUs and the other with
> just a couple of them.
>
> Now, this is all possible as of now (I mean, without this patch)
> already, although at a different level. In fact, I can very well remove
> all-but-1 CPUs of node 1 from Pool-0, and end up again with a runqueue
> with a lot of CPUs and another with just one.
>
> It looks like we need a way to rebalance the runqueues, which should
> be doable... But despite having spent a couple of days trying to come
> up with something decent that I could include in v2 of this series, I
> couldn't get it to work sensibly.
CPU on-/offlining may not need considering here at all, imo. I think it
would be quite reasonable, as a first step, to have a static model
where, from system topology (and perhaps a command line option), one
can predict which runqueue a particular CPU will end up in, no matter
when it gets brought online.
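
[Editor's note: the static model suggested here could be sketched as
below. Given only the topology (socket and core index) and the
max_cpus_per_runqueue cap, a CPU's runqueue is fully determined,
regardless of when it comes online. The names (struct cpu_topo,
static_runqueue) are hypothetical, not actual Xen code.]

```c
struct cpu_topo {
    unsigned int socket;
    unsigned int core;       /* core index within the socket */
};

/*
 * Deterministic CPU -> runqueue mapping derived purely from topology.
 * E.g., with 48 cores per socket and a cap of 16, cores 0-15 of
 * socket 0 always land in runqueue 0, cores 16-31 in runqueue 1,
 * cores 32-47 in runqueue 2, and socket 1 uses runqueues 3-5.
 */
static unsigned int static_runqueue(const struct cpu_topo *t,
                                    unsigned int cores_per_socket,
                                    unsigned int max_cpus_per_runqueue)
{
    /* Runqueues needed per socket, rounding up. */
    unsigned int rqs_per_socket =
        (cores_per_socket + max_cpus_per_runqueue - 1) /
        max_cpus_per_runqueue;

    return t->socket * rqs_per_socket + t->core / max_cpus_per_runqueue;
}
```

[Since the mapping ignores which CPUs are currently online, offlining
and re-onlining a CPU always returns it to the same runqueue, so no
rebalancing is ever needed.]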
Jan
Thread overview: 21+ messages
2020-04-29 17:36 [PATCH 0/2] xen: credit2: limit the number of CPUs per runqueue Dario Faggioli
2020-04-29 17:36 ` [PATCH 1/2] xen: credit2: factor cpu to runqueue matching in a function Dario Faggioli
2020-04-29 17:36 ` [PATCH 2/2] xen: credit2: limit the max number of CPUs in a runqueue Dario Faggioli
2020-04-30 6:45 ` Jan Beulich
2020-05-26 22:00 ` Dario Faggioli
2020-05-27 4:26 ` Jürgen Groß
2020-05-28 9:32 ` Dario Faggioli
2020-05-27 6:17 ` Jan Beulich
2020-05-28 14:55 ` Dario Faggioli
2020-05-29 9:58 ` Jan Beulich
2020-05-29 10:19 ` Dario Faggioli
2020-04-30 7:35 ` Jürgen Groß
2020-04-30 12:28 ` Dario Faggioli
2020-04-30 12:52 ` Jürgen Groß
2020-04-30 14:01 ` Dario Faggioli
2020-05-26 21:18 ` Dario Faggioli
2020-05-27 6:22 ` Jan Beulich [this message]
2020-05-28 9:36 ` Dario Faggioli
2020-05-29 11:46 ` [PATCH 0/2] xen: credit2: limit the number of CPUs per runqueue Dario Faggioli
2020-05-29 15:06 ` Dario Faggioli
2020-05-29 16:15 ` George Dunlap