From: Juergen Gross <jgross@suse.com>
To: Dario Faggioli <dfaggioli@suse.com>, Jan Beulich <JBeulich@suse.com>
Cc: Stefano Stabellini <sstabellini@kernel.org>,
	Wei Liu <wei.liu2@citrix.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	George Dunlap <George.Dunlap@eu.citrix.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Tim Deegan <tim@xen.org>,
	Julien Grall <julien.grall@arm.com>,
	xen-devel <xen-devel@lists.xenproject.org>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>,
	Roger Pau Monne <roger.pau@citrix.com>
Subject: Re: [PATCH RFC V2 45/45] xen/sched: add scheduling granularity enum
Date: Fri, 10 May 2019 13:22:26 +0200	[thread overview]
Message-ID: <2537b1f3-5047-56b2-5884-581ea71ccf58@suse.com> (raw)

On 10/05/2019 12:29, Dario Faggioli wrote:
> On Fri, 2019-05-10 at 11:00 +0200, Juergen Gross wrote:
>> On 10/05/2019 10:53, Jan Beulich wrote:
>>>>>> On 08.05.19 at 16:36, <jgross@suse.com> wrote:
>>>>
>>>> With sched-gran=core or sched-gran=socket offlining a single cpu
>>>> results in moving the complete core or socket to cpupool_free_cpus
>>>> and then offlining from there. Only complete cores/sockets can be
>>>> moved to any cpupool. When onlining a cpu it is added to
>>>> cpupool_free_cpus and if the core/socket is completely online it
>>>> will automatically be added to Pool-0 (as today any single onlined
>>>> cpu).
>>>
>>> Well, this is in line with what was discussed on the call yesterday,
>>> so I think it's an acceptable initial state to end up in. Albeit,
>>> just for completeness, I'm not convinced there's no use for
>>> "smt-{dis,en}able" anymore with core-aware scheduling implemented
>>> just in Xen - it may still be considered useful as long as we don't
>>> expose proper topology to guests, for them to be able to do
>>> something similar.
>>
>> As the extra complexity for supporting that is significant I'd like
>> to at least postpone it. And with the (later) introduction of
>> per-cpupool smt on/off I guess this would be even less important.
>>
> I agree.
>
> Isn't it the case that (but note that I'm just thinking out loud here),
> if we make smt= and sched-gran= per-cpupool, the user gains the chance
> to use both, if he/she wants (e.g., for testing)?

Yes.

> If yes, is such a thing valuable enough that it'd make sense to work
> on that, as a first thing, I mean?

My planned roadmap is:

1. this series
2. scheduler clean-up
3. per-cpupool smt and granularity

> We'd still forbid moving things from pools with different
> configuration, at least at the beginning, of course.

Right, allowing that would be 4.


Juergen

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel
next reply	other threads:[~2019-05-10 11:22 UTC|newest]

Thread overview: 16+ messages / expand[flat|nested]  mbox.gz  Atom feed  top
2019-05-10 11:22 Juergen Gross [this message]
2019-05-10 11:22 ` [Xen-devel] [PATCH RFC V2 45/45] xen/sched: add scheduling granularity enum Juergen Gross
  -- strict thread matches above, loose matches on Subject: below --
2019-05-06 13:29 Juergen Gross
2019-05-06  6:55 [PATCH RFC V2 00/45] xen: add core scheduling support Juergen Gross
2019-05-06  6:56 ` [PATCH RFC V2 45/45] xen/sched: add scheduling granularity enum Juergen Gross
2019-05-06  8:57   ` Jan Beulich
     [not found]   ` <5CCFF6F1020000780022C12B@suse.com>
     [not found]     ` <ac57c420*a72e*7570*db8f*27e4693c2755@suse.com>
2019-05-06  9:23       ` Juergen Gross
2019-05-06 10:01         ` Jan Beulich
2019-05-08 14:36           ` Juergen Gross
2019-05-10  8:53             ` Jan Beulich
     [not found]             ` <5CD53C1C020000780022D706@suse.com>
2019-05-10  9:00               ` Juergen Gross
2019-05-10 10:29                 ` Dario Faggioli
2019-05-10 11:17                 ` Jan Beulich
     [not found]   ` <5CD005E7020000780022C1B5@suse.com>
2019-05-06 10:20     ` Juergen Gross
2019-05-06 11:58       ` Jan Beulich
     [not found]       ` <5CD02161020000780022C257@suse.com>
2019-05-06 12:23         ` Juergen Gross
2019-05-06 13:14           ` Jan Beulich
     [not found] <20190506065644.7415****1****jgross@suse.com>
     [not found] <20190506065644.7415*1*jgross@suse.com>