From: Dario Faggioli <dario.faggioli@citrix.com>
To: Juergen Gross <jgross@suse.com>,
	Julien Grall <julien.grall@arm.com>,
	George Dunlap <George.Dunlap@eu.citrix.com>
Cc: Peng Fan <peng.fan@nxp.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	Jan Beulich <jbeulich@suse.com>, Peng Fan <van.freenix@gmail.com>
Subject: Re: [RFC 0/5] xen/arm: support big.little SoC
Date: Mon, 19 Sep 2016 19:18:02 +0200
Message-ID: <1474305482.4393.42.camel@citrix.com>
In-Reply-To: <97d77bdb-2f4e-e89a-95b9-8aacb56eebc0@suse.com>



On Mon, 2016-09-19 at 12:23 +0200, Juergen Gross wrote:
> On 19/09/16 12:06, Julien Grall wrote:
> > On 19/09/2016 11:45, George Dunlap wrote:
> > > But expanding the schedulers to know about different classes of
> > > cpus, and having vcpus specified as running only on specific
> > > types of pcpus, seems like a more flexible approach.
> > 
> > So, if I understand correctly, you would not recommend to extend
> > the number of CPU pool per domain, correct?
> 
> Before deciding in which direction to go (multiple cpupools,
> sub-pools, kind of implicit cpu pinning)
>
You mention "implicit pinning" here, and I'd like to stress this,
because basically no one (else) in the conversation seem to have
considered it. In fact, it may not necessarily be the best long term
solution, but doing something based on pinning is, IMO, a very
convenient first step (and may well become one of the 'modes' available
to the user for taking advantage of big.LITTLE.

So, if cpus 0-3 are big and cpus 4,5 are LITTLE, we can (see the
sketch below):
 - for domain X, which wants to run only on big cores, pin all its
   vcpus to pcpus 0-3
 - for domain Y, which wants to run only on LITTLE cores, pin all its
   vcpus to pcpus 4,5
 - for domain Z, which wants its vcpus 0,1 to run on big cores, and
   its vcpus 2,3 to run on LITTLE cores, pin vcpus 0,1 to pcpus 0-3,
   and pin vcpus 2,3 to pcpus 4,5
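
Just to make this concrete, here's a rough sketch for domain Z
(assuming the cpu numbering above, the per-vcpu list form of "cpus="
in xl.cfg, and plain `xl vcpu-pin'; do double check the exact syntax
against the xl and xl.cfg man pages):

  # In Z's config file: vcpus 0,1 hard-pinned to the big cores,
  # vcpus 2,3 to the LITTLE ones (list element i applies to vcpu i)
  vcpus = 4
  cpus  = [ "0-3", "0-3", "4-5", "4-5" ]

  # Or, equivalently, at runtime:
  xl vcpu-pin Z 0 0-3
  xl vcpu-pin Z 1 0-3
  xl vcpu-pin Z 2 4-5
  xl vcpu-pin Z 3 4-5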

Setting things up like this, even automatically, either in the
hypervisor or in the toolstack, is basically already possible (with
all the good and bad aspects of pinning, of course).

Then, sure (as I said when replying to George), we may want things to
be more flexible, and we also probably want to be on the safe side --if
ever some component manages to undo our automatic pinning-- wrt the
scheduler not picking up work for the wrong architecture... But still,
I'm a bit surprised this did not come up... Julien, Peng, is that
because you think this is not doable for some reason I'm missing?

> I think we should think about the
> implications regarding today's interfaces:
> 
I totally agree. (At least) These three things should be very clear,
before starting to implement anything:
 - what is the behavior that we want to achieve, from the point of 
   view of both the hypervisor and the guests
 - what will be the interface
 - how this new interface will map and will interact with existing 
   interfaces

> - Do we want to be able to use different schedulers for big/little
>   (this would mean some cpupool related solution)? I'd prefer to
>   have only one scheduler type for each domain. :-)
> 
Well, this is actually, IMO, a nice point in favour of supporting a
split-cpupool solution, from a behavioral perspective. In fact, I
think I can envision scenarios and reasons for having different
schedulers between big cpus and LITTLE cpus (or the same scheduler
with different parameters).
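
For instance, something along these lines should already be doable
today (pool and scheduler names are only examples, and the syntax is
from memory, so treat it as a sketch and check the xl and
xlcpupool.cfg man pages):

  # Move the LITTLE cores out of Pool-0 and give them their own pool,
  # running a different scheduler (or the same one, tuned differently):
  xl cpupool-cpu-remove Pool-0 4
  xl cpupool-cpu-remove Pool-0 5
  xl cpupool-create name=\"Pool-little\" sched=\"credit2\" cpus=\"4,5\"

  # A domain then lives in one pool or the other:
  xl cpupool-migrate Y Pool-little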

But then, yes, if we then want a domain to have both big and LITTLE
cpus, we'd need to allow a domain to live in more than one cpupool at a
time, which means a domain will have multiple schedulers.

I don't think this is impossible... almost all the scheduling happens
at the vcpu level already. The biggest challenge is probably the
interface. _HOWEVER_, I think this is something that can well come
later, like in phase 2 or 3, as an enhancement/possibility, rather
than being the foundation of big.LITTLE support in Xen.

> - What about scheduling parameters like weight and cap? How would
>   those apply (answer probably influencing pinning solution).
>   Remember that especially the downsides of pinning led to the
>   introduction of cpupools.
> 
Very important bit indeed. FWIW, there's already a scheduler that
supports per-vcpu parameters, so some glue code (or at least code from
which to take inspiration) is there already. And scheduling happens at
the vcpu level anyway, i.e., it would not be too hard to make it
possible to pass down to Xen, say, per-vcpu weights. Then, at, e.g.,
the xl level, you'd specify one set of parameters for the big cpus and
another set for the LITTLE cpus, and either xl itself or libxl would
do the mapping and prepare the per-vcpu values.
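
Something like the following, for example (completely hypothetical
syntax: nothing of the sort exists in xl.cfg today, and "weight_big"
and "weight_little" are made-up keys, just to illustrate the idea):

  # Hypothetical domain config: xl/libxl would turn the two per-class
  # values into per-vcpu weights, based on which pcpus each vcpu is
  # assigned to, and pass those down to Xen.
  vcpus = 4
  cpus  = [ "0-3", "0-3", "4-5", "4-5" ]
  weight_big    = 512   # for vcpus 0,1 (on the big cores)
  weight_little = 256   # for vcpus 2,3 (on the LITTLE cores)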

Again, this is just to say that the "cpupool way" does not look
impossible, and may be interesting. However, although I'd like to
think more (and see more thoughts) about designs and possibilities, I
still think it should be neither the only nor the first mode that we
implement.

> - Is big.LITTLE to be expected to be combined with NUMA?
> 
> - Do we need to support live migration for domains containing both
>   types of cpus?
> 
Interesting points too.

Regards,
Dario
-- 
<<This happens because I choose it to happen!>> (Raistlin Majere)
-----------------------------------------------------------------
Dario Faggioli, Ph.D, http://about.me/dario.faggioli
Senior Software Engineer, Citrix Systems R&D Ltd., Cambridge (UK)
