From: Dario Faggioli <dario.faggioli@citrix.com>
To: Julien Grall <julien.grall@arm.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Stefano Stabellini <sstabellini@kernel.org>
Cc: Juergen Gross <jgross@suse.com>, Peng Fan <peng.fan@nxp.com>,
	Steve Capper <Steve.Capper@arm.com>,
	George Dunlap <George.Dunlap@eu.citrix.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Punit Agrawal <punit.agrawal@arm.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	Jan Beulich <jbeulich@suse.com>, Peng Fan <van.freenix@gmail.com>
Subject: Re: [RFC 0/5] xen/arm: support big.little SoC
Date: Wed, 21 Sep 2016 17:45:40 +0200	[thread overview]
Message-ID: <1474472740.4393.281.camel@citrix.com> (raw)
In-Reply-To: <6bd7d587-f9ba-c3bf-db96-46a2958d9e5b@arm.com>


On Wed, 2016-09-21 at 14:06 +0100, Julien Grall wrote:
> (CC a couple of ARM folks)
> 
Yay, thanks for this! :-)

> I have had a few discussions, and more thoughts, about big.LITTLE
> support in Xen.
> The main goal of big.LITTLE is power efficiency, achieved by moving
> tasks around and being able to idle a whole cluster. All the
> solutions suggested so far (including mine) can be replicated by
> hand (except for the VPIDR), so they are mostly an automatic way of
> doing that.
>
I'm sorry, how is this (going to be) handled in Linux? Is it that any
arbitrary task executing any arbitrary binary code can be run on both
big and LITTLE pcpus, depending on the scheduler's and energy
management's decisions?

This does not seem to match what has been said at some point in this
thread... And if it is like that, how is that even possible, if the
pcpus' ISAs are (even only slightly) different?

> This will also remove the real
> benefits of big.LITTLE, because Xen will not be able to migrate
> vCPUs across clusters for power efficiency.
> 
> If we care about power efficiency, we would have to handle
> big.LITTLE seamlessly in Xen (i.e., a guest would only see one kind
> of CPU).
>
Well, I'm a big fan of an approach that leaves the guests' scheduler
dumb about things like these (i.e., load balancing, energy efficiency,
etc), and hence puts Xen in charge. In fact, on a Xen system, it is
only Xen that has all the info necessary to make wise decisions (e.g.,
the load of the _whole_ host, the effect of any decisions on the
_whole_ host, etc).

But this case may be a LITTLE.bit ( :-PP ) different.

Anyway, I guess I'll wait for your reply to my question above before
commenting more.

> This raises
> quite a few problems, none insurmountable, similar to migration
> across two platforms with different micro-architectures (e.g.,
> processors): errata, features supported... The guest would have to
> know the union of all the errata (this is done so far via the MIDR,
> so we would need a PV way to do it), and only the intersection of
> the features would be exposed to the guest. This also means the
> scheduler would have to be modified to handle power efficiency (not
> strictly necessary at the beginning).
> 
> I agree that such a solution would require some work to implement,
> although Xen would have better control of the energy consumption of
> the platform.
> 
> So the question here, is what do we want to achieve with big.LITTLE?
> 
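Just to make the quoted "union of errata, intersection of features"
idea concrete, here's a toy sketch (Python used purely for
illustration; cluster, erratum and feature names are all made up):

```python
# Hypothetical per-cluster data for a big.LITTLE system; none of these
# names correspond to real errata or feature flags.
clusters = {
    "A53": {"errata": {"E1", "E2"}, "features": {"fp", "crypto", "lpae"}},
    "A72": {"errata": {"E3"}, "features": {"fp", "crypto"}},
}

# A guest that may run on any cluster must work around every erratum...
guest_errata = set().union(*(c["errata"] for c in clusters.values()))

# ...but can only rely on the features present everywhere.
guest_features = set.intersection(*(c["features"] for c in clusters.values()))
```

So, in this made-up example, the guest would see all of E1, E2, E3,
but only fp and crypto, not lpae.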
Just thinking out loud here. So, instead of "just", as George
suggested:

 vcpuclass=["0-1:A35","2-5:A53", "6-7:A72"]

we can allow something like the following (note that I'm tossing out
random numbers next to the 'A's):

 vcpuclass = ["0-1:A35", "2-5:A53,A17", "6-7:A72,A24,A31", "12-13:A8"]

with the following meaning:
 - vcpus 0, 1 can only run on pcpus of class A35
 - vcpus 2,3,4,5 can run on pcpus of class A53 _and_ on pcpus of class 
   A17
 - vcpus 6,7 can run on pcpus of class A72, A24, A31
 - vcpus 8,9,10,11 --since they're not mentioned, can run on pcpus of 
   any class
 - vcpus 12,13 can only run on pcpus of class A8.
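FWIW, a quick sketch (Python, and entirely hypothetical: there is no
such vcpuclass parsing anywhere, it's just to pin down the semantics)
of how such a list would expand into per-vcpu sets of allowed classes:

```python
def parse_vcpuclass(entries, nr_vcpus):
    """Return {vcpu_id: set of allowed classes}; vcpus not mentioned
    map to None, meaning "any class is fine"."""
    allowed = {v: None for v in range(nr_vcpus)}
    for entry in entries:
        vcpu_range, classes = entry.split(":")
        first, _, last = vcpu_range.partition("-")
        last = last or first  # single vcpu, e.g. "3:A53"
        for v in range(int(first), int(last) + 1):
            allowed[v] = set(classes.split(","))
    return allowed

conf = ["0-1:A35", "2-5:A53,A17", "6-7:A72,A24,A31", "12-13:A8"]
allowed = parse_vcpuclass(conf, 14)
```

Vcpus left out of the list end up with no restriction at all, which
matches the "any class" case above.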

This will set the "boundaries" for each vcpu. Then, within these
boundaries, once in Xen's scheduler, we can implement whatever
complex/magic/silly logic we want, e.g.:
 - only use a pcpu of class A53 for vcpus that have an average load 
   above 50%
 - only use a pcpu of class A31 if there are no idle pcpus of class A24
 - only use a pcpu of class A17 for a vcpu if the total system load 
   divided by the vcpu ID gives 42 as a result
 - whatever
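And a toy model (Python again; the classes, thresholds and the
idleness accounting are all made up for illustration) of what one
such in-scheduler policy could look like:

```python
def pick_class(allowed, avg_load, idle_by_class):
    """allowed: set of class names (None = any class is fine);
    avg_load: the vcpu's average load in [0.0, 1.0];
    idle_by_class: {class name: number of idle pcpus}."""
    candidates = allowed if allowed is not None else set(idle_by_class)
    big, little = "A72", "A53"  # invented "big" and "LITTLE" classes
    # Prefer a big pcpu only for busy vcpus (the "above 50%" rule above).
    if avg_load > 0.5 and big in candidates and idle_by_class.get(big, 0) > 0:
        return big
    if little in candidates and idle_by_class.get(little, 0) > 0:
        return little
    # Fall back to any allowed class that still has an idle pcpu.
    for c in sorted(candidates):
        if idle_by_class.get(c, 0) > 0:
            return c
    return None  # nothing idle within the boundaries
```

The point being: the boundaries come from the config, while policies
like this one stay entirely inside Xen and can change at will.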

This allows us to achieve both the following goals:
 - allow Xen to take smart decisions, considering the load and the 
   efficiency of the host as a whole
 - allow the guest to take smart decisions, like running lightweight 
   tasks on low power vcpus (which then Xen will run on low 
   power pcpus, at least on a properly configured system)

Of course this **requires** that, for instance, vcpu 6 be able to run
on A72, A24 and A31 just fine, i.e., it must be possible for it to
block on I/O while executing on an A72 pcpu and, later, after wakeup,
resume execution on an A24 pcpu.

If that is not possible, and such a vcpu movement, instead of just
calling schedule.c:vcpu_migrate() (or equivalent), requires some more
complex fiddling, involving local migration --or similar--
techniques, then I honestly don't think this is something that can be
solved at the scheduler level anyway... :-O

> [1] https://lwn.net/Articles/699569/
> 
I tried to have a quick look, but I don't have the time right now,
and furthermore, it's all about ARM, and I still speak too little ARM
to properly understand what's going on... :-(

Regards,
Dario
-- 
<<This happens because I choose it to happen!>> (Raistlin Majere)
-----------------------------------------------------------------
Dario Faggioli, Ph.D, http://about.me/dario.faggioli
Senior Software Engineer, Citrix Systems R&D Ltd., Cambridge (UK)
