From: Dario Faggioli <dario.faggioli@citrix.com>
To: Peng Fan <van.freenix@gmail.com>
Cc: Juergen Gross <jgross@suse.com>, Peng Fan <peng.fan@nxp.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	George Dunlap <George.Dunlap@eu.citrix.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>,
	Julien Grall <julien.grall@arm.com>,
	Jan Beulich <jbeulich@suse.com>
Subject: Re: [RFC 0/5] xen/arm: support big.little SoC
Date: Wed, 21 Sep 2016 11:45:26 +0200	[thread overview]
Message-ID: <1474451126.4393.233.camel@citrix.com> (raw)
In-Reply-To: <20160920100331.GB8084@linux-u7w5.ap.freescale.net>


On Tue, 2016-09-20 at 18:03 +0800, Peng Fan wrote:
> Hi Dario,
> On Tue, Sep 20, 2016 at 02:54:06AM +0200, Dario Faggioli wrote:
> > 
> > On Mon, 2016-09-19 at 17:01 -0700, Stefano Stabellini wrote:
> > > 
> > > On Tue, 20 Sep 2016, Dario Faggioli wrote:
> > > > 
> > > > And this would work even if/when there is only one cpupool, or
> > > > in general for domains that are in a pool that has both big and
> > > > LITTLE pcpus. Furthermore, big.LITTLE support and cpupools will
> > > > be orthogonal, just like pinning and cpupools are orthogonal
> > > > right now. I.e., once we have what I described above, nothing
> > > > prevents us from implementing per-vcpu cpupool membership, and
> > > > either creating the two (or more!) big and LITTLE pools, or
> > > > mixing things even more, for more complex and specific use
> > > > cases. :-)
> > > 
> > > I think that everybody agrees that this is the best long-term
> > > solution.
> > > 
> > Well, no, that wasn't obvious to me. If that's the case, it's
> > already something! :-)
> > 
> > > 
> > > > 
> > > > 
> > > > Actually, with the cpupool solution, if you want a guest (or
> > > > dom0) to actually have both big and LITTLE vcpus, you
> > > > necessarily have to implement per-vcpu (rather than per-domain,
> > > > as it is now) cpupool membership. I said myself it's not
> > > > impossible, but certainly it's some work... with the scheduler
> > > > solution you basically get that for free!
> > > > 
> > > > So, basically, if we use cpupools for the basics of big.LITTLE
> > > > support, there's no way out of it (apart from going on to
> > > > implement scheduling support afterwards, but that looks
> > > > backwards to me, especially when thinking about it with the
> > > > code in mind).
> > > 
> > > The question is: what is the best short-term solution we can ask
> > > Peng to implement that allows Xen to run on big.LITTLE systems
> > > today? Possibly getting us closer to the long-term solution, or
> > > at least not farther from it?
> > > 
> > So, I still have to look closely at the patches in this series.
> > But, with Credit2 in mind, if one:
> > 
> >  - takes advantage of the knowledge of which arch a pcpu belongs
> >    to inside the code that arranges the pcpus in runqueues, which
> >    means we'll end up with big runqueues and LITTLE runqueues (I
> >    re-wrote that code, I can provide pointers and help, if
> >    necessary);
> >  - tweaks the one or two instances of for_each_runqueue() [*] that
> >    there are in the code into a for_each_runqueue_of_same_class(),
> >    i.e.:
> 
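(The example after "i.e.:" was cut off in the quoting above. For
concreteness, here is a minimal sketch of what such a class-aware
iterator might look like; every name below -- rq_class,
for_each_runqueue_of_same_class() itself -- is purely illustrative,
not actual Xen code.)

    /* Illustrative sketch, not actual Xen code: tag each runqueue
     * with the class (big or LITTLE) of the pcpus it contains, and
     * filter iteration by that class. */
    enum rq_class { RQ_CLASS_BIG, RQ_CLASS_LITTLE };

    struct runqueue_data {
        /* ... existing per-runqueue fields ... */
        enum rq_class class;   /* arch class of this runqueue's pcpus */
    };

    struct sched_private {
        struct runqueue_data *rqs;
        unsigned int nr_runqs;
    };

    /* Visit every runqueue, as a for_each_runqueue() would... */
    #define for_each_runqueue(prv, rqd, i)                        \
        for ( (i) = 0; (i) < (prv)->nr_runqs &&                   \
              ((rqd) = &(prv)->rqs[(i)]); (i)++ )

    /* ...but only act on runqueues whose class matches, so that,
     * e.g., load balancing never even considers moving a vcpu from
     * a big runqueue to a LITTLE one. */
    #define for_each_runqueue_of_same_class(prv, rqd, i, cls)     \
        for_each_runqueue(prv, rqd, i)                            \
            if ( (rqd)->class == (cls) )
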
> Do you plan to add this support for big.LITTLE?
> 
> I admit this is the first time I have looked into the scheduler
> part. If I have understood wrongly, please correct me.
> 
No, I was not really planning to work on this directly myself... I was
only providing opinions and advice.

That of course may change, e.g., if we decide that it is of capital
importance for Xen to gain big.LITTLE support in a matter of
days. :-)  That's a bit unlikely at this stage anyway, though,
independently of who'll work on it, given where we stand in the Xen
4.8 release process.

In any case, I'm happy to help with any kind of advice --as I'm
already trying to do-- but also in a more concrete way, on actual
code... But I strongly think it's better if you lead the effort,
e.g., by trying to do what we agree upon, and asking immediately, as
soon as you get stuck. :-)

> There is a runqueue for each physical cpu, and there are several
> vcpus in each runqueue. The scheduler will pick a vcpu from the
> runqueue to run on the physical cpu.
> 
If you start by "just" using pinning, as I envisioned for early
support, and as George is also suggesting as a first step, there's
going to be nothing to do within Xen and on the scheduler's runqueues
at all.

And it won't even be wasted effort, because all the code for parsing
and implementing the interface in xl and libxl will be reusable when
we switch from this implicit pinning to integrating the mechanism
within the scheduler's logic.
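
For instance (assuming, purely for illustration, that on a given SoC
pcpus 0-3 are LITTLE cores and pcpus 4-7 are big cores), such a
pinning-based interface could boil down to plain per-vcpu hard
affinity in the domain config; the layout and values below are
hypothetical:

    # Hypothetical pcpu layout: 0-3 = LITTLE, 4-7 = big.
    vcpus = 4
    # Per-vcpu hard affinity: vcpus 0-1 on LITTLE pcpus,
    # vcpus 2-3 on big pcpus.
    cpus = ["0-3", "0-3", "4-7", "4-7"]

The same can be done at runtime with, e.g., "xl vcpu-pin <domain> 2
4-7", and that is exactly the xl/libxl plumbing that would carry over
to a scheduler-based implementation.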

> A vcpu is bound to a physical cpu at alloc_vcpu time, but the vcpu
> can be scheduled or migrated to a different physical cpu.
> 
> Setting cpu soft affinity and hard affinity restricts which cpus
> vcpus can be scheduled on. Then is there a need to introduce more
> runqueues?
> 
No, it's all more dynamic and --allow me-- more elegant than what you
describe... But I do understand that you've never looked at
scheduling code before, so it's ok not to have this clear yet. :-)
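
(To give an idea of what "more dynamic" means in practice: there is
no fixed vcpu-to-pcpu binding; the scheduler re-checks, at every
scheduling decision, where a vcpu may run. A simplified sketch, not
the actual Xen code -- all types and names here are toy stand-ins:

    #include <stdbool.h>

    #define NR_CPUS 8
    typedef struct { unsigned long bits; } cpumask_t;  /* toy cpumask */

    static inline bool cpumask_test(const cpumask_t *m, unsigned int cpu)
    {
        return (m->bits >> cpu) & 1UL;
    }

    struct vcpu_aff {
        cpumask_t hard;  /* pcpus the vcpu is allowed on (hard affinity) */
        cpumask_t soft;  /* pcpus the vcpu prefers (soft affinity) */
    };

    /* Asked at each scheduling decision, not once at vcpu allocation. */
    static bool vcpu_can_run_on(const struct vcpu_aff *v,
                                const cpumask_t *pool_cpus,
                                unsigned int cpu)
    {
        /* A pcpu is eligible iff it is in the domain's cpupool and in
         * the vcpu's hard affinity; soft affinity only biases which
         * of the eligible pcpus gets picked. */
        return cpumask_test(pool_cpus, cpu) && cpumask_test(&v->hard, cpu);
    }

So affinity changes take effect at the very next decision, with no
extra runqueues needed.)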

> This seems more complicated than cpupools. (:
> 
Nah, it's not... It may be a comparable amount of effort, but for a
better end result! :-)

Regards,
Dario
-- 
<<This happens because I choose it to happen!>> (Raistlin Majere)
-----------------------------------------------------------------
Dario Faggioli, Ph.D, http://about.me/dario.faggioli
Senior Software Engineer, Citrix Systems R&D Ltd., Cambridge (UK)


Thread overview: 85+ messages
2016-09-19  2:08 [RFC 0/5] xen/arm: support big.little SoC van.freenix
2016-09-19  2:08 ` [RFC 1/5] xen/arm: domain_build: setting opt_dom0_max_vcpus according to cpupool0 info van.freenix
2016-09-19  2:08 ` [RFC 2/5] xen: cpupool: introduce cpupool_arch_info van.freenix
2016-09-19  2:08 ` [RFC 3/5] xen: cpupool: add arch cpupool hook van.freenix
2016-09-19  2:08 ` [RFC 4/5] xen/arm: move vpidr from arch_domain to arch_vcpu van.freenix
2016-09-19  2:08 ` [RFC 5/5] xen/arm: cpupool: implement arch_domain_cpupool_compatible van.freenix
2016-09-19  8:09 ` [RFC 0/5] xen/arm: support big.little SoC Julien Grall
2016-09-19  8:36   ` Peng Fan
2016-09-19  8:53     ` Julien Grall
2016-09-19  9:38       ` Peng Fan
2016-09-19  9:59         ` Julien Grall
2016-09-19 13:15           ` Peng Fan
2016-09-19 20:56             ` Stefano Stabellini
2016-09-19  9:45       ` George Dunlap
2016-09-19 10:06         ` Julien Grall
2016-09-19 10:23           ` Juergen Gross
2016-09-19 17:18             ` Dario Faggioli
2016-09-19 21:03               ` Stefano Stabellini
2016-09-19 22:55                 ` Dario Faggioli
2016-09-20  0:01                   ` Stefano Stabellini
2016-09-20  0:54                     ` Dario Faggioli
2016-09-20 10:03                       ` Peng Fan
2016-09-20 10:27                         ` George Dunlap
2016-09-20 15:34                           ` Julien Grall
2016-09-20 17:24                             ` Dario Faggioli
2016-09-20 19:09                             ` Stefano Stabellini
2016-09-20 19:41                               ` Julien Grall
2016-09-20 20:17                                 ` Stefano Stabellini
2016-09-21  8:38                                   ` Peng Fan
2016-09-21  9:22                                     ` George Dunlap
2016-09-21 12:35                                       ` Peng Fan
2016-09-21 15:00                                       ` Dario Faggioli
2016-09-21 10:15                                     ` Julien Grall
2016-09-21 12:28                                       ` Peng Fan
2016-09-21 15:06                                         ` Dario Faggioli
2016-09-22  9:45                                       ` Peng Fan
2016-09-22 11:21                                         ` Julien Grall
2016-09-23  2:38                                           ` Peng Fan
2016-09-21 10:09                                   ` Julien Grall
2016-09-21 10:22                                     ` George Dunlap
2016-09-21 13:06                                       ` Julien Grall
2016-09-21 15:45                                         ` Dario Faggioli
2016-09-21 19:28                                           ` Julien Grall
2016-09-22  6:16                                             ` Peng Fan
2016-09-22  8:43                                             ` Dario Faggioli
2016-09-22 11:24                                               ` Julien Grall
2016-09-22 16:31                                                 ` Dario Faggioli
2016-09-23 13:56                                                   ` Julien Grall
2016-09-21 18:13                                         ` Stefano Stabellini
2016-09-21 19:11                                           ` Julien Grall
2016-09-21 19:21                                             ` Julien Grall
2016-09-21 23:45                                             ` Stefano Stabellini
2016-09-22  6:49                                             ` Peng Fan
2016-09-22  8:50                                               ` Dario Faggioli
2016-09-22  9:27                                                 ` Peng Fan
2016-09-22  9:51                                                   ` George Dunlap
2016-09-22 10:09                                                     ` Peng Fan
2016-09-22 10:39                                                       ` Dario Faggioli
2016-09-22 10:13                                                     ` Juergen Gross
2016-09-22  9:52                                                   ` Dario Faggioli
2016-09-22 11:29                                                   ` Julien Grall
2016-09-22 17:31                                                     ` Stefano Stabellini
2016-09-22 18:54                                                       ` Julien Grall
2016-09-23  2:14                                                         ` Peng Fan
2016-09-23  9:24                                                           ` Julien Grall
2016-09-23 10:05                                                             ` Peng Fan
2016-09-23 10:15                                                               ` Julien Grall
2016-09-23 13:36                                                                 ` Dario Faggioli
2016-09-24  1:57                                                                   ` Stefano Stabellini
2016-09-23 13:52                                                               ` Dario Faggioli
2016-09-24  1:35                                                         ` Stefano Stabellini
2016-09-23  2:03                                                     ` Peng Fan
2016-09-22 10:05                                                 ` Peng Fan
2016-09-22 16:26                                                   ` Dario Faggioli
2016-09-22 17:33                                                     ` Stefano Stabellini
2016-09-21 12:38                                     ` Peng Fan
2016-09-21  9:45                         ` Dario Faggioli [this message]
2016-09-20 10:18                     ` George Dunlap
2016-09-19 20:55             ` Stefano Stabellini
2016-09-19 10:33           ` George Dunlap
2016-09-19 13:33             ` Peng Fan
2016-09-20  0:11               ` Dario Faggioli
2016-09-20  6:18                 ` Peng Fan
2016-09-19 16:43             ` Dario Faggioli
2016-09-19 13:08       ` Peng Fan
