From: Dario Faggioli
Subject: Re: [RFC 0/5] xen/arm: support big.little SoC
Date: Wed, 21 Sep 2016 11:45:26 +0200
To: Peng Fan
Cc: Juergen Gross, Peng Fan, Stefano Stabellini, George Dunlap,
 Andrew Cooper, xen-devel@lists.xen.org, Julien Grall, Jan Beulich

On Tue, 2016-09-20 at 18:03 +0800, Peng Fan wrote:
> Hi Dario,
>
> On Tue, Sep 20, 2016 at 02:54:06AM +0200, Dario Faggioli wrote:
> > On Mon, 2016-09-19 at 17:01 -0700, Stefano Stabellini wrote:
> > > On Tue, 20 Sep 2016, Dario Faggioli wrote:
> > > > And this would work even if/when there is only one cpupool,
> > > > or in general for domains that are in a pool that has both
> > > > big and LITTLE pcpus. Furthermore, big.LITTLE support and
> > > > cpupools would be orthogonal, just like pinning and cpupools
> > > > are orthogonal right now. I.e., once we have what I described
> > > > above, nothing prevents us from implementing per-vcpu cpupool
> > > > membership, and either creating the two (or more!) big and
> > > > LITTLE pools, or mixing things even further, for more complex
> > > > and specific use cases. :-)
> > >
> > > I think that everybody agrees that this is the best long term
> > > solution.
> > >
> > Well, no, that wasn't obvious to me. If that's the case, it's
> > already something! :-)
> >
> > > > Actually, with the cpupool solution, if you want a guest (or
> > > > dom0) to actually have both big and LITTLE vcpus, you
> > > > necessarily have to implement per-vcpu (rather than
> > > > per-domain, as it is now) cpupool membership. I said myself
> > > > it's not impossible, but it's certainly some work... with the
> > > > scheduler solution you basically get that for free!
> > > >
> > > > So, basically, if we use cpupools for the basics of
> > > > big.LITTLE support, there's no way out of it (apart from
> > > > implementing scheduling support afterwards, but that looks
> > > > backwards to me, especially when thinking about it with the
> > > > code in mind).
> > >
> > > The question is: what is the best short-term solution we can
> > > ask Peng to implement that allows Xen to run on big.LITTLE
> > > systems today? Possibly getting us closer to the long term
> > > solution, or at least not farther from it?
> > >
> > So, I still have to look closely at the patches in this series.
> > But, with Credit2 in mind, if one:
> >
> >  - takes advantage of the knowledge of which arch a pcpu belongs
> >    to inside the code that arranges the pcpus in runqueues, which
> >    means we'll end up with big runqueues and LITTLE runqueues (I
> >    re-wrote that code; I can provide pointers and help, if
> >    necessary);
> >  - tweaks the one or two instances of for_each_runqueue() [*]
> >    that there are in the code into a
> >    for_each_runqueue_of_same_class(), i.e.:
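(The "i.e.:" above got cut off in the quoting; what I had in mind is
roughly the following. This is a rough, self-contained sketch with
made-up names -- class_of(), runqueue_of(), the hardcoded "pcpus 0-3
are big" topology are all inventions of mine, not actual Credit2 code
-- just to show pcpus being arranged into per-class runqueues, with
cross-runqueue iteration only ever visiting runqueues of the same
class:)

    /* Sketch only: hypothetical names, hardcoded topology. */
    #include <stdio.h>

    #define NR_CPUS  8
    #define NR_RUNQS 2

    enum cpu_class { CLASS_BIG, CLASS_LITTLE };

    /* In Xen this would come from the DT/MIDR, not be hardcoded;
     * here, pcpus 0-3 are big and 4-7 are LITTLE. */
    static enum cpu_class class_of(unsigned int cpu)
    {
        return cpu < 4 ? CLASS_BIG : CLASS_LITTLE;
    }

    struct runqueue {
        enum cpu_class class;
        unsigned int nr_cpus;
    };

    static struct runqueue runqs[NR_RUNQS] = {
        { CLASS_BIG, 0 }, { CLASS_LITTLE, 0 },
    };

    /* Arranging pcpus into runqueues: same criteria as today
     * (socket, cache, ...) plus the class, so big and LITTLE
     * pcpus never share a runqueue. */
    static struct runqueue *runqueue_of(unsigned int cpu)
    {
        return &runqs[class_of(cpu) == CLASS_BIG ? 0 : 1];
    }

    /* The for_each_runqueue() -> for_each_runqueue_of_same_class()
     * tweak: e.g., the load balancer would then never even try to
     * move a vcpu between a big and a LITTLE runqueue. */
    #define for_each_runqueue_of_same_class(rq, cls)               \
        for ((rq) = &runqs[0]; (rq) < &runqs[NR_RUNQS]; (rq)++)    \
            if ((rq)->class == (cls))

    int main(void)
    {
        struct runqueue *rq;
        unsigned int cpu;

        for (cpu = 0; cpu < NR_CPUS; cpu++)
            runqueue_of(cpu)->nr_cpus++;

        for_each_runqueue_of_same_class(rq, CLASS_BIG)
            printf("big runqueue %ld: %u pcpus\n",
                   (long)(rq - runqs), rq->nr_cpus);

        return 0;
    }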
> Do you have a plan to add this support for big.LITTLE?
>
> I admit that this is the first time I look into the scheduler part.
> If I understand wrongly, please correct me.
>
No, I was not really planning to work on this directly myself... I was
only providing opinions and advice. That may of course change, e.g., if
we decide that it is of absolutely capital importance for Xen to gain
big.LITTLE support in a matter of days. :-) That is rather unlikely at
this stage, though, independently of who will work on it, given where
we stand in the Xen 4.8 release process.

In any case, I'm happy to help, with any kind of advice --as I'm
already trying to do-- but also in a more concrete way, on actual
code... but I strongly think it's better if you lead the effort, e.g.,
by trying to do what we agree upon, and asking immediately, as soon as
you get stuck. :-)

> There is a runqueue for each physical cpu, and there are several
> vcpus in the runqueue. The scheduler will pick a vcpu in the
> runqueue to run on the physical cpu.
>
If you start by "just" using pinning, as I envisioned for early
support, and as George is also suggesting as a first step, there is
going to be nothing to do within Xen, or on the scheduler's runqueues,
at all. And it won't be wasted effort either, because all the code for
parsing and implementing the interface in xl and libxl will be
reusable for when we ditch the implicit pinning and integrate the
mechanism within the scheduler's logic (see the sketch at the bottom
of this mail).

> A vcpu is bound to a physical cpu at alloc_vcpu time, but the vcpu
> can be scheduled or migrated to a different physical cpu.
>
> Setting cpu soft affinity and hard affinity restricts vcpus to being
> scheduled on specific cpus. Is there then a need to introduce more
> runqueues?
>
No, it's all more dynamic and --allow me-- more elegant than what you
describe... But I do understand that this is the first time you look
at the scheduling code, so it's ok not to have this clear. :-)

> This seems more complicated than cpupool (:
>
Nah, it's not... It may be a comparable amount of effort, but for a
better end result! :-)
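Just to make the "pinning first" idea above concrete: the toolstack
side could be little more than the following rough sketch, which uses
the existing libxl affinity API. The domid, the vcpu split, and which
pcpus are big are all assumptions I made up for the example; this is
not the interface actually being proposed:

    #include <stdint.h>
    #include <stdio.h>
    #include <libxl.h>
    #include <libxl_utils.h>

    int main(void)
    {
        libxl_ctx *ctx = NULL;
        libxl_bitmap big_pcpus, little_pcpus;
        uint32_t domid = 1; /* assumption: the guest being configured */
        uint32_t v;
        int cpu;

        if (libxl_ctx_alloc(&ctx, LIBXL_VERSION, 0, NULL)) {
            fprintf(stderr, "cannot initialise libxl context\n");
            return 1;
        }

        /* Assumption: on this board pcpus 0-3 are big, 4-7 LITTLE. */
        libxl_bitmap_init(&big_pcpus);
        libxl_bitmap_init(&little_pcpus);
        libxl_bitmap_alloc(ctx, &big_pcpus, 8);
        libxl_bitmap_alloc(ctx, &little_pcpus, 8);
        for (cpu = 0; cpu < 4; cpu++)
            libxl_bitmap_set(&big_pcpus, cpu);
        for (cpu = 4; cpu < 8; cpu++)
            libxl_bitmap_set(&little_pcpus, cpu);

        /* Say vcpus 0-1 of the guest are big and vcpus 2-3 LITTLE:
         * hard-pin them accordingly (NULL for the soft affinity
         * should mean "leave it alone"). */
        for (v = 0; v < 2; v++)
            libxl_set_vcpuaffinity(ctx, domid, v, &big_pcpus, NULL);
        for (v = 2; v < 4; v++)
            libxl_set_vcpuaffinity(ctx, domid, v, &little_pcpus, NULL);

        libxl_bitmap_dispose(&big_pcpus);
        libxl_bitmap_dispose(&little_pcpus);
        libxl_ctx_free(ctx);
        return 0;
    }

This is, after all, what xl vcpu-pin already does from the command
line; the genuinely new bit would "only" be deciding how the
big/LITTLE split of a domain's vcpus gets expressed in the domain
config file.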
Regards,
Dario
--
<> (Raistlin Majere)
-----------------------------------------------------------------
Dario Faggioli, Ph.D, http://about.me/dario.faggioli
Senior Software Engineer, Citrix Systems R&D Ltd., Cambridge (UK)