From: Dario Faggioli
To: Juergen Gross
Cc: Elena Ufimtseva, Wei Liu, Andrew Cooper, David Vrabel, Jan Beulich,
 "xen-devel@lists.xenproject.org", Boris Ostrovsky
Subject: Re: PV-vNUMA issue: topology is misinterpreted by the guest
Date: Tue, 28 Jul 2015 18:17:18 +0200

On Tue, 2015-07-28 at 17:11 +0200, Juergen Gross wrote:
> On 07/28/2015 06:29 AM, Juergen Gross wrote:
> > I'll make some performance tests on a big machine (4 sockets, 60 cores,
> > 120 threads) regarding topology information:
> >
> > - bare metal
> > - "random" topology (like today)
> > - "simple" topology (all vcpus regarded as equal)
> > - "real" topology with all vcpus pinned
> >
> > This should show:
> >
> > - how intrusive would the topology patch(es) be?
> > - what is the performance impact of a "wrong" scheduling data base
>
> On the above box I used a pvops kernel 4.2-rc4 plus a rather small patch
> (see attachment). I did 5 kernel builds in each environment:
>
> make clean
> time make -j 120
>
Right. If you have time, can you try '-j60' and '-j30' (maybe even
-j45 and -j15, if you've got _a_lot_ of time! :-))?

I'm asking because, with hyperthreading involved, I've sometimes seen
things turn out worse when *not* (over)saturating the CPU capacity.
The explanation is that, if every vcpu is busy, meaning that every
thread is busy, it does not make much difference where you schedule
the busy vcpus. OTOH, if only half of the threads are busy, a properly
set up system will spread the load so that each busy vcpu has a full
core to itself, while a messed up one, when trying to do the same,
will end up scheduling work on sibling threads even though idle cores
are available. In that situation, things are a bit more tricky.

In fact, I've observed the above while working on the Xen scheduler.
Here, it is the guest (dom0) scheduler that we are looking at, and,
e.g., if the load is small enough, Xen's scheduler will fix things up,
at least up to a certain extent. It's worth a try anyway, I guess, if
you have time, of course.
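Just to be concrete about the kind of sweep I mean, here is a rough
sketch (untested on your box, so take it as illustration only): it
assumes GNU time is available as /usr/bin/time and that it is run from
the top of the kernel tree you are already building; the first build
at each -j level is only a warm-up and is not timed, similar to your
discarding the first of the 5 runs.

  #!/bin/sh
  # Rough sketch of the -j sweep: for each level of parallelism, do
  # one untimed warm-up build (to populate caches), then RUNS timed
  # builds, printing elapsed/user/system seconds for each of them.
  RUNS=5
  for jobs in 120 60 45 30 15; do
      make -s clean
      make -s -j "$jobs" >/dev/null 2>&1    # warm-up, result discarded
      for run in $(seq "$RUNS"); do
          make -s clean
          echo "== -j$jobs, run $run =="
          /usr/bin/time -f "elapsed %e  user %U  system %S" \
              make -s -j "$jobs" >/dev/null
      done
  done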
> The first result of the 5 runs was always omitted as it would have to
> build up buffer caches etc. The Xen cases were all done in dom0; pinning
> of vcpus in the last scenario was done via the dom0_vcpus_pin boot
> parameter of the hypervisor.
>
> Here are the results (everything in seconds):
>
>                       elapsed   user   system
> bare metal:               100   5770      805
> "random" topology:        283   6740    20700
> "simple" topology:        290   6740    22200
> "real" topology:          185   7800     8040
>
> As expected bare metal is the best. Next is "real" topology with pinned
> vcpus (expected again - but system time is already up by a factor of 10!).
>
I also think that (massively) overloading biases things in favour of
pinning. Pinning incurs less overhead, as there are no scheduling
decisions involved and no migrations of vcpus among pcpus. With the
system oversubscribed to 200%, even in the non-pinning case there
shouldn't be many migrations, but there will certainly be some, and
they turn out to be pure overhead: none of them can possibly put the
system in a more advantageous state, performance wise, because we're
fully loaded and we want to stay fully loaded!

> What I didn't expect is: "random" is better than "simple" topology.
>
Weird indeed!

> I could test some other topologies (e.g. everything on one socket, or
> even on one core), but I'm not sure this makes sense. I didn't check
> the exact topology result of the "random" case, maybe I'll do that
> tomorrow with another measurement.
>
So, my test box looks like this:

cpu_topology           :
cpu:    core    socket     node
  0:       0        1        0
  1:       0        1        0
  2:       1        1        0
  3:       1        1        0
  4:       9        1        0
  5:       9        1        0
  6:      10        1        0
  7:      10        1        0
  8:       0        0        1
  9:       0        0        1
 10:       1        0        1
 11:       1        0        1
 12:       9        0        1
 13:       9        0        1
 14:      10        0        1
 15:      10        0        1

In Dom0, here's what I see _without_ any pinning:

root@Zhaman:~# for i in `seq 0 15`; do cat /sys/devices/system/cpu/cpu$i/topology/thread_siblings_list; done
0-1
0-1
2-3
2-3
4-5
4-5
6-7
6-7
8-9
8-9
10-11
10-11
12-13
12-13
14-15
14-15
root@Zhaman:~# cat /proc/cpuinfo | grep "physical id"
physical id     : 1
physical id     : 1
physical id     : 1
physical id     : 1
physical id     : 1
physical id     : 1
physical id     : 1
physical id     : 1
physical id     : 0
physical id     : 0
physical id     : 0
physical id     : 0
physical id     : 0
physical id     : 0
physical id     : 0
physical id     : 0
root@Zhaman:~# cat /proc/cpuinfo | grep "core id"
core id         : 0
core id         : 0
core id         : 1
core id         : 1
core id         : 9
core id         : 9
core id         : 10
core id         : 10
core id         : 0
core id         : 0
core id         : 1
core id         : 1
core id         : 9
core id         : 9
core id         : 10
core id         : 10
root@Zhaman:~# cat /proc/cpuinfo | grep "cpu cores"
cpu cores       : 4
root@Zhaman:~# cat /proc/cpuinfo | grep "siblings"
siblings        : 8

So, basically, as far as Dom0 on my test box is concerned, "random"
actually matches the host topology.
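FWIW, the dom0 side of that comparison can be gathered in one go with
a small loop like the one below. It is only a sketch that tidies up
the sysfs reads shown above (the host side would be the cpu_topology
section that `xl info -n` prints):

  #!/bin/sh
  # Sketch: print, for each CPU dom0 sees, the socket and core it
  # believes it is on, plus its thread siblings, so the output can be
  # compared line by line with the host cpu_topology listing.
  ncpus=$(getconf _NPROCESSORS_ONLN)
  i=0
  while [ "$i" -lt "$ncpus" ]; do
      t=/sys/devices/system/cpu/cpu$i/topology
      printf "cpu %2d: socket %s  core %2d  siblings %s\n" \
          "$i" "$(cat $t/physical_package_id)" "$(cat $t/core_id)" \
          "$(cat $t/thread_siblings_list)"
      i=$((i + 1))
  done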
Sure, without pinning, this looks equally wrong, as Xen's scheduler
may well run, say, vcpu 0 and vcpu 4, which are not siblings, on the
same core. But then again, if the load is small, that just won't
happen (e.g., if those are the only two busy vcpus, Xen will put them
on cores that are not siblings), while if the load is too high, it
won't matter... :-/

Regards,
Dario
-- 
<> (Raistlin Majere)
-----------------------------------------------------------------
Dario Faggioli, Ph.D, http://about.me/dario.faggioli
Senior Software Engineer, Citrix Systems R&D Ltd., Cambridge (UK)