From: Dario Faggioli
To: George Dunlap
Cc: Andrew Cooper, keir@xen.org, Jan Beulich, xen-devel
Subject: Re: [PATCH 0/2] Credit2: fix per-socket runqueue setup
Date: Tue, 2 Sep 2014 18:46:17 +0200

On Mon, 2014-09-01 at 14:59 +0100, George Dunlap wrote:
> On 08/25/2014 09:31 AM, Jan Beulich wrote:
> >>>> On 22.08.14 at 19:15, wrote:
> >> root@tg03:~# xl dmesg |grep -i runqueue
> >> (XEN) Adding cpu 0 to runqueue 1
> >> (XEN) First cpu on runqueue, activating
> >> (XEN) Adding cpu 1 to runqueue 1
> >> (XEN) Adding cpu 2 to runqueue 1
> >> (XEN) Adding cpu 3 to runqueue 1
> >> (XEN) Adding cpu 4 to runqueue 1
> >> (XEN) Adding cpu 5 to runqueue 1
> >> (XEN) Adding cpu 6 to runqueue 1
> >> (XEN) Adding cpu 7 to runqueue 1
> >> (XEN) Adding cpu 8 to runqueue 1
> >> (XEN) Adding cpu 9 to runqueue 1
> >> (XEN) Adding cpu 10 to runqueue 1
> >> (XEN) Adding cpu 11 to runqueue 1
> >> (XEN) Adding cpu 12 to runqueue 0
> >> (XEN) First cpu on runqueue, activating
> >> (XEN) Adding cpu 13 to runqueue 0
> >> (XEN) Adding cpu 14 to runqueue 0
> >> (XEN) Adding cpu 15 to runqueue 0
> >> (XEN) Adding cpu 16 to runqueue 0
> >> (XEN) Adding cpu 17 to runqueue 0
> >> (XEN) Adding cpu 18 to runqueue 0
> >> (XEN) Adding cpu 19 to runqueue 0
> >> (XEN) Adding cpu 20 to runqueue 0
> >> (XEN) Adding cpu 21 to runqueue 0
> >> (XEN) Adding cpu 22 to runqueue 0
> >> (XEN) Adding cpu 23 to runqueue 0
> >>
> >> Which makes a lot more sense. :-)
> > But it looks suspicious that the low numbered CPUs get assigned to
> > runqueue 1. Is there an explanation for this, or are surprises to be
> > expected on larger than dual-socket systems?
>
Not sure what kind of surprises you're thinking of, but I have a big
box handy. I'll test the new version of the series on it and report
what happens.

> Well the explanation is most likely from the cpu_topology info from the
> cover letter (0/2): On his machine, cpus 0-11 are on socket 1, and cpus
> 12-23 are on socket 0.
>
Exactly. Here it is again, coming from `xl info -n':

cpu_topology           :
cpu:    core    socket     node
  0:       0        1        0
  1:       0        1        0
  2:       1        1        0
  3:       1        1        0
  4:       2        1        0
  5:       2        1        0
  6:       8        1        0
  7:       8        1        0
  8:       9        1        0
  9:       9        1        0
 10:      10        1        0
 11:      10        1        0
 12:       0        0        1
 13:       0        0        1
 14:       1        0        1
 15:       1        0        1
 16:       2        0        1
 17:       2        0        1
 18:       8        0        1
 19:       8        0        1
 20:       9        0        1
 21:       9        0        1
 22:      10        0        1
 23:      10        0        1

> Why that's the topology reported (I presume in ACPI?) I'm not sure.
>
Me neither.
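Anyway, just to spell out why the low numbered CPUs end up on runqueue
1: with per-socket runqueues, the runqueue index simply follows the
socket ID, not the order in which the CPUs come up. Here's a quick
standalone sketch of that (NOT the actual sched_credit2.c code; the
socket map is just hard-coded from the `xl info -n' output above, for
illustration):

/*
 * Standalone sketch -- not the real Xen code.  The socket of each CPU
 * is hard-coded from the `xl info -n' output above, to show that the
 * runqueue index tracks the socket ID rather than the bring-up order.
 */
#include <stdio.h>

#define NR_CPUS 24

/* socket of each CPU, as reported by `xl info -n' on this box */
static const int cpu_socket[NR_CPUS] = {
    1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,  /* CPUs  0-11: socket 1 */
    0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,  /* CPUs 12-23: socket 0 */
};

int main(void)
{
    int activated[NR_CPUS] = { 0 };  /* which runqueues are active already */

    for (int cpu = 0; cpu < NR_CPUS; cpu++)
    {
        int rqi = cpu_socket[cpu];   /* per-socket runqueue: index == socket */

        printf("Adding cpu %d to runqueue %d\n", cpu, rqi);
        if (!activated[rqi])
        {
            printf("First cpu on runqueue, activating\n");
            activated[rqi] = 1;
        }
    }

    return 0;
}

Running that reproduces the same sequence as the `xl dmesg' lines
quoted above: runqueue 1 gets activated first, runqueue 0 only when
cpu 12 shows up.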
BTW, on baremetal, here's what I see:

root@tg03:~# numactl --hardware
available: 2 nodes (0-1)
node 0 cpus: 0 2 4 6 8 10 12 14 16 18 20 22
node 0 size: 18432 MB
node 0 free: 17927 MB
node 1 cpus: 1 3 5 7 9 11 13 15 17 19 21 23
node 1 size: 18419 MB
node 1 free: 17926 MB
node distances:
node   0   1
  0:  10  20
  1:  20  10

Also:

root@tg03:~# for i in `seq 0 23`; do echo "CPU$i is on socket `cat /sys/bus/cpu/devices/cpu$i/topology/physical_package_id`"; done
CPU0 is on socket 1
CPU1 is on socket 0
CPU2 is on socket 1
CPU3 is on socket 0
CPU4 is on socket 1
CPU5 is on socket 0
CPU6 is on socket 1
CPU7 is on socket 0
CPU8 is on socket 1
CPU9 is on socket 0
CPU10 is on socket 1
CPU11 is on socket 0
CPU12 is on socket 1
CPU13 is on socket 0
CPU14 is on socket 1
CPU15 is on socket 0
CPU16 is on socket 1
CPU17 is on socket 0
CPU18 is on socket 1
CPU19 is on socket 0
CPU20 is on socket 1
CPU21 is on socket 0
CPU22 is on socket 1
CPU23 is on socket 0

I've noticed this before but, TBH, I never dug into the cause of the
discrepancy between us and Linux.

Regards,
Dario

-- 
<> (Raistlin Majere)
-----------------------------------------------------------------
Dario Faggioli, Ph.D, http://about.me/dario.faggioli
Senior Software Engineer, Citrix Systems R&D Ltd., Cambridge (UK)