From: Dario Faggioli
To: "Justin T. Weaver" <jtweaver@hawaii.edu>
Cc: george.dunlap@eu.citrix.com, henric@hawaii.edu, xen-devel@lists.xen.org
Subject: Re: [PATCH v3 0/4] sched: credit2: introduce per-vcpu hard and soft affinity
Date: Thu, 17 Sep 2015 16:27:20 +0200

On Wed, 2015-03-25 at 23:48 -1000, Justin T. Weaver wrote:
> Here are the results I gathered from testing. Each guest had 2 vcpus
> and 1GB of memory.
>
Hey, thanks for doing the benchmarking as well! :-)

> The hardware consisted of two quad core Intel Xeon X5570 processors
> and 8GB of RAM per node. The sysbench memory test was run with the
> num-threads option set to four, and was run simultaneously on two,
> then six, then ten VMs. Each result below is an average of three runs.
>
> -------------------------------------------------------
> | Sysbench memory, throughput MB/s (higher is better) |
> -------------------------------------------------------
> | #VMs | No affinity | Pinning | NUMA scheduling      |
> |   2  |   417.01    | 406.16  |     428.83           |
> |   6  |   389.31    | 407.07  |     402.90           |
> |  10  |   317.91    | 320.53  |     321.98           |
> -------------------------------------------------------
>
> Despite the overhead added, NUMA scheduling performed best in both
> the two and ten VM tests.
>
Nice.

Just to be sure, is my understanding of the column labels accurate?
 - 'No affinity'     == neither hard nor soft affinity set for any VM
 - 'Pinning'         == hard affinity used to pin VMs to NUMA nodes
                        (evenly, I guess?); soft affinity untouched
 - 'NUMA scheduling' == soft affinity used to associate VMs with NUMA
                        nodes (evenly, I guess?); hard affinity untouched

Also, can you confirm that all the hard and soft affinity settings were
done at VM creation time, i.e., that they were effectively influencing
where the memory of the VMs was being allocated? (It looks like so,
from the numbers, but I wanted to be sure...)
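To be concrete about what I mean by each of those, here is a sketch of
how I imagine the setup looked (the node masks, and whether you used
the domain config file or `xl vcpu-pin`, are just my guesses):

  # 'Pinning': hard affinity only, set in the domain config file, so
  # it is in effect at creation time and hence also drives where the
  # domain's memory is allocated (node 0 here is made up):
  cpus = "node:0"

  # 'NUMA scheduling': soft affinity only, again in the config file:
  cpus_soft = "node:0"

  # The same can be done at runtime ('-' leaves the other affinity
  # alone), but then it would not have influenced memory placement:
  #   xl vcpu-pin <domain> all node:0 -    # hard affinity only
  #   xl vcpu-pin <domain> all - node:0    # soft affinity only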
Thanks again and Regards,
Dario

--
<<This happens because I choose it to happen!>> (Raistlin Majere)
-----------------------------------------------------------------
Dario Faggioli, Ph.D, http://about.me/dario.faggioli
Senior Software Engineer, Citrix Systems R&D Ltd., Cambridge (UK)