From: Dario Faggioli
Subject: Re: [PATCH v6 00/10] vnuma introduction
Date: Tue, 22 Jul 2014 14:49:09 +0200
To: Elena Ufimtseva
Cc: keir@xen.org, Ian.Campbell@citrix.com, stefano.stabellini@eu.citrix.com, george.dunlap@eu.citrix.com, msw@linux.com, lccycc123@gmail.com, ian.jackson@eu.citrix.com, xen-devel@lists.xen.org, JBeulich@suse.com, Wei Liu

On Fri, 2014-07-18 at 01:49 -0400, Elena Ufimtseva wrote:
> vNUMA introduction
>
Hey Elena!

Thanks for this series, and in particular for this clear and complete
cover letter.

> This series of patches introduces vNUMA topology awareness and
> provides interfaces and data structures to enable vNUMA for
> PV guests. There is a plan to extend this support to dom0 and
> HVM domains.
>
> vNUMA topology support must also be present in the PV guest kernel;
> the corresponding kernel patches need to be applied.
>
> Introduction
> -------------
>
> vNUMA topology is exposed to the PV guest to improve performance when
> running workloads on NUMA machines. vNUMA-enabled guests may also run
> on non-NUMA machines and still have a virtual NUMA topology visible to
> them. The Xen vNUMA implementation provides a way to run vNUMA-enabled
> guests on both NUMA and UMA hosts, and to flexibly map the virtual
> NUMA topology onto the physical one.
>
> Mapping to the physical NUMA topology can be done either manually or
> automatically. By default, every PV domain has one vNUMA node; it is
> populated with default parameters and does not affect performance. To
> have the vNUMA topology initialized automatically, the configuration
> file only needs to define the number of vNUMA nodes; any parameter
> left undefined is initialized to its default value.
>
> The vNUMA topology is currently defined by the following parameters:
> number of vNUMA nodes;
> distance table;
> vnode memory sizes;
> vcpu-to-vnode mapping;
> vnode-to-pnode map (for NUMA machines).
>
I'd include a brief explanation of what each parameter means and does.

> XEN_DOMCTL_setvnumainfo is used by the toolstack to populate the
> domain's vNUMA topology, either with the user-defined configuration or
> with the default parameters. vNUMA is defined for every PV domain: if
> no vNUMA configuration is found, one vNUMA node is initialized and all
> vcpus are assigned to it, with all other parameters set to their
> default values.
>
> XENMEM_get_vnumainfo is used by the PV domain to retrieve the vNUMA
> topology information from the hypervisor. The guest passes the sizes
> of the buffers it has allocated for the various vNUMA parameters, and
> the hypervisor fills them in with the topology. Future work is
> required in the toolstack and in the hypervisor to allow HVM guests to
> use these hypercalls as well.
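
Just to double check that I am reading the interface right: below is
the kind of per-domain payload I picture these two hypercalls dealing
with. It is purely a sketch of mine, with made-up names and arbitrary
bounds, not the structures the series actually introduces:

/* Illustrative only: names, types and bounds are assumptions,
 * not the series' actual public interface. */
#include <stdint.h>

#define VNUMA_EX_MAX_NODES  8   /* arbitrary, for the example only */
#define VNUMA_EX_MAX_VCPUS  32

struct vnuma_topology_example {
    unsigned int nr_vnodes;                       /* number of vNUMA nodes */
    unsigned int vdistance[VNUMA_EX_MAX_NODES]
                          [VNUMA_EX_MAX_NODES];   /* SLIT-like distance table */
    uint64_t vnode_memsize[VNUMA_EX_MAX_NODES];   /* per-vnode memory size */
    unsigned int vcpu_to_vnode[VNUMA_EX_MAX_VCPUS]; /* vcpu -> vnode mapping */
    unsigned int vnode_to_pnode[VNUMA_EX_MAX_NODES]; /* vnode -> pnode
                                                        (NUMA hosts only) */
};

A short description of each of these fields in the cover letter (as per
my comment above) would make the whole thing pretty much
self-explanatory.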
>
> libxl
>
> libxl allows the vNUMA topology to be defined in the domain
> configuration file and verifies that the configuration is correct.
> libxl also verifies the vnode-to-pnode mapping, and uses it on NUMA
> machines when automatic placement is disabled. In case of an incorrect
> or insufficient configuration, one vNUMA node is initialized and
> populated with default values.
>
Well, about automatic placement, I don't think we need to disable vNUMA
when it is enabled. In fact, automatic placement will try to place the
domain on one node only and, yes, if it manages to do so, there is no
point enabling vNUMA (unless the user asked for it, as you're saying).
OTOH, if automatic placement puts the domain on 2 or more nodes (e.g.,
because the domain is 4G, and there is only 3G free on each node), then
I think vNUMA should chime in and provide the guest with an
appropriate, internally built, NUMA topology (a toy sketch of the kind
of mapping I mean is appended at the end of this mail).

> libxc
>
> libxc builds the vnodes' memory address ranges for the guest and
> applies the necessary alignment to them. It also takes the guest e820
> memory map into account. The domain memory is then allocated, and the
> vnode-to-pnode mapping is used to determine the target pnode for each
> vnode. If this mapping is not defined, if the host is not a NUMA
> machine, or if automatic NUMA placement is enabled, the default,
> non-node-specific allocation is used.
>
Ditto. However, automatic placement does not do much at the libxc level
right now, and I think that should continue to be the case.

Regards,
Dario

-- 
<> (Raistlin Majere)
-----------------------------------------------------------------
Dario Faggioli, Ph.D, http://about.me/dario.faggioli
Senior Software Engineer, Citrix Systems R&D Ltd., Cambridge (UK)
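
PS. Here is the toy sketch I mentioned above, about spreading vnodes
over the pnodes chosen by automatic placement. It is completely made up
(names like pnode_free and vnode_size are mine, and this is neither the
series' nor libxl's actual code); it only illustrates the kind of
greedy, free-memory driven vnode-to-pnode assignment I have in mind:

/* Toy sketch: spread vnodes over the pnodes selected by automatic
 * placement, always picking the pnode with the most free memory left.
 * Names and logic are illustrative, not libxl's implementation. */
#include <stdio.h>

#define NR_VNODES 4
#define NR_PNODES 2

int main(void)
{
    /* Say placement picked 2 pnodes, each with 3G free (in MB)... */
    unsigned long pnode_free[NR_PNODES] = { 3072, 3072 };
    /* ...and the 4G domain is split into 4 vnodes of 1G each. */
    unsigned long vnode_size[NR_VNODES] = { 1024, 1024, 1024, 1024 };
    int vnode_to_pnode[NR_VNODES];

    for (int v = 0; v < NR_VNODES; v++) {
        /* Greedy choice: the pnode with the most free memory wins. */
        int best = 0;
        for (int p = 1; p < NR_PNODES; p++)
            if (pnode_free[p] > pnode_free[best])
                best = p;
        vnode_to_pnode[v] = best;
        pnode_free[best] -= vnode_size[v];
        printf("vnode %d -> pnode %d\n", v, vnode_to_pnode[v]);
    }

    return 0;
}

In real life, the free memory figures would of course come from the
host NUMA information, and the user's constraints would have to be
honoured, but the basic idea is the one above.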