On Mon, 2016-11-21 at 11:37 -0800, Sarah Newman wrote:
> On 11/21/2016 05:21 AM, Andrew Cooper wrote:
> > I suspect that libxl's preference towards NUMA allocation of
> > domains interferes with this, by adding a NUMA constraints to
> > memory allocations for 64bit PV guests.
>
> I ran xl info -n (which I didn't know about before) and that shows
> the problem much more clearly.
>
> If that's the reason not all the higher memory is being used first:
> is a potential workaround to pin 64 bit domains to the second
> physical core on boot, and 32 bit domains to the first physical
> core on boot, and then change the allowed cores with 'xl vcpu-pin'
> after the domain is loaded?
>
If you're looking for a way to disable libxl's NUMA-aware domain
placement --which does indeed interfere with what memory (as in, the
memory of what NUMA node) is going to be used for the domains-- you
can "just" specify this in all the domains' config files:

  cpus="all"

This will leave all the vcpus free to run everywhere, and stop libxl
from passing down to Xen any hint on memory allocation.

Using cpus="foo" and/or cpus_soft="bar" would let you tweak things
further. The same goes for creating cpupools, with specific cpus from
specific NUMA nodes in them, and creating the domains directly inside
those pools (there is a rough sketch of both at the bottom of this
message).

That being said, I'm not really sure which values are best for your
use case... but maybe have a look at these.

Some more info:
https://wiki.xen.org/wiki/Xen_on_NUMA_Machines
https://wiki.xen.org/wiki/Xen_4.3_NUMA_Aware_Scheduling
https://wiki.xen.org/wiki/Tuning_Xen_for_Performance#vCPU_Soft_Affinity_for_guests

Regards,
Dario
--
<> (Raistlin Majere)
-----------------------------------------------------------------
Dario Faggioli, Ph.D, http://about.me/dario.faggioli
Senior Software Engineer, Citrix Systems R&D Ltd., Cambridge (UK)
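
A minimal sketch of the two approaches mentioned above (untested here;
the pcpu range, the node layout and the pool name are just assumptions
for a hypothetical two-node host, so check 'xl info -n' and
'xl cpupool-list -c' for the real numbers and names on your box):

  # In the domain's xl config file:
  cpus="all"        # hard affinity: vcpus may run on any pcpu; setting
                    # cpus= also disables libxl's automatic NUMA placement
  cpus_soft="8-15"  # optional: prefer (but do not require) node 1's
                    # pcpus, assuming node 1 owns pcpus 8-15

  # Alternatively, with cpupools (commands run from dom0):
  xl cpupool-numa-split    # one cpupool per NUMA node (typically named
                           # Pool-node0, Pool-node1, ...)
  xl cpupool-list -c       # check which pcpus ended up in which pool
  # and then, in the domain's config file:
  pool="Pool-node1"

Of the two, the cpupool variant is the stricter one, since the domain's
vcpus can only ever run on that pool's pcpus, while soft affinity is
just a scheduling preference.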