From: George Dunlap
Subject: Re: [Hackathon minutes] PV frontends/backends and NUMA machines
Date: Tue, 21 May 2013 09:47:09 +0100
References: <20130521083251.GD9626@ocelot.phlegethon.org>
In-Reply-To: <20130521083251.GD9626@ocelot.phlegethon.org>
To: Tim Deegan
Cc: "xen-devel@lists.xensource.com", Stefano Stabellini
List-Id: xen-devel@lists.xenproject.org

On Tue, May 21, 2013 at 9:32 AM, Tim Deegan wrote:
> At 14:48 +0100 on 20 May (1369061330), George Dunlap wrote:
>> So the work items I remember are as follows:
>> 1. Implement NUMA affinity for vcpus
>> 2. Implement Guest NUMA support for PV guests
>> 3. Teach Xen how to make a sensible NUMA allocation layout for dom0
>
> Does Xen need to do this? Or could dom0 sort that out for itself after
> boot?

There are two aspects to this. The first is, if dom0.nvcpus < host.npcpus, to place the vcpus sensibly across the various NUMA nodes. The second is to make the pfn -> NUMA node layout sensible. At the moment, as I understand it, pfns are striped across nodes. In theory dom0 could deal with this, but in practice sorting that out is going to be nasty. It would be much better, if you have (say) 4 nodes and 4GiB of memory assigned to dom0, to have pfn 0-1G on node 0, 1-2G on node 1, &c.

 -George