From: "Jan Beulich"
Subject: Re: [PATCH RESEND 05/12] xen: numa-sched: make space for per-vcpu node-affinity
Date: Wed, 06 Nov 2013 16:22:16 +0000
Message-ID: <527A7AC80200007800100435@nat28.tlf.novell.com>
In-Reply-To: <527A6A6A.6030604@eu.citrix.com>
To: Dario Faggioli, George Dunlap
Cc: Marcus Granado, Justin Weaver, Ian Campbell, Li Yechen, Andrew Cooper, Juergen Gross, Ian Jackson, Matt Wilson, xen-devel, Daniel De Graaf, Keir Fraser, Elena Ufimtseva
List-Id: xen-devel@lists.xenproject.org

>>> On 06.11.13 at 17:12, George Dunlap wrote:
> The question Dario has is this: given that we now have per-vcpu hard and
> soft scheduling affinity, how should we automatically construct the
> per-domain memory allocation affinity, if at all? Should we construct
> it from the "hard" scheduling affinities, or from the "soft" scheduling
> affinities?
>
> I said that I thought we should use the soft affinity; but I really
> meant the "effective soft affinity" -- i.e., the union of soft, hard,
> and cpupools.

Actually I think trying on the narrowest set first (soft) and then
widening (hard, anywhere) is indeed the most sensible approach then.

Jan
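
To make that fallback order concrete, here is a minimal, self-contained C
sketch of trying the narrowest node set first and widening on failure.
Everything in it (node_set_t, try_alloc_on_nodes(), the free-page counts,
the example affinity masks) is a made-up illustration, not the actual Xen
allocator or the interface introduced by this patch series:

/*
 * Toy model of the "narrowest set first, then widen" fallback described
 * above: try the effective soft affinity, then the hard affinity, then
 * anywhere.  All names and numbers here are hypothetical.
 */
#include <stdio.h>

#define NR_NODES 4
typedef unsigned int node_set_t;        /* bit i set => node i allowed */

/* Hypothetical per-node free page counts, just for the demo. */
static unsigned int free_pages[NR_NODES] = { 0, 0, 5, 9 };

/* Take one page from the first node in @nodes with memory; -1 on failure. */
static int try_alloc_on_nodes(node_set_t nodes)
{
    for ( int node = 0; node < NR_NODES; node++ )
        if ( (nodes & (1u << node)) && free_pages[node] )
        {
            free_pages[node]--;
            return node;
        }
    return -1;
}

int main(void)
{
    /* Hypothetical affinities: soft = {0}, hard = {0,1}. */
    node_set_t soft = 0x1, hard = 0x3, anywhere = (1u << NR_NODES) - 1;

    int node = try_alloc_on_nodes(soft);       /* narrowest set first */
    if ( node < 0 )
        node = try_alloc_on_nodes(hard);       /* widen to hard affinity */
    if ( node < 0 )
        node = try_alloc_on_nodes(anywhere);   /* last resort: any node */

    printf("allocated from node %d\n", node);
    return 0;
}

With the made-up counts above, both the soft and hard sets turn out to be
exhausted, so the allocation falls through to node 2 -- the point being
that memory is only placed off the preferred nodes once the narrower sets
have genuinely run dry.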