From: Juergen Gross
Subject: Re: PV-vNUMA issue: topology is misinterpreted by the guest
Date: Mon, 27 Jul 2015 14:01:41 +0200
Message-ID: <55B61DA5.5030903@suse.com>
In-Reply-To: <55B611F1.80508@citrix.com>
To: George Dunlap, George Dunlap
Cc: Elena Ufimtseva, Wei Liu, Andrew Cooper, Dario Faggioli, David Vrabel, Jan Beulich, "xen-devel@lists.xenproject.org", Boris Ostrovsky
List-Id: xen-devel@lists.xenproject.org

On 07/27/2015 01:11 PM, George Dunlap wrote:
> On 07/27/2015 11:54 AM, Juergen Gross wrote:
>> On 07/27/2015 12:43 PM, George Dunlap wrote:
>>> On Mon, Jul 27, 2015 at 5:35 AM, Juergen Gross wrote:
>>>> On 07/24/2015 06:44 PM, Boris Ostrovsky wrote:
>>>>> On 07/24/2015 12:39 PM, Juergen Gross wrote:
>>>>>> I don't say mangling cpuids can't solve the scheduling problem. It
>>>>>> surely can.
>>>>>> But it can't solve the scheduling problem without hiding
>>>>>> information like the number of sockets or cores, which might be
>>>>>> required for license purposes. If we don't care, fine.
>>>>>
>>>>> (this is somewhat repeating the email I just sent)
>>>>>
>>>>> Why can't we construct socket/core info with CPUID (and *possibly*
>>>>> ACPI changes) such that we present a reasonable (licensing-wise)
>>>>> picture?
>>>>>
>>>>> Can you suggest an example where it will not work and then maybe we
>>>>> can figure something out?
>>>>
>>>> Let's assume software with a license based on core count. You have a
>>>> system with two 8-core processors and hyperthreads enabled, summing
>>>> up to 32 logical processors. Your license is valid for up to 16
>>>> cores, so running the software on bare metal on your system is fine.
>>>>
>>>> Now you are running the software inside a virtual machine with 24
>>>> vcpus in a cpupool with 24 logical cpus limited to 12 cores (6 cores
>>>> of each processor). As we have to hide hyperthreading in order not
>>>> to have to pin each vcpu to just a single logical processor, the
>>>> topology resulting from this picture will have to present 24 cores.
>>>> The license will not cover this hardware.
>>>
>>> But how does doing a PV topology help this situation? Because we're
>>> telling one thing to the OS (via our PV interface) and another thing
>>> to applications (via direct CPUID access)?
>>
>> Exactly.
>>
>> In my example it would even work to not modify the cpuid information
>> at all. The kernel wouldn't try to be extremely clever regarding
>> scheduling and user land would see the cpuid information from the real
>> hardware (only the 12 cores it is running on, of course).
>
> Right; so it seems:
>
> 1. Userspace applications are in the habit of reading CPUID to
> determine the topology of the system they're running on.
>
> 2. Many use the topology information to help themselves make better
> scheduling decisions.
> Because a vcpu is not typically pinned to a specific pcpu, we may need
> to lie here slightly (e.g., not mention threads) to get the optimal
> behavior overall.
>
> 3. Others use the topology information to implement licensing
> restrictions. Because threads are treated differently from cores, we
> want to tell the truth here (i.e., make sure we mention that some of
> these are threads) to get the optimal behavior overall.
>
> Numbers #2 and #3 lead to contradictory courses of action; we cannot
> optimize for both at the same time.
>
> I think at some level we need to just try to accommodate both -- if the
> user doesn't have licensing issues, or prefers performance over
> licensing, then present a unified topology in PVH / HVM using CPUID,
> ACPI, &c. I think this should be the default.
>
> If the user has licensing issues, and doesn't mind presenting wonky or
> unreliable topology to its guests, then let the raw CPUID through. But
> it would, in this case, be good to try to give the guest OS scheduler a
> hint that it shouldn't really bother trying to read the topology or do
> placement as a result, as any decisions will be unreliable.
>
> Or alternately, if the user wants to give up on the "consolidation"
> aspect of virtualization, they can pin vcpus to pcpus and then pass in
> the actual host topology (hyperthreads and all).

There would be another solution, of course: support hyperthreads in the
Xen scheduler via gang scheduling. While this is not a simple solution,
it is a fair one. Hyperthreads on one core can influence each other
quite a bit. With both threads always running vcpus of the same guest,
the penalty/advantage would stay in the same domain. The guest could
make really sensible scheduling decisions and the licensing would still
work as desired.

Just an idea, but maybe worth exploring further instead of tweaking more
and more bits to make the virtual system somehow act sane.


Juergen