From: Andrew Cooper
Subject: Re: Xen Platform QoS design discussion
Date: Thu, 29 May 2014 10:13:32 +0100
To: "Xu, Dongxiao", Jan Beulich, "george.dunlap@eu.citrix.com"
Cc: "Auld, Will", "Ian.Campbell@citrix.com", "Nakajima, Jun", "xen-devel@lists.xen.org"

On 29/05/2014 08:31, Xu, Dongxiao wrote:
>> -----Original Message-----
>> From: Jan Beulich [mailto:jbeulich@suse.com]
>> Sent: Thursday, May 29, 2014 3:02 PM
>> To: george.dunlap@eu.citrix.com; Xu, Dongxiao
>> Cc: andrew.cooper3@citrix.com; Ian.Campbell@citrix.com; xen-devel@lists.xen.org
>> Subject: Re: RE: [Xen-devel] Xen Platform QoS design discussion
>>
>>>>> "Xu, Dongxiao" 05/29/14 2:46 AM >>>
>>> I think Jan's opinion here is similar to what I proposed at the beginning
>>> of this thread. The only difference is that Jan prefers to get the CQM
>>> data per-socket and per-domain with data copying, while I proposed to get
>>> the CQM data per-domain for all sockets, which reduces the number of
>>> hypercalls.
>>
>> I don't think I ever voiced any preference between these two. All I said
>> is that it depends on prevalent usage models, and to date I don't think
>> I've seen a proper analysis of what the main usage model would be - it all
>> seems guesswork and/or taking random examples.
>>
>> What I did say I'd prefer is to have all this done outside the hypervisor,
>> with the hypervisor just providing fundamental infrastructure (MSR
>> accesses).
>
> Okay. If I understand correctly, you prefer to implement a pure MSR access
> hypercall for one CPU, and put all other CQM things in the libxc/libxl
> layer.
>
> In this case, if libvirt/XenAPI is trying to query a domain's cache
> utilization in the system (say 2 sockets), then it will trigger _two_ such
> MSR access hypercalls, one for a CPU in each of the two sockets.
>
> If you are okay with this idea, I am going to implement it.
>
> Thanks,
> Dongxiao

While I can see the use and attraction of a generic MSR access hypercall,
using this method for getting QoS data is going to have substantially higher
overhead than even the original domctl suggestion. I do not believe it will
be an effective means of getting large quantities of data from ring0 MSRs
into dom0 userspace.
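To illustrate the cost, here is a minimal sketch of what the toolstack side
might look like, assuming hypothetical libxc wrappers xc_writemsr() /
xc_readmsr() which each issue one hypercall against a named pCPU (no such
interface exists today; the names and signatures are purely illustrative).
The MSR numbers and the EVTSEL/CTR handshake follow the SDM's description of
Cache QoS Monitoring:

#include <stdint.h>
#include <xenctrl.h>

/* Hypothetical per-pCPU MSR accessors - one hypercall each. */
int xc_writemsr(xc_interface *xch, int cpu, uint32_t msr, uint64_t val);
int xc_readmsr(xc_interface *xch, int cpu, uint32_t msr, uint64_t *val);

#define MSR_IA32_QM_EVTSEL    0x00000c8d
#define MSR_IA32_QM_CTR       0x00000c8e
#define QM_EVTID_L3_OCCUPANCY 0x1

/*
 * Read L3 cache occupancy for one RMID on one socket: program EVTSEL
 * with the RMID and event id, then read the counter.  Two hypercalls
 * per (domain, socket) pair.
 */
static int read_l3_occupancy(xc_interface *xch, int cpu, uint32_t rmid,
                             uint64_t *data)
{
    uint64_t ctr;

    if ( xc_writemsr(xch, cpu, MSR_IA32_QM_EVTSEL,
                     ((uint64_t)rmid << 32) | QM_EVTID_L3_OCCUPANCY) )
        return -1;
    if ( xc_readmsr(xch, cpu, MSR_IA32_QM_CTR, &ctr) )
        return -1;
    if ( ctr & ((1ull << 63) | (1ull << 62)) )  /* Error / Unavailable */
        return -1;

    *data = ctr & ((1ull << 62) - 1);
    return 0;
}

Monitoring N domains across S sockets this way costs 2*N*S hypercalls per
sampling pass, each with its own entry/exit overhead and, for remote
sockets, most likely an IPI to get the access executed on the right
package, whereas a bulk domctl/sysctl can hand all the counters back with
a single hypercall and one copy into the caller's buffer.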
This is not to say that having a generic MSR interface is a bad thing, but I
don't think it should be used for this purpose.

~Andrew