From: Andrew Cooper
Subject: Re: about the function call memory_type_changed()
Date: Fri, 16 Jan 2015 11:59:54 +0000
Message-ID: <54B8FD3A.1080401@citrix.com>
In-Reply-To: <54B90A140200007800055BEA@mail.emea.novell.com>
References: <54B8EBF6.1080901@citrix.com> <54B90A140200007800055BEA@mail.emea.novell.com>
To: Jan Beulich, Liang Z Li, xen-devel@lists.xen.org
Cc: Yang Z Zhang, keir@xen.org, Kevin Tian, Eddie Dong, Tim Deegan
List-Id: xen-devel@lists.xenproject.org

On 16/01/15 11:54, Jan Beulich wrote:
>>>> On 16.01.15 at 11:46, wrote:
>> On 16/01/15 10:29, Li, Liang Z wrote:
>>> I found the restore process of live migration is quite long, so I
>>> tried to find out what was going on.
>>> By debugging, I found the most time-consuming part is restoring the
>>> VM's MTRR MSRs. This is done in hvm_load_mtrr_msr(), which calls
>>> memory_type_changed(), which eventually calls the time-consuming
>>> function flush_all().
>>>
>>> All of this is caused by the memory_type_changed() call added in
>>> your patch; here is the link:
>>> http://lists.xen.org/archives/html/xen-devel/2014-03/msg03792.html
>>>
>>> I am not sure whether the flush_all() call is necessary, but even if
>>> it is, a single call to hvm_load_mtrr_msr() causes dozens of
>>> flush_all() calls, and each flush_all() call takes about 8
>>> milliseconds. In my test environment the VM has 4 VCPUs, so
>>> hvm_load_mtrr_msr() is called four times and consumes about 500
>>> milliseconds in total. Obviously, there are too many flush_all()
>>> calls.
>>>
>>> I think something should be done to solve this issue. Do you agree?
>> The flush_all() can't be avoided completely, as it is permitted to
>> use sethvmcontext on an already-running VM. In that case, the flush
>> certainly does need to happen if altering the MTRRs has had a real
>> effect on dirty cache lines.
> Plus the actual functions calling memory_type_changed() in mtrr.c
> can also be called while the VM is already running.
>
>> However, having a batching mechanism across hvm_load_mtrr_msr() with
>> a single flush at the end seems like a wise move.
> And that shouldn't be very difficult to achieve. Furthermore, perhaps
> it would be possible to check whether the VM has run at all yet, and
> if it hasn't we could avoid the flush altogether in the context load
> case?

I am not sure whether we currently have that information available.
Guests are currently created with a single ref held in the
systemcontroller pause count, and require an unpause hypercall to get
going.

That said, the overwhelmingly common case is that the only
sethvmcontext hypercall made on a domain will be during construction,
so there are probably many improvements to be had by knowing there is
no dirty state to flush.

~Andrew
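
P.S. For illustration only, here is a rough standalone model of the
batching idea: a per-domain deferral counter, so that while the MTRR
MSRs are being restored memory_type_changed() only records that a flush
is wanted, and a single flush happens at the end. The struct, the
defer_flush/need_flush fields and the helper names below are invented
for this sketch and are not existing Xen code; a real patch would hang
the state off the per-domain HVM state and still issue the immediate
flush in the running-VM cases discussed above.

    /* Standalone sketch of the batching idea -- not Xen code. */
    #include <stdio.h>
    #include <stdbool.h>

    struct domain {
        unsigned int defer_flush;  /* >0: callers want a single flush later */
        bool need_flush;           /* a flush was requested while deferring */
    };

    /* Stand-in for the expensive flush_all(FLUSH_CACHE). */
    static void flush_all_cache(void)
    {
        printf("flush_all(FLUSH_CACHE)\n");
    }

    /* Models memory_type_changed(): flush immediately unless deferred. */
    static void memory_type_changed(struct domain *d)
    {
        if ( d->defer_flush )
            d->need_flush = true;
        else
            flush_all_cache();
    }

    /* Models hvm_load_mtrr_msr(): many MSR writes, one flush at the end. */
    static void load_mtrr_msrs(struct domain *d, unsigned int nr_msrs)
    {
        unsigned int i;

        d->defer_flush++;
        for ( i = 0; i < nr_msrs; i++ )
            memory_type_changed(d);    /* one call per restored MSR */
        if ( --d->defer_flush == 0 && d->need_flush )
        {
            d->need_flush = false;
            flush_all_cache();         /* single flush for the whole batch */
        }
    }

    int main(void)
    {
        struct domain d = { 0, false };

        load_mtrr_msrs(&d, 11);        /* prints one flush, not eleven */
        return 0;
    }

With something along these lines, restoring all of a vCPU's MTRR/PAT
state would cost one flush per hvm_load_mtrr_msr() call instead of one
per MSR, and the "domain has never run" check could then skip even that
final flush.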