From mboxrd@z Thu Jan 1 00:00:00 1970
Received: from eggs.gnu.org ([2001:4830:134:3::10]:49595)
	by lists.gnu.org with esmtp (Exim 4.71) (envelope-from )
	id 1WJLmn-0005NF-9B for qemu-devel@nongnu.org;
	Fri, 28 Feb 2014 06:39:30 -0500
Received: from Debian-exim by eggs.gnu.org with spam-scanned (Exim 4.71)
	(envelope-from ) id 1WJLmi-0006C6-FI
	for qemu-devel@nongnu.org; Fri, 28 Feb 2014 06:39:25 -0500
Received: from mx1.redhat.com ([209.132.183.28]:37436)
	by eggs.gnu.org with esmtp (Exim 4.71) (envelope-from )
	id 1WJLmi-0006AG-76 for qemu-devel@nongnu.org;
	Fri, 28 Feb 2014 06:39:20 -0500
Date: Fri, 28 Feb 2014 11:39:01 +0000
From: "Dr. David Alan Gilbert" <dgilbert@redhat.com>
Message-ID: <20140228113901.GJ2695@work-vm>
References: <33183CC9F5247A488A2544077AF19020815D225E@SZXEMA503-MBS.china.huawei.com>
	<20140228091952.GA2695@work-vm> <53106B6E.6050301@huawei.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <53106B6E.6050301@huawei.com>
Subject: Re: [Qemu-devel] [PATCH 0/7] migration: Optimization the xbzrle and fix two corruption issues
List-Id: 
List-Unsubscribe: ,
List-Archive: 
List-Post: 
List-Help: 
List-Subscribe: ,
To: Gonglei <arei.gonglei@huawei.com>
Cc: Peter Maydell, Juan Quintela, luonengjun@huawei.com, "pl@kamp.de",
	"Dr. David Alan Gilbert", "qemu-devel@nongnu.org",
	"owasserm@redhat.com", "aliguori@amazon.com", "chenliang (T)",
	"pbonzini@redhat.com"

* Gonglei (arei.gonglei@huawei.com) wrote:
> On 2014/2/28 17:19, Dr. David Alan Gilbert wrote:
> 
> > * Gonglei (Arei) (arei.gonglei@huawei.com) wrote:
> > 
> > Hi,
> > 
> >> a. Optimizing xbzrle remarkably decreases cache misses.
> >>    The efficiency of compression increases more than fifty times.
> >>    Before this patch set, the cache missed almost totally whenever
> >>    the number of cache items was less than the number of dirty pages.
> >>    Now the hot pages in the cache will not be replaced by other pages.
> > 
> > Nice, what do you use as your performance test case for xbzrle?
> The VM we used has 25G of memory and a 1Gbit nic. We ran a test
> program in the VM, as follows:
> 
> #include <stdio.h>
> #include <stdlib.h>
> #define PAGE_SIZE 4096
> 
> int main(void)
> {
>     char *p, *p1;
>     long i, j;
> 
>     p = (char *)calloc(8 * 1024, 1024 * 1024);
>     if (p == NULL) {
>         printf("fail to calloc\n");
>         exit(1);
>     }
>     /* Dirty one byte in every page of the 8GB buffer, forever. */
>     for (;;) {
>         p1 = p;
>         for (i = 0; i < 8 * 1024; i++) {
>             for (j = 0; j < 1024 * 1024; j += PAGE_SIZE) {
>                 *p1 = 0x55;
>                 p1 += PAGE_SIZE;
>             }
>         }
>     }
> }
> 
> Finally, the results:
> 
> without xbzrle enabled:                          115MB/sec
> using xbzrle without optimization (2G cache):    116MB/sec
> using xbzrle with our optimization (2G cache):   150MB/sec

Hmm, yes, it's not a very realistic test, is it? Having said that, I've
not managed to find a realistic test people can agree on; I was hoping
you had one!

You're listing the differences in MB/sec - what about the total time to
migrate?

However, the other question is why your optimisation works well with
that test: is it just the CPU overhead that it's reducing, because it's
not bothering to copy lots of stuff into the cache? If that's all the
guest is running, I can't see that it would actually XBZRLE much -
maybe just OS pages.

What do the 'info migrate' stats look like with/without your
optimisation? I'm interested in how many xbzrle pages are sent.

Dave
--
Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK