From: ronnie sahlberg
Date: Fri, 19 Jul 2013 07:00:33 -0700
Subject: Re: [Qemu-devel] [PATCH 4/4] qemu-img: conditionally discard target on convert
To: Peter Lieven
Cc: Kevin Wolf, Paolo Bonzini, qemu-devel, Stefan Hajnoczi
In-Reply-To: <51E943DF.1050303@kamp.de>

On Fri, Jul 19, 2013 at 6:49 AM, Peter Lieven wrote:
> On 19.07.2013 15:25, ronnie sahlberg wrote:
>>
>> On Thu, Jul 18, 2013 at 11:08 PM, Peter Lieven wrote:
>>>
>>> On 19.07.2013 07:58, Paolo Bonzini wrote:
>>>>
>>>> On 18/07/2013 21:28, Peter Lieven wrote:
>>>>>
>>>>> Thanks for the details. I think that to get optimal performance and
>>>>> the best chance of unmapping in qemu-img convert, it might be best
>>>>> to export the OPTIMAL UNMAP GRANULARITY
>>>>
>>>> Agreed about this.
>>>>
>>>>> as well as the write_zeroes_w_discard capability via the BDI
>>>>
>>>> But why this?!? It is _not_ needed. All you need is to change the
>>>> default of the "-S" option to be the OPTIMAL UNMAP GRANULARITY if it
>>>> is nonzero.
>>>
>>> Two reasons:
>>> a) Does this guarantee that the requests are aligned to a multiple of
>>> the -S value?
>>>
>>> b) If this flag exists, qemu-img convert can do the "zeroing" a
>>> priori. This has the benefit that, combined with
>>> bdrv_get_block_status requests beforehand, it might not need to touch
>>> large areas of the volume. Speaking for iSCSI, it's likely that the
>>> user sets a fresh volume as the destination, but it's not guaranteed.
>>> With Patch 4 only a few get_block_status requests are needed to
>>> verify this. If we just write zeroes with BDRV_MAY_UNMAP, we send
>>> hundreds or thousands of writesame requests for possibly already
>>> unmapped data.
>>>
>>> To give an example: if I take my storage with a 1TB volume, it takes
>>> about 10-12 get_block_status requests to verify that it is completely
>>> unmapped. After this I am safe to set has_zero_init = 1 in qemu-img
>>> convert.
>>>
>>> If I were to convert a 1TB image to this target where, let's say, 50%
>>> consists of at least 15MB zero blocks (15MB is the OUG of my
>>> storage), it would take ~35000 writesame requests to achieve the
>>> same.
>>
>> I am not sure I am reading this right, but you don't have to writesame
>> exactly 1xOUG to get it to unmap; nxOUG will work too.
>> So instead of sending one writesame for each OUG range, you can send
>> one writesame for every ~10G or so.
>> Say 10G is ~667 OUGs in your case, so you can send a writesame for
>> ~667xOUG in each command, and then it would "only" take ~100
>> writesames instead of ~35000.
>>
>> So as long as you are sending in multiples of the OUG you should be
>> fine.
>
> do I not have to take care of max_ws_size?

Yes, you need to handle max_ws_size, but I would imagine that on most
targets max_ws_size >> OUG. I would be surprised if a target set
max_ws_size to just a single OUG.

>
> Peter
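
P.S. The chunk-size math in a rough sketch (plain C, untested; the
10 GB max_ws_size is an assumption for illustration, not a value read
from your target): take the largest multiple of the OUG that still
fits in one WRITE SAME, and issue one command per chunk.

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

/* Largest multiple of the OPTIMAL UNMAP GRANULARITY that still fits in
 * a single WRITE SAME of at most max_ws_size bytes. */
static uint64_t ws_chunk_size(uint64_t oug, uint64_t max_ws_size)
{
    if (max_ws_size < oug) {
        return max_ws_size;   /* degenerate target; can't honor the OUG */
    }
    return (max_ws_size / oug) * oug;
}

int main(void)
{
    uint64_t oug    = 15ULL << 20;   /* 15 MB OUG from the example above */
    uint64_t max_ws = 10ULL << 30;   /* assumed 10 GB max_ws_size        */
    uint64_t volume = 1ULL << 40;    /* 1 TB volume                      */

    uint64_t chunk = ws_chunk_size(oug, max_ws);
    printf("chunk = %" PRIu64 " MB (= %" PRIu64 " x OUG), "
           "worst case ~%" PRIu64 " writesames\n",
           chunk >> 20, chunk / oug, (volume + chunk - 1) / chunk);
    return 0;
}

With these numbers it prints a chunk of 682xOUG and ~103 writesames to
cover the whole 1 TB, i.e. the same ballpark as the ~100 above (682 vs
~667 is just binary vs decimal gigabytes).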
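
And the a-priori zero probe from earlier in the thread fits in a dozen
lines. A sketch only: the get_status callback here is hypothetical and
not the real bdrv_get_block_status signature.

#include <stdbool.h>
#include <stdint.h>

/* Hypothetical status callback: fills *pnum with the byte length of
 * the extent starting at offset and returns true if that extent reads
 * back as zeroes / unmapped. */
typedef bool (*get_status_fn)(void *opaque, uint64_t offset,
                              uint64_t *pnum);

static bool volume_is_all_zero(get_status_fn get_status, void *opaque,
                               uint64_t volume_size)
{
    uint64_t offset = 0;

    while (offset < volume_size) {
        uint64_t pnum = 0;

        if (!get_status(opaque, offset, &pnum) || pnum == 0) {
            return false;   /* hit data, or made no progress */
        }
        /* A fresh target reports huge unmapped extents, so for a 1 TB
         * volume this loop should finish in the ~10-12 round trips
         * measured above, after which has_zero_init = 1 is safe. */
        offset += pnum;
    }
    return true;
}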