From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Tue, 23 Sep 2014 10:47:58 +0200
From: Kevin Wolf
Message-ID: <20140923084758.GA3871@noname.str.redhat.com>
References: <1409935888-18552-1-git-send-email-pl@kamp.de> <1409935888-18552-3-git-send-email-pl@kamp.de> <20140908134434.GB22582@irqsave.net> <540DB3E2.6010905@redhat.com> <540DB583.4030101@kamp.de> <540DB5EB.2070705@redhat.com> <540DBEBD.9040701@kamp.de> <540DC059.4000907@redhat.com> <540DC30B.2040408@kamp.de>
In-Reply-To: <540DC30B.2040408@kamp.de>
Subject: Re: [Qemu-devel] [PATCH 2/4] block: immediately cancel oversized read/write requests
To: Peter Lieven
Cc: Benoît Canet, stefanha@redhat.com, qemu-devel@nongnu.org, mreitz@redhat.com, ronniesahlberg@gmail.com, Paolo Bonzini

On 08.09.2014 at 16:54, Peter Lieven wrote:
> On 08.09.2014 16:42, Paolo Bonzini wrote:
> >On 08/09/2014 16:35, Peter Lieven wrote:
> >>What's your opinion on changing the max_xfer_len to 0xffff regardless
> >>of use_16_for_rw in iSCSI?
> >If you implemented request splitting in the block layer, it would be
> >okay to force max_xfer_len to 0xffff.
>
> Unfortunately, I currently have no time for that. It will involve some
> shuffling with qiovs that has to be properly tested.
>
> Regarding iSCSI: in fact, the limit is currently 0xffff for all iSCSI
> targets < 2 TB. So I thought it's not obvious at all why a > 2 TB target
> can handle bigger requests.
>
> On the root cause of this patch, multiwrite_merge, I still have some
> thoughts:
> - why are we merging requests for raw (especially host devices and/or
>   iSCSI)? The original patch from Kevin was to mitigate a QCOW2
>   performance regression.

The problem wasn't in qcow2, though, it just became more apparent there
because lots of small requests are deadly to performance during cluster
allocation. Installation simply took ages compared to IDE. If you do a
real benchmark, you'll probably see (smaller) differences with raw, too.

The core problem is virtio-blk getting much smaller requests. IIRC, I got
an average of 128k-512k per request for IDE and something as poor as
4k-32k for virtio-blk. If I read this thread correctly, you're saying
that this is still the case. I suspect there is something wrong with the
guest driver, but somebody would have to dig into that.

> For iSCSI the qiov concats are destroying all the zerocopy efforts we
> made.

If this is true, it just means that the iscsi driver sucks for vectored
I/O and needs to be fixed.

> - should we only merge requests within the same cluster?

Does it hurt to merge everything we can? The block driver needs to be
able to take things apart anyway; the large request could come from
somewhere else (guest, block job, built-in NBD server, etc.).

> - why is there no multiread_merge?

Because nobody implemented it. :-)
As I said above, writes hurt a lot because of qcow2 cluster allocation.
Reads are probably losing some performance as well (someone would need to
check this), but processing them separately isn't quite as painful.

Kevin
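
[Editor's sketch, not part of the original mail: a minimal, standalone
illustration of the request-splitting arithmetic discussed above, where a
request larger than the advertised transfer limit is cut into bounded
chunks before submission. The names split_request, submit_chunk and
max_transfer are invented for illustration; a real block-layer
implementation would also have to carve the QEMUIOVector up per chunk
(e.g. with something like qemu_iovec_concat), which is the qiov shuffling
Peter mentions.]

/*
 * Standalone sketch (not QEMU code) of splitting an oversized request
 * into pieces no larger than a transfer limit given in sectors.
 */
#include <stdio.h>
#include <stdint.h>
#include <inttypes.h>

static void submit_chunk(int64_t sector_num, int nb_sectors)
{
    /* Stand-in for issuing one bounded request to the driver. */
    printf("request: sector %" PRId64 ", %d sectors\n", sector_num, nb_sectors);
}

static void split_request(int64_t sector_num, int64_t nb_sectors, int max_transfer)
{
    while (nb_sectors > 0) {
        /* Never hand the driver more than max_transfer sectors at once. */
        int chunk = nb_sectors < max_transfer ? (int)nb_sectors : max_transfer;
        submit_chunk(sector_num, chunk);
        sector_num += chunk;
        nb_sectors -= chunk;
    }
}

int main(void)
{
    /* e.g. a 1 GiB write (2097152 sectors) against a 0xffff-sector limit */
    split_request(0, 2097152, 0xffff);
    return 0;
}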
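
[Editor's sketch, not part of the original mail: on the zerocopy point, a
generic POSIX illustration of why linearising a scatter/gather list costs
an extra copy, which is presumably the effect Peter refers to when merged
qiovs defeat zerocopy. This is not the iscsi driver's code; the file name
and buffer sizes are arbitrary.]

/*
 * Vectored path vs. flattened (bounce-buffer) path for two buffers.
 */
#include <stdio.h>
#include <string.h>
#include <stdlib.h>
#include <sys/uio.h>
#include <unistd.h>
#include <fcntl.h>

int main(void)
{
    char a[4096] = {0}, b[4096] = {1};
    struct iovec iov[2] = {
        { .iov_base = a, .iov_len = sizeof(a) },
        { .iov_base = b, .iov_len = sizeof(b) },
    };
    int fd = open("/tmp/zerocopy-demo", O_CREAT | O_WRONLY | O_TRUNC, 0600);
    if (fd < 0) {
        perror("open");
        return 1;
    }

    /* Vectored path: both buffers go down in one call, no copy in userspace. */
    if (writev(fd, iov, 2) < 0) {
        perror("writev");
    }

    /* Flattened path: an extra allocation and memcpy per merged request. */
    char *bounce = malloc(sizeof(a) + sizeof(b));
    memcpy(bounce, a, sizeof(a));
    memcpy(bounce + sizeof(a), b, sizeof(b));
    if (write(fd, bounce, sizeof(a) + sizeof(b)) < 0) {
        perror("write");
    }
    free(bounce);
    close(fd);
    return 0;
}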