Date: Wed, 18 May 2022 20:25:26 -0600
From: Keith Busch
To: Eric Biggers
Cc: Keith Busch, linux-fsdevel@vger.kernel.org, linux-block@vger.kernel.org,
	axboe@kernel.dk, Kernel Team, hch@lst.de, bvanassche@acm.org,
	damien.lemoal@opensource.wdc.com
Subject: Re: [PATCHv2 3/3] block: relax direct io memory alignment
References: <20220518171131.3525293-1-kbusch@fb.com> <20220518171131.3525293-4-kbusch@fb.com>

On Wed, May 18, 2022 at 07:08:11PM -0700, Eric Biggers wrote:
> On Wed, May 18, 2022 at 07:59:36PM -0600, Keith Busch wrote:
> > I'm aware that spanning pages can cause bad splits on the bi_max_vecs
> > condition, but I believe it's well handled here. Unless I'm terribly
> > confused, which is certainly possible, I think you may have missed this
> > part of the patch:
> >
> > @@ -1223,6 +1224,8 @@ static int __bio_iov_iter_get_pages(struct bio *bio, struct iov_iter *iter)
> >  	pages += entries_left * (PAGE_PTRS_PER_BVEC - 1);
> >
> >  	size = iov_iter_get_pages(iter, pages, LONG_MAX, nr_pages, &offset);
> > +	if (size > 0)
> > +		size = ALIGN_DOWN(size, queue_logical_block_size(q));
> >  	if (unlikely(size <= 0))
> >  		return size ? size : -EFAULT;
>
> That makes the total length of each "batch" of pages a multiple of the
> logical block size, but individual logical blocks within that batch can
> still be divided into multiple bvecs in the loop just below it:

I understand that, but the existing code conservatively assumes all pages
are physically discontiguous, and it wouldn't have requested more pages
than it has bvecs available for:

	unsigned short nr_pages = bio->bi_max_vecs - bio->bi_vcnt;

So with the segment alignment guarantee, and ensured available bvec space,
the created bio will always be a logical block size multiple. If we need
to split it later due to some other constraint, we'll only split on a
logical block boundary, even if it's in the middle of a bvec.
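To illustrate the accounting with a standalone model (not kernel code; the
512-byte block size and the 700-byte offset are made-up example values, and
the per-page chopping mirrors the loop quoted below):

/*
 * A trimmed batch of 12288 bytes (a 512-byte multiple) starting at
 * offset 700 within its first page spans 4 pages, i.e. one more bvec
 * than size / PAGE_SIZE -- which is exactly what the conservative
 * nr_pages accounting reserves.  The total added to the bio is still
 * a logical block multiple, even though the 512-byte boundaries can
 * land in the middle of bvecs.
 */
#include <assert.h>
#include <stddef.h>

#define MODEL_PAGE_SIZE	4096
#define MODEL_LBS	512

int main(void)
{
	size_t offset = 700;			/* sub-page start of the batch */
	size_t size = 3 * MODEL_PAGE_SIZE;	/* batch length, post-ALIGN_DOWN() */
	size_t left, len, total = 0, bvecs = 0;

	for (left = size; left > 0; left -= len) {
		len = MODEL_PAGE_SIZE - offset < left ?
		      MODEL_PAGE_SIZE - offset : left;
		total += len;			/* one bvec per touched page */
		bvecs++;
		offset = 0;
	}
	assert(bvecs == 4);		/* 4 bvecs for 3 pages' worth of bytes */
	assert(total == size);		/* bio grew by the whole batch... */
	assert(total % MODEL_LBS == 0);	/* ...so it ends on a block boundary */
	return 0;
}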
> 	for (left = size, i = 0; left > 0; left -= len, i++) {
> 		struct page *page = pages[i];
>
> 		len = min_t(size_t, PAGE_SIZE - offset, left);
>
> 		if (__bio_try_merge_page(bio, page, len, offset, &same_page)) {
> 			if (same_page)
> 				put_page(page);
> 		} else {
> 			if (WARN_ON_ONCE(bio_full(bio, len))) {
> 				bio_put_pages(pages + i, left, offset);
> 				return -EINVAL;
> 			}
> 			__bio_add_page(bio, page, len, offset);
> 		}
> 		offset = 0;
> 	}
>
> - Eric
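As an aside, the effect of the new ALIGN_DOWN() trimming can be sketched in
plain userspace C (the macro below is equivalent to the kernel's
ALIGN_DOWN() for power-of-two alignments; the byte counts are assumed
example values):

#include <stdio.h>

/* Equivalent to the kernel's ALIGN_DOWN() for power-of-two 'a'. */
#define ALIGN_DOWN(x, a)	((x) & ~((a) - 1))

int main(void)
{
	unsigned long lbs = 512;	/* e.g. queue_logical_block_size(q) */
	unsigned long size = 12988;	/* bytes pinned for the batch */

	size = ALIGN_DOWN(size, lbs);	/* round down to a block multiple */
	printf("%lu bytes (%lu blocks)\n", size, size / lbs);	/* 12800 (25) */
	return 0;
}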