Date: Wed, 27 Feb 2019 11:44:10 +0800
From: Ming Lei
To: Jens Axboe
Cc: Ming Lei, Eric Biggers, "open list:AIO", linux-block,
	linux-api@vger.kernel.org, Christoph Hellwig, Jeff Moyer,
	Avi Kivity, jannh@google.com, Al Viro
Subject: Re: [PATCH 11/19] block: implement bio helper to add iter bvec pages to bio
Message-ID: <20190227034409.GH16802@ming.t460p>
References: <09820845-07cb-5153-e1c5-59ed185db26f@kernel.dk>
 <20190227015336.GD16802@ming.t460p>
 <20190227022144.GE16802@ming.t460p>
 <20190227023719.GF16802@ming.t460p>
 <9ce37f86-dd8b-ef51-7507-d457585a892a@kernel.dk>
 <20190227030944.GG16802@ming.t460p>
 <7a1f4293-8f88-81e0-9464-cc27e39184cb@kernel.dk>
In-Reply-To: <7a1f4293-8f88-81e0-9464-cc27e39184cb@kernel.dk>

On Tue, Feb 26, 2019 at 08:37:05PM -0700, Jens Axboe wrote:
> On 2/26/19 8:09 PM, Ming Lei wrote:
> > On Tue, Feb 26, 2019 at 07:43:32PM -0700, Jens Axboe wrote:
> >> On 2/26/19 7:37 PM, Ming Lei wrote:
> >>> On Tue, Feb 26, 2019 at 07:28:54PM -0700, Jens Axboe wrote:
> >>>> On 2/26/19 7:21 PM, Ming Lei wrote:
> >>>>> On Tue, Feb 26, 2019 at 06:57:16PM -0700, Jens Axboe wrote:
> >>>>>> On 2/26/19 6:53 PM, Ming Lei wrote:
> >>>>>>> On Tue, Feb 26, 2019 at 06:47:54PM -0700, Jens Axboe wrote:
> >>>>>>>> On 2/26/19 6:21 PM, Ming Lei wrote:
> >>>>>>>>> On Tue, Feb 26, 2019 at 11:56 PM Jens Axboe wrote:
> >>>>>>>>>>
> >>>>>>>>>> On 2/25/19 9:34 PM, Jens Axboe wrote:
> >>>>>>>>>>> On 2/25/19 8:46 PM, Eric Biggers wrote:
> >>>>>>>>>>>> Hi Jens,
> >>>>>>>>>>>>
> >>>>>>>>>>>> On Thu, Feb 21, 2019 at 10:45:27AM -0700, Jens Axboe wrote:
> >>>>>>>>>>>>> On 2/20/19 3:58 PM, Ming Lei wrote:
> >>>>>>>>>>>>>> On Mon, Feb 11, 2019 at 12:00:41PM -0700, Jens Axboe wrote:
> >>>>>>>>>>>>>>> For an ITER_BVEC, we can just iterate the iov and add the pages
> >>>>>>>>>>>>>>> to the bio directly. This requires that the caller doesn't release
> >>>>>>>>>>>>>>> the pages on IO completion; we add a BIO_NO_PAGE_REF flag for that.
> >>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>> The current two callers of bio_iov_iter_get_pages() are updated to
> >>>>>>>>>>>>>>> check if they need to release pages on completion. This makes them
> >>>>>>>>>>>>>>> work with bvecs that contain kernel mapped pages already.
> >>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>> Reviewed-by: Hannes Reinecke
> >>>>>>>>>>>>>>> Reviewed-by: Christoph Hellwig
> >>>>>>>>>>>>>>> Signed-off-by: Jens Axboe
> >>>>>>>>>>>>>>> ---
> >>>>>>>>>>>>>>>  block/bio.c               | 59 ++++++++++++++++++++++++++++++++-------
> >>>>>>>>>>>>>>>  fs/block_dev.c            |  5 ++--
> >>>>>>>>>>>>>>>  fs/iomap.c                |  5 ++--
> >>>>>>>>>>>>>>>  include/linux/blk_types.h |  1 +
> >>>>>>>>>>>>>>>  4 files changed, 56 insertions(+), 14 deletions(-)
> >>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>> diff --git a/block/bio.c b/block/bio.c
> >>>>>>>>>>>>>>> index 4db1008309ed..330df572cfb8 100644
> >>>>>>>>>>>>>>> --- a/block/bio.c
> >>>>>>>>>>>>>>> +++ b/block/bio.c
> >>>>>>>>>>>>>>> @@ -828,6 +828,23 @@ int bio_add_page(struct bio *bio, struct page *page,
> >>>>>>>>>>>>>>>  }
> >>>>>>>>>>>>>>>  EXPORT_SYMBOL(bio_add_page);
> >>>>>>>>>>>>>>>
> >>>>>>>>>>>>>>> +static int __bio_iov_bvec_add_pages(struct bio *bio, struct iov_iter *iter)
> >>>>>>>>>>>>>>> +{
> >>>>>>>>>>>>>>> +	const struct bio_vec *bv = iter->bvec;
> >>>>>>>>>>>>>>> +	unsigned int len;
> >>>>>>>>>>>>>>> +	size_t size;
> >>>>>>>>>>>>>>> +
> >>>>>>>>>>>>>>> +	len = min_t(size_t, bv->bv_len, iter->count);
> >>>>>>>>>>>>>>> +	size = bio_add_page(bio, bv->bv_page, len,
> >>>>>>>>>>>>>>> +				bv->bv_offset + iter->iov_offset);
> >>>>>>>>>>>>>>
> >>>>>>>>>>>>>> iter->iov_offset needs to be subtracted from 'len'; it looks like
> >>>>>>>>>>>>>> the following delta change[1] is required, otherwise memory corruption
> >>>>>>>>>>>>>> can be observed when running xfstests over loop/dio.
> >>>>>>>>>>>>>
> >>>>>>>>>>>>> Thanks, I folded this in.
> >>>>>>>>>>>>>
> >>>>>>>>>>>>> --
> >>>>>>>>>>>>> Jens Axboe
> >>>>>>>>>>>>>
> >>>>>>>>>>>>
> >>>>>>>>>>>> syzkaller started hitting a crash on linux-next starting with this commit, and
> >>>>>>>>>>>> it still occurs even with your latest version that has Ming's fix folded in.
> >>>>>>>>>>>> Specifically, commit a566653ab5ab80a from your io_uring branch with commit date
> >>>>>>>>>>>> Sun Feb 24 08:20:53 2019 -0700.
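To make the offset accounting discussed above concrete: when a bvec has already been
partially consumed, iter->iov_offset bytes of it are gone, so the length added to the
bio has to shrink by that amount while the page offset grows by it. The standalone
sketch below is illustrative only; struct bvec_slice and the compute_*() helpers are
made-up stand-ins for the relevant bio_vec/iov_iter fields, not kernel code.

#include <assert.h>
#include <stddef.h>
#include <stdio.h>

/* Illustrative stand-ins for the bio_vec/iov_iter fields involved. */
struct bvec_slice {
	size_t bv_len;     /* total bytes described by this bvec */
	size_t bv_offset;  /* offset of the data within its page */
	size_t iov_offset; /* bytes of this bvec already consumed */
	size_t count;      /* bytes remaining in the whole iterator */
};

/* Bytes that may be added to the bio from the current bvec. */
static size_t compute_add_len(const struct bvec_slice *s)
{
	size_t remaining = s->bv_len - s->iov_offset; /* skip the consumed part */

	return remaining < s->count ? remaining : s->count;
}

/* Page offset at which those bytes start. */
static size_t compute_page_offset(const struct bvec_slice *s)
{
	return s->bv_offset + s->iov_offset;
}

int main(void)
{
	/* 4 KiB bvec, 512 bytes already consumed, plenty left in the iterator. */
	struct bvec_slice s = {
		.bv_len = 4096, .bv_offset = 0, .iov_offset = 512, .count = 1 << 20,
	};

	assert(compute_add_len(&s) == 4096 - 512);
	assert(compute_page_offset(&s) == 512);
	printf("len=%zu offset=%zu\n", compute_add_len(&s), compute_page_offset(&s));
	return 0;
}

Built with any C99 compiler, it prints len=3584 offset=512; using bv_len alone here
would add 4096 bytes starting at offset 512 and run 512 bytes past the bvec, which is
the corruption described above.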
> >>>>>>>>>>>>
> >>>>>>>>>>>> Reproducer:
> >>>>>>>>>>>>
> >>>>>>>>>>>> #define _GNU_SOURCE
> >>>>>>>>>>>> #include <fcntl.h>
> >>>>>>>>>>>> #include <linux/loop.h>
> >>>>>>>>>>>> #include <sys/ioctl.h>
> >>>>>>>>>>>> #include <sys/sendfile.h>
> >>>>>>>>>>>> #include <sys/syscall.h>
> >>>>>>>>>>>> #include <unistd.h>
> >>>>>>>>>>>>
> >>>>>>>>>>>> int main(void)
> >>>>>>>>>>>> {
> >>>>>>>>>>>>         int memfd, loopfd;
> >>>>>>>>>>>>
> >>>>>>>>>>>>         memfd = syscall(__NR_memfd_create, "foo", 0);
> >>>>>>>>>>>>
> >>>>>>>>>>>>         pwrite(memfd, "\xa8", 1, 4096);
> >>>>>>>>>>>>
> >>>>>>>>>>>>         loopfd = open("/dev/loop0", O_RDWR|O_DIRECT);
> >>>>>>>>>>>>
> >>>>>>>>>>>>         ioctl(loopfd, LOOP_SET_FD, memfd);
> >>>>>>>>>>>>
> >>>>>>>>>>>>         sendfile(loopfd, loopfd, NULL, 1000000);
> >>>>>>>>>>>> }
> >>>>>>>>>>>>
> >>>>>>>>>>>>
> >>>>>>>>>>>> Crash:
> >>>>>>>>>>>>
> >>>>>>>>>>>> page:ffffea0001a6aab8 count:0 mapcount:0 mapping:0000000000000000 index:0x0
> >>>>>>>>>>>> flags: 0x100000000000000()
> >>>>>>>>>>>> raw: 0100000000000000 ffffea0001ad2c50 ffff88807fca49d0 0000000000000000
> >>>>>>>>>>>> raw: 0000000000000000 0000000000000000 00000000ffffffff
> >>>>>>>>>>>> page dumped because: VM_BUG_ON_PAGE(page_ref_count(page) == 0)
> >>>>>>>>>>>
> >>>>>>>>>>> I see what this is, I'll cut a fix for this tomorrow.
> >>>>>>>>>>
> >>>>>>>>>> Folded in a fix for this, it's in my current io_uring branch and my for-next
> >>>>>>>>>> branch.
> >>>>>>>>>
> >>>>>>>>> Hi Jens,
> >>>>>>>>>
> >>>>>>>>> I saw the following change was added:
> >>>>>>>>>
> >>>>>>>>> +		if (size == len) {
> >>>>>>>>> +			/*
> >>>>>>>>> +			 * For the normal O_DIRECT case, we could skip grabbing this
> >>>>>>>>> +			 * reference and then not have to put them again when IO
> >>>>>>>>> +			 * completes. But this breaks some in-kernel users, like
> >>>>>>>>> +			 * splicing to/from a loop device, where we release the pipe
> >>>>>>>>> +			 * pages unconditionally. If we can fix that case, we can
> >>>>>>>>> +			 * get rid of the get here and the need to call
> >>>>>>>>> +			 * bio_release_pages() at IO completion time.
> >>>>>>>>> +			 */
> >>>>>>>>> +			get_page(bv->bv_page);
> >>>>>>>>>
> >>>>>>>>> Now the 'bv' may point to more than one page, so the following may be
> >>>>>>>>> needed:
> >>>>>>>>>
> >>>>>>>>> 	int i;
> >>>>>>>>> 	struct bvec_iter_all iter_all;
> >>>>>>>>> 	struct bio_vec *tmp;
> >>>>>>>>>
> >>>>>>>>> 	mp_bvec_for_each_segment(tmp, bv, i, iter_all)
> >>>>>>>>> 		get_page(tmp->bv_page);
> >>>>>>>>
> >>>>>>>> I guess that would be the safest, even if we don't currently have more
> >>>>>>>> than one page in there. I'll fix it up.
> >>>>>>>
> >>>>>>> It is easy to see a multipage bvec from loop, :-)
> >>>>>>
> >>>>>> Speaking of this, I took a quick look at why we've now regressed a lot
> >>>>>> on IOPS perf with the multipage work. It looks like it's all related to
> >>>>>> the (much) fatter setup around iteration, which is related to this very
> >>>>>> topic too.
> >>>>>>
> >>>>>> Basically setup of things like bio_for_each_bvec() and indexing through
> >>>>>> nth_page() is MUCH slower than before.
> >>>>>
> >>>>> But bio_for_each_bvec() doesn't need nth_page(); only bio_for_each_segment()
> >>>>> needs that. However, bio_for_each_segment() isn't called from
> >>>>> blk_queue_split() and blk_rq_map_sg().
> >>>>>
> >>>>> One issue is that bio_for_each_bvec() still advances by page size
> >>>>> instead of bvec->len; I guess that is the problem. I will cook a patch
> >>>>> for your test.
> >>>>
> >>>> Probably won't make a difference for my test case...
> >>>>
> >>>>>> We need to do something about this, it's like tossing out months of
> >>>>>> optimizations.
> >>>>>
> >>>>> Some following optimization can be done, such as removing
> >>>>> biovec_phys_mergeable() from blk_bio_segment_split().
> >>>>
> >>>> I think we really need a fast path for <= PAGE_SIZE IOs, to the extent
> >>>> that it is possible. But iteration startup cost is a problem in a lot of
> >>>> spots, and a split fast path will only help a bit for that specific
> >>>> case.
> >>>>
> >>>> A 5% regression is HUGE. I know I've mentioned this before, I just want
> >>>> to really stress how big of a deal that is. It's enough to make me
> >>>> consider just reverting it again, which sucks, but I don't feel great
> >>>> shipping something that is known to be that much slower.
> >>>>
> >>>> Suggestions?
> >>>
> >>> You mentioned that nth_page() costs a lot in bio_for_each_bvec(), but that
> >>> path shouldn't call into nth_page() at all. I will look into it first.
> >>
> >> I'll check on the test box tomorrow, I lost connectivity before. I'll
> >> double check in the morning.
> >>
> >> I'd focus on the blk_rq_map_sg() path, since that's the biggest cycle
> >> consumer.
> >
> > Hi Jens,
> >
> > Could you test the following patch, which may improve the 4k randio
> > test case?
>
> A bit, it's up 1% with this patch. I'm going to try without the
> get_page/put_page that we had earlier, to see where we are in regard to
> the old baseline.

OK, today I will test io_uring over null_blk on a real machine and see
if anything can be improved.

Thanks,
Ming
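Ming's point above, that bio_for_each_bvec() still advances by page size rather than
by the bvec length, can be illustrated with a small standalone sketch; the seg struct
and the two counting helpers below are illustrative only, not kernel code. For a single
physically contiguous 256 KiB segment, a per-page walk takes 64 iterator steps where a
per-bvec walk takes one, and that per-step setup and bookkeeping is the kind of
overhead the thread ties the IOPS regression to.

#include <stdio.h>

#define PAGE_SIZE 4096u

/* Illustrative multi-page segment: 'len' physically contiguous bytes,
 * as a single multipage bvec would describe them. */
struct seg {
	unsigned int len;
};

/* Walk the segment one PAGE_SIZE step at a time (per-page iteration). */
static unsigned int steps_per_page(const struct seg *s)
{
	unsigned int steps = 0;

	for (unsigned int done = 0; done < s->len; done += PAGE_SIZE)
		steps++;
	return steps;
}

/* Consume the whole segment in one step (per-bvec iteration). */
static unsigned int steps_per_bvec(const struct seg *s)
{
	return s->len ? 1 : 0;
}

int main(void)
{
	struct seg s = { .len = 64 * PAGE_SIZE }; /* one 256 KiB segment */

	printf("per-page steps: %u\n", steps_per_page(&s)); /* 64 */
	printf("per-bvec steps: %u\n", steps_per_bvec(&s)); /*  1 */
	return 0;
}

The actual change would live in how the kernel's bvec iterator advances; this sketch
only shows why iterating per bvec rather than per page removes most of the per-step
work for large contiguous segments.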