From: Ming Lei
To: Christoph Hellwig
Cc: Jens Axboe, Ming Lei, Eric Biggers, "open list:AIO", linux-block, linux-api@vger.kernel.org, Christoph Hellwig, Jeff Moyer, Avi Kivity, jannh@google.com, Al Viro
Subject: Re: [PATCH 11/19] block: implement bio helper to add iter bvec pages to bio
Date: Thu, 28 Feb 2019 16:37:20 +0800
Message-ID: <20190228083719.GA24857@ming.t460p>
References: <20190227022144.GE16802@ming.t460p> <20190227023719.GF16802@ming.t460p> <9ce37f86-dd8b-ef51-7507-d457585a892a@kernel.dk> <20190227030944.GG16802@ming.t460p> <7a1f4293-8f88-81e0-9464-cc27e39184cb@kernel.dk> <20190227034409.GH16802@ming.t460p> <2a48c5da-7eea-13a1-9939-9e182e11dc9b@kernel.dk> <20190227194241.GA3860@infradead.org>
In-Reply-To: <20190227194241.GA3860@infradead.org>

On Wed, Feb 27, 2019 at 11:42:41AM -0800, Christoph Hellwig wrote:
> On Tue, Feb 26, 2019 at 09:06:23PM -0700, Jens Axboe wrote:
> > On 2/26/19 9:05 PM, Jens Axboe wrote:
> > > On 2/26/19 8:44 PM, Ming Lei wrote:
> > >> On Tue, Feb 26, 2019 at 08:37:05PM -0700, Jens Axboe wrote:
> > >>> On 2/26/19 8:09 PM, Ming Lei wrote:
> > >>>> On Tue, Feb 26, 2019 at 07:43:32PM -0700, Jens Axboe wrote:
> > >>>>> On 2/26/19 7:37 PM, Ming Lei wrote:
> > >>>>>> On Tue, Feb 26, 2019 at 07:28:54PM -0700, Jens Axboe wrote:
> > >>>>>>> On 2/26/19 7:21 PM, Ming Lei wrote:
> > >>>>>>>> On Tue, Feb 26, 2019 at 06:57:16PM -0700, Jens Axboe wrote:
> > >>>>>>>>> On 2/26/19 6:53 PM, Ming Lei wrote:
> > >>>>>>>>>> On Tue, Feb 26, 2019 at 06:47:54PM -0700, Jens Axboe wrote:
> > >>>>>>>>>>> On 2/26/19 6:21 PM, Ming Lei wrote:
> > >>>>>>>>>>>> On Tue, Feb 26, 2019 at 11:56 PM Jens Axboe wrote:
> > >>>>>>>>>>>>>
> > >>>>>>>>>>>>> On 2/25/19 9:34 PM, Jens Axboe wrote:
> > >>>>>>>>>>>>>> On 2/25/19 8:46 PM, Eric Biggers wrote:
> > >>>>>>>>>>>>>>> Hi Jens,
> > >>>>>>>>>>>>>>>
> > >>>>>>>>>>>>>>> On Thu, Feb 21, 2019 at 10:45:27AM -0700, Jens Axboe wrote:
> > >>>>>>>>>>>>>>>> On 2/20/19 3:58 PM, Ming Lei wrote:
> > >>>>>>>>>>>>>>>>> On Mon, Feb 11, 2019 at 12:00:41PM -0700, Jens Axboe wrote:
> > >>>>>>>>>>>>>>>>>> For an ITER_BVEC, we can just iterate the iov and add the pages
> > >>>>>>>>>>>>>>>>>> to the bio directly. This requires that the caller doesn't release
> > >>>>>>>>>>>>>>>>>> the pages on IO completion, so we add a BIO_NO_PAGE_REF flag for that.
> > >>>>>>>>>>>>>>>>>>
> > >>>>>>>>>>>>>>>>>> The current two callers of bio_iov_iter_get_pages() are updated to
> > >>>>>>>>>>>>>>>>>> check if they need to release pages on completion. This makes them
> > >>>>>>>>>>>>>>>>>> work with bvecs that contain kernel mapped pages already.
> > >>>>>>>>>>>>>>>>>>
> > >>>>>>>>>>>>>>>>>> Reviewed-by: Hannes Reinecke
> > >>>>>>>>>>>>>>>>>> Reviewed-by: Christoph Hellwig
> > >>>>>>>>>>>>>>>>>> Signed-off-by: Jens Axboe
> > >>>>>>>>>>>>>>>>>> ---
> > >>>>>>>>>>>>>>>>>>  block/bio.c               | 59 ++++++++++++++++++++++++++++++++-------
> > >>>>>>>>>>>>>>>>>>  fs/block_dev.c            |  5 ++--
> > >>>>>>>>>>>>>>>>>>  fs/iomap.c                |  5 ++--
> > >>>>>>>>>>>>>>>>>>  include/linux/blk_types.h |  1 +
> > >>>>>>>>>>>>>>>>>>  4 files changed, 56 insertions(+), 14 deletions(-)
> > >>>>>>>>>>>>>>>>>>
> > >>>>>>>>>>>>>>>>>> diff --git a/block/bio.c b/block/bio.c
> > >>>>>>>>>>>>>>>>>> index 4db1008309ed..330df572cfb8 100644
> > >>>>>>>>>>>>>>>>>> --- a/block/bio.c
> > >>>>>>>>>>>>>>>>>> +++ b/block/bio.c
> > >>>>>>>>>>>>>>>>>> @@ -828,6 +828,23 @@ int bio_add_page(struct bio *bio, struct page *page,
> > >>>>>>>>>>>>>>>>>>  }
> > >>>>>>>>>>>>>>>>>>  EXPORT_SYMBOL(bio_add_page);
> > >>>>>>>>>>>>>>>>>>
> > >>>>>>>>>>>>>>>>>> +static int __bio_iov_bvec_add_pages(struct bio *bio, struct iov_iter *iter)
> > >>>>>>>>>>>>>>>>>> +{
> > >>>>>>>>>>>>>>>>>> +	const struct bio_vec *bv = iter->bvec;
> > >>>>>>>>>>>>>>>>>> +	unsigned int len;
> > >>>>>>>>>>>>>>>>>> +	size_t size;
> > >>>>>>>>>>>>>>>>>> +
> > >>>>>>>>>>>>>>>>>> +	len = min_t(size_t, bv->bv_len, iter->count);
> > >>>>>>>>>>>>>>>>>> +	size = bio_add_page(bio, bv->bv_page, len,
> > >>>>>>>>>>>>>>>>>> +				bv->bv_offset + iter->iov_offset);
> > >>>>>>>>>>>>>>>>>
> > >>>>>>>>>>>>>>>>> iter->iov_offset needs to be subtracted from 'len'; it looks like
> > >>>>>>>>>>>>>>>>> the following delta change[1] is required, otherwise memory corruption
> > >>>>>>>>>>>>>>>>> can be observed when running xfstests over loop/dio.
> > >>>>>>>>>>>>>>>>
> > >>>>>>>>>>>>>>>> Thanks, I folded this in.
> > >>>>>>>>>>>>>>>>
> > >>>>>>>>>>>>>>>> --
> > >>>>>>>>>>>>>>>> Jens Axboe
> > >>>>>>>>>>>>>>>>
> > >>>>>>>>>>>>>>>
> > >>>>>>>>>>>>>>> syzkaller started hitting a crash on linux-next starting with this commit, and
> > >>>>>>>>>>>>>>> it still occurs even with your latest version that has Ming's fix folded in.
> > >>>>>>>>>>>>>>> Specifically, commit a566653ab5ab80a from your io_uring branch with commit date
> > >>>>>>>>>>>>>>> Sun Feb 24 08:20:53 2019 -0700.
> > >>>>>>>>>>>>>>>
> > >>>>>>>>>>>>>>> Reproducer:
> > >>>>>>>>>>>>>>>
> > >>>>>>>>>>>>>>> #define _GNU_SOURCE
> > >>>>>>>>>>>>>>> #include
> > >>>>>>>>>>>>>>> #include
> > >>>>>>>>>>>>>>> #include
> > >>>>>>>>>>>>>>> #include
> > >>>>>>>>>>>>>>> #include
> > >>>>>>>>>>>>>>> #include
> > >>>>>>>>>>>>>>>
> > >>>>>>>>>>>>>>> int main(void)
> > >>>>>>>>>>>>>>> {
> > >>>>>>>>>>>>>>> 	int memfd, loopfd;
> > >>>>>>>>>>>>>>>
> > >>>>>>>>>>>>>>> 	memfd = syscall(__NR_memfd_create, "foo", 0);
> > >>>>>>>>>>>>>>>
> > >>>>>>>>>>>>>>> 	pwrite(memfd, "\xa8", 1, 4096);
> > >>>>>>>>>>>>>>>
> > >>>>>>>>>>>>>>> 	loopfd = open("/dev/loop0", O_RDWR|O_DIRECT);
> > >>>>>>>>>>>>>>>
> > >>>>>>>>>>>>>>> 	ioctl(loopfd, LOOP_SET_FD, memfd);
> > >>>>>>>>>>>>>>>
> > >>>>>>>>>>>>>>> 	sendfile(loopfd, loopfd, NULL, 1000000);
> > >>>>>>>>>>>>>>> }
> > >>>>>>>>>>>>>>>
> > >>>>>>>>>>>>>>>
> > >>>>>>>>>>>>>>> Crash:
> > >>>>>>>>>>>>>>>
> > >>>>>>>>>>>>>>> page:ffffea0001a6aab8 count:0 mapcount:0 mapping:0000000000000000 index:0x0
> > >>>>>>>>>>>>>>> flags: 0x100000000000000()
> > >>>>>>>>>>>>>>> raw: 0100000000000000 ffffea0001ad2c50 ffff88807fca49d0 0000000000000000
> > >>>>>>>>>>>>>>> raw: 0000000000000000 0000000000000000 00000000ffffffff
> > >>>>>>>>>>>>>>> page dumped because: VM_BUG_ON_PAGE(page_ref_count(page) == 0)
> > >>>>>>>>>>>>>>
> > >>>>>>>>>>>>>> I see what this is, I'll cut a fix for this tomorrow.
> > >>>>>>>>>>>>>
> > >>>>>>>>>>>>> Folded in a fix for this, it's in my current io_uring branch and my for-next
> > >>>>>>>>>>>>> branch.
> > >>>>>>>>>>>>
> > >>>>>>>>>>>> Hi Jens,
> > >>>>>>>>>>>>
> > >>>>>>>>>>>> I saw the following change was added:
> > >>>>>>>>>>>>
> > >>>>>>>>>>>> +	if (size == len) {
> > >>>>>>>>>>>> +		/*
> > >>>>>>>>>>>> +		 * For the normal O_DIRECT case, we could skip grabbing this
> > >>>>>>>>>>>> +		 * reference and then not have to put them again when IO
> > >>>>>>>>>>>> +		 * completes. But this breaks some in-kernel users, like
> > >>>>>>>>>>>> +		 * splicing to/from a loop device, where we release the pipe
> > >>>>>>>>>>>> +		 * pages unconditionally. If we can fix that case, we can
> > >>>>>>>>>>>> +		 * get rid of the get here and the need to call
> > >>>>>>>>>>>> +		 * bio_release_pages() at IO completion time.
> > >>>>>>>>>>>> +		 */
> > >>>>>>>>>>>> +		get_page(bv->bv_page);
> > >>>>>>>>>>>>
> > >>>>>>>>>>>> Now the 'bv' may point to more than one page, so the following may be
> > >>>>>>>>>>>> needed:
> > >>>>>>>>>>>>
> > >>>>>>>>>>>> 	int i;
> > >>>>>>>>>>>> 	struct bvec_iter_all iter_all;
> > >>>>>>>>>>>> 	struct bio_vec *tmp;
> > >>>>>>>>>>>>
> > >>>>>>>>>>>> 	mp_bvec_for_each_segment(tmp, bv, i, iter_all)
> > >>>>>>>>>>>> 		get_page(tmp->bv_page);
> > >>>>>>>>>>>
> > >>>>>>>>>>> I guess that would be the safest, even if we don't currently have more
> > >>>>>>>>>>> than one page in there. I'll fix it up.
> > >>>>>>>>>>
> > >>>>>>>>>> It is easy to see a multipage bvec from loop, :-)
> > >>>>>>>>>
> > >>>>>>>>> Speaking of this, I took a quick look at why we've now regressed a lot
> > >>>>>>>>> on IOPS perf with the multipage work. It looks like it's all related to
> > >>>>>>>>> the (much) fatter setup around iteration, which is related to this very
> > >>>>>>>>> topic too.
> > >>>>>>>>>
> > >>>>>>>>> Basically setup of things like bio_for_each_bvec() and indexing through
> > >>>>>>>>> nth_page() is MUCH slower than before.
> > >>>>>>>>
> > >>>>>>>> But bio_for_each_bvec() needn't nth_page(); only bio_for_each_segment()
> > >>>>>>>> needs that. However, bio_for_each_segment() isn't called from
> > >>>>>>>> blk_queue_split() and blk_rq_map_sg().
> > >>>>>>>>
> > >>>>>>>> One issue is that bio_for_each_bvec() still advances by page size
> > >>>>>>>> instead of bvec->len. I guess that is the problem; I will cook a patch
> > >>>>>>>> for your test.
> > >>>>>>>
> > >>>>>>> Probably won't make a difference for my test case...
> > >>>>>>>
> > >>>>>>>>> We need to do something about this, it's like tossing out months of
> > >>>>>>>>> optimizations.
> > >>>>>>>>
> > >>>>>>>> Some further optimization can be done, such as removing
> > >>>>>>>> biovec_phys_mergeable() from blk_bio_segment_split().
> > >>>>>>>
> > >>>>>>> I think we really need a fast path for <= PAGE_SIZE IOs, to the extent
> > >>>>>>> that it is possible. But iteration startup cost is a problem in a lot of
> > >>>>>>> spots, and a split fast path will only help a bit for that specific
> > >>>>>>> case.
> > >>>>>>>
> > >>>>>>> A 5% regression is HUGE. I know I've mentioned this before, I just want
> > >>>>>>> to really stress how big of a deal that is. It's enough to make me
> > >>>>>>> consider just reverting it again, which sucks, but I don't feel great
> > >>>>>>> shipping something that is known to be that much slower.
> > >>>>>>>
> > >>>>>>> Suggestions?
> > >>>>>>
> > >>>>>> You mentioned nth_page() costs much in bio_for_each_bvec(), but it
> > >>>>>> shouldn't call into nth_page(). I will look into it first.
> > >>>>>
> > >>>>> I'll check on the test box tomorrow, I lost connectivity before. I'll
> > >>>>> double check in the morning.
> > >>>>>
> > >>>>> I'd focus on the blk_rq_map_sg() path, since that's the biggest cycle
> > >>>>> consumer.
> > >>>>
> > >>>> Hi Jens,
> > >>>>
> > >>>> Could you test the following patch, which may improve the 4k randio
> > >>>> test case?
> > >>>
> > >>> A bit, it's up 1% with this patch. I'm going to try without the
> > >>> get_page/put_page that we had earlier, to see where we are relative to
> > >>> the old baseline.
> > >>
> > >> OK, today I will test io_uring over null_blk on a real machine and see
> > >> if something can be improved.
> > >
> > > For reference, I'm running the default t/io_uring from fio, which is
> > > QD=128, fixed files/buffers, and polled. Running it on two devices to
> > > max out the CPU core:
> > >
> > > sudo taskset -c 0 t/io_uring /dev/nvme1n1 /dev/nvme5n1
> >
> > Forgot to mention, this is loading nvme with 12 poll queues, which is of
> > course important to get good performance on this test case.
>
> Btw, is your nvme device SGL capable? There is some low hanging fruit
> in that IFF a device has SGL support we can basically dumb down
> blk_mq_map_sg to never split in this case ever, because we don't have
> any segment size limits.

Indeed. In the SGL case, a big sg list may not be needed, and blk_rq_map_sg()
can be skipped if a proper DMA mapping interface returns the dma address for
each segment. That can be one big improvement.
Thanks,
Ming