Subject: Re: [PATCH 11/19] block: implement bio helper to add iter bvec pages to bio
From: Jens Axboe
To: Ming Lei
Cc: Ming Lei, Eric Biggers, "open list:AIO", linux-block,
 linux-api@vger.kernel.org, Christoph Hellwig, Jeff Moyer, Avi Kivity,
 jannh@google.com, Al Viro
Date: Tue, 26 Feb 2019 20:43:31 -0700

On 2/26/19 8:37 PM, Jens Axboe wrote:
> On 2/26/19 8:09 PM, Ming Lei wrote:
>> On Tue, Feb 26, 2019 at 07:43:32PM -0700, Jens Axboe wrote:
>>> On 2/26/19 7:37 PM, Ming Lei wrote:
>>>> On Tue, Feb 26, 2019 at 07:28:54PM -0700, Jens Axboe wrote:
>>>>> On 2/26/19 7:21 PM, Ming Lei wrote:
>>>>>> On Tue, Feb 26, 2019 at 06:57:16PM -0700, Jens Axboe wrote:
>>>>>>> On 2/26/19 6:53 PM, Ming Lei wrote:
>>>>>>>> On Tue, Feb 26, 2019 at 06:47:54PM -0700, Jens Axboe wrote:
>>>>>>>>> On 2/26/19 6:21 PM, Ming Lei wrote:
>>>>>>>>>> On Tue, Feb 26, 2019 at 11:56 PM Jens Axboe wrote:
>>>>>>>>>>>
>>>>>>>>>>> On 2/25/19 9:34 PM, Jens Axboe wrote:
>>>>>>>>>>>> On 2/25/19 8:46 PM, Eric Biggers wrote:
>>>>>>>>>>>>> Hi Jens,
>>>>>>>>>>>>>
>>>>>>>>>>>>> On Thu, Feb 21, 2019 at 10:45:27AM -0700, Jens Axboe wrote:
>>>>>>>>>>>>>> On 2/20/19 3:58 PM, Ming Lei wrote:
>>>>>>>>>>>>>>> On Mon, Feb 11, 2019 at 12:00:41PM -0700, Jens Axboe wrote:
>>>>>>>>>>>>>>>> For an ITER_BVEC, we can just iterate the iov and add the pages
>>>>>>>>>>>>>>>> to the bio directly. This requires that the caller doesn't release
>>>>>>>>>>>>>>>> the pages on IO completion; we add a BIO_NO_PAGE_REF flag for that.
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> The current two callers of bio_iov_iter_get_pages() are updated to
>>>>>>>>>>>>>>>> check if they need to release pages on completion. This makes them
>>>>>>>>>>>>>>>> work with bvecs that contain kernel-mapped pages already.
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> Reviewed-by: Hannes Reinecke
>>>>>>>>>>>>>>>> Reviewed-by: Christoph Hellwig
>>>>>>>>>>>>>>>> Signed-off-by: Jens Axboe
>>>>>>>>>>>>>>>> ---
>>>>>>>>>>>>>>>>  block/bio.c               | 59 ++++++++++++++++++++++++++++++++-------
>>>>>>>>>>>>>>>>  fs/block_dev.c            |  5 ++--
>>>>>>>>>>>>>>>>  fs/iomap.c                |  5 ++--
>>>>>>>>>>>>>>>>  include/linux/blk_types.h |  1 +
>>>>>>>>>>>>>>>>  4 files changed, 56 insertions(+), 14 deletions(-)
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> diff --git a/block/bio.c b/block/bio.c
>>>>>>>>>>>>>>>> index 4db1008309ed..330df572cfb8 100644
>>>>>>>>>>>>>>>> --- a/block/bio.c
>>>>>>>>>>>>>>>> +++ b/block/bio.c
>>>>>>>>>>>>>>>> @@ -828,6 +828,23 @@ int bio_add_page(struct bio *bio, struct page *page,
>>>>>>>>>>>>>>>>  }
>>>>>>>>>>>>>>>>  EXPORT_SYMBOL(bio_add_page);
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> +static int __bio_iov_bvec_add_pages(struct bio *bio, struct iov_iter *iter)
>>>>>>>>>>>>>>>> +{
>>>>>>>>>>>>>>>> +	const struct bio_vec *bv = iter->bvec;
>>>>>>>>>>>>>>>> +	unsigned int len;
>>>>>>>>>>>>>>>> +	size_t size;
>>>>>>>>>>>>>>>> +
>>>>>>>>>>>>>>>> +	len = min_t(size_t, bv->bv_len, iter->count);
>>>>>>>>>>>>>>>> +	size = bio_add_page(bio, bv->bv_page, len,
>>>>>>>>>>>>>>>> +			bv->bv_offset + iter->iov_offset);
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> iter->iov_offset needs to be subtracted from 'len'; it looks like
>>>>>>>>>>>>>>> the following delta change[1] is required, otherwise memory corruption
>>>>>>>>>>>>>>> can be observed when running xfstests over loop/dio.
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> Thanks, I folded this in.
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> --
>>>>>>>>>>>>>> Jens Axboe
>>>>>>>>>>>>>>
>>>>>>>>>>>>>
>>>>>>>>>>>>> syzkaller started hitting a crash on linux-next starting with this commit, and
>>>>>>>>>>>>> it still occurs even with your latest version that has Ming's fix folded in.
>>>>>>>>>>>>> Specifically, commit a566653ab5ab80a from your io_uring branch with commit date
>>>>>>>>>>>>> Sun Feb 24 08:20:53 2019 -0700.
>>>>>>>>>>>>>
>>>>>>>>>>>>> Reproducer:
>>>>>>>>>>>>>
>>>>>>>>>>>>> #define _GNU_SOURCE
>>>>>>>>>>>>> #include <fcntl.h>
>>>>>>>>>>>>> #include <linux/loop.h>
>>>>>>>>>>>>> #include <sys/ioctl.h>
>>>>>>>>>>>>> #include <sys/sendfile.h>
>>>>>>>>>>>>> #include <sys/syscall.h>
>>>>>>>>>>>>> #include <unistd.h>
>>>>>>>>>>>>>
>>>>>>>>>>>>> int main(void)
>>>>>>>>>>>>> {
>>>>>>>>>>>>> 	int memfd, loopfd;
>>>>>>>>>>>>>
>>>>>>>>>>>>> 	memfd = syscall(__NR_memfd_create, "foo", 0);
>>>>>>>>>>>>>
>>>>>>>>>>>>> 	pwrite(memfd, "\xa8", 1, 4096);
>>>>>>>>>>>>>
>>>>>>>>>>>>> 	loopfd = open("/dev/loop0", O_RDWR|O_DIRECT);
>>>>>>>>>>>>>
>>>>>>>>>>>>> 	ioctl(loopfd, LOOP_SET_FD, memfd);
>>>>>>>>>>>>>
>>>>>>>>>>>>> 	sendfile(loopfd, loopfd, NULL, 1000000);
>>>>>>>>>>>>> }
>>>>>>>>>>>>>
>>>>>>>>>>>>>
>>>>>>>>>>>>> Crash:
>>>>>>>>>>>>>
>>>>>>>>>>>>> page:ffffea0001a6aab8 count:0 mapcount:0 mapping:0000000000000000 index:0x0
>>>>>>>>>>>>> flags: 0x100000000000000()
>>>>>>>>>>>>> raw: 0100000000000000 ffffea0001ad2c50 ffff88807fca49d0 0000000000000000
>>>>>>>>>>>>> raw: 0000000000000000 0000000000000000 00000000ffffffff
>>>>>>>>>>>>> page dumped because: VM_BUG_ON_PAGE(page_ref_count(page) == 0)
>>>>>>>>>>>>
>>>>>>>>>>>> I see what this is, I'll cut a fix for this tomorrow.
>>>>>>>>>>>
>>>>>>>>>>> Folded in a fix for this, it's in my current io_uring branch and my for-next
>>>>>>>>>>> branch.
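[Ming's delta change[1] is not quoted above, but a sketch of the helper
with the offset fix applied could look like the following. This is an
illustration assuming only the 'len' computation and the error handling
change; it is not necessarily the exact code that was folded in.]

static int __bio_iov_bvec_add_pages(struct bio *bio, struct iov_iter *iter)
{
	const struct bio_vec *bv = iter->bvec;
	unsigned int len;
	size_t size;

	/*
	 * iter->iov_offset is an offset into the first bvec, so it has to
	 * be subtracted from the usable length as well as added to the
	 * page offset; otherwise bytes past the end of the bvec get added
	 * to the bio, which is the corruption seen in xfstests over
	 * loop/dio.
	 */
	len = min_t(size_t, bv->bv_len - iter->iov_offset, iter->count);
	size = bio_add_page(bio, bv->bv_page, len,
				bv->bv_offset + iter->iov_offset);
	if (size != len)
		return -EINVAL;

	iov_iter_advance(iter, size);
	return 0;
}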
>>>>>>>>>>
>>>>>>>>>> Hi Jens,
>>>>>>>>>>
>>>>>>>>>> I saw the following change is added:
>>>>>>>>>>
>>>>>>>>>> +	if (size == len) {
>>>>>>>>>> +		/*
>>>>>>>>>> +		 * For the normal O_DIRECT case, we could skip grabbing this
>>>>>>>>>> +		 * reference and then not have to put them again when IO
>>>>>>>>>> +		 * completes. But this breaks some in-kernel users, like
>>>>>>>>>> +		 * splicing to/from a loop device, where we release the pipe
>>>>>>>>>> +		 * pages unconditionally. If we can fix that case, we can
>>>>>>>>>> +		 * get rid of the get here and the need to call
>>>>>>>>>> +		 * bio_release_pages() at IO completion time.
>>>>>>>>>> +		 */
>>>>>>>>>> +		get_page(bv->bv_page);
>>>>>>>>>>
>>>>>>>>>> Now the 'bv' may point to more than one page, so the following may be
>>>>>>>>>> needed:
>>>>>>>>>>
>>>>>>>>>> 	int i;
>>>>>>>>>> 	struct bvec_iter_all iter_all;
>>>>>>>>>> 	struct bio_vec *tmp;
>>>>>>>>>>
>>>>>>>>>> 	mp_bvec_for_each_segment(tmp, bv, i, iter_all)
>>>>>>>>>> 		get_page(tmp->bv_page);
>>>>>>>>>
>>>>>>>>> I guess that would be the safest, even if we don't currently have more
>>>>>>>>> than one page in there. I'll fix it up.
>>>>>>>>
>>>>>>>> It is easy to see a multipage bvec from loop. :-)
>>>>>>>
>>>>>>> Speaking of this, I took a quick look at why we've now regressed a lot
>>>>>>> on IOPS perf with the multipage work. It looks like it's all related to
>>>>>>> the (much) fatter setup around iteration, which is related to this very
>>>>>>> topic too.
>>>>>>>
>>>>>>> Basically, setup of things like bio_for_each_bvec() and indexing through
>>>>>>> nth_page() is MUCH slower than before.
>>>>>>
>>>>>> But bio_for_each_bvec() shouldn't need nth_page(); only
>>>>>> bio_for_each_segment() needs that, and bio_for_each_segment() isn't
>>>>>> called from blk_queue_split() or blk_rq_map_sg().
>>>>>>
>>>>>> One issue is that bio_for_each_bvec() still advances by page size
>>>>>> instead of by the full bvec length; I guess that is the problem. I
>>>>>> will cook a patch for your test.
>>>>>
>>>>> Probably won't make a difference for my test case...
>>>>>
>>>>>>> We need to do something about this, it's like tossing out months of
>>>>>>> optimizations.
>>>>>>
>>>>>> Some further optimization can be done, such as removing
>>>>>> biovec_phys_mergeable() from blk_bio_segment_split().
>>>>>
>>>>> I think we really need a fast path for <= PAGE_SIZE IOs, to the extent
>>>>> that it is possible. But iteration startup cost is a problem in a lot of
>>>>> spots, and a split fast path will only help a bit for that specific
>>>>> case.
>>>>>
>>>>> A 5% regression is HUGE. I know I've mentioned this before, I just want
>>>>> to really stress how big of a deal that is. It's enough to make me
>>>>> consider just reverting it again, which sucks, but I don't feel great
>>>>> shipping something that is known to be that much slower.
>>>>>
>>>>> Suggestions?
>>>>
>>>> You mentioned that nth_page() costs a lot in bio_for_each_bvec(), but
>>>> that path shouldn't call into nth_page(). I will look into it first.
>>>
>>> I'll check on the test box tomorrow, I lost connectivity before. I'll
>>> double check in the morning.
>>>
>>> I'd focus on the blk_rq_map_sg() path, since that's the biggest cycle
>>> consumer.
>>
>> Hi Jens,
>>
>> Could you test the following patch, which may improve the 4k randio
>> test case?
>
> A bit, it's up 1% with this patch. I'm going to try without the
> get_page/put_page that we had earlier, to see where we are in regards to
> the old baseline.
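[To make the multi-page concern above concrete: with multipage bvecs, a
single bio_vec can cover several pages, and each of those pages needs its
own reference. A simplified, self-contained sketch of what the
mp_bvec_for_each_segment() walk boils down to; a model of the idea, not
the kernel macro itself:]

/* Grab a reference on every page covered by a (possibly multi-page) bvec. */
static void bvec_get_all_pages(const struct bio_vec *bv)
{
	unsigned int done = 0;

	while (done < bv->bv_len) {
		unsigned int off = bv->bv_offset + done;
		/* Bytes left in this segment, capped at the page boundary. */
		unsigned int seg = min_t(unsigned int, bv->bv_len - done,
					 PAGE_SIZE - (off & (PAGE_SIZE - 1)));

		/* One get_page() per distinct page the bvec touches. */
		get_page(nth_page(bv->bv_page, off / PAGE_SIZE));
		done += seg;
	}
}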
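[On the advance-granularity point: bio_for_each_bvec() is meant to hand
back whole multi-page bvecs, so the iterator should consume bv_len bytes
per step rather than PAGE_SIZE. A sketch of that advance logic against
the struct bvec_iter fields; a simplified model, not Ming's actual patch:]

/* Consume 'bytes' from the iterator, stepping over whole bvecs. */
static void mp_bvec_iter_advance(const struct bio_vec *bvl,
				 struct bvec_iter *iter, unsigned int bytes)
{
	iter->bi_size -= bytes;

	while (bytes) {
		const struct bio_vec *bv = &bvl[iter->bi_idx];
		unsigned int len = min(bytes, bv->bv_len - iter->bi_bvec_done);

		bytes -= len;
		iter->bi_bvec_done += len;	/* progress within this bvec */
		if (iter->bi_bvec_done == bv->bv_len) {
			iter->bi_bvec_done = 0;	/* move on to the next bvec */
			iter->bi_idx++;
		}
	}
}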
~1548K now, down from 1615-1620K, which matches the numbers. That's now
down roughly 4%, instead of the original 5%, with this recent patch being
the source of that reclaimed 1%. So that's a good start, but there is
still 4% to go.

--
Jens Axboe