Subject: Re: [PATCH 11/19] block: implement bio helper to add iter bvec pages to bio
From: Jens Axboe
To: Ming Lei
Cc: Ming Lei, Eric Biggers, "open list:AIO", linux-block,
 linux-api@vger.kernel.org, Christoph Hellwig, Jeff Moyer, Avi Kivity,
 jannh@google.com, Al Viro
Date: Tue, 26 Feb 2019 19:43:32 -0700
Message-ID: <9ce37f86-dd8b-ef51-7507-d457585a892a@kernel.dk>
In-Reply-To: <20190227023719.GF16802@ming.t460p>
References: <20190226034613.GA676@sol.localdomain>
 <1652577e-787b-638e-625d-c200fb144a9d@kernel.dk>
 <09820845-07cb-5153-e1c5-59ed185db26f@kernel.dk>
 <20190227015336.GD16802@ming.t460p>
 <20190227022144.GE16802@ming.t460p>
 <20190227023719.GF16802@ming.t460p>
List-ID: linux-block@vger.kernel.org

On 2/26/19 7:37 PM, Ming Lei wrote:
> On Tue, Feb 26, 2019 at 07:28:54PM -0700, Jens Axboe wrote:
>> On 2/26/19 7:21 PM, Ming Lei wrote:
>>> On Tue, Feb 26, 2019 at 06:57:16PM -0700, Jens Axboe wrote:
>>>> On 2/26/19 6:53 PM, Ming Lei wrote:
>>>>> On Tue, Feb 26, 2019 at 06:47:54PM -0700, Jens Axboe wrote:
>>>>>> On 2/26/19 6:21 PM, Ming Lei wrote:
>>>>>>> On Tue, Feb 26, 2019 at 11:56 PM Jens Axboe wrote:
>>>>>>>>
>>>>>>>> On 2/25/19 9:34 PM, Jens Axboe wrote:
>>>>>>>>> On 2/25/19 8:46 PM, Eric Biggers wrote:
>>>>>>>>>> Hi Jens,
>>>>>>>>>>
>>>>>>>>>> On Thu, Feb 21, 2019 at 10:45:27AM -0700, Jens Axboe wrote:
>>>>>>>>>>> On 2/20/19 3:58 PM, Ming Lei wrote:
>>>>>>>>>>>> On Mon, Feb 11, 2019 at 12:00:41PM -0700, Jens Axboe wrote:
>>>>>>>>>>>>> For an ITER_BVEC, we can just iterate the iov and add the pages
>>>>>>>>>>>>> to the bio directly. This requires that the caller doesn't release
>>>>>>>>>>>>> the pages on IO completion; we add a BIO_NO_PAGE_REF flag for that.
>>>>>>>>>>>>>
>>>>>>>>>>>>> The current two callers of bio_iov_iter_get_pages() are updated to
>>>>>>>>>>>>> check if they need to release pages on completion. This makes them
>>>>>>>>>>>>> work with bvecs that contain kernel mapped pages already.
>>>>>>>>>>>>>
>>>>>>>>>>>>> Reviewed-by: Hannes Reinecke
>>>>>>>>>>>>> Reviewed-by: Christoph Hellwig
>>>>>>>>>>>>> Signed-off-by: Jens Axboe
>>>>>>>>>>>>> ---
>>>>>>>>>>>>>  block/bio.c               | 59 ++++++++++++++++++++++++++++++++-------
>>>>>>>>>>>>>  fs/block_dev.c            |  5 ++--
>>>>>>>>>>>>>  fs/iomap.c                |  5 ++--
>>>>>>>>>>>>>  include/linux/blk_types.h |  1 +
>>>>>>>>>>>>>  4 files changed, 56 insertions(+), 14 deletions(-)
>>>>>>>>>>>>>
>>>>>>>>>>>>> diff --git a/block/bio.c b/block/bio.c
>>>>>>>>>>>>> index 4db1008309ed..330df572cfb8 100644
>>>>>>>>>>>>> --- a/block/bio.c
>>>>>>>>>>>>> +++ b/block/bio.c
>>>>>>>>>>>>> @@ -828,6 +828,23 @@ int bio_add_page(struct bio *bio, struct page *page,
>>>>>>>>>>>>>  }
>>>>>>>>>>>>>  EXPORT_SYMBOL(bio_add_page);
>>>>>>>>>>>>>
>>>>>>>>>>>>> +static int __bio_iov_bvec_add_pages(struct bio *bio, struct iov_iter *iter)
>>>>>>>>>>>>> +{
>>>>>>>>>>>>> +	const struct bio_vec *bv = iter->bvec;
>>>>>>>>>>>>> +	unsigned int len;
>>>>>>>>>>>>> +	size_t size;
>>>>>>>>>>>>> +
>>>>>>>>>>>>> +	len = min_t(size_t, bv->bv_len, iter->count);
>>>>>>>>>>>>> +	size = bio_add_page(bio, bv->bv_page, len,
>>>>>>>>>>>>> +				bv->bv_offset + iter->iov_offset);
>>>>>>>>>>>>
>>>>>>>>>>>> iter->iov_offset needs to be subtracted from 'len'; it looks like
>>>>>>>>>>>> the following delta change[1] is required, otherwise memory corruption
>>>>>>>>>>>> can be observed when running xfstests over loop/dio.
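(For concreteness, the kind of clamping Ming describes would look roughly
like the sketch below. This is illustrative only, reusing the variable names
from the hunk above; it is not the actual delta from [1]. Without it, 'len'
can extend past the end of the current bvec once iov_offset has already
consumed part of it.)

	/*
	 * Sketch: the length still available in this bvec is bv_len minus
	 * what iov_offset has already consumed, capped by the iter count.
	 */
	len = min_t(size_t, bv->bv_len - iter->iov_offset, iter->count);
	size = bio_add_page(bio, bv->bv_page, len,
				bv->bv_offset + iter->iov_offset);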
>>>>>>>>>>>
>>>>>>>>>>> Thanks, I folded this in.
>>>>>>>>>>>
>>>>>>>>>>> --
>>>>>>>>>>> Jens Axboe
>>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> syzkaller started hitting a crash on linux-next starting with this commit, and
>>>>>>>>>> it still occurs even with your latest version that has Ming's fix folded in.
>>>>>>>>>> Specifically, commit a566653ab5ab80a from your io_uring branch, with commit date
>>>>>>>>>> Sun Feb 24 08:20:53 2019 -0700.
>>>>>>>>>>
>>>>>>>>>> Reproducer:
>>>>>>>>>>
>>>>>>>>>> #define _GNU_SOURCE
>>>>>>>>>> #include <fcntl.h>
>>>>>>>>>> #include <linux/loop.h>
>>>>>>>>>> #include <sys/ioctl.h>
>>>>>>>>>> #include <sys/sendfile.h>
>>>>>>>>>> #include <sys/syscall.h>
>>>>>>>>>> #include <unistd.h>
>>>>>>>>>>
>>>>>>>>>> int main(void)
>>>>>>>>>> {
>>>>>>>>>> 	int memfd, loopfd;
>>>>>>>>>>
>>>>>>>>>> 	memfd = syscall(__NR_memfd_create, "foo", 0);
>>>>>>>>>>
>>>>>>>>>> 	pwrite(memfd, "\xa8", 1, 4096);
>>>>>>>>>>
>>>>>>>>>> 	loopfd = open("/dev/loop0", O_RDWR|O_DIRECT);
>>>>>>>>>>
>>>>>>>>>> 	ioctl(loopfd, LOOP_SET_FD, memfd);
>>>>>>>>>>
>>>>>>>>>> 	sendfile(loopfd, loopfd, NULL, 1000000);
>>>>>>>>>> }
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> Crash:
>>>>>>>>>>
>>>>>>>>>> page:ffffea0001a6aab8 count:0 mapcount:0 mapping:0000000000000000 index:0x0
>>>>>>>>>> flags: 0x100000000000000()
>>>>>>>>>> raw: 0100000000000000 ffffea0001ad2c50 ffff88807fca49d0 0000000000000000
>>>>>>>>>> raw: 0000000000000000 0000000000000000 00000000ffffffff
>>>>>>>>>> page dumped because: VM_BUG_ON_PAGE(page_ref_count(page) == 0)
>>>>>>>>>
>>>>>>>>> I see what this is, I'll cut a fix for this tomorrow.
>>>>>>>>
>>>>>>>> Folded in a fix for this; it's in my current io_uring branch and my for-next
>>>>>>>> branch.
>>>>>>>
>>>>>>> Hi Jens,
>>>>>>>
>>>>>>> I saw the following change was added:
>>>>>>>
>>>>>>> +	if (size == len) {
>>>>>>> +		/*
>>>>>>> +		 * For the normal O_DIRECT case, we could skip grabbing this
>>>>>>> +		 * reference and then not have to put them again when IO
>>>>>>> +		 * completes. But this breaks some in-kernel users, like
>>>>>>> +		 * splicing to/from a loop device, where we release the pipe
>>>>>>> +		 * pages unconditionally. If we can fix that case, we can
>>>>>>> +		 * get rid of the get here and the need to call
>>>>>>> +		 * bio_release_pages() at IO completion time.
>>>>>>> +		 */
>>>>>>> +		get_page(bv->bv_page);
>>>>>>>
>>>>>>> Now the 'bv' may point to more than one page, so the following may be
>>>>>>> needed:
>>>>>>>
>>>>>>> 	int i;
>>>>>>> 	struct bvec_iter_all iter_all;
>>>>>>> 	struct bio_vec *tmp;
>>>>>>>
>>>>>>> 	mp_bvec_for_each_segment(tmp, bv, i, iter_all)
>>>>>>> 		get_page(tmp->bv_page);
>>>>>>
>>>>>> I guess that would be the safest, even if we don't currently have more
>>>>>> than one page in there. I'll fix it up.
>>>>>
>>>>> It is easy to see a multipage bvec from loop, :-)
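(Combining the two pieces above: with Ming's loop folded in, the reference
grab in __bio_iov_bvec_add_pages() would look roughly like this sketch. It
assumes the mp_bvec_for_each_segment() helper from the multipage-bvec series,
as used in Ming's snippet, and takes a reference on every page backing the
bvec instead of only the first.)

	if (size == len) {
		int i;
		struct bvec_iter_all iter_all;
		struct bio_vec *tmp;

		/*
		 * A multipage bvec (e.g. one built by loop) can span
		 * several pages; grab a reference on each of them, not
		 * just on bv->bv_page.
		 */
		mp_bvec_for_each_segment(tmp, bv, i, iter_all)
			get_page(tmp->bv_page);
	}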
>>>>
>>>> Speaking of this, I took a quick look at why we've now regressed a lot
>>>> on IOPS perf with the multipage work. It looks like it's all related to
>>>> the (much) fatter setup around iteration, which is related to this very
>>>> topic too.
>>>>
>>>> Basically, setup of things like bio_for_each_bvec() and indexing through
>>>> nth_page() is MUCH slower than before.
>>>
>>> But bio_for_each_bvec() doesn't need nth_page(); only bio_for_each_segment()
>>> needs that. However, bio_for_each_segment() isn't called from
>>> blk_queue_split() and blk_rq_map_sg().
>>>
>>> One issue is that bio_for_each_bvec() still advances by page size
>>> instead of bvec->bv_len. I guess that is the problem; I will cook a patch
>>> for your test.
>>
>> Probably won't make a difference for my test case...
>>
>>>> We need to do something about this, it's like tossing out months of
>>>> optimizations.
>>>
>>> Some follow-up optimization can be done, such as removing
>>> biovec_phys_mergeable() from blk_bio_segment_split().
>>
>> I think we really need a fast path for <= PAGE_SIZE IOs, to the extent
>> that it is possible. But iteration startup cost is a problem in a lot of
>> spots, and a split fast path will only help a bit for that specific
>> case.
>>
>> A 5% regression is HUGE. I know I've mentioned this before, I just want
>> to really stress how big of a deal that is. It's enough to make me
>> consider just reverting it again, which sucks, but I don't feel great
>> shipping something that is known to be that much slower.
>>
>> Suggestions?
>
> You mentioned that nth_page() costs much in bio_for_each_bvec(), but that
> path shouldn't call into nth_page() at all. I will look into it first.

I'll check on the test box tomorrow; I lost connectivity earlier, so I'll
double-check in the morning. I'd focus on the blk_rq_map_sg() path, since
that's the biggest cycle consumer.

--
Jens Axboe
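(As a footnote on the iteration-granularity point above, the contrast Ming
draws is roughly the sketch below: bio_for_each_segment() yields single-page
segments, which is where per-page indexing such as nth_page() can come in,
while bio_for_each_bvec() is meant to yield whole multipage bvecs and advance
by bv_len rather than a page at a time. handle_page() and handle_bvec() are
hypothetical stand-ins for whatever the caller does per element.)

	struct bio_vec bv;
	struct bvec_iter iter;

	/* single-page granularity: one iteration per page-sized segment */
	bio_for_each_segment(bv, bio, iter)
		handle_page(bv.bv_page, bv.bv_offset, bv.bv_len);

	/*
	 * multipage granularity: one iteration per bvec. The iterator
	 * should advance by bv.bv_len here, not by PAGE_SIZE; advancing
	 * a page at a time is what Ming suspects is costing cycles.
	 */
	bio_for_each_bvec(bv, bio, iter)
		handle_bvec(&bv);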