From: Jens Axboe <axboe@kernel.dk>
To: Ming Lei <ming.lei@redhat.com>
Cc: Ming Lei <tom.leiming@gmail.com>,
	Eric Biggers <ebiggers@kernel.org>,
	"open list:AIO" <linux-aio@kvack.org>,
	linux-block <linux-block@vger.kernel.org>,
	linux-api@vger.kernel.org, Christoph Hellwig <hch@lst.de>,
	Jeff Moyer <jmoyer@redhat.com>, Avi Kivity <avi@scylladb.com>,
	jannh@google.com, Al Viro <viro@zeniv.linux.org.uk>
Subject: Re: [PATCH 11/19] block: implement bio helper to add iter bvec pages to bio
Date: Tue, 26 Feb 2019 19:43:32 -0700
Message-ID: <9ce37f86-dd8b-ef51-7507-d457585a892a@kernel.dk>
In-Reply-To: <20190227023719.GF16802@ming.t460p>

On 2/26/19 7:37 PM, Ming Lei wrote:
> On Tue, Feb 26, 2019 at 07:28:54PM -0700, Jens Axboe wrote:
>> On 2/26/19 7:21 PM, Ming Lei wrote:
>>> On Tue, Feb 26, 2019 at 06:57:16PM -0700, Jens Axboe wrote:
>>>> On 2/26/19 6:53 PM, Ming Lei wrote:
>>>>> On Tue, Feb 26, 2019 at 06:47:54PM -0700, Jens Axboe wrote:
>>>>>> On 2/26/19 6:21 PM, Ming Lei wrote:
>>>>>>> On Tue, Feb 26, 2019 at 11:56 PM Jens Axboe <axboe@kernel.dk> wrote:
>>>>>>>>
>>>>>>>> On 2/25/19 9:34 PM, Jens Axboe wrote:
>>>>>>>>> On 2/25/19 8:46 PM, Eric Biggers wrote:
>>>>>>>>>> Hi Jens,
>>>>>>>>>>
>>>>>>>>>> On Thu, Feb 21, 2019 at 10:45:27AM -0700, Jens Axboe wrote:
>>>>>>>>>>> On 2/20/19 3:58 PM, Ming Lei wrote:
>>>>>>>>>>>> On Mon, Feb 11, 2019 at 12:00:41PM -0700, Jens Axboe wrote:
>>>>>>>>>>>>> For an ITER_BVEC, we can just iterate the iov and add the pages
>>>>>>>>>>>>> to the bio directly. This requires that the caller doesn't release
>>>>>>>>>>>>> the pages on IO completion; we add a BIO_NO_PAGE_REF flag for that.
>>>>>>>>>>>>>
>>>>>>>>>>>>> The current two callers of bio_iov_iter_get_pages() are updated to
>>>>>>>>>>>>> check if they need to release pages on completion. This makes them
>>>>>>>>>>>>> work with bvecs that contain kernel mapped pages already.
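>>>>>>>>>>>>>
>>>>>>>>>>>>> At IO completion the check then looks, roughly, like:
>>>>>>>>>>>>>
>>>>>>>>>>>>> 	/* only drop page refs if this bio actually took them */
>>>>>>>>>>>>> 	if (!bio_flagged(bio, BIO_NO_PAGE_REF))
>>>>>>>>>>>>> 		bio_release_pages(bio);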
>>>>>>>>>>>>>
>>>>>>>>>>>>> Reviewed-by: Hannes Reinecke <hare@suse.com>
>>>>>>>>>>>>> Reviewed-by: Christoph Hellwig <hch@lst.de>
>>>>>>>>>>>>> Signed-off-by: Jens Axboe <axboe@kernel.dk>
>>>>>>>>>>>>> ---
>>>>>>>>>>>>>  block/bio.c               | 59 ++++++++++++++++++++++++++++++++-------
>>>>>>>>>>>>>  fs/block_dev.c            |  5 ++--
>>>>>>>>>>>>>  fs/iomap.c                |  5 ++--
>>>>>>>>>>>>>  include/linux/blk_types.h |  1 +
>>>>>>>>>>>>>  4 files changed, 56 insertions(+), 14 deletions(-)
>>>>>>>>>>>>>
>>>>>>>>>>>>> diff --git a/block/bio.c b/block/bio.c
>>>>>>>>>>>>> index 4db1008309ed..330df572cfb8 100644
>>>>>>>>>>>>> --- a/block/bio.c
>>>>>>>>>>>>> +++ b/block/bio.c
>>>>>>>>>>>>> @@ -828,6 +828,23 @@ int bio_add_page(struct bio *bio, struct page *page,
>>>>>>>>>>>>>  }
>>>>>>>>>>>>>  EXPORT_SYMBOL(bio_add_page);
>>>>>>>>>>>>>
>>>>>>>>>>>>> +static int __bio_iov_bvec_add_pages(struct bio *bio, struct iov_iter *iter)
>>>>>>>>>>>>> +{
>>>>>>>>>>>>> +	const struct bio_vec *bv = iter->bvec;
>>>>>>>>>>>>> +	unsigned int len;
>>>>>>>>>>>>> +	size_t size;
>>>>>>>>>>>>> +
>>>>>>>>>>>>> +	len = min_t(size_t, bv->bv_len, iter->count);
>>>>>>>>>>>>> +	size = bio_add_page(bio, bv->bv_page, len,
>>>>>>>>>>>>> +			    bv->bv_offset + iter->iov_offset);
>>>>>>>>>>>>
>>>>>>>>>>>> iter->iov_offset needs to be subtracted from 'len'; it looks like
>>>>>>>>>>>> the following delta change[1] is required, otherwise memory corruption
>>>>>>>>>>>> can be observed when running xfstests over loop/dio.
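>>>>>>>>>>>>
>>>>>>>>>>>> Roughly, the idea is the following (a sketch only, not the exact
>>>>>>>>>>>> delta in [1]):
>>>>>>>>>>>>
>>>>>>>>>>>> 	len = min_t(size_t, bv->bv_len - iter->iov_offset, iter->count);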
>>>>>>>>>>>
>>>>>>>>>>> Thanks, I folded this in.
>>>>>>>>>>>
>>>>>>>>>>> --
>>>>>>>>>>> Jens Axboe
>>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> syzkaller started hitting a crash on linux-next starting with this commit, and
>>>>>>>>>> it still occurs even with your latest version that has Ming's fix folded in.
>>>>>>>>>> Specifically, commit a566653ab5ab80a from your io_uring branch with commit date
>>>>>>>>>> Sun Feb 24 08:20:53 2019 -0700.
>>>>>>>>>>
>>>>>>>>>> Reproducer:
>>>>>>>>>>
>>>>>>>>>> #define _GNU_SOURCE
>>>>>>>>>> #include <fcntl.h>
>>>>>>>>>> #include <linux/loop.h>
>>>>>>>>>> #include <sys/ioctl.h>
>>>>>>>>>> #include <sys/sendfile.h>
>>>>>>>>>> #include <sys/syscall.h>
>>>>>>>>>> #include <unistd.h>
>>>>>>>>>>
>>>>>>>>>> int main(void)
>>>>>>>>>> {
>>>>>>>>>>         int memfd, loopfd;
>>>>>>>>>>
>>>>>>>>>>         /* in-memory file to use as the loop backing store */
>>>>>>>>>>         memfd = syscall(__NR_memfd_create, "foo", 0);
>>>>>>>>>>
>>>>>>>>>>         /* write one byte at offset 4096 so the file spans two pages */
>>>>>>>>>>         pwrite(memfd, "\xa8", 1, 4096);
>>>>>>>>>>
>>>>>>>>>>         loopfd = open("/dev/loop0", O_RDWR|O_DIRECT);
>>>>>>>>>>
>>>>>>>>>>         /* bind the memfd as loop0's backing file */
>>>>>>>>>>         ioctl(loopfd, LOOP_SET_FD, memfd);
>>>>>>>>>>
>>>>>>>>>>         /* copy loop0 onto itself; sendfile() splices through a pipe,
>>>>>>>>>>            so the loop writes see ITER_BVEC pipe pages */
>>>>>>>>>>         sendfile(loopfd, loopfd, NULL, 1000000);
>>>>>>>>>> }
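>>>>>>>>>>
>>>>>>>>>> (Assuming /dev/loop0 exists and isn't already bound to a backing
>>>>>>>>>> file, this builds with plain gcc and typically needs to run as root.)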
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> Crash:
>>>>>>>>>>
>>>>>>>>>> page:ffffea0001a6aab8 count:0 mapcount:0 mapping:0000000000000000 index:0x0
>>>>>>>>>> flags: 0x100000000000000()
>>>>>>>>>> raw: 0100000000000000 ffffea0001ad2c50 ffff88807fca49d0 0000000000000000
>>>>>>>>>> raw: 0000000000000000 0000000000000000 00000000ffffffff
>>>>>>>>>> page dumped because: VM_BUG_ON_PAGE(page_ref_count(page) == 0)
>>>>>>>>>
>>>>>>>>> I see what this is, I'll cut a fix for this tomorrow.
>>>>>>>>
>>>>>>>> Folded in a fix for this, it's in my current io_uring branch and my for-next
>>>>>>>> branch.
>>>>>>>
>>>>>>> Hi Jens,
>>>>>>>
>>>>>>> I saw the following change is added:
>>>>>>>
>>>>>>> +	if (size == len) {
>>>>>>> +		/*
>>>>>>> +		 * For the normal O_DIRECT case, we could skip grabbing this
>>>>>>> +		 * reference and then not have to put them again when IO
>>>>>>> +		 * completes. But this breaks some in-kernel users, like
>>>>>>> +		 * splicing to/from a loop device, where we release the pipe
>>>>>>> +		 * pages unconditionally. If we can fix that case, we can
>>>>>>> +		 * get rid of the get here and the need to call
>>>>>>> +		 * bio_release_pages() at IO completion time.
>>>>>>> +		 */
>>>>>>> +		get_page(bv->bv_page);
>>>>>>>
>>>>>>> Now the 'bv' may point to more than one page, so something like the
>>>>>>> following may be needed:
>>>>>>>
>>>>>>> int i;
>>>>>>> struct bvec_iter_all iter_all;
>>>>>>> struct bio_vec *tmp;
>>>>>>>
>>>>>>> mp_bvec_for_each_segment(tmp, bv, i, iter_all)
>>>>>>>       get_page(tmp->bv_page);
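>>>>>>>
>>>>>>> That way each per-segment get here is balanced by the per-segment put
>>>>>>> done at IO completion time via bio_release_pages().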
>>>>>>
>>>>>> I guess that would be the safest, even if we don't currently have more
>>>>>> than one page in there. I'll fix it up.
>>>>>
>>>>> It is easy to get multipage bvecs from loop, :-)
>>>>
>>>> Speaking of this, I took a quick look at why we've now regressed a lot
>>>> on IOPS perf with the multipage work. It looks like it's all related to
>>>> the (much) fatter setup around iteration, which is related to this very
>>>> topic too.
>>>>
>>>> Basically, the setup of things like bio_for_each_bvec() and indexing
>>>> through nth_page() is MUCH slower than before.
>>>
>>> But bio_for_each_bvec() doesn't need nth_page(); only
>>> bio_for_each_segment() does, and bio_for_each_segment() isn't called
>>> from blk_queue_split() or blk_rq_map_sg().
>>>
>>> One issue is that bio_for_each_bvec() still advances by page size
>>> instead of bvec->len; I guess that is the problem. I will cook a patch
>>> for your test.
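>>>
>>> Conceptually the change is to advance by the whole bvec, something like
>>> the following sketch (process() is just a stand-in for the real
>>> per-bvec work):
>>>
>>> 	struct bvec_iter iter = bio->bi_iter;
>>>
>>> 	while (iter.bi_size) {
>>> 		struct bio_vec bv = mp_bvec_iter_bvec(bio->bi_io_vec, iter);
>>>
>>> 		process(&bv);
>>> 		/* advance by the full multi-page bvec, not PAGE_SIZE */
>>> 		bvec_iter_advance(bio->bi_io_vec, &iter, bv.bv_len);
>>> 	}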
>>
>> Probably won't make a difference for my test case...
>>
>>>> We need to do something about this, it's like tossing out months of
>>>> optimizations.
>>>
>>> Some follow-up optimizations can be done, such as removing
>>> biovec_phys_mergeable() from blk_bio_segment_split().
>>
>> I think we really need a fast path for <= PAGE_SIZE IOs, to the extent
>> that it is possible. But iteration startup cost is a problem in a lot of
>> spots, and a split fast path will only help a bit for that specific
>> case.
>>
>> A 5% regression is HUGE. I know I've mentioned this before, I just want
>> to really stress how big of a deal that is. It's enough to make me
>> consider just reverting it again, which sucks, but I don't feel great
>> shipping something that is known to be that much slower.
>>
>> Suggestions?
> 
> You mentioned that nth_page() costs a lot in bio_for_each_bvec(), but
> that path shouldn't call into nth_page() at all. I will look into it
> first.

I lost connectivity to the test box earlier; I'll check on it and
double check the numbers in the morning.

I'd focus on the blk_rq_map_sg() path, since that's the biggest cycle
consumer.

-- 
Jens Axboe


Thread overview: 57+ messages
2019-02-11 19:00 [PATCHSET v15] io_uring IO interface Jens Axboe
2019-02-11 19:00 ` [PATCH 01/19] fs: add an iopoll method to struct file_operations Jens Axboe
2019-02-11 19:00 ` [PATCH] io_uring: add io_uring_event cache hit information Jens Axboe
2019-02-11 19:00 ` [PATCH 02/19] block: wire up block device iopoll method Jens Axboe
2019-02-11 19:00 ` [PATCH 03/19] block: add bio_set_polled() helper Jens Axboe
2019-02-11 19:00 ` [PATCH 04/19] iomap: wire up the iopoll method Jens Axboe
2019-02-11 19:00 ` [PATCH 05/19] Add io_uring IO interface Jens Axboe
2019-02-11 19:00 ` [PATCH 06/19] io_uring: add fsync support Jens Axboe
2019-02-11 19:00 ` [PATCH 07/19] io_uring: support for IO polling Jens Axboe
2019-02-11 19:00 ` [PATCH 08/19] fs: add fget_many() and fput_many() Jens Axboe
2019-02-11 19:00 ` [PATCH 09/19] io_uring: use fget/fput_many() for file references Jens Axboe
2019-02-11 19:00 ` [PATCH 10/19] io_uring: batch io_kiocb allocation Jens Axboe
2019-02-11 19:00 ` [PATCH 11/19] block: implement bio helper to add iter bvec pages to bio Jens Axboe
2019-02-20 22:58   ` Ming Lei
2019-02-21 17:45     ` Jens Axboe
2019-02-26  3:46       ` Eric Biggers
2019-02-26  4:34         ` Jens Axboe
2019-02-26 15:54           ` Jens Axboe
2019-02-27  1:21             ` Ming Lei
2019-02-27  1:47               ` Jens Axboe
2019-02-27  1:53                 ` Ming Lei
2019-02-27  1:57                   ` Jens Axboe
2019-02-27  2:21                     ` Ming Lei
2019-02-27  2:28                       ` Jens Axboe
2019-02-27  2:37                         ` Ming Lei
2019-02-27  2:43                           ` Jens Axboe [this message]
2019-02-27  3:09                             ` Ming Lei
2019-02-27  3:37                               ` Jens Axboe
2019-02-27  3:43                                 ` Jens Axboe
2019-02-27  3:44                                 ` Ming Lei
2019-02-27  4:05                                   ` Jens Axboe
2019-02-27  4:06                                     ` Jens Axboe
2019-02-27 19:42                                       ` Christoph Hellwig
2019-02-28  8:37                                         ` Ming Lei
2019-02-27 23:35                         ` Ming Lei
2019-03-08  7:55                         ` Christoph Hellwig
2019-03-08  9:12                           ` Ming Lei
2019-03-08  8:18                     ` Christoph Hellwig
2019-02-11 19:00 ` [PATCH 12/19] io_uring: add support for pre-mapped user IO buffers Jens Axboe
2019-02-19 19:08   ` Jann Horn
2019-02-22 22:29     ` Jens Axboe
2019-02-11 19:00 ` [PATCH 13/19] net: split out functions related to registering inflight socket files Jens Axboe
2019-02-11 19:00 ` [PATCH 14/19] io_uring: add file set registration Jens Axboe
2019-02-19 16:12   ` Jann Horn
2019-02-22 22:29     ` Jens Axboe
2019-02-11 19:00 ` [PATCH 15/19] io_uring: add submission polling Jens Axboe
2019-02-11 19:00 ` [PATCH 16/19] io_uring: add io_kiocb ref count Jens Axboe
2019-02-11 19:00 ` [PATCH 17/19] io_uring: add support for IORING_OP_POLL Jens Axboe
2019-02-11 19:00 ` [PATCH 18/19] io_uring: allow workqueue item to handle multiple buffered requests Jens Axboe
2019-02-11 19:00 ` [PATCH 19/19] io_uring: add io_uring_event cache hit information Jens Axboe
2019-02-21 12:10 ` [PATCHSET v15] io_uring IO interface Marek Majkowski
2019-02-21 17:48   ` Jens Axboe
2019-02-22 15:01     ` Marek Majkowski
2019-02-22 22:32       ` Jens Axboe
  -- strict thread matches above, loose matches on Subject: below --
2019-02-09 21:13 [PATCHSET v14] " Jens Axboe
2019-02-09 21:13 ` [PATCH 11/19] block: implement bio helper to add iter bvec pages to bio Jens Axboe
2019-02-08 17:34 [PATCHSET v13] io_uring IO interface Jens Axboe
2019-02-08 17:34 ` [PATCH 11/19] block: implement bio helper to add iter bvec pages to bio Jens Axboe
2019-02-09  9:45   ` Hannes Reinecke
