From: Shaohua Li <shli@kernel.org>
To: Christoph Hellwig <hch@infradead.org>
Cc: linux-raid@vger.kernel.org, linux-block@vger.kernel.org, neilb@suse.com
Subject: Re: "creative" bio usage in the RAID code
Date: Fri, 11 Nov 2016 11:02:23 -0800	[thread overview]
Message-ID: <20161111190223.4xrq3vvvvohzgs5e@kernel.org> (raw)
In-Reply-To: <20161110194636.GA32241@infradead.org>

On Thu, Nov 10, 2016 at 11:46:36AM -0800, Christoph Hellwig wrote:
> Hi Shaohua,
> 
> one of the major issues with Ming Lei's multipage biovec works
> is that we can't easily enabled the MD RAID code for it.  I had
> a quick chat on that with Chris and Jens and they suggested talking
> to you about it.
> 
> It's mostly about the RAID1 and RAID10 code, which does a lot of funny
> things with the bi_io_vec and bi_vcnt fields, which we'd prefer that
> drivers don't touch.  One example is the r1buf_pool_alloc code,
> which I think should simply use bio_clone for the MD_RECOVERY_REQUESTED
> case, which would also take care of r1buf_pool_free.  I'm not sure
> about all the other cases, as some bits don't fully make sense to me,

The problem is that we use bi_io_vec to track the allocated pages. We read
data into those pages and write it out later for resync. If we added new
fields to r1bio to track the pages, we could use the standard
bio_kmalloc/bio_add_page API and avoid the tricky parts. This should work for
both the resync and writebehind cases.
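For illustration, here is a pseudocode-style sketch (kernel C) of what that
could look like. This is not an actual MD patch: the resync_pages field and
the function name are hypothetical, invented here to show the shape of the
idea of tracking pages in r1bio while building the bio with the standard API.

```c
/*
 * Hypothetical sketch: build a resync bio with bio_kmalloc() and
 * bio_add_page(), recording each page in a new r1bio field instead of
 * digging the pages back out of bi_io_vec later.  The resync_pages
 * array and this helper are illustrative names, not existing code.
 */
static struct bio *r1buf_alloc_bio(struct r1bio *r1_bio, int nr_pages)
{
	struct bio *bio = bio_kmalloc(GFP_NOIO, nr_pages);
	int i;

	if (!bio)
		return NULL;

	for (i = 0; i < nr_pages; i++) {
		struct page *page = alloc_page(GFP_NOIO);

		if (!page || !bio_add_page(bio, page, PAGE_SIZE, 0)) {
			if (page)
				__free_page(page);
			goto out_free;
		}
		/* new tracking field; replaces walking bi_io_vec */
		r1_bio->resync_pages[i] = page;
	}
	return bio;

out_free:
	while (--i >= 0)
		__free_page(r1_bio->resync_pages[i]);
	bio_put(bio);
	return NULL;
}
```

With the pages tracked in r1bio, freeing them on the r1buf_pool_free side no
longer needs to inspect the bio's internals at all.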

> e.g. why we're trying to do single page I/O out of a bigger bio.

Which code is this one referring to?

> Maybe you have some better ideas what's going on there?
> 
> Another not quite as urgent issue is how the RAID5 code abuses
> ->bi_phys_segments as and outstanding I/O counter, and I have no
> really good answer to that either.

I don't have a good idea for this one either, short of allocating extra
memory. The upside is that we never dispatch the original bio to the
underlying disks.

Thanks,
Shaohua


Thread overview: 15+ messages
2016-11-10 19:46 "creative" bio usage in the RAID code Christoph Hellwig
2016-11-11 19:02 ` Shaohua Li [this message]
2016-11-12 17:42   ` Christoph Hellwig
2016-11-13 22:53     ` NeilBrown
2016-11-14  8:57       ` Christoph Hellwig
2016-11-14  9:51         ` NeilBrown
2016-11-15  0:13     ` Shaohua Li
2016-11-15  1:30       ` Ming Lei
2016-11-13 23:03 ` NeilBrown
2016-11-14  8:51   ` Christoph Hellwig
2016-11-14  9:43     ` NeilBrown
