From: Shaohua Li <shli@kernel.org>
To: Ming Lei <ming.lei@redhat.com>
Cc: NeilBrown <neilb@suse.com>, Ming Lei <tom.leiming@gmail.com>,
Jens Axboe <axboe@fb.com>,
"open list:SOFTWARE RAID (Multiple Disks) SUPPORT"
<linux-raid@vger.kernel.org>,
linux-block <linux-block@vger.kernel.org>,
Christoph Hellwig <hch@infradead.org>
Subject: Re: [PATCH v3 05/14] md: raid1: don't use bio's vec table to manage resync pages
Date: Mon, 10 Jul 2017 12:05:49 -0700
Message-ID: <20170710190549.luj7zrnq7mo4x36b@kernel.org>
In-Reply-To: <20170710072538.GA32208@ming.t460p>
On Mon, Jul 10, 2017 at 03:25:41PM +0800, Ming Lei wrote:
> On Mon, Jul 10, 2017 at 02:38:19PM +1000, NeilBrown wrote:
> > On Mon, Jul 10 2017, Ming Lei wrote:
> >
> > > On Mon, Jul 10, 2017 at 11:35:12AM +0800, Ming Lei wrote:
> > >> On Mon, Jul 10, 2017 at 7:09 AM, NeilBrown <neilb@suse.com> wrote:
> > ...
> > >> >> +
> > >> >> + rp->idx = 0;
> > >> >
> > >> > This is the only place the ->idx is initialized, in r1buf_pool_alloc().
> > >> > The mempool alloc function is supposed to allocate memory, not initialize
> > >> > it.
> > >> >
> > >> > If the mempool_alloc() call cannot allocate memory it will use memory
> > >> > from the pool. If this memory has already been used, then it will no
> > >> > longer have the initialized value.
> > >> >
> > >> > In short: you need to initialise memory *after* calling
> > >> > mempool_alloc(), unless you ensure it is reset to the init values before
> > >> > calling mempool_free().
> > >> >
> > >> > https://bugzilla.kernel.org/show_bug.cgi?id=196307
> > >>
> > >> OK, thanks for pointing it out.
> > >>
> > >> Another fix might be to reinitialize the variable (rp->idx = 0) in
> > >> r1buf_pool_free().
> > >> Or just set it to zero every time before it is used.
> > >>
> > >> But I don't understand why mempool_free() calls pool->free() at the end of
> > >> this function, which may end up running pool->free() on a newly allocated
> > >> buf; that seems like a bug in mempool?
> > >
> > > Looks like I missed the 'return' in mempool_free(), so it is fine.
> > >
> > > How about the following fix?
> >
> > It looks like it would probably work, but it is rather unusual to
> > initialise something just before freeing it.
> >
> > Couldn't you just move the initialization to shortly after the
> > mempool_alloc() call? There looks to be a good place that already loops
> > over all the bios....
>
> OK, below is the revised patch, following your suggestion.
> ---
>
> From 68f9936635b3dda13c87a6b6125ac543145bb940 Mon Sep 17 00:00:00 2001
> From: Ming Lei <ming.lei@redhat.com>
> Date: Mon, 10 Jul 2017 15:16:16 +0800
> Subject: [PATCH] MD: move initialization of resync pages' index out of mempool
> allocator
>
> mempool_alloc() is only responsible for allocation, not for initialization,
> so we need to move the initialization of the resync pages' index out of the
> allocator function.
>
> Reported-by: NeilBrown <neilb@suse.com>
> Fixes: f0250618361d ("md: raid10: don't use bio's vec table to manage resync pages")
> Fixes: 98d30c5812c3 ("md: raid1: don't use bio's vec table to manage resync pages")
> Signed-off-by: Ming Lei <ming.lei@redhat.com>
> ---
> drivers/md/raid1.c | 4 +++-
> drivers/md/raid10.c | 6 +++++-
> 2 files changed, 8 insertions(+), 2 deletions(-)
>
> diff --git a/drivers/md/raid1.c b/drivers/md/raid1.c
> index e1a7e3d4c5e4..26f5efba0504 100644
> --- a/drivers/md/raid1.c
> +++ b/drivers/md/raid1.c
> @@ -170,7 +170,6 @@ static void * r1buf_pool_alloc(gfp_t gfp_flags, void *data)
> resync_get_all_pages(rp);
> }
>
> - rp->idx = 0;
> rp->raid_bio = r1_bio;
> bio->bi_private = rp;
> }
> @@ -2698,6 +2697,9 @@ static sector_t raid1_sync_request(struct mddev *mddev, sector_t sector_nr,
> struct md_rdev *rdev;
> bio = r1_bio->bios[i];
>
> + /* This initialization should follow mempool_alloc() */
> + get_resync_pages(bio)->idx = 0;
> +
This is fragile and hard to maintain. Can we add a wrapper for the
allocation/init, roughly along the lines of the sketch below?
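
Something like this, just an untested sketch (the helper name and exact
shape are made up here, not taken from the patch): hide mempool_alloc()
plus the ->idx reset behind one function, so raid1_sync_request() cannot
forget the reinitialization:

static inline struct r1bio *r1buf_pool_get(struct r1conf *conf)
{
	/* hypothetical wrapper, name and details are illustrative only */
	struct r1bio *r1_bio = mempool_alloc(conf->r1buf_pool, GFP_NOIO);
	int i;

	/*
	 * The buffer may be recycled from the pool with stale contents,
	 * so reset the per-bio resync page index right after
	 * mempool_alloc(), as Neil suggested.
	 */
	for (i = 0; i < conf->poolinfo->raid_disks; i++)
		get_resync_pages(r1_bio->bios[i])->idx = 0;

	return r1_bio;
}

raid10 could grow a similar helper, instead of sprinkling
get_resync_pages(bio)->idx = 0 all over raid10_sync_request().
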
Thanks,
Shaohua
> rdev = rcu_dereference(conf->mirrors[i].rdev);
> if (rdev == NULL ||
> test_bit(Faulty, &rdev->flags)) {
> diff --git a/drivers/md/raid10.c b/drivers/md/raid10.c
> index 797ed60abd5e..5ebcb7487284 100644
> --- a/drivers/md/raid10.c
> +++ b/drivers/md/raid10.c
> @@ -221,7 +221,6 @@ static void * r10buf_pool_alloc(gfp_t gfp_flags, void *data)
> resync_get_all_pages(rp);
> }
>
> - rp->idx = 0;
> rp->raid_bio = r10_bio;
> bio->bi_private = rp;
> if (rbio) {
> @@ -3095,6 +3094,7 @@ static sector_t raid10_sync_request(struct mddev *mddev, sector_t sector_nr,
> bio = r10_bio->devs[0].bio;
> bio->bi_next = biolist;
> biolist = bio;
> + get_resync_pages(bio)->idx = 0;
> bio->bi_end_io = end_sync_read;
> bio_set_op_attrs(bio, REQ_OP_READ, 0);
> if (test_bit(FailFast, &rdev->flags))
> @@ -3120,6 +3120,7 @@ static sector_t raid10_sync_request(struct mddev *mddev, sector_t sector_nr,
> bio = r10_bio->devs[1].bio;
> bio->bi_next = biolist;
> biolist = bio;
> + get_resync_pages(bio)->idx = 0;
> bio->bi_end_io = end_sync_write;
> bio_set_op_attrs(bio, REQ_OP_WRITE, 0);
> bio->bi_iter.bi_sector = to_addr
> @@ -3146,6 +3147,7 @@ static sector_t raid10_sync_request(struct mddev *mddev, sector_t sector_nr,
> break;
> bio->bi_next = biolist;
> biolist = bio;
> + get_resync_pages(bio)->idx = 0;
> bio->bi_end_io = end_sync_write;
> bio_set_op_attrs(bio, REQ_OP_WRITE, 0);
> bio->bi_iter.bi_sector = to_addr +
> @@ -3291,6 +3293,7 @@ static sector_t raid10_sync_request(struct mddev *mddev, sector_t sector_nr,
> atomic_inc(&r10_bio->remaining);
> bio->bi_next = biolist;
> biolist = bio;
> + get_resync_pages(bio)->idx = 0;
> bio->bi_end_io = end_sync_read;
> bio_set_op_attrs(bio, REQ_OP_READ, 0);
> if (test_bit(FailFast, &conf->mirrors[d].rdev->flags))
> @@ -3314,6 +3317,7 @@ static sector_t raid10_sync_request(struct mddev *mddev, sector_t sector_nr,
> sector = r10_bio->devs[i].addr;
> bio->bi_next = biolist;
> biolist = bio;
> + get_resync_pages(bio)->idx = 0;
> bio->bi_end_io = end_sync_write;
> bio_set_op_attrs(bio, REQ_OP_WRITE, 0);
> if (test_bit(FailFast, &conf->mirrors[d].rdev->flags))
> --
> 2.9.4
>
>
>
> --
> Ming