linux-raid.vger.kernel.org archive mirror
 help / color / mirror / Atom feed
From: Xiao Ni <xni@redhat.com>
To: Yu Kuai <yukuai1@huaweicloud.com>
Cc: song@kernel.org, neilb@suse.de, akpm@osdl.org,
	linux-raid@vger.kernel.org, linux-kernel@vger.kernel.org,
	yukuai3@huawei.com, yi.zhang@huawei.com, yangerkun@huawei.com
Subject: Re: [PATCH -next v3 6/7] md/raid1-10: don't handle pluged bio by daemon thread
Date: Wed, 31 May 2023 15:50:56 +0800	[thread overview]
Message-ID: <CALTww29ixKpcVknNe36D+x=2c1Aw-=z32SP-dJ_Hj8WxL2n4bg@mail.gmail.com> (raw)
In-Reply-To: <20230529131106.2123367-7-yukuai1@huaweicloud.com>

On Mon, May 29, 2023 at 9:14 PM Yu Kuai <yukuai1@huaweicloud.com> wrote:
>
> From: Yu Kuai <yukuai3@huawei.com>
>
> current->bio_list will be set under submit_bio() context, in this case
> bitmap io will be added to the list and wait for current io submission to
> finish, while current io submission must wait for bitmap io to be done.
> commit 874807a83139 ("md/raid1{,0}: fix deadlock in bitmap_unplug.") fix
> the deadlock by handling plugged bio by daemon thread.

Thanks for the historic introduction. I did a test and printed the
logs in raid10_unplug. The tools I used are dd and mkfs. from_schedule
is always true during I/O and it's 0 when io finishes. So I have a
question here, how can I trigger the condition that from_schedule is 0
and current->list is not NULL? In other words, is there really a
deadlock here? Before your patch it looks like all bios are merged
into conf->pending_bio_list and are handled by raid10d. It can't
submit bio directly in the originating process which mentioned in
57c67df48866

>
> On the one hand, the deadlock won't exist after commit a214b949d8e3
> ("blk-mq: only flush requests from the plug in blk_mq_submit_bio"). On
> the other hand, current solution makes it impossible to flush plugged bio
> in raid1/10_make_request(), because this will cause that all the writes
> will goto daemon thread.
>
> In order to limit the number of plugged bio, commit 874807a83139
> ("md/raid1{,0}: fix deadlock in bitmap_unplug.") is reverted, and the
> deadlock is fixed by handling bitmap io asynchronously.
>
> Signed-off-by: Yu Kuai <yukuai3@huawei.com>
> ---
>  drivers/md/raid1-10.c | 14 ++++++++++++++
>  drivers/md/raid1.c    |  4 ++--
>  drivers/md/raid10.c   |  8 +++-----
>  3 files changed, 19 insertions(+), 7 deletions(-)
>
> diff --git a/drivers/md/raid1-10.c b/drivers/md/raid1-10.c
> index 73cc3cb9154d..17e55c1fd5a1 100644
> --- a/drivers/md/raid1-10.c
> +++ b/drivers/md/raid1-10.c
> @@ -151,3 +151,17 @@ static inline bool raid1_add_bio_to_plug(struct mddev *mddev, struct bio *bio,
>
>         return true;
>  }
> +
> +/*
> + * current->bio_list will be set under submit_bio() context, in this case bitmap
> + * io will be added to the list and wait for current io submission to finish,
> + * while current io submission must wait for bitmap io to be done. In order to
> + * avoid such deadlock, submit bitmap io asynchronously.
> + */
> +static inline void raid1_prepare_flush_writes(struct bitmap *bitmap)
> +{
> +       if (current->bio_list)
> +               md_bitmap_unplug_async(bitmap);
> +       else
> +               md_bitmap_unplug(bitmap);
> +}
> diff --git a/drivers/md/raid1.c b/drivers/md/raid1.c
> index 0778e398584c..006620fed595 100644
> --- a/drivers/md/raid1.c
> +++ b/drivers/md/raid1.c
> @@ -794,7 +794,7 @@ static int read_balance(struct r1conf *conf, struct r1bio *r1_bio, int *max_sect
>  static void flush_bio_list(struct r1conf *conf, struct bio *bio)
>  {
>         /* flush any pending bitmap writes to disk before proceeding w/ I/O */
> -       md_bitmap_unplug(conf->mddev->bitmap);
> +       raid1_prepare_flush_writes(conf->mddev->bitmap);

If we unplug bitmap asynchronously, can we make sure the bitmap are
flushed before the corresponding data?

Regards
Xiao

>         wake_up(&conf->wait_barrier);
>
>         while (bio) { /* submit pending writes */
> @@ -1166,7 +1166,7 @@ static void raid1_unplug(struct blk_plug_cb *cb, bool from_schedule)
>         struct r1conf *conf = mddev->private;
>         struct bio *bio;
>
> -       if (from_schedule || current->bio_list) {
> +       if (from_schedule) {
>                 spin_lock_irq(&conf->device_lock);
>                 bio_list_merge(&conf->pending_bio_list, &plug->pending);
>                 spin_unlock_irq(&conf->device_lock);
> diff --git a/drivers/md/raid10.c b/drivers/md/raid10.c
> index 6640507ecb0d..fb22cfe94d32 100644
> --- a/drivers/md/raid10.c
> +++ b/drivers/md/raid10.c
> @@ -902,9 +902,7 @@ static void flush_pending_writes(struct r10conf *conf)
>                 __set_current_state(TASK_RUNNING);
>
>                 blk_start_plug(&plug);
> -               /* flush any pending bitmap writes to disk
> -                * before proceeding w/ I/O */
> -               md_bitmap_unplug(conf->mddev->bitmap);
> +               raid1_prepare_flush_writes(conf->mddev->bitmap);
>                 wake_up(&conf->wait_barrier);
>
>                 while (bio) { /* submit pending writes */
> @@ -1108,7 +1106,7 @@ static void raid10_unplug(struct blk_plug_cb *cb, bool from_schedule)
>         struct r10conf *conf = mddev->private;
>         struct bio *bio;
>
> -       if (from_schedule || current->bio_list) {
> +       if (from_schedule) {
>                 spin_lock_irq(&conf->device_lock);
>                 bio_list_merge(&conf->pending_bio_list, &plug->pending);
>                 spin_unlock_irq(&conf->device_lock);
> @@ -1120,7 +1118,7 @@ static void raid10_unplug(struct blk_plug_cb *cb, bool from_schedule)
>
>         /* we aren't scheduling, so we can do the write-out directly. */
>         bio = bio_list_get(&plug->pending);
> -       md_bitmap_unplug(mddev->bitmap);
> +       raid1_prepare_flush_writes(mddev->bitmap);
>         wake_up(&conf->wait_barrier);
>
>         while (bio) { /* submit pending writes */
> --
> 2.39.2
>


  reply	other threads:[~2023-05-31  7:53 UTC|newest]

Thread overview: 23+ messages / expand[flat|nested]  mbox.gz  Atom feed  top
2023-05-29 13:10 [PATCH -next v3 0/7] limit the number of plugged bio Yu Kuai
2023-05-29 13:11 ` [PATCH -next v3 1/7] md/raid10: prevent soft lockup while flush writes Yu Kuai
2023-05-29 13:11 ` [PATCH -next v3 2/7] md/raid1-10: factor out a helper to add bio to plug Yu Kuai
2023-05-29 13:11 ` [PATCH -next v3 3/7] md/raid1-10: factor out a helper to submit normal write Yu Kuai
2023-05-31  7:20   ` Xiao Ni
2023-05-31  7:56     ` Yu Kuai
2023-05-29 13:11 ` [PATCH -next v3 4/7] md/raid1-10: submit write io directly if bitmap is not enabled Yu Kuai
2023-05-31  7:26   ` Xiao Ni
2023-05-31  8:25     ` Yu Kuai
2023-05-31 15:19       ` Xiao Ni
2023-05-29 13:11 ` [PATCH -next v3 5/7] md/md-bitmap: add a new helper to unplug bitmap asynchrously Yu Kuai
2023-06-06 17:34   ` Song Liu
2023-05-29 13:11 ` [PATCH -next v3 6/7] md/raid1-10: don't handle pluged bio by daemon thread Yu Kuai
2023-05-31  7:50   ` Xiao Ni [this message]
2023-05-31  7:55     ` Yu Kuai
2023-05-31  8:00       ` Xiao Ni
2023-05-31  8:06         ` Yu Kuai
2023-05-31 15:23           ` Xiao Ni
2023-05-31  7:57   ` Paul Menzel
2023-05-29 13:11 ` [PATCH -next v3 7/7] md/raid1-10: limit the number of plugged bio Yu Kuai
2023-05-31 15:42   ` Xiao Ni
2023-06-01  1:41     ` Yu Kuai
2023-06-06 21:54 ` [PATCH -next v3 0/7] " Song Liu

Reply instructions:

You may reply publicly to this message via plain-text email
using any one of the following methods:

* Save the following mbox file, import it into your mail client,
  and reply-to-all from there: mbox

  Avoid top-posting and favor interleaved quoting:
  https://en.wikipedia.org/wiki/Posting_style#Interleaved_style

* Reply using the --to, --cc, and --in-reply-to
  switches of git-send-email(1):

  git send-email \
    --in-reply-to='CALTww29ixKpcVknNe36D+x=2c1Aw-=z32SP-dJ_Hj8WxL2n4bg@mail.gmail.com' \
    --to=xni@redhat.com \
    --cc=akpm@osdl.org \
    --cc=linux-kernel@vger.kernel.org \
    --cc=linux-raid@vger.kernel.org \
    --cc=neilb@suse.de \
    --cc=song@kernel.org \
    --cc=yangerkun@huawei.com \
    --cc=yi.zhang@huawei.com \
    --cc=yukuai1@huaweicloud.com \
    --cc=yukuai3@huawei.com \
    /path/to/YOUR_REPLY

  https://kernel.org/pub/software/scm/git/docs/git-send-email.html

* If your mail client supports setting the In-Reply-To header
  via mailto: links, try the mailto: link
Be sure your reply has a Subject: header at the top and a blank line before the message body.
This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).