From: Yu Kuai <yukuai1@huaweicloud.com>
To: Xiao Ni <xni@redhat.com>, Yu Kuai <yukuai1@huaweicloud.com>,
	guoqing.jiang@linux.dev, agk@redhat.com, snitzer@kernel.org,
	dm-devel@redhat.com, song@kernel.org
Cc: linux-raid@vger.kernel.org, yangerkun@huawei.com,
	linux-kernel@vger.kernel.org, yi.zhang@huawei.com,
	"yukuai (C)" <yukuai3@huawei.com>
Subject: Re: [dm-devel] [PATCH -next v2 4/6] md: refactor idle/frozen_sync_thread() to fix deadlock
Date: Wed, 14 Jun 2023 09:48:42 +0800	[thread overview]
Message-ID: <74b404c4-4fdb-6eb3-93f1-0e640793bba6@huaweicloud.com> (raw)
In-Reply-To: <05aa3b09-7bb9-a65a-6231-4707b4b078a0@redhat.com>

Hi,

On 2023/06/13 22:50, Xiao Ni wrote:
> 
> On 2023/5/29 9:20 PM, Yu Kuai wrote:
>> From: Yu Kuai <yukuai3@huawei.com>
>>
>> Our test found the following deadlock in raid10:
>>
>> 1) Issue a normal write, and the write fails:
>>
>>    raid10_end_write_request
>>     set_bit(R10BIO_WriteError, &r10_bio->state)
>>     one_write_done
>>      reschedule_retry
>>
>>    // later from md thread
>>    raid10d
>>     handle_write_completed
>>      list_add(&r10_bio->retry_list, &conf->bio_end_io_list)
>>
>>    // later from md thread
>>    raid10d
>>     if (!test_bit(MD_SB_CHANGE_PENDING, &mddev->sb_flags))
>>      list_move(conf->bio_end_io_list.prev, &tmp)
>>      r10_bio = list_first_entry(&tmp, struct r10bio, retry_list)
>>      raid_end_bio_io(r10_bio)
>>
>> Dependency chain 1: normal io is waiting for the superblock to be updated
> 
> Hi Kuai
> 
> It looks like the above situation is more complex than necessary. It
> only needs a normal write, and md_write_start() has to wait until the
> metadata is written to the member disks, right? If so, there is no need
> to introduce a raid10 write failure here. I guess that is your test
> case; it would be nice if you could put your test steps in the patch.
> But for the analysis of the deadlock here, it's better to keep it
> simple.

The test script can be found here; it's pretty easy to trigger:

https://patchwork.kernel.org/project/linux-raid/patch/20230529132826.2125392-4-yukuai1@huaweicloud.com/

While reviewing the related code, I found that an io can only be added
to the bio_end_io_list from handle_write_completed(), and only if that
io failed, so I think an io failure is needed to trigger the deadlock
from the daemon thread.
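
For reference, this is roughly the code I'm looking at; a condensed
sketch from my reading of raid10.c, not verbatim (locking and error
bookkeeping are trimmed):

/*
 * Tail of handle_write_completed(): the r10_bio is parked on
 * bio_end_io_list only when 'fail' was set by the write-error
 * handling above, i.e. only failed io ever lands on this list.
 */
if (fail) {
        spin_lock_irq(&conf->device_lock);
        list_add(&r10_bio->retry_list, &conf->bio_end_io_list);
        conf->nr_queued++;
        spin_unlock_irq(&conf->device_lock);
        md_wakeup_thread(conf->mddev->thread);
} else {
        /* otherwise the io is completed directly */
        raid_end_bio_io(r10_bio);
}

/*
 * In raid10d(): parked r10_bios are only taken off the list and
 * completed once no superblock update is pending; otherwise they
 * stay queued until md_update_sb() has run.
 */
if (!list_empty_careful(&conf->bio_end_io_list) &&
    !test_bit(MD_SB_CHANGE_PENDING, &mddev->sb_flags)) {
        LIST_HEAD(tmp);
        /* bio_end_io_list is spliced onto tmp under device_lock */
        while (!list_empty(&tmp)) {
                r10_bio = list_first_entry(&tmp, struct r10bio,
                                           retry_list);
                list_del(&r10_bio->retry_list);
                raid_end_bio_io(r10_bio);
        }
}

So without a write failure nothing is ever added to bio_end_io_list,
and the raid10d side of dependency chain 1 never comes into play.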

I think the key point is how MD_SB_CHANGE_PENDING gets set:

1) raid10_error() and rdev_set_badblocks(), triggered by io failure;
2) raid10_write_request(), related to reshape;
3) md_write_start() and md_allow_write(), when mddev->in_sync is set;
   however, I think this is not a common case.

1) is used here because it's quite easy to trigger and it's what we hit
in real tests. 3) is possible (see the md_write_start() sketch below),
but I'd say let's keep 1); I don't think it's necessary to reproduce
this deadlock through yet another path.
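
And for 3), this is the path I mean; again just a condensed sketch
from my reading of md_write_start() in md.c, not verbatim:

/*
 * md_write_start(), trimmed: if the array was clean ('in_sync'),
 * the first normal write marks it dirty, requests a superblock
 * update, and then waits for that update to complete (suspend and
 * locking details omitted).
 */
if (mddev->in_sync) {
        mddev->in_sync = 0;
        set_bit(MD_SB_CHANGE_CLEAN, &mddev->sb_flags);
        set_bit(MD_SB_CHANGE_PENDING, &mddev->sb_flags);
        md_wakeup_thread(mddev->thread);
}

wait_event(mddev->sb_wait,
           !test_bit(MD_SB_CHANGE_PENDING, &mddev->sb_flags));

That is dependency chain 1 again, but it only bites if the array
happens to be in_sync when the write arrives, which is why I don't
treat it as the common case.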

Thanks,
Kuai
> 
>>
>> 2) Trigger a recovery:
>>
>>    raid10_sync_request
>>     raise_barrier
>>
>> Dependency chain 2: sync thread is waiting for normal io
>>
>> 3) echo idle/frozen to sync_action:
>>
>>    action_store
>>     mddev_lock
>>      md_unregister_thread
>>       kthread_stop
>>
>> Dependency chain 3: releasing 'reconfig_mutex' is waiting for the sync thread
>>
>> 4) md thread can't update superblock:
>>
>>    raid10d
>>     md_check_recovery
>>      if (mddev_trylock(mddev))
>>       md_update_sb
>>
>> Dependency chain 4: updating the superblock is waiting for 'reconfig_mutex'
>>
>> Hence a cyclic dependency exists; in order to fix the problem, we must
>> break one of them. Dependencies 1 and 2 can't be broken because they
>> are part of the fundamental design. Dependency 4 might be breakable if
>> it could be guaranteed that no io is inflight, however, that requires a
>> new mechanism which seems complex. Dependency 3 is a good choice,
>> because idle/frozen only requires the sync thread to finish, which can
>> already be done asynchronously, and then 'reconfig_mutex' is not needed
>> anymore.
>>
>> This patch switches 'idle' and 'frozen' to wait for the sync thread to
>> be done asynchronously, and it also adds a sequence counter to record
>> how many times the sync thread has finished, so that 'idle' won't keep
>> waiting on a newly started sync thread.
> 
> In the patch, sync_seq is incremented in md_reap_sync_thread(). In
> idle_sync_thread(), if sync_seq isn't equal to mddev->sync_seq, it
> should mean that someone has already stopped the sync thread, right?
> Why do you say 'newly started sync thread' here?
> 
> Regards
> 
> Xiao
> 
> 
>>
>> Note that raid456 has a similar deadlock ([1]), and it has been
>> verified [2] that this deadlock can be fixed by this patch as well.
>>
>> [1] 
>> https://lore.kernel.org/linux-raid/5ed54ffc-ce82-bf66-4eff-390cb23bc1ac@molgen.mpg.de/T/#t 
>>
>> [2] 
>> https://lore.kernel.org/linux-raid/e9067438-d713-f5f3-0d3d-9e6b0e9efa0e@huaweicloud.com/ 
>>
>> Signed-off-by: Yu Kuai <yukuai3@huawei.com>
>> ---
>>   drivers/md/md.c | 23 +++++++++++++++++++----
>>   drivers/md/md.h |  2 ++
>>   2 files changed, 21 insertions(+), 4 deletions(-)
>>
>> diff --git a/drivers/md/md.c b/drivers/md/md.c
>> index 63a993b52cd7..7912de0e4d12 100644
>> --- a/drivers/md/md.c
>> +++ b/drivers/md/md.c
>> @@ -652,6 +652,7 @@ void mddev_init(struct mddev *mddev)
>>       timer_setup(&mddev->safemode_timer, md_safemode_timeout, 0);
>>       atomic_set(&mddev->active, 1);
>>       atomic_set(&mddev->openers, 0);
>> +    atomic_set(&mddev->sync_seq, 0);
>>       spin_lock_init(&mddev->lock);
>>       atomic_set(&mddev->flush_pending, 0);
>>       init_waitqueue_head(&mddev->sb_wait);
>> @@ -4776,19 +4777,27 @@ static void stop_sync_thread(struct mddev *mddev)
>>       if (work_pending(&mddev->del_work))
>>           flush_workqueue(md_misc_wq);
>> -    if (mddev->sync_thread) {
>> -        set_bit(MD_RECOVERY_INTR, &mddev->recovery);
>> -        md_reap_sync_thread(mddev);
>> -    }
>> +    set_bit(MD_RECOVERY_INTR, &mddev->recovery);
>> +    /*
>> +     * Thread might be blocked waiting for metadata update which will now
>> +     * never happen
>> +     */
>> +    md_wakeup_thread_directly(mddev->sync_thread);
>>       mddev_unlock(mddev);
>>   }
>>   static void idle_sync_thread(struct mddev *mddev)
>>   {
>> +    int sync_seq = atomic_read(&mddev->sync_seq);
>> +
>>       mutex_lock(&mddev->sync_mutex);
>>       clear_bit(MD_RECOVERY_FROZEN, &mddev->recovery);
>>       stop_sync_thread(mddev);
>> +
>> +    wait_event(resync_wait, sync_seq != atomic_read(&mddev->sync_seq) ||
>> +            !test_bit(MD_RECOVERY_RUNNING, &mddev->recovery));
>> +
>>       mutex_unlock(&mddev->sync_mutex);
>>   }
>> @@ -4797,6 +4806,10 @@ static void frozen_sync_thread(struct mddev *mddev)
>>       mutex_init(&mddev->delete_mutex);
>>       set_bit(MD_RECOVERY_FROZEN, &mddev->recovery);
>>       stop_sync_thread(mddev);
>> +
>> +    wait_event(resync_wait, mddev->sync_thread == NULL &&
>> +            !test_bit(MD_RECOVERY_RUNNING, &mddev->recovery));
>> +
>>       mutex_unlock(&mddev->sync_mutex);
>>   }
>> @@ -9472,6 +9485,8 @@ void md_reap_sync_thread(struct mddev *mddev)
>>       /* resync has finished, collect result */
>>       md_unregister_thread(&mddev->sync_thread);
>> +    atomic_inc(&mddev->sync_seq);
>> +
>>       if (!test_bit(MD_RECOVERY_INTR, &mddev->recovery) &&
>>           !test_bit(MD_RECOVERY_REQUESTED, &mddev->recovery) &&
>>           mddev->degraded != mddev->raid_disks) {
>> diff --git a/drivers/md/md.h b/drivers/md/md.h
>> index 2fa903de5bd0..7cab9c7c45b8 100644
>> --- a/drivers/md/md.h
>> +++ b/drivers/md/md.h
>> @@ -539,6 +539,8 @@ struct mddev {
>>       /* Used to synchronize idle and frozen for action_store() */
>>       struct mutex            sync_mutex;
>> +    /* The sequence number for sync thread */
>> +    atomic_t sync_seq;
>>       bool    has_superblocks:1;
>>       bool    fail_last_dev:1;
> 


Thread overview: 37+ messages
2023-05-29 13:20 [PATCH -next v2 0/6] md: fix that MD_RECOVERY_RUNNING can be cleared while sync_thread is still running Yu Kuai
2023-05-29 13:20 ` [PATCH -next v2 1/6] Revert "md: unlock mddev before reap sync_thread in action_store" Yu Kuai
2023-06-13  6:25   ` Xiao Ni
2023-06-13 11:58     ` Yu Kuai
2023-05-29 13:20 ` [PATCH -next v2 2/6] md: refactor action_store() for 'idle' and 'frozen' Yu Kuai
2023-06-13  8:02   ` [dm-devel] " Xiao Ni
2023-06-13 12:00     ` Yu Kuai
2023-06-13 12:25       ` Xiao Ni
2023-06-13 12:44         ` Yu Kuai
2023-06-13 14:14           ` Xiao Ni
2023-05-29 13:20 ` [PATCH -next v2 3/6] md: add a mutex to synchronize idle and frozen in action_store() Yu Kuai
2023-06-13 14:43   ` [dm-devel] " Xiao Ni
2023-06-14  1:15     ` Yu Kuai
2023-06-16  6:41       ` Song Liu
2023-05-29 13:20 ` [PATCH -next v2 4/6] md: refactor idle/frozen_sync_thread() to fix deadlock Yu Kuai
2023-06-13 14:50   ` [dm-devel] " Xiao Ni
2023-06-14  1:48     ` Yu Kuai [this message]
2023-06-14  2:04       ` Yu Kuai
2023-06-14  7:12         ` Xiao Ni
2023-06-14  7:38           ` Yu Kuai
2023-06-14  7:57             ` Xiao Ni
2023-06-14  8:28               ` Yu Kuai
2023-06-14  9:08                 ` Xiao Ni
2023-06-15  1:28                   ` Yu Kuai
2023-06-15  8:01                     ` Xiao Ni
2023-06-15  8:17                       ` Xiao Ni
2023-06-15  9:05                         ` Yu Kuai
2023-06-15  9:14                           ` Xiao Ni
2023-06-14  3:47       ` Xiao Ni
2023-06-14  6:04         ` Yu Kuai
2023-06-14  6:37           ` Xiao Ni
2023-05-29 13:20 ` [PATCH -next v2 5/6] md: wake up 'resync_wait' at last in md_reap_sync_thread() Yu Kuai
2023-06-14  7:20   ` Xiao Ni
2023-05-29 13:20 ` [PATCH -next v2 6/6] md: enhance checking in md_check_recovery() Yu Kuai
2023-06-14  7:24   ` [dm-devel] " Xiao Ni
2023-06-08  2:41 ` [PATCH -next v2 0/6] md: fix that MD_RECOVERY_RUNNING can be cleared while sync_thread is still running Yu Kuai
2023-06-09  4:44   ` Song Liu
