From: Zhao Heming <heming.zhao@suse.com>
To: linux-raid@vger.kernel.org, song@kernel.org,
guoqing.jiang@cloud.ionos.com
Cc: Zhao Heming <heming.zhao@suse.com>,
lidong.zhong@suse.com, xni@redhat.com, neilb@suse.de,
colyli@suse.de
Subject: [PATCH 2/2] md/cluster: fix deadlock when doing reshape job
Date: Thu, 5 Nov 2020 21:11:28 +0800 [thread overview]
Message-ID: <1604581888-27659-2-git-send-email-heming.zhao@suse.com> (raw)
In-Reply-To: <1604581888-27659-1-git-send-email-heming.zhao@suse.com>
The hang is very easy to trigger with the following script.
Test script (reproducible steps):
```
ssh root@node2 "mdadm -S --scan"
mdadm -S --scan
mdadm --zero-superblock /dev/sd{g,h,i}
for i in {g,h,i};do dd if=/dev/zero of=/dev/sd$i oflag=direct bs=1M \
count=20; done
echo "mdadm create array"
mdadm -C /dev/md0 -b clustered -e 1.2 -n 2 -l mirror /dev/sdg /dev/sdh \
--bitmap-chunk=1M
echo "set up array on node2"
ssh root@node2 "mdadm -A /dev/md0 /dev/sdg /dev/sdh"
sleep 5
mkfs.xfs /dev/md0
mdadm --manage --add /dev/md0 /dev/sdi
mdadm --wait /dev/md0
mdadm --grow --raid-devices=3 /dev/md0
mdadm /dev/md0 --fail /dev/sdg
mdadm /dev/md0 --remove /dev/sdg
mdadm --grow --raid-devices=2 /dev/md0
```
sdg/sdh/sdi are 1GB iSCSI LUNs. The larger the disks, the more likely
the issue is to trigger.
Hang info:
```
node1 # ps axj | grep mdadm
1 5423 5227 2231 ? -1 D 0 0:00 mdadm /dev/md0 --remove /dev/sdg
node1 # cat /proc/mdstat
Personalities : [raid1]
md0 : active raid1 sdi[2] sdh[1] sdg[0](F)
1046528 blocks super 1.2 [2/1] [_U]
[>....................] recovery = 0.0% (1/1046528)
finish=354.0min speed=47K/sec
bitmap: 1/1 pages [4KB], 1024KB chunk
unused devices: <none>
node2 # cat /proc/mdstat
Personalities : [raid1]
md0 : active raid1 sdi[2] sdg[0](F) sdh[1]
1046528 blocks super 1.2 [2/1] [_U]
bitmap: 1/1 pages [4KB], 1024KB chunk
unused devices: <none>
```
Analysis (timing order):
node A node B
-----------------------------------------------
<1>
sendmsg: METADATA_UPDATED
- token_lockres:EX
<2>
sendmsg: RESYNCING
- set MD_CLUSTER_SEND_LOCK:1
- wait for holding token_lockres:EX
<3.1>
mdadm /dev/md0 --remove /dev/sdg
mddev_lock(mddev)
sendmsg: REMOVE
- wait for MD_CLUSTER_SEND_LOCK:0
<3.2>
receive METADATA_UPDATED from B
wait for mddev_trylock(mddev) to return 1
Explanation:
1>
node B sends a METADATA_UPDATED msg.
this blocks all other nodes from sending msgs in the cluster env.
2>
node A is doing a sync job, so it sends RESYNCING msgs at intervals.
these will be blocked waiting to hold the token_lockres:EX lock.
md_do_sync
raid1_sync_request
resync_info_update
sendmsg //with mddev_locked: false
lock_comm
+ wait_event(cinfo->wait,
| !test_and_set_bit(MD_CLUSTER_SEND_LOCK, &cinfo->state));
+ lock_token //for step<1>, block holding EX lock
+ dlm_lock_sync(cinfo->token_lockres, DLM_LOCK_EX); // blocking
3.1>
node A does the "--remove" action and will send a REMOVE msg.
this is blocked by step <2>: MD_CLUSTER_SEND_LOCK is 1.
md_ioctl
+ mddev_lock(mddev) //holding mddev lock, it influences <3.2>
+ hot_remove_disk
remove_disk
sendmsg
lock_comm
wait_event(cinfo->wait,
!test_and_set_bit(MD_CLUSTER_SEND_LOCK, &cinfo->state));//blocking
3.2>
node A receives the METADATA_UPDATED msg which was sent from node B in
step <1>. this is blocked by step <3.1>: the mddev lock is held, so the
mddev_trylock() inside wait_event can never succeed. (btw,
MD_CLUSTER_HOLDING_MUTEX_FOR_RECVD stays ZERO in this scenario.)
process_metadata_update
wait_event(mddev->thread->wqueue,
(got_lock = mddev_trylock(mddev)) ||
test_bit(MD_CLUSTER_HOLDING_MUTEX_FOR_RECVD, &cinfo->state));
Steps <1>, <2>, <3.1> & <3.2> together form a wait cycle, i.e. a deadlock.
How to fix
There are two sides to fix (either one breaks the wait cycle):
1. on the sending side, modify lock_comm to return success/failure.
This makes the mdadm cmd return an error when lock_comm times out.
2. on the receiving side, add error handling to process_metadata_update.
Currently no other msg type can trigger an error, or the error doesn't
need to be returned to the sender, so only process_metadata_update
needs to change.
Either of 1 & 2 alone fixes the hang, but I prefer to fix both sides.
Signed-off-by: Zhao Heming <heming.zhao@suse.com>
---
drivers/md/md-cluster.c | 42 ++++++++++++++++++++++++++---------------
1 file changed, 27 insertions(+), 15 deletions(-)
diff --git a/drivers/md/md-cluster.c b/drivers/md/md-cluster.c
index 4aaf4820b6f6..d59a033e7589 100644
--- a/drivers/md/md-cluster.c
+++ b/drivers/md/md-cluster.c
@@ -523,19 +523,24 @@ static void process_add_new_disk(struct mddev *mddev, struct cluster_msg *cmsg)
}
-static void process_metadata_update(struct mddev *mddev, struct cluster_msg *msg)
+static int process_metadata_update(struct mddev *mddev, struct cluster_msg *msg)
{
- int got_lock = 0;
+ int got_lock = 0, rv;
struct md_cluster_info *cinfo = mddev->cluster_info;
mddev->good_device_nr = le32_to_cpu(msg->raid_slot);
dlm_lock_sync(cinfo->no_new_dev_lockres, DLM_LOCK_CR);
- wait_event(mddev->thread->wqueue,
- (got_lock = mddev_trylock(mddev)) ||
- test_bit(MD_CLUSTER_HOLDING_MUTEX_FOR_RECVD, &cinfo->state));
- md_reload_sb(mddev, mddev->good_device_nr);
- if (got_lock)
- mddev_unlock(mddev);
+ rv = wait_event_timeout(mddev->thread->wqueue,
+ (got_lock = mddev_trylock(mddev)) ||
+ test_bit(MD_CLUSTER_HOLDING_MUTEX_FOR_RECVD, &cinfo->state),
+ msecs_to_jiffies(5000));
+ if (rv) {
+ md_reload_sb(mddev, mddev->good_device_nr);
+ if (got_lock)
+ mddev_unlock(mddev);
+ return 0;
+ }
+ return -1;
}
static void process_remove_disk(struct mddev *mddev, struct cluster_msg *msg)
@@ -578,7 +583,7 @@ static int process_recvd_msg(struct mddev *mddev, struct cluster_msg *msg)
return -1;
switch (le32_to_cpu(msg->type)) {
case METADATA_UPDATED:
- process_metadata_update(mddev, msg);
+ ret = process_metadata_update(mddev, msg);
break;
case CHANGE_CAPACITY:
set_capacity(mddev->gendisk, mddev->array_sectors);
@@ -701,10 +706,15 @@ static int lock_token(struct md_cluster_info *cinfo, bool mddev_locked)
*/
static int lock_comm(struct md_cluster_info *cinfo, bool mddev_locked)
{
- wait_event(cinfo->wait,
- !test_and_set_bit(MD_CLUSTER_SEND_LOCK, &cinfo->state));
+ int rv;
- return lock_token(cinfo, mddev_locked);
+ rv = wait_event_timeout(cinfo->wait,
+ !test_and_set_bit(MD_CLUSTER_SEND_LOCK, &cinfo->state),
+ msecs_to_jiffies(5000));
+ if (rv)
+ return lock_token(cinfo, mddev_locked);
+ else
+ return -1;
}
static void unlock_comm(struct md_cluster_info *cinfo)
@@ -784,9 +794,11 @@ static int sendmsg(struct md_cluster_info *cinfo, struct cluster_msg *cmsg,
{
int ret;
- lock_comm(cinfo, mddev_locked);
- ret = __sendmsg(cinfo, cmsg);
- unlock_comm(cinfo);
+ ret = lock_comm(cinfo, mddev_locked);
+ if (!ret) {
+ ret = __sendmsg(cinfo, cmsg);
+ unlock_comm(cinfo);
+ }
return ret;
}
--
2.27.0
Thread overview: 10+ messages
2020-11-05 13:11 [PATCH 1/2] md/cluster: reshape should returns error when remote doing resyncing job Zhao Heming
2020-11-05 13:11 ` Zhao Heming [this message]
2020-11-07 0:17 ` Song Liu
2020-11-07 3:53 ` heming.zhao
2020-11-08 14:52 [PATCH 0/2] md-cluster bugs fix Zhao Heming
2020-11-08 14:53 ` [PATCH 2/2] md/cluster: fix deadlock when doing reshape job Zhao Heming
2020-11-09 2:02 ` heming.zhao
2020-11-09 18:06 ` Song Liu
2020-11-10 2:24 ` heming.zhao
2020-11-10 6:36 ` Guoqing Jiang
2020-11-11 12:13 ` heming.zhao