* [PATCH v2] md/cluster: fix deadlock when doing reshape job
From: Zhao Heming @ 2020-11-11 15:51 UTC
To: linux-raid, song, guoqing.jiang
Cc: Zhao Heming, lidong.zhong, xni, neilb, colyli
There is a deadlock similar to the one fixed by commit 0ba959774e93
("md-cluster: use sync way to handle METADATA_UPDATED msg").
The issue fixed by this patch differs only in step <c>: in
0ba959774e93 step <c> is "update sb", here it is "mdadm --remove".
```
nodeA                         nodeB
--------------------          --------------------
a.
send METADATA_UPDATED
held token_lockres:EX
                              b.
                              md_do_sync
                               resync_info_update
                                 send RESYNCING
                                  + set MD_CLUSTER_SEND_LOCK
                                  + wait for holding token_lockres:EX

                              c.
                              mdadm /dev/md0 --remove /dev/sdg
                              + held reconfig_mutex
                              + send REMOVE
                                 + wait_event(MD_CLUSTER_SEND_LOCK)

                              d.
                              recv_daemon //METADATA_UPDATED from A
                                process_metadata_update
                                 + (mddev_trylock(mddev) ||
                                    MD_CLUSTER_HOLDING_MUTEX_FOR_RECVD)
                                 //this time, both return false forever.
```
Explanation:
a>
A sends METADATA_UPDATED.
This blocks all other nodes from sending messages in the cluster.
b>
B is doing a sync job, so it sends RESYNCING at intervals.
The send blocks waiting to hold the token_lockres:EX lock.
```
md_do_sync
 raid1_sync_request
  resync_info_update
   sendmsg //with mddev_locked: false
    lock_comm
     + wait_event(cinfo->wait,
     |            !test_and_set_bit(MD_CLUSTER_SEND_LOCK, &cinfo->state));
     + lock_token //for step<a>, block holding EX lock
        + dlm_lock_sync(cinfo->token_lockres, DLM_LOCK_EX); // blocking
```
c>
B performs the "--remove" action and tries to send REMOVE.
This is blocked by step <b>: MD_CLUSTER_SEND_LOCK is 1.
```
md_ioctl
 + mddev_lock(mddev) //holding reconfig_mutex, it influences <d>
 + hot_remove_disk
    remove_disk
     sendmsg //with mddev_locked: true
      lock_comm
       wait_event(cinfo->wait,
                  !test_and_set_bit(MD_CLUSTER_SEND_LOCK, &cinfo->state)); //blocking
```
d>
B receives the METADATA_UPDATED msg sent from A in step <a>.
This is blocked by step <c>: the mddev lock is already held, so the
mddev_trylock() in wait_event can never succeed. (Note that
MD_CLUSTER_HOLDING_MUTEX_FOR_RECVD stays zero in this scenario.)
```
process_metadata_update
 wait_event(mddev->thread->wqueue,
            (got_lock = mddev_trylock(mddev)) ||
            test_bit(MD_CLUSTER_HOLDING_MUTEX_FOR_RECVD, &cinfo->state));
```
Repro steps:
Test env
Nodes A & B share 3 iSCSI luns: sdg/sdh/sdi. Each lun is 1GB. The larger
the disks, the more likely the issue triggers: a longer resync gives a
bigger window to hit it.
Test script
```
ssh root@node2 "mdadm -S --scan"
mdadm -S --scan
for i in {g,h,i};do dd if=/dev/zero of=/dev/sd$i oflag=direct bs=1M \
count=20; done
echo "mdadm create array"
mdadm -C /dev/md0 -b clustered -e 1.2 -n 2 -l mirror /dev/sdg /dev/sdh \
--bitmap-chunk=1M
echo "set up array on node2"
ssh root@node2 "mdadm -A /dev/md0 /dev/sdg /dev/sdh"
sleep 5
mkfs.xfs /dev/md0
mdadm --manage --add /dev/md0 /dev/sdi
mdadm --wait /dev/md0
mdadm --grow --raid-devices=3 /dev/md0
mdadm /dev/md0 --fail /dev/sdg
mdadm /dev/md0 --remove /dev/sdg
mdadm --grow --raid-devices=2 /dev/md0
```
Test result
The test script hangs when executing "mdadm --remove".
```
node1 # ps axj | grep mdadm
1 5423 5227 2231 ? -1 D 0 0:00 mdadm /dev/md0 --remove /dev/sdg
node1 # cat /proc/mdstat
Personalities : [raid1]
md0 : active raid1 sdi[2] sdh[1] sdg[0](F)
      1046528 blocks super 1.2 [2/1] [_U]
      [>....................]  recovery =  0.0% (1/1046528) finish=354.0min speed=47K/sec
      bitmap: 1/1 pages [4KB], 1024KB chunk
unused devices: <none>
node2 # cat /proc/mdstat
Personalities : [raid1]
md0 : active raid1 sdi[2] sdg[0](F) sdh[1]
      1046528 blocks super 1.2 [2/1] [_U]
      bitmap: 1/1 pages [4KB], 1024KB chunk
unused devices: <none>
node1 # echo t > /proc/sysrq-trigger
md0_cluster_rec D 0 5329 2 0x80004000
Call Trace:
__schedule+0x1f6/0x560
? _cond_resched+0x2d/0x40
? schedule+0x4a/0xb0
? process_metadata_update.isra.0+0xdb/0x140 [md_cluster]
? wait_woken+0x80/0x80
? process_recvd_msg+0x113/0x1d0 [md_cluster]
? recv_daemon+0x9e/0x120 [md_cluster]
? md_thread+0x94/0x160 [md_mod]
? wait_woken+0x80/0x80
? md_congested+0x30/0x30 [md_mod]
? kthread+0x115/0x140
? __kthread_bind_mask+0x60/0x60
? ret_from_fork+0x1f/0x40
mdadm D 0 5423 1 0x00004004
Call Trace:
__schedule+0x1f6/0x560
? __schedule+0x1fe/0x560
? schedule+0x4a/0xb0
? lock_comm.isra.0+0x7b/0xb0 [md_cluster]
? wait_woken+0x80/0x80
? remove_disk+0x4f/0x90 [md_cluster]
? hot_remove_disk+0xb1/0x1b0 [md_mod]
? md_ioctl+0x50c/0xba0 [md_mod]
? wait_woken+0x80/0x80
? blkdev_ioctl+0xa2/0x2a0
? block_ioctl+0x39/0x40
? ksys_ioctl+0x82/0xc0
? __x64_sys_ioctl+0x16/0x20
? do_syscall_64+0x5f/0x150
? entry_SYSCALL_64_after_hwframe+0x44/0xa9
md0_resync D 0 5425 2 0x80004000
Call Trace:
__schedule+0x1f6/0x560
? schedule+0x4a/0xb0
? dlm_lock_sync+0xa1/0xd0 [md_cluster]
? wait_woken+0x80/0x80
? lock_token+0x2d/0x90 [md_cluster]
? resync_info_update+0x95/0x100 [md_cluster]
? raid1_sync_request+0x7d3/0xa40 [raid1]
? md_do_sync.cold+0x737/0xc8f [md_mod]
? md_thread+0x94/0x160 [md_mod]
? md_congested+0x30/0x30 [md_mod]
? kthread+0x115/0x140
? __kthread_bind_mask+0x60/0x60
? ret_from_fork+0x1f/0x40
```
How to fix:
Break the sending-side block with a 5 second wait_event_timeout.
Why break only the send side, and not the receive side or both?
*** send side ***
In the current code, sendmsg only fails when __sendmsg returns an error
(i.e. when the dlm layer fails).
After applying this patch there is one new failure case: the 5s timeout,
after which lock_comm returns -1. (wait_event_timeout returns 0 on
timeout and the remaining jiffies otherwise, so a zero return maps to
the failure path.)
All related functions:
resync_bitmap, update_bitmap_size, resync_info_update, remove_disk,
gather_bitmaps
Only one of these ignores the return value: resync_bitmap. It is used in
the leave path. If the msg is not sent out (5s timeout), other nodes
won't learn of the failure event via BITMAP_NEEDS_SYNC. But even if
BITMAP_NEEDS_SYNC is missed, recover_slot(), which is triggered by dlm,
does the same job.
So all the sending-side functions can safely break the deadlock.
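For illustration, here is how a sending-side caller can cope with the new
failure mode (a hypothetical sketch, not part of this patch; the function
name and error choice are illustrative):
```
/*
 * Hypothetical sketch only. With the 5s timeout in lock_comm(),
 * sendmsg() can now return nonzero without the message ever being
 * sent, so a caller must propagate the error instead of assuming
 * delivery.
 */
static int example_send_remove(struct md_cluster_info *cinfo, int slot)
{
	struct cluster_msg cmsg = {
		.type      = cpu_to_le32(REMOVE),
		.raid_slot = cpu_to_le32(slot),
	};

	/* mddev_locked = 1: caller holds reconfig_mutex, as remove_disk() does */
	if (sendmsg(cinfo, &cmsg, 1))
		return -EBUSY;	/* let userspace see the failure and retry */
	return 0;
}
```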
*** receive side ***
Related function: process_metadata_update
The receive side should do as much as possible to handle an incoming msg.
With a 5s timeout in process_metadata_update, an inconsistency could be
triggered. e.g.:
A does --fail and sends METADATA_UPDATED to B.
B receives the msg but hits the 5s timeout, so it never updates the
in-kernel md info, leaving a gap between kernel memory and the on-disk
metadata.
So the receive side should keep the current code.
Signed-off-by: Zhao Heming <heming.zhao@suse.com>
---
v2:
- for clarity, split the v1 patch into two separate patches for review.
- add error handling of remove_disk in hot_remove_disk
- add error handling of lock_comm in all caller
- drop 5s timeout patch code in process_metadata_update
- revise commit log
v1:
- add cover-letter
- add more descriptions in commit log
v0:
- create 2 patches, patch 2/2 is this patch.
---
drivers/md/md-cluster.c | 27 +++++++++++++++++++--------
drivers/md/md.c | 6 ++++--
2 files changed, 23 insertions(+), 10 deletions(-)
diff --git a/drivers/md/md-cluster.c b/drivers/md/md-cluster.c
index 4aaf4820b6f6..06b4c787dd1f 100644
--- a/drivers/md/md-cluster.c
+++ b/drivers/md/md-cluster.c
@@ -701,10 +701,15 @@ static int lock_token(struct md_cluster_info *cinfo, bool mddev_locked)
  */
 static int lock_comm(struct md_cluster_info *cinfo, bool mddev_locked)
 {
-	wait_event(cinfo->wait,
-		   !test_and_set_bit(MD_CLUSTER_SEND_LOCK, &cinfo->state));
+	int rv;
 
-	return lock_token(cinfo, mddev_locked);
+	rv = wait_event_timeout(cinfo->wait,
+				!test_and_set_bit(MD_CLUSTER_SEND_LOCK, &cinfo->state),
+				msecs_to_jiffies(5000));
+	if (rv)
+		return lock_token(cinfo, mddev_locked);
+	else
+		return -1;
 }
 
 static void unlock_comm(struct md_cluster_info *cinfo)
@@ -784,9 +789,11 @@ static int sendmsg(struct md_cluster_info *cinfo, struct cluster_msg *cmsg,
 {
 	int ret;
 
-	lock_comm(cinfo, mddev_locked);
-	ret = __sendmsg(cinfo, cmsg);
-	unlock_comm(cinfo);
+	ret = lock_comm(cinfo, mddev_locked);
+	if (!ret) {
+		ret = __sendmsg(cinfo, cmsg);
+		unlock_comm(cinfo);
+	}
 	return ret;
 }
 
@@ -1255,7 +1262,10 @@ static void update_size(struct mddev *mddev, sector_t old_dev_sectors)
 	int raid_slot = -1;
 
 	md_update_sb(mddev, 1);
-	lock_comm(cinfo, 1);
+	if (lock_comm(cinfo, 1)) {
+		pr_err("%s: lock_comm failed\n", __func__);
+		return;
+	}
 
 	memset(&cmsg, 0, sizeof(cmsg));
 	cmsg.type = cpu_to_le32(METADATA_UPDATED);
@@ -1407,7 +1417,8 @@ static int add_new_disk(struct mddev *mddev, struct md_rdev *rdev)
 	cmsg.type = cpu_to_le32(NEWDISK);
 	memcpy(cmsg.uuid, uuid, 16);
 	cmsg.raid_slot = cpu_to_le32(rdev->desc_nr);
-	lock_comm(cinfo, 1);
+	if (lock_comm(cinfo, 1))
+		return -EAGAIN;
 	ret = __sendmsg(cinfo, &cmsg);
 	if (ret) {
 		unlock_comm(cinfo);
diff --git a/drivers/md/md.c b/drivers/md/md.c
index 74280e353b8f..46da165afde2 100644
--- a/drivers/md/md.c
+++ b/drivers/md/md.c
@@ -6948,8 +6948,10 @@ static int hot_remove_disk(struct mddev *mddev, dev_t dev)
 		goto busy;
 
 kick_rdev:
-	if (mddev_is_clustered(mddev))
-		md_cluster_ops->remove_disk(mddev, rdev);
+	if (mddev_is_clustered(mddev)) {
+		if (md_cluster_ops->remove_disk(mddev, rdev))
+			goto busy;
+	}
 
 	md_kick_rdev_from_array(rdev);
 	set_bit(MD_SB_CHANGE_DEVS, &mddev->sb_flags);
--
2.27.0
* Re: [PATCH v2] md/cluster: fix deadlock when doing reshape job
From: Xiao Ni @ 2020-11-12 5:08 UTC
To: Zhao Heming, linux-raid, song, guoqing.jiang; +Cc: lidong.zhong, neilb, colyli
On 11/11/2020 11:51 PM, Zhao Heming wrote:
> There is a deadlock similar to the one fixed by commit 0ba959774e93
> ("md-cluster: use sync way to handle METADATA_UPDATED msg").
> The issue fixed by this patch differs only in step <c>: in
> 0ba959774e93 step <c> is "update sb", here it is "mdadm --remove".
>
> ... ...
>
> @@ -701,10 +701,15 @@ static int lock_token(struct md_cluster_info *cinfo, bool mddev_locked)
>   */
>  static int lock_comm(struct md_cluster_info *cinfo, bool mddev_locked)
>  {
> -	wait_event(cinfo->wait,
> -		   !test_and_set_bit(MD_CLUSTER_SEND_LOCK, &cinfo->state));
> +	int rv;
>
> -	return lock_token(cinfo, mddev_locked);
> +	rv = wait_event_timeout(cinfo->wait,
> +				!test_and_set_bit(MD_CLUSTER_SEND_LOCK, &cinfo->state),
> +				msecs_to_jiffies(5000));
> +	if (rv)
> +		return lock_token(cinfo, mddev_locked);
> +	else
> +		return -1;
>  }
Hi Heming
Can we handle this problem the way metadata_update_start does? lock_comm
and metadata_update_start are the two users that want to take the token
lock, and lock_comm can do the same thing metadata_update_start does. If
so, process_recvd_msg can call md_reload_sb without waiting. All threads
can make progress once the initiating node releases the token lock:
resync can send its message and clear MD_CLUSTER_SEND_LOCK, and then
lock_comm can go on working. This way all threads complete successfully,
without introducing a failure path.
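For context, the pattern being referred to looks roughly like this (a
simplified paraphrase of metadata_update_start() from md-cluster.c; the
exact code differs across kernel versions, so treat it as a sketch):
```
/*
 * Simplified paraphrase of metadata_update_start() (md-cluster.c).
 * The key point: set MD_CLUSTER_HOLDING_MUTEX_FOR_RECVD and wake the
 * mddev thread *before* sleeping on the token, so the receive side's
 * process_metadata_update() can make progress even while this thread
 * holds reconfig_mutex.
 */
static int metadata_update_start(struct mddev *mddev)
{
	struct md_cluster_info *cinfo = mddev->cluster_info;
	int ret;

	ret = test_and_set_bit_lock(MD_CLUSTER_HOLDING_MUTEX_FOR_RECVD,
				    &cinfo->state);
	WARN_ON_ONCE(ret);
	md_wakeup_thread(mddev->thread);

	wait_event(cinfo->wait,
		   !test_and_set_bit(MD_CLUSTER_SEND_LOCK, &cinfo->state) ||
		   test_and_clear_bit(MD_CLUSTER_SEND_LOCKED_ALREADY, &cinfo->state));

	/* If the token is already held (e.g. ADD_NEW_DISK in flight), done */
	if (cinfo->token_lockres->mode == DLM_LOCK_EX) {
		clear_bit_unlock(MD_CLUSTER_HOLDING_MUTEX_FOR_RECVD, &cinfo->state);
		return 0;
	}

	ret = lock_token(cinfo, 1);
	clear_bit_unlock(MD_CLUSTER_HOLDING_MUTEX_FOR_RECVD, &cinfo->state);
	return ret;
}
```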
>
> ... ...
>
> diff --git a/drivers/md/md.c b/drivers/md/md.c
> index 74280e353b8f..46da165afde2 100644
> --- a/drivers/md/md.c
> +++ b/drivers/md/md.c
> @@ -6948,8 +6948,10 @@ static int hot_remove_disk(struct mddev *mddev, dev_t dev)
>  		goto busy;
>
>  kick_rdev:
> -	if (mddev_is_clustered(mddev))
> -		md_cluster_ops->remove_disk(mddev, rdev);
> +	if (mddev_is_clustered(mddev)) {
> +		if (md_cluster_ops->remove_disk(mddev, rdev))
> +			goto busy;
> +	}
>
>  	md_kick_rdev_from_array(rdev);
>  	set_bit(MD_SB_CHANGE_DEVS, &mddev->sb_flags);
These changes are not related to the deadlock problem itself. It's better
to file a separate patch for checking the return value.
Best Regards
Xiao
* Re: [PATCH v2] md/cluster: fix deadlock when doing reshape job
From: heming.zhao @ 2020-11-12 11:27 UTC
To: Xiao Ni, linux-raid, song, guoqing.jiang; +Cc: lidong.zhong, neilb, colyli
Hello,
On 11/12/20 1:08 PM, Xiao Ni wrote:
>
>
> On 11/11/2020 11:51 PM, Zhao Heming wrote:
>> There is a deadlock similar to the one fixed by commit 0ba959774e93
>> ("md-cluster: use sync way to handle METADATA_UPDATED msg").
>> ... ...
> Hi Heming
>
> Can we handle this problem the way metadata_update_start does? lock_comm
> and metadata_update_start are the two users that want to take the token
> lock, and lock_comm can do the same thing metadata_update_start does. If
> so, process_recvd_msg can call md_reload_sb without waiting. All threads
> can make progress once the initiating node releases the token lock:
> resync can send its message and clear MD_CLUSTER_SEND_LOCK, and then
> lock_comm can go on working. This way all threads complete successfully,
> without introducing a failure path.
>
Currently MD_CLUSTER_SEND_LOCKED_ALREADY is only used for adding a new disk.
(Please refer to Documentation/driver-api/md/md-cluster.rst, section
"5. Adding a new Device".)
During the ADD_NEW_DISK process, md-cluster blocks all other msg sending
until metadata_update_finish calls unlock_comm, roughly as sketched below.
With your idea, md-cluster would allow messages to be sent concurrently. I'm
not very familiar with all raid1 scenarios, but at least this breaks the rule
for handling ADD_NEW_DISK: it would allow other msgs to be sent during
ADD_NEW_DISK.
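A simplified sketch of that rule (paraphrased; the helper name here is
illustrative, see add_new_disk() in md-cluster.c for the real code):
```
/*
 * Illustrative sketch of the ADD_NEW_DISK rule. On success the real
 * add_new_disk() returns *without* calling unlock_comm(); it sets
 * MD_CLUSTER_SEND_LOCKED_ALREADY instead, so the communication channel
 * stays locked until the follow-up metadata update finishes
 * (metadata_update_finish() -> unlock_comm()). No other message can be
 * sent in between.
 */
static int add_new_disk_sketch(struct md_cluster_info *cinfo,
			       struct cluster_msg *cmsg)
{
	int ret = lock_comm(cinfo, 1);

	if (ret)
		return ret;
	ret = __sendmsg(cinfo, cmsg);
	if (ret) {
		unlock_comm(cinfo);	/* failure: release immediately */
		return ret;
	}
	/* success: hold comm across the whole ADD_NEW_DISK sequence */
	set_bit(MD_CLUSTER_SEND_LOCKED_ALREADY, &cinfo->state);
	return 0;
}
```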
>> static void unlock_comm(struct md_cluster_info *cinfo)
>> @@ -784,9 +789,11 @@ static int sendmsg(struct md_cluster_info *cinfo, struct cluster_msg *cmsg,
>> {
>> int ret;
>> - lock_comm(cinfo, mddev_locked);
>> ... ...
>> + if (mddev_is_clustered(mddev)) {
>> + if (md_cluster_ops->remove_disk(mddev, rdev))
>> + goto busy;
>> + }
>> md_kick_rdev_from_array(rdev);
>> set_bit(MD_SB_CHANGE_DEVS, &mddev->sb_flags);
> These changes are not related to the deadlock problem itself. It's better
> to file a separate patch for checking the return value.
>
In my opinion we should include this code in this patch: to fully fix the
deadlock, the md layer must return the error to userspace.
That said, your comments led me to find other potential issues in
md-cluster. There are still cluster_ops APIs whose callers ignore the
error return:
.resync_finish - may cause other nodes to never clear MD_RESYNCING_REMOTE.
.resync_info_update - this error can safely be ignored.
.metadata_update_finish - may cause other nodes' in-kernel md info to be
inconsistent with the on-disk metadata.
Maybe I or someone else can fix these in the future.
> Best Regards
> Xiao
>
* Re: [PATCH v2] md/cluster: fix deadlock when doing reshape job
From: Xiao Ni @ 2020-11-13 2:50 UTC
To: heming zhao; +Cc: linux-raid, song, guoqing jiang, lidong zhong, neilb, colyli
----- Original Message -----
> From: "heming zhao" <heming.zhao@suse.com>
> To: "Xiao Ni" <xni@redhat.com>, linux-raid@vger.kernel.org, song@kernel.org, "guoqing jiang"
> <guoqing.jiang@cloud.ionos.com>
> Cc: "lidong zhong" <lidong.zhong@suse.com>, neilb@suse.de, colyli@suse.de
> Sent: Thursday, November 12, 2020 7:27:46 PM
> Subject: Re: [PATCH v2] md/cluster: fix deadlock when doing reshape job
>
> Hello,
>
> On 11/12/20 1:08 PM, Xiao Ni wrote:
> > ... ...
>
>
> Currently MD_CLUSTER_SEND_LOCKED_ALREADY only for adding a new disk.
> (please refer Documentation/driver-api/md/md-cluster.rst section: 5. Adding a
> new Device")
> During ADD_NEW_DISK process, md-cluster will block all other msg sending
> until metadata_update_finish calls
> unlock_comm.
>
> With your idea, md-cluster will allow to concurrently send msg. I'm not very
> familiar with all raid1 scenarios.
> But at least, you break the rule of handling ADD_NEW_DISK. it will allow send
> other msg during ADD_NEW_DISK.
Hi Heming
We don't need to change MD_CLUSTER_SEND_LOCKED_ALREADY. Is it ok to do something like this:
diff --git a/drivers/md/md-cluster.c b/drivers/md/md-cluster.c
index 4aaf482..f6f576b 100644
--- a/drivers/md/md-cluster.c
+++ b/drivers/md/md-cluster.c
@@ -664,29 +664,12 @@ static void recv_daemon(struct md_thread *thread)
  * Takes the lock on the TOKEN lock resource so no other
  * node can communicate while the operation is underway.
  */
-static int lock_token(struct md_cluster_info *cinfo, bool mddev_locked)
+static int lock_token(struct md_cluster_info *cinfo)
 {
-	int error, set_bit = 0;
-	struct mddev *mddev = cinfo->mddev;
+	int error;
 
-	/*
-	 * If resync thread run after raid1d thread, then process_metadata_update
-	 * could not continue if raid1d held reconfig_mutex (and raid1d is blocked
-	 * since another node already got EX on Token and waitting the EX of Ack),
-	 * so let resync wake up thread in case flag is set.
-	 */
-	if (mddev_locked && !test_bit(MD_CLUSTER_HOLDING_MUTEX_FOR_RECVD,
-				      &cinfo->state)) {
-		error = test_and_set_bit_lock(MD_CLUSTER_HOLDING_MUTEX_FOR_RECVD,
-					      &cinfo->state);
-		WARN_ON_ONCE(error);
-		md_wakeup_thread(mddev->thread);
-		set_bit = 1;
-	}
 	error = dlm_lock_sync(cinfo->token_lockres, DLM_LOCK_EX);
-	if (set_bit)
-		clear_bit_unlock(MD_CLUSTER_HOLDING_MUTEX_FOR_RECVD, &cinfo->state);
-
 	if (error)
 		pr_err("md-cluster(%s:%d): failed to get EX on TOKEN (%d)\n",
 		       __func__, __LINE__, error);
@@ -701,10 +684,30 @@ static int lock_token(struct md_cluster_info *cinfo, bool mddev_locked)
  */
 static int lock_comm(struct md_cluster_info *cinfo, bool mddev_locked)
 {
+	int ret, error, set_bit = 0;
+	struct mddev *mddev = cinfo->mddev;
+
+	/*
+	 * If resync thread run after raid1d thread, then process_metadata_update
+	 * could not continue if raid1d held reconfig_mutex (and raid1d is blocked
+	 * since another node already got EX on Token and waitting the EX of Ack),
+	 * so let resync wake up thread in case flag is set.
+	 */
+	if (mddev_locked && !test_bit(MD_CLUSTER_HOLDING_MUTEX_FOR_RECVD,
+				      &cinfo->state)) {
+		error = test_and_set_bit_lock(MD_CLUSTER_HOLDING_MUTEX_FOR_RECVD,
+					      &cinfo->state);
+		WARN_ON_ONCE(error);
+		md_wakeup_thread(mddev->thread);
+		set_bit = 1;
+	}
+
 	wait_event(cinfo->wait,
 		   !test_and_set_bit(MD_CLUSTER_SEND_LOCK, &cinfo->state));
 
-	return lock_token(cinfo, mddev_locked);
+	ret = lock_token(cinfo);
+	if (set_bit)
+		clear_bit_unlock(MD_CLUSTER_HOLDING_MUTEX_FOR_RECVD, &cinfo->state);
+	return ret;
 }
 
 static void unlock_comm(struct md_cluster_info *cinfo)
> ... ...
* Re: [PATCH v2] md/cluster: fix deadlock when doing reshape job
From: heming.zhao @ 2020-11-13 13:23 UTC
To: Xiao Ni; +Cc: linux-raid, song, guoqing jiang, lidong zhong, neilb, colyli
On 11/13/20 10:50 AM, Xiao Ni wrote:
>
> Hi Heming
>
> We don't need to change MD_CLUSTER_SEND_LOCKED_ALREADY. Is it ok to do
> something like this:
>
> ... ...
>
Thank you for your comments and the idea. I fully understand your
solution, and it's better than mine.
I will use this idea for the v3 patch.