* [PATCH] md-cluster: fix use-after-free issue when removing rdev
From: Heming Zhao @ 2021-04-08 3:01 UTC
To: linux-raid, song
Cc: Heming Zhao, guoqing.jiang, lidong.zhong, xni, colyli, ghe
md_kick_rdev_from_array() removes rdev from the list while it is being
iterated, so the iteration must use rdev_for_each_safe().
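For context (this illustration is not part of the patch): rdev_for_each()
is built on list_for_each_entry(), which loads the next element from the
current one at the top of every pass, while rdev_for_each_safe() /
list_for_each_entry_safe() caches the successor first, so the loop body
may free the current entry. The RBP value 6b6b6b6b6b6b6b6b in the oops
below is the slab poison pattern, consistent with such a read of freed
memory. A minimal userspace sketch of the two patterns (names like
"node" are hypothetical, chosen for illustration only):
```c
/*
 * Standalone userspace sketch (not kernel code): why the _safe variant
 * survives deletion of the current node.
 */
#include <stdio.h>
#include <stdlib.h>

struct node { int id; struct node *next; };

int main(void)
{
	struct node *head = NULL, *n, *tmp;
	int i;

	for (i = 3; i > 0; i--) {	/* build the list 1 -> 2 -> 3 */
		n = malloc(sizeof(*n));
		n->id = i;
		n->next = head;
		head = n;
	}

	/*
	 * Unsafe pattern (what rdev_for_each amounts to):
	 *	for (n = head; n; n = n->next)
	 *		free(n);	// n->next then reads freed memory
	 *
	 * Safe pattern (what rdev_for_each_safe amounts to): cache the
	 * successor before the body gets a chance to free the current node.
	 */
	for (n = head; n; n = tmp) {
		tmp = n->next;
		printf("kicking %d\n", n->id);
		free(n);		/* removing mid-walk is now fine */
	}
	return 0;
}
```
Note the _safe variant only protects against the loop body itself removing
the current entry; removal from another context still needs the usual
mddev locking.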
How to trigger:
```
for i in {1..20}; do
echo ==== $i `date` ====;
mdadm -Ss && ssh ${node2} "mdadm -Ss"
wipefs -a /dev/sda /dev/sdb
mdadm -CR /dev/md0 -b clustered -e 1.2 -n 2 -l 1 /dev/sda \
/dev/sdb --assume-clean
ssh ${node2} "mdadm -A /dev/md0 /dev/sda /dev/sdb"
mdadm --wait /dev/md0
ssh ${node2} "mdadm --wait /dev/md0"
mdadm --manage /dev/md0 --fail /dev/sda --remove /dev/sda
sleep 1
done
```
Crash stack:
```
stack segment: 0000 [#1] SMP
... ...
RIP: 0010:md_check_recovery+0x1e8/0x570 [md_mod]
... ...
RSP: 0018:ffffb149807a7d68 EFLAGS: 00010207
RAX: 0000000000000000 RBX: ffff9d494c180800 RCX: ffff9d490fc01e50
RDX: fffff047c0ed8308 RSI: 0000000000000246 RDI: 0000000000000246
RBP: 6b6b6b6b6b6b6b6b R08: ffff9d490fc01e40 R09: 0000000000000000
R10: 0000000000000001 R11: 0000000000000001 R12: 0000000000000000
R13: ffff9d494c180818 R14: ffff9d493399ef38 R15: ffff9d4933a1d800
FS: 0000000000000000(0000) GS:ffff9d494f700000(0000) knlGS:0000000000000000
CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 00007fe68cab9010 CR3: 000000004c6be001 CR4: 00000000003706e0
DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
Call Trace:
raid1d+0x5c/0xd40 [raid1]
? finish_task_switch+0x75/0x2a0
? lock_timer_base+0x67/0x80
? try_to_del_timer_sync+0x4d/0x80
? del_timer_sync+0x41/0x50
? schedule_timeout+0x254/0x2d0
? md_start_sync+0xe0/0xe0 [md_mod]
? md_thread+0x127/0x160 [md_mod]
md_thread+0x127/0x160 [md_mod]
? wait_woken+0x80/0x80
kthread+0x10d/0x130
? kthread_park+0xa0/0xa0
ret_from_fork+0x1f/0x40
```
Signed-off-by: Heming Zhao <heming.zhao@suse.com>
Reviewed-by: Gang He <ghe@suse.com>
---
drivers/md/md.c | 8 ++++----
1 file changed, 4 insertions(+), 4 deletions(-)
diff --git a/drivers/md/md.c b/drivers/md/md.c
index 21da0c48f6c2..9892c13cdfc8 100644
--- a/drivers/md/md.c
+++ b/drivers/md/md.c
@@ -9251,11 +9251,11 @@ void md_check_recovery(struct mddev *mddev)
}
if (mddev_is_clustered(mddev)) {
- struct md_rdev *rdev;
+ struct md_rdev *rdev, *tmp;
/* kick the device if another node issued a
* remove disk.
*/
- rdev_for_each(rdev, mddev) {
+ rdev_for_each_safe(rdev, tmp, mddev) {
if (test_and_clear_bit(ClusterRemove, &rdev->flags) &&
rdev->raid_disk < 0)
md_kick_rdev_from_array(rdev);
@@ -9569,7 +9569,7 @@ static int __init md_init(void)
static void check_sb_changes(struct mddev *mddev, struct md_rdev *rdev)
{
struct mdp_superblock_1 *sb = page_address(rdev->sb_page);
- struct md_rdev *rdev2;
+ struct md_rdev *rdev2, *tmp;
int role, ret;
char b[BDEVNAME_SIZE];
@@ -9586,7 +9586,7 @@ static void check_sb_changes(struct mddev *mddev, struct md_rdev *rdev)
}
/* Check for change of roles in the active devices */
- rdev_for_each(rdev2, mddev) {
+ rdev_for_each_safe(rdev2, tmp, mddev) {
if (test_bit(Faulty, &rdev2->flags))
continue;
--
2.30.0
* Re: [PATCH] md-cluster: fix use-after-free issue when removing rdev
From: Paul Menzel @ 2021-04-08 5:09 UTC
To: Heming Zhao
Cc: guoqing.jiang, lidong.zhong, xni, colyli, ghe, linux-raid, Song Liu
Dear Heming,
Thank you for the patch.
Am 08.04.21 um 05:01 schrieb Heming Zhao:
> md_kick_rdev_from_array() removes rdev from the list while it is being
> iterated, so the iteration must use rdev_for_each_safe().
>
> How to trigger:
>
> ```
> for i in {1..20}; do
> echo ==== $i `date` ====;
>
> mdadm -Ss && ssh ${node2} "mdadm -Ss"
> wipefs -a /dev/sda /dev/sdb
>
> mdadm -CR /dev/md0 -b clustered -e 1.2 -n 2 -l 1 /dev/sda \
> /dev/sdb --assume-clean
> ssh ${node2} "mdadm -A /dev/md0 /dev/sda /dev/sdb"
> mdadm --wait /dev/md0
> ssh ${node2} "mdadm --wait /dev/md0"
>
> mdadm --manage /dev/md0 --fail /dev/sda --remove /dev/sda
> sleep 1
> done
> ```
In the test script, I do not understand what node2, which you log in to
over SSH, is used for.
> Crash stack:
>
> ```
> stack segment: 0000 [#1] SMP
> ... ...
> RIP: 0010:md_check_recovery+0x1e8/0x570 [md_mod]
> ... ...
> RSP: 0018:ffffb149807a7d68 EFLAGS: 00010207
> RAX: 0000000000000000 RBX: ffff9d494c180800 RCX: ffff9d490fc01e50
> RDX: fffff047c0ed8308 RSI: 0000000000000246 RDI: 0000000000000246
> RBP: 6b6b6b6b6b6b6b6b R08: ffff9d490fc01e40 R09: 0000000000000000
> R10: 0000000000000001 R11: 0000000000000001 R12: 0000000000000000
> R13: ffff9d494c180818 R14: ffff9d493399ef38 R15: ffff9d4933a1d800
> FS: 0000000000000000(0000) GS:ffff9d494f700000(0000) knlGS:0000000000000000
> CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
> CR2: 00007fe68cab9010 CR3: 000000004c6be001 CR4: 00000000003706e0
> DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
> DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
> Call Trace:
> raid1d+0x5c/0xd40 [raid1]
> ? finish_task_switch+0x75/0x2a0
> ? lock_timer_base+0x67/0x80
> ? try_to_del_timer_sync+0x4d/0x80
> ? del_timer_sync+0x41/0x50
> ? schedule_timeout+0x254/0x2d0
> ? md_start_sync+0xe0/0xe0 [md_mod]
> ? md_thread+0x127/0x160 [md_mod]
> md_thread+0x127/0x160 [md_mod]
> ? wait_woken+0x80/0x80
> kthread+0x10d/0x130
> ? kthread_park+0xa0/0xa0
> ret_from_fork+0x1f/0x40
> ```
>
> Signed-off-by: Heming Zhao <heming.zhao@suse.com>
> Reviewed-by: Gang He <ghe@suse.com>
If there is a commit your patch is fixing, please add a Fixes: tag.
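For reference, the tag names the commit that introduced the bug, in this
form (the hash and subject here are placeholders, since the offending
commit is not identified in this thread):
```
Fixes: 123456789abc ("subject of the offending commit")
```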
> ---
> drivers/md/md.c | 8 ++++----
> 1 file changed, 4 insertions(+), 4 deletions(-)
>
> diff --git a/drivers/md/md.c b/drivers/md/md.c
> index 21da0c48f6c2..9892c13cdfc8 100644
> --- a/drivers/md/md.c
> +++ b/drivers/md/md.c
> @@ -9251,11 +9251,11 @@ void md_check_recovery(struct mddev *mddev)
> }
>
> if (mddev_is_clustered(mddev)) {
> - struct md_rdev *rdev;
> + struct md_rdev *rdev, *tmp;
> /* kick the device if another node issued a
> * remove disk.
> */
> - rdev_for_each(rdev, mddev) {
> + rdev_for_each_safe(rdev, tmp, mddev) {
> if (test_and_clear_bit(ClusterRemove, &rdev->flags) &&
> rdev->raid_disk < 0)
> md_kick_rdev_from_array(rdev);
> @@ -9569,7 +9569,7 @@ static int __init md_init(void)
> static void check_sb_changes(struct mddev *mddev, struct md_rdev *rdev)
> {
> struct mdp_superblock_1 *sb = page_address(rdev->sb_page);
> - struct md_rdev *rdev2;
> + struct md_rdev *rdev2, *tmp;
> int role, ret;
> char b[BDEVNAME_SIZE];
>
> @@ -9586,7 +9586,7 @@ static void check_sb_changes(struct mddev *mddev, struct md_rdev *rdev)
> }
>
> /* Check for change of roles in the active devices */
> - rdev_for_each(rdev2, mddev) {
> + rdev_for_each_safe(rdev2, tmp, mddev) {
> if (test_bit(Faulty, &rdev2->flags))
> continue;
>
Kind regards,
Paul
* Re: [PATCH] md-cluster: fix use-after-free issue when removing rdev
From: heming.zhao @ 2021-04-08 5:52 UTC
To: Paul Menzel; +Cc: lidong.zhong, xni, colyli, ghe, linux-raid, Song Liu
On 4/8/21 1:09 PM, Paul Menzel wrote:
> Dear Heming,
>
>
> Thank you for the patch.
>
> Am 08.04.21 um 05:01 schrieb Heming Zhao:
>> md_kick_rdev_from_array() removes rdev from the list while it is being
>> iterated, so the iteration must use rdev_for_each_safe().
>>
>> How to trigger:
>>
>> ```
>> for i in {1..20}; do
>> echo ==== $i `date` ====;
>>
>> mdadm -Ss && ssh ${node2} "mdadm -Ss"
>> wipefs -a /dev/sda /dev/sdb
>>
>> mdadm -CR /dev/md0 -b clustered -e 1.2 -n 2 -l 1 /dev/sda \
>> /dev/sdb --assume-clean
>> ssh ${node2} "mdadm -A /dev/md0 /dev/sda /dev/sdb"
>> mdadm --wait /dev/md0
>> ssh ${node2} "mdadm --wait /dev/md0"
>>
>> mdadm --manage /dev/md0 --fail /dev/sda --remove /dev/sda
>> sleep 1
>> done
>> ```
>
> In the test script, I do not understand what node2, which you log in to
> over SSH, is used for.
>
The bug can only be triggered in a cluster environment with two nodes.
The script runs on node1 and uses ssh to execute some commands on node2.
${node2} stands for node2's IP address, e.g.: ssh 192.168.0.3 "mdadm --wait ..."
>> ... ...
>>
>> Signed-off-by: Heming Zhao <heming.zhao@suse.com>
>> Reviewed-by: Gang He <ghe@suse.com>
>
> If there is a commit your patch is fixing, please add a Fixes: tag.
>
OK, I forgot it. I will send a v2 patch later.
Thanks,
Heming
* Re: [PATCH] md-cluster: fix use-after-free issue when removing rdev
From: Paul Menzel @ 2021-04-08 6:33 UTC
To: Heming Zhao; +Cc: lidong.zhong, xni, colyli, ghe, linux-raid, Song Liu
Dear Heming,
Am 08.04.21 um 07:52 schrieb heming.zhao@suse.com:
> On 4/8/21 1:09 PM, Paul Menzel wrote:
>> Am 08.04.21 um 05:01 schrieb Heming Zhao:
>>> md_kick_rdev_from_array() removes rdev from the list while it is being
>>> iterated, so the iteration must use rdev_for_each_safe().
>>>
>>> How to trigger:
>>>
>>> ```
>>> for i in {1..20}; do
>>> echo ==== $i `date` ====;
>>>
>>> mdadm -Ss && ssh ${node2} "mdadm -Ss"
>>> wipefs -a /dev/sda /dev/sdb
>>>
>>> mdadm -CR /dev/md0 -b clustered -e 1.2 -n 2 -l 1 /dev/sda \
>>> /dev/sdb --assume-clean
>>> ssh ${node2} "mdadm -A /dev/md0 /dev/sda /dev/sdb"
>>> mdadm --wait /dev/md0
>>> ssh ${node2} "mdadm --wait /dev/md0"
>>>
>>> mdadm --manage /dev/md0 --fail /dev/sda --remove /dev/sda
>>> sleep 1
>>> done
>>> ```
>>
>> In the test script, I do not understand what node2, which you log in to
>> over SSH, is used for.
>
> The bug can only be triggered in a cluster environment with two nodes.
> The script runs on node1 and uses ssh to execute some commands on node2.
> ${node2} stands for node2's IP address, e.g.: ssh 192.168.0.3 "mdadm
> --wait ..."
Please excuse my ignorance. I guess some other component is needed to
connect the two RAID devices on each node? At least you never tell mdadm
directly to use *node2*. Reading *Cluster Multi-device (Cluster MD)* [1],
it seems a resource agent is needed.
>>> ... ...
>>>
>>> Signed-off-by: Heming Zhao <heming.zhao@suse.com>
>>> Reviewed-by: Gang He <ghe@suse.com>
>>
>> If there is a commit your patch is fixing, please add a Fixes: tag.
>>
>
> OK, I forgot it. I will send a v2 patch later.
Awesome.
Kind regards,
Paul
[1]:
https://documentation.suse.com/sle-ha/12-SP4/html/SLE-HA-all/cha-ha-cluster-md.html
* Re: [PATCH] md-cluster: fix use-after-free issue when removing rdev
From: heming.zhao @ 2021-04-08 6:45 UTC
To: Paul Menzel; +Cc: lidong.zhong, xni, colyli, ghe, linux-raid, Song Liu
On 4/8/21 2:33 PM, Paul Menzel wrote:
> Dear Heming,
>
>
> Am 08.04.21 um 07:52 schrieb heming.zhao@suse.com:
>> On 4/8/21 1:09 PM, Paul Menzel wrote:
>
>>> Am 08.04.21 um 05:01 schrieb Heming Zhao:
>>>> md_kick_rdev_from_array() removes rdev from the list while it is being
>>>> iterated, so the iteration must use rdev_for_each_safe().
>>>>
>>>> How to trigger:
>>>>
>>>> ```
>>>> for i in {1..20}; do
>>>> echo ==== $i `date` ====;
>>>>
>>>> mdadm -Ss && ssh ${node2} "mdadm -Ss"
>>>> wipefs -a /dev/sda /dev/sdb
>>>>
>>>> mdadm -CR /dev/md0 -b clustered -e 1.2 -n 2 -l1 /dev/sda \
>>>> /dev/sdb --assume-clean
>>>> ssh ${node2} "mdadm -A /dev/md0 /dev/sda /dev/sdb"
>>>> mdadm --wait /dev/md0
>>>> ssh ${node2} "mdadm --wait /dev/md0"
>>>>
>>>> mdadm --manage /dev/md0 --fail /dev/sda --remove /dev/sda
>>>> sleep 1
>>>> done
>>>> ```
>>>
>>> In the test script, I do not understand what node2, which you log in to over SSH, is used for.
>>
>> The bug can only be triggered in a cluster environment with two nodes.
>> The script runs on node1 and uses ssh to execute some commands on node2.
>> ${node2} stands for node2's IP address, e.g.: ssh 192.168.0.3 "mdadm --wait ..."
>
> Please excuse my ignorance. I guess some other component is needed to connect the two RAID devices on each node? At least you never tell mdadm directly to use *node2*. Reading *Cluster Multi-device (Cluster MD)* [1], it seems a resource agent is needed.
>
> ... ...
>
> [1]: https://documentation.suse.com/sle-ha/12-SP4/html/SLE-HA-all/cha-ha-cluster-md.html
>
Your reference is right. This bug is cluster-specific, and I also mentioned "md-cluster" in the patch subject.
In my opinion, people interested in this patch should already have cluster MD knowledge,
and I think the above script contains enough info to show the reproduction steps.
Thanks
Heming
* Re: [PATCH] md-cluster: fix use-after-free issue when removing rdev
From: heming.zhao @ 2021-04-08 7:12 UTC
To: Paul Menzel; +Cc: lidong.zhong, xni, colyli, ghe, linux-raid, Song Liu
On 4/8/21 2:45 PM, heming.zhao@suse.com wrote:
> On 4/8/21 2:33 PM, Paul Menzel wrote:
>> Dear Heming,
>>
>>
>> Am 08.04.21 um 07:52 schrieb heming.zhao@suse.com:
>>> On 4/8/21 1:09 PM, Paul Menzel wrote:
>>
>>>> Am 08.04.21 um 05:01 schrieb Heming Zhao:
>>>>> md_kick_rdev_from_array() removes rdev from the list while it is being
>>>>> iterated, so the iteration must use rdev_for_each_safe().
>>>>>
>>>>> How to trigger:
>>>>>
>>>>> ```
>>>>> for i in {1..20}; do
>>>>> echo ==== $i `date` ====;
>>>>>
>>>>> mdadm -Ss && ssh ${node2} "mdadm -Ss"
>>>>> wipefs -a /dev/sda /dev/sdb
>>>>>
>>>>> mdadm -CR /dev/md0 -b clustered -e 1.2 -n 2 -l1 /dev/sda \
>>>>> /dev/sdb --assume-clean
>>>>> ssh ${node2} "mdadm -A /dev/md0 /dev/sda /dev/sdb"
>>>>> mdadm --wait /dev/md0
>>>>> ssh ${node2} "mdadm --wait /dev/md0"
>>>>>
>>>>> mdadm --manage /dev/md0 --fail /dev/sda --remove /dev/sda
>>>>> sleep 1
>>>>> done
>>>>> ```
>>>>
>>>> In the test script, I do not understand what node2, which you log in to over SSH, is used for.
>>>
>>> The bug can only be triggered in a cluster environment with two nodes.
>>> The script runs on node1 and uses ssh to execute some commands on node2.
>>> ${node2} stands for node2's IP address, e.g.: ssh 192.168.0.3 "mdadm --wait ..."
>>
>> Please excuse my ignorance. I guess some other component is needed to connect the two RAID devices on each node? At least you never tell mdadm directly to use *node2*. Reading *Cluster Multi-device (Cluster MD)* [1], it seems a resource agent is needed.
>>
>> ... ...
>> [1]: https://documentation.suse.com/sle-ha/12-SP4/html/SLE-HA-all/cha-ha-cluster-md.html
>>
>
> Your reference is right. This bug is cluster-specific, and I also mentioned "md-cluster" in the patch subject.
> In my opinion, people interested in this patch should already have cluster MD knowledge,
> and I think the above script contains enough info to show the reproduction steps.
>
Hello,
I will add more info about the test script in the v2 patch.