* [dm-devel] [PATCH] libmultipath: check if adopt_path() really added current path
@ 2021-02-02 19:57 mwilck
  2021-02-02 20:40 ` Benjamin Marzinski
  2021-02-03  1:33 ` lixiaokeng
  0 siblings, 2 replies; 8+ messages in thread
From: mwilck @ 2021-02-02 19:57 UTC (permalink / raw)
  To: lixiaokeng, Benjamin Marzinski, Christophe Varoqui; +Cc: dm-devel, Martin Wilck

From: Martin Wilck <mwilck@suse.com>

The description of 2d32d6f ("libmultipath: adopt_paths(): don't bail out on
single path failure") said "we need to check after successful call to
adopt_paths() if that specific path had been actually added, and fail in the
caller otherwise". But the commit failed to actually implement this check.
Instead, it just checked whether the path was a member of the pathvec, which will
almost always be the case.

Fix it by checking what actually needs to be checked: membership of the
path to be added in mpp->paths.

Fixes: 2d32d6f ("libmultipath: adopt_paths(): don't bail out on single path failure")
Signed-off-by: Martin Wilck <mwilck@suse.com>
---

@lixiaokeng, I believe that this fixes the issue you mentioned in your
post "libmultipath: fix NULL dereference in get_be64".

---
 libmultipath/structs_vec.c | 4 ++--
 multipathd/main.c          | 4 ++--
 2 files changed, 4 insertions(+), 4 deletions(-)

diff --git a/libmultipath/structs_vec.c b/libmultipath/structs_vec.c
index f7f45f1..47b1d03 100644
--- a/libmultipath/structs_vec.c
+++ b/libmultipath/structs_vec.c
@@ -707,8 +707,8 @@ struct multipath *add_map_with_path(struct vectors *vecs, struct path *pp,
 		goto out;
 	mpp->size = pp->size;
 
-	if (adopt_paths(vecs->pathvec, mpp) ||
-	    find_slot(vecs->pathvec, pp) == -1)
+	if (adopt_paths(vecs->pathvec, mpp) || pp->mpp != mpp ||
+	    find_slot(mpp->paths, pp) == -1)
 		goto out;
 
 	if (add_vec) {
diff --git a/multipathd/main.c b/multipathd/main.c
index 134185f..425492a 100644
--- a/multipathd/main.c
+++ b/multipathd/main.c
@@ -1008,8 +1008,8 @@ rescan:
 	if (mpp) {
 		condlog(4,"%s: adopting all paths for path %s",
 			mpp->alias, pp->dev);
-		if (adopt_paths(vecs->pathvec, mpp) ||
-		    find_slot(vecs->pathvec, pp) == -1)
+		if (adopt_paths(vecs->pathvec, mpp) || pp->mpp != mpp ||
+		    find_slot(mpp->paths, pp) == -1)
 			goto fail; /* leave path added to pathvec */
 
 		verify_paths(mpp);
-- 
2.29.2


--
dm-devel mailing list
dm-devel@redhat.com
https://www.redhat.com/mailman/listinfo/dm-devel



* Re: [dm-devel] [PATCH] libmultipath: check if adopt_path() really added current path
  2021-02-02 19:57 [dm-devel] [PATCH] libmultipath: check if adopt_path() really added current path mwilck
@ 2021-02-02 20:40 ` Benjamin Marzinski
  2021-02-03  1:33 ` lixiaokeng
  1 sibling, 0 replies; 8+ messages in thread
From: Benjamin Marzinski @ 2021-02-02 20:40 UTC (permalink / raw)
  To: mwilck; +Cc: lixiaokeng, dm-devel

On Tue, Feb 02, 2021 at 08:57:44PM +0100, mwilck@suse.com wrote:
> From: Martin Wilck <mwilck@suse.com>
> 
> The description of 2d32d6f ("libmultipath: adopt_paths(): don't bail out on
> single path failure") said "we need to check after successful call to
> adopt_paths() if that specific path had been actually added, and fail in the
> caller otherwise". But the commit failed to actually implement this check.
> Instead, it just checked whether the path was a member of the pathvec, which will
> almost always be the case.
> 
> Fix it by checking what actually needs to be checked, membership of the
> path to be added in mpp->paths.
> 
> Fixes: 2d32d6f ("libmultipath: adopt_paths(): don't bail out on single path failure")
> Signed-off-by: Martin Wilck <mwilck@suse.com>
Reviewed-by: Benjamin Marzinski <bmarzins@redhat.com>
> ---
> 
> @lixiaokeng, I believe that this fixes the issue you mentioned in your
> post "libmultipath: fix NULL dereference in get_be64".
> 
> ---
>  libmultipath/structs_vec.c | 4 ++--
>  multipathd/main.c          | 4 ++--
>  2 files changed, 4 insertions(+), 4 deletions(-)
> 
> diff --git a/libmultipath/structs_vec.c b/libmultipath/structs_vec.c
> index f7f45f1..47b1d03 100644
> --- a/libmultipath/structs_vec.c
> +++ b/libmultipath/structs_vec.c
> @@ -707,8 +707,8 @@ struct multipath *add_map_with_path(struct vectors *vecs, struct path *pp,
>  		goto out;
>  	mpp->size = pp->size;
>  
> -	if (adopt_paths(vecs->pathvec, mpp) ||
> -	    find_slot(vecs->pathvec, pp) == -1)
> +	if (adopt_paths(vecs->pathvec, mpp) || pp->mpp != mpp ||
> +	    find_slot(mpp->paths, pp) == -1)
>  		goto out;
>  
>  	if (add_vec) {
> diff --git a/multipathd/main.c b/multipathd/main.c
> index 134185f..425492a 100644
> --- a/multipathd/main.c
> +++ b/multipathd/main.c
> @@ -1008,8 +1008,8 @@ rescan:
>  	if (mpp) {
>  		condlog(4,"%s: adopting all paths for path %s",
>  			mpp->alias, pp->dev);
> -		if (adopt_paths(vecs->pathvec, mpp) ||
> -		    find_slot(vecs->pathvec, pp) == -1)
> +		if (adopt_paths(vecs->pathvec, mpp) || pp->mpp != mpp ||
> +		    find_slot(mpp->paths, pp) == -1)
>  			goto fail; /* leave path added to pathvec */
>  
>  		verify_paths(mpp);
> -- 
> 2.29.2


* Re: [dm-devel] [PATCH] libmultipath: check if adopt_path() really added current path
  2021-02-02 19:57 [dm-devel] [PATCH] libmultipath: check if adopt_path() really added current path mwilck
  2021-02-02 20:40 ` Benjamin Marzinski
@ 2021-02-03  1:33 ` lixiaokeng
  2021-02-03  8:14   ` Martin Wilck
  1 sibling, 1 reply; 8+ messages in thread
From: lixiaokeng @ 2021-02-03  1:33 UTC (permalink / raw)
  To: mwilck, Benjamin Marzinski, Christophe Varoqui; +Cc: dm-devel



On 2021/2/3 3:57, mwilck@suse.com wrote:
> From: Martin Wilck <mwilck@suse.com>
> 
> The description of 2d32d6f ("libmultipath: adopt_paths(): don't bail out on
> single path failure") said "we need to check after successful call to
> adopt_paths() if that specific path had been actually added, and fail in the
> caller otherwise". But the commit failed to actually implement this check.
> Instead, it just checked whether the path was a member of the pathvec, which will
> almost always be the case.
> 
> Fix it by checking what actually needs to be checked, membership of the
> path to be added in mpp->paths.
> 
> Fixes: 2d32d6f ("libmultipath: adopt_paths(): don't bail out on single path failure")
> Signed-off-by: Martin Wilck <mwilck@suse.com>
> ---
> 
> @lixiaokeng, I believe that this fixes the issue you mentioned in your
> post "libmultipath: fix NULL dereference in get_be64".
> Reviewed-by: Lixiaokeng <lixiaokeng@huawei.com>
> ---
>  libmultipath/structs_vec.c | 4 ++--
>  multipathd/main.c          | 4 ++--
>  2 files changed, 4 insertions(+), 4 deletions(-)
> 
> diff --git a/libmultipath/structs_vec.c b/libmultipath/structs_vec.c
> index f7f45f1..47b1d03 100644
> --- a/libmultipath/structs_vec.c
> +++ b/libmultipath/structs_vec.c
> @@ -707,8 +707,8 @@ struct multipath *add_map_with_path(struct vectors *vecs, struct path *pp,
>  		goto out;
>  	mpp->size = pp->size;
>  
> -	if (adopt_paths(vecs->pathvec, mpp) ||
> -	    find_slot(vecs->pathvec, pp) == -1)
> +	if (adopt_paths(vecs->pathvec, mpp) || pp->mpp != mpp ||
> +	    find_slot(mpp->paths, pp) == -1)
>  		goto out;
>  
>  	if (add_vec) {
> diff --git a/multipathd/main.c b/multipathd/main.c
> index 134185f..425492a 100644
> --- a/multipathd/main.c
> +++ b/multipathd/main.c
> @@ -1008,8 +1008,8 @@ rescan:
>  	if (mpp) {
>  		condlog(4,"%s: adopting all paths for path %s",
>  			mpp->alias, pp->dev);
> -		if (adopt_paths(vecs->pathvec, mpp) ||
> -		    find_slot(vecs->pathvec, pp) == -1)
> +		if (adopt_paths(vecs->pathvec, mpp) || pp->mpp != mpp ||
> +		    find_slot(mpp->paths, pp) == -1)
>  			goto fail; /* leave path added to pathvec */
>  
>  		verify_paths(mpp);
> 


* Re: [dm-devel] [PATCH] libmultipath: check if adopt_path() really added current path
  2021-02-03  1:33 ` lixiaokeng
@ 2021-02-03  8:14   ` Martin Wilck
  2021-02-03  9:42     ` lixiaokeng
  0 siblings, 1 reply; 8+ messages in thread
From: Martin Wilck @ 2021-02-03  8:14 UTC (permalink / raw)
  To: lixiaokeng, Benjamin Marzinski, Christophe Varoqui; +Cc: dm-devel

On Wed, 2021-02-03 at 09:33 +0800, lixiaokeng wrote:
> 
> 
> On 2021/2/3 3:57, mwilck@suse.com wrote:
> > From: Martin Wilck <mwilck@suse.com>
> > 
> > The description of 2d32d6f ("libmultipath: adopt_paths(): don't
> > bail out on
> > single path failure") said "we need to check after successful call
> > to
> > adopt_paths() if that specific path had been actually added, and
> > fail in the
> > caller otherwise". But the commit failed to actually implement this
> > check.
> > Instead, it just checked whether the path was a member of the pathvec,
> > which will
> > almost always be the case.
> > 
> > Fix it by checking what actually needs to be checked, membership of
> > the
> > path to be added in mpp->paths.
> > 
> > Fixes: 2d32d6f ("libmultipath: adopt_paths(): don't bail out on
> > single path failure")
> > Signed-off-by: Martin Wilck <mwilck@suse.com>
> > ---
> > 
> > @lixiaokeng, I believe that this fixes the issue you mentioned in
> > your
> > post "libmultipath: fix NULL dereference in get_be64".
> > Reviewed-by: Lixiaokeng <lixiaokeng@huawei.com>

Is this also a Tested-by:? 
IOW, did it fix your issue?

Martin




* Re: [dm-devel] [PATCH] libmultipath: check if adopt_path() really added current path
  2021-02-03  8:14   ` Martin Wilck
@ 2021-02-03  9:42     ` lixiaokeng
  2021-02-03 13:14       ` Martin Wilck
  0 siblings, 1 reply; 8+ messages in thread
From: lixiaokeng @ 2021-02-03  9:42 UTC (permalink / raw)
  To: Martin Wilck, Benjamin Marzinski, Christophe Varoqui; +Cc: dm-devel



On 2021/2/3 16:14, Martin Wilck wrote:
> Is this also a Tested-by:? 
> IOW, did it fix your issue?

Yes, it solves the crash. But there is another issue.

multipath.conf
defaults {
        find_multipaths no
}

[root@localhost coredump]# multipathd add path sdb
fail
[root@localhost coredump]# multipath -ll
[root@localhost coredump]# multipathd add path sdb
ok
[root@localhost coredump]# multipath -ll
0QEMU_QEMU_HARDDISK_drive-scsi0-0-0-1 dm-3 QEMU,QEMU HARDDISK
size=1.0G features='0' hwhandler='0' wp=rw
`-+- policy='service-time 0' prio=1 status=enabled
  `- 2:0:0:1 sdb 8:16 active ready running

I added the local path twice. The first attempt fails while the second
succeeds.

Regards,
Lixiaokeng


* Re: [dm-devel] [PATCH] libmultipath: check if adopt_path() really added current path
  2021-02-03  9:42     ` lixiaokeng
@ 2021-02-03 13:14       ` Martin Wilck
  2021-02-04  7:41         ` lixiaokeng
  0 siblings, 1 reply; 8+ messages in thread
From: Martin Wilck @ 2021-02-03 13:14 UTC (permalink / raw)
  To: lixiaokeng, Benjamin Marzinski, Christophe Varoqui; +Cc: dm-devel

On Wed, 2021-02-03 at 17:42 +0800, lixiaokeng wrote:
> 
> 
> On 2021/2/3 16:14, Martin Wilck wrote:
> > Is this also a Tested-by:? 
> > IOW, did it fix your issue?
> 
> Yes, it solves the crash. But there is another issue.
> 
> multipath.conf
> defaults {
>         find_multipaths no
> }
> 
> [root@localhost coredump]# multipathd add path sdb
> fail
> [root@localhost coredump]# multipath -ll
> [root@localhost coredump]# multipathd add path sdb
> ok
> [root@localhost coredump]# multipath -ll
> 0QEMU_QEMU_HARDDISK_drive-scsi0-0-0-1 dm-3 QEMU,QEMU HARDDISK
> size=1.0G features='0' hwhandler='0' wp=rw
> `-+- policy='service-time 0' prio=1 status=enabled
>   `- 2:0:0:1 sdb 8:16 active ready running
> 
> I added the local path twice. The first attempt fails while the second
> succeeds.

More details please. What exactly were you doing? Was this a regression
caused by my patch? Please provide multipathd -v3 logs.

Also, you're aware that "find_multipaths no" is discouraged?
It leads to inconsistent behavior between multipath and multipathd.

Regards,
Martin




* Re: [dm-devel] [PATCH] libmultipath: check if adopt_path() really added current path
  2021-02-03 13:14       ` Martin Wilck
@ 2021-02-04  7:41         ` lixiaokeng
  2021-02-04 11:14           ` Martin Wilck
  0 siblings, 1 reply; 8+ messages in thread
From: lixiaokeng @ 2021-02-04  7:41 UTC (permalink / raw)
  To: Martin Wilck, Benjamin Marzinski, Christophe Varoqui; +Cc: dm-devel

[-- Attachment #1: Type: text/plain, Size: 1399 bytes --]



On 2021/2/3 21:14, Martin Wilck wrote:
> On Wed, 2021-02-03 at 17:42 +0800, lixiaokeng wrote:
>>
>>
>> On 2021/2/3 16:14, Martin Wilck wrote:
>>> Is this also a Tested-by:? 
>>> IOW, did it fix your issue?
>>
>> Yes, it solves the crash. But there is another issue.
>>
>> multipath.conf
>> defaults {
>>         find_multipaths no
>> }
>>
>> [root@localhost coredump]# multipathd add path sdb
>> fail
>> [root@localhost coredump]# multipath -ll
>> [root@localhost coredump]# multipathd add path sdb
>> ok
>> [root@localhost coredump]# multipath -ll
>> 0QEMU_QEMU_HARDDISK_drive-scsi0-0-0-1 dm-3 QEMU,QEMU HARDDISK
>> size=1.0G features='0' hwhandler='0' wp=rw
>> `-+- policy='service-time 0' prio=1 status=enabled
>>   `- 2:0:0:1 sdb 8:16 active ready running
>>
>> I added the local path twice. The first attempt fails while the second
>> succeeds.
> 
> More details please. What exactly were you doing? Was this a regression
> caused by my patch? Please provide multipathd -v3 logs.

I did nothing but run "multipathd add path sdb" twice.
Here I do that again with multipath -v3. The attachment shows all
messages.

> Also, you're aware that "find_multipaths no" is discouraged?
> It leads to inconsistent behavior between multipath and multipathd.
> 
There are some differences in how local disks are handled between 0.8.5
and 0.7.7. I was just testing that.

Regards,
Lixiaokeng

[-- Attachment #2: multipathd add path sdb twice.txt --]
[-- Type: text/plain, Size: 20580 bytes --]

[root@localhost uppatch]# lsblk
NAME             MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda                8:0    0  140G  0 disk 
├─sda1             8:1    0    1G  0 part /boot
└─sda2             8:2    0  139G  0 part 
  ├─euleros-root 253:0    0   50G  0 lvm  /
  ├─euleros-swap 253:1    0    4G  0 lvm  [SWAP]
  └─euleros-home 253:2    0   85G  0 lvm  /home
sdb                8:16   0   10G  0 disk 
sdc                8:32   0   10G  0 disk 
sdd                8:48   0   10G  0 disk 
sde                8:64   0   10G  0 disk 
sdf                8:80   0    1G  0 disk 
[root@localhost uppatch]# multipath -ll
[root@localhost uppatch]# multipath -v3
Feb 04 15:12:44 | set open fds limit to 1073741816/1073741816
Feb 04 15:12:44 | loading /lib64/multipath/libchecktur.so checker
Feb 04 15:12:44 | checker tur: message table size = 3
Feb 04 15:12:44 | loading /lib64/multipath/libprioconst.so prioritizer
Feb 04 15:12:44 | _init_foreign: foreign library "nvme" is not enabled
Feb 04 15:12:44 | sda: size = 293601280
Feb 04 15:12:44 | sda: vendor = QEMU
Feb 04 15:12:44 | sda: product = QEMU HARDDISK
Feb 04 15:12:44 | sda: rev = 2.5+
Feb 04 15:12:44 | sda: h:b:t:l = 2:0:0:0
Feb 04 15:12:44 | sda: tgt_node_name = 
Feb 04 15:12:44 | sda: 18275 cyl, 255 heads, 63 sectors/track, start at 0
Feb 04 15:12:44 | sda: vpd_vendor_id = 0 "undef" (setting: multipath internal)
Feb 04 15:12:44 | 2:0:0:0: attribute vpd_pg80 not found in sysfs
Feb 04 15:12:44 | failed to read sysfs vpd pg80
Feb 04 15:12:44 | sda: fail to get serial
Feb 04 15:12:44 | sda: detect_checker = yes (setting: multipath internal)
Feb 04 15:12:44 | sda: path_checker = tur (setting: multipath internal)
Feb 04 15:12:44 | sda: checker timeout = 30 s (setting: kernel sysfs)
Feb 04 15:12:44 | sda: tur state = up
Feb 04 15:12:44 | sda: uid_attribute = ID_SERIAL (setting: multipath internal)
Feb 04 15:12:44 | sda: uid = 0QEMU_QEMU_HARDDISK_drive-scsi0-0-0-0 (udev)
Feb 04 15:12:44 | sda: detect_prio = yes (setting: multipath internal)
Feb 04 15:12:44 | sda: prio = const (setting: multipath internal)
Feb 04 15:12:44 | sda: prio args = "" (setting: multipath internal)
Feb 04 15:12:44 | sda: const prio = 1
Feb 04 15:12:44 | sdf: size = 2097152
Feb 04 15:12:44 | sdf: vendor = QEMU
Feb 04 15:12:44 | sdf: product = QEMU HARDDISK
Feb 04 15:12:44 | sdf: rev = 2.5+
Feb 04 15:12:44 | sdf: h:b:t:l = 2:0:0:1
Feb 04 15:12:44 | sdf: tgt_node_name = 
Feb 04 15:12:44 | sdf: 1011 cyl, 34 heads, 61 sectors/track, start at 0
Feb 04 15:12:44 | sdf: vpd_vendor_id = 0 "undef" (setting: multipath internal)
Feb 04 15:12:44 | 2:0:0:1: attribute vpd_pg80 not found in sysfs
Feb 04 15:12:44 | failed to read sysfs vpd pg80
Feb 04 15:12:44 | sdf: fail to get serial
Feb 04 15:12:44 | sdf: detect_checker = yes (setting: multipath internal)
Feb 04 15:12:44 | sdf: path_checker = tur (setting: multipath internal)
Feb 04 15:12:44 | sdf: checker timeout = 30 s (setting: kernel sysfs)
Feb 04 15:12:44 | sdf: tur state = up
Feb 04 15:12:44 | sdf: uid_attribute = ID_SERIAL (setting: multipath internal)
Feb 04 15:12:44 | sdf: uid = 0QEMU_QEMU_HARDDISK_drive-scsi0-0-0-1 (udev)
Feb 04 15:12:44 | sdf: detect_prio = yes (setting: multipath internal)
Feb 04 15:12:44 | sdf: prio = const (setting: multipath internal)
Feb 04 15:12:44 | sdf: prio args = "" (setting: multipath internal)
Feb 04 15:12:44 | sdf: const prio = 1
Feb 04 15:12:44 | sde: size = 20971520
Feb 04 15:12:44 | sde: vendor = QEMU
Feb 04 15:12:44 | sde: product = QEMU HARDDISK
Feb 04 15:12:44 | sde: rev = 2.5+
Feb 04 15:12:44 | sde: h:b:t:l = 2:0:0:2
Feb 04 15:12:44 | sde: tgt_node_name = 
Feb 04 15:12:44 | sde: 10240 cyl, 64 heads, 32 sectors/track, start at 0
Feb 04 15:12:44 | sde: vpd_vendor_id = 0 "undef" (setting: multipath internal)
Feb 04 15:12:44 | 2:0:0:2: attribute vpd_pg80 not found in sysfs
Feb 04 15:12:44 | failed to read sysfs vpd pg80
Feb 04 15:12:44 | sde: fail to get serial
Feb 04 15:12:44 | sde: detect_checker = yes (setting: multipath internal)
Feb 04 15:12:44 | sde: path_checker = tur (setting: multipath internal)
Feb 04 15:12:44 | sde: checker timeout = 30 s (setting: kernel sysfs)
Feb 04 15:12:44 | sde: tur state = up
Feb 04 15:12:44 | sde: uid_attribute = ID_SERIAL (setting: multipath internal)
Feb 04 15:12:44 | sde: uid = 0QEMU_QEMU_HARDDISK_drive-scsi0-0-0-2 (udev)
Feb 04 15:12:44 | sde: detect_prio = yes (setting: multipath internal)
Feb 04 15:12:44 | sde: prio = const (setting: multipath internal)
Feb 04 15:12:44 | sde: prio args = "" (setting: multipath internal)
Feb 04 15:12:44 | sde: const prio = 1
Feb 04 15:12:44 | sdd: size = 20971520
Feb 04 15:12:44 | sdd: vendor = QEMU
Feb 04 15:12:44 | sdd: product = QEMU HARDDISK
Feb 04 15:12:44 | sdd: rev = 2.5+
Feb 04 15:12:44 | sdd: h:b:t:l = 2:0:0:3
Feb 04 15:12:44 | sdd: tgt_node_name = 
Feb 04 15:12:44 | sdd: 10240 cyl, 64 heads, 32 sectors/track, start at 0
Feb 04 15:12:44 | sdd: vpd_vendor_id = 0 "undef" (setting: multipath internal)
Feb 04 15:12:44 | 2:0:0:3: attribute vpd_pg80 not found in sysfs
Feb 04 15:12:44 | failed to read sysfs vpd pg80
Feb 04 15:12:44 | sdd: fail to get serial
Feb 04 15:12:44 | sdd: detect_checker = yes (setting: multipath internal)
Feb 04 15:12:44 | sdd: path_checker = tur (setting: multipath internal)
Feb 04 15:12:44 | sdd: checker timeout = 30 s (setting: kernel sysfs)
Feb 04 15:12:44 | sdd: tur state = up
Feb 04 15:12:44 | sdd: uid_attribute = ID_SERIAL (setting: multipath internal)
Feb 04 15:12:44 | sdd: uid = 0QEMU_QEMU_HARDDISK_drive-scsi0-0-0-3 (udev)
Feb 04 15:12:44 | sdd: detect_prio = yes (setting: multipath internal)
Feb 04 15:12:44 | sdd: prio = const (setting: multipath internal)
Feb 04 15:12:44 | sdd: prio args = "" (setting: multipath internal)
Feb 04 15:12:44 | sdd: const prio = 1
Feb 04 15:12:44 | sdc: size = 20971520
Feb 04 15:12:44 | sdc: vendor = QEMU
Feb 04 15:12:44 | sdc: product = QEMU HARDDISK
Feb 04 15:12:44 | sdc: rev = 2.5+
Feb 04 15:12:44 | sdc: h:b:t:l = 2:0:0:4
Feb 04 15:12:44 | sdc: tgt_node_name = 
Feb 04 15:12:44 | sdc: 10240 cyl, 64 heads, 32 sectors/track, start at 0
Feb 04 15:12:44 | sdc: vpd_vendor_id = 0 "undef" (setting: multipath internal)
Feb 04 15:12:44 | 2:0:0:4: attribute vpd_pg80 not found in sysfs
Feb 04 15:12:44 | failed to read sysfs vpd pg80
Feb 04 15:12:44 | sdc: fail to get serial
Feb 04 15:12:44 | sdc: detect_checker = yes (setting: multipath internal)
Feb 04 15:12:44 | sdc: path_checker = tur (setting: multipath internal)
Feb 04 15:12:44 | sdc: checker timeout = 30 s (setting: kernel sysfs)
Feb 04 15:12:44 | sdc: tur state = up
Feb 04 15:12:44 | sdc: uid_attribute = ID_SERIAL (setting: multipath internal)
Feb 04 15:12:44 | sdc: uid = 0QEMU_QEMU_HARDDISK_drive-scsi0-0-0-4 (udev)
Feb 04 15:12:44 | sdc: detect_prio = yes (setting: multipath internal)
Feb 04 15:12:44 | sdc: prio = const (setting: multipath internal)
Feb 04 15:12:44 | sdc: prio args = "" (setting: multipath internal)
Feb 04 15:12:44 | sdc: const prio = 1
Feb 04 15:12:44 | sdb: size = 20971520
Feb 04 15:12:44 | sdb: vendor = QEMU
Feb 04 15:12:44 | sdb: product = QEMU HARDDISK
Feb 04 15:12:44 | sdb: rev = 2.5+
Feb 04 15:12:44 | sdb: h:b:t:l = 2:0:0:5
Feb 04 15:12:44 | sdb: tgt_node_name = 
Feb 04 15:12:44 | sdb: 10240 cyl, 64 heads, 32 sectors/track, start at 0
Feb 04 15:12:44 | sdb: vpd_vendor_id = 0 "undef" (setting: multipath internal)
Feb 04 15:12:44 | 2:0:0:5: attribute vpd_pg80 not found in sysfs
Feb 04 15:12:44 | failed to read sysfs vpd pg80
Feb 04 15:12:44 | sdb: fail to get serial
Feb 04 15:12:44 | sdb: detect_checker = yes (setting: multipath internal)
Feb 04 15:12:44 | sdb: path_checker = tur (setting: multipath internal)
Feb 04 15:12:44 | sdb: checker timeout = 30 s (setting: kernel sysfs)
Feb 04 15:12:44 | sdb: tur state = up
Feb 04 15:12:44 | sdb: uid_attribute = ID_SERIAL (setting: multipath internal)
Feb 04 15:12:44 | sdb: uid = 0QEMU_QEMU_HARDDISK_drive-scsi0-0-0-5 (udev)
Feb 04 15:12:44 | sdb: detect_prio = yes (setting: multipath internal)
Feb 04 15:12:44 | sdb: prio = const (setting: multipath internal)
Feb 04 15:12:44 | sdb: prio args = "" (setting: multipath internal)
Feb 04 15:12:44 | sdb: const prio = 1
Feb 04 15:12:44 | dm-0: device node name blacklisted
Feb 04 15:12:44 | dm-1: device node name blacklisted
Feb 04 15:12:44 | dm-2: device node name blacklisted
===== paths list =====
uuid                                  hcil    dev dev_t pri dm_st chk_st vend/
0QEMU_QEMU_HARDDISK_drive-scsi0-0-0-0 2:0:0:0 sda 8:0   1   undef undef  QEMU,
0QEMU_QEMU_HARDDISK_drive-scsi0-0-0-1 2:0:0:1 sdf 8:80  1   undef undef  QEMU,
0QEMU_QEMU_HARDDISK_drive-scsi0-0-0-2 2:0:0:2 sde 8:64  1   undef undef  QEMU,
0QEMU_QEMU_HARDDISK_drive-scsi0-0-0-3 2:0:0:3 sdd 8:48  1   undef undef  QEMU,
0QEMU_QEMU_HARDDISK_drive-scsi0-0-0-4 2:0:0:4 sdc 8:32  1   undef undef  QEMU,
0QEMU_QEMU_HARDDISK_drive-scsi0-0-0-5 2:0:0:5 sdb 8:16  1   undef undef  QEMU,
Feb 04 15:12:44 | libdevmapper version 1.02.170 (2020-03-24)
Feb 04 15:12:44 | DM multipath kernel driver v1.13.0
Feb 04 15:12:44 | sda: blacklisted, udev property missing
Feb 04 15:12:44 | sda: orphan path, blacklisted
Feb 04 15:12:44 | sdf: blacklisted, udev property missing
Feb 04 15:12:44 | sdf: orphan path, blacklisted
Feb 04 15:12:44 | sde: blacklisted, udev property missing
Feb 04 15:12:44 | sde: orphan path, blacklisted
Feb 04 15:12:44 | sdd: blacklisted, udev property missing
Feb 04 15:12:44 | sdd: orphan path, blacklisted
Feb 04 15:12:44 | sdc: blacklisted, udev property missing
Feb 04 15:12:44 | sdc: orphan path, blacklisted
Feb 04 15:12:44 | sdb: blacklisted, udev property missing
Feb 04 15:12:44 | sdb: orphan path, blacklisted
Feb 04 15:12:44 | unloading const prioritizer
Feb 04 15:12:44 | unloading tur checker
[root@localhost uppatch]# multipathd add path sdb
fail
[root@localhost uppatch]# multipathd add path sdb
ok
[root@localhost uppatch]# lsblk
NAME                                    MAJ:MIN RM  SIZE RO TYPE  MOUNTPOINT
sda                                       8:0    0  140G  0 disk  
├─sda1                                    8:1    0    1G  0 part  /boot
└─sda2                                    8:2    0  139G  0 part  
  ├─euleros-root                        253:0    0   50G  0 lvm   /
  ├─euleros-swap                        253:1    0    4G  0 lvm   [SWAP]
  └─euleros-home                        253:2    0   85G  0 lvm   /home
sdb                                       8:16   0   10G  0 disk  
└─0QEMU_QEMU_HARDDISK_drive-scsi0-0-0-5 253:3    0   10G  0 mpath 
sdc                                       8:32   0   10G  0 disk  
sdd                                       8:48   0   10G  0 disk  
sde                                       8:64   0   10G  0 disk  
sdf                                       8:80   0    1G  0 disk  
[root@localhost uppatch]# multipath -ll
0QEMU_QEMU_HARDDISK_drive-scsi0-0-0-5 dm-3 QEMU,QEMU HARDDISK
size=10G features='0' hwhandler='0' wp=rw
`-+- policy='service-time 0' prio=1 status=enabled
  `- 2:0:0:5 sdb 8:16 active ready running
[root@localhost uppatch]# multipath -v3
Feb 04 15:13:18 | set open fds limit to 1073741816/1073741816
Feb 04 15:13:18 | loading /lib64/multipath/libchecktur.so checker
Feb 04 15:13:18 | checker tur: message table size = 3
Feb 04 15:13:18 | loading /lib64/multipath/libprioconst.so prioritizer
Feb 04 15:13:18 | _init_foreign: foreign library "nvme" is not enabled
Feb 04 15:13:18 | sda: size = 293601280
Feb 04 15:13:18 | sda: vendor = QEMU
Feb 04 15:13:18 | sda: product = QEMU HARDDISK
Feb 04 15:13:18 | sda: rev = 2.5+
Feb 04 15:13:18 | sda: h:b:t:l = 2:0:0:0
Feb 04 15:13:18 | sda: tgt_node_name = 
Feb 04 15:13:18 | sda: 18275 cyl, 255 heads, 63 sectors/track, start at 0
Feb 04 15:13:18 | sda: vpd_vendor_id = 0 "undef" (setting: multipath internal)
Feb 04 15:13:18 | 2:0:0:0: attribute vpd_pg80 not found in sysfs
Feb 04 15:13:18 | failed to read sysfs vpd pg80
Feb 04 15:13:18 | sda: fail to get serial
Feb 04 15:13:18 | sda: detect_checker = yes (setting: multipath internal)
Feb 04 15:13:18 | sda: path_checker = tur (setting: multipath internal)
Feb 04 15:13:18 | sda: checker timeout = 30 s (setting: kernel sysfs)
Feb 04 15:13:18 | sda: tur state = up
Feb 04 15:13:18 | sda: uid_attribute = ID_SERIAL (setting: multipath internal)
Feb 04 15:13:18 | sda: uid = 0QEMU_QEMU_HARDDISK_drive-scsi0-0-0-0 (udev)
Feb 04 15:13:18 | sda: detect_prio = yes (setting: multipath internal)
Feb 04 15:13:18 | sda: prio = const (setting: multipath internal)
Feb 04 15:13:18 | sda: prio args = "" (setting: multipath internal)
Feb 04 15:13:18 | sda: const prio = 1
Feb 04 15:13:18 | sdf: size = 2097152
Feb 04 15:13:18 | sdf: vendor = QEMU
Feb 04 15:13:18 | sdf: product = QEMU HARDDISK
Feb 04 15:13:18 | sdf: rev = 2.5+
Feb 04 15:13:18 | sdf: h:b:t:l = 2:0:0:1
Feb 04 15:13:18 | sdf: tgt_node_name = 
Feb 04 15:13:18 | sdf: 1011 cyl, 34 heads, 61 sectors/track, start at 0
Feb 04 15:13:18 | sdf: vpd_vendor_id = 0 "undef" (setting: multipath internal)
Feb 04 15:13:18 | 2:0:0:1: attribute vpd_pg80 not found in sysfs
Feb 04 15:13:18 | failed to read sysfs vpd pg80
Feb 04 15:13:18 | sdf: fail to get serial
Feb 04 15:13:18 | sdf: detect_checker = yes (setting: multipath internal)
Feb 04 15:13:18 | sdf: path_checker = tur (setting: multipath internal)
Feb 04 15:13:18 | sdf: checker timeout = 30 s (setting: kernel sysfs)
Feb 04 15:13:18 | sdf: tur state = up
Feb 04 15:13:18 | sdf: uid_attribute = ID_SERIAL (setting: multipath internal)
Feb 04 15:13:18 | sdf: uid = 0QEMU_QEMU_HARDDISK_drive-scsi0-0-0-1 (udev)
Feb 04 15:13:18 | sdf: detect_prio = yes (setting: multipath internal)
Feb 04 15:13:18 | sdf: prio = const (setting: multipath internal)
Feb 04 15:13:18 | sdf: prio args = "" (setting: multipath internal)
Feb 04 15:13:18 | sdf: const prio = 1
Feb 04 15:13:18 | sde: size = 20971520
Feb 04 15:13:18 | sde: vendor = QEMU
Feb 04 15:13:18 | sde: product = QEMU HARDDISK
Feb 04 15:13:18 | sde: rev = 2.5+
Feb 04 15:13:18 | sde: h:b:t:l = 2:0:0:2
Feb 04 15:13:18 | sde: tgt_node_name = 
Feb 04 15:13:18 | sde: 10240 cyl, 64 heads, 32 sectors/track, start at 0
Feb 04 15:13:18 | sde: vpd_vendor_id = 0 "undef" (setting: multipath internal)
Feb 04 15:13:18 | 2:0:0:2: attribute vpd_pg80 not found in sysfs
Feb 04 15:13:18 | failed to read sysfs vpd pg80
Feb 04 15:13:18 | sde: fail to get serial
Feb 04 15:13:18 | sde: detect_checker = yes (setting: multipath internal)
Feb 04 15:13:18 | sde: path_checker = tur (setting: multipath internal)
Feb 04 15:13:18 | sde: checker timeout = 30 s (setting: kernel sysfs)
Feb 04 15:13:18 | sde: tur state = up
Feb 04 15:13:18 | sde: uid_attribute = ID_SERIAL (setting: multipath internal)
Feb 04 15:13:18 | sde: uid = 0QEMU_QEMU_HARDDISK_drive-scsi0-0-0-2 (udev)
Feb 04 15:13:18 | sde: detect_prio = yes (setting: multipath internal)
Feb 04 15:13:18 | sde: prio = const (setting: multipath internal)
Feb 04 15:13:18 | sde: prio args = "" (setting: multipath internal)
Feb 04 15:13:18 | sde: const prio = 1
Feb 04 15:13:18 | sdd: size = 20971520
Feb 04 15:13:18 | sdd: vendor = QEMU
Feb 04 15:13:18 | sdd: product = QEMU HARDDISK
Feb 04 15:13:18 | sdd: rev = 2.5+
Feb 04 15:13:18 | sdd: h:b:t:l = 2:0:0:3
Feb 04 15:13:18 | sdd: tgt_node_name = 
Feb 04 15:13:18 | sdd: 10240 cyl, 64 heads, 32 sectors/track, start at 0
Feb 04 15:13:18 | sdd: vpd_vendor_id = 0 "undef" (setting: multipath internal)
Feb 04 15:13:18 | 2:0:0:3: attribute vpd_pg80 not found in sysfs
Feb 04 15:13:18 | failed to read sysfs vpd pg80
Feb 04 15:13:18 | sdd: fail to get serial
Feb 04 15:13:18 | sdd: detect_checker = yes (setting: multipath internal)
Feb 04 15:13:18 | sdd: path_checker = tur (setting: multipath internal)
Feb 04 15:13:18 | sdd: checker timeout = 30 s (setting: kernel sysfs)
Feb 04 15:13:18 | sdd: tur state = up
Feb 04 15:13:18 | sdd: uid_attribute = ID_SERIAL (setting: multipath internal)
Feb 04 15:13:18 | sdd: uid = 0QEMU_QEMU_HARDDISK_drive-scsi0-0-0-3 (udev)
Feb 04 15:13:18 | sdd: detect_prio = yes (setting: multipath internal)
Feb 04 15:13:18 | sdd: prio = const (setting: multipath internal)
Feb 04 15:13:18 | sdd: prio args = "" (setting: multipath internal)
Feb 04 15:13:18 | sdd: const prio = 1
Feb 04 15:13:18 | sdc: size = 20971520
Feb 04 15:13:18 | sdc: vendor = QEMU
Feb 04 15:13:18 | sdc: product = QEMU HARDDISK
Feb 04 15:13:18 | sdc: rev = 2.5+
Feb 04 15:13:18 | sdc: h:b:t:l = 2:0:0:4
Feb 04 15:13:18 | sdc: tgt_node_name = 
Feb 04 15:13:18 | sdc: 10240 cyl, 64 heads, 32 sectors/track, start at 0
Feb 04 15:13:18 | sdc: vpd_vendor_id = 0 "undef" (setting: multipath internal)
Feb 04 15:13:18 | 2:0:0:4: attribute vpd_pg80 not found in sysfs
Feb 04 15:13:18 | failed to read sysfs vpd pg80
Feb 04 15:13:18 | sdc: fail to get serial
Feb 04 15:13:18 | sdc: detect_checker = yes (setting: multipath internal)
Feb 04 15:13:18 | sdc: path_checker = tur (setting: multipath internal)
Feb 04 15:13:18 | sdc: checker timeout = 30 s (setting: kernel sysfs)
Feb 04 15:13:18 | sdc: tur state = up
Feb 04 15:13:18 | sdc: uid_attribute = ID_SERIAL (setting: multipath internal)
Feb 04 15:13:18 | sdc: uid = 0QEMU_QEMU_HARDDISK_drive-scsi0-0-0-4 (udev)
Feb 04 15:13:18 | sdc: detect_prio = yes (setting: multipath internal)
Feb 04 15:13:18 | sdc: prio = const (setting: multipath internal)
Feb 04 15:13:18 | sdc: prio args = "" (setting: multipath internal)
Feb 04 15:13:18 | sdc: const prio = 1
Feb 04 15:13:18 | sdb: size = 20971520
Feb 04 15:13:18 | sdb: vendor = QEMU
Feb 04 15:13:18 | sdb: product = QEMU HARDDISK
Feb 04 15:13:18 | sdb: rev = 2.5+
Feb 04 15:13:18 | sdb: h:b:t:l = 2:0:0:5
Feb 04 15:13:18 | sdb: tgt_node_name = 
Feb 04 15:13:18 | sdb: 10240 cyl, 64 heads, 32 sectors/track, start at 0
Feb 04 15:13:18 | sdb: vpd_vendor_id = 0 "undef" (setting: multipath internal)
Feb 04 15:13:18 | 2:0:0:5: attribute vpd_pg80 not found in sysfs
Feb 04 15:13:18 | failed to read sysfs vpd pg80
Feb 04 15:13:18 | sdb: fail to get serial
Feb 04 15:13:18 | sdb: detect_checker = yes (setting: multipath internal)
Feb 04 15:13:18 | sdb: path_checker = tur (setting: multipath internal)
Feb 04 15:13:18 | sdb: checker timeout = 30 s (setting: kernel sysfs)
Feb 04 15:13:18 | sdb: tur state = up
Feb 04 15:13:18 | sdb: uid_attribute = ID_SERIAL (setting: multipath internal)
Feb 04 15:13:18 | sdb: uid = 0QEMU_QEMU_HARDDISK_drive-scsi0-0-0-5 (udev)
Feb 04 15:13:18 | sdb: detect_prio = yes (setting: multipath internal)
Feb 04 15:13:18 | sdb: prio = const (setting: multipath internal)
Feb 04 15:13:18 | sdb: prio args = "" (setting: multipath internal)
Feb 04 15:13:18 | sdb: const prio = 1
Feb 04 15:13:18 | dm-0: device node name blacklisted
Feb 04 15:13:18 | dm-1: device node name blacklisted
Feb 04 15:13:18 | dm-2: device node name blacklisted
Feb 04 15:13:18 | dm-3: device node name blacklisted
===== paths list =====
uuid                                  hcil    dev dev_t pri dm_st chk_st vend/
0QEMU_QEMU_HARDDISK_drive-scsi0-0-0-0 2:0:0:0 sda 8:0   1   undef undef  QEMU,
0QEMU_QEMU_HARDDISK_drive-scsi0-0-0-1 2:0:0:1 sdf 8:80  1   undef undef  QEMU,
0QEMU_QEMU_HARDDISK_drive-scsi0-0-0-2 2:0:0:2 sde 8:64  1   undef undef  QEMU,
0QEMU_QEMU_HARDDISK_drive-scsi0-0-0-3 2:0:0:3 sdd 8:48  1   undef undef  QEMU,
0QEMU_QEMU_HARDDISK_drive-scsi0-0-0-4 2:0:0:4 sdc 8:32  1   undef undef  QEMU,
0QEMU_QEMU_HARDDISK_drive-scsi0-0-0-5 2:0:0:5 sdb 8:16  1   undef undef  QEMU,
Feb 04 15:13:18 | libdevmapper version 1.02.170 (2020-03-24)
Feb 04 15:13:18 | DM multipath kernel driver v1.13.0
Feb 04 15:13:18 | sda: blacklisted, udev property missing
Feb 04 15:13:18 | sda: orphan path, blacklisted
Feb 04 15:13:18 | sdf: blacklisted, udev property missing
Feb 04 15:13:18 | sdf: orphan path, blacklisted
Feb 04 15:13:18 | sde: blacklisted, udev property missing
Feb 04 15:13:18 | sde: orphan path, blacklisted
Feb 04 15:13:18 | sdd: blacklisted, udev property missing
Feb 04 15:13:18 | sdd: orphan path, blacklisted
Feb 04 15:13:18 | sdc: blacklisted, udev property missing
Feb 04 15:13:18 | sdc: orphan path, blacklisted
Feb 04 15:13:18 | sdb: blacklisted, udev property missing
Feb 04 15:13:18 | sdb: orphan path, blacklisted
Feb 04 15:13:18 | unloading const prioritizer
Feb 04 15:13:18 | unloading tur checker
[root@localhost uppatch]# 


--
dm-devel mailing list
dm-devel@redhat.com
https://www.redhat.com/mailman/listinfo/dm-devel


* Re: [dm-devel] [PATCH] libmultipath: check if adopt_path() really added current path
  2021-02-04  7:41         ` lixiaokeng
@ 2021-02-04 11:14           ` Martin Wilck
  0 siblings, 0 replies; 8+ messages in thread
From: Martin Wilck @ 2021-02-04 11:14 UTC (permalink / raw)
  To: lixiaokeng, Benjamin Marzinski, Christophe Varoqui; +Cc: dm-devel

On Thu, 2021-02-04 at 15:41 +0800, lixiaokeng wrote:
> 
> 
> On 2021/2/3 21:14, Martin Wilck wrote:
> > On Wed, 2021-02-03 at 17:42 +0800, lixiaokeng wrote:
> > > 
> > > 
> > > On 2021/2/3 16:14, Martin Wilck wrote:
> > > > Is this also a Tested-by:? 
> > > > IOW, did it fix your issue?
> > > 
> > > Yes, it solves the crash. But there is another issue.
> > > 
> > > multipath.conf
> > > defaults {
> > >         find_multipaths no
> > > }
> > > 
> > > [root@localhost coredump]# multipathd add path sdb
> > > fail
> > > [root@localhost coredump]# multipath -ll
> > > [root@localhost coredump]# multipathd add path sdb
> > > ok
> > > [root@localhost coredump]# multipath -ll
> > > 0QEMU_QEMU_HARDDISK_drive-scsi0-0-0-1 dm-3 QEMU,QEMU HARDDISK
> > > size=1.0G features='0' hwhandler='0' wp=rw
> > > `-+- policy='service-time 0' prio=1 status=enabled
> > >   `- 2:0:0:1 sdb 8:16 active ready running
> > > 
> > > I added the local path twice. The first attempt fails while the
> > > second succeeds.
> > 
> > More details please. What exactly were you doing? Was this a
> > regression
> > caused by my patch? Please provide multipathd -v3 logs.
> 
> I did nothing, just ran "multipathd add path sdb" twice.
> Here I do that again with multipath -v3. The attachment shows all
> messages.

This is a misunderstanding, sorry for being unclear. What I meant was
the logs of *multipathd* running in the background with -v3. IOW, the
journal or syslog or whatever showing what went wrong the first time
around when you tried to add the disk.

But I was able to reproduce the issue, so I can do this myself.

1st time:

994.196771 | sdb: prio args = "" (setting: multipath internal)
994.196781 | sdb: const prio = 1
994.196831 | QEMU_HARDDISK_QM00007: user_friendly_names = no (setting: multipath internal)
994.196982 | QEMU_HARDDISK_QM00007: alias = QEMU_HARDDISK_QM00007 (setting: default to WWID)
994.197053 | adopt_paths: pathinfo failed for sdb
994.197065 | sdb: orphan path, failed to add path
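
The failure above is exactly the case the patch targets: adopt_paths()
returns, but sdb was never linked into mpp->paths even though it still
sits in the global pathvec, so the old check almost always passed. A
minimal stand-alone sketch of the difference (hypothetical stand-in
types, not libmultipath's real structs; only the find_slot() semantics
are taken from the code):

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical stand-ins for libmultipath's structures, just to
 * illustrate the membership check.  The real types live in
 * libmultipath/structs.h. */
struct path { int id; };

struct vec {
	struct path *slot[8];
	int len;
};

/* Mimics libmultipath's find_slot(): index of pp, or -1 if absent. */
static int find_slot(const struct vec *v, const struct path *pp)
{
	for (int i = 0; i < v->len; i++)
		if (v->slot[i] == pp)
			return i;
	return -1;
}

/* Old check from add_map_with_path(): membership in the global
 * pathvec.  Passes even when adopt_paths() failed to adopt pp. */
static int old_check_passes(const struct vec *pathvec,
			    const struct vec *mpp_paths,
			    const struct path *pp)
{
	(void)mpp_paths;
	return find_slot(pathvec, pp) != -1;
}

/* Fixed check: membership in mpp->paths, i.e. the path was actually
 * adopted by this map. */
static int new_check_passes(const struct vec *pathvec,
			    const struct vec *mpp_paths,
			    const struct path *pp)
{
	(void)pathvec;
	return find_slot(mpp_paths, pp) != -1;
}
```

With sdb in pathvec but not in mpp->paths (the pathinfo-failure case
above), old_check_passes() returns true while new_check_passes()
correctly returns false, which is what lets the caller fail early.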

2nd time:

1012.157422 | sdb: path already in pathvec

Here, cli_add_path() calls ev_add_path() right away:

1012.157433 | QEMU_HARDDISK_QM00007: user_friendly_names = no (setting: multipath internal)
1012.157440 | QEMU_HARDDISK_QM00007: alias = QEMU_HARDDISK_QM00007 (setting: default to WWID)
1012.157688 | sdb: detect_checker = yes (setting: multipath internal)
...
1012.158342 | sdb: ownership set to QEMU_HARDDISK_QM00007

The problem here is, again, that we don't handle blacklisting by
property consistently.

Please apply my recent series "consistent behavior of
filter_property()". It should fix the issue (did so for me).
> 

> > Also, you're aware that "find_multipaths no" is discouraged?
> > It leads to inconsistent behavior between multipath and multipathd.
> > 
> There are some differences in how local disks are handled between
> 0.8.5 and 0.7.7. I just tested that.

Sure. I just wanted to make you aware that you are using a possibly
dangerous setting.

Thank you for your hard work and your valuable contributions!

Regards
Martin





end of thread, other threads:[~2021-02-04 11:19 UTC | newest]

Thread overview: 8+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2021-02-02 19:57 [dm-devel] [PATCH] libmultipath: check if adopt_path() really added current path mwilck
2021-02-02 20:40 ` Benjamin Marzinski
2021-02-03  1:33 ` lixiaokeng
2021-02-03  8:14   ` Martin Wilck
2021-02-03  9:42     ` lixiaokeng
2021-02-03 13:14       ` Martin Wilck
2021-02-04  7:41         ` lixiaokeng
2021-02-04 11:14           ` Martin Wilck
