* Multipath Setup for NVMe over Fabric
@ 2017-04-12 9:20 Ankur Srivastava
2017-04-13 7:41 ` Martin Wilck
0 siblings, 1 reply; 2+ messages in thread
From: Ankur Srivastava @ 2017-04-12 9:20 UTC (permalink / raw)
To: dm-devel
Hi All,
I am working on NVMe over Fabrics and want to experiment with
multipathing support for it.
Setup Info:
RHEL 7.2 with Kernel 4.9.3
[root@localhost ~]# nvme list
Node             SN                   Model    Namespace  Usage                    Format        FW Rev
---------------- -------------------- -------- ---------- ------------------------ ------------- --------
/dev/nvme0n1     30501b622ed15184     Linux    10         268.44 GB / 268.44 GB    512 B + 0 B   4.9.3
/dev/nvme1n1     ef730272d9be107c     Linux    10         268.44 GB / 268.44 GB    512 B + 0 B   4.9.3
[root@localhost ~]# ps ax | grep multipath
1272 ? SLl 0:00 /sbin/multipathd
I have connected my initiator to both ports of the Ethernet adapter
(target) to get two I/O paths; in the output above, "/dev/nvme0n1" is
path 1 and "/dev/nvme1n1" is path 2 for the same namespace.
Note: I am using a null block device on the target side.
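For reference, the two-path setup described above would typically be
established with one `nvme connect` per target port. The sketch below
uses placeholder values throughout (the NQN, addresses, and the rdma
transport are assumptions, not taken from this thread), and the commands
are only echoed so the script is safe to run without a fabric:

```shell
#!/bin/sh
# Hypothetical target NQN and per-port addresses -- substitute your own.
TARGET_NQN="nqn.2017-04.org.example:nullb0"
PORT1_ADDR="192.168.1.10"
PORT2_ADDR="192.168.2.10"

# One 'nvme connect' per target port creates one controller, and hence
# one /dev/nvmeXn1 block device, per port for the same namespace.
for addr in "$PORT1_ADDR" "$PORT2_ADDR"; do
    echo nvme connect -t rdma -n "$TARGET_NQN" -a "$addr" -s 4420
done
```

Drop the `echo` to actually issue the connects once the values match
your fabric.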
But multipath still shows an error, i.e. "no path to host", for all the
NVMe drives mapped on the initiator. Does multipathd support NVMe over
Fabrics?
Or what am I missing on the configuration side?
Thanks in advance!!
BR~
Ankur
* Re: Multipath Setup for NVMe over Fabric
2017-04-12 9:20 Multipath Setup for NVMe over Fabric Ankur Srivastava
@ 2017-04-13 7:41 ` Martin Wilck
0 siblings, 0 replies; 2+ messages in thread
From: Martin Wilck @ 2017-04-13 7:41 UTC (permalink / raw)
To: dm-devel, Ankur Srivastava
On Wed, 2017-04-12 at 14:50 +0530, Ankur Srivastava wrote:
> Hi All,
>
> I am working on NVMe over Fabrics and want to experiment with
> multipathing support for it.
>
> Setup Info:
> RHEL 7.2 with Kernel 4.9.3
>
>
> [root@localhost ~]# nvme list
> Node             SN                   Model    Namespace  Usage                    Format        FW Rev
> ---------------- -------------------- -------- ---------- ------------------------ ------------- --------
> /dev/nvme0n1     30501b622ed15184     Linux    10         268.44 GB / 268.44 GB    512 B + 0 B   4.9.3
> /dev/nvme1n1     ef730272d9be107c     Linux    10         268.44 GB / 268.44 GB    512 B + 0 B   4.9.3
>
>
> [root@localhost ~]# ps ax | grep multipath
> 1272 ? SLl 0:00 /sbin/multipathd
>
>
> I have connected my initiator to both ports of the Ethernet adapter
> (target) to get two I/O paths; in the output above, "/dev/nvme0n1"
> is path 1 and "/dev/nvme1n1" is path 2 for the same namespace.
>
> Note: I am using a null block device on the target side.
>
> But multipath still shows an error, i.e. "no path to host", for all
> the NVMe drives mapped on the initiator. Does multipathd support
> NVMe over Fabrics?
> Or what am I missing on the configuration side?
I don't have any practical experience with NVMeoF, but you certainly
need a recent upstream multipath-tools version to make it work.
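With a sufficiently recent multipath-tools, recognizing NVMe paths
mostly comes down to a device entry along these lines. This is only a
sketch: the vendor/product strings, `uid_attribute`, and `path_checker`
values are assumptions modeled on the upstream built-in hardware table,
not a configuration tested on this setup.

```
devices {
    device {
        vendor        "NVME"
        product       ".*"
        uid_attribute "ID_WWN"
        path_checker  "directio"
    }
}
```

Check `multipath -t` on your version to see whether a built-in NVMe
entry is already present before adding one yourself.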
Regards
Martin
--
Dr. Martin Wilck <mwilck@suse.com>, Tel. +49 (0)911 74053 2107
SUSE Linux GmbH, GF: Felix Imendörffer, Jane Smithard, Graham Norton
HRB 21284 (AG Nürnberg)
--
dm-devel mailing list
dm-devel@redhat.com
https://www.redhat.com/mailman/listinfo/dm-devel