* NVMET Target with esxi 7
@ 2020-05-01 13:47 Grant Albitz
  2020-05-01 14:20 ` Max Gurtovoy
  0 siblings, 1 reply; 6+ messages in thread
From: Grant Albitz @ 2020-05-01 13:47 UTC (permalink / raw)
  To: linux-nvme

Hello, wondering if anyone can lend some advice. I am trying to discover an nvmet target from ESXi. My config is below; from ESXi I can discover the controller, and it sees the namespace and shows the correct size of the drive. The paths are dead, however, and the HPP path driver comes back and states the path is unsupported. I suspect there is some check that is failing, but I am not sure what. I haven't been able to get any more logging out of ESXi than what is below.

A side note: no matter what I do on Ubuntu, the Mellanox version of nvmet and nvmet-rdma gives symbol errors. I have tried the inbox Ubuntu 19.10 and 20.04 drivers, and they both show the behavior above.

Config:

I used the Pure NQN just because I was concerned ESXi might reject a simple subsystem name. The NQN below came out of another Pure demo that worked; I have tried simple NQNs such as testiqn with the same result.

modprobe nvmet
modprobe nvmet-rdma
sudo /bin/mount -t configfs none /sys/kernel/config/
sudo mkdir /sys/kernel/config/nvmet/subsystems/nqn.2010-06.com.purestorage.flasharray.1f3d6733c48eadcb
cd /sys/kernel/config/nvmet/subsystems/nqn.2010-06.com.purestorage.flasharray.1f3d6733c48eadcb
echo 1 | sudo tee -a attr_allow_any_host > /dev/null
sudo mkdir namespaces/1
cd namespaces/1/
echo -n /dev/nvme0n1 | sudo tee -a device_path > /dev/null
echo 1 | sudo tee -a enable > /dev/null
sudo mkdir /sys/kernel/config/nvmet/ports/1
cd /sys/kernel/config/nvmet/ports/1
echo 10.10.11.1 | sudo tee -a addr_traddr > /dev/null
echo rdma | sudo tee -a addr_trtype > /dev/null
echo 4420 | sudo tee -a addr_trsvcid > /dev/null
echo ipv4 | sudo tee -a addr_adrfam > /dev/null
sudo ln -s /sys/kernel/config/nvmet/subsystems/nqn.2010-06.com.purestorage.flasharray.1f3d6733c48eadcb/ /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2010-06.com.purestorage.flasharray.1f3d6733c48eadcb
sudo mkdir /sys/kernel/config/nvmet/ports/2
cd /sys/kernel/config/nvmet/ports/2
echo 10.10.12.1 | sudo tee -a addr_traddr > /dev/null
echo rdma | sudo tee -a addr_trtype > /dev/null
echo 4420 | sudo tee -a addr_trsvcid > /dev/null
echo ipv4 | sudo tee -a addr_adrfam > /dev/null
sudo ln -s /sys/kernel/config/nvmet/subsystems/nqn.2010-06.com.purestorage.flasharray.1f3d6733c48eadcb/ /sys/kernel/config/nvmet/ports/2/subsystems/nqn.2010-06.com.purestorage.flasharray.1f3d6733c48eadcb
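
For reference, a minimal discovery sanity check against the first port from a Linux initiator (just a sketch, assuming nvme-cli and the inbox nvme-rdma initiator module are available):

sudo modprobe nvme-rdma
sudo nvme discover -t rdma -a 10.10.11.1 -s 4420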


dmesg error when using the OFED modules (I realize this is not really your problem, just putting it here):
[ 2498.908659] nvmet: Unknown symbol nvme_find_pdev_from_bdev (err -2)
[ 2585.306697] nvmet: Unknown symbol nvme_find_pdev_from_bdev (err -2)
[ 2678.580571] nvmet: Unknown symbol nvme_find_pdev_from_bdev (err -2)
[ 2764.312226] nvmet: Unknown symbol nvme_find_pdev_from_bdev (err -2)
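
To confirm which nvmet/nvmet-rdma modules modprobe would actually pick up (inbox vs. MLNX_OFED), something like this shows the resolved module file and version (a sketch, nothing OFED-specific assumed):

modinfo nvmet | grep -E 'filename|version'
modinfo nvmet-rdma | grep -E 'filename|version'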

ESXi error when using the inbox modules. I have a ticket open with VMware, but it's as if they have never heard of NVMe-oF. My best guess is that they support a handful of vendor appliances and not Linux.


2020-04-30T15:29:09.255Z cpu3:2097454)HPP: HppCreateDevice:2957: Created logical device 'uuid.8301e535a182473c96414d4bfe1652cc'.
2020-04-30T15:29:09.255Z cpu3:2097454)WARNING: HPP: HppClaimPath:3719: Failed to claim path 'vmhba65:C0:T0:L0': Not supported
2020-04-30T15:29:09.255Z cpu3:2097454)HPP: HppUnclaimPath:3765: Unclaiming path vmhba65:C0:T0:L0
2020-04-30T15:29:09.255Z cpu3:2097454)ScsiPath: 8397: Plugin 'HPP' rejected path 'vmhba65:C0:T0:L0'
2020-04-30T15:29:09.255Z cpu3:2097454)ScsiClaimrule: 1568: Plugin HPP specified by claimrule 65534 was not able to claim path vmhba65:C0:T0:L0: Not supported
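
On the ESXi side, the adapter/controller/namespace view and the active claim rules can be checked from the ESXi shell with something like the following (a sketch; the exact esxcli namespaces and output vary by build):

esxcli nvme adapter list
esxcli nvme controller list
esxcli nvme namespace list
esxcli storage core claimrule list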


I realize this may be a VMware issue, but any advice would be appreciated; I am sort of stuck at this point. I did confirm that on the nvmet server, with the inbox module, I can mount the NVMe target on the same host, so it is working in that sense. Unfortunately I don't have another Linux server to test with, just the ESXi hosts from a separate client perspective.


_______________________________________________
linux-nvme mailing list
linux-nvme@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-nvme

^ permalink raw reply	[flat|nested] 6+ messages in thread

* Re: NVMET Target with esxi 7
  2020-05-01 13:47 NVMET Target with esxi 7 Grant Albitz
@ 2020-05-01 14:20 ` Max Gurtovoy
  2020-05-01 14:27   ` Grant Albitz
  0 siblings, 1 reply; 6+ messages in thread
From: Max Gurtovoy @ 2020-05-01 14:20 UTC (permalink / raw)
  To: Grant Albitz, linux-nvme

Hi Grant,

In case you're having trouble with the MLNX_OFED drivers and configuration -
the mailing list is not the place to raise it.

Please work with the correct channels.

In case you have only one Linux server, you can try doing a loopback
connection using RDMA transport and see if it works for you.
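
Something along these lines should work for the loopback test (just a
sketch, assuming nvme-cli and the inbox nvme-rdma initiator module; the
address and NQN are taken from your configuration):

sudo modprobe nvme-rdma
sudo nvme connect -t rdma -a 10.10.11.1 -s 4420 -n nqn.2010-06.com.purestorage.flasharray.1f3d6733c48eadcb
sudo nvme list
sudo nvme disconnect -n nqn.2010-06.com.purestorage.flasharray.1f3d6733c48eadcb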

-Max.

On 5/1/2020 4:47 PM, Grant Albitz wrote:
> Hello, wondering if anyone can lend some advice. I am trying to discover an nvmet target from ESXi. My config is below; from ESXi I can discover the controller, and it sees the namespace and shows the correct size of the drive. The paths are dead, however, and the HPP path driver comes back and states the path is unsupported. I suspect there is some check that is failing, but I am not sure what. I haven't been able to get any more logging out of ESXi than what is below.
>
> A side note: no matter what I do on Ubuntu, the Mellanox version of nvmet and nvmet-rdma gives symbol errors. I have tried the inbox Ubuntu 19.10 and 20.04 drivers, and they both show the behavior above.
>
> Config:
>
> I used the Pure NQN just because I was concerned ESXi might reject a simple subsystem name. The NQN below came out of another Pure demo that worked; I have tried simple NQNs such as testiqn with the same result.
>
> modprobe nvmet
> modprobe nvmet-rdma
> sudo /bin/mount -t configfs none /sys/kernel/config/
> sudo mkdir /sys/kernel/config/nvmet/subsystems/nqn.2010-06.com.purestorage.flasharray.1f3d6733c48eadcb
> cd /sys/kernel/config/nvmet/subsystems/nqn.2010-06.com.purestorage.flasharray.1f3d6733c48eadcb
> echo 1 | sudo tee -a attr_allow_any_host > /dev/null
> sudo mkdir namespaces/1
> cd namespaces/1/
> echo -n /dev/nvme0n1 | sudo tee -a device_path > /dev/null
> echo 1 | sudo tee -a enable > /dev/null
> sudo mkdir /sys/kernel/config/nvmet/ports/1
> cd /sys/kernel/config/nvmet/ports/1
> echo 10.10.11.1 | sudo tee -a addr_traddr > /dev/null
> echo rdma | sudo tee -a addr_trtype > /dev/null
> echo 4420 | sudo tee -a addr_trsvcid > /dev/null
> echo ipv4 | sudo tee -a addr_adrfam > /dev/null
> sudo ln -s /sys/kernel/config/nvmet/subsystems/nqn.2010-06.com.purestorage.flasharray.1f3d6733c48eadcb/ /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2010-06.com.purestorage.flasharray.1f3d6733c48eadcb
> sudo mkdir /sys/kernel/config/nvmet/ports/2
> cd /sys/kernel/config/nvmet/ports/2
> echo 10.10.12.1 | sudo tee -a addr_traddr > /dev/null
> echo rdma | sudo tee -a addr_trtype > /dev/null
> echo 4420 | sudo tee -a addr_trsvcid > /dev/null
> echo ipv4 | sudo tee -a addr_adrfam > /dev/null
> sudo ln -s /sys/kernel/config/nvmet/subsystems/nqn.2010-06.com.purestorage.flasharray.1f3d6733c48eadcb/ /sys/kernel/config/nvmet/ports/2/subsystems/nqn.2010-06.com.purestorage.flasharray.1f3d6733c48eadcb
>
>
> dmesg error when using the OFED modules (I realize this is not really your problem, just putting it here):
> [ 2498.908659] nvmet: Unknown symbol nvme_find_pdev_from_bdev (err -2)
> [ 2585.306697] nvmet: Unknown symbol nvme_find_pdev_from_bdev (err -2)
> [ 2678.580571] nvmet: Unknown symbol nvme_find_pdev_from_bdev (err -2)
> [ 2764.312226] nvmet: Unknown symbol nvme_find_pdev_from_bdev (err -2)
>
> ESXi error when using the inbox modules. I have a ticket open with VMware, but it's as if they have never heard of NVMe-oF. My best guess is that they support a handful of vendor appliances and not Linux.
>
>
> 2020-04-30T15:29:09.255Z cpu3:2097454)HPP: HppCreateDevice:2957: Created logical device 'uuid.8301e535a182473c96414d4bfe1652cc'.
> 2020-04-30T15:29:09.255Z cpu3:2097454)WARNING: HPP: HppClaimPath:3719: Failed to claim path 'vmhba65:C0:T0:L0': Not supported
> 2020-04-30T15:29:09.255Z cpu3:2097454)HPP: HppUnclaimPath:3765: Unclaiming path vmhba65:C0:T0:L0
> 2020-04-30T15:29:09.255Z cpu3:2097454)ScsiPath: 8397: Plugin 'HPP' rejected path 'vmhba65:C0:T0:L0'
> 2020-04-30T15:29:09.255Z cpu3:2097454)ScsiClaimrule: 1568: Plugin HPP specified by claimrule 65534 was not able to claim path vmhba65:C0:T0:L0: Not supported
>
>
> I realize this may be a VMware issue, but any advice would be appreciated; I am sort of stuck at this point. I did confirm that on the nvmet server, with the inbox module, I can mount the NVMe target on the same host, so it is working in that sense. Unfortunately I don't have another Linux server to test with, just the ESXi hosts from a separate client perspective.
>
>
> _______________________________________________
> linux-nvme mailing list
> linux-nvme@lists.infradead.org
> http://lists.infradead.org/mailman/listinfo/linux-nvme

_______________________________________________
linux-nvme mailing list
linux-nvme@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-nvme

^ permalink raw reply	[flat|nested] 6+ messages in thread

* Re: NVMET Target with esxi 7
  2020-05-01 14:20 ` Max Gurtovoy
@ 2020-05-01 14:27   ` Grant Albitz
  2020-05-05 20:18     ` Grant Albitz
  0 siblings, 1 reply; 6+ messages in thread
From: Grant Albitz @ 2020-05-01 14:27 UTC (permalink / raw)
  To: linux-nvme

Thanks Max, but as indicated, with the inbox nvmet and rdma drivers the target is not discoverable by ESXi 7. Curious if it has been tested at all. As indicated, the MLNX_OFED variant fails to load due to symbol errors, but I have had the outlined experience of discovering the namespace but not the path with the inbox drivers. At the moment I have abandoned the MLNX drivers and am trying to use the inbox Ubuntu 19.10 ones.




From: Max Gurtovoy <maxg@mellanox.com>
Sent: Friday, May 1, 2020 10:20 AM
To: Grant Albitz; linux-nvme@lists.infradead.org
Subject: Re: NVMET Target with esxi 7
    
Hi Grant,

In case you're having trouble with the MLNX_OFED drivers and configuration -
the mailing list is not the place to raise it.

Please work with the correct channels.

In case you have only one Linux server, you can try doing a loopback
connection using RDMA transport and see if it works for you.

-Max.

On 5/1/2020 4:47 PM, Grant Albitz wrote:
> Hello, wondering if anyone can lend some advice. I am trying to discover an nvmet target from ESXi. My config is below; from ESXi I can discover the controller, and it sees the namespace and shows the correct size of the drive. The paths are dead, however, and the HPP path driver comes back and states the path is unsupported. I suspect there is some check that is failing, but I am not sure what. I haven't been able to get any more logging out of ESXi than what is below.
>
> A side note: no matter what I do on Ubuntu, the Mellanox version of nvmet and nvmet-rdma gives symbol errors. I have tried the inbox Ubuntu 19.10 and 20.04 drivers, and they both show the behavior above.
>
> Config:
>
> I used the Pure NQN just because I was concerned ESXi might reject a simple subsystem name. The NQN below came out of another Pure demo that worked; I have tried simple NQNs such as testiqn with the same result.
>
> modprobe nvmet
> modprobe nvmet-rdma
> sudo /bin/mount -t configfs none /sys/kernel/config/
> sudo mkdir /sys/kernel/config/nvmet/subsystems/nqn.2010-06.com.purestorage.flasharray.1f3d6733c48eadcb
> cd /sys/kernel/config/nvmet/subsystems/nqn.2010-06.com.purestorage.flasharray.1f3d6733c48eadcb
> echo 1 | sudo tee -a attr_allow_any_host > /dev/null
> sudo mkdir namespaces/1
> cd namespaces/1/
> echo -n /dev/nvme0n1 | sudo tee -a device_path > /dev/null
> echo 1 | sudo tee -a enable > /dev/null
> sudo mkdir /sys/kernel/config/nvmet/ports/1
> cd /sys/kernel/config/nvmet/ports/1
> echo 10.10.11.1 | sudo tee -a addr_traddr > /dev/null
> echo rdma | sudo tee -a addr_trtype > /dev/null
> echo 4420 | sudo tee -a addr_trsvcid > /dev/null
> echo ipv4 | sudo tee -a addr_adrfam > /dev/null
> sudo ln -s /sys/kernel/config/nvmet/subsystems/nqn.2010-06.com.purestorage.flasharray.1f3d6733c48eadcb/ /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2010-06.com.purestorage.flasharray.1f3d6733c48eadcb
> sudo mkdir /sys/kernel/config/nvmet/ports/2
> cd /sys/kernel/config/nvmet/ports/2
> echo 10.10.12.1 | sudo tee -a addr_traddr > /dev/null
> echo rdma | sudo tee -a addr_trtype > /dev/null
> echo 4420 | sudo tee -a addr_trsvcid > /dev/null
> echo ipv4 | sudo tee -a addr_adrfam > /dev/null
> sudo ln -s /sys/kernel/config/nvmet/subsystems/nqn.2010-06.com.purestorage.flasharray.1f3d6733c48eadcb/ /sys/kernel/config/nvmet/ports/2/subsystems/nqn.2010-06.com.purestorage.flasharray.1f3d6733c48eadcb
>
>
> dmesg error when using the OFED modules (I realize this is not really your problem, just putting it here):
> [ 2498.908659] nvmet: Unknown symbol nvme_find_pdev_from_bdev (err -2)
> [ 2585.306697] nvmet: Unknown symbol nvme_find_pdev_from_bdev (err -2)
> [ 2678.580571] nvmet: Unknown symbol nvme_find_pdev_from_bdev (err -2)
> [ 2764.312226] nvmet: Unknown symbol nvme_find_pdev_from_bdev (err -2)
>
> ESXi error when using the inbox modules. I have a ticket open with VMware, but it's as if they have never heard of NVMe-oF. My best guess is that they support a handful of vendor appliances and not Linux.
>
>
> 2020-04-30T15:29:09.255Z cpu3:2097454)HPP: HppCreateDevice:2957: Created logical device 'uuid.8301e535a182473c96414d4bfe1652cc'.
> 2020-04-30T15:29:09.255Z cpu3:2097454)WARNING: HPP: HppClaimPath:3719: Failed to claim path 'vmhba65:C0:T0:L0': Not supported
> 2020-04-30T15:29:09.255Z cpu3:2097454)HPP: HppUnclaimPath:3765: Unclaiming path vmhba65:C0:T0:L0
> 2020-04-30T15:29:09.255Z cpu3:2097454)ScsiPath: 8397: Plugin 'HPP' rejected path 'vmhba65:C0:T0:L0'
> 2020-04-30T15:29:09.255Z cpu3:2097454)ScsiClaimrule: 1568: Plugin HPP specified by claimrule 65534 was not able to claim path vmhba65:C0:T0:L0: Not supported
>
>
> I realize this may be a VMware issue, but any advice would be appreciated; I am sort of stuck at this point. I did confirm that on the nvmet server, with the inbox module, I can mount the NVMe target on the same host, so it is working in that sense. Unfortunately I don't have another Linux server to test with, just the ESXi hosts from a separate client perspective.
>
>
> _______________________________________________
> linux-nvme mailing list
> linux-nvme@lists.infradead.org
> http://lists.infradead.org/mailman/listinfo/linux-nvme
    
_______________________________________________
linux-nvme mailing list
linux-nvme@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-nvme

^ permalink raw reply	[flat|nested] 6+ messages in thread

* NVMET Target with esxi 7
  2020-05-01 14:27   ` Grant Albitz
@ 2020-05-05 20:18     ` Grant Albitz
  2020-05-07 16:19       ` Sagi Grimberg
  0 siblings, 1 reply; 6+ messages in thread
From: Grant Albitz @ 2020-05-05 20:18 UTC (permalink / raw)
  To: linux-nvme

 Hello,

I was trying to configure VMware to connect to an nvmet-based target. I did not have much luck. VMware's official stance is a very small vendor support list. I did see the recent additions such as the metadata support. I was curious if anyone developing nvmet was willing/able to test nvmet as a datastore target for VMware, to possibly add this functionality. At the moment I am not sure if there are a large number of reasons it's not working, or a very small check.



_______________________________________________
linux-nvme mailing list
linux-nvme@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-nvme

^ permalink raw reply	[flat|nested] 6+ messages in thread

* Re: NVMET Target with esxi 7
  2020-05-05 20:18     ` Grant Albitz
@ 2020-05-07 16:19       ` Sagi Grimberg
  2020-05-07 16:58         ` Grant Albitz
  0 siblings, 1 reply; 6+ messages in thread
From: Sagi Grimberg @ 2020-05-07 16:19 UTC (permalink / raw)
  To: Grant Albitz, linux-nvme

>   Hello,

Hey Grant,

> I was trying to configure VMware to connect to an nvmet-based target. I did not have much luck. VMware's official stance is a very small vendor support list. I did see the recent additions such as the metadata support. I was curious if anyone developing nvmet was willing/able to test nvmet as a datastore target for VMware, to possibly add this functionality. At the moment I am not sure if there are a large number of reasons it's not working, or a very small check.

The Linux NVMe target does not support fused commands, so there is no
support for VMware currently.
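
For anyone who wants to look at where that limitation lives, fused
commands are rejected early in the target core; a quick way to locate
the check in a kernel source tree (a sketch, assuming the usual tree
layout):

grep -n FUSE drivers/nvme/target/core.c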

_______________________________________________
linux-nvme mailing list
linux-nvme@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-nvme

^ permalink raw reply	[flat|nested] 6+ messages in thread

* RE: NVMET Target with esxi 7
  2020-05-07 16:19       ` Sagi Grimberg
@ 2020-05-07 16:58         ` Grant Albitz
  0 siblings, 0 replies; 6+ messages in thread
From: Grant Albitz @ 2020-05-07 16:58 UTC (permalink / raw)
  To: Sagi Grimberg, linux-nvme

Thank you Sagi. I am unfamiliar with fused commands and the concept, but I will look into it further. I appreciate the response; I can at least stop trying for now =)

-----Original Message-----
From: Sagi Grimberg <sagi@grimberg.me> 
Sent: Thursday, May 7, 2020 12:20 PM
To: Grant Albitz <GAlbitz@All-Bits.com>; linux-nvme@lists.infradead.org
Subject: Re: NVMET Target with esxi 7

>   Hello,

Hey Grant,

> I was trying to configure VMware to connect to an nvmet-based target. I did not have much luck. VMware's official stance is a very small vendor support list. I did see the recent additions such as the metadata support. I was curious if anyone developing nvmet was willing/able to test nvmet as a datastore target for VMware, to possibly add this functionality. At the moment I am not sure if there are a large number of reasons it's not working, or a very small check.

The Linux NVMe target does not support fused commands, so there is no support for VMware currently.
_______________________________________________
linux-nvme mailing list
linux-nvme@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-nvme

^ permalink raw reply	[flat|nested] 6+ messages in thread

end of thread, other threads:[~2020-05-07 16:59 UTC | newest]

Thread overview: 6+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2020-05-01 13:47 NVMET Target with esxi 7 Grant Albitz
2020-05-01 14:20 ` Max Gurtovoy
2020-05-01 14:27   ` Grant Albitz
2020-05-05 20:18     ` Grant Albitz
2020-05-07 16:19       ` Sagi Grimberg
2020-05-07 16:58         ` Grant Albitz
