From: Grant Albitz <GAlbitz@All-Bits.com>
To: "linux-nvme@lists.infradead.org" <linux-nvme@lists.infradead.org>
Subject: NVMET Target with esxi 7
Date: Fri, 1 May 2020 13:47:01 +0000
Message-ID: <a28d8b24ece54f8db6e21c78f0bb5aab@All-Bits.com>

Hello, wondering if anyone can lend some advice. I am trying to discover an nvmet target from ESXi. My config is below. From ESXi I can discover the controller; it sees the namespace and shows the correct size of the drive. However, the paths are dead and the HPP path driver comes back and states the path is unsupported. I suspect there is some check that is failing, but I am not sure which. I haven't been able to get any more logging out of ESXi than what is below.

A side note: no matter what I do on Ubuntu, the Mellanox (OFED) versions of nvmet and nvmet-rdma give symbol errors. I have tried the inbox Ubuntu 19.10 and 20.04 drivers, and both show the behavior described above.

Config:

I used the Pure NQN only because I was concerned ESXi might reject a simple subsystem name. The NQN below came out of another demo from Pure that worked; I have also tried simple names such as testiqn with the same result.

modprobe nvmet
modprobe nvmet-rdma
sudo /bin/mount -t configfs none /sys/kernel/config/
sudo mkdir /sys/kernel/config/nvmet/subsystems/nqn.2010-06.com.purestorage.flasharray.1f3d6733c48eadcb
cd /sys/kernel/config/nvmet/subsystems/nqn.2010-06.com.purestorage.flasharray.1f3d6733c48eadcb
echo 1 | sudo tee -a attr_allow_any_host > /dev/null
sudo mkdir namespaces/1
cd namespaces/1/
echo -n /dev/nvme0n1 | sudo tee -a device_path > /dev/null
echo 1 | sudo tee -a enable > /dev/null
sudo mkdir /sys/kernel/config/nvmet/ports/1
cd /sys/kernel/config/nvmet/ports/1
echo 10.10.11.1 | sudo tee -a addr_traddr > /dev/null
echo rdma | sudo tee -a addr_trtype > /dev/null
echo 4420 | sudo tee -a addr_trsvcid > /dev/null
echo ipv4 | sudo tee -a addr_adrfam > /dev/null
sudo ln -s /sys/kernel/config/nvmet/subsystems/nqn.2010-06.com.purestorage.flasharray.1f3d6733c48eadcb/ /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2010-06.com.purestorage.flasharray.1f3d6733c48eadcb
sudo mkdir /sys/kernel/config/nvmet/ports/2
cd /sys/kernel/config/nvmet/ports/2
echo 10.10.12.1 | sudo tee -a addr_traddr > /dev/null
echo rdma | sudo tee -a addr_trtype > /dev/null
echo 4420 | sudo tee -a addr_trsvcid > /dev/null
echo ipv4 | sudo tee -a addr_adrfam > /dev/null
sudo ln -s /sys/kernel/config/nvmet/subsystems/nqn.2010-06.com.purestorage.flasharray.1f3d6733c48eadcb/ /sys/kernel/config/nvmet/ports/2/subsystems/nqn.2010-06.com.purestorage.flasharray.1f3d6733c48eadcb
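
As a sanity check of the above (not one of my original setup steps, just what I would expect to work), nvme-cli discovery against port 1 from any Linux box with an RDMA-capable NIC should return the subsystem, and the configfs attributes can be read back directly:

# Discovery from a Linux initiator (or the target itself) using nvme-cli
sudo nvme discover -t rdma -a 10.10.11.1 -s 4420

# Read back what the target actually has configured
cat /sys/kernel/config/nvmet/ports/1/addr_traddr
cat /sys/kernel/config/nvmet/subsystems/nqn.2010-06.com.purestorage.flasharray.1f3d6733c48eadcb/namespaces/1/enable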


dmesg errors when using the OFED modules (I realize this is not really your problem, just putting it here):
[ 2498.908659] nvmet: Unknown symbol nvme_find_pdev_from_bdev (err -2)
[ 2585.306697] nvmet: Unknown symbol nvme_find_pdev_from_bdev (err -2)
[ 2678.580571] nvmet: Unknown symbol nvme_find_pdev_from_bdev (err -2)
[ 2764.312226] nvmet: Unknown symbol nvme_find_pdev_from_bdev (err -2)
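
If it helps anyone hitting the same thing: as far as I understand it, an unknown-symbol error like this usually means the OFED build of nvmet.ko was compiled against a different nvme core than the one actually loaded, so the symbol it wants simply is not exported. A rough way to check (just a sketch, nothing OFED-specific assumed beyond the module names):

# Which nvme/nvmet modules are loaded, and which files they come from
lsmod | grep -E '^nvme|^nvmet'
modinfo -n nvme-core nvmet nvmet-rdma

# Is the symbol the OFED nvmet wants exported by anything currently loaded?
sudo grep nvme_find_pdev_from_bdev /proc/kallsyms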

ESXi errors when using the inbox modules. I have a ticket open with VMware, but it's as if they have never heard of NVMe-oF. My best guess is that they support a handful of vendor appliances and not Linux.


2020-04-30T15:29:09.255Z cpu3:2097454)HPP: HppCreateDevice:2957: Created logical device 'uuid.8301e535a182473c96414d4bfe1652cc'.
2020-04-30T15:29:09.255Z cpu3:2097454)WARNING: HPP: HppClaimPath:3719: Failed to claim path 'vmhba65:C0:T0:L0': Not supported
2020-04-30T15:29:09.255Z cpu3:2097454)HPP: HppUnclaimPath:3765: Unclaiming path vmhba65:C0:T0:L0
2020-04-30T15:29:09.255Z cpu3:2097454)ScsiPath: 8397: Plugin 'HPP' rejected path 'vmhba65:C0:T0:L0'
2020-04-30T15:29:09.255Z cpu3:2097454)ScsiClaimrule: 1568: Plugin HPP specified by claimrule 65534 was not able to claim path vmhba65:C0:T0:L0: Not supported
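
For context, these are roughly the ESXi-side commands I have been using to look at the adapter and claiming state; treat the exact subcommands as approximate since I am typing them from memory:

# What ESXi sees for the NVMe-oF adapter, controller and namespaces
esxcli nvme adapter list
esxcli nvme controller list
esxcli nvme namespace list

# Claim rules and what HPP thinks of the paths
esxcli storage core claimrule list
esxcli storage hpp path list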


I realize this may be a VMware issue, but any advice would be appreciated; I am somewhat stuck at this point. I did confirm that, on the nvmet server with the inbox modules, I can mount the NVMe target on the same host, so it is working in that sense. Unfortunately I don't have another Linux server to test with, just the ESXi hosts from a separate client perspective.
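
For reference, the local test was just a plain nvme-cli connect back to the same box, something along these lines:

# Local loopback test over RDMA on the target itself
sudo nvme connect -t rdma -a 10.10.11.1 -s 4420 \
    -n nqn.2010-06.com.purestorage.flasharray.1f3d6733c48eadcb
sudo nvme list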


