* Re: [SPDK] Working With bdevs/lvols
@ 2018-05-18 16:28 Luse, Paul E
  0 siblings, 0 replies; 17+ messages in thread
From: Luse, Paul E @ 2018-05-18 16:28 UTC (permalink / raw)
  To: spdk


That's great Joe, thanks for sharing your steps w/everyone...

Thx
Paul

-----Original Message-----
From: SPDK [mailto:spdk-bounces(a)lists.01.org] On Behalf Of Gruher, Joseph R
Sent: Friday, May 18, 2018 8:50 AM
To: Storage Performance Development Kit <spdk(a)lists.01.org>
Subject: Re: [SPDK] Working With bdevs/lvols

This seems to be working well for me.  To close the thread, here are the steps I performed to create 64 20GiB lvols from 4 NVMe SSDs (16 lvols per SSD):

#go to spdk working directory
cd spdk

#prepare to run SPDK
sudo scripts/setup.sh

#start SPDK with 16 cores
sudo app/nvmf_tgt/nvmf_tgt -m 0xFFFF0

#create bdevs from 4 NVMe devices
sudo ./rpc.py construct_nvme_bdev -b spdkdev1 -t pcie -a 0000:3d:00.0
sudo ./rpc.py construct_nvme_bdev -b spdkdev2 -t pcie -a 0000:3e:00.0
sudo ./rpc.py construct_nvme_bdev -b spdkdev3 -t pcie -a 0000:3f:00.0
sudo ./rpc.py construct_nvme_bdev -b spdkdev4 -t pcie -a 0000:40:00.0

#create 4 lvol stores from 4 NVMe bdevs
sudo ./rpc.py construct_lvol_store spdkdev1n1 lvolstore1
sudo ./rpc.py construct_lvol_store spdkdev2n1 lvolstore2
sudo ./rpc.py construct_lvol_store spdkdev3n1 lvolstore3
sudo ./rpc.py construct_lvol_store spdkdev4n1 lvolstore4

#create 16x 20GiB lvols in each lvol store
sudo ./rpc.py construct_lvol_bdev -l lvolstore1 lvol01 20480
sudo ./rpc.py construct_lvol_bdev -l lvolstore1 lvol02 20480
<cut for length>
sudo ./rpc.py construct_lvol_bdev -l lvolstore4 lvol63 20480
sudo ./rpc.py construct_lvol_bdev -l lvolstore4 lvol64 20480

#Create new NVMeoF subsystem
sudo ./rpc.py construct_nvmf_subsystem -s "TESTSERIAL" -a nqn.2018-05.io.spdk:nqn01 "trtype:RDMA traddr:10.6.0.18 trsvcid:4420" ""

#Confirm subsystem exists
sudo nvme discover -t rdma -a 10.6.0.18

	Discovery Log Number of Records 1, Generation counter 3
	=====Discovery Log Entry 0======
	trtype:  rdma
	adrfam:  ipv4
	subtype: nvme subsystem
	treq:    not specified
	portid:  0
	trsvcid: 4420
	subnqn:  nqn.2018-05.io.spdk:nqn01
	traddr:  10.6.0.18
	rdma_prtype: not specified
	rdma_qptype: connected
	rdma_cms:    rdma-cm
	rdma_pkey: 0x0000

#Add the 64 volumes to the subsystem
sudo ./rpc.py nvmf_subsystem_add_ns nqn.2018-05.io.spdk:nqn01 lvolstore1/lvol01
sudo ./rpc.py nvmf_subsystem_add_ns nqn.2018-05.io.spdk:nqn01 lvolstore1/lvol02
<cut for length>
sudo ./rpc.py nvmf_subsystem_add_ns nqn.2018-05.io.spdk:nqn01 lvolstore4/lvol63
sudo ./rpc.py nvmf_subsystem_add_ns nqn.2018-05.io.spdk:nqn01 lvolstore4/lvol64

#Connect subsystem using kernel initiator
sudo nvme connect -t rdma -a 10.6.0.18 -n nqn.2018-05.io.spdk:nqn01 -i 8

#Confirm volumes available on initiator
rsa(a)tppjoe01:~$ lsblk
NAME        MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
loop0         7:0    0  86.6M  1 loop /snap/core/4486
sda           8:0    1  14.6G  0 disk
└─sda1        8:1    1   4.3G  0 part
sr0          11:0    1  1024M  0 rom
nvme0n1     259:0    0 119.2G  0 disk
├─nvme0n1p1 259:1    0   512M  0 part /boot/efi
└─nvme0n1p2 259:2    0 118.8G  0 part /
nvme1n1     259:4    0    20G  0 disk
nvme1n2     259:6    0    20G  0 disk
<cut for length>
nvme1n63    259:128  0    20G  0 disk
nvme1n64    259:130  0    20G  0 disk

Now to run some IO...

> -----Original Message-----
> From: SPDK [mailto:spdk-bounces(a)lists.01.org] On Behalf Of Harris, 
> James R
> Sent: Thursday, May 17, 2018 1:06 PM
> To: Storage Performance Development Kit <spdk(a)lists.01.org>
> Subject: Re: [SPDK] Working With bdevs/lvols
> 
> Thanks Curt!  These are great examples.  A few comments inline:
> 
> 
> On 5/17/18, 12:59 PM, "SPDK on behalf of Bruns, Curt E" <spdk- 
> bounces(a)lists.01.org on behalf of curt.e.bruns(a)intel.com> wrote:
> 
>     Hi Joe,
> 
>   I played with this just recently.  Make sure when you start the 
> nvmf_target, you also enable RPC listener (the -r parameter):
> 
> [Jim]  You shouldn’t have to specify -r, unless you want to override 
> the default RPC domain socket location (/var/tmp/spdk.sock).
> 
> <snip>
> 
>     % python rpc.py -s 100.100.1.82 -p 5260 construct_lvol_store AIO0
> lvol_store1
>     723221b4-b884-404d-a47e-946c7ffb6d7f  # Make note of this UUID
> 
> <snip>
> 
>     # Create a 100MB Thin provisioned volume on that LVOL_Store:
>     % python rpc.py -s 100.100.1.82 -p 5260 construct_lvol_bdev -u 
> 723221b4-b884-404d-a47e-946c7ffb6d7f -t vol1 100
>   c008972d-68e8-4795-9896-dd09a8d7062a  # Note this UUID
> 
>     # Create a new Subsystem
>     % python rpc.py -s 100.100.1.82 -p 5260 construct_nvmf_subsystem  
> -a -s "SPDKTEST" nqn.2016-06.io.spdk:cnodex "trtype:RDMA 
> traddr:100.100.1.82 trsvcid:4421" ""
> 
>     # Add vol1 to that Subsystem (using that UUID from thin-provisioned volume)
>     % python rpc.py -s 100.100.1.82 -p 5260 nvmf_subsystem_add_ns  
> nqn.2016- 06.io.spdk:cnodex c008972d-68e8-4795-9896-dd09a8d7062a
> 
> You can use the alias (lvol_store1/vol1) instead of the full UUID as 
> of SPDK v18.01.
> You can also combine these two steps by passing -n “lvol_store1/vol1” 
> to the construct_nvmf_subsystem RPC.
> 
> <snip>
> 
>     On 5/17/18, 12:46 PM, "SPDK on behalf of Gruher, Joseph R" <spdk- 
> bounces(a)lists.01.org on behalf of joseph.r.gruher(a)intel.com> wrote:
> 
>         >>>> rpc.py sends RPCs to a running SPDK target application.  
> The target application will create this /var/tmp/spdk.sock Unix domain 
> socket.  Is it possible that the nvmf_tgt process hasn’t been started 
> yet when sending the construct_nvme_bdev RPC?
> 
>         Yes, I haven't started nvmf_tgt.  I guess the process I was looking for was:
>         1) create bdevs
>         2) create lvol store
>         3) create lvols
>         4) start nvmf_tgt with subsystem configuration defined in 
> nvmf.conf.in that uses lvols
> 
>         It sounds like what I need to do instead is:
>         1) start nvmf_tgt
>         2) create bdevs
>         3) create lvol store
>         4) create lvols
> 
>         My question then is, how do I create my subsystems?  I'm not 
> seeing a command line way to create subsystems in the documentation, 
> just nvmf.conf.in.  Is there a way to load a new nvmf.conf.in without 
> stopping the target?  Or how do I go about creating the subsystems if 
> the target is already running?
> 
>         Thanks,
>         Joe
>         _______________________________________________
>         SPDK mailing list
>         SPDK(a)lists.01.org
>         https://lists.01.org/mailman/listinfo/spdk
> 
> 


* Re: [SPDK] Working With bdevs/lvols
@ 2018-05-18 15:49 Gruher, Joseph R
  0 siblings, 0 replies; 17+ messages in thread
From: Gruher, Joseph R @ 2018-05-18 15:49 UTC (permalink / raw)
  To: spdk


This seems to be working well for me.  To close the thread, here are the steps I performed to create 64 20GiB lvols from 4 NVMe SSDs (16 lvols per SSD):

#go to spdk working directory
cd spdk

#prepare to run SPDK
sudo scripts/setup.sh

#start SPDK with 16 cores
sudo app/nvmf_tgt/nvmf_tgt -m 0xFFFF0
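(The -m argument is just a bitmask of CPU cores: 0xFFFF0 sets bits 4 through 19, i.e. 16 cores starting at core 4. A quick way to build or sanity-check such a mask; this is a sketch, not part of SPDK:)

```python
# Build an SPDK-style -m core mask: `count` cores starting at core `first`.
def core_mask(first, count):
    return hex(((1 << count) - 1) << first)

print(core_mask(4, 16))  # -> 0xffff0, i.e. cores 4..19
```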

#create bdevs from 4 NVMe devices
sudo ./rpc.py construct_nvme_bdev -b spdkdev1 -t pcie -a 0000:3d:00.0
sudo ./rpc.py construct_nvme_bdev -b spdkdev2 -t pcie -a 0000:3e:00.0
sudo ./rpc.py construct_nvme_bdev -b spdkdev3 -t pcie -a 0000:3f:00.0
sudo ./rpc.py construct_nvme_bdev -b spdkdev4 -t pcie -a 0000:40:00.0

#create 4 lvol stores from 4 NVMe bdevs
sudo ./rpc.py construct_lvol_store spdkdev1n1 lvolstore1
sudo ./rpc.py construct_lvol_store spdkdev2n1 lvolstore2
sudo ./rpc.py construct_lvol_store spdkdev3n1 lvolstore3
sudo ./rpc.py construct_lvol_store spdkdev4n1 lvolstore4

#create 16x 20GiB lvols in each lvol store
sudo ./rpc.py construct_lvol_bdev -l lvolstore1 lvol01 20480
sudo ./rpc.py construct_lvol_bdev -l lvolstore1 lvol02 20480
<cut for length>
sudo ./rpc.py construct_lvol_bdev -l lvolstore4 lvol63 20480
sudo ./rpc.py construct_lvol_bdev -l lvolstore4 lvol64 20480
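(Rather than typing 64 nearly identical lines, the construct_lvol_bdev calls can be generated. A sketch; the names and the 20480 MiB size mirror the steps above, and this only prints shell lines, it is not an SPDK API:)

```python
def lvol_commands(stores=4, per_store=16, size_mib=20480):
    """Emit one construct_lvol_bdev command per lvol, lvol01..lvol64."""
    cmds = []
    for s in range(1, stores + 1):
        for v in range(1, per_store + 1):
            n = (s - 1) * per_store + v  # global lvol number
            cmds.append(f"sudo ./rpc.py construct_lvol_bdev "
                        f"-l lvolstore{s} lvol{n:02d} {size_mib}")
    return cmds

for cmd in lvol_commands():
    print(cmd)
```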

#Create new NVMeoF subsystem 
sudo ./rpc.py construct_nvmf_subsystem -s "TESTSERIAL" -a nqn.2018-05.io.spdk:nqn01 "trtype:RDMA traddr:10.6.0.18 trsvcid:4420" ""

#Confirm subsystem exists
sudo nvme discover -t rdma -a 10.6.0.18

	Discovery Log Number of Records 1, Generation counter 3
	=====Discovery Log Entry 0======
	trtype:  rdma
	adrfam:  ipv4
	subtype: nvme subsystem
	treq:    not specified
	portid:  0
	trsvcid: 4420
	subnqn:  nqn.2018-05.io.spdk:nqn01
	traddr:  10.6.0.18
	rdma_prtype: not specified
	rdma_qptype: connected
	rdma_cms:    rdma-cm
	rdma_pkey: 0x0000

#Add the 64 volumes to the subsystem
sudo ./rpc.py nvmf_subsystem_add_ns nqn.2018-05.io.spdk:nqn01 lvolstore1/lvol01
sudo ./rpc.py nvmf_subsystem_add_ns nqn.2018-05.io.spdk:nqn01 lvolstore1/lvol02
<cut for length>
sudo ./rpc.py nvmf_subsystem_add_ns nqn.2018-05.io.spdk:nqn01 lvolstore4/lvol63
sudo ./rpc.py nvmf_subsystem_add_ns nqn.2018-05.io.spdk:nqn01 lvolstore4/lvol64

#Connect subsystem using kernel initiator
sudo nvme connect -t rdma -a 10.6.0.18 -n nqn.2018-05.io.spdk:nqn01 -i 8

#Confirm volumes available on initiator
rsa(a)tppjoe01:~$ lsblk
NAME        MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
loop0         7:0    0  86.6M  1 loop /snap/core/4486
sda           8:0    1  14.6G  0 disk
└─sda1        8:1    1   4.3G  0 part
sr0          11:0    1  1024M  0 rom
nvme0n1     259:0    0 119.2G  0 disk
├─nvme0n1p1 259:1    0   512M  0 part /boot/efi
└─nvme0n1p2 259:2    0 118.8G  0 part /
nvme1n1     259:4    0    20G  0 disk
nvme1n2     259:6    0    20G  0 disk
<cut for length>
nvme1n63    259:128  0    20G  0 disk
nvme1n64    259:130  0    20G  0 disk

Now to run some IO...

> -----Original Message-----
> From: SPDK [mailto:spdk-bounces(a)lists.01.org] On Behalf Of Harris, James R
> Sent: Thursday, May 17, 2018 1:06 PM
> To: Storage Performance Development Kit <spdk(a)lists.01.org>
> Subject: Re: [SPDK] Working With bdevs/lvols
> 
> Thanks Curt!  These are great examples.  A few comments inline:
> 
> 
> On 5/17/18, 12:59 PM, "SPDK on behalf of Bruns, Curt E" <spdk-
> bounces(a)lists.01.org on behalf of curt.e.bruns(a)intel.com> wrote:
> 
>     Hi Joe,
> 
>   I played with this just recently.  Make sure when you start the nvmf_target,
> you also enable RPC listener (the -r parameter):
> 
> [Jim]  You shouldn’t have to specify -r, unless you want to override the default
> RPC domain socket location (/var/tmp/spdk.sock).
> 
> <snip>
> 
>     % python rpc.py -s 100.100.1.82 -p 5260 construct_lvol_store AIO0
> lvol_store1
>     723221b4-b884-404d-a47e-946c7ffb6d7f  # Make note of this UUID
> 
> <snip>
> 
>     # Create a 100MB Thin provisioned volume on that LVOL_Store:
>     % python rpc.py -s 100.100.1.82 -p 5260 construct_lvol_bdev -u 723221b4-
> b884-404d-a47e-946c7ffb6d7f -t vol1 100
>   c008972d-68e8-4795-9896-dd09a8d7062a  # Note this UUID
> 
>     # Create a new Subsystem
>     % python rpc.py -s 100.100.1.82 -p 5260 construct_nvmf_subsystem  -a -s
> "SPDKTEST" nqn.2016-06.io.spdk:cnodex "trtype:RDMA traddr:100.100.1.82
> trsvcid:4421" ""
> 
>     # Add vol1 to that Subsystem (using that UUID from thin-provisioned volume)
>     % python rpc.py -s 100.100.1.82 -p 5260 nvmf_subsystem_add_ns  nqn.2016-
> 06.io.spdk:cnodex c008972d-68e8-4795-9896-dd09a8d7062a
> 
> You can use the alias (lvol_store1/vol1) instead of the full UUID as of SPDK
> v18.01.
> You can also combine these two steps by passing -n “lvol_store1/vol1” to the
> construct_nvmf_subsystem RPC.
> 
> <snip>
> 
>     On 5/17/18, 12:46 PM, "SPDK on behalf of Gruher, Joseph R" <spdk-
> bounces(a)lists.01.org on behalf of joseph.r.gruher(a)intel.com> wrote:
> 
>         >>>> rpc.py sends RPCs to a running SPDK target application.  The target
> application will create this /var/tmp/spdk.sock Unix domain socket.  Is it
> possible that the nvmf_tgt process hasn’t been started yet when sending the
> construct_nvme_bdev RPC?
> 
>         Yes, I haven't started nvmf_tgt.  I guess the process I was looking for was:
>         1) create bdevs
>         2) create lvol store
>         3) create lvols
>         4) start nvmf_tgt with subsystem configuration defined in nvmf.conf.in
> that uses lvols
> 
>         It sounds like what I need to do instead is:
>         1) start nvmf_tgt
>         2) create bdevs
>         3) create lvol store
>         4) create lvols
> 
>         My question then is, how do I create my subsystems?  I'm not seeing a
> command line way to create subsystems in the documentation, just
> nvmf.conf.in.  Is there a way to load a new nvmf.conf.in without stopping the
> target?  Or how do I go about creating the subsystems if the target is already
> running?
> 
>         Thanks,
>         Joe


* Re: [SPDK] Working With bdevs/lvols
@ 2018-05-17 20:05 Harris, James R
  0 siblings, 0 replies; 17+ messages in thread
From: Harris, James R @ 2018-05-17 20:05 UTC (permalink / raw)
  To: spdk


Thanks Curt!  These are great examples.  A few comments inline:


On 5/17/18, 12:59 PM, "SPDK on behalf of Bruns, Curt E" <spdk-bounces(a)lists.01.org on behalf of curt.e.bruns(a)intel.com> wrote:

    Hi Joe,
    
  I played with this just recently.  Make sure when you start the nvmf_target, you also enable RPC listener (the -r parameter):

[Jim]  You shouldn’t have to specify -r, unless you want to override the default RPC domain socket location (/var/tmp/spdk.sock).

<snip>

    % python rpc.py -s 100.100.1.82 -p 5260 construct_lvol_store AIO0 lvol_store1
    723221b4-b884-404d-a47e-946c7ffb6d7f  # Make note of this UUID
    
<snip>

    # Create a 100MB Thin provisioned volume on that LVOL_Store:
    % python rpc.py -s 100.100.1.82 -p 5260 construct_lvol_bdev -u 723221b4-b884-404d-a47e-946c7ffb6d7f -t vol1 100
  c008972d-68e8-4795-9896-dd09a8d7062a  # Note this UUID
    
    # Create a new Subsystem
    % python rpc.py -s 100.100.1.82 -p 5260 construct_nvmf_subsystem  -a -s "SPDKTEST" nqn.2016-06.io.spdk:cnodex "trtype:RDMA traddr:100.100.1.82 trsvcid:4421" ""
    
    # Add vol1 to that Subsystem (using that UUID from thin-provisioned volume)
    % python rpc.py -s 100.100.1.82 -p 5260 nvmf_subsystem_add_ns  nqn.2016-06.io.spdk:cnodex c008972d-68e8-4795-9896-dd09a8d7062a

You can use the alias (lvol_store1/vol1) instead of the full UUID as of SPDK v18.01.
You can also combine these two steps by passing -n “lvol_store1/vol1” to the construct_nvmf_subsystem RPC.

<snip>

    On 5/17/18, 12:46 PM, "SPDK on behalf of Gruher, Joseph R" <spdk-bounces(a)lists.01.org on behalf of joseph.r.gruher(a)intel.com> wrote:
    
        >>>> rpc.py sends RPCs to a running SPDK target application.  The target application will create this /var/tmp/spdk.sock Unix domain socket.  Is it possible that the nvmf_tgt process hasn’t been started yet when sending the construct_nvme_bdev RPC?
        
        Yes, I haven't started nvmf_tgt.  I guess the process I was looking for was:
        1) create bdevs
        2) create lvol store
        3) create lvols
        4) start nvmf_tgt with subsystem configuration defined in nvmf.conf.in that uses lvols
        
        It sounds like what I need to do instead is:
        1) start nvmf_tgt
        2) create bdevs
        3) create lvol store
        4) create lvols
        
        My question then is, how do I create my subsystems?  I'm not seeing a command line way to create subsystems in the documentation, just nvmf.conf.in.  Is there a way to load a new nvmf.conf.in without stopping the target?  Or how do I go about creating the subsystems if the target is already running?
        
        Thanks,
        Joe



* Re: [SPDK] Working With bdevs/lvols
@ 2018-05-17 19:59 Bruns, Curt E
  0 siblings, 0 replies; 17+ messages in thread
From: Bruns, Curt E @ 2018-05-17 19:59 UTC (permalink / raw)
  To: spdk


Hi Joe,

I played with this just recently.  Make sure that when you start nvmf_tgt, you also enable the RPC listener (the -r parameter):
# I export HDD (AIO targets) as NVMe-oF targets in my config file 
[ NVMe-oF Target machine (100.100.1.82)]
/root/source/spdk/app/nvmf_tgt/nvmf_tgt -r 0.0.0.0:5260 -m 0x1 -c /root/source/spdk/etc/spdk/ceb_nvmf.conf.in &

Now you can create an lvol store, then lvols, then a new subsystem; then add the namespace to the subsystem and connect to it:
[NVMe-oF Initiator machine]
% python rpc.py -s 100.100.1.82 -p 5260 get_bdevs
[
  {
    "num_blocks": 7814037168,
    "name": "AIO0",
    "claimed": false,
    "driver_specific": {
      "aio": {
        "filename": "/dev/sda"
      }
    },... <snip>

% python rpc.py -s 100.100.1.82 -p 5260 construct_lvol_store AIO0 lvol_store1
723221b4-b884-404d-a47e-946c7ffb6d7f  # Make note of this UUID

# Or you can find the UUID/name here:
% python rpc.py -s 100.100.1.82 -p 5260 get_lvol_stores
[
  {
    "uuid": "723221b4-b884-404d-a47e-946c7ffb6d7f",
    "base_bdev": "AIO0",
    "free_clusters": 952929,
    "cluster_size": 4194304,
    "total_data_clusters": 952929,
    "block_size": 4096,
    "name": "lvol_store1"
  }
]
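(As a side note, free_clusters times cluster_size gives the usable capacity of the store; from the numbers above:)

```python
# Fields taken from the get_lvol_stores output above.
free_clusters = 952929
cluster_size = 4194304            # bytes per cluster (4 MiB)

free_bytes = free_clusters * cluster_size
print(f"{free_bytes / 2**30:.2f} GiB free")  # -> 3722.38 GiB free
```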

# Create a 100MB Thin provisioned volume on that LVOL_Store:
% python rpc.py -s 100.100.1.82 -p 5260 construct_lvol_bdev -u 723221b4-b884-404d-a47e-946c7ffb6d7f -t vol1 100
c008972d-68e8-4795-9896-dd09a8d7062a  # Note this UUID

# Create a new Subsystem
% python rpc.py -s 100.100.1.82 -p 5260 construct_nvmf_subsystem  -a -s "SPDKTEST" nqn.2016-06.io.spdk:cnodex "trtype:RDMA traddr:100.100.1.82 trsvcid:4421" ""

# Add vol1 to that Subsystem (using that UUID from thin-provisioned volume)
% python rpc.py -s 100.100.1.82 -p 5260 nvmf_subsystem_add_ns  nqn.2016-06.io.spdk:cnodex c008972d-68e8-4795-9896-dd09a8d7062a

# See the subsystem:
% nvme discover -t rdma -a 100.100.1.82 -s 4420

# Connect to the subsystem
% nvme connect -t rdma -n "nqn.2016-06.io.spdk:cnodex" -a 100.100.1.82 -s 4421

# lsblk to see it
% lsblk
nvme8n1         259:73   0   100M  0 disk


Hope this helps!

- Curt

On 5/17/18, 12:46 PM, "SPDK on behalf of Gruher, Joseph R" <spdk-bounces(a)lists.01.org on behalf of joseph.r.gruher(a)intel.com> wrote:

    >>>> rpc.py sends RPCs to a running SPDK target application.  The target application will create this /var/tmp/spdk.sock Unix domain socket.  Is it possible that the nvmf_tgt process hasn’t been started yet when sending the construct_nvme_bdev RPC?
    
    Yes, I haven't started nvmf_tgt.  I guess the process I was looking for was:
    1) create bdevs
    2) create lvol store
    3) create lvols
    4) start nvmf_tgt with subsystem configuration defined in nvmf.conf.in that uses lvols
    
    It sounds like what I need to do instead is:
    1) start nvmf_tgt
    2) create bdevs
    3) create lvol store
    4) create lvols
    
    My question then is, how do I create my subsystems?  I'm not seeing a command line way to create subsystems in the documentation, just nvmf.conf.in.  Is there a way to load a new nvmf.conf.in without stopping the target?  Or how do I go about creating the subsystems if the target is already running?
    
    Thanks,
    Joe



* Re: [SPDK] Working With bdevs/lvols
@ 2018-05-17 19:59 Andrey Kuzmin
  0 siblings, 0 replies; 17+ messages in thread
From: Andrey Kuzmin @ 2018-05-17 19:59 UTC (permalink / raw)
  To: spdk


On Thu, May 17, 2018, 22:53 Gruher, Joseph R <joseph.r.gruher(a)intel.com>
wrote:

> >>>>>>>> It sounds like what I need to do instead is:
> >>>>>>>> 1) start nvmf_tgt
> >>>>>>>> 2) create bdevs
> >>>>>>>> 3) create lvol store
> >>>>>>>> 4) create lvols
> >>>>>>>>
> >>>>>>>> My question then is, how do I create my subsystems?
>
> >>>> You don't have to create subsystems, spdk does it for you for each
> config section. Your only job is to either populate nvmf.conf with (2-4)
> above before you start the target, or do it with RPCs if the target already
> runs.
>
> Yes, but how do you create lvol store and lvols as part of nvmf.conf (how
> do you implement steps 3 and 4 above in nvmf.conf)?


Lvols aren't regular bdevs in the sense that they have on-disk metadata
that defines the actual configuration, so the above step isn't necessarily
possible. (1-4) above with RPCs should work as RPCs will do the
heavy-lifting.

Regards,
A.

> If possible to do so, I don't see this included in the online
> documentation, or in the included example nvmf.conf.  It would certainly be
> convenient to do it that way if supported.
>

-- 

Regards,
Andrey


* Re: [SPDK] Working With bdevs/lvols
@ 2018-05-17 19:56 Harris, James R
  0 siblings, 0 replies; 17+ messages in thread
From: Harris, James R @ 2018-05-17 19:56 UTC (permalink / raw)
  To: spdk




On 5/17/18, 12:42 PM, "SPDK on behalf of Gruher, Joseph R" <spdk-bounces(a)lists.01.org on behalf of joseph.r.gruher(a)intel.com> wrote:

    >>>> I'm not sure that's the case actually. Lvols may require a blob bdev underneath, so you should consult lvol docs first if it can run atop a raw nvme bdev.
    
  To be more specific, the docs indicate I can make an lvol store on top of my bdev, and then make lvols from that lvol store.

Exactly.  After you have the nvme bdev, you can do:

sudo scripts/rpc.py construct_lvol_store Nvme0n1 lvs0
sudo scripts/rpc.py construct_lvol_bdev -l lvs0 lvol0 128
<repeat for other lvols>

Now you can list the bdevs:

sudo scripts/rpc.py get_bdevs

This is a wall of JSON – but the key parts are “name” and “aliases”.  “name” is a UUID which makes sure the lvol is globally unique, and then “aliases” shows a more convenient way to refer to that bdev.  For lvols, that’s always <lvs name>/<lvol name> - or in this case, lvs0/lvol0.

Note that if you stop the target and restart it, the lvols will automatically appear again – the lvol module will read the metadata off disk and recreate everything.

Next you can create an nvmf_subsystem (construct_nvmf_subsystem) and add bdevs as namespaces to that subsystem through the -n parameter.  Alternatively you can specify the namespaces after the fact using nvmf_subsystem_add_ns.  You just specify the name of the bdev(s) – either “lvs0/lvol0” or the UUID (probably the former since it’s shorter to type and harder to screw up).
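(To avoid eyeballing that wall of JSON, the alias-to-UUID mapping can be pulled out with a few lines of Python. The payload below is a trimmed, made-up example shaped the way Jim describes – “name” is the UUID, “aliases” the friendlier form; the second UUID is purely illustrative:)

```python
import json

# Hypothetical, trimmed get_bdevs-style output.
payload = json.loads("""
[
  {"name": "c008972d-68e8-4795-9896-dd09a8d7062a", "aliases": ["lvs0/lvol0"]},
  {"name": "11111111-2222-4333-8444-555555555555", "aliases": ["lvs0/lvol1"]}
]
""")

# alias -> UUID lookup
by_alias = {alias: bdev["name"]
            for bdev in payload
            for alias in bdev.get("aliases", [])}
print(by_alias["lvs0/lvol0"])  # -> c008972d-68e8-4795-9896-dd09a8d7062a
```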

Hope this helps!

-Jim




* Re: [SPDK] Working With bdevs/lvols
@ 2018-05-17 19:53 Gruher, Joseph R
  0 siblings, 0 replies; 17+ messages in thread
From: Gruher, Joseph R @ 2018-05-17 19:53 UTC (permalink / raw)
  To: spdk


>>>>>>>> It sounds like what I need to do instead is:
>>>>>>>> 1) start nvmf_tgt
>>>>>>>> 2) create bdevs
>>>>>>>> 3) create lvol store
>>>>>>>> 4) create lvols
>>>>>>>>
>>>>>>>> My question then is, how do I create my subsystems? 

>>>> You don't have to create subsystems, spdk does it for you for each config section. Your only job is to either populate nvmf.conf with (2-4) above before you start the target, or do it with RPCs if the target already runs.

Yes, but how do you create lvol store and lvols as part of nvmf.conf (how do you implement steps 3 and 4 above in nvmf.conf)?  If possible to do so, I don't see this included in the online documentation, or in the included example nvmf.conf.  It would certainly be convenient to do it that way if supported.


* Re: [SPDK] Working With bdevs/lvols
@ 2018-05-17 19:50 Andrey Kuzmin
  0 siblings, 0 replies; 17+ messages in thread
From: Andrey Kuzmin @ 2018-05-17 19:50 UTC (permalink / raw)
  To: spdk


On Thu, May 17, 2018, 22:46 Gruher, Joseph R <joseph.r.gruher(a)intel.com>
wrote:

> >>>> rpc.py sends RPCs to a running SPDK target application.  The target
> application will create this /var/tmp/spdk.sock Unix domain socket.  Is it
> possible that the nvmf_tgt process hasn’t been started yet when sending the
> construct_nvme_bdev RPC?
>
> Yes, I haven't started nvmf_tgt.  I guess the process I was looking for
> was:
> 1) create bdevs
> 2) create lvol store
> 3) create lvols
> 4) start nvmf_tgt with subsystem configuration defined in nvmf.conf.in
> that uses lvols
>
> It sounds like what I need to do instead is:
> 1) start nvmf_tgt
> 2) create bdevs
> 3) create lvol store
> 4) create lvols
>
> My question then is, how do I create my subsystems?


You don't have to create subsystems, spdk does it for you for each config
section. Your only job is to either populate nvmf.conf with (2-4) above
before you start the target, or do it with RPCs if the target already runs.

Regards,
A.

I'm not seeing a command line way to create subsystems in the
> documentation, just nvmf.conf.in.  Is there a way to load a new
> nvmf.conf.in without stopping the target?  Or how do I go about creating
> the subsystems if the target is already running?
>
> Thanks,
> Joe
-- 

Regards,
Andrey


* Re: [SPDK] Working With bdevs/lvols
@ 2018-05-17 19:46 Gruher, Joseph R
  0 siblings, 0 replies; 17+ messages in thread
From: Gruher, Joseph R @ 2018-05-17 19:46 UTC (permalink / raw)
  To: spdk


>>>> rpc.py sends RPCs to a running SPDK target application.  The target application will create this /var/tmp/spdk.sock Unix domain socket.  Is it possible that the nvmf_tgt process hasn’t been started yet when sending the construct_nvme_bdev RPC?

Yes, I haven't started nvmf_tgt.  I guess the process I was looking for was:
1) create bdevs
2) create lvol store
3) create lvols
4) start nvmf_tgt with subsystem configuration defined in nvmf.conf.in that uses lvols

It sounds like what I need to do instead is:
1) start nvmf_tgt
2) create bdevs
3) create lvol store
4) create lvols

My question then is, how do I create my subsystems?  I'm not seeing a command line way to create subsystems in the documentation, just nvmf.conf.in.  Is there a way to load a new nvmf.conf.in without stopping the target?  Or how do I go about creating the subsystems if the target is already running?

Thanks,
Joe


* Re: [SPDK] Working With bdevs/lvols
@ 2018-05-17 19:42 Gruher, Joseph R
  0 siblings, 0 replies; 17+ messages in thread
From: Gruher, Joseph R @ 2018-05-17 19:42 UTC (permalink / raw)
  To: spdk


>>>> I'm not sure that's the case actually. Lvols may require a blob bdev underneath, so you should consult lvol docs first if it can run atop a raw nvme bdev.

To be more specific, the docs indicate I can make an lvol store on top of my bdev, and then make lvols from that lvol store.


* Re: [SPDK] Working With bdevs/lvols
@ 2018-05-17 19:38 Harris, James R
  0 siblings, 0 replies; 17+ messages in thread
From: Harris, James R @ 2018-05-17 19:38 UTC (permalink / raw)
  To: spdk




On 5/17/18, 12:36 PM, "SPDK on behalf of Gruher, Joseph R" <spdk-bounces(a)lists.01.org on behalf of joseph.r.gruher(a)intel.com> wrote:

    >>>> I assume that should be the case if nvmf target is able to expose virtual bdevs like lvol
    
    The documentation says any type of bdev can be exposed through the NVMeoF target.  I'm assuming lvols are themselves considered a type of bdev (one that resides on top of another bdev) and thus can be exposed through the NVMeoF target.  If that's wrong hopefully someone will set me straight.
    
    >>>> lvol configuration example should be available, or you can look at the code under /lib/lvol to figure out how to configure an lvol atop a bdev
    
    Yes, I am following the example from the documentation to create bdevs and lvols with rpc.py.  However I'm getting an error when first trying to create a bdev on the command line.  Hoping someone can help me understand what would cause this error and how to resolve.
    
    rsa(a)tppjoe08:~/spdk/scripts$ sudo ./rpc.py construct_nvme_bdev -b spdkdev1 -t PCIe -a 0000:3d:00.0
    Error while connecting to /var/tmp/spdk.sock
  Error details: [Errno 2] No such file or directory

Hi Joe,

rpc.py sends RPCs to a running SPDK target application.  The target application will create this /var/tmp/spdk.sock Unix domain socket.  Is it possible that the nvmf_tgt process hasn’t been started yet when sending the construct_nvme_bdev RPC?
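A cheap pre-flight check is to verify that the socket actually exists before sending RPCs. A sketch, assuming the default /var/tmp/spdk.sock location mentioned above:

```python
import os
import stat

def rpc_socket_ready(path="/var/tmp/spdk.sock"):
    """True only if `path` exists and is a Unix domain socket."""
    try:
        return stat.S_ISSOCK(os.stat(path).st_mode)
    except OSError:
        return False

print(rpc_socket_ready("/no/such/spdk.sock"))  # -> False (no target running)
```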

-Jim




* Re: [SPDK] Working With bdevs/lvols
@ 2018-05-17 19:36 Gruher, Joseph R
  0 siblings, 0 replies; 17+ messages in thread
From: Gruher, Joseph R @ 2018-05-17 19:36 UTC (permalink / raw)
  To: spdk


>>>> I assume that should be the case if nvmf target is able to expose virtual bdevs like lvol

The documentation says any type of bdev can be exposed through the NVMeoF target.  I'm assuming lvols are themselves considered a type of bdev (one that resides on top of another bdev) and thus can be exposed through the NVMeoF target.  If that's wrong hopefully someone will set me straight.

>>>> lvol configuration example should be available, or you can look at the code under /lib/lvol to figure out how to configure an lvol atop a bdev

Yes, I am following the example from the documentation to create bdevs and lvols with rpc.py.  However I'm getting an error when first trying to create a bdev on the command line.  Hoping someone can help me understand what would cause this error and how to resolve.

rsa(a)tppjoe08:~/spdk/scripts$ sudo ./rpc.py construct_nvme_bdev -b spdkdev1 -t PCIe -a 0000:3d:00.0
Error while connecting to /var/tmp/spdk.sock
Error details: [Errno 2] No such file or directory



* Re: [SPDK] Working With bdevs/lvols
@ 2018-05-17 19:34 Andrey Kuzmin
  0 siblings, 0 replies; 17+ messages in thread
From: Andrey Kuzmin @ 2018-05-17 19:34 UTC (permalink / raw)
  To: spdk

[-- Attachment #1: Type: text/plain, Size: 1734 bytes --]

On Thu, May 17, 2018, 22:29 Andrey Kuzmin <andrey.v.kuzmin(a)gmail.com> wrote:

>
>
> On Thu, May 17, 2018, 22:24 Gruher, Joseph R <joseph.r.gruher(a)intel.com>
> wrote:
>
>>
>> >>>> Nvme bdevs are created and instantiated automatically for each nvme
>> device listed under [Nvme] section of your config file. If you add your
>> PCIe device into nvme section of the nvmf.conf, nvmf target will create
>> nvme bdev and expose it over nvmf.
>>
>> Yes, that's what I've done up to this point.  Now I want to carve up the
>> drive into multiple separate volumes instead of exposing the whole drive as
>> one volume.  I assume I need to use lvols to divide the capacity of the
>> drive
>
>
I'm not sure that's the case, actually. Lvols may require a blob bdev
underneath, so you should check the lvol docs first to see whether they
can run atop a raw nvme bdev.

Regards,
A.


>> and then I can expose the lvols through the NVMeoF target.  My question is
>> how to create lvols from my NVMe drive to use with the SPDK NVMeoF target.
>> Can it be done directly in nvmf.conf.in, similar to how the nvme bdevs
>> are created?
>>
>
> I haven't noticed any specific example in the codebase (that doesn't mean
> there's none, just that I wasn't interested in lvols), but I assume that
> should be the case if nvmf target is able to expose virtual bdevs like
> lvol. Lvol configuration example should be available, or you can look at
> the code under /lib/lvol to figure out how to configure an lvol atop a bdev.
>
> Regards,
> A.
>
>>
>>
>> _______________________________________________
>> SPDK mailing list
>> SPDK(a)lists.01.org
>> https://lists.01.org/mailman/listinfo/spdk
>>
> --
>
> Regards,
> Andrey
>



* Re: [SPDK] Working With bdevs/lvols
@ 2018-05-17 19:29 Andrey Kuzmin
  0 siblings, 0 replies; 17+ messages in thread
From: Andrey Kuzmin @ 2018-05-17 19:29 UTC (permalink / raw)
  To: spdk

[-- Attachment #1: Type: text/plain, Size: 1410 bytes --]

On Thu, May 17, 2018, 22:24 Gruher, Joseph R <joseph.r.gruher(a)intel.com>
wrote:

>
> >>>> Nvme bdevs are created and instantiated automatically for each nvme
> device listed under [Nvme] section of your config file. If you add your
> PCIe device into nvme section of the nvmf.conf, nvmf target will create
> nvme bdev and expose it over nvmf.
>
> Yes, that's what I've done up to this point.  Now I want to carve up the
> drive into multiple separate volumes instead of exposing the whole drive as
> one volume.  I assume I need to use lvols to divide the capacity of the
> drive and then I can expose the lvols through the NVMeoF target.  My
> question is how to create lvols from my NVMe drive to use with the SPDK
> NVMeoF target.  Can it be done directly in nvmf.conf.in, similar to how
> the nvme bdevs are created?
>

I haven't noticed any specific example in the codebase (that doesn't mean
there's none, just that I haven't looked into lvols), but I assume that
should be the case if the nvmf target is able to expose virtual bdevs like
lvols. An lvol configuration example should be available, or you can look at
the code under /lib/lvol to figure out how to configure an lvol atop a bdev.

Regards,
A.

>
>
> _______________________________________________
> SPDK mailing list
> SPDK(a)lists.01.org
> https://lists.01.org/mailman/listinfo/spdk
>
-- 

Regards,
Andrey



* Re: [SPDK] Working With bdevs/lvols
@ 2018-05-17 19:24 Gruher, Joseph R
  0 siblings, 0 replies; 17+ messages in thread
From: Gruher, Joseph R @ 2018-05-17 19:24 UTC (permalink / raw)
  To: spdk

[-- Attachment #1: Type: text/plain, Size: 724 bytes --]


>>>> Nvme bdevs are created and instantiated automatically for each nvme device listed under [Nvme] section of your config file. If you add your PCIe device into nvme section of the nvmf.conf, nvmf target will create nvme bdev and expose it over nvmf.

Yes, that's what I've done up to this point.  Now I want to carve up the drive into multiple separate volumes instead of exposing the whole drive as one volume.  I assume I need to use lvols to divide the capacity of the drive and then I can expose the lvols through the NVMeoF target.  My question is how to create lvols from my NVMe drive to use with the SPDK NVMeoF target.  Can it be done directly in nvmf.conf.in, similar to how the nvme bdevs are created?
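The carve-up Joe describes (N equal-size lvols per lvol store) is easy to script. A hypothetical sketch that just generates the rpc.py command lines (the store names, lvol naming scheme, and 20 GiB size in MiB are assumptions mirroring the steps posted later in the thread, not SPDK defaults):

```python
# Hypothetical helper: emit one construct_lvol_bdev invocation per
# lvol, numbering lvols sequentially across all lvol stores.
def lvol_commands(stores, lvols_per_store, size_mib):
    cmds = []
    n = 0
    for store in stores:
        for _ in range(lvols_per_store):
            n += 1
            cmds.append(
                f"./rpc.py construct_lvol_bdev -l {store} lvol{n:02d} {size_mib}"
            )
    return cmds

# 4 stores x 16 lvols x 20 GiB (20480 MiB) = 64 commands
cmds = lvol_commands([f"lvolstore{i}" for i in range(1, 5)], 16, 20480)
print(len(cmds))  # 64
print(cmds[0])    # ./rpc.py construct_lvol_bdev -l lvolstore1 lvol01 20480
```

The generated lines can then be run against a live target, or the loop can invoke rpc.py directly via subprocess.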




* Re: [SPDK] Working With bdevs/lvols
@ 2018-05-17 19:19 Andrey Kuzmin
  0 siblings, 0 replies; 17+ messages in thread
From: Andrey Kuzmin @ 2018-05-17 19:19 UTC (permalink / raw)
  To: spdk

[-- Attachment #1: Type: text/plain, Size: 1643 bytes --]

On Thu, May 17, 2018, 22:13 Gruher, Joseph R <joseph.r.gruher(a)intel.com>
wrote:

> Hi everyone-
>
> New SPDK user here.  I've successfully used SPDK to run an NVMeoF target
> and expose entire NVMe disks as individual subsystems.  So far so good.
>
> Now I would like to be able to carve my NVMe disks up into small volumes
> and expose those volumes through the NVMeoF target.  It looks like I'll
> need to make my NVMe devices into bdevs, make lvols on those bdevs, and
> then I can expose those lvols through the NVMeoF target?  Is that the best
> approach?
>
> If that's the right tactic, can I define my lvols as part of nvmf.conf.in?
> I don't see this included in the documentation or sample nvmf.conf.in, so
> maybe not.
>
> Assuming not, I'm trying to do it on the command line, but I have a
> failure trying to make a bdev from my NVMe device.  What would cause this?
> Do I have to start SPDK somewhere before I can run this action?
>

NVMe bdevs are created and instantiated automatically for each NVMe device
listed under the [Nvme] section of your config file. If you add your PCIe
device to the [Nvme] section of nvmf.conf, the nvmf target will create an
NVMe bdev and expose it over NVMeoF.
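For reference, a minimal sketch of such a section in the legacy config format (the PCI address and bdev name here are illustrative; check the nvmf.conf.in shipped with your SPDK version for the exact keys):

```
[Nvme]
  TransportID "trtype:PCIe traddr:0000:3d:00.0" Nvme0
```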

HTH,
Andrey

>
> rsa(a)tppjoe08:~/spdk/scripts$ sudo ./rpc.py construct_nvme_bdev -b
> spdkdev1 -t PCIe -a 0000:3d:00.0
> Error while connecting to /var/tmp/spdk.sock
> Error details: [Errno 2] No such file or directory
>
> Thanks,
> Joe
> _______________________________________________
> SPDK mailing list
> SPDK(a)lists.01.org
> https://lists.01.org/mailman/listinfo/spdk
>
-- 

Regards,
Andrey



* [SPDK] Working With bdevs/lvols
@ 2018-05-17 19:13 Gruher, Joseph R
  0 siblings, 0 replies; 17+ messages in thread
From: Gruher, Joseph R @ 2018-05-17 19:13 UTC (permalink / raw)
  To: spdk

[-- Attachment #1: Type: text/plain, Size: 1060 bytes --]

Hi everyone-

New SPDK user here.  I've successfully used SPDK to run an NVMeoF target and expose entire NVMe disks as individual subsystems.  So far so good.

Now I would like to be able to carve my NVMe disks up into small volumes and expose those volumes through the NVMeoF target.  It looks like I'll need to make my NVMe devices into bdevs, make lvols on those bdevs, and then I can expose those lvols through the NVMeoF target?  Is that the best approach?

If that's the right tactic, can I define my lvols as part of nvmf.conf.in?  I don't see this included in the documentation or sample nvmf.conf.in, so maybe not.

Assuming not, I'm trying to do it on the command line, but I have a failure trying to make a bdev from my NVMe device.  What would cause this?  Do I have to start SPDK somewhere before I can run this action?

rsa(a)tppjoe08:~/spdk/scripts$ sudo ./rpc.py construct_nvme_bdev -b spdkdev1 -t PCIe -a 0000:3d:00.0
Error while connecting to /var/tmp/spdk.sock
Error details: [Errno 2] No such file or directory

Thanks,
Joe


end of thread, other threads:[~2018-05-18 16:28 UTC | newest]

Thread overview: 17+ messages
-- links below jump to the message on this page --
2018-05-18 16:28 [SPDK] Working With bdevs/lvols Luse, Paul E
  -- strict thread matches above, loose matches on Subject: below --
2018-05-18 15:49 Gruher, Joseph R
2018-05-17 20:05 Harris, James R
2018-05-17 19:59 Bruns, Curt E
2018-05-17 19:59 Andrey Kuzmin
2018-05-17 19:56 Harris, James R
2018-05-17 19:53 Gruher, Joseph R
2018-05-17 19:50 Andrey Kuzmin
2018-05-17 19:46 Gruher, Joseph R
2018-05-17 19:42 Gruher, Joseph R
2018-05-17 19:38 Harris, James R
2018-05-17 19:36 Gruher, Joseph R
2018-05-17 19:34 Andrey Kuzmin
2018-05-17 19:29 Andrey Kuzmin
2018-05-17 19:24 Gruher, Joseph R
2018-05-17 19:19 Andrey Kuzmin
2018-05-17 19:13 Gruher, Joseph R
