FYI switching the order worked for me...

-----Original Message-----
From: SPDK [mailto:spdk-bounces(a)lists.01.org] On Behalf Of Harris, James R
Sent: Tuesday, August 20, 2019 12:30 PM
To: Storage Performance Development Kit
Cc: shibx(a)lenovo.com
Subject: Re: [SPDK] Configuring status raid bdevs with SPDK

After reboot, could you try construct_raid_bdev *before* construct_nvme_bdev? Doing construct_raid_bdev first will cause the RAID module to claim NVMe3n1 as soon as it appears, without giving other modules (such as logical volumes) a chance to claim it. Then, after the RAID module registers the StoragePool1 bdev, the logical volume module will see the blobstore/lvolstore metadata on StoragePool1 and claim it.

Building on what Paul said, a single-disk RAID0 is the only case where switching the order would be needed. For a multi-disk RAID0, lvol would see the metadata on the first member disk, but would then see that the size of the member disk didn't match the size of the blobstore and wouldn't claim it. That would give you a chance to run construct_raid_bdev. Once the RAID volume was registered, lvol would confirm that the size reported in the lvol metadata matched the size of the RAID volume itself.

-Jim

On 8/20/19, 12:19 PM, "SPDK on behalf of Luse, Paul E" wrote:

Hi Neil,

I started looking into this and can easily reproduce it. Question, though: why are you using a single-disk RAID0? Or do you mean to create a RAID0 out of all of the NVMe devices? In that case, when you create the RAID you need to pass in a list of all of the devices. The thinking is that this would work with anything other than a single-disk RAID0 (which has no real value in anything except test).

Thx
Paul

-----Original Message-----
From: SPDK [mailto:spdk-bounces(a)lists.01.org] On Behalf Of Luse, Paul E
Sent: Tuesday, August 20, 2019 9:31 AM
To: Storage Performance Development Kit
Cc: shibx(a)lenovo.com
Subject: Re: [SPDK] Configuring status raid bdevs with SPDK

Hi Neil,

Would you mind entering a GitHub issue for this at https://github.com/spdk/spdk/issues ? If someone else doesn't jump on this, I can look into it later today for you.

Thx
Paul

-----Original Message-----
From: SPDK [mailto:spdk-bounces(a)lists.01.org] On Behalf Of Neil Shi
Sent: Tuesday, August 20, 2019 1:01 AM
To: spdk(a)lists.01.org
Cc: shibx(a)lenovo.com
Subject: [SPDK] Configuring status raid bdevs with SPDK

Dear Experts,

When we use SPDK to create an SSD RAID and build a logical volume on top of the RAID, the raid bdev ends up in "configuring" status. The RPC "destroy_raid_bdev" deletes the raid bdev without reporting an error, but "rpc.py get_bdevs all" shows the raid bdev is still there, and the SSD drives involved can't be used anymore. See attached screenshots. The issue happens after the RAID and volume have been created and the system is rebooted, when we try to recover the configuration. The SPDK version we used is v19.04.1; the commit id is also attached. The steps below reproduce this issue 100% of the time:
./setup.sh
./app/nvmf_tgt/nvmf_tgt -i 1 &
./rpc.py nvmf_create_transport -t RDMA -u 8192 -p 4 -c 0
./rpc.py construct_nvme_bdev -b NVMe1 -t PCIe -a 0000:08:00.0
./rpc.py construct_nvme_bdev -b NVMe2 -t PCIe -a 0000:0d:00.0
./rpc.py construct_nvme_bdev -b NVMe3 -t PCIe -a 0000:12:00.0
./rpc.py construct_nvme_bdev -b NVMe4 -t PCIe -a 0000:17:00.0
#### Create RAID with SSD drive
./rpc.py construct_raid_bdev -n StoragePool1 -r 0 -z 64 -b "NVMe3n1"
### Create volume store
./rpc.py construct_lvol_store StoragePool1 StoragePool1store
### Create volume on the volume store
./scripts/rpc.py construct_lvol_bdev -l StoragePool1store "1" 10240

Then reboot the system, and after the system has rebooted, run the commands below:

./setup.sh
./app/nvmf_tgt/nvmf_tgt -i 1 &
./rpc.py nvmf_create_transport -t RDMA -u 8192 -p 4 -c 0
./rpc.py construct_nvme_bdev -b NVMe1 -t PCIe -a 0000:08:00.0
./rpc.py construct_nvme_bdev -b NVMe2 -t PCIe -a 0000:0d:00.0
./rpc.py construct_nvme_bdev -b NVMe3 -t PCIe -a 0000:12:00.0
./rpc.py construct_nvme_bdev -b NVMe4 -t PCIe -a 0000:17:00.0
### Recover the RAID
./rpc.py construct_raid_bdev -n StoragePool1 -r 0 -z 64 -b "NVMe3n1"  --> This command causes the issue. It reports an error message, but a raid bdev in "Configuring" status is still created.

It seems that with the above steps we can't recover the previous settings after a system reboot, although the same steps work with the older SPDK v18.04. What should we do to recover the settings after the system reboots?

Thanks
Neil

_______________________________________________
SPDK mailing list
SPDK(a)lists.01.org
https://lists.01.org/mailman/listinfo/spdk
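For reference, here is a minimal sketch of the post-reboot recovery sequence with the ordering Jim suggests (construct_raid_bdev before construct_nvme_bdev), reusing the bdev names and PCIe addresses from Neil's steps above. This is an illustrative sequence based on the discussion, not a verified run, and behavior may differ across SPDK versions:

./setup.sh
./app/nvmf_tgt/nvmf_tgt -i 1 &
./rpc.py nvmf_create_transport -t RDMA -u 8192 -p 4 -c 0
### Re-create the RAID bdev first, so the RAID module claims NVMe3n1 as soon as it appears
./rpc.py construct_raid_bdev -n StoragePool1 -r 0 -z 64 -b "NVMe3n1"
### Then attach the NVMe controllers; once NVMe3n1 appears, StoragePool1 should come online
### and the lvol module should re-discover the lvolstore metadata on it
./rpc.py construct_nvme_bdev -b NVMe1 -t PCIe -a 0000:08:00.0
./rpc.py construct_nvme_bdev -b NVMe2 -t PCIe -a 0000:0d:00.0
./rpc.py construct_nvme_bdev -b NVMe3 -t PCIe -a 0000:12:00.0
./rpc.py construct_nvme_bdev -b NVMe4 -t PCIe -a 0000:17:00.0
### Check that StoragePool1 and the logical volume are back
./rpc.py get_bdevs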