* Re: [SPDK] Configuring status raid bdevs with SPDK
@ 2019-08-20 19:30 Harris, James R
  0 siblings, 0 replies; 6+ messages in thread
From: Harris, James R @ 2019-08-20 19:30 UTC (permalink / raw)
  To: spdk

After reboot, could you try construct_raid_bdev *before* construct_nvme_bdev?

Doing construct_raid_bdev first will cause the RAID module to claim NVMe3n1 once it appears without giving other modules (such as logical volumes) a chance to claim it.  Then after the RAID module registers the StoragePool1 bdev, the logical volume module will see the blobstore/lvolstore metadata on StoragePool1 and claim it.

Building on what Paul said, single disk RAID0 is the only case where switching the order would be needed.  For a multiple disk RAID0, lvol would see the metadata on the first member disk, but then would see the size of the member disk didn't match the size of the blobstore and wouldn't claim it.  That would give you a chance to construct_raid_bdev.  Once the RAID volume was registered, lvol would confirm the size reported in the lvol metadata matched the size of the RAID volume itself.
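For reference, here is a sketch of what the post-reboot recovery would look like with the order switched, reusing the same bdev names and PCIe addresses from the original steps (illustrative only):

./setup.sh

./app/nvmf_tgt/nvmf_tgt -i 1 &

./rpc.py nvmf_create_transport -t RDMA -u 8192 -p 4 -c 0

### Register the RAID first; it stays in configuring state until its member bdev appears
./rpc.py construct_raid_bdev -n StoragePool1 -r 0 -z 64 -b "NVMe3n1"

### Now attach the NVMe controllers; the RAID module claims NVMe3n1 as soon as it is registered,
### and the lvol module then finds its metadata on StoragePool1
./rpc.py construct_nvme_bdev -b NVMe1 -t PCIe -a 0000:08:00.0
./rpc.py construct_nvme_bdev -b NVMe2 -t PCIe -a 0000:0d:00.0
./rpc.py construct_nvme_bdev -b NVMe3 -t PCIe -a 0000:12:00.0
./rpc.py construct_nvme_bdev -b NVMe4 -t PCIe -a 0000:17:00.0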

-Jim


On 8/20/19, 12:19 PM, "SPDK on behalf of Luse, Paul E" <spdk-bounces(a)lists.01.org on behalf of paul.e.luse(a)intel.com> wrote:

    Hi Neil,
    
    I started looking into this and can easily reproduce it.  Question though: why are you using a single-disk RAID0? Or do you mean to create a RAID0 out of all of the NVMe devices? In that case, when you create the RAID, you need to pass in a list of all of the devices.  The thinking is that this would work with anything other than a single-disk RAID0 (which has no real value outside of testing).
    
    Thx
    Paul
    
    -----Original Message-----
    From: SPDK [mailto:spdk-bounces(a)lists.01.org] On Behalf Of Luse, Paul E
    Sent: Tuesday, August 20, 2019 9:31 AM
    To: Storage Performance Development Kit <spdk(a)lists.01.org>
    Cc: shibx(a)lenovo.com
    Subject: Re: [SPDK] Configuring status raid bdevs with SPDK
    
    Hi Neil,
    
    Would you mind entering a github issue for this at https://github.com/spdk/spdk/issues ?
    
    If someone else doesn't jump on this, I can look into it later today for you.
    
    Thx
    Paul
    
    -----Original Message-----
    From: SPDK [mailto:spdk-bounces(a)lists.01.org] On Behalf Of Neil Shi
    Sent: Tuesday, August 20, 2019 1:01 AM
    To: spdk(a)lists.01.org
    Cc: shibx(a)lenovo.com
    Subject: [SPDK] Configuring status raid bdevs with SPDK
    
    Dear Experts,
    
    
    When we use SPDK to create an SSD RAID and build a logical volume on top of it, the raid bdev ends up in "configuring" status. The RPC "destroy_raid_bdev" deletes the raid bdev without reporting an error, but "rpc.py get_bdevs all" still shows the raid bdev, and the SSD drives involved can't be used anymore. See attached screenshots.

    The issue happens after the RAID and volume are created and the system is rebooted, when we try to recover the configuration.
    
    The SPDK version we used is v19.04.1, commit id also attached.
    
    
    
    The steps below reproduce this issue 100% of the time.
    
    
    ./setup.sh
    
    ./app/nvmf_tgt/nvmf_tgt -i 1 &
    
    ./rpc.py nvmf_create_transport -t RDMA -u 8192 -p 4 -c 0
    
    ./rpc.py construct_nvme_bdev -b NVMe1 -t PCIe -a 0000:08:00.0
    
    ./rpc.py construct_nvme_bdev -b NVMe2 -t PCIe -a 0000:0d:00.0
    
    ./rpc.py construct_nvme_bdev -b NVMe3 -t PCIe -a 0000:12:00.0
    
    ./rpc.py construct_nvme_bdev -b NVMe4 -t PCIe -a 0000:17:00.0
    
    ####Create raid with SSD drive
    
    ./rpc.py construct_raid_bdev -n StoragePool1 -r 0 -z 64  -b "NVMe3n1"
    
    ###Create volume store
    
    ./rpc.py construct_lvol_store StoragePool1 StoragePool1store
    
    ###Create volume on the volume store
    
    ./scripts/rpc.py construct_lvol_bdev -l StoragePool1store "1" 10240
    
    
    Then reboot the system, and after it has rebooted, run the commands below:
    
    ./setup.sh
    
    ./app/nvmf_tgt/nvmf_tgt -i 1 &
    
    ./rpc.py nvmf_create_transport -t RDMA -u 8192 -p 4 -c 0
    
    ./rpc.py construct_nvme_bdev -b NVMe1 -t PCIe -a 0000:08:00.0
    
    ./rpc.py construct_nvme_bdev -b NVMe2 -t PCIe -a 0000:0d:00.0
    
    ./rpc.py construct_nvme_bdev -b NVMe3 -t PCIe -a 0000:12:00.0
    
    ./rpc.py construct_nvme_bdev -b NVMe4 -t PCIe -a 0000:17:00.0
    
    
    ### Recover the RAID
    
    ./rpc.py construct_raid_bdev -n StoragePool1 -r 0 -z 64  -b "NVMe3n1"   --> This command causes the issue. It reports an error message, but a raid bdev in "Configuring" status is still created.

    It seems that with the above steps we can't recover the previous settings after a system reboot, although the same steps work with the older SPDK v18.04. What should we do if we want to recover the settings after a system reboot?
    
    
    Thanks
    
    Neil
    
    _______________________________________________
    SPDK mailing list
    SPDK(a)lists.01.org
    https://lists.01.org/mailman/listinfo/spdk
    



* Re: [SPDK] Configuring status raid bdevs with SPDK
@ 2019-08-21  3:03 Yan, Liang Z
  0 siblings, 0 replies; 6+ messages in thread
From: Yan, Liang Z @ 2019-08-21  3:03 UTC (permalink / raw)
  To: spdk

Hi all,

I have submitted a GitHub issue, https://github.com/spdk/spdk/issues/921, to track this.

Thanks.

Liang Yan

-----Original Message-----
From: SPDK [mailto:spdk-bounces(a)lists.01.org] On Behalf Of Luse, Paul E
Sent: Wednesday, August 21, 2019 4:44 AM
To: Storage Performance Development Kit <spdk(a)lists.01.org>
Cc: shibx(a)lenovo.com
Subject: Re: [SPDK] Configuring status raid bdevs with SPDK

FYI switching the order worked for me...

-----Original Message-----
From: SPDK [mailto:spdk-bounces(a)lists.01.org] On Behalf Of Harris, James R
Sent: Tuesday, August 20, 2019 12:30 PM
To: Storage Performance Development Kit <spdk(a)lists.01.org>
Cc: shibx(a)lenovo.com
Subject: Re: [SPDK] Configuring status raid bdevs with SPDK

After reboot, could you try construct_raid_bdev *before* construct_nvme_bdev?

Doing construct_raid_bdev first will cause the RAID module to claim NVMe3n1 once it appears without giving other modules (such as logical volumes) a chance to claim it.  Then after the RAID module registers the StoragePool1 bdev, the logical volume module will see the blobstore/lvolstore metadata on StoragePool1 and claim it.

Building on what Paul said, single disk RAID0 is the only case where switching the order would be needed.  For a multiple disk RAID0, lvol would see the metadata on the first member disk, but then would see the size of the member disk didn't match the size of the blobstore and wouldn't claim it.  That would give you a chance to construct_raid_bdev.  Once the RAID volume was registered, lvol would confirm the size reported in the lvol metadata matched the size of the RAID volume itself.

-Jim


On 8/20/19, 12:19 PM, "SPDK on behalf of Luse, Paul E" <spdk-bounces(a)lists.01.org on behalf of paul.e.luse(a)intel.com> wrote:

    Hi Neil,
    
    I started looking into this and can easily reproduce it.  Question though: why are you using a single-disk RAID0? Or do you mean to create a RAID0 out of all of the NVMe devices? In that case, when you create the RAID, you need to pass in a list of all of the devices.  The thinking is that this would work with anything other than a single-disk RAID0 (which has no real value outside of testing).
    
    Thx
    Paul
    
    -----Original Message-----
    From: SPDK [mailto:spdk-bounces(a)lists.01.org] On Behalf Of Luse, Paul E
    Sent: Tuesday, August 20, 2019 9:31 AM
    To: Storage Performance Development Kit <spdk(a)lists.01.org>
    Cc: shibx(a)lenovo.com
    Subject: Re: [SPDK] Configuring status raid bdevs with SPDK
    
    Hi Neil,
    
    Would you mind entering a github issue for this at https://github.com/spdk/spdk/issues ?
    
    If someone else doesn't jump on this, I can look into it later today for you.
    
    Thx
    Paul
    
    -----Original Message-----
    From: SPDK [mailto:spdk-bounces(a)lists.01.org] On Behalf Of Neil Shi
    Sent: Tuesday, August 20, 2019 1:01 AM
    To: spdk(a)lists.01.org
    Cc: shibx(a)lenovo.com
    Subject: [SPDK] Configuring status raid bdevs with SPDK
    
    Dear Experts,
    
    
    When we use SPDK to create an SSD RAID and build a logical volume on top of it, the raid bdev ends up in "configuring" status. The RPC "destroy_raid_bdev" deletes the raid bdev without reporting an error, but "rpc.py get_bdevs all" still shows the raid bdev, and the SSD drives involved can't be used anymore. See attached screenshots.

    The issue happens after the RAID and volume are created and the system is rebooted, when we try to recover the configuration.
    
    The SPDK version we used is v19.04.1, commit id also attached.
    
    
    
    The steps below reproduce this issue 100% of the time.
    
    
    ./setup.sh
    
    ./app/nvmf_tgt/nvmf_tgt -i 1 &
    
    ./rpc.py nvmf_create_transport -t RDMA -u 8192 -p 4 -c 0
    
    ./rpc.py construct_nvme_bdev -b NVMe1 -t PCIe -a 0000:08:00.0
    
    ./rpc.py construct_nvme_bdev -b NVMe2 -t PCIe -a 0000:0d:00.0
    
    ./rpc.py construct_nvme_bdev -b NVMe3 -t PCIe -a 0000:12:00.0
    
    ./rpc.py construct_nvme_bdev -b NVMe4 -t PCIe -a 0000:17:00.0
    
    ####Create raid with SSD drive
    
    ./rpc.py construct_raid_bdev -n StoragePool1 -r 0 -z 64  -b "NVMe3n1"
    
    ###Create volume store
    
    ./rpc.py construct_lvol_store StoragePool1 StoragePool1store
    
    ###Create volume on the volume store
    
    ./scripts/rpc.py construct_lvol_bdev -l StoragePool1store "1" 10240
    
    
    Then reboot the system, and after it has rebooted, run the commands below:
    
    ./setup.sh
    
    ./app/nvmf_tgt/nvmf_tgt -i 1 &
    
    ./rpc.py nvmf_create_transport -t RDMA -u 8192 -p 4 -c 0
    
    ./rpc.py construct_nvme_bdev -b NVMe1 -t PCIe -a 0000:08:00.0
    
    ./rpc.py construct_nvme_bdev -b NVMe2 -t PCIe -a 0000:0d:00.0
    
    ./rpc.py construct_nvme_bdev -b NVMe3 -t PCIe -a 0000:12:00.0
    
    ./rpc.py construct_nvme_bdev -b NVMe4 -t PCIe -a 0000:17:00.0
    
    
    ### Recover the RAID
    
    ./rpc.py construct_raid_bdev -n StoragePool1 -r 0 -z 64  -b "NVMe3n1"   --> This command causes the issue. It reports an error message, but a raid bdev in "Configuring" status is still created.

    It seems that with the above steps we can't recover the previous settings after a system reboot, although the same steps work with the older SPDK v18.04. What should we do if we want to recover the settings after a system reboot?
    
    
    Thanks
    
    Neil
    
    _______________________________________________
    SPDK mailing list
    SPDK(a)lists.01.org
    https://lists.01.org/mailman/listinfo/spdk
    

_______________________________________________
SPDK mailing list
SPDK(a)lists.01.org
https://lists.01.org/mailman/listinfo/spdk


* Re: [SPDK] Configuring status raid bdevs with SPDK
@ 2019-08-20 20:43 Luse, Paul E
  0 siblings, 0 replies; 6+ messages in thread
From: Luse, Paul E @ 2019-08-20 20:43 UTC (permalink / raw)
  To: spdk

FYI switching the order worked for me...

-----Original Message-----
From: SPDK [mailto:spdk-bounces(a)lists.01.org] On Behalf Of Harris, James R
Sent: Tuesday, August 20, 2019 12:30 PM
To: Storage Performance Development Kit <spdk(a)lists.01.org>
Cc: shibx(a)lenovo.com
Subject: Re: [SPDK] Configuring status raid bdevs with SPDK

After reboot, could you try construct_raid_bdev *before* construct_nvme_bdev?

Doing construct_raid_bdev first will cause the RAID module to claim NVMe3n1 once it appears without giving other modules (such as logical volumes) a chance to claim it.  Then after the RAID module registers the StoragePool1 bdev, the logical volume module will see the blobstore/lvolstore metadata on StoragePool1 and claim it.

Building on what Paul said, single disk RAID0 is the only case where switching the order would be needed.  For a multiple disk RAID0, lvol would see the metadata on the first member disk, but then would see the size of the member disk didn't match the size of the blobstore and wouldn't claim it.  That would give you a chance to construct_raid_bdev.  Once the RAID volume was registered, lvol would confirm the size reported in the lvol metadata matched the size of the RAID volume itself.

-Jim


On 8/20/19, 12:19 PM, "SPDK on behalf of Luse, Paul E" <spdk-bounces(a)lists.01.org on behalf of paul.e.luse(a)intel.com> wrote:

    Hi Neil,
    
    I started looking into this and can easily reproduce it.  Question though: why are you using a single-disk RAID0? Or do you mean to create a RAID0 out of all of the NVMe devices? In that case, when you create the RAID, you need to pass in a list of all of the devices.  The thinking is that this would work with anything other than a single-disk RAID0 (which has no real value outside of testing).
    
    Thx
    Paul
    
    -----Original Message-----
    From: SPDK [mailto:spdk-bounces(a)lists.01.org] On Behalf Of Luse, Paul E
    Sent: Tuesday, August 20, 2019 9:31 AM
    To: Storage Performance Development Kit <spdk(a)lists.01.org>
    Cc: shibx(a)lenovo.com
    Subject: Re: [SPDK] Configuring status raid bdevs with SPDK
    
    Hi Neil,
    
    Would you mind entering a github issue for this at https://github.com/spdk/spdk/issues ?
    
    If someone else doesn't jump on this, I can look into it later today for you.
    
    Thx
    Paul
    
    -----Original Message-----
    From: SPDK [mailto:spdk-bounces(a)lists.01.org] On Behalf Of Neil Shi
    Sent: Tuesday, August 20, 2019 1:01 AM
    To: spdk(a)lists.01.org
    Cc: shibx(a)lenovo.com
    Subject: [SPDK] Configuring status raid bdevs with SPDK
    
    Dear Experts,
    
    
    When we use SPDK to create an SSD RAID and build a logical volume on top of it, the raid bdev ends up in "configuring" status. The RPC "destroy_raid_bdev" deletes the raid bdev without reporting an error, but "rpc.py get_bdevs all" still shows the raid bdev, and the SSD drives involved can't be used anymore. See attached screenshots.

    The issue happens after the RAID and volume are created and the system is rebooted, when we try to recover the configuration.
    
    The SPDK version we used is v19.04.1, commit id also attached.
    
    
    
    The steps below reproduce this issue 100% of the time.
    
    
    ./setup.sh
    
    ./app/nvmf_tgt/nvmf_tgt -i 1 &
    
    ./rpc.py nvmf_create_transport -t RDMA -u 8192 -p 4 -c 0
    
    ./rpc.py construct_nvme_bdev -b NVMe1 -t PCIe -a 0000:08:00.0
    
    ./rpc.py construct_nvme_bdev -b NVMe2 -t PCIe -a 0000:0d:00.0
    
    ./rpc.py construct_nvme_bdev -b NVMe3 -t PCIe -a 0000:12:00.0
    
    ./rpc.py construct_nvme_bdev -b NVMe4 -t PCIe -a 0000:17:00.0
    
    ####Create raid with SSD drive
    
    ./rpc.py construct_raid_bdev -n StoragePool1 -r 0 -z 64  -b "NVMe3n1"
    
    ###Create volume store
    
    ./rpc.py construct_lvol_store StoragePool1 StoragePool1store
    
    ###Create volume on the volume store
    
    ./scripts/rpc.py construct_lvol_bdev -l StoragePool1store "1" 10240
    
    
    Then reboot the system, and after it has rebooted, run the commands below:
    
    ./setup.sh
    
    ./app/nvmf_tgt/nvmf_tgt -i 1 &
    
    ./rpc.py nvmf_create_transport -t RDMA -u 8192 -p 4 -c 0
    
    ./rpc.py construct_nvme_bdev -b NVMe1 -t PCIe -a 0000:08:00.0
    
    ./rpc.py construct_nvme_bdev -b NVMe2 -t PCIe -a 0000:0d:00.0
    
    ./rpc.py construct_nvme_bdev -b NVMe3 -t PCIe -a 0000:12:00.0
    
    ./rpc.py construct_nvme_bdev -b NVMe4 -t PCIe -a 0000:17:00.0
    
    
    ### Recover the RAID
    
    ./rpc.py construct_raid_bdev -n StoragePool1 -r 0 -z 64  -b "NVMe3n1"   --> This command causes the issue. It reports an error message, but a raid bdev in "Configuring" status is still created.

    It seems that with the above steps we can't recover the previous settings after a system reboot, although the same steps work with the older SPDK v18.04. What should we do if we want to recover the settings after a system reboot?
    
    
    Thanks
    
    Neil
    
    _______________________________________________
    SPDK mailing list
    SPDK(a)lists.01.org
    https://lists.01.org/mailman/listinfo/spdk
    

_______________________________________________
SPDK mailing list
SPDK(a)lists.01.org
https://lists.01.org/mailman/listinfo/spdk


* Re: [SPDK] Configuring status raid bdevs with SPDK
@ 2019-08-20 19:19 Luse, Paul E
  0 siblings, 0 replies; 6+ messages in thread
From: Luse, Paul E @ 2019-08-20 19:19 UTC (permalink / raw)
  To: spdk

Hi Neil,

I started looking into this and can easily reproduce it.  Question though: why are you using a single-disk RAID0? Or do you mean to create a RAID0 out of all of the NVMe devices? In that case, when you create the RAID, you need to pass in a list of all of the devices.  The thinking is that this would work with anything other than a single-disk RAID0 (which has no real value outside of testing).
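For example, assuming each controller exposes a single namespace (so the bdev names are NVMe1n1 through NVMe4n1, matching the names in your steps), a four-disk RAID0 create would look something like this sketch, with the base bdevs passed as a quoted space-separated list:

./rpc.py construct_raid_bdev -n StoragePool1 -r 0 -z 64 -b "NVMe1n1 NVMe2n1 NVMe3n1 NVMe4n1"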

Thx
Paul

-----Original Message-----
From: SPDK [mailto:spdk-bounces(a)lists.01.org] On Behalf Of Luse, Paul E
Sent: Tuesday, August 20, 2019 9:31 AM
To: Storage Performance Development Kit <spdk(a)lists.01.org>
Cc: shibx(a)lenovo.com
Subject: Re: [SPDK] Configuring status raid bdevs with SPDK

Hi Neil,

Would you mind entering a github issue for this at https://github.com/spdk/spdk/issues ?

If someone else doesn't jump on this, I can look into it later today for you.

Thx
Paul

-----Original Message-----
From: SPDK [mailto:spdk-bounces(a)lists.01.org] On Behalf Of Neil Shi
Sent: Tuesday, August 20, 2019 1:01 AM
To: spdk(a)lists.01.org
Cc: shibx(a)lenovo.com
Subject: [SPDK] Configuring status raid bdevs with SPDK

Dear Experts,


When we use SPDK to create an SSD RAID and build a logical volume on top of it, the raid bdev ends up in "configuring" status. The RPC "destroy_raid_bdev" deletes the raid bdev without reporting an error, but "rpc.py get_bdevs all" still shows the raid bdev, and the SSD drives involved can't be used anymore. See attached screenshots.

The issue happens after the RAID and volume are created and the system is rebooted, when we try to recover the configuration.

The SPDK version we used is v19.04.1, commit id also attached.



The steps below reproduce this issue 100% of the time.


./setup.sh

./app/nvmf_tgt/nvmf_tgt -i 1 &

./rpc.py nvmf_create_transport -t RDMA -u 8192 -p 4 -c 0

./rpc.py construct_nvme_bdev -b NVMe1 -t PCIe -a 0000:08:00.0

./rpc.py construct_nvme_bdev -b NVMe2 -t PCIe -a 0000:0d:00.0

./rpc.py construct_nvme_bdev -b NVMe3 -t PCIe -a 0000:12:00.0

./rpc.py construct_nvme_bdev -b NVMe4 -t PCIe -a 0000:17:00.0

####Create raid with SSD drive

./rpc.py construct_raid_bdev -n StoragePool1 -r 0 -z 64  -b "NVMe3n1"

###Create volume store

./rpc.py construct_lvol_store StoragePool1 StoragePool1store

###Create volume on the volume store

./scripts/rpc.py construct_lvol_bdev -l StoragePool1store "1" 10240


Then reboot the system, and after it has rebooted, run the commands below:

./setup.sh

./app/nvmf_tgt/nvmf_tgt -i 1 &

./rpc.py nvmf_create_transport -t RDMA -u 8192 -p 4 -c 0

./rpc.py construct_nvme_bdev -b NVMe1 -t PCIe -a 0000:08:00.0

./rpc.py construct_nvme_bdev -b NVMe2 -t PCIe -a 0000:0d:00.0

./rpc.py construct_nvme_bdev -b NVMe3 -t PCIe -a 0000:12:00.0

./rpc.py construct_nvme_bdev -b NVMe4 -t PCIe -a 0000:17:00.0


### Recover the RAID

./rpc.py construct_raid_bdev -n StoragePool1 -r 0 -z 64  -b "NVMe3n1"   --> This command causes the issue. It reports an error message, but a raid bdev in "Configuring" status is still created.

It seems that with the above steps we can't recover the previous settings after a system reboot, although the same steps work with the older SPDK v18.04. What should we do if we want to recover the settings after a system reboot?


Thanks

Neil

_______________________________________________
SPDK mailing list
SPDK(a)lists.01.org
https://lists.01.org/mailman/listinfo/spdk


* Re: [SPDK] Configuring status raid bdevs with SPDK
@ 2019-08-20 16:31 Luse, Paul E
  0 siblings, 0 replies; 6+ messages in thread
From: Luse, Paul E @ 2019-08-20 16:31 UTC (permalink / raw)
  To: spdk

Hi Neil,

Would you mind entering a github issue for this at https://github.com/spdk/spdk/issues ?

If someone else doesn't jump on this, I can look into it later today for you.

Thx
Paul

-----Original Message-----
From: SPDK [mailto:spdk-bounces(a)lists.01.org] On Behalf Of Neil Shi
Sent: Tuesday, August 20, 2019 1:01 AM
To: spdk(a)lists.01.org
Cc: shibx(a)lenovo.com
Subject: [SPDK] Configuring status raid bdevs with SPDK

Dear Experts,


When we use SPDK to create an SSD RAID and build a logical volume on top of it, the raid bdev ends up in "configuring" status. The RPC "destroy_raid_bdev" deletes the raid bdev without reporting an error, but "rpc.py get_bdevs all" still shows the raid bdev, and the SSD drives involved can't be used anymore. See attached screenshots.

The issue happens after the RAID and volume are created and the system is rebooted, when we try to recover the configuration.

The SPDK version we used is v19.04.1, commit id also attached.



The steps below reproduce this issue 100% of the time.


./setup.sh

./app/nvmf_tgt/nvmf_tgt -i 1 &

./rpc.py nvmf_create_transport -t RDMA -u 8192 -p 4 -c 0

./rpc.py construct_nvme_bdev -b NVMe1 -t PCIe -a 0000:08:00.0

./rpc.py construct_nvme_bdev -b NVMe2 -t PCIe -a 0000:0d:00.0

./rpc.py construct_nvme_bdev -b NVMe3 -t PCIe -a 0000:12:00.0

./rpc.py construct_nvme_bdev -b NVMe4 -t PCIe -a 0000:17:00.0

####Create raid with SSD drive

./rpc.py construct_raid_bdev -n StoragePool1 -r 0 -z 64  -b "NVMe3n1"

###Create volume store

./rpc.py construct_lvol_store StoragePool1 StoragePool1store

###Create volume on the volume store

./scripts/rpc.py construct_lvol_bdev -l StoragePool1store "1" 10240


Then reboot the system, and after it has rebooted, run the commands below:

./setup.sh

./app/nvmf_tgt/nvmf_tgt -i 1 &

./rpc.py nvmf_create_transport -t RDMA -u 8192 -p 4 -c 0

./rpc.py construct_nvme_bdev -b NVMe1 -t PCIe -a 0000:08:00.0

./rpc.py construct_nvme_bdev -b NVMe2 -t PCIe -a 0000:0d:00.0

./rpc.py construct_nvme_bdev -b NVMe3 -t PCIe -a 0000:12:00.0

./rpc.py construct_nvme_bdev -b NVMe4 -t PCIe -a 0000:17:00.0


### Recover the RAID

./rpc.py construct_raid_bdev -n StoragePool1 -r 0 -z 64  -b "NVMe3n1"   --> This command causes the issue. It reports an error message, but a raid bdev in "Configuring" status is still created.

It seems that with the above steps we can't recover the previous settings after a system reboot, although the same steps work with the older SPDK v18.04. What should we do if we want to recover the settings after a system reboot?


Thanks

Neil

_______________________________________________
SPDK mailing list
SPDK(a)lists.01.org
https://lists.01.org/mailman/listinfo/spdk


* [SPDK] Configuring status raid bdevs with SPDK
@ 2019-08-20  8:00 Neil Shi
  0 siblings, 0 replies; 6+ messages in thread
From: Neil Shi @ 2019-08-20  8:00 UTC (permalink / raw)
  To: spdk

Dear Experts,


When we use SPDK to create an SSD RAID and build a logical volume on top of it, the raid bdev ends up in “configuring” status. The RPC “destroy_raid_bdev” deletes the raid bdev without reporting an error, but “rpc.py get_bdevs all” still shows the raid bdev, and the SSD drives involved can’t be used anymore. See attached screenshots.

The issue happens after the RAID and volume are created and the system is rebooted, when we try to recover the configuration.

The SPDK version we used is v19.04.1, commit id also attached.
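
As a side note, the raid bdev state can also be queried directly with the raid-specific RPC (this assumes the get_raid_bdevs RPC available in this SPDK release, which takes a category of all, online, configuring, or offline):

### List raid bdevs that are stuck in configuring state
./rpc.py get_raid_bdevs configuring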



The steps below reproduce this issue 100% of the time.


./setup.sh

./app/nvmf_tgt/nvmf_tgt -i 1 &

./rpc.py nvmf_create_transport -t RDMA -u 8192 -p 4 -c 0

./rpc.py construct_nvme_bdev -b NVMe1 -t PCIe -a 0000:08:00.0

./rpc.py construct_nvme_bdev -b NVMe2 -t PCIe -a 0000:0d:00.0

./rpc.py construct_nvme_bdev -b NVMe3 -t PCIe -a 0000:12:00.0

./rpc.py construct_nvme_bdev -b NVMe4 -t PCIe -a 0000:17:00.0

####Create raid with SSD drive

./rpc.py construct_raid_bdev -n StoragePool1 -r 0 -z 64  -b "NVMe3n1"

###Create volume store

./rpc.py construct_lvol_store StoragePool1 StoragePool1store

###Create volume on the volume store

./scripts/rpc.py construct_lvol_bdev -l StoragePool1store "1" 10240


Then reboot the system, and after it has rebooted, run the commands below:

./setup.sh

./app/nvmf_tgt/nvmf_tgt -i 1 &

./rpc.py nvmf_create_transport -t RDMA -u 8192 -p 4 -c 0

./rpc.py construct_nvme_bdev -b NVMe1 -t PCIe -a 0000:08:00.0

./rpc.py construct_nvme_bdev -b NVMe2 -t PCIe -a 0000:0d:00.0

./rpc.py construct_nvme_bdev -b NVMe3 -t PCIe -a 0000:12:00.0

./rpc.py construct_nvme_bdev -b NVMe4 -t PCIe -a 0000:17:00.0


### Recover the RAID

./rpc.py construct_raid_bdev -n StoragePool1 -r 0 -z 64  -b "NVMe3n1"   --> This command causes the issue. It reports an error message, but a raid bdev in “Configuring” status is still created.

It seems that with the above steps we can’t recover the previous settings after a system reboot, although the same steps work with the older SPDK v18.04. What should we do if we want to recover the settings after a system reboot?


Thanks

Neil



end of thread, other threads:[~2019-08-21  3:03 UTC | newest]

Thread overview: 6+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2019-08-20 19:30 [SPDK] Configuring status raid bdevs with SPDK Harris, James R
  -- strict thread matches above, loose matches on Subject: below --
2019-08-21  3:03 Yan, Liang Z
2019-08-20 20:43 Luse, Paul E
2019-08-20 19:19 Luse, Paul E
2019-08-20 16:31 Luse, Paul E
2019-08-20  8:00 Neil Shi
