Hello Terry,

 

When an lvol store is constructed, an unmap command is issued for all blocks of the base bdev. If the underlying device does not support unmap, those blocks are explicitly written with zeroes instead, which can take a while depending on the size of the device. Did the offsets in the writes to the Nvme device ever stop increasing?

bdev_nvme.c: 184:bdev_nvme_writev: *INFO*: write 2048 blocks with offset 0
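
For scale: the log below reports writing zeroes to 3125627568 blocks. Assuming a 512-byte block size (an assumption on my part; your namespace may be formatted differently), that works out to roughly:

    3125627568 blocks x 512 B/block ≈ 1.6 TB

so even at around 1 GB/s of sustained sequential write throughput, zero-filling the whole device would take on the order of half an hour.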

 

Does your Nvme device/driver support unmap? One example of a device that does not support unmap is emulated Nvme in QEMU.
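
One way to check (a sketch using nvme-cli, nothing SPDK-specific) is to read the ONCS field of the controller's Identify data. Note that the device has to be bound back to the kernel nvme driver first, e.g. with ./scripts/setup.sh reset, since SPDK detaches it from the kernel; adjust /dev/nvme0 to whichever controller your device shows up as:

# nvme id-ctrl /dev/nvme0 | grep oncs
oncs      : 0x5e

Bit 2 of ONCS indicates support for the Dataset Management command, which is what unmap is issued as. In this example output, 0x5e has bit 2 (0x4) set, so that controller would support unmap.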

 

> It looks like lvol is used to divide a bdev into many bdevs with variable sizes.

> But is it possible to combine many bdevs into one?

> If not, is there any future plan to achieve that?

At the moment a logical volume store can span only one base bdev.

What usage do you have in mind for combining bdevs into one with logical volumes: RAID 0 striping or RAID 1 mirroring?

 

Best regards,

Tomek

 

From: SPDK [mailto:spdk-bounces@lists.01.org] On Behalf Of Terry_MF_Kao@wistron.com
Sent: Thursday, November 2, 2017 3:57 AM
To: spdk@lists.01.org
Subject: [SPDK] lvol function: hangs up with Nvme bdev.

 

Hi,

 

I’m trying the lvol function.

When I use Malloc as the base bdev, it works fine.

 

However, when I use Nvme as the base bdev, the command hangs.

# ./scripts/rpc.py construct_lvol_store Nvme0n1 lvs_1 -c 65536

(hangs)

 

I enabled verbose output by adding “-t all” to nvmf_tgt.

The following messages then loop while it hangs:

request.c: 237:spdk_bs_sequence_write_zeroes: *INFO*: writing zeroes to 3125627568 blocks at LBA 0

nvme_pcie.c:1521:nvme_pcie_prp_list_append: *INFO*: prp_index:0 virt_addr:0x7f6744ced000 len:4096

nvme_pcie.c:1548:nvme_pcie_prp_list_append: *INFO*: prp1 = 0xbc54ed000

bdev_nvme.c: 184:bdev_nvme_writev: *INFO*: write 2048 blocks with offset 0

nvme_pcie.c:1521:nvme_pcie_prp_list_append: *INFO*: prp_index:0 virt_addr:0x7f6744d00000 len:131072

nvme_pcie.c:1548:nvme_pcie_prp_list_append: *INFO*: prp1 = 0xbc5500000

nvme_pcie.c:1557:nvme_pcie_prp_list_append: *INFO*: prp[0] = 0xbc5501000

nvme_pcie.c:1557:nvme_pcie_prp_list_append: *INFO*: prp[1] = 0xbc5502000

nvme_pcie.c:1557:nvme_pcie_prp_list_append: *INFO*: prp[2] = 0xbc5503000

nvme_pcie.c:1557:nvme_pcie_prp_list_append: *INFO*: prp[3] = 0xbc5504000

nvme_pcie.c:1557:nvme_pcie_prp_list_append: *INFO*: prp[4] = 0xbc5505000

nvme_pcie.c:1557:nvme_pcie_prp_list_append: *INFO*: prp[5] = 0xbc5506000

nvme_pcie.c:1557:nvme_pcie_prp_list_append: *INFO*: prp[6] = 0xbc5507000

nvme_pcie.c:1557:nvme_pcie_prp_list_append: *INFO*: prp[7] = 0xbc5508000

nvme_pcie.c:1557:nvme_pcie_prp_list_append: *INFO*: prp[8] = 0xbc5509000

nvme_pcie.c:1557:nvme_pcie_prp_list_append: *INFO*: prp[9] = 0xbc550a000

nvme_pcie.c:1557:nvme_pcie_prp_list_append: *INFO*: prp[10] = 0xbc550b000

nvme_pcie.c:1557:nvme_pcie_prp_list_append: *INFO*: prp[11] = 0xbc550c000

nvme_pcie.c:1557:nvme_pcie_prp_list_append: *INFO*: prp[12] = 0xbc550d000

nvme_pcie.c:1557:nvme_pcie_prp_list_append: *INFO*: prp[13] = 0xbc550e000

nvme_pcie.c:1557:nvme_pcie_prp_list_append: *INFO*: prp[14] = 0xbc550f000

nvme_pcie.c:1557:nvme_pcie_prp_list_append: *INFO*: prp[15] = 0xbc5510000

          …..

 

Any suggestions as to what the problem might be?

 

A further question:

It looks like lvol is used to divide a bdev into many bdevs with variable sizes.

But is it possible to combine many bdevs into one?

If not, is there any future plan to achieve that?

 

Best Regards,

Terry
