* Re: [SPDK] lvol function: hangs up with Nvme bdev.
@ 2018-02-23  8:45 Terry_MF_Kao
  0 siblings, 0 replies; 9+ messages in thread
From: Terry_MF_Kao @ 2018-02-23  8:45 UTC (permalink / raw)
  To: spdk


Hi Maciek,



> Hello Terry,

> I pushed a patch (already merged on master) that changes the default behavior of lvol store creation. With this change we write zeroes over the metadata space and unmap the data clusters. Please check if this works well in your environment.

> Here is the patch:

> https://review.gerrithub.io/#/c/387152/



I did the same testing in my environment.

Yes, it works. The creation time drops to about 2 minutes, as shown below:



# time ./scripts/rpc.py construct_lvol_store Nvme0n1 lvs_1 -c 65536

c7f2a420-186f-11e8-9e50-00e04c6805c8



real    2m12.727s

user    0m0.073s

sys     0m0.016s
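
(That is down from 13m12s in the earlier test, roughly a 6x improvement, although the earlier run used a different namespace and cluster size.)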




Regards,
Terry


* Re: [SPDK] lvol function: hangs up with Nvme bdev.
@ 2018-02-23 16:43 Szwed, Maciej
  0 siblings, 0 replies; 9+ messages in thread
From: Szwed, Maciej @ 2018-02-23 16:43 UTC (permalink / raw)
  To: spdk


Good to hear that. I hope this execution time is satisfactory. Glad I could help.

Maciek

From: SPDK [mailto:spdk-bounces(a)lists.01.org] On Behalf Of Terry_MF_Kao(a)wistron.com
Sent: Friday, February 23, 2018 1:45 AM
To: spdk(a)lists.01.org
Subject: Re: [SPDK] lvol function: hangs up with Nvme bdev.


Hi Maciek,



> Hello Terry,

> I pushed a patch (already merged on master) that changes the default behavior of lvol store creation. With this change we write zeroes over the metadata space and unmap the data clusters. Please check if this works well in your environment.

> Here is the patch:

> https://review.gerrithub.io/#/c/387152/



I did the same testing in my environment.

Yes, it works. The creation time drops to about 2 minutes, as shown below:



# time ./scripts/rpc.py construct_lvol_store Nvme0n1 lvs_1 -c 65536

c7f2a420-186f-11e8-9e50-00e04c6805c8



real    2m12.727s

user    0m0.073s

sys     0m0.016s




Regards,
Terry


* Re: [SPDK] lvol function: hangs up with Nvme bdev.
@ 2017-11-24  9:14 Szwed, Maciej
  0 siblings, 0 replies; 9+ messages in thread
From: Szwed, Maciej @ 2017-11-24  9:14 UTC (permalink / raw)
  To: spdk


Hello Terry,
I pushed a patch (already merged on master) that changes the default behavior of lvol store creation. With this change we write zeroes over the metadata space and unmap the data clusters. Please check if this works well in your environment.
Here is the patch:
https://review.gerrithub.io/#/c/387152/

Regards,
Maciek

From: SPDK [mailto:spdk-bounces(a)lists.01.org] On Behalf Of Terry_MF_Kao(a)wistron.com
Sent: Friday, November 10, 2017 8:53 AM
To: spdk(a)lists.01.org
Subject: Re: [SPDK] lvol function: hangs up with Nvme bdev.

Hello Tomek,

>Without support for unmap, the whole device is written to, which takes a long time. This operation has to be done only once per lvol store creation. An optional flag could be added to RPC construct_lvol_store
>that allows the user to skip unmapping the whole device, with the caveat that previously present data would still be there after lvol store creation. Would you find such an option useful?
True, I think so.

>Do you see any output other than "NVMe DSM: success"?
No, it looks to be working fine.
1.) RPC get_bdevs()
{
    "num_blocks": 3125627568,
    "supported_io_types": {
      "reset": true,
      "nvme_admin": true,
      "unmap": true,        *****
      "read": true,
      "write_zeroes": false,
      "write": true,
      "flush": true,
      "nvme_io": true
    },
    "driver_specific": {
      "nvme": {
        "trid": {
          "trtype": "PCIe",
          "traddr": "0000:84:00.0"
        },
        "ns_data": {
          "id": 1
        },
        "pci_address": "0000:84:00.0",
        "vs": {
          "nvme_version": "1.1"
        },
        "ctrlr_data": {
          "firmware_revision": "KPYA6B3Q",
          "serial_number": "S2EVNAAH600017",
          "oacs": {
            "ns_manage": 0,
            "security": 0,
            "firmware": 1,
            "format": 1
          },
          "vendor_id": "0x144d",
          "model_number": "SAMSUNG MZWLK1T6HCHP-00003"
        },
        "csts": {
          "rdy": 1,
          "cfs": 0
        }
      }
    },
    "claimed": true,
    "block_size": 512,
    "product_name": "NVMe disk",
    "name": "Nvme0n1"
  }
2.) nvme dsm
# nvme dsm /dev/nvme0n1 -d -s 0 -b 0
NVMe DSM: success
# nvme dsm /dev/nvme0n1 -d -s 0 -b 1
NVMe DSM: success
3.) nvme id-ctrl  (from Andrey's suggestion)
        # nvme id-ctrl /dev/nvme0 -H | grep Data
        [2:2] : 0x1   Data Set Management Supported



>This might be a good idea, but it would require some changes in blobstore or a bdev aggregating multiple others underneath. I've added a topic for this to the SPDK Community meeting agenda.
I saw it on Trello. Looking forward to it. Thanks for raising it!


Best Regards,
Terry


* Re: [SPDK] lvol function: hangs up with Nvme bdev.
@ 2017-11-10  7:52 Terry_MF_Kao
  0 siblings, 0 replies; 9+ messages in thread
From: Terry_MF_Kao @ 2017-11-10  7:52 UTC (permalink / raw)
  To: spdk


Hello Tomek,

>Without support for unmap, the whole device is written to, which takes a long time. This operation has to be done only once per lvol store creation. An optional flag could be added to RPC construct_lvol_store
>that allows the user to skip unmapping the whole device, with the caveat that previously present data would still be there after lvol store creation. Would you find such an option useful?
True, I think so.

>Do you see any output other than "NVMe DSM: success"?
No, it looks to be working fine.
1.) RPC get_bdevs()
{
    "num_blocks": 3125627568,
    "supported_io_types": {
      "reset": true,
      "nvme_admin": true,
      "unmap": true,        *****
      "read": true,
      "write_zeroes": false,
      "write": true,
      "flush": true,
      "nvme_io": true
    },
    "driver_specific": {
      "nvme": {
        "trid": {
          "trtype": "PCIe",
          "traddr": "0000:84:00.0"
        },
        "ns_data": {
          "id": 1
        },
        "pci_address": "0000:84:00.0",
        "vs": {
          "nvme_version": "1.1"
        },
        "ctrlr_data": {
          "firmware_revision": "KPYA6B3Q",
          "serial_number": "S2EVNAAH600017",
          "oacs": {
            "ns_manage": 0,
            "security": 0,
            "firmware": 1,
            "format": 1
          },
          "vendor_id": "0x144d",
          "model_number": "SAMSUNG MZWLK1T6HCHP-00003"
        },
        "csts": {
          "rdy": 1,
          "cfs": 0
        }
      }
    },
    "claimed": true,
    "block_size": 512,
    "product_name": "NVMe disk",
    "name": "Nvme0n1"
  }
2.) nvme dsm
# nvme dsm /dev/nvme0n1 -d -s 0 -b 0
NVMe DSM: success
# nvme dsm /dev/nvme0n1 -d -s 0 -b 1
NVMe DSM: success
3.) nvme id-ctrl  (from Andrey's suggestion)
        # nvme id-ctrl /dev/nvme0 -H | grep Data
        [2:2] : 0x1   Data Set Management Supported
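
As a shortcut, assuming jq is installed, the same flag can be pulled straight out of get_bdevs:
        # ./scripts/rpc.py get_bdevs | jq '.[] | {name: .name, unmap: .supported_io_types.unmap}'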



>This might be a good idea, but it would require some changes in blobstore or a bdev aggregating multiple others underneath. I've added a topic for this to the SPDK Community meeting agenda.
I saw it on Trello. Looking forward to it. Thanks for raising it!


Best Regards,
Terry


* Re: [SPDK] lvol function: hangs up with Nvme bdev.
@ 2017-11-08 16:08 Andrey Kuzmin
  0 siblings, 0 replies; 9+ messages in thread
From: Andrey Kuzmin @ 2017-11-08 16:08 UTC (permalink / raw)
  To: spdk


On Wed, Nov 8, 2017, 19:02 Zawadzki, Tomasz <tomasz.zawadzki(a)intel.com>
wrote:

> Hello Terry,
>
>
>
> > It does NOT hang up, but takes 13 minutes to finish the command.
>
> Without support for unmap, the whole device is written to, which takes a
> long time. This operation has to be done only once per lvol store creation.
> An optional flag could be added to RPC construct_lvol_store that allows the
> user to skip unmapping the whole device, with the caveat that previously
> present data would still be there after lvol store creation. Would you find
> such an option useful?
>
>
>
> > Is there a standard command/method to check if my device supports the
> > UNMAP (or do you mean TRIM?) command?
>
> To display the supported I/O types for a certain bdev in SPDK, the RPC
> get_bdevs() can be issued.
>
> For an nvme bdev, support for unmap means that the device supports the
> Dataset Management command (deallocate). This is optional to implement
> in the NVMe spec.
>
> I have not been able to find a tool to display that property of an NVMe
> device.
>

nvme-cli identify controller
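
For example (assuming the device path; -H decodes the capability bits):

# nvme id-ctrl /dev/nvme0 -H | grep -i "data set management"

Bit 2 of the ONCS field reports Dataset Management (deallocate) support.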

Regards,
Andrey

> There is a way to issue Dataset Management commands directly in Linux:
>
> sudo nvme dsm /dev/nvme0n1 -d -s 0 -b 1           # Please note it will
> unmap the first block of the listed device.
>
> Do you see any output other than "NVMe DSM: success"?
>
>
>
> > Yes, RAID 0 striping or JBOF.
>
> > At first we would like to have the capability of LVM (lvm/dmsetup)
> > running on SPDK.
>
> > We tested it with bdev_aio on the above device.
>
> > It's doable, but the performance is not good (compared to bdev_nvme).
>
> > Then we saw lvol and thought it would behave like LVM, so...
>
> > Do you think combining bdevs into one with lvol is not a good idea?
>
> This might be a good idea, but it would require some changes in blobstore
> or a bdev aggregating multiple others underneath. I've added a topic for
> this to the SPDK Community meeting agenda here:
> <https://trello.com/b/DvM7XayJ/spdk-community-meeting-agenda>. Feel free
> to join tomorrow 11/9 at 8am PDT / 5pm CEST.
>
>
>
>
>
> Best regards,
>
> Tomek
>
>
>
> From: SPDK [mailto:spdk-bounces(a)lists.01.org] On Behalf Of
> Terry_MF_Kao(a)wistron.com
> Sent: Tuesday, November 7, 2017 3:38 AM
> To: spdk(a)lists.01.org
> Subject: Re: [SPDK] lvol function: hangs up with Nvme bdev.
>
>
>
> Hi Tomek,
>
>
>
>
>
> > On constructing an lvol store, an unmap command is issued for all blocks
> > of the base bdev. When the underlying device does not support the unmap
> > command, it is explicitly written with zeroes. That might take a while
> > depending on the size of the device. Did the offsets when writing to the
> > Nvme device ever stop increasing?
>
> >bdev_nvme.c: 184:bdev_nvme_writev: *INFO*: write 2048 blocks with offset 0
>
>
>
> > Does your Nvme device/driver support unmap? One example of a driver not
> > supporting unmap is emulated Nvme in QEMU.
>
>
>
> Thanks for the explanation. I tested it again without interrupting it.
>
> Yes, you are right. It does NOT hang up, but takes 13 minutes to finish
> the command.
>
>
>
> # time ./scripts/rpc.py construct_lvol_store Nvme2n1 lvs_n1 -c 1048576
>
> 9fa1880a-c2cb-11e7-8a4a-00e04c6805c8
>
>
>
> real    13m12.452s
>
> user    0m0.075s
>
> sys     0m0.008s
>
>
>
> Is there a standard command/method to check if my device supports the
> UNMAP (or do you mean TRIM?) command?
>
> The device is "Samsung Electronics Co Ltd NVMe SSD Controller 172X"
> (1.6 TB) and the kernel is 4.10.3.
>
> # nvme list
>
> Node             Model                                    Namespace Usage                      Format           FW Rev
> ---------------- ---------------------------------------- --------- -------------------------- ---------------- --------
> /dev/nvme0n1     SAMSUNG MZWLK1T6HCHP-00003               1           1.60  TB /   1.60  TB    512   B +  0 B   KPYA6B3Q
>
>
> Didn't see any information about UNMAP/TRIM in the datasheet.
> But according to the output of lsblk:
>
> # lsblk -D
> NAME              DISC-ALN DISC-GRAN DISC-MAX DISC-ZERO
> nvme0n1                512      512B       2T         0
>
> There are non-zero values for "DISC-GRAN" and "DISC-MAX",
> so I suppose the device should support the unmap command.
> If so, it shouldn't take such a long time, should it?
>
>
>
> > At the moment a logical volume store can span only one base bdev.
> > What usage do you have in mind for combining bdevs into one with logical
> > volumes, RAID 0 striping or RAID 1 mirroring?
>
>
>
> Yes, RAID 0 striping or JBOF.
>
> At first we would like to have the capability of LVM (lvm/dmsetup)
> running on SPDK.
>
> We tested it with bdev_aio on the above device.
>
> It's doable, but the performance is not good (compared to bdev_nvme).
>
> Then we saw lvol and thought it would behave like LVM, so...
>
> Do you think combining bdevs into one with lvol is not a good idea?
>
>
>
>
>
> Best Regards,
>
> Terry
>
-- 

Regards,
Andrey


* Re: [SPDK] lvol function: hangs up with Nvme bdev.
@ 2017-11-08 16:02 Zawadzki, Tomasz
  0 siblings, 0 replies; 9+ messages in thread
From: Zawadzki, Tomasz @ 2017-11-08 16:02 UTC (permalink / raw)
  To: spdk


Hello Terry,

> It does NOT hang up, but takes 13 minutes to finish the command.
Without support for unmap, the whole device is written to, which takes a long time. This operation has to be done only once per lvol store creation. An optional flag could be added to RPC construct_lvol_store that allows the user to skip unmapping the whole device, with the caveat that previously present data would still be there after lvol store creation. Would you find such an option useful?

> Is there a standard command/method to check if my device supports the UNMAP (or do you mean TRIM?) command?
To display the supported I/O types for a certain bdev in SPDK, the RPC get_bdevs() can be issued.
For an nvme bdev, support for unmap means that the device supports the Dataset Management command (deallocate). This is optional to implement in the NVMe spec.
I have not been able to find a tool to display that property of an NVMe device.
There is a way to issue Dataset Management commands directly in Linux:
sudo nvme dsm /dev/nvme0n1 -d -s 0 -b 1           # Please note it will unmap the first block of the listed device.
Do you see any output other than "NVMe DSM: success"?

> Yes, RAID 0 striping or JBOF.
> At first we would like to have the capability of LVM (lvm/dmsetup) running on SPDK.
> We tested it with bdev_aio on the above device.
> It's doable, but the performance is not good (compared to bdev_nvme).
>
> Then we saw lvol and thought it would behave like LVM, so...
> Do you think combining bdevs into one with lvol is not a good idea?
This might be a good idea, but it would require some changes in blobstore or a bdev aggregating multiple others underneath. I've added a topic for this to the SPDK Community meeting agenda here: https://trello.com/b/DvM7XayJ/spdk-community-meeting-agenda. Feel free to join tomorrow 11/9 at 8am PDT / 5pm CEST.


Best regards,
Tomek

From: SPDK [mailto:spdk-bounces(a)lists.01.org] On Behalf Of Terry_MF_Kao(a)wistron.com
Sent: Tuesday, November 7, 2017 3:38 AM
To: spdk(a)lists.01.org
Subject: Re: [SPDK] lvol function: hangs up with Nvme bdev.

Hi Tomek,


>On constructing an lvol store, an unmap command is issued for all blocks of the base bdev. When the underlying device does not support the unmap command, it is explicitly written with zeroes. That might take a while depending on the size of the device. Did the offsets when writing to the Nvme device ever stop increasing?
>bdev_nvme.c: 184:bdev_nvme_writev: *INFO*: write 2048 blocks with offset 0

>Does your Nvme device/driver support unmap? One example of a driver not supporting unmap is emulated Nvme in QEMU.

Thanks for the explanation. I tested it again without interrupting it.
Yes, you are right. It does NOT hang up, but takes 13 minutes to finish the command.

# time ./scripts/rpc.py construct_lvol_store Nvme2n1 lvs_n1 -c 1048576
9fa1880a-c2cb-11e7-8a4a-00e04c6805c8

real    13m12.452s
user    0m0.075s
sys     0m0.008s

Is there a standard command/method to check if my device supports the UNMAP (or do you mean TRIM?) command?
The device is "Samsung Electronics Co Ltd NVMe SSD Controller 172X" (1.6 TB) and the kernel is 4.10.3.
# nvme list
Node             Model                                    Namespace Usage                      Format           FW Rev
---------------- ---------------------------------------- --------- -------------------------- ---------------- --------
/dev/nvme0n1     SAMSUNG MZWLK1T6HCHP-00003               1           1.60  TB /   1.60  TB    512   B +  0 B   KPYA6B3Q

I didn't see any information about UNMAP/TRIM in the datasheet.
But according to the output of lsblk:
# lsblk -D
NAME              DISC-ALN DISC-GRAN DISC-MAX DISC-ZERO
nvme0n1                512      512B       2T         0

There are non-zero values for "DISC-GRAN" and "DISC-MAX",
so I suppose the device should support the unmap command.
If so, it shouldn't take such a long time, should it?

>At the moment a logical volume store can span only one base bdev.
>What usage do you have in mind for combining bdevs into one with logical volumes, RAID 0 striping or RAID 1 mirroring?

Yes, RAID 0 striping or JBOF.
At first we would like to have the capability of LVM (lvm/dmsetup) running on SPDK.
We tested it with bdev_aio on the above device.
It's doable, but the performance is not good (compared to bdev_nvme).

Then we saw lvol and thought it would behave like LVM, so...
Do you think combining bdevs into one with lvol is not a good idea?


Best Regards,
Terry


* Re: [SPDK] lvol function: hangs up with Nvme bdev.
@ 2017-11-07  2:37 Terry_MF_Kao
  0 siblings, 0 replies; 9+ messages in thread
From: Terry_MF_Kao @ 2017-11-07  2:37 UTC (permalink / raw)
  To: spdk


Hi Tomek,


>On constructing an lvol store, an unmap command is issued for all blocks of the base bdev. When the underlying device does not support the unmap command, it is explicitly written with zeroes. That might take a while depending on the size of the device. Did the offsets when writing to the Nvme device ever stop increasing?
>bdev_nvme.c: 184:bdev_nvme_writev: *INFO*: write 2048 blocks with offset 0

>Does your Nvme device/driver support unmap? One example of a driver not supporting unmap is emulated Nvme in QEMU.

Thanks for the explanation. I tested it again without interrupting it.
Yes, you are right. It does NOT hang up, but takes 13 minutes to finish the command.

# time ./scripts/rpc.py construct_lvol_store Nvme2n1 lvs_n1 -c 1048576
9fa1880a-c2cb-11e7-8a4a-00e04c6805c8

real    13m12.452s
user    0m0.075s
sys     0m0.008s
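
(A rough sanity check on that number: 3125627568 blocks x 512 B per block is about 1.6 TB, and writing 1.6 TB of zeroes in ~792 s works out to roughly 2 GB/s, so 13 minutes is consistent with the whole device being written rather than unmapped.)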

Is there a standard command/method to check if my device supports the UNMAP (or do you mean TRIM?) command?
The device is "Samsung Electronics Co Ltd NVMe SSD Controller 172X" (1.6 TB) and the kernel is 4.10.3.
# nvme list
Node             Model                                    Namespace Usage                      Format           FW Rev
---------------- ---------------------------------------- --------- -------------------------- ---------------- --------
/dev/nvme0n1     SAMSUNG MZWLK1T6HCHP-00003               1           1.60  TB /   1.60  TB    512   B +  0 B   KPYA6B3Q

I didn't see any information about UNMAP/TRIM in the datasheet.
But according to the output of lsblk:
# lsblk -D
NAME              DISC-ALN DISC-GRAN DISC-MAX DISC-ZERO
nvme0n1                512      512B       2T         0

There are non-zero values for "DISC-GRAN" and "DISC-MAX",
so I suppose the device should support the unmap command.
If so, it shouldn't take such a long time, should it?
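
(As I understand it, lsblk -D reports the discard parameters exposed by the kernel driver, and the kernel nvme driver derives them from the same optional Dataset Management bit in the controller's ONCS field, so non-zero DISC-GRAN/DISC-MAX here should indeed indicate deallocate support.)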

>At the moment a logical volume store can span only one base bdev.
>What usage do you have in mind for combining bdevs into one with logical volumes, RAID 0 striping or RAID 1 mirroring?

Yes, RAID 0 striping or JBOF.
At first we would like to have the capability of LVM (lvm/dmsetup) running on SPDK.
We tested it with bdev_aio on the above device.
It's doable, but the performance is not good (compared to bdev_nvme).

Then we saw lvol and thought it would behave like LVM, so...
Do you think combining bdevs into one with lvol is not a good idea?


Best Regards,
Terry


* Re: [SPDK] lvol function: hangs up with Nvme bdev.
@ 2017-11-02 18:55 Zawadzki, Tomasz
  0 siblings, 0 replies; 9+ messages in thread
From: Zawadzki, Tomasz @ 2017-11-02 18:55 UTC (permalink / raw)
  To: spdk


Hello Terry,

On constructing an lvol store, an unmap command is issued for all blocks of the base bdev. When the underlying device does not support the unmap command, it is explicitly written with zeroes. That might take a while depending on the size of the device. Did the offsets when writing to the Nvme device ever stop increasing?
bdev_nvme.c: 184:bdev_nvme_writev: *INFO*: write 2048 blocks with offset 0

Does your Nvme device/driver support unmap? One example of a driver not supporting unmap is emulated Nvme in QEMU.

>It looks like lvol is used to divide a bdev into many bdevs of variable size.
>But is it possible to combine many bdevs into one?
>If not, is there any future plan to achieve that?
At the moment a logical volume store can span only one base bdev.
What usage do you have in mind for combining bdevs into one with logical volumes, RAID 0 striping or RAID 1 mirroring?

Best regards,
Tomek

From: SPDK [mailto:spdk-bounces(a)lists.01.org] On Behalf Of Terry_MF_Kao(a)wistron.com
Sent: Thursday, November 2, 2017 3:57 AM
To: spdk(a)lists.01.org
Subject: [SPDK] lvol function: hangs up with Nvme bdev.

Hi,


I'm trying the lvol function.
When I use Malloc as the base bdev, it works fine.

However, if I use Nvme as the base bdev, the command hangs up.
# ./scripts/rpc.py construct_lvol_store Nvme0n1 lvs_1 -c 65536
(hangs up)

I tried to enable verbose logging by adding "-t all" to nvmf_tgt.
Then the following message loops while it hangs:
request.c: 237:spdk_bs_sequence_write_zeroes: *INFO*: writing zeroes to 3125627568 blocks at LBA 0
nvme_pcie.c:1521:nvme_pcie_prp_list_append: *INFO*: prp_index:0 virt_addr:0x7f6744ced000 len:4096
nvme_pcie.c:1548:nvme_pcie_prp_list_append: *INFO*: prp1 = 0xbc54ed000
bdev_nvme.c: 184:bdev_nvme_writev: *INFO*: write 2048 blocks with offset 0
nvme_pcie.c:1521:nvme_pcie_prp_list_append: *INFO*: prp_index:0 virt_addr:0x7f6744d00000 len:131072
nvme_pcie.c:1548:nvme_pcie_prp_list_append: *INFO*: prp1 = 0xbc5500000
nvme_pcie.c:1557:nvme_pcie_prp_list_append: *INFO*: prp[0] = 0xbc5501000
nvme_pcie.c:1557:nvme_pcie_prp_list_append: *INFO*: prp[1] = 0xbc5502000
nvme_pcie.c:1557:nvme_pcie_prp_list_append: *INFO*: prp[2] = 0xbc5503000
nvme_pcie.c:1557:nvme_pcie_prp_list_append: *INFO*: prp[3] = 0xbc5504000
nvme_pcie.c:1557:nvme_pcie_prp_list_append: *INFO*: prp[4] = 0xbc5505000
nvme_pcie.c:1557:nvme_pcie_prp_list_append: *INFO*: prp[5] = 0xbc5506000
nvme_pcie.c:1557:nvme_pcie_prp_list_append: *INFO*: prp[6] = 0xbc5507000
nvme_pcie.c:1557:nvme_pcie_prp_list_append: *INFO*: prp[7] = 0xbc5508000
nvme_pcie.c:1557:nvme_pcie_prp_list_append: *INFO*: prp[8] = 0xbc5509000
nvme_pcie.c:1557:nvme_pcie_prp_list_append: *INFO*: prp[9] = 0xbc550a000
nvme_pcie.c:1557:nvme_pcie_prp_list_append: *INFO*: prp[10] = 0xbc550b000
nvme_pcie.c:1557:nvme_pcie_prp_list_append: *INFO*: prp[11] = 0xbc550c000
nvme_pcie.c:1557:nvme_pcie_prp_list_append: *INFO*: prp[12] = 0xbc550d000
nvme_pcie.c:1557:nvme_pcie_prp_list_append: *INFO*: prp[13] = 0xbc550e000
nvme_pcie.c:1557:nvme_pcie_prp_list_append: *INFO*: prp[14] = 0xbc550f000
nvme_pcie.c:1557:nvme_pcie_prp_list_append: *INFO*: prp[15] = 0xbc5510000
          .....

Any suggestions as to what the problem might be?




A further question:
It looks like lvol is used to divide a bdev into many bdevs of variable size.
But is it possible to combine many bdevs into one?
If not, is there any future plan to achieve that?


Best Regards,
Terry


* [SPDK]  lvol function: hangs up with Nvme bdev.
@ 2017-11-02  2:57 Terry_MF_Kao
  0 siblings, 0 replies; 9+ messages in thread
From: Terry_MF_Kao @ 2017-11-02  2:57 UTC (permalink / raw)
  To: spdk


Hi,


I'm trying the lvol function.
When I use Malloc as the base bdev, it works fine.

However, if I use Nvme as the base bdev, the command hangs up.
# ./scripts/rpc.py construct_lvol_store Nvme0n1 lvs_1 -c 65536
(hangs up)

I tried to enable verbose logging by adding "-t all" to nvmf_tgt.
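(Invocation sketch, assuming the stock app location and a placeholder config file name: # ./app/nvmf_tgt/nvmf_tgt -c nvmf.conf -t all)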
Then the following message loops while it hangs:
request.c: 237:spdk_bs_sequence_write_zeroes: *INFO*: writing zeroes to 3125627568 blocks at LBA 0
nvme_pcie.c:1521:nvme_pcie_prp_list_append: *INFO*: prp_index:0 virt_addr:0x7f6744ced000 len:4096
nvme_pcie.c:1548:nvme_pcie_prp_list_append: *INFO*: prp1 = 0xbc54ed000
bdev_nvme.c: 184:bdev_nvme_writev: *INFO*: write 2048 blocks with offset 0
nvme_pcie.c:1521:nvme_pcie_prp_list_append: *INFO*: prp_index:0 virt_addr:0x7f6744d00000 len:131072
nvme_pcie.c:1548:nvme_pcie_prp_list_append: *INFO*: prp1 = 0xbc5500000
nvme_pcie.c:1557:nvme_pcie_prp_list_append: *INFO*: prp[0] = 0xbc5501000
nvme_pcie.c:1557:nvme_pcie_prp_list_append: *INFO*: prp[1] = 0xbc5502000
nvme_pcie.c:1557:nvme_pcie_prp_list_append: *INFO*: prp[2] = 0xbc5503000
nvme_pcie.c:1557:nvme_pcie_prp_list_append: *INFO*: prp[3] = 0xbc5504000
nvme_pcie.c:1557:nvme_pcie_prp_list_append: *INFO*: prp[4] = 0xbc5505000
nvme_pcie.c:1557:nvme_pcie_prp_list_append: *INFO*: prp[5] = 0xbc5506000
nvme_pcie.c:1557:nvme_pcie_prp_list_append: *INFO*: prp[6] = 0xbc5507000
nvme_pcie.c:1557:nvme_pcie_prp_list_append: *INFO*: prp[7] = 0xbc5508000
nvme_pcie.c:1557:nvme_pcie_prp_list_append: *INFO*: prp[8] = 0xbc5509000
nvme_pcie.c:1557:nvme_pcie_prp_list_append: *INFO*: prp[9] = 0xbc550a000
nvme_pcie.c:1557:nvme_pcie_prp_list_append: *INFO*: prp[10] = 0xbc550b000
nvme_pcie.c:1557:nvme_pcie_prp_list_append: *INFO*: prp[11] = 0xbc550c000
nvme_pcie.c:1557:nvme_pcie_prp_list_append: *INFO*: prp[12] = 0xbc550d000
nvme_pcie.c:1557:nvme_pcie_prp_list_append: *INFO*: prp[13] = 0xbc550e000
nvme_pcie.c:1557:nvme_pcie_prp_list_append: *INFO*: prp[14] = 0xbc550f000
nvme_pcie.c:1557:nvme_pcie_prp_list_append: *INFO*: prp[15] = 0xbc5510000
          .....

Any suggestions as to what the problem might be?




A further question:
It looks like lvol is used to divide a bdev into many bdevs of variable size.
But is it possible to combine many bdevs into one?
If not, is there any future plan to achieve that?


Best Regards,
Terry

