* [SPDK] Re: Race condition in bdev_nvme
@ 2019-10-07 21:16 Geoffrey McRae
  0 siblings, 0 replies; 2+ messages in thread
From: Geoffrey McRae @ 2019-10-07 21:16 UTC (permalink / raw)
  To: spdk


Done: https://github.com/spdk/spdk/issues/979

I will try to join the meeting, if only to listen in; I am going away in a few days, so it depends on timing.

Thanks,
-Geoff


* [SPDK] Re: Race condition in bdev_nvme
@ 2019-10-07 16:48 Harris, James R
  0 siblings, 0 replies; 2+ messages in thread
From: Harris, James R @ 2019-10-07 16:48 UTC (permalink / raw)
  To: spdk


Hi Geoff,

Thanks for the report.  Could you file an issue in the SPDK GitHub database for this sighting?

https://github.com/spdk/spdk/issues

We will have a bug scrub meeting tomorrow morning, so getting it into GitHub will ensure it gets some attention.

Note: you can get details on the bug scrub meeting at https://spdk.io/community/ if you are interested in attending.

-Jim


On 10/5/19, 11:49 PM, "geoff(a)hostfission.com" <geoff(a)hostfission.com> wrote:

    When using the raid0 bdev with two NVMe bdevs, a race condition exists. Under certain circumstances the `bio->iovpos` assertion in `bdev_nvme_queued_next_sge` triggers because multiple threads update this value, in every observed instance leaving it equal to `bio->iovcnt`. As I am new to this codebase I am not sure of the appropriate way to correct it; running the vhost application on a single core avoids the issue. The following configuration seems to trigger the crash about 80% of the time during a Windows 10 installation, at the partitioning phase:
    
    ```
    vhost -m 0x3 -S /var/tmp --huge-dir /mnt/hugepages1G -R -L bdev_raid -L vhost_blk &
    vhost_pid=$!
    sleep 1
    
    rpc.py bdev_raid_create -n WinRaid0 -z 512 -r 0 -b "NVMe1n1 NVMe2n1"
    rpc.py bdev_nvme_attach_controller -b NVMe1 -t PCIe -a ${DEVLIST[0]}
    rpc.py bdev_nvme_attach_controller -b NVMe2 -t PCIe -a ${DEVLIST[1]}
    rpc.py construct_vhost_blk_controller vhost.0 WinRaid0
    ```
    
    QEMU is configured with:
    
    ```
    -chardev socket,id=char0,path=/var/tmp/vhost.0
    -device  vhost-user-blk-pci,id=blk0,chardev=char0
    ```
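    
    For reference, here is a minimal sketch (not the exact SPDK source) of the pattern I believe is involved; the struct layout and the names ending in `_sketch` are simplified stand-ins, while `iovpos`, `iovcnt`, and `bdev_nvme_queued_next_sge` are the names from the assertion above.
    
    ```c
    /*
     * Minimal sketch, not the exact SPDK source: the per-I/O cursor is a
     * plain int with no locking, so two threads walking the same bio can
     * both pass the assert and then push iovpos up to iovcnt.
     */
    #include <assert.h>
    #include <stdint.h>
    #include <sys/uio.h>
    
    struct nvme_bdev_io_sketch {
        struct iovec *iovs; /* scatter-gather list for this I/O */
        int iovcnt;         /* number of entries in iovs */
        int iovpos;         /* next entry to hand to the driver */
    };
    
    /* Rough shape of bdev_nvme_queued_next_sge() as I understand it. */
    int
    queued_next_sge_sketch(struct nvme_bdev_io_sketch *bio,
                           void **address, uint32_t *length)
    {
        assert(bio->iovpos < bio->iovcnt);
    
        struct iovec *iov = &bio->iovs[bio->iovpos];
    
        *address = iov->iov_base;
        *length  = (uint32_t)iov->iov_len;
        bio->iovpos++; /* unsynchronized read-modify-write */
    
        return 0;
    }
    ```
    
    If only one reactor core ever touches a given bio, that increment is effectively serialized, which would explain why running vhost on a single core (e.g. -m 0x1) hides the problem.
    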
    _______________________________________________
    SPDK mailing list -- spdk(a)lists.01.org
    To unsubscribe send an email to spdk-leave(a)lists.01.org
    


