* [dpdk-dev] [Bug 523] vhost iotlb cache incorrectly assumes to be single consumer
@ 2020-08-10 13:56 bugzilla
From: bugzilla @ 2020-08-10 13:56 UTC
  To: dev

https://bugs.dpdk.org/show_bug.cgi?id=523

            Bug ID: 523
           Summary: vhost iotlb cache incorrectly assumes to be single
                    consumer
           Product: DPDK
           Version: unspecified
          Hardware: All
                OS: All
            Status: UNCONFIRMED
          Severity: normal
          Priority: Normal
         Component: vhost/virtio
          Assignee: dev@dpdk.org
          Reporter: eupm90@gmail.com
  Target Milestone: ---

Using testpmd as a vhost-user with iommu:

/home/dpdk/build/app/dpdk-testpmd -l 1,3 --vdev
net_vhost0,iface=/tmp/vhost-user1,queues=1,iommu-support=1 -- --auto-start
--stats-period 5 --forward-mode=txonly

And qemu with packed virtqueue:


    <interface type='vhostuser'>
      <mac address='88:67:11:5f:dd:02'/>
      <source type='unix' path='/tmp/vhost-user1' mode='client'/>
      <model type='virtio'/>
      <driver name='vhost' rx_queue_size='1024' iommu='on' ats='on'/>
      <address type='pci' domain='0x0000' bus='0x07' slot='0x00'
function='0x0'/>
    </interface>
...

  <qemu:commandline>
    <qemu:arg value='-set'/>
    <qemu:arg value='device.net1.packed=on'/>
  </qemu:commandline>

--

It is possible for different threads to consume the iotlb entries of the
mempool concurrently. Thread sanitizer output:

WARNING: ThreadSanitizer: data race (pid=76927)
  Write of size 8 at 0x00017ffd5628 by thread T5:
    #0 vhost_user_iotlb_cache_insert ../lib/librte_vhost/iotlb.c:181
(dpdk-testpmd+0x769343)
    #1 vhost_user_iotlb_msg ../lib/librte_vhost/vhost_user.c:2380
(dpdk-testpmd+0x78e4bf)
    #2 vhost_user_msg_handler ../lib/librte_vhost/vhost_user.c:2848
(dpdk-testpmd+0x78fcf8)
    #3 vhost_user_read_cb ../lib/librte_vhost/socket.c:311
(dpdk-testpmd+0x770162)
    #4 fdset_event_dispatch ../lib/librte_vhost/fd_man.c:286
(dpdk-testpmd+0x7591c2)
    #5 ctrl_thread_init ../lib/librte_eal/common/eal_common_thread.c:193
(dpdk-testpmd+0xa2890b)
    #6 <null> <null> (libtsan.so.0+0x2a68d)

  Previous read of size 8 at 0x00017ffd5628 by thread T3:
    #0 vhost_user_iotlb_cache_find ../lib/librte_vhost/iotlb.c:252
(dpdk-testpmd+0x76ee96)
    #1 __vhost_iova_to_vva ../lib/librte_vhost/vhost.c:42
(dpdk-testpmd+0x77488c)
    #2 vhost_iova_to_vva ../lib/librte_vhost/vhost.h:753
(dpdk-testpmd+0x7abeb3)
    #3 map_one_desc ../lib/librte_vhost/virtio_net.c:497
(dpdk-testpmd+0x7abeb3)
    #4 fill_vec_buf_packed ../lib/librte_vhost/virtio_net.c:751
(dpdk-testpmd+0x7abeb3)
    #5 vhost_enqueue_single_packed ../lib/librte_vhost/virtio_net.c:1170
(dpdk-testpmd+0x7abeb3)
    #6 virtio_dev_rx_single_packed ../lib/librte_vhost/virtio_net.c:1346
(dpdk-testpmd+0x7abeb3)
    #7 virtio_dev_rx_packed ../lib/librte_vhost/virtio_net.c:1384
(dpdk-testpmd+0x7abeb3)
    #8 virtio_dev_rx ../lib/librte_vhost/virtio_net.c:1435
(dpdk-testpmd+0x7b0654)
    #9 rte_vhost_enqueue_burst ../lib/librte_vhost/virtio_net.c:1465
(dpdk-testpmd+0x7b0654)
    #10 eth_vhost_tx ../drivers/net/vhost/rte_eth_vhost.c:470
(dpdk-testpmd+0x1ddfbd8)
    #11 rte_eth_tx_burst ../lib/librte_ethdev/rte_ethdev.h:4800
(dpdk-testpmd+0x505fdb)
    #12 pkt_burst_transmit ../app/test-pmd/txonly.c:365 (dpdk-testpmd+0x5106ad)
    #13 run_pkt_fwd_on_lcore ../app/test-pmd/testpmd.c:2080
(dpdk-testpmd+0x4f8951)
    #14 start_pkt_forward_on_core ../app/test-pmd/testpmd.c:2106
(dpdk-testpmd+0x4f89d7)
    #15 eal_thread_loop ../lib/librte_eal/linux/eal_thread.c:127
(dpdk-testpmd+0xa5b20a)
    #16 <null> <null> (libtsan.so.0+0x2a68d)

  Location is global '<null>' at 0x000000000000 (rtemap_0+0x00003ffd5628)

  Thread T5 'vhost-events' (tid=76933, running) created by main thread at:
    #0 pthread_create <null> (libtsan.so.0+0x2cd42)
    #1 rte_ctrl_thread_create ../lib/librte_eal/common/eal_common_thread.c:216
(dpdk-testpmd+0xa289e7)
    #2 rte_vhost_driver_start ../lib/librte_vhost/socket.c:1190
(dpdk-testpmd+0x7728ef)
    #3 vhost_driver_setup ../drivers/net/vhost/rte_eth_vhost.c:1028
(dpdk-testpmd+0x1de233d)
    #4 eth_dev_configure ../drivers/net/vhost/rte_eth_vhost.c:1126
(dpdk-testpmd+0x1de29cc)
    #5 rte_eth_dev_configure ../lib/librte_ethdev/rte_ethdev.c:1439
(dpdk-testpmd+0x991ce2)
    #6 start_port ../app/test-pmd/testpmd.c:2450 (dpdk-testpmd+0x4f9b45)
    #7 main ../app/test-pmd/testpmd.c:3777 (dpdk-testpmd+0x4fe1ac)

  Thread T3 'lcore-slave-3' (tid=76931, running) created by main thread at:
    #0 pthread_create <null> (libtsan.so.0+0x2cd42)
    #1 rte_eal_init ../lib/librte_eal/linux/eal.c:1244 (dpdk-testpmd+0xa46e2b)
    #2 main ../app/test-pmd/testpmd.c:3673 (dpdk-testpmd+0x4fdd75)


--

Or:
WARNING: ThreadSanitizer: data race (pid=76927)
  Write of size 1 at 0x00017ffd00f8 by thread T5:
    #0 vhost_user_iotlb_cache_insert ../lib/librte_vhost/iotlb.c:182
(dpdk-testpmd+0x769370)
    #1 vhost_user_iotlb_msg ../lib/librte_vhost/vhost_user.c:2380
(dpdk-testpmd+0x78e4bf)
    #2 vhost_user_msg_handler ../lib/librte_vhost/vhost_user.c:2848
(dpdk-testpmd+0x78fcf8)
    #3 vhost_user_read_cb ../lib/librte_vhost/socket.c:311
(dpdk-testpmd+0x770162)
    #4 fdset_event_dispatch ../lib/librte_vhost/fd_man.c:286
(dpdk-testpmd+0x7591c2)
    #5 ctrl_thread_init ../lib/librte_eal/common/eal_common_thread.c:193
(dpdk-testpmd+0xa2890b)
    #6 <null> <null> (libtsan.so.0+0x2a68d)

  Previous write of size 1 at 0x00017ffd00f8 by thread T3:
    #0 vhost_user_iotlb_pending_insert ../lib/librte_vhost/iotlb.c:86
(dpdk-testpmd+0x75eb0c)
    #1 __vhost_iova_to_vva ../lib/librte_vhost/vhost.c:58
(dpdk-testpmd+0x774926)
    #2 vhost_iova_to_vva ../lib/librte_vhost/vhost.h:753
(dpdk-testpmd+0x7a79d1)
    #3 virtio_dev_rx_batch_packed ../lib/librte_vhost/virtio_net.c:1295
(dpdk-testpmd+0x7a79d1)
    #4 virtio_dev_rx_packed ../lib/librte_vhost/virtio_net.c:1376
(dpdk-testpmd+0x7a79d1)
    #5 virtio_dev_rx ../lib/librte_vhost/virtio_net.c:1435
(dpdk-testpmd+0x7b0654)
    #6 rte_vhost_enqueue_burst ../lib/librte_vhost/virtio_net.c:1465
(dpdk-testpmd+0x7b0654)
    #7 eth_vhost_tx ../drivers/net/vhost/rte_eth_vhost.c:470
(dpdk-testpmd+0x1ddfbd8)
    #8 rte_eth_tx_burst ../lib/librte_ethdev/rte_ethdev.h:4800
(dpdk-testpmd+0x505fdb)
    #9 pkt_burst_transmit ../app/test-pmd/txonly.c:365 (dpdk-testpmd+0x5106ad)
    #10 run_pkt_fwd_on_lcore ../app/test-pmd/testpmd.c:2080
(dpdk-testpmd+0x4f8951)
    #11 start_pkt_forward_on_core ../app/test-pmd/testpmd.c:2106
(dpdk-testpmd+0x4f89d7)
    #12 eal_thread_loop ../lib/librte_eal/linux/eal_thread.c:127
(dpdk-testpmd+0xa5b20a)
    #13 <null> <null> (libtsan.so.0+0x2a68d)

--

As a consequence, the two threads can modify the same entry of the mempool.
Usually, this causes a loop in iotlb_pending_entries, making it impossible for
testpmd to advance.

-- 
You are receiving this mail because:
You are the assignee for the bug.
