From: Jan Scheurich
To: dev@dpdk.org
Subject: [PATCH v2] vhost: Expose virtio interrupt need on rte_vhost API
Date: Sat, 23 Sep 2017 20:23:09 +0000

Performance tests with the OVS DPDK datapath have shown
that the tx throughput over a vhostuser port into a VM with
an interrupt-based virtio driver is limited by the overhead
incurred by virtio interrupts. The OVS PMD spends up to 30%
of its cycles in system calls kicking the eventfd. The core
running the vCPU is also heavily loaded with generating the
virtio interrupts in KVM on the host and handling these
interrupts in the virtio-net driver in the guest. This
limits the throughput to about 500-700 Kpps with a single
vCPU.

OVS is trying to address this issue by batching packets to
a vhostuser port for some time to limit the virtio interrupt
frequency. With a 50 us batching period we have measured an
iperf3 throughput increase of 15% and a PMD utilization
decrease from 45% to 30%.

On the other hand, guests using virtio PMDs do not benefit
from time-based tx batching. Instead they experience a 2-3%
performance penalty and an average latency increase of
30-40 us. OVS therefore intends to apply time-based tx
batching only for vhostuser tx queues that need to trigger
virtio interrupts.

Today this information is hidden inside the rte_vhost
library and not accessible to users of the API. This patch
adds a function to the API to query it.

Signed-off-by: Jan Scheurich
---
v1 -> v2:
- Fixed too long commit lines
- Fixed white-space errors and warnings

 lib/librte_vhost/rte_vhost.h | 12 ++++++++++++
 lib/librte_vhost/vhost.c     | 19 +++++++++++++++++++
 2 files changed, 31 insertions(+)

diff --git a/lib/librte_vhost/rte_vhost.h b/lib/librte_vhost/rte_vhost.h
index 8c974eb..d62338b 100644
--- a/lib/librte_vhost/rte_vhost.h
+++ b/lib/librte_vhost/rte_vhost.h
@@ -444,6 +444,18 @@ int rte_vhost_get_vhost_vring(int vid, uint16_t vring_idx,
  */
 uint32_t rte_vhost_rx_queue_count(int vid, uint16_t qid);
 
+/**
+ * Does the virtio driver request interrupts for a vhost tx queue?
+ *
+ * @param vid
+ *  vhost device ID
+ * @param qid
+ *  virtio queue index in mq case
+ * @return
+ *  1 if true, 0 if false
+ */
+int rte_vhost_tx_interrupt_requested(int vid, uint16_t qid);
+
 #ifdef __cplusplus
 }
 #endif
diff --git a/lib/librte_vhost/vhost.c b/lib/librte_vhost/vhost.c
index 0b6aa1c..c6e636e 100644
--- a/lib/librte_vhost/vhost.c
+++ b/lib/librte_vhost/vhost.c
@@ -503,3 +503,22 @@ struct virtio_net *
 
 	return *((volatile uint16_t *)&vq->avail->idx) - vq->last_avail_idx;
 }
+
+int rte_vhost_tx_interrupt_requested(int vid, uint16_t qid)
+{
+	struct virtio_net *dev;
+	struct vhost_virtqueue *vq;
+
+	dev = get_device(vid);
+	if (dev == NULL)
+		return 0;
+
+	vq = dev->virtqueue[qid];
+	if (vq == NULL)
+		return 0;
+
+	if (unlikely(vq->enabled == 0 || vq->avail == NULL))
+		return 0;
+
+	return !(vq->avail->flags & VRING_AVAIL_F_NO_INTERRUPT);
+}
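
For illustration, intended usage in an application data path could
look roughly as follows. This is a hypothetical sketch, not part of
the patch and not OVS code: the struct and helper names are made up,
and only rte_vhost_tx_interrupt_requested() comes from this patch.

#include <stdbool.h>
#include <stdint.h>
#include <rte_vhost.h>

/* Hypothetical per-queue state kept by the application. */
struct app_vhost_txq {
	int vid;		/* vhost device ID */
	uint16_t qid;		/* virtio queue index, as passed to
				 * rte_vhost_enqueue_burst() */
	bool batch_tx;		/* apply time-based tx batching? */
};

/*
 * Enable time-based tx batching only for queues whose guest driver
 * requests interrupts (e.g. kernel virtio-net). Guests running a
 * virtio PMD suppress interrupts and are better served by immediate
 * transmission, per the measurements in the commit message.
 */
static void
app_vhost_txq_update_batching(struct app_vhost_txq *txq)
{
	txq->batch_tx =
		rte_vhost_tx_interrupt_requested(txq->vid, txq->qid) != 0;
}

Since the guest driver can toggle VRING_AVAIL_F_NO_INTERRUPT at
runtime, the application would presumably re-evaluate this per burst
or at some periodic interval rather than once at queue setup.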