From mboxrd@z Thu Jan 1 00:00:00 1970
From: Jan Scheurich
Subject: [PATCH] vhost: Expose virtio interrupt requirement on rte_vhost API
Date: Sat, 23 Sep 2017 19:16:05 +0000
Mime-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: quoted-printable
Cc: "'dev@openvswitch.org'"
To: "'dev@dpdk.org'"
Received: from sessmg22.ericsson.net (sessmg22.ericsson.net [193.180.251.58])
	by dpdk.org (Postfix) with ESMTP id 594501AEEE
	for ; Sat, 23 Sep 2017 21:16:06 +0200 (CEST)
Content-Language: en-US
List-Id: DPDK patches and discussions
Errors-To: dev-bounces@dpdk.org
Sender: "dev"

Performance tests with the OVS DPDK datapath have shown that the tx throughput over a vhostuser port into a VM with an interrupt-based virtio driver is limited by the overhead incurred by virtio interrupts. The OVS PMD spends up to 30% of its cycles in system calls kicking the eventfd. The core running the vCPU is also heavily loaded with generating the virtio interrupts in KVM on the host and handling these interrupts in the virtio-net driver in the guest. This limits the throughput to about 500-700 Kpps with a single vCPU.

OVS is trying to address this issue by batching packets to a vhostuser port for some time to limit the virtio interrupt frequency. With a 50 us batching period we have measured an iperf3 throughput increase of 15% and a PMD utilization decrease from 45% to 30%.

On the other hand, guests using virtio PMDs do not profit from time-based tx batching. Instead they experience a 2-3% performance penalty and an average latency increase of 30-40 us. OVS therefore intends to apply time-based tx batching only for vhostuser tx queues that need to trigger virtio interrupts.

Today this information is hidden inside the rte_vhost library and not accessible to users of the API. This patch adds a function to the API to query it.
Signed-off-by: Jan Scheurich
---
 lib/librte_vhost/rte_vhost.h | 12 ++++++++++++
 lib/librte_vhost/vhost.c     | 19 +++++++++++++++++++
 2 files changed, 31 insertions(+)

diff --git a/lib/librte_vhost/rte_vhost.h b/lib/librte_vhost/rte_vhost.h
index 8c974eb..d62338b 100644
--- a/lib/librte_vhost/rte_vhost.h
+++ b/lib/librte_vhost/rte_vhost.h
@@ -444,6 +444,18 @@ int rte_vhost_get_vhost_vring(int vid, uint16_t vring_idx,
  */
 uint32_t rte_vhost_rx_queue_count(int vid, uint16_t qid);
 
+/**
+ * Does the virtio driver request interrupts for a vhost tx queue?
+ *
+ * @param vid
+ *  vhost device ID
+ * @param qid
+ *  virtio queue index in mq case
+ * @return
+ *  1 if true, 0 if false
+ */
+int rte_vhost_tx_interrupt_requested(int vid, uint16_t qid);
+
 #ifdef __cplusplus
 }
 #endif
diff --git a/lib/librte_vhost/vhost.c b/lib/librte_vhost/vhost.c
index 0b6aa1c..bd1ebf9 100644
--- a/lib/librte_vhost/vhost.c
+++ b/lib/librte_vhost/vhost.c
@@ -503,3 +503,22 @@ struct virtio_net *
 	return *((volatile uint16_t *)&vq->avail->idx) - vq->last_avail_idx;
 }
+
+int rte_vhost_tx_interrupt_requested(int vid, uint16_t qid)
+{
+	struct virtio_net *dev;
+	struct vhost_virtqueue *vq;
+
+	dev = get_device(vid);
+	if (dev == NULL)
+		return 0;
+
+	vq = dev->virtqueue[qid];
+	if (vq == NULL)
+		return 0;
+
+	if (unlikely(vq->enabled == 0 || vq->avail == NULL))
+		return 0;
+
+	return !(vq->avail->flags & VRING_AVAIL_F_NO_INTERRUPT);
+}