From: Stephen Hemminger
Subject: Re: [PATCH] vhost: Expose virtio interrupt requirement on rte_vhost API
Date: Sun, 24 Sep 2017 07:02:25 -0700
To: Jan Scheurich
Cc: dev@dpdk.org, dev@openvswitch.org

I heard FD.io has a faster vhost driver. Has anyone investigated?

On Sep 23, 2017 8:16 PM, "Jan Scheurich" wrote:

> Performance tests with the OVS DPDK datapath have shown that the tx
> throughput over a vhostuser port into a VM with an interrupt-based
> virtio driver is limited by the overhead incurred by virtio interrupts.
> The OVS PMD spends up to 30% of its cycles in system calls kicking the
> eventfd. The core running the vCPU is also heavily loaded with
> generating the virtio interrupts in KVM on the host and handling these
> interrupts in the virtio-net driver in the guest. This limits the
> throughput to about 500-700 Kpps with a single vCPU.
>
> OVS is trying to address this issue by batching packets to a vhostuser
> port for some time to limit the virtio interrupt frequency. With a
> 50 us batching period we have measured an iperf3 throughput increase of
> 15% and a PMD utilization decrease from 45% to 30%.
>
> On the other hand, guests using virtio PMDs do not benefit from
> time-based tx batching. Instead they experience a 2-3% performance
> penalty and an average latency increase of 30-40 us. OVS therefore
> intends to apply time-based tx batching only for vhostuser tx queues
> that need to trigger virtio interrupts.
>
> Today this information is hidden inside the rte_vhost library and not
> accessible to users of the API. This patch adds a function to the API
> to query it.
>
> Signed-off-by: Jan Scheurich
>
> ---
>
>  lib/librte_vhost/rte_vhost.h | 12 ++++++++++++
>  lib/librte_vhost/vhost.c     | 19 +++++++++++++++++++
>  2 files changed, 31 insertions(+)
>
> diff --git a/lib/librte_vhost/rte_vhost.h b/lib/librte_vhost/rte_vhost.h
> index 8c974eb..d62338b 100644
> --- a/lib/librte_vhost/rte_vhost.h
> +++ b/lib/librte_vhost/rte_vhost.h
> @@ -444,6 +444,18 @@ int rte_vhost_get_vhost_vring(int vid, uint16_t vring_idx,
>   */
>  uint32_t rte_vhost_rx_queue_count(int vid, uint16_t qid);
>
> +/**
> + * Does the virtio driver request interrupts for a vhost tx queue?
> + *
> + * @param vid
> + *  vhost device ID
> + * @param qid
> + *  virtio queue index (for the multi-queue case)
> + * @return
> + *  1 if true, 0 if false
> + */
> +int rte_vhost_tx_interrupt_requested(int vid, uint16_t qid);
> +
>  #ifdef __cplusplus
>  }
>  #endif
> diff --git a/lib/librte_vhost/vhost.c b/lib/librte_vhost/vhost.c
> index 0b6aa1c..bd1ebf9 100644
> --- a/lib/librte_vhost/vhost.c
> +++ b/lib/librte_vhost/vhost.c
> @@ -503,3 +503,22 @@ struct virtio_net *
>
>  	return *((volatile uint16_t *)&vq->avail->idx) - vq->last_avail_idx;
>  }
> +
> +int rte_vhost_tx_interrupt_requested(int vid, uint16_t qid)
> +{
> +	struct virtio_net *dev;
> +	struct vhost_virtqueue *vq;
> +
> +	dev = get_device(vid);
> +	if (dev == NULL)
> +		return 0;
> +
> +	vq = dev->virtqueue[qid];
> +	if (vq == NULL)
> +		return 0;
> +
> +	if (unlikely(vq->enabled == 0 || vq->avail == NULL))
> +		return 0;
> +
> +	return !(vq->avail->flags & VRING_AVAIL_F_NO_INTERRUPT);
> +}
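
For illustration, here is a minimal sketch of how a datapath such as OVS
might consume the proposed call to gate time-based tx batching per queue.
The txq_batch structure, the 32-packet cap, and the flush policy are
assumptions made up for this example, not existing OVS code; only
rte_vhost_tx_interrupt_requested() (from the patch) and the standard
rte_vhost/rte_mbuf/rte_cycles calls are real DPDK API.

    /*
     * Illustrative sketch only: batch tx toward interrupt-driven guests,
     * send immediately to polling (virtio PMD) guests.
     */
    #include <stdint.h>

    #include <rte_cycles.h>
    #include <rte_mbuf.h>
    #include <rte_vhost.h>

    #define TX_BATCH_PERIOD_US 50   /* batching window measured above */
    #define TX_BATCH_MAX       32   /* arbitrary cap for this sketch */

    struct txq_batch {
    	struct rte_mbuf *pkts[TX_BATCH_MAX];
    	uint16_t count;
    	uint64_t deadline;       /* TSC cycle count at which to flush */
    };

    static void
    txq_flush(int vid, uint16_t qid, struct txq_batch *b)
    {
    	uint16_t sent = rte_vhost_enqueue_burst(vid, qid, b->pkts, b->count);

    	while (sent < b->count)  /* drop leftovers; real code would retry */
    		rte_pktmbuf_free(b->pkts[sent++]);
    	b->count = 0;
    }

    static void
    txq_send(int vid, uint16_t qid, struct txq_batch *b, struct rte_mbuf *m)
    {
    	if (!rte_vhost_tx_interrupt_requested(vid, qid)) {
    		/* Polling guest (virtio PMD): batching only adds
    		 * latency, so enqueue immediately. */
    		if (rte_vhost_enqueue_burst(vid, qid, &m, 1) == 0)
    			rte_pktmbuf_free(m);
    		return;
    	}

    	/* Interrupt-driven guest: accumulate and flush at most once per
    	 * TX_BATCH_PERIOD_US to amortize the eventfd kick. */
    	if (b->count == 0)
    		b->deadline = rte_get_timer_cycles() +
    			TX_BATCH_PERIOD_US * rte_get_timer_hz() / 1000000;
    	b->pkts[b->count++] = m;
    	if (b->count == TX_BATCH_MAX ||
    	    rte_get_timer_cycles() >= b->deadline)
    		txq_flush(vid, qid, b);
    }

A real datapath would also check the deadline from its polling loop so a
half-full batch is not stranded when traffic stops.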