From mboxrd@z Thu Jan 1 00:00:00 1970
From: Kevin Traynor
Subject: Re: [PATCH 4/5] net/vhost: remove limit of vhost TX burst size
Date: Fri, 24 Feb 2017 11:08:56 +0000
Message-ID: <22266556-4d0e-5db1-6a90-eebcecbe5283@redhat.com>
References: <1487926101-4637-1-git-send-email-zhiyong.yang@intel.com> <1487926101-4637-5-git-send-email-zhiyong.yang@intel.com>
Mime-Version: 1.0
Content-Type: text/plain; charset=windows-1252
Content-Transfer-Encoding: 7bit
Cc: yuanhan.liu@linux.intel.com, maxime.coquelin@redhat.com
To: Zhiyong Yang, dev@dpdk.org
In-Reply-To: <1487926101-4637-5-git-send-email-zhiyong.yang@intel.com>
List-Id: DPDK patches and discussions

On 02/24/2017 08:48 AM, Zhiyong Yang wrote:
> vhost removes limit of TX burst size(32 pkts) and supports to make
> an best effort to transmit pkts.
>
> Cc: yuanhan.liu@linux.intel.com
> Cc: maxime.coquelin@redhat.com
>
> Signed-off-by: Zhiyong Yang
> ---
>  drivers/net/vhost/rte_eth_vhost.c | 24 ++++++++++++++++++++++--
>  1 file changed, 22 insertions(+), 2 deletions(-)
>
> diff --git a/drivers/net/vhost/rte_eth_vhost.c b/drivers/net/vhost/rte_eth_vhost.c
> index e98cffd..1e1fa34 100644
> --- a/drivers/net/vhost/rte_eth_vhost.c
> +++ b/drivers/net/vhost/rte_eth_vhost.c
> @@ -52,6 +52,7 @@
>  #define ETH_VHOST_QUEUES_ARG		"queues"
>  #define ETH_VHOST_CLIENT_ARG		"client"
>  #define ETH_VHOST_DEQUEUE_ZERO_COPY	"dequeue-zero-copy"
> +#define VHOST_MAX_PKT_BURST 32
>
>  static const char *valid_arguments[] = {
>  	ETH_VHOST_IFACE_ARG,
> @@ -434,8 +435,27 @@ eth_vhost_tx(void *q, struct rte_mbuf **bufs, uint16_t nb_bufs)
>  		goto out;
>
>  	/* Enqueue packets to guest RX queue */
> -	nb_tx = rte_vhost_enqueue_burst(r->vid,
> -			r->virtqueue_id, bufs, nb_bufs);
> +	if (likely(nb_bufs <= VHOST_MAX_PKT_BURST))
> +		nb_tx = rte_vhost_enqueue_burst(r->vid, r->virtqueue_id,
> +						bufs, nb_bufs);
> +	else {
> +		uint16_t nb_send = nb_bufs;
> +
> +		while (nb_send) {
> +			uint16_t nb_pkts;
> +			uint16_t num = (uint16_t)RTE_MIN(nb_send,
> +						VHOST_MAX_PKT_BURST);
> +
> +			nb_pkts = rte_vhost_enqueue_burst(r->vid,
> +						r->virtqueue_id,
> +						&bufs[nb_tx], num);
> +
> +			nb_tx += nb_pkts;
> +			nb_send -= nb_pkts;
> +			if (nb_pkts < num)
> +				break;
> +		}

In the code above:
- if the VM does not service the queue, this will spin forever
- if the queue is almost full, it will be very slow

In the example of OVS, other ports being serviced by the same core would
stop being serviced or drop a lot of packets.

If you want to remove the 32 pkt limitation, the rte_vhost_enqueue_burst()
API limitation needs to be changed first. It doesn't make sense to put
retry code in the eth_vhost_tx() function; the application can retry if it
wants to.

> +	}
>
>  	r->stats.pkts += nb_tx;
>  	r->stats.missed_pkts += nb_bufs - nb_tx;
>
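
Just to illustrate the kind of thing I mean by leaving the retry to the
application, here is a rough, untested sketch. app_send_burst(), MAX_RETRIES
and the port/queue ids are made-up names for illustration only, not anything
in the tree:

#include <rte_ethdev.h>
#include <rte_mbuf.h>

#define MAX_RETRIES 8	/* made-up bound so a stalled guest can't hold the core */

static void
app_send_burst(uint8_t port_id, uint16_t queue_id,
	       struct rte_mbuf **pkts, uint16_t nb_pkts)
{
	uint16_t sent = 0;
	uint16_t i;
	int retries = 0;

	while (sent < nb_pkts && retries < MAX_RETRIES) {
		/* mbufs accepted here are owned (and later freed) by the PMD */
		uint16_t n = rte_eth_tx_burst(port_id, queue_id,
					      &pkts[sent], nb_pkts - sent);
		if (n == 0)
			retries++;	/* queue full or guest not servicing it */
		sent += n;
	}

	/* drop what still couldn't be sent instead of spinning forever */
	for (i = sent; i < nb_pkts; i++)
		rte_pktmbuf_free(pkts[i]);
}

The bound keeps one stalled vhost port from starving the other ports polled
by the same core, and the drop/retry policy stays in the application where
it can make that call.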