From mboxrd@z Thu Jan 1 00:00:00 1970
From: Jason Wang
Subject: Re: [PATCH net-next v8 7/7] net: vhost: make busyloop_intr more accurate
Date: Tue, 21 Aug 2018 08:33:00 +0800
Message-ID:
References: <1534680686-3108-1-git-send-email-xiangxia.m.yue@gmail.com>
 <1534680686-3108-8-git-send-email-xiangxia.m.yue@gmail.com>
Mime-Version: 1.0
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: 8bit
Cc: virtualization@lists.linux-foundation.org, netdev@vger.kernel.org
To: xiangxia.m.yue@gmail.com, mst@redhat.com, makita.toshiaki@lab.ntt.co.jp
Return-path:
Received: from mx3-rdu2.redhat.com ([66.187.233.73]:47814 "EHLO mx1.redhat.com"
 rhost-flags-OK-OK-OK-FAIL) by vger.kernel.org with ESMTP id S1726639AbeHUDu6
 (ORCPT ); Mon, 20 Aug 2018 23:50:58 -0400
In-Reply-To: <1534680686-3108-8-git-send-email-xiangxia.m.yue@gmail.com>
Content-Language: en-US
Sender: netdev-owner@vger.kernel.org
List-ID:

On 2018/08/19 20:11, xiangxia.m.yue@gmail.com wrote:
> From: Tonghao Zhang
>
> The patch uses vhost_has_work_pending() to check whether
> the specified handler is scheduled, because in most cases
> vhost_has_work() returns true even when only the other side's
> handler has been added to the worker list. Use
> vhost_has_work_pending() instead of vhost_has_work().
>
> Topology:
> [Host] -> linux bridge -> tap vhost-net -> [Guest]
>
> TCP_STREAM (netperf):
> * Without the patch: 38035.39 Mbps, 3.37 us mean latency
> * With the patch: 38409.44 Mbps, 3.34 us mean latency

The improvement is not as obvious as in the last version. Do you imply
there have been some recent changes to vhost that make it faster?
Thanks

> Signed-off-by: Tonghao Zhang
> ---
>  drivers/vhost/net.c | 9 ++++++---
>  1 file changed, 6 insertions(+), 3 deletions(-)
>
> diff --git a/drivers/vhost/net.c b/drivers/vhost/net.c
> index db63ae2..b6939ef 100644
> --- a/drivers/vhost/net.c
> +++ b/drivers/vhost/net.c
> @@ -487,10 +487,8 @@ static void vhost_net_busy_poll(struct vhost_net *net,
>  	endtime = busy_clock() + busyloop_timeout;
>
>  	while (vhost_can_busy_poll(endtime)) {
> -		if (vhost_has_work(&net->dev)) {
> -			*busyloop_intr = true;
> +		if (vhost_has_work(&net->dev))
>  			break;
> -		}
>
>  		if ((sock_has_rx_data(sock) &&
>  		     !vhost_vq_avail_empty(&net->dev, rvq)) ||
> @@ -513,6 +511,11 @@ static void vhost_net_busy_poll(struct vhost_net *net,
>  	    !vhost_has_work_pending(&net->dev, VHOST_NET_VQ_RX))
>  		vhost_net_enable_vq(net, rvq);
>
> +	if (vhost_has_work_pending(&net->dev,
> +				   poll_rx ?
> +				   VHOST_NET_VQ_RX: VHOST_NET_VQ_TX))
> +		*busyloop_intr = true;
> +
>  	mutex_unlock(&vq->mutex);
>  }
>