From: Jianbo Liu
Subject: Re: [PATCH v3 0/5] vhost: optimize enqueue
Date: Wed, 21 Sep 2016 20:54:11 +0800
To: "Wang, Zhihong"
Cc: Maxime Coquelin, dev@dpdk.org, yuanhan.liu@linux.intel.com
List-Id: patches and discussions about DPDK

On 21 September 2016 at 17:27, Wang, Zhihong wrote:
>
>> -----Original Message-----
>> From: Jianbo Liu [mailto:jianbo.liu@linaro.org]
>> Sent: Wednesday, September 21, 2016 4:50 PM
>> To: Maxime Coquelin
>> Cc: Wang, Zhihong; dev@dpdk.org; yuanhan.liu@linux.intel.com
>> Subject: Re: [dpdk-dev] [PATCH v3 0/5] vhost: optimize enqueue
>>
>> Hi Maxime,
>>
>> On 22 August 2016 at 16:11, Maxime Coquelin wrote:
>> > Hi Zhihong,
>> >
>> > On 08/19/2016 07:43 AM, Zhihong Wang wrote:
>> >>
>> >> This patch set optimizes the vhost enqueue function.
>> >>
>> ...
>> >
>> > My setup consists of one host running a guest.
>> > The guest generates as many 64-byte packets as possible using
>>
>> Have you tested with other packet sizes?
>> My testing shows that performance drops when the packet size is
>> larger than 256 bytes.
>
> Hi Jianbo,
>
> Thanks for reporting this.
>
> 1. Are you running the vector frontend with mrg_rxbuf=off?
>
> 2. Could you please specify which CPU you're running on? Is it Haswell
>    or Ivy Bridge?
>
> 3. What percentage drop are you seeing?
>
> This is expected, because I've already found the root cause and the
> way to optimize it, but since it missed the v0 deadline and requires
> changes in eal/memcpy, I'll postpone it to the next release.
>
> After the upcoming optimization, performance for packets larger than
> 256 bytes will be improved, and the new code will be much faster than
> the current code.

Sorry, I tested on an ARM server, but I wonder if there is the same
issue on x86 platforms.

>> > pktgen-dpdk. The host forwards received packets back to the guest
>> > using testpmd on the vhost PMD interface. The guest's vCPUs are
>> > pinned to physical CPUs.
>> >
>> > I tested with and without your v1 patch, and with and without the
>> > rx-mergeable feature turned on.
>> > Results are the average of 8 runs of 60 seconds:
>> >
>> > Rx-Mergeable ON : 7.72 Mpps
>> > Rx-Mergeable ON + "vhost: optimize enqueue" v1: 9.19 Mpps
>> > Rx-Mergeable OFF: 10.52 Mpps
>> > Rx-Mergeable OFF + "vhost: optimize enqueue" v1: 10.60 Mpps
>> >
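For context on the Mpps figures quoted above, here is a quick back-of-the-envelope check (my own illustration, not from the thread): the theoretical maximum packet rate for 64-byte frames, assuming a 10 GbE link (the thread does not state the NIC speed) and the standard 20 bytes of per-frame wire overhead.

```python
# Hypothetical sanity check (assumes a 10 GbE link; the thread does not
# state the NIC speed): theoretical packets/sec for 64-byte frames.
LINK_BPS = 10e9      # assumed link speed, 10 Gbit/s
FRAME_BYTES = 64     # minimum Ethernet frame size (incl. FCS)
WIRE_OVERHEAD = 20   # preamble + SFD (8) + inter-frame gap (12)

mpps = LINK_BPS / ((FRAME_BYTES + WIRE_OVERHEAD) * 8) / 1e6
print(f"{mpps:.2f} Mpps")  # ~14.88 Mpps theoretical maximum
```

If the setup is indeed 10 GbE, the 7.72-10.60 Mpps results above correspond to roughly 52-71% of line rate for 64-byte frames.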