From: Jianbo Liu
Subject: Re: [PATCH v3 0/5] vhost: optimize enqueue
Date: Wed, 21 Sep 2016 16:50:01 +0800
To: Maxime Coquelin
Cc: Zhihong Wang, dev@dpdk.org, yuanhan.liu@linux.intel.com

Hi Maxime,

On 22 August 2016 at 16:11, Maxime Coquelin wrote:
> Hi Zhihong,
>
> On 08/19/2016 07:43 AM, Zhihong Wang wrote:
>>
>> This patch set optimizes the vhost enqueue function.
>> ...
>
> My setup consists of one host running a guest.
> The guest generates as many 64-byte packets as possible using
> pktgen-dpdk.

Have you tested with other packet sizes? My testing shows that the
performance drops once the packet size goes above 256 bytes (sketches
of the kind of host-side setup and size sweep I have in mind follow at
the end of this mail).

> The host forwards received packets back to the guest using testpmd
> on the vhost PMD interface. The guest's vCPUs are pinned to physical
> CPUs.
>
> I tested it with and without your v1 patch, with and without the
> rx-mergeable feature turned on.
> Results are the average of 8 runs of 60 seconds:
>
> Rx-Mergeable ON : 7.72 Mpps
> Rx-Mergeable ON + "vhost: optimize enqueue" v1: 9.19 Mpps
> Rx-Mergeable OFF: 10.52 Mpps
> Rx-Mergeable OFF + "vhost: optimize enqueue" v1: 10.60 Mpps
>
> Regards,
> Maxime
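
For reference, a minimal sketch of the host-side part of such a setup
(testpmd driving the vhost PMD in io forwarding mode). The core mask,
memory size, socket path and device name below are assumptions for
illustration only, not values taken from this thread:

    # Host: testpmd attached to a vhost-user socket, forwarding
    # received packets back to the guest in io mode.
    # Cores, memory and socket path are illustrative assumptions.
    ./testpmd -c 0x7 -n 4 --socket-mem 1024 \
        --vdev 'eth_vhost0,iface=/tmp/vhost-user0,queues=1' \
        -- -i --nb-cores=2
    testpmd> set fwd io
    testpmd> start

    # Guest side (QEMU): rx-mergeable is typically toggled with the
    # mrg_rxbuf property of the virtio-net device, e.g.
    #   -device virtio-net-pci,netdev=net0,mrg_rxbuf=off

With mrg_rxbuf=off the VIRTIO_NET_F_MRG_RXBUF feature is not
negotiated, which corresponds to the "Rx-Mergeable OFF" rows above.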
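
And a rough sketch of the guest-side packet-size sweep I was referring
to, using pktgen-dpdk runtime commands. The port number, the prompt and
the size list are assumptions; the exact command syntax depends on the
pktgen-dpdk version in use:

    # Guest: sweep frame sizes in pktgen-dpdk and record Mpps for each
    # size, to see where throughput starts to drop (above 256 bytes in
    # my tests). Port 0 and the sizes are illustrative assumptions.
    Pktgen> set 0 size 64
    Pktgen> start 0
    # ... record the rate, then stop and repeat with a larger size:
    Pktgen> stop 0
    Pktgen> set 0 size 256
    Pktgen> start 0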