From: Rajender M <manir@vmware.com>
To: Vincent Guittot <vincent.guittot@linaro.org>
Cc: "linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
	"netdev@vger.kernel.org" <netdev@vger.kernel.org>,
	Peter Zijlstra <peterz@infradead.org>,
	"David S. Miller" <davem@davemloft.net>,
	Steven Rostedt <rostedt@goodmis.org>
Subject: Re: Performance impact in networking data path tests in Linux 5.5 Kernel
Date: Wed, 26 Feb 2020 11:45:34 +0000
Message-ID: <A6BD9BBB-B087-4A3C-BF3D-557626AC233A@vmware.com>
In-Reply-To: <CAKfTPtA9275amW4wAnCZpW3bVRv0HssgMJ_YgPzZDRZ3A1rbVg@mail.gmail.com>

Thanks for your response, Vincent.
Just curious to know whether there is any room for optimizing
away the additional CPU cost.


On 26/02/20, 3:18 PM, "Vincent Guittot" <vincent.guittot@linaro.org> wrote:

    Hi Rajender,
    
    On Tue, 25 Feb 2020 at 06:46, Rajender M <manir@vmware.com> wrote:
    >
    > As part of VMware's performance regression testing for upstream Linux
    > kernel releases, when comparing the Linux 5.5 kernel against the Linux
    > 5.4 kernel we noticed a 20% improvement in networking throughput at the
    > cost of a 30% increase in CPU utilization.
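    > (In efficiency terms, that works out to roughly 1.20 / 1.30 ≈ 0.92, i.e.
    > about 8% less throughput per unit of CPU than 5.4.)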
    
    Thanks for testing and sharing the results with us. It's always
    interesting to get feedback from various test cases.
    
    >
    > After performing a bisect between 5.4 and 5.5, we identified the root cause
    > of this behaviour as Vincent Guittot's scheduling change
    > 2ab4092fc82d ("sched/fair: Spread out tasks evenly when not overloaded").
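    >
    > For reference, the bisect followed the usual pattern (the per-iteration
    > build/boot/test steps are our internal harness, so only the git side is
    > shown here):
    >
    >   git bisect start
    >   git bisect bad v5.5     # shows the extra CPU cost
    >   git bisect good v5.4    # baseline
    >   # build, boot, run the TCP_STREAM tests, then mark each step with
    >   #   git bisect good   (or)   git bisect bad
    >   # ...repeat until it converges on 2ab4092fc82d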
    >
    > The impacted test cases are TCP_STREAM SEND & RECV, with both small
    > (8K socket buffer & 256B message) and large (64K socket buffer & 16K
    > message) packet sizes; a representative invocation is sketched below.
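    >
    > For illustration, with a netperf-style driver (our actual harness is
    > internal, so the exact invocation differs) the two configurations map
    > to something like:
    >
    >   # small packets: 8K socket buffers, 256B messages
    >   netperf -H <SUT> -t TCP_STREAM -l 60 -- -s 8K -S 8K -m 256
    >   # large packets: 64K socket buffers, 16K messages
    >   netperf -H <SUT> -t TCP_STREAM -l 60 -- -s 64K -S 64K -m 16K
    >
    > with TCP_MAERTS for the RECV direction.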
    >
    > We backed out Vincent's commit and reran our networking tests, and found
    > that performance was similar to the 5.4 kernel: the improvements in the
    > networking tests were gone.
    >
    > In our regular network performance testing, we use an Intel 10G NIC to
    > evaluate all Linux kernel releases. To confirm that the impact is also
    > seen with a higher-bandwidth NIC, we repeated the same test cases with an
    > Intel 40G NIC and reproduced the same behaviour: a 25% improvement in
    > throughput with 10% more CPU consumption.
    >
    > The overall results indicate that the scheduler change delivers much
    > better network throughput at the cost of additional CPU usage. This can
    > be seen as expected behaviour: the TCP streams are now spread evenly
    > across all the CPUs, which drives more network packets through and
    > consumes more CPU in the process.
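    >
    > A quick way to see the spreading from inside the guest, independent of
    > the ESX-side stats below, is per-CPU utilization sampled during the run:
    >
    >   mpstat -P ALL 1
    >
    > On 5.4 the load concentrates on one or two vCPUs; on 5.5 it shows up on
    > all of them.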
    >
    >
    > We have also confirmed this theory by parsing the ESX per-vCPU stats for
    > the 5.4 and 5.5 kernels in a 4-vCPU VM running 8 TCP streams, as shown
    > below:
    >
    > 5.4 kernel:
    >   "2132149": {"id": 2132149, "used": 94.37, "ready": 0.01, "cstp": 0.00, "name": "vmx-vcpu-0:rhel7x64-0",
    >   "2132151": {"id": 2132151, "used": 0.13, "ready": 0.00, "cstp": 0.00, "name": "vmx-vcpu-1:rhel7x64-0",
    >   "2132152": {"id": 2132152, "used": 9.07, "ready": 0.03, "cstp": 0.00, "name": "vmx-vcpu-2:rhel7x64-0",
    >   "2132153": {"id": 2132153, "used": 34.77, "ready": 0.01, "cstp": 0.00, "name": "vmx-vcpu-3:rhel7x64-0",
    >
    > 5.5 kernel:
    >   "2132041": {"id": 2132041, "used": 55.70, "ready": 0.01, "cstp": 0.00, "name": "vmx-vcpu-0:rhel7x64-0",
    >   "2132043": {"id": 2132043, "used": 47.53, "ready": 0.01, "cstp": 0.00, "name": "vmx-vcpu-1:rhel7x64-0",
    >   "2132044": {"id": 2132044, "used": 77.81, "ready": 0.00, "cstp": 0.00, "name": "vmx-vcpu-2:rhel7x64-0",
    >   "2132045": {"id": 2132045, "used": 57.11, "ready": 0.02, "cstp": 0.00, "name": "vmx-vcpu-3:rhel7x64-0",
    >
    > Note, "used %" in above stats for 5.5 kernel is evenly distributed across all vCPUs.
    >
    > On the whole, this change should be seen as a significant improvement for
    > most customers.
    >
    > Rajender M
    > Performance Engineering
    > VMware, Inc.
    >
    



Thread overview: 4+ messages
2020-02-25  5:46 Performance impact in networking data path tests in Linux 5.5 Kernel Rajender M
2020-02-26  9:48 ` Vincent Guittot
2020-02-26 11:45   ` Rajender M [this message]
2020-02-26 14:10     ` Vincent Guittot
