From: Eliezer Tamir <eliezer.tamir@linux.intel.com>
To: Rick Jones <rick.jones2@hp.com>
Cc: Eliezer Tamir <eliezer.tamir@linux.jf.intel.com>,
linux-kernel@vger.kernel.org, netdev@vger.kernel.org,
Dave Miller <davem@davemloft.net>,
Jesse Brandeburg <jesse.brandeburg@intel.com>,
e1000-devel@lists.sourceforge.net,
Willem de Bruijn <willemb@google.com>,
Andi Kleen <andi@firstfloor.org>, HPA <hpa@zytor.com>,
Eliezer Tamir <eliezer@tamir.org.il>
Subject: Re: [RFC PATCH 0/5] net: low latency Ethernet device polling
Date: Wed, 27 Feb 2013 22:40:03 +0200
Message-ID: <512E6F23.3090003@linux.intel.com>
In-Reply-To: <512E654A.2010209@hp.com>
On 27/02/2013 21:58, Rick Jones wrote:
> On 02/27/2013 09:55 AM, Eliezer Tamir wrote:
>>
>> Performance numbers:
>>
>> Kernel   Config     C3/6  rx-usecs  TCP  UDP
>> 3.8rc6   typical    off   adaptive  37k  40k
>> 3.8rc6   typical    off   0*        50k  56k
>> 3.8rc6   optimized  off   0*        61k  67k
>> 3.8rc6   optimized  on    adaptive  26k  29k
>> patched  typical    off   adaptive  70k  78k
>> patched  optimized  off   adaptive  79k  88k
>> patched  optimized  off   100       84k  92k
>> patched  optimized  on    adaptive  83k  91k
>>
>> * rx-usecs=0 is usually not useful in a production environment.
>
> I would think that latency-sensitive folks would be using rx-usecs=0 in
> production - at least if the NIC in use didn't have low enough latency
> with its default interrupt coalescing/avoidance heuristics.
That only works well if there is no bulk traffic at all on the same
port as the low-latency traffic.
> If I take the first "pure" A/B comparison it seems that the change as
> benchmarked takes latency for TCP from ~27 usec (37k) to ~14 usec (70k).
> At what request/response size does the benefit taper-off? 13 usec
> seems to be about 16250 bytes at 10 GbE.
It's pretty easy to get a result of 80k+ with a little tweaking; an
rx-usecs value of 100 with C3/6 enabled will get you there.
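To make the arithmetic in the quoted question concrete: netperf TCP_RR reports transactions per second, so the per-transaction round-trip time is simply the reciprocal of the rate, and the ~13 usec saving corresponds to the serialization time of roughly 16 KB at 10 GbE line rate. A quick sanity check (the 37k/70k figures are from the table above; the rest is plain arithmetic):

```python
# netperf TCP_RR reports transactions/sec; round-trip time is 1/rate.
def rtt_us(tps):
    return 1e6 / tps

before = rtt_us(37_000)   # unpatched, adaptive coalescing: ~27.0 us
after = rtt_us(70_000)    # patched, adaptive coalescing: ~14.3 us
saving = before - after   # ~12.7 us per round trip

# At 10 GbE, the saving equals the wire serialization time of this
# many bytes -- a rough estimate of where the benefit would taper off.
line_rate_bps = 10e9
bytes_equiv = saving * 1e-6 * line_rate_bps / 8

print(f"{before:.1f} us -> {after:.1f} us, ~{bytes_equiv:.0f} bytes at 10GbE")
```

This reproduces Rick's ~16250-byte estimate to within rounding of the transaction rates.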
> When I last looked at netperf TCP_RR performance where something similar
> could happen I think it was IPoIB where it was possible to set things up
> such that polling happened rather than wakeups (perhaps it was with a
> shim library that converted netperf's socket calls to "native" IB). My
> recollection is that it "did a number" on the netperf service demands
> thanks to the spinning. It would be a good thing to include those
> figures in any subsequent rounds of benchmarking.
I will get service demand numbers, but since we are busy polling I can
tell you right now that one core will be at 100%.
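As a rough userspace illustration of why one core pegs at 100% (this is an analogy, not the kernel mechanism in the patch set): busy polling trades the sleep-and-wake path of poll()/epoll() for a spin loop that retries the receive until data arrives.

```python
import socket

# Userspace analogue of busy polling: spin on a nonblocking socket
# instead of sleeping until an interrupt-driven wakeup. Latency drops
# because there is no wakeup path, but the spinning core runs at 100%.
def busy_recv(sock, bufsize=2048):
    sock.setblocking(False)
    while True:
        try:
            return sock.recv(bufsize)
        except BlockingIOError:
            continue  # nothing queued yet; keep spinning
```

The kernel-side version polls the device driver's receive ring directly, but the CPU-for-latency trade-off is the same.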
> Am I correct in assuming this is a mechanism which would not be used in
> a high aggregate PPS situation?
The current design has in mind situations where you want to react very
fast to a trigger, but where that reaction could involve more than
short messages. So we are willing to burn CPU cycles when there is
nothing better to do, but we also want to work well when there is bulk
traffic. Ideally the system would be smart about this and know when
not to allow busy polling.
> happy benchmarking,
We love netperf.