From: Tom Herbert <therbert@google.com>
To: Jesse Gross <jesse@nicira.com>
Cc: Pravin B Shelar <pshelar@nicira.com>,
	David Miller <davem@davemloft.net>,
	Linux Netdev List <netdev@vger.kernel.org>
Subject: Re: [PATCH net-next 0/3] openvswitch: Add STT support.
Date: Fri, 23 Jan 2015 12:57:21 -0800	[thread overview]
Message-ID: <CA+mtBx8bK16McNgBXKN8Vu6_ifEdyZL7FPzOmXvxLP5AqKx1Hw@mail.gmail.com> (raw)
In-Reply-To: <CAEP_g=-Yn849TmrN5xFLJVxuux0fBuYGFSM7K-psg=pdxgPR9w@mail.gmail.com>

On Fri, Jan 23, 2015 at 12:20 PM, Jesse Gross <jesse@nicira.com> wrote:
> On Fri, Jan 23, 2015 at 10:25 AM, Tom Herbert <therbert@google.com> wrote:
>> On Fri, Jan 23, 2015 at 9:38 AM, Jesse Gross <jesse@nicira.com> wrote:
>>> On Fri, Jan 23, 2015 at 8:58 AM, Tom Herbert <therbert@google.com> wrote:
>>>> On Tue, Jan 20, 2015 at 12:25 PM, Pravin B Shelar <pshelar@nicira.com> wrote:
>>>>> The following patch series adds support for the Stateless
>>>>> Transport Tunneling (STT) protocol.
>>>>> STT uses the TCP segmentation offload available in most NICs. On
>>>>> packet transmit, the STT driver appends an STT header along with a
>>>>> TCP header to the packet. For a GSO packet, the GSO parameters are
>>>>> set according to the tunnel configuration and the packet is handed
>>>>> over to the networking stack. This allows use of the segmentation
>>>>> offload available in NICs.
>>>>>
>>>>> A netperf unidirectional test gives ~9.4 Gbit/s on a 10 Gbit NIC
>>>>> with a 1500-byte MTU and two TCP streams.
>>>>>
>>>> The reason you're able to get 9.4 Gbit/s with an L2 encapsulation
>>>> using STT is that it has less protocol overhead per packet when doing
>>>> segmentation compared to VXLAN (without segmentation STT packets will
>>>> have more overhead than VXLAN).
>>>>
>>>> A VXLAN packet with TCP/IP has headers
>>>> IP|UDP|VXLAN|Ethernet|IP|TCP+options. Assuming TCP is stuffed with
>>>> options, this is 20+8+8+16+20+40=112 bytes, or 7.4% of the MTU.
>>>> Each STT segment created in GSO, other than the first, has just
>>>> IP|TCP headers, which is 20+20=40 bytes, or 2.6% of the MTU. This
>>>> explains the throughput differences between VXLAN and STT.
>>>
>>> Tom, what performance do you see with a single stream of VXLAN running
>>> on net-next with the default configuration? The difference in the
>>> numbers being posted here is greater than the few percent that
>>> protocol overhead would cause.
>>
>> Please look at the data I posted with the VXLAN RCO patches.
>
> The data you posted uses 200 streams, so I assume that you are using
> multiple CPUs. It's not surprising that you would be able to saturate a
> 10G link in that case. STT can do this with a single stream and less
> than one core (or, alternately, handle higher throughput). Claiming that
> since both can hit 10G they are the same is not accurate.
>
> Discussing performance like this seems a little silly given that the
> code is available. Pravin posted some numbers that he got, if you want
> to dispute them then why don't you just try running it?

Because you haven't provided a network interface, as I have already
requested twice, and I really don't have the time or motivation to do
development on your patches or to figure out how to do this with OVS. If
you want me to test your patches, re-spin them with a network interface
included.
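[The mechanism under discussion upthread, where the STT header rides as ordinary TCP payload so that the NIC's TSO replicates only IP|TCP headers onto the later segments, can be modeled in a few lines. This is a toy illustration, not kernel code; the 18-byte header size and the frame sizes are assumptions made here for the example.]

```python
# Toy model of STT over TSO: the STT header is prepended to the inner frame
# as ordinary TCP payload, then the payload is cut at MSS boundaries the way
# a NIC's TSO would cut it. Only the first segment carries the STT header.
STT_HEADER = b"S" * 18      # header size assumed for illustration
INNER_FRAME = b"D" * 4000   # a large inner frame to be segmented
MSS = 1460                  # typical MSS for a 1500-byte MTU

payload = STT_HEADER + INNER_FRAME
segments = [payload[i:i + MSS] for i in range(0, len(payload), MSS)]

print(f"{len(segments)} segments; STT header only in segment 0:",
      segments[0].startswith(STT_HEADER) and
      all(b"S" not in s for s in segments[1:]))
```

[Every segment after the first is a plain IP|TCP packet on the wire, which is exactly the 40-byte-overhead case in the arithmetic quoted above.]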


Thread overview: 25+ messages
2015-01-20 20:25 [PATCH net-next 0/3] openvswitch: Add STT support Pravin B Shelar
2015-01-20 23:06 ` Tom Herbert
2015-01-21  9:08   ` Pravin Shelar
2015-01-21 16:51     ` Tom Herbert
2015-01-21 18:30       ` Pravin Shelar
2015-01-21 19:45         ` Tom Herbert
2015-01-21 20:22           ` Eric Dumazet
2015-01-21 20:35           ` Jesse Gross
2015-01-21 21:54             ` Tom Herbert
2015-01-21 22:14               ` Jesse Gross
2015-01-21 23:46                 ` Vincent JARDIN
2015-01-22 16:24                   ` Tom Herbert
2015-01-22 17:51                     ` Vincent JARDIN
2015-01-23  9:04                       ` David Miller
2015-01-23  9:00                     ` David Miller
2015-02-02 16:23             ` Tom Herbert
2015-02-02 20:39               ` Jesse Gross
2015-02-02 22:49                 ` Tom Herbert
2015-01-23 16:58 ` Tom Herbert
2015-01-23 17:38   ` Jesse Gross
2015-01-23 18:25     ` Tom Herbert
2015-01-23 20:20       ` Jesse Gross
2015-01-23 20:57         ` Tom Herbert [this message]
2015-01-23 21:11           ` Pravin Shelar
2015-02-02 16:15         ` Tom Herbert
