linux-newbie.vger.kernel.org archive mirror
* TCP higher throughput than UDP?
@ 2021-08-31 20:23 Eric Curtin
  2021-08-31 20:40 ` Neal Cardwell
  0 siblings, 1 reply; 2+ messages in thread
From: Eric Curtin @ 2021-08-31 20:23 UTC (permalink / raw)
  To: netdev, linux-newbie

Hi Guys,

I've been quantitatively benchmarking various protocols (QUIC, TCP,
UDP, SCTP, etc.) on modern hardware, along with TLS, DTLS and
plaintext variants of these protocols. I've mostly avoided tuning
kernel parameters and focused on what I can adjust from user space on
an out-of-the-box kernel, as if I were deploying on a machine where I
did not have root access (let's say an Android phone, even though
these results come from a laptop for simplicity). I did make one minor
exception: I had to enable SCTP.

I have a small client/server binary that implements bare-bones
versions of TCP, UDP and SCTP. The server lets you specify which file
clients will pull, and the client lets you specify which file to write
the pulled data to (often I use /dev/null here). I run both on
127.0.0.1. I would have thought (obviously I was wrong) that UDP would
be faster, given no ACKs, retransmits, checksums, etc., as many laymen
would tell you. But on this machine TCP is faster more often than not.
This question has been answered online with various explanations
(hardware offload, how packet fragmentation is handled, TCP tuned more
for throughput, UDP tuned more for latency, etc.). I'm just curious
whether someone could tell me the most significant factor (or factors)
here these days. Tools like strace don't reveal much. Changing the
buffer size passed to the read/write/send/recv system calls alters
things, but TCP still seems to win regardless of buffer size.

Another question: I detect that the UDP transfer has finished by
checking for a final (unsigned char) -1 byte, which is very
error-prone of course. Is there a better way, that you've seen or that
is commonplace, for a simple UDP client to know when to stop trying to
pull/read/receive data?

If my code is too simplistic and there are one or two setsockopt()s
etc. that might be interesting to add, my ears are open.

Feel free to pull this repo, run the script, and take a quick look at
the few lines of code if need be (currently pushing; the airport
connection might be slow, but the commit should land around the time
of this email):

https://github.com/ericcurtin/DTLS-Examples

The two relevant files:

src/script.sh  src/udp-tls.c

To run my test case:

cd src; ./script.sh

I'd appreciate some thoughts from the gurus :)

My hardware:

ThinkPad-P1-Gen-3
CPU: i7-10850H
RAM: 32GB

Kernel version:

$ cat /etc/*release | head -n2; uname -r
Fedora release 34 (Thirty Four)
NAME=Fedora
5.13.12-200.fc34.x86_64

Is mise le meas/Regards,

Eric Curtin

Check out this charity that's close to my heart:

https://www.idonate.ie/fundraiser/11394438_peak-for-pat.html
https://www.facebook.com/Peak-for-Pat-104470678280309
https://www.instagram.com/peakforpat/


* Re: TCP higher throughput than UDP?
  2021-08-31 20:23 TCP higher throughput than UDP? Eric Curtin
@ 2021-08-31 20:40 ` Neal Cardwell
  0 siblings, 0 replies; 2+ messages in thread
From: Neal Cardwell @ 2021-08-31 20:40 UTC (permalink / raw)
  To: Eric Curtin; +Cc: netdev, linux-newbie, Eric Dumazet, Willem de Bruijn

On Tue, Aug 31, 2021 at 4:25 PM Eric Curtin <ericcurtin17@gmail.com> wrote:
>
> Hi Guys,
>
> I've been quantitatively benchmarking various protocols (QUIC, TCP,
> UDP, SCTP, etc.) on modern hardware, along with TLS, DTLS and
> plaintext variants of these protocols. I've mostly avoided tuning
> kernel parameters and focused on what I can adjust from user space on
> an out-of-the-box kernel, as if I were deploying on a machine where I
> did not have root access (let's say an Android phone, even though
> these results come from a laptop for simplicity). I did make one minor
> exception: I had to enable SCTP.
>
> I have a small client/server binary that implements bare-bones
> versions of TCP, UDP and SCTP. The server lets you specify which file
> clients will pull, and the client lets you specify which file to write
> the pulled data to (often I use /dev/null here). I run both on
> 127.0.0.1. I would have thought (obviously I was wrong) that UDP would
> be faster, given no ACKs, retransmits, checksums, etc., as many laymen
> would tell you. But on this machine TCP is faster more often than not.
> This question has been answered online with various explanations
> (hardware offload, how packet fragmentation is handled, TCP tuned more
> for throughput, UDP tuned more for latency, etc.). I'm just curious
> whether someone could tell me the most significant factor (or factors)
> here these days. Tools like strace don't reveal much. Changing the
> buffer size passed to the read/write/send/recv system calls alters
> things, but TCP still seems to win regardless of buffer size.

These articles have some nice discussion and descriptions of
experimental results that shed light on the differences between UDP
and TCP performance, and what it takes to make UDP performance
approximate TCP's on Linux:

  Optimizing UDP for content delivery: GSO, pacing and zerocopy
  http://vger.kernel.org/lpc_net2018_talks/willemdebruijn-lpc2018-udpgso-paper-DRAFT-1.pdf

  Can QUIC match TCP’s computational efficiency?
  https://www.fastly.com/blog/measuring-quic-vs-tcp-computational-efficiency

I would guess segmentation offload for TCP is probably the biggest
factor in your experiments.

best,
neal

