netdev.vger.kernel.org archive mirror
From: David Laight <David.Laight@ACULAB.COM>
To: 'Paolo Abeni' <pabeni@redhat.com>,
	Jesper Dangaard Brouer <brouer@redhat.com>
Cc: 'Marek Majkowski' <marek@cloudflare.com>,
	linux-kernel <linux-kernel@vger.kernel.org>,
	network dev <netdev@vger.kernel.org>,
	kernel-team <kernel-team@cloudflare.com>
Subject: RE: epoll_wait() performance
Date: Wed, 27 Nov 2019 17:30:00 +0000	[thread overview]
Message-ID: <2f1635d9300a4bec8a0422e9e9518751@AcuMS.aculab.com> (raw)
In-Reply-To: <0b8d7447e129539aec559fa797c07047f5a6a1b2.camel@redhat.com>

From: Paolo Abeni
> Sent: 27 November 2019 16:27
...
> @David: If I read your message correctly, the pkt rate you are dealing
> with is quite low... are we talking about tput or latency? I guess
> latency could be measurably higher with recvmmsg() with respect to other
> syscalls. How do you measure the relative performance of recvmmsg()
> and recv()? With micro-benchmark/rdtsc()? Am I right that you are
> usually getting a single packet per recvmmsg() call?

The packet rate per socket is low, typically one packet every 20ms.
This is RTP, so telephony audio.
However we have a lot of audio channels and hence a lot of sockets.
So there can be 1000s of sockets we need to receive the data from.
The test system I'm using has 16 E1 TDM links each of which can handle
31 audio channels.
Forwarding all of these to/from RTP (one of the things it might do) gives 496
audio channels - so 496 RTP sockets and 496 RTCP ones.
Although the test I'm doing is pure RTP and doesn't use TDM.

What I'm measuring is the total time taken to receive all the packets
(on all the sockets) that are available to be read every 10ms.
So poll + recv + add_to_queue.
(The data processing is done by other threads.)
I use the time difference (actually CLOCK_MONOTONIC - from rdtsc)
to generate a 64-entry (self-scaling) histogram of the elapsed times,
then look for the histogram's peak value.
(I need to work on the max value, but that is a different (more important!) problem.)
Depending on the poll/recv method used this takes 1.5 to 2ms
in each 10ms period.
(It is faster if I run the cpu at full speed, but it usually idles along
at 800MHz.)
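
For illustration only, a minimal sketch of that kind of per-10ms pass
(not our actual code: queue_packet() is just a stand-in for handing data
to the processing threads, and clock_gettime() stands in for the
rdtsc-based timing):

/* Sketch only - time one poll + recv pass over many UDP sockets. */
#include <poll.h>
#include <stdint.h>
#include <sys/socket.h>
#include <time.h>

#define PKT_MAX 2048

void queue_packet(int fd, const void *data, size_t len);  /* stand-in */

static uint64_t now_ns(void)
{
        struct timespec ts;

        clock_gettime(CLOCK_MONOTONIC, &ts);
        return (uint64_t)ts.tv_sec * 1000000000ull + ts.tv_nsec;
}

/* One pass: read everything that is ready right now, return elapsed ns. */
uint64_t recv_pass(struct pollfd *pfd, int nfds)
{
        char buf[PKT_MAX];
        uint64_t start = now_ns();
        int i, ready;

        ready = poll(pfd, nfds, 0);     /* zero timeout - don't wait */
        for (i = 0; ready > 0 && i < nfds; i++) {
                ssize_t len;

                if (!(pfd[i].revents & POLLIN))
                        continue;
                len = recv(pfd[i].fd, buf, sizeof(buf), MSG_DONTWAIT);
                if (len > 0)
                        queue_packet(pfd[i].fd, buf, len);
                ready--;
        }
        return now_ns() - start;
}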

If I use recvmmsg() I only expect to see one packet because there
is (almost always) only one packet on each socket every 20ms.
However there might be more than one, and if there are, they
all need to be read (well, at least 2 of them) in that block of receives.
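
The recvmmsg() call being discussed is roughly of this shape - again
just a sketch, with an arbitrary batch size of 4, since normally only
one datagram is pending per socket:

/* Sketch only - drain up to 4 queued datagrams from one socket. */
#define _GNU_SOURCE
#include <string.h>
#include <sys/socket.h>
#include <sys/uio.h>

#define BATCH   4
#define PKT_MAX 2048

int drain_socket(int fd, char bufs[BATCH][PKT_MAX])
{
        struct mmsghdr msgs[BATCH];
        struct iovec iov[BATCH];
        int i;

        memset(msgs, 0, sizeof(msgs));
        for (i = 0; i < BATCH; i++) {
                iov[i].iov_base = bufs[i];
                iov[i].iov_len = PKT_MAX;
                msgs[i].msg_hdr.msg_iov = &iov[i];
                msgs[i].msg_hdr.msg_iovlen = 1;
        }
        /* Normally returns 1; more only if packets have bunched up. */
        return recvmmsg(fd, msgs, BATCH, MSG_DONTWAIT, NULL);
}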

The outbound traffic goes out through a small number of raw sockets.
Annoyingly we have to work out the local IPv4 address that will be used
for each destination in order to calculate the UDP checksum.
(I've a pending patch to speed up the x86 checksum code on a lot of
cpus.)
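
(For reference, one common way to find that local address is to
connect() a throwaway UDP socket to the destination and read the
result back with getsockname(). Sketched below purely as an
illustration of the lookup, not our actual code:)

/* Sketch only - ask the kernel which local IPv4 address it would use
 * for a given destination, via a throwaway connected UDP socket.
 */
#include <netinet/in.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int local_addr_for(struct in_addr dst, struct in_addr *src)
{
        struct sockaddr_in sa;
        socklen_t len = sizeof(sa);
        int fd = socket(AF_INET, SOCK_DGRAM, 0);

        if (fd < 0)
                return -1;

        memset(&sa, 0, sizeof(sa));
        sa.sin_family = AF_INET;
        sa.sin_port = htons(9);         /* any port; nothing is sent */
        sa.sin_addr = dst;

        /* connect() on UDP just picks a route and source address. */
        if (connect(fd, (struct sockaddr *)&sa, sizeof(sa)) < 0 ||
            getsockname(fd, (struct sockaddr *)&sa, &len) < 0) {
                close(fd);
                return -1;
        }
        *src = sa.sin_addr;
        close(fd);
        return 0;
}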

	David

-
Registered Address Lakeside, Bramley Road, Mount Farm, Milton Keynes, MK1 1PT, UK
Registration No: 1397386 (Wales)

Thread overview: 20+ messages
2019-11-22 11:17 epoll_wait() performance David Laight
2019-11-27  9:50 ` Marek Majkowski
2019-11-27 10:39   ` David Laight
2019-11-27 15:48     ` Jesper Dangaard Brouer
2019-11-27 16:04       ` David Laight
2019-11-27 19:48         ` Willem de Bruijn
2019-11-28 16:25           ` David Laight
2019-11-28 11:12         ` Jesper Dangaard Brouer
2019-11-28 16:37           ` David Laight
2019-11-28 16:52             ` Willy Tarreau
2019-12-19  7:57             ` Jesper Dangaard Brouer
2019-11-27 16:26       ` Paolo Abeni
2019-11-27 17:30         ` David Laight [this message]
2019-11-27 17:46           ` Eric Dumazet
2019-11-28 10:17             ` David Laight
2019-11-30  1:07               ` Eric Dumazet
2019-11-30 13:29                 ` Jakub Sitnicki
2019-12-02 12:24                   ` David Laight
2019-12-02 16:47                     ` Willem de Bruijn
2019-11-27 17:50           ` Paolo Abeni
