From: Alexander Duyck <alexander.duyck@gmail.com>
To: Jesper Dangaard Brouer <brouer@redhat.com>,
Daniel Borkmann <daniel@iogearbox.net>
Cc: Alexei Starovoitov <ast@plumgrid.com>,
netdev@vger.kernel.org, Eric Dumazet <eric.dumazet@gmail.com>
Subject: Re: Multiqueue pktgen and ingress path (Was: [PATCH v5 2/2] pktgen: introduce xmit_mode '<start_xmit|netif_receive>')
Date: Fri, 08 May 2015 09:53:34 -0700
Message-ID: <554CEA0E.4050509@gmail.com>
In-Reply-To: <20150508174927.5b1ecdd1@redhat.com>
On 05/08/2015 08:49 AM, Jesper Dangaard Brouer wrote:
> More interesting observations with the mentioned script (now attached).
>
> On my system the scaling stopped at 24Mpps; when I increased the number
> of threads, the collective throughput stayed stuck at 24Mpps.
>
> Then I simply removed/compiled-out the:
> atomic_long_inc(&skb->dev->rx_dropped);
>
> And after that change, the scaling is basically infinite/perfect.
>
> Single thread performance increased from 24.7Mpps to 31.1Mpps, which
> corresponds perfectly with the cost of an atomic operation on this HW
> (8.25ns).
>
> Diff to before:
> * (1/24700988*10^9)-(1/31170819*10^9) = 8.40292328196 ns
>
> When increasing the threads now, they all basically run at 31Mpps.
> Tried it up to 12 threads.
>
>
> I'm quite puzzled why a single atomic op could keep my system from
> scaling beyond 24Mpps.
The atomic access likely acts as a serializing event, and on top of that
the time needed to complete it grows as you add more threads. I am
guessing the 8ns is probably the cost for a single-threaded setup where
the memory location is available in L1 or L2 cache. If it is in L3
cache, it will be more expensive; if it is currently in use by another
CPU, more expensive still; and if it is in use on another socket, we are
probably looking at something in the high tens if not hundreds of
nanoseconds. Once you hit the point where the time for one atomic
transaction multiplied by the number of threads equals the time it takes
any one thread to complete its operation, you have hit the upper limit,
and everything after that is just wasted cycles spinning while waiting
for cache line access.
So for example if you had 2 threads on the same socket you are looking
at an L3 cache access, which takes about 30 cycles. Those 30 cycles
would likely be in addition to the 8ns you were already seeing for
single-thread performance, and I don't know if that includes the cache
flush needed by the remote L1/L2 where the cache line currently resides.
I'd be interested in seeing the 2-socket data, as I suspect you would
take an even heavier hit there.
- Alex