From: "Tobias Waldekranz" <tobias@waldekranz.com>
To: "Andy Duan" <fugang.duan@nxp.com>, "David Miller" <davem@davemloft.net>
Cc: "netdev@vger.kernel.org" <netdev@vger.kernel.org>
Subject: RE: [EXT] Re: [PATCH net-next] net: ethernet: fec: prevent tx starvation under high rx load
Date: Tue, 30 Jun 2020 09:30:41 +0200 [thread overview]
Message-ID: <C3U9EFL9CA15.QDKTU9Y4EZXM@wkz-x280> (raw)
In-Reply-To: <AM6PR0402MB36075CF372D7A31932E32B60FF6F0@AM6PR0402MB3607.eurprd04.prod.outlook.com>
On Tue Jun 30, 2020 at 8:27 AM CEST, Andy Duan wrote:
> From: Tobias Waldekranz <tobias@waldekranz.com> Sent: Tuesday, June 30, 2020 12:29 AM
> > On Sun Jun 28, 2020 at 8:23 AM CEST, Andy Duan wrote:
> > > I've never seen a bandwidth test cause a netdev watchdog trip.
> > > Can you describe the reproduction steps from the commit? Then we
> > > can reproduce it locally. Thanks.
> >
> > My setup uses an i.MX8M Nano EVK connected to an ethernet switch, but I
> > can get the same results with a direct connection to a PC.
> >
> > On the iMX, configure two VLANs on top of the FEC and enable IPv4
> > forwarding.
> >
> > On the PC, configure two VLANs and put them in different namespaces. From
> > one namespace, use trafgen to generate a flow that the iMX will route from
> > the first VLAN to the second and then back towards the second namespace on
> > the PC.
> >
> > Something like:
> >
> > {
> > eth(sa=PC_MAC, da=IMX_MAC),
> > ipv4(saddr=10.0.2.2, daddr=10.0.3.2, ttl=2),
> > udp(sp=1, dp=2),
> > "Hello world"
> > }
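The PC-side setup described above might be sketched as follows. This is an untested outline, not part of the original report: the interface name (eth0), the VLAN IDs (2 and 3), and the iMX gateway address (10.0.2.1) are assumptions chosen to match the addresses in the trafgen flow.

```shell
# Hypothetical PC-side setup: two VLANs on eth0, each in its own
# network namespace. Adjust names, IDs, and addresses to the real wiring.
ip link add link eth0 name eth0.2 type vlan id 2
ip link add link eth0 name eth0.3 type vlan id 3

ip netns add ns2
ip netns add ns3
ip link set eth0.2 netns ns2
ip link set eth0.3 netns ns3

ip -n ns2 addr add 10.0.2.2/24 dev eth0.2
ip -n ns3 addr add 10.0.3.2/24 dev eth0.3
ip -n ns2 link set eth0.2 up
ip -n ns3 link set eth0.3 up

# Send traffic for the far VLAN via the iMX (assumed to be 10.0.2.1 on
# VLAN 2), so the iMX routes it from VLAN 2 to VLAN 3 and back to ns3.
ip -n ns2 route add 10.0.3.0/24 via 10.0.2.1

# Generate the flow from ns2 using the trafgen config shown above.
ip netns exec ns2 trafgen --dev eth0.2 --conf flow.cfg
```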
> >
> > Wait a couple of seconds and then you'll see the output from fec_dump.
> >
> > In the same setup I also see a weird issue when running a TCP flow using
> > iperf3. Most of the time (~70%), when I start the iperf3 client, I'll see
> > ~450Mbps of throughput. In the other case (~30%) I'll see ~790Mbps. The
> > system is "stably bi-modal", i.e. whichever rate is reached in the beginning is
> > then sustained for as long as the session is kept alive.
> >
> > I've inserted some tracepoints in the driver to try to understand what's going
> > on:
> > https://svgshare.com/i/MVp.svg
> >
> > What I can't figure out is why the Tx buffers seem to be collected at a much
> > slower rate in the slow case (top in the picture). If we fall behind in one NAPI
> > poll, we should catch up at the next call (which we can see in the fast case).
> > But in the slow case we keep falling further and further behind until we freeze
> > the queue. Is this something you've ever observed? Any ideas?
>
> Previously, our test cases didn't reproduce the issue: the CPU had
> more bandwidth than the ethernet uDMA, so there was a chance to
> complete the current NAPI poll. By the next poll, work_tx had been
> updated, so we never hit the issue.
It appears it has nothing to do with routing back out through the same
interface.

I get the same bi-modal behavior if I just run the iperf3 server on the
iMX and then have it be the transmitting side, i.e. on the PC I run:

iperf3 -c $IMX_IP -R

It would be very interesting to see what numbers you get in this
scenario.
Thread overview: 14+ messages
2020-06-25 8:57 [PATCH net-next] net: ethernet: fec: prevent tx starvation under high rx load Tobias Waldekranz
2020-06-25 19:19 ` David Miller
2020-06-28 6:23 ` [EXT] " Andy Duan
2020-06-29 16:29 ` Tobias Waldekranz
2020-06-30 6:27 ` Andy Duan
2020-06-30 7:30 ` Tobias Waldekranz [this message]
2020-06-30 8:26 ` Andy Duan
2020-06-30 8:55 ` Tobias Waldekranz
2020-06-30 9:02 ` Andy Duan
2020-06-30 9:12 ` Tobias Waldekranz
2020-06-30 9:47 ` Andy Duan
2020-06-30 11:01 ` Tobias Waldekranz
2020-07-01 1:27 ` Andy Duan
2020-06-30 13:45 ` Tobias Waldekranz