netdev.vger.kernel.org archive mirror
From: Andy Duan <fugang.duan@nxp.com>
To: Tobias Waldekranz <tobias@waldekranz.com>,
	David Miller <davem@davemloft.net>
Cc: "netdev@vger.kernel.org" <netdev@vger.kernel.org>
Subject: RE: [EXT] Re: [PATCH net-next] net: ethernet: fec: prevent tx starvation under high rx load
Date: Tue, 30 Jun 2020 08:26:28 +0000	[thread overview]
Message-ID: <AM6PR0402MB3607E5066DA857CD4D9B33A3FF6F0@AM6PR0402MB3607.eurprd04.prod.outlook.com> (raw)
In-Reply-To: <C3U9EFL9CA15.QDKTU9Y4EZXM@wkz-x280>

From: Tobias Waldekranz <tobias@waldekranz.com> Sent: Tuesday, June 30, 2020 3:31 PM
> On Tue Jun 30, 2020 at 8:27 AM CEST, Andy Duan wrote:
> > From: Tobias Waldekranz <tobias@waldekranz.com> Sent: Tuesday, June 30, 2020 12:29 AM
> > > On Sun Jun 28, 2020 at 8:23 AM CEST, Andy Duan wrote:
> > > > I have never seen a bandwidth test cause a netdev watchdog trip.
> > > > Can you describe the reproduction steps in the commit message, so
> > > > that we can reproduce it locally. Thanks.
> > >
> > > My setup uses an i.MX8M Nano EVK connected to an ethernet switch,
> > > but I can get the same results with a direct connection to a PC.
> > >
> > > On the iMX, configure two VLANs on top of the FEC and enable IPv4
> > > forwarding.
> > >
> > > On the PC, configure two VLANs and put them in different namespaces.
> > > From one namespace, use trafgen to generate a flow that the iMX will
> > > route from the first VLAN to the second and then back towards the
> > > second namespace on the PC.
> > >
> > > Something like:
> > >
> > >     {
> > >         eth(sa=PC_MAC, da=IMX_MAC),
> > >         ipv4(saddr=10.0.2.2, daddr=10.0.3.2, ttl=2)
> > >         udp(sp=1, dp=2),
> > >         "Hello world"
> > >     }
> > >
> > > Wait a couple of seconds and then you'll see the output from fec_dump.
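> > >
> > > The PC-side setup described above would look roughly like the
> > > following. This is a sketch, not taken from the original mail:
> > > interface names (eth0), VLAN IDs (2 and 3), and the iMX-side
> > > gateway addresses are assumptions; only the 10.0.2.2/10.0.3.2
> > > endpoint addresses come from the trafgen snippet.
> > >
> > > ```shell
> > > # PC: two VLANs on eth0, each moved into its own namespace
> > > ip link add link eth0 name eth0.2 type vlan id 2
> > > ip link add link eth0 name eth0.3 type vlan id 3
> > > ip netns add ns2 && ip link set eth0.2 netns ns2
> > > ip netns add ns3 && ip link set eth0.3 netns ns3
> > > ip -n ns2 addr add 10.0.2.2/24 dev eth0.2
> > > ip -n ns3 addr add 10.0.3.2/24 dev eth0.3
> > > ip -n ns2 link set eth0.2 up
> > > ip -n ns3 link set eth0.3 up
> > > # Route via the iMX (assumed to hold .1 on each VLAN)
> > > ip -n ns2 route add 10.0.3.0/24 via 10.0.2.1
> > > ip -n ns3 route add 10.0.2.0/24 via 10.0.3.1
> > >
> > > # iMX: two VLANs on top of the FEC, IPv4 forwarding enabled
> > > ip link add link eth0 name eth0.2 type vlan id 2
> > > ip link add link eth0 name eth0.3 type vlan id 3
> > > sysctl -w net.ipv4.ip_forward=1
> > > ```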
> > >
> > > In the same setup I also see a weird issue when running a TCP flow
> > > using iperf3. Most of the time (~70%) when I start the iperf3 client,
> > > I'll see ~450Mbps of throughput. In the other case (~30%) I'll see
> > > ~790Mbps. The system is "stably bi-modal", i.e. whichever rate is
> > > reached in the beginning is then sustained for as long as the
> > > session is kept alive.
> > >
> > > I've inserted some tracepoints in the driver to try to understand
> > > what's going
> > > on:
> > > https://svgshare.com/i/MVp.svg
> > >
> > > What I can't figure out is why the Tx buffers seem to be collected
> > > at a much slower rate in the slow case (top in the picture). If we
> > > fall behind in one NAPI poll, we should catch up at the next call
> > > (which we can see in the fast case). But in the slow case we keep
> > > falling further and further behind until we freeze the queue. Is
> > > this something you've ever observed? Any ideas?
> >
> > Previously, our test cases did not reproduce the issue: the CPU has
> > more bandwidth than the ethernet uDMA, so there is always a chance to
> > complete the current NAPI poll. By the next poll, work_tx has been
> > updated, so we never hit the issue.
> 
> It appears it has nothing to do with routing back out through the same
> interface.
> 
> I get the same bi-modal behavior if I just run the iperf3 server on the
> iMX and then have it be the transmitting party, i.e. on the PC I run:
> 
>     iperf3 -c $IMX_IP -R
> 
> It would be very interesting to see what numbers you see in this scenario.
I just have an i.MX8MN EVK at hand. Running that case, the number is
~940Mbps, as below.

root@imx8mnevk:~# iperf3 -s
-----------------------------------------------------------
Server listening on 5201
-----------------------------------------------------------
Accepted connection from 10.192.242.132, port 43402
[  5] local 10.192.242.96 port 5201 connected to 10.192.242.132 port 43404
[ ID] Interval           Transfer     Bitrate         Retr  Cwnd
[  5]   0.00-1.00   sec   109 MBytes   913 Mbits/sec    0    428 KBytes
[  5]   1.00-2.00   sec   112 MBytes   943 Mbits/sec    0    447 KBytes
[  5]   2.00-3.00   sec   112 MBytes   941 Mbits/sec    0    472 KBytes
[  5]   3.00-4.00   sec   113 MBytes   944 Mbits/sec    0    472 KBytes
[  5]   4.00-5.00   sec   112 MBytes   942 Mbits/sec    0    472 KBytes
[  5]   5.00-6.00   sec   112 MBytes   936 Mbits/sec    0    472 KBytes
[  5]   6.00-7.00   sec   113 MBytes   945 Mbits/sec    0    472 KBytes
[  5]   7.00-8.00   sec   112 MBytes   944 Mbits/sec    0    472 KBytes
[  5]   8.00-9.00   sec   112 MBytes   941 Mbits/sec    0    472 KBytes
[  5]   9.00-10.00  sec   112 MBytes   940 Mbits/sec    0    472 KBytes
[  5]  10.00-10.04  sec  4.16 MBytes   873 Mbits/sec    0    472 KBytes
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-10.04  sec  1.10 GBytes   939 Mbits/sec    0             sender


Thread overview: 14+ messages
2020-06-25  8:57 [PATCH net-next] net: ethernet: fec: prevent tx starvation under high rx load Tobias Waldekranz
2020-06-25 19:19 ` David Miller
2020-06-28  6:23   ` [EXT] " Andy Duan
2020-06-29 16:29     ` Tobias Waldekranz
2020-06-30  6:27       ` Andy Duan
2020-06-30  7:30         ` Tobias Waldekranz
2020-06-30  8:26           ` Andy Duan [this message]
2020-06-30  8:55             ` Tobias Waldekranz
2020-06-30  9:02               ` Andy Duan
2020-06-30  9:12                 ` Tobias Waldekranz
2020-06-30  9:47                   ` Andy Duan
2020-06-30 11:01                     ` Tobias Waldekranz
2020-07-01  1:27                       ` Andy Duan
2020-06-30 13:45                     ` Tobias Waldekranz
