From: Joakim Zhang <qiangqing.zhang@nxp.com>
To: Frieder Schrempf <frieder.schrempf@kontron.de>,
	Dave Taht <dave.taht@gmail.com>
Cc: dl-linux-imx <linux-imx@nxp.com>,
	"netdev@vger.kernel.org" <netdev@vger.kernel.org>,
	"linux-arm-kernel@lists.infradead.org" 
	<linux-arm-kernel@lists.infradead.org>
Subject: RE: i.MX8MM Ethernet TX Bandwidth Fluctuations
Date: Wed, 19 May 2021 07:49:45 +0000	[thread overview]
Message-ID: <DB8PR04MB67956FF441B3592320E793DEE62B9@DB8PR04MB6795.eurprd04.prod.outlook.com> (raw)
In-Reply-To: <9b9cd281-51c7-5e37-7849-dd9814474636@kontron.de>


Hi Frieder,

> -----Original Message-----
> From: Frieder Schrempf <frieder.schrempf@kontron.de>
> Sent: 18 May 2021 20:55
> To: Joakim Zhang <qiangqing.zhang@nxp.com>; Dave Taht
> <dave.taht@gmail.com>
> Cc: dl-linux-imx <linux-imx@nxp.com>; netdev@vger.kernel.org;
> linux-arm-kernel@lists.infradead.org
> Subject: Re: i.MX8MM Ethernet TX Bandwidth Fluctuations
> 
> 
> 
> On 18.05.21 14:35, Joakim Zhang wrote:
> >
> > Hi Dave,
> >
> >> -----Original Message-----
> >> From: Dave Taht <dave.taht@gmail.com>
> >> Sent: 17 May 2021 20:48
> >> To: Joakim Zhang <qiangqing.zhang@nxp.com>
> >> Cc: Frieder Schrempf <frieder.schrempf@kontron.de>; dl-linux-imx
> >> <linux-imx@nxp.com>; netdev@vger.kernel.org;
> >> linux-arm-kernel@lists.infradead.org
> >> Subject: Re: i.MX8MM Ethernet TX Bandwidth Fluctuations
> >>
> >> On Mon, May 17, 2021 at 3:25 AM Joakim Zhang
> >> <qiangqing.zhang@nxp.com>
> >> wrote:
> >>>
> >>>
> >>> Hi Frieder,
> >>>
> >>>> -----Original Message-----
> >>>> From: Frieder Schrempf <frieder.schrempf@kontron.de>
> >>>> Sent: 17 May 2021 15:17
> >>>> To: Joakim Zhang <qiangqing.zhang@nxp.com>; dl-linux-imx
> >>>> <linux-imx@nxp.com>; netdev@vger.kernel.org;
> >>>> linux-arm-kernel@lists.infradead.org
> >>>> Subject: Re: i.MX8MM Ethernet TX Bandwidth Fluctuations
> >>>>
> >>>> Hi Joakim,
> >>>>
> >>>> On 13.05.21 14:36, Joakim Zhang wrote:
> >>>>>
> >>>>> Hi Frieder,
> >>>>>
> >>>>> For the NXP release kernel, I tested on i.MX8MQ/MM/MP: I can
> >>>>> reproduce this on L5.10, but not on L5.4.
> >>>>> According to your description, you can reproduce this issue on both
> >>>>> L5.4 and L5.10? So I need to confirm with you.
> >>>>
> >>>> Thanks for looking into this. I could reproduce this on 5.4 and
> >>>> 5.10 but both kernels were official mainline kernels and **not**
> >>>> from the linux-imx downstream tree.
> >>> Ok.
> >>>
> >>>> Maybe there is some problem in the mainline tree and it got
> >>>> included in the NXP release kernel starting from L5.10?
> >>> No, this looks much like a known issue; it has existed ever since
> >>> AVB support was added in mainline.
> >>>
> >>> The ENET IP does not implement _real_ symmetric multiple queues, per
> >>> my understanding: queue 0 is for best effort, while queues 1 & 2 are
> >>> for AVB streams, whose default bandwidth fraction in the driver is 0.5
> >>> (i.e. 50 Mbps on a 100 Mbps link, 500 Mbps on a 1 Gbps link). When
> >>> transmitting packets, the net core selects queues randomly, which
> >>> causes the TX bandwidth fluctuations. So you can switch to a single
> >>> queue if you care more about TX bandwidth, or refer to the NXP
> >>> internal implementation.
> >>> e.g.
> >>> --- a/arch/arm64/boot/dts/freescale/imx8mq.dtsi
> >>> +++ b/arch/arm64/boot/dts/freescale/imx8mq.dtsi
> >>> @@ -916,8 +916,8 @@
> >>>                                          <&clk
> >> IMX8MQ_CLK_ENET_PHY_REF>;
> >>>                                 clock-names = "ipg", "ahb", "ptp",
> >>>                                               "enet_clk_ref",
> >> "enet_out";
> >>> -                               fsl,num-tx-queues = <3>;
> >>> -                               fsl,num-rx-queues = <3>;
> >>> +                               fsl,num-tx-queues = <1>;
> >>> +                               fsl,num-rx-queues = <1>;
> >>>                                 status = "disabled";
> >>>                         };
> >>>                 };
> >>>
> >>> I hope this can help you :)
> >>
> >> Patching out the queues is probably not the right thing.
> >>
> >> For starters: is there BQL support in this driver? It would be
> >> helpful to have on all queues.
> > There is no BQL support in this driver. BQL may improve throughput
> > further, but its absence should not be the root cause of this reported issue.
> >
> >> Also if there was a way to present it as two interfaces, rather than
> >> one, that would allow for a specific avb device to be presented.
> >>
> >> Or:
> >>
> >> Is there a standard means of signalling down the stack via the IP layer (a
> dscp?
> >> a setsockopt?) that the AVB queue is requested?
> >>
> > AFAIK, AVB is in the scope of VLAN, so we can steer AVB packets into
> > queues 1 & 2 based on the VLAN ID.
> 
> I had to look up what AVB even means, but from my current understanding it
> doesn't seem right that for non-AVB packets the driver picks any of the three
> queues in a random fashion while at the same time knowing that queue 1 and 2
> have a 50% limitation on the bandwidth. Shouldn't there be some way to prefer
> queue 0 without needing the user to set it up or even arbitrarily limiting the
> number of queues as proposed above?

Yes, I think we can. Looking into the NXP local implementation, there is a ndo_select_queue callback:
https://source.codeaurora.org/external/imx/linux-imx/tree/drivers/net/ethernet/freescale/fec_main.c?h=lf-5.4.y#n3419
This is the version for the L5.4 kernel.

Best Regards,
Joakim Zhang
> >
> > Best Regards,
> > Joakim Zhang
> >>> Best Regards,
> >>> Joakim Zhang
> >>>> Best regards
> >>>> Frieder
> >>>>
> >>>>>
> >>>>> Best Regards,
> >>>>> Joakim Zhang
> >>>>>
> >>>>>> -----Original Message-----
> >>>>>> From: Joakim Zhang <qiangqing.zhang@nxp.com>
> >>>>>> Sent: 12 May 2021 19:59
> >>>>>> To: Frieder Schrempf <frieder.schrempf@kontron.de>; dl-linux-imx
> >>>>>> <linux-imx@nxp.com>; netdev@vger.kernel.org;
> >>>>>> linux-arm-kernel@lists.infradead.org
> >>>>>> Subject: RE: i.MX8MM Ethernet TX Bandwidth Fluctuations
> >>>>>>
> >>>>>>
> >>>>>> Hi Frieder,
> >>>>>>
> >>>>>> Sorry, I missed this mail before. I can reproduce this issue on
> >>>>>> my side, and I will try my best to look into it.
> >>>>>>
> >>>>>> Best Regards,
> >>>>>> Joakim Zhang
> >>>>>>
> >>>>>>> -----Original Message-----
> >>>>>>> From: Frieder Schrempf <frieder.schrempf@kontron.de>
> >>>>>>> Sent: 6 May 2021 22:46
> >>>>>>> To: dl-linux-imx <linux-imx@nxp.com>; netdev@vger.kernel.org;
> >>>>>>> linux-arm-kernel@lists.infradead.org
> >>>>>>> Subject: i.MX8MM Ethernet TX Bandwidth Fluctuations
> >>>>>>>
> >>>>>>> Hi,
> >>>>>>>
> >>>>>>> we observed some weird phenomenon with the Ethernet on our
> >>>>>>> i.MX8M-Mini boards. It happens quite often that the measured
> >>>>>>> bandwidth in TX direction drops from its expected/nominal value
> >>>>>>> to something like 50% (for 100M) or ~67% (for 1G) connections.
> >>>>>>>
> >>>>>>> So far we reproduced this with two different hardware designs
> >>>>>>> using two different PHYs (RGMII VSC8531 and RMII KSZ8081), two
> >>>>>>> different kernel versions (v5.4 and v5.10) and link speeds of
> >>>>>>> 100M and
> >> 1G.
> >>>>>>>
> >>>>>>> To measure the throughput we simply run iperf3 on the target
> >>>>>>> (with a short p2p connection to the host PC) like this:
> >>>>>>>
> >>>>>>>   iperf3 -c 192.168.1.10 --bidir
> >>>>>>>
> >>>>>>> But even something more simple like this can be used to get the
> >>>>>>> info (with 'nc -l -p 1122 > /dev/null' running on the host):
> >>>>>>>
> >>>>>>>   dd if=/dev/zero bs=10M count=1 | nc 192.168.1.10 1122
> >>>>>>>
> >>>>>>> The results fluctuate between each test run and are sometimes 'good'
> >>>> (e.g.
> >>>>>>> ~90 MBit/s for 100M link) and sometimes 'bad' (e.g. ~45 MBit/s
> >>>>>>> for 100M
> >>>>>> link).
> >>>>>>> There is nothing else running on the system in parallel. Some
> >>>>>>> more info is also available in this post: [1].
> >>>>>>>
> >>>>>>> If there's anyone around who has an idea on what might be the
> >>>>>>> reason for this, please let me know!
> >>>>>>> Or maybe someone would be willing to do a quick test on his own
> >>>> hardware.
> >>>>>>> That would also be highly appreciated!
> >>>>>>>
> >>>>>>> Thanks and best regards
> >>>>>>> Frieder
> >>>>>>>
> >>>>>>> [1]:
> >>>>>>> https://community.nxp.com/t5/i-MX-Processors/i-MX8MM-Ethernet-TX-Bandwidth-Fluctuations/m-p/1242467#M170563
> >>
> >>
> >>
> >> --
> >> Latest Podcast:
> >> https://www.linkedin.com/feed/update/urn:li:activity:6791014284936785920/
> >>
> >> Dave Täht CTO, TekLibre, LLC


Thread overview: 21+ messages
2021-05-06 14:45 i.MX8MM Ethernet TX Bandwidth Fluctuations Frieder Schrempf
2021-05-06 14:53 ` Dave Taht
2021-05-10 12:49   ` Frieder Schrempf
2021-05-10 15:09     ` Dave Taht
2021-05-06 19:20 ` Adam Ford
2021-05-07 15:34   ` Tim Harvey
2021-05-10 12:57     ` Frieder Schrempf
2021-05-10 12:52   ` Frieder Schrempf
2021-05-10 13:10     ` Adam Ford
2021-05-12 11:58 ` Joakim Zhang
2021-05-13 12:36   ` Joakim Zhang
2021-05-17  7:17     ` Frieder Schrempf
2021-05-17 10:22       ` Joakim Zhang
2021-05-17 12:47         ` Dave Taht
2021-05-18 12:35           ` Joakim Zhang
2021-05-18 12:55             ` Frieder Schrempf
2021-05-19  7:49               ` Joakim Zhang [this message]
2021-05-19  8:10                 ` Frieder Schrempf
2021-05-19  8:40                   ` Joakim Zhang
2021-05-19 10:12                     ` Frieder Schrempf
2021-05-19 10:47                       ` Joakim Zhang
