linux-arm-kernel.lists.infradead.org archive mirror
* i.MX8MM Ethernet TX Bandwidth Fluctuations
@ 2021-05-06 14:45 Frieder Schrempf
  2021-05-06 14:53 ` Dave Taht
                   ` (2 more replies)
  0 siblings, 3 replies; 21+ messages in thread
From: Frieder Schrempf @ 2021-05-06 14:45 UTC (permalink / raw)
  To: NXP Linux Team, netdev, linux-arm-kernel

Hi,

We have observed a weird phenomenon with the Ethernet on our i.MX8M-Mini boards: quite often the measured bandwidth in TX direction drops from its expected/nominal value to something like ~50% for 100M connections or ~67% for 1G connections.

So far we reproduced this with two different hardware designs using two different PHYs (RGMII VSC8531 and RMII KSZ8081), two different kernel versions (v5.4 and v5.10) and link speeds of 100M and 1G.

To measure the throughput we simply run iperf3 on the target (with a short p2p connection to the host PC) like this:

	iperf3 -c 192.168.1.10 --bidir

But even something simpler like this can be used to get the same info (with 'nc -l -p 1122 > /dev/null' running on the host):

	dd if=/dev/zero bs=10M count=1 | nc 192.168.1.10 1122

The results fluctuate from one test run to the next and are sometimes 'good' (e.g. ~90 MBit/s for a 100M link) and sometimes 'bad' (e.g. ~45 MBit/s for a 100M link).
There is nothing else running on the system in parallel. Some more info is also available in this post: [1].

If there's anyone around who has an idea on what might be the reason for this, please let me know!
Or maybe someone would be willing to do a quick test on his own hardware. That would also be highly appreciated!

Thanks and best regards
Frieder

[1]: https://community.nxp.com/t5/i-MX-Processors/i-MX8MM-Ethernet-TX-Bandwidth-Fluctuations/m-p/1242467#M170563


* Re: i.MX8MM Ethernet TX Bandwidth Fluctuations
  2021-05-06 14:45 i.MX8MM Ethernet TX Bandwidth Fluctuations Frieder Schrempf
@ 2021-05-06 14:53 ` Dave Taht
  2021-05-10 12:49   ` Frieder Schrempf
  2021-05-06 19:20 ` Adam Ford
  2021-05-12 11:58 ` Joakim Zhang
  2 siblings, 1 reply; 21+ messages in thread
From: Dave Taht @ 2021-05-06 14:53 UTC (permalink / raw)
  To: Frieder Schrempf; +Cc: NXP Linux Team, netdev, linux-arm-kernel

I am a big fan of bql - is that implemented on this driver?

cd /sys/class/net/your_device_name/queues/tx-0/byte_queue_limits/
cat limit

see also bqlmon from github

is fq_codel running on the ethernet interface? the iperf bidir test
does much better with that in place rather than a fifo. Check with:

tc -s qdisc show dev your_device

Also I tend to run tests using the flent tool, which will yield more
data. Install netperf and irtt on the target, and flent, netperf and
irtt on the test driver box...

flent -H the-target-ip -x --socket-stats -t whateveryouaretesting rrul
# the meanest bidir test there

flent-gui *.gz

On Thu, May 6, 2021 at 7:47 AM Frieder Schrempf
<frieder.schrempf@kontron.de> wrote:
>
> Hi,
>
> we observed some weird phenomenon with the Ethernet on our i.MX8M-Mini boards. It happens quite often that the measured bandwidth in TX direction drops from its expected/nominal value to something like 50% (for 100M) or ~67% (for 1G) connections.
>
> So far we reproduced this with two different hardware designs using two different PHYs (RGMII VSC8531 and RMII KSZ8081), two different kernel versions (v5.4 and v5.10) and link speeds of 100M and 1G.
>
> To measure the throughput we simply run iperf3 on the target (with a short p2p connection to the host PC) like this:
>
>         iperf3 -c 192.168.1.10 --bidir
>
> But even something more simple like this can be used to get the info (with 'nc -l -p 1122 > /dev/null' running on the host):
>
>         dd if=/dev/zero bs=10M count=1 | nc 192.168.1.10 1122
>
> The results fluctuate between each test run and are sometimes 'good' (e.g. ~90 MBit/s for 100M link) and sometimes 'bad' (e.g. ~45 MBit/s for 100M link).
> There is nothing else running on the system in parallel. Some more info is also available in this post: [1].
>
> If there's anyone around who has an idea on what might be the reason for this, please let me know!
> Or maybe someone would be willing to do a quick test on his own hardware. That would also be highly appreciated!
>
> Thanks and best regards
> Frieder
>
> [1]: https://community.nxp.com/t5/i-MX-Processors/i-MX8MM-Ethernet-TX-Bandwidth-Fluctuations/m-p/1242467#M170563



-- 
Latest Podcast:
https://www.linkedin.com/feed/update/urn:li:activity:6791014284936785920/

Dave Täht CTO, TekLibre, LLC


* Re: i.MX8MM Ethernet TX Bandwidth Fluctuations
  2021-05-06 14:45 i.MX8MM Ethernet TX Bandwidth Fluctuations Frieder Schrempf
  2021-05-06 14:53 ` Dave Taht
@ 2021-05-06 19:20 ` Adam Ford
  2021-05-07 15:34   ` Tim Harvey
  2021-05-10 12:52   ` Frieder Schrempf
  2021-05-12 11:58 ` Joakim Zhang
  2 siblings, 2 replies; 21+ messages in thread
From: Adam Ford @ 2021-05-06 19:20 UTC (permalink / raw)
  To: Frieder Schrempf; +Cc: NXP Linux Team, netdev, linux-arm-kernel

On Thu, May 6, 2021 at 9:51 AM Frieder Schrempf
<frieder.schrempf@kontron.de> wrote:
>
> Hi,
>
> we observed some weird phenomenon with the Ethernet on our i.MX8M-Mini boards. It happens quite often that the measured bandwidth in TX direction drops from its expected/nominal value to something like 50% (for 100M) or ~67% (for 1G) connections.
>
> So far we reproduced this with two different hardware designs using two different PHYs (RGMII VSC8531 and RMII KSZ8081), two different kernel versions (v5.4 and v5.10) and link speeds of 100M and 1G.
>
> To measure the throughput we simply run iperf3 on the target (with a short p2p connection to the host PC) like this:
>
>         iperf3 -c 192.168.1.10 --bidir
>
> But even something more simple like this can be used to get the info (with 'nc -l -p 1122 > /dev/null' running on the host):
>
>         dd if=/dev/zero bs=10M count=1 | nc 192.168.1.10 1122
>
> The results fluctuate between each test run and are sometimes 'good' (e.g. ~90 MBit/s for 100M link) and sometimes 'bad' (e.g. ~45 MBit/s for 100M link).
> There is nothing else running on the system in parallel. Some more info is also available in this post: [1].
>
> If there's anyone around who has an idea on what might be the reason for this, please let me know!
> Or maybe someone would be willing to do a quick test on his own hardware. That would also be highly appreciated!

I have seen a similar regression on linux-next on both Mini and Nano.
I thought I broke something, but it returned to normal after a reboot.
However, with a 1Gb connection, I was running at ~450 Mbit/s, which is
consistent with what you were seeing with a 100Mb link.

adam

>
> Thanks and best regards
> Frieder
>
> [1]: https://community.nxp.com/t5/i-MX-Processors/i-MX8MM-Ethernet-TX-Bandwidth-Fluctuations/m-p/1242467#M170563
>

* Re: i.MX8MM Ethernet TX Bandwidth Fluctuations
  2021-05-06 19:20 ` Adam Ford
@ 2021-05-07 15:34   ` Tim Harvey
  2021-05-10 12:57     ` Frieder Schrempf
  2021-05-10 12:52   ` Frieder Schrempf
  1 sibling, 1 reply; 21+ messages in thread
From: Tim Harvey @ 2021-05-07 15:34 UTC (permalink / raw)
  To: Adam Ford, Frieder Schrempf; +Cc: NXP Linux Team, netdev, linux-arm-kernel

On Thu, May 6, 2021 at 12:20 PM Adam Ford <aford173@gmail.com> wrote:
>
> On Thu, May 6, 2021 at 9:51 AM Frieder Schrempf
> <frieder.schrempf@kontron.de> wrote:
> >
> > Hi,
> >
> > we observed some weird phenomenon with the Ethernet on our i.MX8M-Mini boards. It happens quite often that the measured bandwidth in TX direction drops from its expected/nominal value to something like 50% (for 100M) or ~67% (for 1G) connections.
> >
> > So far we reproduced this with two different hardware designs using two different PHYs (RGMII VSC8531 and RMII KSZ8081), two different kernel versions (v5.4 and v5.10) and link speeds of 100M and 1G.
> >
> > To measure the throughput we simply run iperf3 on the target (with a short p2p connection to the host PC) like this:
> >
> >         iperf3 -c 192.168.1.10 --bidir
> >
> > But even something more simple like this can be used to get the info (with 'nc -l -p 1122 > /dev/null' running on the host):
> >
> >         dd if=/dev/zero bs=10M count=1 | nc 192.168.1.10 1122
> >
> > The results fluctuate between each test run and are sometimes 'good' (e.g. ~90 MBit/s for 100M link) and sometimes 'bad' (e.g. ~45 MBit/s for 100M link).
> > There is nothing else running on the system in parallel. Some more info is also available in this post: [1].
> >
> > If there's anyone around who has an idea on what might be the reason for this, please let me know!
> > Or maybe someone would be willing to do a quick test on his own hardware. That would also be highly appreciated!
>
> I have seen a similar regression on linux-next on both Mini and Nano.
> I thought I broke something, but it returned to normal after a reboot.
>   However, with a 1Gb connection, I was running at ~450 Mbs which is
> consistent with what you were seeing with a 100Mb link.
>
> adam
>

Frieder,

I've noticed this as well on our designs with IMX8MN+DP83867 and
IMX8MM+KSZ9897S. I have noticed it on all kernels I've tested, and it
appears to latch back and forth between 50% and 100% line speed every
few times I run a 10s iperf3.

I have no idea what it is, but I'm glad you are asking and hope someone
knows how to fix it!

Best Regards,

Tim


* Re: i.MX8MM Ethernet TX Bandwidth Fluctuations
  2021-05-06 14:53 ` Dave Taht
@ 2021-05-10 12:49   ` Frieder Schrempf
  2021-05-10 15:09     ` Dave Taht
  0 siblings, 1 reply; 21+ messages in thread
From: Frieder Schrempf @ 2021-05-10 12:49 UTC (permalink / raw)
  To: Dave Taht; +Cc: NXP Linux Team, netdev, linux-arm-kernel

Hi Dave,

thanks for the input. I really don't know much about the networking stack, so at the moment I can only provide the values requested below, without knowing what they really mean.

What's so strange is that the performance is actually good in general and only "snaps" to the "bad" state and back after some time or after repeated test runs.

And by the way, the ethernet driver in use is the FEC driver at drivers/net/ethernet/freescale/fec_main.c.

On 06.05.21 16:53, Dave Taht wrote:
> I am a big fan of bql - is that implemented on this driver?
> 
> cd /sys/class/net/your_device_name/queues/tx-0/byte_queue_limits/
> cat limit

~# cat /sys/class/net/eth0/queues/tx-0/byte_queue_limits/limit
0

> 
> see also bqlmon from github
> 
> is fq_codel running on the ethernet interface? the iperf bidir test
> does much better with that in place rather than a fifo. tc -s qdisc
> show dev your_device

~# tc -s qdisc show dev eth0
RTNETLINK answers: Operation not supported
Dump terminated

Best regards
Frieder

> 
> Also I tend to run tests using the flent tool, which will yield more
> data. Install netperf and irtt on the target, flent, netperf, irtt on
> the test driver box...
> 
> flent -H the-target-ip -x --socket-stats -t whateveryouaretesting rrul
> # the meanest bidir test there
> 
> flent-gui *.gz
> 
> On Thu, May 6, 2021 at 7:47 AM Frieder Schrempf
> <frieder.schrempf@kontron.de> wrote:
>>
>> Hi,
>>
>> we observed some weird phenomenon with the Ethernet on our i.MX8M-Mini boards. It happens quite often that the measured bandwidth in TX direction drops from its expected/nominal value to something like 50% (for 100M) or ~67% (for 1G) connections.
>>
>> So far we reproduced this with two different hardware designs using two different PHYs (RGMII VSC8531 and RMII KSZ8081), two different kernel versions (v5.4 and v5.10) and link speeds of 100M and 1G.
>>
>> To measure the throughput we simply run iperf3 on the target (with a short p2p connection to the host PC) like this:
>>
>>         iperf3 -c 192.168.1.10 --bidir
>>
>> But even something more simple like this can be used to get the info (with 'nc -l -p 1122 > /dev/null' running on the host):
>>
>>         dd if=/dev/zero bs=10M count=1 | nc 192.168.1.10 1122
>>
>> The results fluctuate between each test run and are sometimes 'good' (e.g. ~90 MBit/s for 100M link) and sometimes 'bad' (e.g. ~45 MBit/s for 100M link).
>> There is nothing else running on the system in parallel. Some more info is also available in this post: [1].
>>
>> If there's anyone around who has an idea on what might be the reason for this, please let me know!
>> Or maybe someone would be willing to do a quick test on his own hardware. That would also be highly appreciated!
>>
>> Thanks and best regards
>> Frieder
>>
>> [1]: https://community.nxp.com/t5/i-MX-Processors/i-MX8MM-Ethernet-TX-Bandwidth-Fluctuations/m-p/1242467#M170563
> 
> 
> 


* Re: i.MX8MM Ethernet TX Bandwidth Fluctuations
  2021-05-06 19:20 ` Adam Ford
  2021-05-07 15:34   ` Tim Harvey
@ 2021-05-10 12:52   ` Frieder Schrempf
  2021-05-10 13:10     ` Adam Ford
  1 sibling, 1 reply; 21+ messages in thread
From: Frieder Schrempf @ 2021-05-10 12:52 UTC (permalink / raw)
  To: Adam Ford; +Cc: NXP Linux Team, netdev, linux-arm-kernel

Hi Adam,

On 06.05.21 21:20, Adam Ford wrote:
> On Thu, May 6, 2021 at 9:51 AM Frieder Schrempf
> <frieder.schrempf@kontron.de> wrote:
>>
>> Hi,
>>
>> we observed some weird phenomenon with the Ethernet on our i.MX8M-Mini boards. It happens quite often that the measured bandwidth in TX direction drops from its expected/nominal value to something like 50% (for 100M) or ~67% (for 1G) connections.
>>
>> So far we reproduced this with two different hardware designs using two different PHYs (RGMII VSC8531 and RMII KSZ8081), two different kernel versions (v5.4 and v5.10) and link speeds of 100M and 1G.
>>
>> To measure the throughput we simply run iperf3 on the target (with a short p2p connection to the host PC) like this:
>>
>>         iperf3 -c 192.168.1.10 --bidir
>>
>> But even something more simple like this can be used to get the info (with 'nc -l -p 1122 > /dev/null' running on the host):
>>
>>         dd if=/dev/zero bs=10M count=1 | nc 192.168.1.10 1122
>>
>> The results fluctuate between each test run and are sometimes 'good' (e.g. ~90 MBit/s for 100M link) and sometimes 'bad' (e.g. ~45 MBit/s for 100M link).
>> There is nothing else running on the system in parallel. Some more info is also available in this post: [1].
>>
>> If there's anyone around who has an idea on what might be the reason for this, please let me know!
>> Or maybe someone would be willing to do a quick test on his own hardware. That would also be highly appreciated!
> 
> I have seen a similar regression on linux-next on both Mini and Nano.
> I thought I broke something, but it returned to normal after a reboot.
>   However, with a 1Gb connection, I was running at ~450 Mbs which is
> consistent with what you were seeing with a 100Mb link.

Thanks for your response. If you say "regression", does this mean that you had some previous version where this issue didn't occur? As for me, I can see it on 5.4 and 5.10, but I haven't tried anything else so far.

Best regards
Frieder


* Re: i.MX8MM Ethernet TX Bandwidth Fluctuations
  2021-05-07 15:34   ` Tim Harvey
@ 2021-05-10 12:57     ` Frieder Schrempf
  0 siblings, 0 replies; 21+ messages in thread
From: Frieder Schrempf @ 2021-05-10 12:57 UTC (permalink / raw)
  To: Tim Harvey, Adam Ford; +Cc: NXP Linux Team, netdev, linux-arm-kernel

Hi Tim,

On 07.05.21 17:34, Tim Harvey wrote:
> On Thu, May 6, 2021 at 12:20 PM Adam Ford <aford173@gmail.com> wrote:
>>
>> On Thu, May 6, 2021 at 9:51 AM Frieder Schrempf
>> <frieder.schrempf@kontron.de> wrote:
>>>
>>> Hi,
>>>
>>> we observed some weird phenomenon with the Ethernet on our i.MX8M-Mini boards. It happens quite often that the measured bandwidth in TX direction drops from its expected/nominal value to something like 50% (for 100M) or ~67% (for 1G) connections.
>>>
>>> So far we reproduced this with two different hardware designs using two different PHYs (RGMII VSC8531 and RMII KSZ8081), two different kernel versions (v5.4 and v5.10) and link speeds of 100M and 1G.
>>>
>>> To measure the throughput we simply run iperf3 on the target (with a short p2p connection to the host PC) like this:
>>>
>>>         iperf3 -c 192.168.1.10 --bidir
>>>
>>> But even something more simple like this can be used to get the info (with 'nc -l -p 1122 > /dev/null' running on the host):
>>>
>>>         dd if=/dev/zero bs=10M count=1 | nc 192.168.1.10 1122
>>>
>>> The results fluctuate between each test run and are sometimes 'good' (e.g. ~90 MBit/s for 100M link) and sometimes 'bad' (e.g. ~45 MBit/s for 100M link).
>>> There is nothing else running on the system in parallel. Some more info is also available in this post: [1].
>>>
>>> If there's anyone around who has an idea on what might be the reason for this, please let me know!
>>> Or maybe someone would be willing to do a quick test on his own hardware. That would also be highly appreciated!
>>
>> I have seen a similar regression on linux-next on both Mini and Nano.
>> I thought I broke something, but it returned to normal after a reboot.
>>   However, with a 1Gb connection, I was running at ~450 Mbs which is
>> consistent with what you were seeing with a 100Mb link.
>>
>> adam
>>
> 
> Frieder,
> 
> I've noticed this as well on our designs with IMX8MN+DP83867 and
> IMX8MM+KSZ9897S. I also notice it with IMX8MN+DP83867. I have noticed
> it on all kernels I've tested and it appears to latch back and forth
> every few times I run a 10s iperf3 between 50% and 100% line speed.
> 
> I have no idea what it is but glad you are asking and hope someone
> knows how to fix it!

Thanks for providing that information. Yes, the occasional latching effect between "slow" and normal speed is exactly what I'm seeing, too. Good to know that this is not only happening at my end!

Best regards
Frieder


* Re: i.MX8MM Ethernet TX Bandwidth Fluctuations
  2021-05-10 12:52   ` Frieder Schrempf
@ 2021-05-10 13:10     ` Adam Ford
  0 siblings, 0 replies; 21+ messages in thread
From: Adam Ford @ 2021-05-10 13:10 UTC (permalink / raw)
  To: Frieder Schrempf; +Cc: NXP Linux Team, netdev, linux-arm-kernel

On Mon, May 10, 2021 at 7:52 AM Frieder Schrempf
<frieder.schrempf@kontron.de> wrote:
>
> Hi Adam,
>
> On 06.05.21 21:20, Adam Ford wrote:
> > On Thu, May 6, 2021 at 9:51 AM Frieder Schrempf
> > <frieder.schrempf@kontron.de> wrote:
> >>
> >> Hi,
> >>
> >> we observed some weird phenomenon with the Ethernet on our i.MX8M-Mini boards. It happens quite often that the measured bandwidth in TX direction drops from its expected/nominal value to something like 50% (for 100M) or ~67% (for 1G) connections.
> >>
> >> So far we reproduced this with two different hardware designs using two different PHYs (RGMII VSC8531 and RMII KSZ8081), two different kernel versions (v5.4 and v5.10) and link speeds of 100M and 1G.
> >>
> >> To measure the throughput we simply run iperf3 on the target (with a short p2p connection to the host PC) like this:
> >>
> >>         iperf3 -c 192.168.1.10 --bidir
> >>
> >> But even something more simple like this can be used to get the info (with 'nc -l -p 1122 > /dev/null' running on the host):
> >>
> >>         dd if=/dev/zero bs=10M count=1 | nc 192.168.1.10 1122
> >>
> >> The results fluctuate between each test run and are sometimes 'good' (e.g. ~90 MBit/s for 100M link) and sometimes 'bad' (e.g. ~45 MBit/s for 100M link).
> >> There is nothing else running on the system in parallel. Some more info is also available in this post: [1].
> >>
> >> If there's anyone around who has an idea on what might be the reason for this, please let me know!
> >> Or maybe someone would be willing to do a quick test on his own hardware. That would also be highly appreciated!
> >
> > I have seen a similar regression on linux-next on both Mini and Nano.
> > I thought I broke something, but it returned to normal after a reboot.
> >   However, with a 1Gb connection, I was running at ~450 Mbs which is
> > consistent with what you were seeing with a 100Mb link.
>
> Thanks for your response. If you say "regression" does this mean that you had some previous version where this issue didn't occur? As for me, I can see it on 5.4 and 5.10, but I didn't try it with anything else so far.

I have not seen this in the 4.19 kernel that NXP provided, but I have
seen it intermittently in 5.10, so I called it a regression.

adam
>
> Best regards
> Frieder


* Re: i.MX8MM Ethernet TX Bandwidth Fluctuations
  2021-05-10 12:49   ` Frieder Schrempf
@ 2021-05-10 15:09     ` Dave Taht
  0 siblings, 0 replies; 21+ messages in thread
From: Dave Taht @ 2021-05-10 15:09 UTC (permalink / raw)
  To: Frieder Schrempf; +Cc: NXP Linux Team, netdev, linux-arm-kernel

On Mon, May 10, 2021 at 5:49 AM Frieder Schrempf
<frieder.schrempf@kontron.de> wrote:
>
> Hi Dave,
>
> thanks for the input. I really don't know much about the networking stack, so at the moment I can only provide the values requested below, without knowing what it really means.
>
> What's so strange is, that the performance is actually good in general and only "snaps" to the "bad" state and back after some time or after repeated test runs.
>
> And by the way, the ethernet driver in use is the FEC driver at drivers/net/ethernet/freescale/fec_main.c.

It doesn't look (from a quick grep) like that driver ever got BQL support.

davet@Georges-MacBook-Pro freescale % grep sent_queue *.c
gianfar.c:    netdev_tx_sent_queue(txq, bytes_sent);
ucc_geth.c:    netdev_sent_queue(dev, skb->len);

If you really care about bidirectional throughput, enormous FIFO
buffers buried deep in the driver have a tendency to hurt it a lot and
produce the symptoms you describe, though not as persistently, so I
would suspect another bug involving GSO or GRO to start with...

BUT: I note that the effort of implementing BQL and testing the packet
size accounting usually shows up other problems in the TX/RX ring, GRO
or NAPI code, and is thus a worthwhile exercise that might find where
things are getting stuck.
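
To make that concrete, here is a minimal sketch of where the BQL hooks
would sit in a FEC-style driver. This is not actual fec_main.c code; the
helper names and parameters are made up for illustration. The real work
is simply calling netdev_tx_sent_queue() in the xmit path and
netdev_tx_completed_queue() in the TX-clean path with matching byte
counts:

	#include <linux/netdevice.h>

	/* Hypothetical: called from the ndo_start_xmit path once a frame
	 * has been placed on the hardware TX ring. */
	static void sketch_tx_queued(struct net_device *ndev, u16 queue,
				     unsigned int bytes)
	{
		struct netdev_queue *txq = netdev_get_tx_queue(ndev, queue);

		/* Account the bytes as in flight; once the BQL limit is
		 * reached the stack stops feeding this queue. */
		netdev_tx_sent_queue(txq, bytes);
	}

	/* Hypothetical: called from the TX-completion (NAPI clean) path
	 * after descriptors have been reclaimed. */
	static void sketch_tx_completed(struct net_device *ndev, u16 queue,
					unsigned int pkts, unsigned int bytes)
	{
		struct netdev_queue *txq = netdev_get_tx_queue(ndev, queue);

		/* Release the completed bytes so the stack can refill the
		 * queue, keeping the hardware ring from acting as a deep
		 * FIFO. */
		netdev_tx_completed_queue(txq, pkts, bytes);
	}

Getting the two byte counts to agree exactly is where this exercise
tends to flush out ring/NAPI accounting bugs.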

It doesn't appear your kernel has fq_codel qdisc support either, which
means big dumb FIFOs at that layer as well, drastically affecting bidir
throughput.

Since the NXP team is cc'd, here is a presentation I'd given Broadcom back in 2018:

http://www.taht.net/~d/broadcom_aug9_2018.pdf

And the relevant LWN articles from, like, 2011:

https://lwn.net/Articles/454390/

https://lwn.net/Articles/469652/


If someone wants to send me a board to play with...

> On 06.05.21 16:53, Dave Taht wrote:
> > I am a big fan of bql - is that implemented on this driver?
> >
> > cd /sys/class/net/your_device_name/queues/tx-0/byte_queue_limits/
> > cat limit
>
> ~# cat /sys/class/net/eth0/queues/tx-0/byte_queue_limits/limit
> 0
>
> >
> > see also bqlmon from github
> >
> > is fq_codel running on the ethernet interface? the iperf bidir test
> > does much better with that in place rather than a fifo. tc -s qdisc
> > show dev your_device
>
> ~# tc -s qdisc show dev eth0
> RTNETLINK answers: Operation not supported
> Dump terminated
>
> Best regards
> Frieder
>
> >
> > Also I tend to run tests using the flent tool, which will yield more
> > data. Install netperf and irtt on the target, flent, netperf, irtt on
> > the test driver box...
> >
> > flent -H the-target-ip -x --socket-stats -t whateveryouaretesting rrul
> > # the meanest bidir test there
> >
> > flent-gui *.gz
> >
> > On Thu, May 6, 2021 at 7:47 AM Frieder Schrempf
> > <frieder.schrempf@kontron.de> wrote:
> >>
> >> Hi,
> >>
> >> we observed some weird phenomenon with the Ethernet on our i.MX8M-Mini boards. It happens quite often that the measured bandwidth in TX direction drops from its expected/nominal value to something like 50% (for 100M) or ~67% (for 1G) connections.
> >>
> >> So far we reproduced this with two different hardware designs using two different PHYs (RGMII VSC8531 and RMII KSZ8081), two different kernel versions (v5.4 and v5.10) and link speeds of 100M and 1G.
> >>
> >> To measure the throughput we simply run iperf3 on the target (with a short p2p connection to the host PC) like this:
> >>
> >>         iperf3 -c 192.168.1.10 --bidir
> >>
> >> But even something more simple like this can be used to get the info (with 'nc -l -p 1122 > /dev/null' running on the host):
> >>
> >>         dd if=/dev/zero bs=10M count=1 | nc 192.168.1.10 1122
> >>
> >> The results fluctuate between each test run and are sometimes 'good' (e.g. ~90 MBit/s for 100M link) and sometimes 'bad' (e.g. ~45 MBit/s for 100M link).
> >> There is nothing else running on the system in parallel. Some more info is also available in this post: [1].
> >>
> >> If there's anyone around who has an idea on what might be the reason for this, please let me know!
> >> Or maybe someone would be willing to do a quick test on his own hardware. That would also be highly appreciated!
> >>
> >> Thanks and best regards
> >> Frieder
> >>
> >> [1]: https://community.nxp.com/t5/i-MX-Processors/i-MX8MM-Ethernet-TX-Bandwidth-Fluctuations/m-p/1242467#M170563
> >
> >
> >



-- 
Latest Podcast:
https://www.linkedin.com/feed/update/urn:li:activity:6791014284936785920/

Dave Täht CTO, TekLibre, LLC


* RE: i.MX8MM Ethernet TX Bandwidth Fluctuations
  2021-05-06 14:45 i.MX8MM Ethernet TX Bandwidth Fluctuations Frieder Schrempf
  2021-05-06 14:53 ` Dave Taht
  2021-05-06 19:20 ` Adam Ford
@ 2021-05-12 11:58 ` Joakim Zhang
  2021-05-13 12:36   ` Joakim Zhang
  2 siblings, 1 reply; 21+ messages in thread
From: Joakim Zhang @ 2021-05-12 11:58 UTC (permalink / raw)
  To: Frieder Schrempf, dl-linux-imx, netdev, linux-arm-kernel


Hi Frieder,

Sorry, I missed this mail before. I can reproduce this issue on my side and will try my best to look into it.

Best Regards,
Joakim Zhang

> -----Original Message-----
> From: Frieder Schrempf <frieder.schrempf@kontron.de>
> Sent: 2021-05-06 22:46
> To: dl-linux-imx <linux-imx@nxp.com>; netdev@vger.kernel.org;
> linux-arm-kernel@lists.infradead.org
> Subject: i.MX8MM Ethernet TX Bandwidth Fluctuations
> 
> Hi,
> 
> we observed some weird phenomenon with the Ethernet on our i.MX8M-Mini
> boards. It happens quite often that the measured bandwidth in TX direction
> drops from its expected/nominal value to something like 50% (for 100M) or ~67%
> (for 1G) connections.
> 
> So far we reproduced this with two different hardware designs using two
> different PHYs (RGMII VSC8531 and RMII KSZ8081), two different kernel
> versions (v5.4 and v5.10) and link speeds of 100M and 1G.
> 
> To measure the throughput we simply run iperf3 on the target (with a short
> p2p connection to the host PC) like this:
> 
> 	iperf3 -c 192.168.1.10 --bidir
> 
> But even something more simple like this can be used to get the info (with 'nc -l
> -p 1122 > /dev/null' running on the host):
> 
> 	dd if=/dev/zero bs=10M count=1 | nc 192.168.1.10 1122
> 
> The results fluctuate between each test run and are sometimes 'good' (e.g.
> ~90 MBit/s for 100M link) and sometimes 'bad' (e.g. ~45 MBit/s for 100M link).
> There is nothing else running on the system in parallel. Some more info is also
> available in this post: [1].
> 
> If there's anyone around who has an idea on what might be the reason for this,
> please let me know!
> Or maybe someone would be willing to do a quick test on his own hardware.
> That would also be highly appreciated!
> 
> Thanks and best regards
> Frieder
> 
> [1]: https://community.nxp.com/t5/i-MX-Processors/i-MX8MM-Ethernet-TX-Bandwidth-Fluctuations/m-p/1242467#M170563

* RE: i.MX8MM Ethernet TX Bandwidth Fluctuations
  2021-05-12 11:58 ` Joakim Zhang
@ 2021-05-13 12:36   ` Joakim Zhang
  2021-05-17  7:17     ` Frieder Schrempf
  0 siblings, 1 reply; 21+ messages in thread
From: Joakim Zhang @ 2021-05-13 12:36 UTC (permalink / raw)
  To: Frieder Schrempf, dl-linux-imx, netdev, linux-arm-kernel


Hi Frieder,

For the NXP release kernel, I tested on i.MX8MQ/MM/MP: I can reproduce this on L5.10, but can't reproduce it on L5.4.
According to your description, you can reproduce this issue on both L5.4 and L5.10? I need to confirm this with you.

Best Regards,
Joakim Zhang

> -----Original Message-----
> From: Joakim Zhang <qiangqing.zhang@nxp.com>
> Sent: 2021-05-12 19:59
> To: Frieder Schrempf <frieder.schrempf@kontron.de>; dl-linux-imx
> <linux-imx@nxp.com>; netdev@vger.kernel.org;
> linux-arm-kernel@lists.infradead.org
> Subject: RE: i.MX8MM Ethernet TX Bandwidth Fluctuations
> 
> 
> Hi Frieder,
> 
> Sorry, I missed this mail before, I can reproduce this issue at my side, I will try
> my best to look into this issue.
> 
> Best Regards,
> Joakim Zhang
> 
> > -----Original Message-----
> > From: Frieder Schrempf <frieder.schrempf@kontron.de>
> > Sent: 2021-05-06 22:46
> > To: dl-linux-imx <linux-imx@nxp.com>; netdev@vger.kernel.org;
> > linux-arm-kernel@lists.infradead.org
> > Subject: i.MX8MM Ethernet TX Bandwidth Fluctuations
> >
> > Hi,
> >
> > we observed some weird phenomenon with the Ethernet on our i.MX8M-Mini
> > boards. It happens quite often that the measured bandwidth in TX
> > direction drops from its expected/nominal value to something like 50%
> > (for 100M) or ~67% (for 1G) connections.
> >
> > So far we reproduced this with two different hardware designs using
> > two different PHYs (RGMII VSC8531 and RMII KSZ8081), two different
> > kernel versions (v5.4 and v5.10) and link speeds of 100M and 1G.
> >
> > To measure the throughput we simply run iperf3 on the target (with a
> > short p2p connection to the host PC) like this:
> >
> > 	iperf3 -c 192.168.1.10 --bidir
> >
> > But even something more simple like this can be used to get the info
> > (with 'nc -l -p 1122 > /dev/null' running on the host):
> >
> > 	dd if=/dev/zero bs=10M count=1 | nc 192.168.1.10 1122
> >
> > The results fluctuate between each test run and are sometimes 'good' (e.g.
> > ~90 MBit/s for 100M link) and sometimes 'bad' (e.g. ~45 MBit/s for 100M
> link).
> > There is nothing else running on the system in parallel. Some more
> > info is also available in this post: [1].
> >
> > If there's anyone around who has an idea on what might be the reason
> > for this, please let me know!
> > Or maybe someone would be willing to do a quick test on his own hardware.
> > That would also be highly appreciated!
> >
> > Thanks and best regards
> > Frieder
> >
> > [1]: https://community.nxp.com/t5/i-MX-Processors/i-MX8MM-Ethernet-TX-Bandwidth-Fluctuations/m-p/1242467#M170563

* Re: i.MX8MM Ethernet TX Bandwidth Fluctuations
  2021-05-13 12:36   ` Joakim Zhang
@ 2021-05-17  7:17     ` Frieder Schrempf
  2021-05-17 10:22       ` Joakim Zhang
  0 siblings, 1 reply; 21+ messages in thread
From: Frieder Schrempf @ 2021-05-17  7:17 UTC (permalink / raw)
  To: Joakim Zhang, dl-linux-imx, netdev, linux-arm-kernel

Hi Joakim,

On 13.05.21 14:36, Joakim Zhang wrote:
> 
> Hi Frieder,
> 
> For NXP release kernel, I tested on i.MX8MQ/MM/MP, I can reproduce on L5.10, and can't reproduce on L5.4.
> According to your description, you can reproduce this issue both L5.4 and L5.10? So I need confirm with you.

Thanks for looking into this. I could reproduce this on 5.4 and 5.10 but both kernels were official mainline kernels and **not** from the linux-imx downstream tree.

Maybe there is some problem in the mainline tree and it got included in the NXP release kernel starting from L5.10?

Best regards
Frieder

> 
> Best Regards,
> Joakim Zhang
> 
>> -----Original Message-----
>> From: Joakim Zhang <qiangqing.zhang@nxp.com>
>> Sent: 2021-05-12 19:59
>> To: Frieder Schrempf <frieder.schrempf@kontron.de>; dl-linux-imx
>> <linux-imx@nxp.com>; netdev@vger.kernel.org;
>> linux-arm-kernel@lists.infradead.org
>> Subject: RE: i.MX8MM Ethernet TX Bandwidth Fluctuations
>>
>>
>> Hi Frieder,
>>
>> Sorry, I missed this mail before, I can reproduce this issue at my side, I will try
>> my best to look into this issue.
>>
>> Best Regards,
>> Joakim Zhang
>>
>>> -----Original Message-----
>>> From: Frieder Schrempf <frieder.schrempf@kontron.de>
>>> Sent: 2021-05-06 22:46
>>> To: dl-linux-imx <linux-imx@nxp.com>; netdev@vger.kernel.org;
>>> linux-arm-kernel@lists.infradead.org
>>> Subject: i.MX8MM Ethernet TX Bandwidth Fluctuations
>>>
>>> Hi,
>>>
>>> we observed some weird phenomenon with the Ethernet on our i.MX8M-Mini
>>> boards. It happens quite often that the measured bandwidth in TX
>>> direction drops from its expected/nominal value to something like 50%
>>> (for 100M) or ~67% (for 1G) connections.
>>>
>>> So far we reproduced this with two different hardware designs using
>>> two different PHYs (RGMII VSC8531 and RMII KSZ8081), two different
>>> kernel versions (v5.4 and v5.10) and link speeds of 100M and 1G.
>>>
>>> To measure the throughput we simply run iperf3 on the target (with a
>>> short p2p connection to the host PC) like this:
>>>
>>> 	iperf3 -c 192.168.1.10 --bidir
>>>
>>> But even something more simple like this can be used to get the info
>>> (with 'nc -l -p 1122 > /dev/null' running on the host):
>>>
>>> 	dd if=/dev/zero bs=10M count=1 | nc 192.168.1.10 1122
>>>
>>> The results fluctuate between each test run and are sometimes 'good' (e.g.
>>> ~90 MBit/s for 100M link) and sometimes 'bad' (e.g. ~45 MBit/s for 100M
>> link).
>>> There is nothing else running on the system in parallel. Some more
>>> info is also available in this post: [1].
>>>
>>> If there's anyone around who has an idea on what might be the reason
>>> for this, please let me know!
>>> Or maybe someone would be willing to do a quick test on his own hardware.
>>> That would also be highly appreciated!
>>>
>>> Thanks and best regards
>>> Frieder
>>>
>>> [1]: https://community.nxp.com/t5/i-MX-Processors/i-MX8MM-Ethernet-TX-Bandwidth-Fluctuations/m-p/1242467#M170563


* RE: i.MX8MM Ethernet TX Bandwidth Fluctuations
  2021-05-17  7:17     ` Frieder Schrempf
@ 2021-05-17 10:22       ` Joakim Zhang
  2021-05-17 12:47         ` Dave Taht
  0 siblings, 1 reply; 21+ messages in thread
From: Joakim Zhang @ 2021-05-17 10:22 UTC (permalink / raw)
  To: Frieder Schrempf, dl-linux-imx, netdev, linux-arm-kernel


Hi Frieder,

> -----Original Message-----
> From: Frieder Schrempf <frieder.schrempf@kontron.de>
> Sent: 2021-05-17 15:17
> To: Joakim Zhang <qiangqing.zhang@nxp.com>; dl-linux-imx
> <linux-imx@nxp.com>; netdev@vger.kernel.org;
> linux-arm-kernel@lists.infradead.org
> Subject: Re: i.MX8MM Ethernet TX Bandwidth Fluctuations
> 
> Hi Joakim,
> 
> On 13.05.21 14:36, Joakim Zhang wrote:
> >
> > Hi Frieder,
> >
> > For NXP release kernel, I tested on i.MX8MQ/MM/MP, I can reproduce on
> L5.10, and can't reproduce on L5.4.
> > According to your description, you can reproduce this issue both L5.4 and
> L5.10? So I need confirm with you.
> 
> Thanks for looking into this. I could reproduce this on 5.4 and 5.10 but both
> kernels were official mainline kernels and **not** from the linux-imx
> downstream tree.
Ok.

> Maybe there is some problem in the mainline tree and it got included in the
> NXP release kernel starting from L5.10?
No, this looks much like a known issue; it should have existed ever since AVB support was added in mainline.

The ENET IP does not have _real_ multiple queues per my understanding: queue 0 is for best effort, and queues 1 and 2 are for AVB streams, whose default bandwidth fraction is 0.5 in the driver (i.e. 50 Mbps for a 100 Mbps link and 500 Mbps for a 1 Gbps link). When transmitting packets, the net core will select queues randomly, which causes the TX bandwidth fluctuations. So you can change to a single queue if you care more about TX bandwidth, or you can refer to the NXP internal implementation.
e.g.
--- a/arch/arm64/boot/dts/freescale/imx8mq.dtsi
+++ b/arch/arm64/boot/dts/freescale/imx8mq.dtsi
@@ -916,8 +916,8 @@
                                         <&clk IMX8MQ_CLK_ENET_PHY_REF>;
                                clock-names = "ipg", "ahb", "ptp",
                                              "enet_clk_ref", "enet_out";
-                               fsl,num-tx-queues = <3>;
-                               fsl,num-rx-queues = <3>;
+                               fsl,num-tx-queues = <1>;
+                               fsl,num-rx-queues = <1>;
                                status = "disabled";
                        };
                };

I hope this can help you :)
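
As an alternative to trimming the queues in the device tree, a
driver-side .ndo_select_queue callback could keep best-effort traffic on
queue 0 and only steer VLAN-tagged frames with AVB-class priorities to
queues 1 and 2. The sketch below is hypothetical: it is not the actual
fec_main.c or NXP downstream code, and the PCP-to-queue mapping is an
assumption for illustration.

	#include <linux/if_vlan.h>
	#include <linux/netdevice.h>

	static u16 sketch_select_queue(struct net_device *ndev,
				       struct sk_buff *skb,
				       struct net_device *sb_dev)
	{
		u16 vlan_tci;

		/* Untagged traffic always goes to the best-effort queue. */
		if (vlan_get_tag(skb, &vlan_tci))
			return 0;

		/* Map VLAN PCP 2/3 (typical AVB classes) to the AVB queues,
		 * everything else to queue 0. */
		switch ((vlan_tci & VLAN_PRIO_MASK) >> VLAN_PRIO_SHIFT) {
		case 2:
			return 1;
		case 3:
			return 2;
		default:
			return 0;
		}
	}

For iperf3-style plain TCP traffic the effect should be the same as the
device tree change above: the flows no longer land on the rate-limited
AVB queues.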

Best Regards,
Joakim Zhang
> Best regards
> Frieder
> 
> >
> > Best Regards,
> > Joakim Zhang
> >
> >> -----Original Message-----
> >> From: Joakim Zhang <qiangqing.zhang@nxp.com>
> >> Sent: 2021-05-12 19:59
> >> To: Frieder Schrempf <frieder.schrempf@kontron.de>; dl-linux-imx
> >> <linux-imx@nxp.com>; netdev@vger.kernel.org;
> >> linux-arm-kernel@lists.infradead.org
> >> Subject: RE: i.MX8MM Ethernet TX Bandwidth Fluctuations
> >>
> >>
> >> Hi Frieder,
> >>
> >> Sorry, I missed this mail before, I can reproduce this issue at my
> >> side, I will try my best to look into this issue.
> >>
> >> Best Regards,
> >> Joakim Zhang
> >>
> >>> -----Original Message-----
> >>> From: Frieder Schrempf <frieder.schrempf@kontron.de>
> >>> Sent: 2021-05-06 22:46
> >>> To: dl-linux-imx <linux-imx@nxp.com>; netdev@vger.kernel.org;
> >>> linux-arm-kernel@lists.infradead.org
> >>> Subject: i.MX8MM Ethernet TX Bandwidth Fluctuations
> >>>
> >>> Hi,
> >>>
> >>> we observed some weird phenomenon with the Ethernet on our
> >>> i.MX8M-Mini boards. It happens quite often that the measured
> >>> bandwidth in TX direction drops from its expected/nominal value to
> >>> something like 50% (for 100M) or ~67% (for 1G) connections.
> >>>
> >>> So far we reproduced this with two different hardware designs using
> >>> two different PHYs (RGMII VSC8531 and RMII KSZ8081), two different
> >>> kernel versions (v5.4 and v5.10) and link speeds of 100M and 1G.
> >>>
> >>> To measure the throughput we simply run iperf3 on the target (with a
> >>> short p2p connection to the host PC) like this:
> >>>
> >>> 	iperf3 -c 192.168.1.10 --bidir
> >>>
> >>> But even something more simple like this can be used to get the info
> >>> (with 'nc -l -p 1122 > /dev/null' running on the host):
> >>>
> >>> 	dd if=/dev/zero bs=10M count=1 | nc 192.168.1.10 1122
> >>>
> >>> The results fluctuate between each test run and are sometimes 'good'
> (e.g.
> >>> ~90 MBit/s for 100M link) and sometimes 'bad' (e.g. ~45 MBit/s for
> >>> 100M
> >> link).
> >>> There is nothing else running on the system in parallel. Some more
> >>> info is also available in this post: [1].
> >>>
> >>> If there's anyone around who has an idea on what might be the reason
> >>> for this, please let me know!
> >>> Or maybe someone would be willing to do a quick test on his own
> hardware.
> >>> That would also be highly appreciated!
> >>>
> >>> Thanks and best regards
> >>> Frieder
> >>>
> >>> [1]:
> >>> https://eur01.safelinks.protection.outlook.com/?url=https%3A%2F%2Fco
> >>> mm
> >>> u
> >>>
> >>
> nity.nxp.com%2Ft5%2Fi-MX-Processors%2Fi-MX8MM-Ethernet-TX-Bandwidth-
> >>>
> >>
> Fluctuations%2Fm-p%2F1242467%23M170563&amp;data=04%7C01%7Cqiang
> >>>
> >>
> qing.zhang%40nxp.com%7C5d4866d4565e4cbc36a008d9109da0ff%7C686ea1d
> >>>
> >>
> 3bc2b4c6fa92cd99c5c301635%7C0%7C0%7C637559091463792932%7CUnkno
> >>>
> >>
> wn%7CTWFpbGZsb3d8eyJWIjoiMC4wLjAwMDAiLCJQIjoiV2luMzIiLCJBTiI6Ik1ha
> >>>
> >>
> WwiLCJXVCI6Mn0%3D%7C1000&amp;sdata=ygcThQOLIzp0lzhXacRLjSjnjm1FEj
> >>> YSxakXwZtxde8%3D&amp;reserved=0

* Re: i.MX8MM Ethernet TX Bandwidth Fluctuations
  2021-05-17 10:22       ` Joakim Zhang
@ 2021-05-17 12:47         ` Dave Taht
  2021-05-18 12:35           ` Joakim Zhang
  0 siblings, 1 reply; 21+ messages in thread
From: Dave Taht @ 2021-05-17 12:47 UTC (permalink / raw)
  To: Joakim Zhang; +Cc: Frieder Schrempf, dl-linux-imx, netdev, linux-arm-kernel

On Mon, May 17, 2021 at 3:25 AM Joakim Zhang <qiangqing.zhang@nxp.com> wrote:
>
>
> Hi Frieder,
>
> > -----Original Message-----
> > From: Frieder Schrempf <frieder.schrempf@kontron.de>
> > Sent: 2021-05-17 15:17
> > To: Joakim Zhang <qiangqing.zhang@nxp.com>; dl-linux-imx
> > <linux-imx@nxp.com>; netdev@vger.kernel.org;
> > linux-arm-kernel@lists.infradead.org
> > Subject: Re: i.MX8MM Ethernet TX Bandwidth Fluctuations
> >
> > Hi Joakim,
> >
> > On 13.05.21 14:36, Joakim Zhang wrote:
> > >
> > > Hi Frieder,
> > >
> > > For NXP release kernel, I tested on i.MX8MQ/MM/MP, I can reproduce on
> > L5.10, and can't reproduce on L5.4.
> > > According to your description, you can reproduce this issue both L5.4 and
> > L5.10? So I need confirm with you.
> >
> > Thanks for looking into this. I could reproduce this on 5.4 and 5.10 but both
> > kernels were official mainline kernels and **not** from the linux-imx
> > downstream tree.
> Ok.
>
> > Maybe there is some problem in the mainline tree and it got included in the
> > NXP release kernel starting from L5.10?
> No, this much looks like a known issue, it should always exist after adding AVB support in mainline.
>
> ENET IP is not a _real_ multiple queues per my understanding, queue 0 is for best effort. And the queue 1&2 is for AVB stream whose default bandwidth fraction is 0.5 in driver. (i.e. 50Mbps for 100Mbps and 500Mbps for 1Gbps). When transmitting packets, net core will select queues randomly, which caused the tx bandwidth fluctuations. So you can change to use single queue if you care more about tx bandwidth. Or you can refer to NXP internal implementation.
> e.g.
> --- a/arch/arm64/boot/dts/freescale/imx8mq.dtsi
> +++ b/arch/arm64/boot/dts/freescale/imx8mq.dtsi
> @@ -916,8 +916,8 @@
>                                          <&clk IMX8MQ_CLK_ENET_PHY_REF>;
>                                 clock-names = "ipg", "ahb", "ptp",
>                                               "enet_clk_ref", "enet_out";
> -                               fsl,num-tx-queues = <3>;
> -                               fsl,num-rx-queues = <3>;
> +                               fsl,num-tx-queues = <1>;
> +                               fsl,num-rx-queues = <1>;
>                                 status = "disabled";
>                         };
>                 };
>
> I hope this can help you :)

Patching out the queues is probably not the right thing.

For starters... is there BQL support in this driver? It would be
helpful to have it on all queues.

Also, if there were a way to present it as two interfaces rather than
one, that would allow a specific AVB device to be presented.

Or:

Is there a standard means of signalling down the stack via the IP
layer (a dscp? a setsockopt?) that the AVB queue is requested?



> Best Regards,
> Joakim Zhang
> > Best regards
> > Frieder
> >
> > >
> > > Best Regards,
> > > Joakim Zhang
> > >
> > >> -----Original Message-----
> > >> From: Joakim Zhang <qiangqing.zhang@nxp.com>
> > >> Sent: 2021-05-12 19:59
> > >> To: Frieder Schrempf <frieder.schrempf@kontron.de>; dl-linux-imx
> > >> <linux-imx@nxp.com>; netdev@vger.kernel.org;
> > >> linux-arm-kernel@lists.infradead.org
> > >> Subject: RE: i.MX8MM Ethernet TX Bandwidth Fluctuations
> > >>
> > >>
> > >> Hi Frieder,
> > >>
> > >> Sorry, I missed this mail before, I can reproduce this issue at my
> > >> side, I will try my best to look into this issue.
> > >>
> > >> Best Regards,
> > >> Joakim Zhang
> > >>
> > >>> -----Original Message-----
> > >>> From: Frieder Schrempf <frieder.schrempf@kontron.de>
> > >>> Sent: 2021-05-06 22:46
> > >>> To: dl-linux-imx <linux-imx@nxp.com>; netdev@vger.kernel.org;
> > >>> linux-arm-kernel@lists.infradead.org
> > >>> Subject: i.MX8MM Ethernet TX Bandwidth Fluctuations
> > >>>
> > >>> Hi,
> > >>>
> > >>> we observed some weird phenomenon with the Ethernet on our
> > >>> i.MX8M-Mini boards. It happens quite often that the measured
> > >>> bandwidth in TX direction drops from its expected/nominal value to
> > >>> something like 50% (for 100M) or ~67% (for 1G) connections.
> > >>>
> > >>> So far we reproduced this with two different hardware designs using
> > >>> two different PHYs (RGMII VSC8531 and RMII KSZ8081), two different
> > >>> kernel versions (v5.4 and v5.10) and link speeds of 100M and 1G.
> > >>>
> > >>> To measure the throughput we simply run iperf3 on the target (with a
> > >>> short p2p connection to the host PC) like this:
> > >>>
> > >>>   iperf3 -c 192.168.1.10 --bidir
> > >>>
> > >>> But even something more simple like this can be used to get the info
> > >>> (with 'nc -l -p 1122 > /dev/null' running on the host):
> > >>>
> > >>>   dd if=/dev/zero bs=10M count=1 | nc 192.168.1.10 1122
> > >>>
> > >>> The results fluctuate between each test run and are sometimes 'good'
> > (e.g.
> > >>> ~90 MBit/s for 100M link) and sometimes 'bad' (e.g. ~45 MBit/s for
> > >>> 100M
> > >> link).
> > >>> There is nothing else running on the system in parallel. Some more
> > >>> info is also available in this post: [1].
> > >>>
> > >>> If there's anyone around who has an idea on what might be the reason
> > >>> for this, please let me know!
> > >>> Or maybe someone would be willing to do a quick test on his own
> > hardware.
> > >>> That would also be highly appreciated!
> > >>>
> > >>> Thanks and best regards
> > >>> Frieder
> > >>>
> > >>> [1]: https://community.nxp.com/t5/i-MX-Processors/i-MX8MM-Ethernet-TX-Bandwidth-Fluctuations/m-p/1242467#M170563



-- 
Latest Podcast:
https://www.linkedin.com/feed/update/urn:li:activity:6791014284936785920/

Dave Täht CTO, TekLibre, LLC


* RE: i.MX8MM Ethernet TX Bandwidth Fluctuations
  2021-05-17 12:47         ` Dave Taht
@ 2021-05-18 12:35           ` Joakim Zhang
  2021-05-18 12:55             ` Frieder Schrempf
  0 siblings, 1 reply; 21+ messages in thread
From: Joakim Zhang @ 2021-05-18 12:35 UTC (permalink / raw)
  To: Dave Taht; +Cc: Frieder Schrempf, dl-linux-imx, netdev, linux-arm-kernel


Hi Dave,

> -----Original Message-----
> From: Dave Taht <dave.taht@gmail.com>
> Sent: 2021-05-17 20:48
> To: Joakim Zhang <qiangqing.zhang@nxp.com>
> Cc: Frieder Schrempf <frieder.schrempf@kontron.de>; dl-linux-imx
> <linux-imx@nxp.com>; netdev@vger.kernel.org;
> linux-arm-kernel@lists.infradead.org
> Subject: Re: i.MX8MM Ethernet TX Bandwidth Fluctuations
> 
> On Mon, May 17, 2021 at 3:25 AM Joakim Zhang <qiangqing.zhang@nxp.com>
> wrote:
> >
> >
> > Hi Frieder,
> >
> > > -----Original Message-----
> > > From: Frieder Schrempf <frieder.schrempf@kontron.de>
> > > Sent: 2021-05-17 15:17
> > > To: Joakim Zhang <qiangqing.zhang@nxp.com>; dl-linux-imx
> > > <linux-imx@nxp.com>; netdev@vger.kernel.org;
> > > linux-arm-kernel@lists.infradead.org
> > > Subject: Re: i.MX8MM Ethernet TX Bandwidth Fluctuations
> > >
> > > Hi Joakim,
> > >
> > > On 13.05.21 14:36, Joakim Zhang wrote:
> > > >
> > > > Hi Frieder,
> > > >
> > > > For NXP release kernel, I tested on i.MX8MQ/MM/MP, I can reproduce
> > > > on
> > > L5.10, and can't reproduce on L5.4.
> > > > According to your description, you can reproduce this issue both
> > > > L5.4 and
> > > L5.10? So I need confirm with you.
> > >
> > > Thanks for looking into this. I could reproduce this on 5.4 and 5.10
> > > but both kernels were official mainline kernels and **not** from the
> > > linux-imx downstream tree.
> > Ok.
> >
> > > Maybe there is some problem in the mainline tree and it got included
> > > in the NXP release kernel starting from L5.10?
> > No, this much looks like a known issue, it should always exist after adding
> AVB support in mainline.
> >
> > ENET IP is not a _real_ multiple queues per my understanding, queue 0 is for
> best effort. And the queue 1&2 is for AVB stream whose default bandwidth
> fraction is 0.5 in driver. (i.e. 50Mbps for 100Mbps and 500Mbps for 1Gbps).
> When transmitting packets, net core will select queues randomly, which
> caused the tx bandwidth fluctuations. So you can change to use single queue if
> you care more about tx bandwidth. Or you can refer to NXP internal
> implementation.
> > e.g.
> > --- a/arch/arm64/boot/dts/freescale/imx8mq.dtsi
> > +++ b/arch/arm64/boot/dts/freescale/imx8mq.dtsi
> > @@ -916,8 +916,8 @@
> >                                          <&clk
> IMX8MQ_CLK_ENET_PHY_REF>;
> >                                 clock-names = "ipg", "ahb", "ptp",
> >                                               "enet_clk_ref",
> "enet_out";
> > -                               fsl,num-tx-queues = <3>;
> > -                               fsl,num-rx-queues = <3>;
> > +                               fsl,num-tx-queues = <1>;
> > +                               fsl,num-rx-queues = <1>;
> >                                 status = "disabled";
> >                         };
> >                 };
> >
> > I hope this can help you :)
> 
> Patching out the queues is probably not the right thing.
> 
> for starters... Is there BQL support in this driver? It would be helpful to have on
> all queues.
There is no BQL support in this driver; BQL may improve throughput further, but it should not be the root cause of the reported issue.

> Also if there was a way to present it as two interfaces, rather than one, that
> would allow for a specific avb device to be presented.
> 
> Or:
> 
> Is there a standard means of signalling down the stack via the IP layer (a dscp?
> a setsockopt?) that the AVB queue is requested?
> 
AFAIK, AVB is in the scope of VLAN, so we can queue AVB packets into queues 1 and 2 based on the VLAN ID.
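
As a purely illustrative sketch of the setsockopt route (an assumption;
nothing in this thread confirms how the driver classifies such frames):
an application could set SO_PRIORITY on its socket so skb->priority is
set, and a VLAN interface's egress-qos-map could then translate that
priority into a VLAN PCP that a VLAN-based classifier might match on.

	#include <stdio.h>
	#include <sys/socket.h>

	/* Hypothetical userspace side: request priority 3 for this socket.
	 * With an egress-qos-map such as "3:3" on the VLAN interface, the
	 * stack tags outgoing frames with PCP 3; whether the driver then
	 * steers them to an AVB queue is an assumption, not something
	 * confirmed in this thread. */
	static int request_avb_class(int fd)
	{
		int prio = 3;

		if (setsockopt(fd, SOL_SOCKET, SO_PRIORITY,
			       &prio, sizeof(prio)) < 0) {
			perror("setsockopt(SO_PRIORITY)");
			return -1;
		}
		return 0;
	}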

Best Regards,
Joakim Zhang
> > Best Regards,
> > Joakim Zhang
> > > Best regards
> > > Frieder
> > >
> > > >
> > > > Best Regards,
> > > > Joakim Zhang
> > > >
> > > >> -----Original Message-----
> > > >> From: Joakim Zhang <qiangqing.zhang@nxp.com>
> > > >> Sent: 2021-05-12 19:59
> > > >> To: Frieder Schrempf <frieder.schrempf@kontron.de>; dl-linux-imx
> > > >> <linux-imx@nxp.com>; netdev@vger.kernel.org;
> > > >> linux-arm-kernel@lists.infradead.org
> > > >> Subject: RE: i.MX8MM Ethernet TX Bandwidth Fluctuations
> > > >>
> > > >>
> > > >> Hi Frieder,
> > > >>
> > > >> Sorry, I missed this mail before, I can reproduce this issue at
> > > >> my side, I will try my best to look into this issue.
> > > >>
> > > >> Best Regards,
> > > >> Joakim Zhang
> > > >>
> > > >>> -----Original Message-----
> > > >>> From: Frieder Schrempf <frieder.schrempf@kontron.de>
> > > >>> Sent: 2021-05-06 22:46
> > > >>> To: dl-linux-imx <linux-imx@nxp.com>; netdev@vger.kernel.org;
> > > >>> linux-arm-kernel@lists.infradead.org
> > > >>> Subject: i.MX8MM Ethernet TX Bandwidth Fluctuations
> > > >>>
> > > >>> Hi,
> > > >>>
> > > >>> we observed some weird phenomenon with the Ethernet on our
> > > >>> i.MX8M-Mini boards. It happens quite often that the measured
> > > >>> bandwidth in TX direction drops from its expected/nominal value
> > > >>> to something like 50% (for 100M) or ~67% (for 1G) connections.
> > > >>>
> > > >>> So far we reproduced this with two different hardware designs
> > > >>> using two different PHYs (RGMII VSC8531 and RMII KSZ8081), two
> > > >>> different kernel versions (v5.4 and v5.10) and link speeds of 100M and
> 1G.
> > > >>>
> > > >>> To measure the throughput we simply run iperf3 on the target
> > > >>> (with a short p2p connection to the host PC) like this:
> > > >>>
> > > >>>   iperf3 -c 192.168.1.10 --bidir
> > > >>>
> > > >>> But even something more simple like this can be used to get the
> > > >>> info (with 'nc -l -p 1122 > /dev/null' running on the host):
> > > >>>
> > > >>>   dd if=/dev/zero bs=10M count=1 | nc 192.168.1.10 1122
> > > >>>
> > > >>> The results fluctuate between each test run and are sometimes 'good'
> > > (e.g.
> > > >>> ~90 MBit/s for 100M link) and sometimes 'bad' (e.g. ~45 MBit/s
> > > >>> for 100M
> > > >> link).
> > > >>> There is nothing else running on the system in parallel. Some
> > > >>> more info is also available in this post: [1].
> > > >>>
> > > >>> If there's anyone around who has an idea on what might be the
> > > >>> reason for this, please let me know!
> > > >>> Or maybe someone would be willing to do a quick test on his own
> > > hardware.
> > > >>> That would also be highly appreciated!
> > > >>>
> > > >>> Thanks and best regards
> > > >>> Frieder
> > > >>>
> > > >>> [1]:
> > > >>> https://community.nxp.com/t5/i-MX-Processors/i-MX8MM-Ethernet-TX-Bandwidth-Fluctuations/m-p/1242467#M170563
> 
> 
> 
> --
> Latest Podcast:
> https://www.linkedin.com/feed/update/urn:li:activity:6791014284936785920/
> 
> Dave Täht CTO, TekLibre, LLC
_______________________________________________
linux-arm-kernel mailing list
linux-arm-kernel@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-arm-kernel

^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: i.MX8MM Ethernet TX Bandwidth Fluctuations
  2021-05-18 12:35           ` Joakim Zhang
@ 2021-05-18 12:55             ` Frieder Schrempf
  2021-05-19  7:49               ` Joakim Zhang
  0 siblings, 1 reply; 21+ messages in thread
From: Frieder Schrempf @ 2021-05-18 12:55 UTC (permalink / raw)
  To: Joakim Zhang, Dave Taht; +Cc: dl-linux-imx, netdev, linux-arm-kernel



On 18.05.21 14:35, Joakim Zhang wrote:
> 
> Hi Dave,
> 
>> -----Original Message-----
>> From: Dave Taht <dave.taht@gmail.com>
>> Sent: 2021年5月17日 20:48
>> To: Joakim Zhang <qiangqing.zhang@nxp.com>
>> Cc: Frieder Schrempf <frieder.schrempf@kontron.de>; dl-linux-imx
>> <linux-imx@nxp.com>; netdev@vger.kernel.org;
>> linux-arm-kernel@lists.infradead.org
>> Subject: Re: i.MX8MM Ethernet TX Bandwidth Fluctuations
>>
>> On Mon, May 17, 2021 at 3:25 AM Joakim Zhang <qiangqing.zhang@nxp.com>
>> wrote:
>>>
>>>
>>> Hi Frieder,
>>>
>>>> -----Original Message-----
>>>> From: Frieder Schrempf <frieder.schrempf@kontron.de>
>>>> Sent: 2021年5月17日 15:17
>>>> To: Joakim Zhang <qiangqing.zhang@nxp.com>; dl-linux-imx
>>>> <linux-imx@nxp.com>; netdev@vger.kernel.org;
>>>> linux-arm-kernel@lists.infradead.org
>>>> Subject: Re: i.MX8MM Ethernet TX Bandwidth Fluctuations
>>>>
>>>> Hi Joakim,
>>>>
>>>> On 13.05.21 14:36, Joakim Zhang wrote:
>>>>>
>>>>> Hi Frieder,
>>>>>
>>>>> For NXP release kernel, I tested on i.MX8MQ/MM/MP, I can reproduce
>>>>> on
>>>> L5.10, and can't reproduce on L5.4.
>>>>> According to your description, you can reproduce this issue both
>>>>> L5.4 and
>>>> L5.10? So I need confirm with you.
>>>>
>>>> Thanks for looking into this. I could reproduce this on 5.4 and 5.10
>>>> but both kernels were official mainline kernels and **not** from the
>>>> linux-imx downstream tree.
>>> Ok.
>>>
>>>> Maybe there is some problem in the mainline tree and it got included
>>>> in the NXP release kernel starting from L5.10?
>>> No, this much looks like a known issue, it should always exist after adding
>> AVB support in mainline.
>>>
>>> ENET IP is not a _real_ multiple queues per my understanding, queue 0 is for
>> best effort. And the queue 1&2 is for AVB stream whose default bandwidth
>> fraction is 0.5 in driver. (i.e. 50Mbps for 100Mbps and 500Mbps for 1Gbps).
>> When transmitting packets, net core will select queues randomly, which
>> caused the tx bandwidth fluctuations. So you can change to use single queue if
>> you care more about tx bandwidth. Or you can refer to NXP internal
>> implementation.
>>> e.g.
>>> --- a/arch/arm64/boot/dts/freescale/imx8mq.dtsi
>>> +++ b/arch/arm64/boot/dts/freescale/imx8mq.dtsi
>>> @@ -916,8 +916,8 @@
>>>                                          <&clk
>> IMX8MQ_CLK_ENET_PHY_REF>;
>>>                                 clock-names = "ipg", "ahb", "ptp",
>>>                                               "enet_clk_ref",
>> "enet_out";
>>> -                               fsl,num-tx-queues = <3>;
>>> -                               fsl,num-rx-queues = <3>;
>>> +                               fsl,num-tx-queues = <1>;
>>> +                               fsl,num-rx-queues = <1>;
>>>                                 status = "disabled";
>>>                         };
>>>                 };
>>>
>>> I hope this can help you :)
>>
>> Patching out the queues is probably not the right thing.
>>
>> for starters... Is there BQL support in this driver? It would be helpful to have on
>> all queues.
> There is no BQL support in this driver, and BQL may improve throughput further, but should not be the root cause of this reported issue.
> 
>> Also if there was a way to present it as two interfaces, rather than one, that
>> would allow for a specific avb device to be presented.
>>
>> Or:
>>
>> Is there a standard means of signalling down the stack via the IP layer (a dscp?
>> a setsockopt?) that the AVB queue is requested?
>>
> AFAIK, AVB is scope of VLAN, so we can queue AVB packets into queue 1&2 based on VLAN-ID.

I had to look up what AVB even means, but from my current understanding it doesn't seem right that for non-AVB packets the driver picks any of the three queues at random, even though queues 1 and 2 are limited to 50% of the bandwidth. Shouldn't there be some way to prefer queue 0 without requiring the user to set it up, or arbitrarily limiting the number of queues as proposed above?

> 
> Best Regards,
> Joakim Zhang
>>> Best Regards,
>>> Joakim Zhang
>>>> Best regards
>>>> Frieder
>>>>
>>>>>
>>>>> Best Regards,
>>>>> Joakim Zhang
>>>>>
>>>>>> -----Original Message-----
>>>>>> From: Joakim Zhang <qiangqing.zhang@nxp.com>
>>>>>> Sent: 2021年5月12日 19:59
>>>>>> To: Frieder Schrempf <frieder.schrempf@kontron.de>; dl-linux-imx
>>>>>> <linux-imx@nxp.com>; netdev@vger.kernel.org;
>>>>>> linux-arm-kernel@lists.infradead.org
>>>>>> Subject: RE: i.MX8MM Ethernet TX Bandwidth Fluctuations
>>>>>>
>>>>>>
>>>>>> Hi Frieder,
>>>>>>
>>>>>> Sorry, I missed this mail before, I can reproduce this issue at
>>>>>> my side, I will try my best to look into this issue.
>>>>>>
>>>>>> Best Regards,
>>>>>> Joakim Zhang
>>>>>>
>>>>>>> -----Original Message-----
>>>>>>> From: Frieder Schrempf <frieder.schrempf@kontron.de>
>>>>>>> Sent: 2021年5月6日 22:46
>>>>>>> To: dl-linux-imx <linux-imx@nxp.com>; netdev@vger.kernel.org;
>>>>>>> linux-arm-kernel@lists.infradead.org
>>>>>>> Subject: i.MX8MM Ethernet TX Bandwidth Fluctuations
>>>>>>>
>>>>>>> Hi,
>>>>>>>
>>>>>>> we observed some weird phenomenon with the Ethernet on our
>>>>>>> i.MX8M-Mini boards. It happens quite often that the measured
>>>>>>> bandwidth in TX direction drops from its expected/nominal value
>>>>>>> to something like 50% (for 100M) or ~67% (for 1G) connections.
>>>>>>>
>>>>>>> So far we reproduced this with two different hardware designs
>>>>>>> using two different PHYs (RGMII VSC8531 and RMII KSZ8081), two
>>>>>>> different kernel versions (v5.4 and v5.10) and link speeds of 100M and
>> 1G.
>>>>>>>
>>>>>>> To measure the throughput we simply run iperf3 on the target
>>>>>>> (with a short p2p connection to the host PC) like this:
>>>>>>>
>>>>>>>   iperf3 -c 192.168.1.10 --bidir
>>>>>>>
>>>>>>> But even something more simple like this can be used to get the
>>>>>>> info (with 'nc -l -p 1122 > /dev/null' running on the host):
>>>>>>>
>>>>>>>   dd if=/dev/zero bs=10M count=1 | nc 192.168.1.10 1122
>>>>>>>
>>>>>>> The results fluctuate between each test run and are sometimes 'good'
>>>> (e.g.
>>>>>>> ~90 MBit/s for 100M link) and sometimes 'bad' (e.g. ~45 MBit/s
>>>>>>> for 100M
>>>>>> link).
>>>>>>> There is nothing else running on the system in parallel. Some
>>>>>>> more info is also available in this post: [1].
>>>>>>>
>>>>>>> If there's anyone around who has an idea on what might be the
>>>>>>> reason for this, please let me know!
>>>>>>> Or maybe someone would be willing to do a quick test on his own
>>>> hardware.
>>>>>>> That would also be highly appreciated!
>>>>>>>
>>>>>>> Thanks and best regards
>>>>>>> Frieder
>>>>>>>
>>>>>>> [1]:
>>>>>>> https://community.nxp.com/t5/i-MX-Processors/i-MX8MM-Ethernet-TX-Bandwidth-Fluctuations/m-p/1242467#M170563
>>
>>
>>
>> --
>> Latest Podcast:
>> https://www.linkedin.com/feed/update/urn:li:activity:6791014284936785920/
>>
>> Dave Täht CTO, TekLibre, LLC

_______________________________________________
linux-arm-kernel mailing list
linux-arm-kernel@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-arm-kernel

^ permalink raw reply	[flat|nested] 21+ messages in thread

* RE: i.MX8MM Ethernet TX Bandwidth Fluctuations
  2021-05-18 12:55             ` Frieder Schrempf
@ 2021-05-19  7:49               ` Joakim Zhang
  2021-05-19  8:10                 ` Frieder Schrempf
  0 siblings, 1 reply; 21+ messages in thread
From: Joakim Zhang @ 2021-05-19  7:49 UTC (permalink / raw)
  To: Frieder Schrempf, Dave Taht; +Cc: dl-linux-imx, netdev, linux-arm-kernel


Hi Frieder,

> -----Original Message-----
> From: Frieder Schrempf <frieder.schrempf@kontron.de>
> Sent: 2021年5月18日 20:55
> To: Joakim Zhang <qiangqing.zhang@nxp.com>; Dave Taht
> <dave.taht@gmail.com>
> Cc: dl-linux-imx <linux-imx@nxp.com>; netdev@vger.kernel.org;
> linux-arm-kernel@lists.infradead.org
> Subject: Re: i.MX8MM Ethernet TX Bandwidth Fluctuations
> 
> 
> 
> On 18.05.21 14:35, Joakim Zhang wrote:
> >
> > Hi Dave,
> >
> >> -----Original Message-----
> >> From: Dave Taht <dave.taht@gmail.com>
> >> Sent: 2021年5月17日 20:48
> >> To: Joakim Zhang <qiangqing.zhang@nxp.com>
> >> Cc: Frieder Schrempf <frieder.schrempf@kontron.de>; dl-linux-imx
> >> <linux-imx@nxp.com>; netdev@vger.kernel.org;
> >> linux-arm-kernel@lists.infradead.org
> >> Subject: Re: i.MX8MM Ethernet TX Bandwidth Fluctuations
> >>
> >> On Mon, May 17, 2021 at 3:25 AM Joakim Zhang
> >> <qiangqing.zhang@nxp.com>
> >> wrote:
> >>>
> >>>
> >>> Hi Frieder,
> >>>
> >>>> -----Original Message-----
> >>>> From: Frieder Schrempf <frieder.schrempf@kontron.de>
> >>>> Sent: 2021年5月17日 15:17
> >>>> To: Joakim Zhang <qiangqing.zhang@nxp.com>; dl-linux-imx
> >>>> <linux-imx@nxp.com>; netdev@vger.kernel.org;
> >>>> linux-arm-kernel@lists.infradead.org
> >>>> Subject: Re: i.MX8MM Ethernet TX Bandwidth Fluctuations
> >>>>
> >>>> Hi Joakim,
> >>>>
> >>>> On 13.05.21 14:36, Joakim Zhang wrote:
> >>>>>
> >>>>> Hi Frieder,
> >>>>>
> >>>>> For NXP release kernel, I tested on i.MX8MQ/MM/MP, I can reproduce
> >>>>> on
> >>>> L5.10, and can't reproduce on L5.4.
> >>>>> According to your description, you can reproduce this issue both
> >>>>> L5.4 and
> >>>> L5.10? So I need confirm with you.
> >>>>
> >>>> Thanks for looking into this. I could reproduce this on 5.4 and
> >>>> 5.10 but both kernels were official mainline kernels and **not**
> >>>> from the linux-imx downstream tree.
> >>> Ok.
> >>>
> >>>> Maybe there is some problem in the mainline tree and it got
> >>>> included in the NXP release kernel starting from L5.10?
> >>> No, this much looks like a known issue, it should always exist after
> >>> adding
> >> AVB support in mainline.
> >>>
> >>> ENET IP is not a _real_ multiple queues per my understanding, queue
> >>> 0 is for
> >> best effort. And the queue 1&2 is for AVB stream whose default
> >> bandwidth fraction is 0.5 in driver. (i.e. 50Mbps for 100Mbps and 500Mbps
> for 1Gbps).
> >> When transmitting packets, net core will select queues randomly,
> >> which caused the tx bandwidth fluctuations. So you can change to use
> >> single queue if you care more about tx bandwidth. Or you can refer to
> >> NXP internal implementation.
> >>> e.g.
> >>> --- a/arch/arm64/boot/dts/freescale/imx8mq.dtsi
> >>> +++ b/arch/arm64/boot/dts/freescale/imx8mq.dtsi
> >>> @@ -916,8 +916,8 @@
> >>>                                          <&clk
> >> IMX8MQ_CLK_ENET_PHY_REF>;
> >>>                                 clock-names = "ipg", "ahb", "ptp",
> >>>                                               "enet_clk_ref",
> >> "enet_out";
> >>> -                               fsl,num-tx-queues = <3>;
> >>> -                               fsl,num-rx-queues = <3>;
> >>> +                               fsl,num-tx-queues = <1>;
> >>> +                               fsl,num-rx-queues = <1>;
> >>>                                 status = "disabled";
> >>>                         };
> >>>                 };
> >>>
> >>> I hope this can help you :)
> >>
> >> Patching out the queues is probably not the right thing.
> >>
> >> for starters... Is there BQL support in this driver? It would be
> >> helpful to have on all queues.
> > There is no BQL support in this driver, and BQL may improve throughput
> further, but should not be the root cause of this reported issue.
> >
> >> Also if there was a way to present it as two interfaces, rather than
> >> one, that would allow for a specific avb device to be presented.
> >>
> >> Or:
> >>
> >> Is there a standard means of signalling down the stack via the IP layer (a
> dscp?
> >> a setsockopt?) that the AVB queue is requested?
> >>
> > AFAIK, AVB is scope of VLAN, so we can queue AVB packets into queue 1&2
> based on VLAN-ID.
> 
> I had to look up what AVB even means, but from my current understanding it
> doesn't seem right that for non-AVB packets the driver picks any of the three
> queues in a random fashion while at the same time knowing that queue 1 and 2
> have a 50% limitation on the bandwidth. Shouldn't there be some way to prefer
> queue 0 without needing the user to set it up or even arbitrarily limiting the
> number of queues as proposed above?

Yes, I think we can. I looked into the NXP local implementation; there is an ndo_select_queue callback:
https://source.codeaurora.org/external/imx/linux-imx/tree/drivers/net/ethernet/freescale/fec_main.c?h=lf-5.4.y#n3419
This is the version for the L5.4 kernel.
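
For illustration only, wiring such a callback in is a single extra entry in the
driver's net_device_ops; the helper name below is a placeholder for the
downstream addition and the other entries are abbreviated:

	static const struct net_device_ops fec_netdev_ops = {
		.ndo_open		= fec_enet_open,
		.ndo_stop		= fec_enet_close,
		.ndo_start_xmit		= fec_enet_start_xmit,
		.ndo_select_queue	= fec_enet_select_queue,	/* placeholder name */
		/* ... remaining ops unchanged ... */
	};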

Best Regards,
Joakim Zhang
> >
> > Best Regards,
> > Joakim Zhang
> >>> Best Regards,
> >>> Joakim Zhang
> >>>> Best regards
> >>>> Frieder
> >>>>
> >>>>>
> >>>>> Best Regards,
> >>>>> Joakim Zhang
> >>>>>
> >>>>>> -----Original Message-----
> >>>>>> From: Joakim Zhang <qiangqing.zhang@nxp.com>
> >>>>>> Sent: 2021年5月12日 19:59
> >>>>>> To: Frieder Schrempf <frieder.schrempf@kontron.de>; dl-linux-imx
> >>>>>> <linux-imx@nxp.com>; netdev@vger.kernel.org;
> >>>>>> linux-arm-kernel@lists.infradead.org
> >>>>>> Subject: RE: i.MX8MM Ethernet TX Bandwidth Fluctuations
> >>>>>>
> >>>>>>
> >>>>>> Hi Frieder,
> >>>>>>
> >>>>>> Sorry, I missed this mail before, I can reproduce this issue at
> >>>>>> my side, I will try my best to look into this issue.
> >>>>>>
> >>>>>> Best Regards,
> >>>>>> Joakim Zhang
> >>>>>>
> >>>>>>> -----Original Message-----
> >>>>>>> From: Frieder Schrempf <frieder.schrempf@kontron.de>
> >>>>>>> Sent: 2021年5月6日 22:46
> >>>>>>> To: dl-linux-imx <linux-imx@nxp.com>; netdev@vger.kernel.org;
> >>>>>>> linux-arm-kernel@lists.infradead.org
> >>>>>>> Subject: i.MX8MM Ethernet TX Bandwidth Fluctuations
> >>>>>>>
> >>>>>>> Hi,
> >>>>>>>
> >>>>>>> we observed some weird phenomenon with the Ethernet on our
> >>>>>>> i.MX8M-Mini boards. It happens quite often that the measured
> >>>>>>> bandwidth in TX direction drops from its expected/nominal value
> >>>>>>> to something like 50% (for 100M) or ~67% (for 1G) connections.
> >>>>>>>
> >>>>>>> So far we reproduced this with two different hardware designs
> >>>>>>> using two different PHYs (RGMII VSC8531 and RMII KSZ8081), two
> >>>>>>> different kernel versions (v5.4 and v5.10) and link speeds of
> >>>>>>> 100M and
> >> 1G.
> >>>>>>>
> >>>>>>> To measure the throughput we simply run iperf3 on the target
> >>>>>>> (with a short p2p connection to the host PC) like this:
> >>>>>>>
> >>>>>>>   iperf3 -c 192.168.1.10 --bidir
> >>>>>>>
> >>>>>>> But even something more simple like this can be used to get the
> >>>>>>> info (with 'nc -l -p 1122 > /dev/null' running on the host):
> >>>>>>>
> >>>>>>>   dd if=/dev/zero bs=10M count=1 | nc 192.168.1.10 1122
> >>>>>>>
> >>>>>>> The results fluctuate between each test run and are sometimes 'good'
> >>>> (e.g.
> >>>>>>> ~90 MBit/s for 100M link) and sometimes 'bad' (e.g. ~45 MBit/s
> >>>>>>> for 100M
> >>>>>> link).
> >>>>>>> There is nothing else running on the system in parallel. Some
> >>>>>>> more info is also available in this post: [1].
> >>>>>>>
> >>>>>>> If there's anyone around who has an idea on what might be the
> >>>>>>> reason for this, please let me know!
> >>>>>>> Or maybe someone would be willing to do a quick test on his own
> >>>> hardware.
> >>>>>>> That would also be highly appreciated!
> >>>>>>>
> >>>>>>> Thanks and best regards
> >>>>>>> Frieder
> >>>>>>>
> >>>>>>> [1]:
> >>>>>>> https://community.nxp.com/t5/i-MX-Processors/i-MX8MM-Ethernet-TX-Bandwidth-Fluctuations/m-p/1242467#M170563
> >>
> >>
> >>
> >> --
> >> Latest Podcast:
> >> https://www.linkedin.com/feed/update/urn:li:activity:6791014284936785920/
> >>
> >> Dave Täht CTO, TekLibre, LLC
_______________________________________________
linux-arm-kernel mailing list
linux-arm-kernel@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-arm-kernel

^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: i.MX8MM Ethernet TX Bandwidth Fluctuations
  2021-05-19  7:49               ` Joakim Zhang
@ 2021-05-19  8:10                 ` Frieder Schrempf
  2021-05-19  8:40                   ` Joakim Zhang
  0 siblings, 1 reply; 21+ messages in thread
From: Frieder Schrempf @ 2021-05-19  8:10 UTC (permalink / raw)
  To: Joakim Zhang, Dave Taht; +Cc: dl-linux-imx, netdev, linux-arm-kernel

Hi Joakim,

On 19.05.21 09:49, Joakim Zhang wrote:
> 
> Hi Frieder,
> 
>> -----Original Message-----
>> From: Frieder Schrempf <frieder.schrempf@kontron.de>
>> Sent: 2021年5月18日 20:55
>> To: Joakim Zhang <qiangqing.zhang@nxp.com>; Dave Taht
>> <dave.taht@gmail.com>
>> Cc: dl-linux-imx <linux-imx@nxp.com>; netdev@vger.kernel.org;
>> linux-arm-kernel@lists.infradead.org
>> Subject: Re: i.MX8MM Ethernet TX Bandwidth Fluctuations
>>
>>
>>
>> On 18.05.21 14:35, Joakim Zhang wrote:
>>>
>>> Hi Dave,
>>>
>>>> -----Original Message-----
>>>> From: Dave Taht <dave.taht@gmail.com>
>>>> Sent: 2021年5月17日 20:48
>>>> To: Joakim Zhang <qiangqing.zhang@nxp.com>
>>>> Cc: Frieder Schrempf <frieder.schrempf@kontron.de>; dl-linux-imx
>>>> <linux-imx@nxp.com>; netdev@vger.kernel.org;
>>>> linux-arm-kernel@lists.infradead.org
>>>> Subject: Re: i.MX8MM Ethernet TX Bandwidth Fluctuations
>>>>
>>>> On Mon, May 17, 2021 at 3:25 AM Joakim Zhang
>>>> <qiangqing.zhang@nxp.com>
>>>> wrote:
>>>>>
>>>>>
>>>>> Hi Frieder,
>>>>>
>>>>>> -----Original Message-----
>>>>>> From: Frieder Schrempf <frieder.schrempf@kontron.de>
>>>>>> Sent: 2021年5月17日 15:17
>>>>>> To: Joakim Zhang <qiangqing.zhang@nxp.com>; dl-linux-imx
>>>>>> <linux-imx@nxp.com>; netdev@vger.kernel.org;
>>>>>> linux-arm-kernel@lists.infradead.org
>>>>>> Subject: Re: i.MX8MM Ethernet TX Bandwidth Fluctuations
>>>>>>
>>>>>> Hi Joakim,
>>>>>>
>>>>>> On 13.05.21 14:36, Joakim Zhang wrote:
>>>>>>>
>>>>>>> Hi Frieder,
>>>>>>>
>>>>>>> For NXP release kernel, I tested on i.MX8MQ/MM/MP, I can reproduce
>>>>>>> on
>>>>>> L5.10, and can't reproduce on L5.4.
>>>>>>> According to your description, you can reproduce this issue both
>>>>>>> L5.4 and
>>>>>> L5.10? So I need confirm with you.
>>>>>>
>>>>>> Thanks for looking into this. I could reproduce this on 5.4 and
>>>>>> 5.10 but both kernels were official mainline kernels and **not**
>>>>>> from the linux-imx downstream tree.
>>>>> Ok.
>>>>>
>>>>>> Maybe there is some problem in the mainline tree and it got
>>>>>> included in the NXP release kernel starting from L5.10?
>>>>> No, this much looks like a known issue, it should always exist after
>>>>> adding
>>>> AVB support in mainline.
>>>>>
>>>>> ENET IP is not a _real_ multiple queues per my understanding, queue
>>>>> 0 is for
>>>> best effort. And the queue 1&2 is for AVB stream whose default
>>>> bandwidth fraction is 0.5 in driver. (i.e. 50Mbps for 100Mbps and 500Mbps
>> for 1Gbps).
>>>> When transmitting packets, net core will select queues randomly,
>>>> which caused the tx bandwidth fluctuations. So you can change to use
>>>> single queue if you care more about tx bandwidth. Or you can refer to
>>>> NXP internal implementation.
>>>>> e.g.
>>>>> --- a/arch/arm64/boot/dts/freescale/imx8mq.dtsi
>>>>> +++ b/arch/arm64/boot/dts/freescale/imx8mq.dtsi
>>>>> @@ -916,8 +916,8 @@
>>>>>                                          <&clk
>>>> IMX8MQ_CLK_ENET_PHY_REF>;
>>>>>                                 clock-names = "ipg", "ahb", "ptp",
>>>>>                                               "enet_clk_ref",
>>>> "enet_out";
>>>>> -                               fsl,num-tx-queues = <3>;
>>>>> -                               fsl,num-rx-queues = <3>;
>>>>> +                               fsl,num-tx-queues = <1>;
>>>>> +                               fsl,num-rx-queues = <1>;
>>>>>                                 status = "disabled";
>>>>>                         };
>>>>>                 };
>>>>>
>>>>> I hope this can help you :)
>>>>
>>>> Patching out the queues is probably not the right thing.
>>>>
>>>> for starters... Is there BQL support in this driver? It would be
>>>> helpful to have on all queues.
>>> There is no BQL support in this driver, and BQL may improve throughput
>> further, but should not be the root cause of this reported issue.
>>>
>>>> Also if there was a way to present it as two interfaces, rather than
>>>> one, that would allow for a specific avb device to be presented.
>>>>
>>>> Or:
>>>>
>>>> Is there a standard means of signalling down the stack via the IP layer (a
>> dscp?
>>>> a setsockopt?) that the AVB queue is requested?
>>>>
>>> AFAIK, AVB is scope of VLAN, so we can queue AVB packets into queue 1&2
>> based on VLAN-ID.
>>
>> I had to look up what AVB even means, but from my current understanding it
>> doesn't seem right that for non-AVB packets the driver picks any of the three
>> queues in a random fashion while at the same time knowing that queue 1 and 2
>> have a 50% limitation on the bandwidth. Shouldn't there be some way to prefer
>> queue 0 without needing the user to set it up or even arbitrarily limiting the
>> number of queues as proposed above?
> 
> Yes, I think we can. I look into NXP local implementation, there is a ndo_select_queue callback.
> https://source.codeaurora.org/external/imx/linux-imx/tree/drivers/net/ethernet/freescale/fec_main.c?h=lf-5.4.y#n3419
> This is the version for L5.4 kernel.

Yes, this looks like it could solve the issue. Would you mind preparing a patch to upstream the change in [1]? I would be happy to test (at least the non-AVB case) and review.

Thanks
Frieder

[1] https://source.codeaurora.org/external/imx/linux-imx/commit?id=8a7fe8f38b7e3b2f9a016dcf4b4e38bb941ac6df

_______________________________________________
linux-arm-kernel mailing list
linux-arm-kernel@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-arm-kernel

^ permalink raw reply	[flat|nested] 21+ messages in thread

* RE: i.MX8MM Ethernet TX Bandwidth Fluctuations
  2021-05-19  8:10                 ` Frieder Schrempf
@ 2021-05-19  8:40                   ` Joakim Zhang
  2021-05-19 10:12                     ` Frieder Schrempf
  0 siblings, 1 reply; 21+ messages in thread
From: Joakim Zhang @ 2021-05-19  8:40 UTC (permalink / raw)
  To: Frieder Schrempf, Dave Taht; +Cc: dl-linux-imx, netdev, linux-arm-kernel


Hi Frieder,

> -----Original Message-----
> From: Frieder Schrempf <frieder.schrempf@kontron.de>
> Sent: 2021年5月19日 16:10
> To: Joakim Zhang <qiangqing.zhang@nxp.com>; Dave Taht
> <dave.taht@gmail.com>
> Cc: dl-linux-imx <linux-imx@nxp.com>; netdev@vger.kernel.org;
> linux-arm-kernel@lists.infradead.org
> Subject: Re: i.MX8MM Ethernet TX Bandwidth Fluctuations
> 
> Hi Joakim,
> 
> On 19.05.21 09:49, Joakim Zhang wrote:
> >
> > Hi Frieder,
> >
> >> -----Original Message-----
> >> From: Frieder Schrempf <frieder.schrempf@kontron.de>
> >> Sent: 2021年5月18日 20:55
> >> To: Joakim Zhang <qiangqing.zhang@nxp.com>; Dave Taht
> >> <dave.taht@gmail.com>
> >> Cc: dl-linux-imx <linux-imx@nxp.com>; netdev@vger.kernel.org;
> >> linux-arm-kernel@lists.infradead.org
> >> Subject: Re: i.MX8MM Ethernet TX Bandwidth Fluctuations
> >>
> >>
> >>
> >> On 18.05.21 14:35, Joakim Zhang wrote:
> >>>
> >>> Hi Dave,
> >>>
> >>>> -----Original Message-----
> >>>> From: Dave Taht <dave.taht@gmail.com>
> >>>> Sent: 2021年5月17日 20:48
> >>>> To: Joakim Zhang <qiangqing.zhang@nxp.com>
> >>>> Cc: Frieder Schrempf <frieder.schrempf@kontron.de>; dl-linux-imx
> >>>> <linux-imx@nxp.com>; netdev@vger.kernel.org;
> >>>> linux-arm-kernel@lists.infradead.org
> >>>> Subject: Re: i.MX8MM Ethernet TX Bandwidth Fluctuations
> >>>>
> >>>> On Mon, May 17, 2021 at 3:25 AM Joakim Zhang
> >>>> <qiangqing.zhang@nxp.com>
> >>>> wrote:
> >>>>>
> >>>>>
> >>>>> Hi Frieder,
> >>>>>
> >>>>>> -----Original Message-----
> >>>>>> From: Frieder Schrempf <frieder.schrempf@kontron.de>
> >>>>>> Sent: 2021年5月17日 15:17
> >>>>>> To: Joakim Zhang <qiangqing.zhang@nxp.com>; dl-linux-imx
> >>>>>> <linux-imx@nxp.com>; netdev@vger.kernel.org;
> >>>>>> linux-arm-kernel@lists.infradead.org
> >>>>>> Subject: Re: i.MX8MM Ethernet TX Bandwidth Fluctuations
> >>>>>>
> >>>>>> Hi Joakim,
> >>>>>>
> >>>>>> On 13.05.21 14:36, Joakim Zhang wrote:
> >>>>>>>
> >>>>>>> Hi Frieder,
> >>>>>>>
> >>>>>>> For NXP release kernel, I tested on i.MX8MQ/MM/MP, I can
> >>>>>>> reproduce on
> >>>>>> L5.10, and can't reproduce on L5.4.
> >>>>>>> According to your description, you can reproduce this issue both
> >>>>>>> L5.4 and
> >>>>>> L5.10? So I need confirm with you.
> >>>>>>
> >>>>>> Thanks for looking into this. I could reproduce this on 5.4 and
> >>>>>> 5.10 but both kernels were official mainline kernels and **not**
> >>>>>> from the linux-imx downstream tree.
> >>>>> Ok.
> >>>>>
> >>>>>> Maybe there is some problem in the mainline tree and it got
> >>>>>> included in the NXP release kernel starting from L5.10?
> >>>>> No, this much looks like a known issue, it should always exist
> >>>>> after adding
> >>>> AVB support in mainline.
> >>>>>
> >>>>> ENET IP is not a _real_ multiple queues per my understanding,
> >>>>> queue
> >>>>> 0 is for
> >>>> best effort. And the queue 1&2 is for AVB stream whose default
> >>>> bandwidth fraction is 0.5 in driver. (i.e. 50Mbps for 100Mbps and
> >>>> 500Mbps
> >> for 1Gbps).
> >>>> When transmitting packets, net core will select queues randomly,
> >>>> which caused the tx bandwidth fluctuations. So you can change to
> >>>> use single queue if you care more about tx bandwidth. Or you can
> >>>> refer to NXP internal implementation.
> >>>>> e.g.
> >>>>> --- a/arch/arm64/boot/dts/freescale/imx8mq.dtsi
> >>>>> +++ b/arch/arm64/boot/dts/freescale/imx8mq.dtsi
> >>>>> @@ -916,8 +916,8 @@
> >>>>>                                          <&clk
> >>>> IMX8MQ_CLK_ENET_PHY_REF>;
> >>>>>                                 clock-names = "ipg", "ahb",
> "ptp",
> >>>>>
> "enet_clk_ref",
> >>>> "enet_out";
> >>>>> -                               fsl,num-tx-queues = <3>;
> >>>>> -                               fsl,num-rx-queues = <3>;
> >>>>> +                               fsl,num-tx-queues = <1>;
> >>>>> +                               fsl,num-rx-queues = <1>;
> >>>>>                                 status = "disabled";
> >>>>>                         };
> >>>>>                 };
> >>>>>
> >>>>> I hope this can help you :)
> >>>>
> >>>> Patching out the queues is probably not the right thing.
> >>>>
> >>>> for starters... Is there BQL support in this driver? It would be
> >>>> helpful to have on all queues.
> >>> There is no BQL support in this driver, and BQL may improve
> >>> throughput
> >> further, but should not be the root cause of this reported issue.
> >>>
> >>>> Also if there was a way to present it as two interfaces, rather
> >>>> than one, that would allow for a specific avb device to be presented.
> >>>>
> >>>> Or:
> >>>>
> >>>> Is there a standard means of signalling down the stack via the IP
> >>>> layer (a
> >> dscp?
> >>>> a setsockopt?) that the AVB queue is requested?
> >>>>
> >>> AFAIK, AVB is scope of VLAN, so we can queue AVB packets into queue
> >>> 1&2
> >> based on VLAN-ID.
> >>
> >> I had to look up what AVB even means, but from my current
> >> understanding it doesn't seem right that for non-AVB packets the
> >> driver picks any of the three queues in a random fashion while at the
> >> same time knowing that queue 1 and 2 have a 50% limitation on the
> >> bandwidth. Shouldn't there be some way to prefer queue 0 without
> >> needing the user to set it up or even arbitrarily limiting the number of
> queues as proposed above?
> >
> > Yes, I think we can. I look into NXP local implementation, there is a
> ndo_select_queue callback.
> > https://source.codeaurora.org/external/imx/linux-imx/tree/drivers/net/ethernet/freescale/fec_main.c?h=lf-5.4.y#n3419
> > This is the version for L5.4 kernel.
> 
> Yes, this looks like it could solve the issue. Would you mind preparing a patch to
> upstream the change in [1]? I would be happy to test (at least the non-AVB
> case) and review.

Yes, I can have a try. I see this patch has been sitting in the downstream tree for many years, and I don't know the history.
Anyway, I will try to upstream it first and see if anyone has comments.

Best Regards,
Joakim Zhang
> Thanks
> Frieder
> 
> [1]
> https://source.codeaurora.org/external/imx/linux-imx/commit?id=8a7fe8f38b7e3b2f9a016dcf4b4e38bb941ac6df
_______________________________________________
linux-arm-kernel mailing list
linux-arm-kernel@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-arm-kernel

^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: i.MX8MM Ethernet TX Bandwidth Fluctuations
  2021-05-19  8:40                   ` Joakim Zhang
@ 2021-05-19 10:12                     ` Frieder Schrempf
  2021-05-19 10:47                       ` Joakim Zhang
  0 siblings, 1 reply; 21+ messages in thread
From: Frieder Schrempf @ 2021-05-19 10:12 UTC (permalink / raw)
  To: Joakim Zhang, Dave Taht; +Cc: dl-linux-imx, netdev, linux-arm-kernel

On 19.05.21 10:40, Joakim Zhang wrote:
> 
> Hi Frieder,
> 
>> -----Original Message-----
>> From: Frieder Schrempf <frieder.schrempf@kontron.de>
>> Sent: 2021年5月19日 16:10
>> To: Joakim Zhang <qiangqing.zhang@nxp.com>; Dave Taht
>> <dave.taht@gmail.com>
>> Cc: dl-linux-imx <linux-imx@nxp.com>; netdev@vger.kernel.org;
>> linux-arm-kernel@lists.infradead.org
>> Subject: Re: i.MX8MM Ethernet TX Bandwidth Fluctuations
>>
>> Hi Joakim,
>>
>> On 19.05.21 09:49, Joakim Zhang wrote:
>>>
>>> Hi Frieder,
>>>
>>>> -----Original Message-----
>>>> From: Frieder Schrempf <frieder.schrempf@kontron.de>
>>>> Sent: 2021年5月18日 20:55
>>>> To: Joakim Zhang <qiangqing.zhang@nxp.com>; Dave Taht
>>>> <dave.taht@gmail.com>
>>>> Cc: dl-linux-imx <linux-imx@nxp.com>; netdev@vger.kernel.org;
>>>> linux-arm-kernel@lists.infradead.org
>>>> Subject: Re: i.MX8MM Ethernet TX Bandwidth Fluctuations
>>>>
>>>>
>>>>
>>>> On 18.05.21 14:35, Joakim Zhang wrote:
>>>>>
>>>>> Hi Dave,
>>>>>
>>>>>> -----Original Message-----
>>>>>> From: Dave Taht <dave.taht@gmail.com>
>>>>>> Sent: 2021年5月17日 20:48
>>>>>> To: Joakim Zhang <qiangqing.zhang@nxp.com>
>>>>>> Cc: Frieder Schrempf <frieder.schrempf@kontron.de>; dl-linux-imx
>>>>>> <linux-imx@nxp.com>; netdev@vger.kernel.org;
>>>>>> linux-arm-kernel@lists.infradead.org
>>>>>> Subject: Re: i.MX8MM Ethernet TX Bandwidth Fluctuations
>>>>>>
>>>>>> On Mon, May 17, 2021 at 3:25 AM Joakim Zhang
>>>>>> <qiangqing.zhang@nxp.com>
>>>>>> wrote:
>>>>>>>
>>>>>>>
>>>>>>> Hi Frieder,
>>>>>>>
>>>>>>>> -----Original Message-----
>>>>>>>> From: Frieder Schrempf <frieder.schrempf@kontron.de>
>>>>>>>> Sent: 2021年5月17日 15:17
>>>>>>>> To: Joakim Zhang <qiangqing.zhang@nxp.com>; dl-linux-imx
>>>>>>>> <linux-imx@nxp.com>; netdev@vger.kernel.org;
>>>>>>>> linux-arm-kernel@lists.infradead.org
>>>>>>>> Subject: Re: i.MX8MM Ethernet TX Bandwidth Fluctuations
>>>>>>>>
>>>>>>>> Hi Joakim,
>>>>>>>>
>>>>>>>> On 13.05.21 14:36, Joakim Zhang wrote:
>>>>>>>>>
>>>>>>>>> Hi Frieder,
>>>>>>>>>
>>>>>>>>> For NXP release kernel, I tested on i.MX8MQ/MM/MP, I can
>>>>>>>>> reproduce on
>>>>>>>> L5.10, and can't reproduce on L5.4.
>>>>>>>>> According to your description, you can reproduce this issue both
>>>>>>>>> L5.4 and
>>>>>>>> L5.10? So I need confirm with you.
>>>>>>>>
>>>>>>>> Thanks for looking into this. I could reproduce this on 5.4 and
>>>>>>>> 5.10 but both kernels were official mainline kernels and **not**
>>>>>>>> from the linux-imx downstream tree.
>>>>>>> Ok.
>>>>>>>
>>>>>>>> Maybe there is some problem in the mainline tree and it got
>>>>>>>> included in the NXP release kernel starting from L5.10?
>>>>>>> No, this much looks like a known issue, it should always exist
>>>>>>> after adding
>>>>>> AVB support in mainline.
>>>>>>>
>>>>>>> ENET IP is not a _real_ multiple queues per my understanding,
>>>>>>> queue
>>>>>>> 0 is for
>>>>>> best effort. And the queue 1&2 is for AVB stream whose default
>>>>>> bandwidth fraction is 0.5 in driver. (i.e. 50Mbps for 100Mbps and
>>>>>> 500Mbps
>>>> for 1Gbps).
>>>>>> When transmitting packets, net core will select queues randomly,
>>>>>> which caused the tx bandwidth fluctuations. So you can change to
>>>>>> use single queue if you care more about tx bandwidth. Or you can
>>>>>> refer to NXP internal implementation.
>>>>>>> e.g.
>>>>>>> --- a/arch/arm64/boot/dts/freescale/imx8mq.dtsi
>>>>>>> +++ b/arch/arm64/boot/dts/freescale/imx8mq.dtsi
>>>>>>> @@ -916,8 +916,8 @@
>>>>>>>                                          <&clk
>>>>>> IMX8MQ_CLK_ENET_PHY_REF>;
>>>>>>>                                 clock-names = "ipg", "ahb",
>> "ptp",
>>>>>>>
>> "enet_clk_ref",
>>>>>> "enet_out";
>>>>>>> -                               fsl,num-tx-queues = <3>;
>>>>>>> -                               fsl,num-rx-queues = <3>;
>>>>>>> +                               fsl,num-tx-queues = <1>;
>>>>>>> +                               fsl,num-rx-queues = <1>;
>>>>>>>                                 status = "disabled";
>>>>>>>                         };
>>>>>>>                 };
>>>>>>>
>>>>>>> I hope this can help you :)
>>>>>>
>>>>>> Patching out the queues is probably not the right thing.
>>>>>>
>>>>>> for starters... Is there BQL support in this driver? It would be
>>>>>> helpful to have on all queues.
>>>>> There is no BQL support in this driver, and BQL may improve
>>>>> throughput
>>>> further, but should not be the root cause of this reported issue.
>>>>>
>>>>>> Also if there was a way to present it as two interfaces, rather
>>>>>> than one, that would allow for a specific avb device to be presented.
>>>>>>
>>>>>> Or:
>>>>>>
>>>>>> Is there a standard means of signalling down the stack via the IP
>>>>>> layer (a
>>>> dscp?
>>>>>> a setsockopt?) that the AVB queue is requested?
>>>>>>
>>>>> AFAIK, AVB is scope of VLAN, so we can queue AVB packets into queue
>>>>> 1&2
>>>> based on VLAN-ID.
>>>>
>>>> I had to look up what AVB even means, but from my current
>>>> understanding it doesn't seem right that for non-AVB packets the
>>>> driver picks any of the three queues in a random fashion while at the
>>>> same time knowing that queue 1 and 2 have a 50% limitation on the
>>>> bandwidth. Shouldn't there be some way to prefer queue 0 without
>>>> needing the user to set it up or even arbitrarily limiting the number of
>> queues as proposed above?
>>>
>>> Yes, I think we can. I look into NXP local implementation, there is a
>> ndo_select_queue callback.
>>> https://source.codeaurora.org/external/imx/linux-imx/tree/drivers/net/ethernet/freescale/fec_main.c?h=lf-5.4.y#n3419
>>> This is the version for L5.4 kernel.
>>
>> Yes, this looks like it could solve the issue. Would you mind preparing a patch to
>> upstream the change in [1]? I would be happy to test (at least the non-AVB
>> case) and review.
> 
> Yes, I can have a try. I saw this patch has been staying in downstream tree for many years, and I don't know the history.
> Anyway, I will try to upstream first to see if anyone has comments.

Thanks, that would be great. Please put me on cc if you send the patch.

Just for the record:

When I set fsl,num-tx-queues = <1>, I see that the bandwidth drops don't occur anymore. When I instead apply the queue selection patch from the downstream kernel, I also see that queue 0 is always picked for my untagged traffic. In both cases the bandwidth stays as high as expected (> 900 Mbit/s).

_______________________________________________
linux-arm-kernel mailing list
linux-arm-kernel@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-arm-kernel

^ permalink raw reply	[flat|nested] 21+ messages in thread

* RE: i.MX8MM Ethernet TX Bandwidth Fluctuations
  2021-05-19 10:12                     ` Frieder Schrempf
@ 2021-05-19 10:47                       ` Joakim Zhang
  0 siblings, 0 replies; 21+ messages in thread
From: Joakim Zhang @ 2021-05-19 10:47 UTC (permalink / raw)
  To: Frieder Schrempf, Dave Taht; +Cc: dl-linux-imx, netdev, linux-arm-kernel


> -----Original Message-----
> From: Frieder Schrempf <frieder.schrempf@kontron.de>
> Sent: 2021年5月19日 18:12
> To: Joakim Zhang <qiangqing.zhang@nxp.com>; Dave Taht
> <dave.taht@gmail.com>
> Cc: dl-linux-imx <linux-imx@nxp.com>; netdev@vger.kernel.org;
> linux-arm-kernel@lists.infradead.org
> Subject: Re: i.MX8MM Ethernet TX Bandwidth Fluctuations
> 
> On 19.05.21 10:40, Joakim Zhang wrote:
> >
> > Hi Frieder,
> >
> >> -----Original Message-----
> >> From: Frieder Schrempf <frieder.schrempf@kontron.de>
> >> Sent: 2021年5月19日 16:10
> >> To: Joakim Zhang <qiangqing.zhang@nxp.com>; Dave Taht
> >> <dave.taht@gmail.com>
> >> Cc: dl-linux-imx <linux-imx@nxp.com>; netdev@vger.kernel.org;
> >> linux-arm-kernel@lists.infradead.org
> >> Subject: Re: i.MX8MM Ethernet TX Bandwidth Fluctuations
> >>
> >> Hi Joakim,
> >>
> >> On 19.05.21 09:49, Joakim Zhang wrote:
> >>>
> >>> Hi Frieder,
> >>>
> >>>> -----Original Message-----
> >>>> From: Frieder Schrempf <frieder.schrempf@kontron.de>
> >>>> Sent: 2021年5月18日 20:55
> >>>> To: Joakim Zhang <qiangqing.zhang@nxp.com>; Dave Taht
> >>>> <dave.taht@gmail.com>
> >>>> Cc: dl-linux-imx <linux-imx@nxp.com>; netdev@vger.kernel.org;
> >>>> linux-arm-kernel@lists.infradead.org
> >>>> Subject: Re: i.MX8MM Ethernet TX Bandwidth Fluctuations
> >>>>
> >>>>
> >>>>
> >>>> On 18.05.21 14:35, Joakim Zhang wrote:
> >>>>>
> >>>>> Hi Dave,
> >>>>>
> >>>>>> -----Original Message-----
> >>>>>> From: Dave Taht <dave.taht@gmail.com>
> >>>>>> Sent: 2021年5月17日 20:48
> >>>>>> To: Joakim Zhang <qiangqing.zhang@nxp.com>
> >>>>>> Cc: Frieder Schrempf <frieder.schrempf@kontron.de>; dl-linux-imx
> >>>>>> <linux-imx@nxp.com>; netdev@vger.kernel.org;
> >>>>>> linux-arm-kernel@lists.infradead.org
> >>>>>> Subject: Re: i.MX8MM Ethernet TX Bandwidth Fluctuations
> >>>>>>
> >>>>>> On Mon, May 17, 2021 at 3:25 AM Joakim Zhang
> >>>>>> <qiangqing.zhang@nxp.com>
> >>>>>> wrote:
> >>>>>>>
> >>>>>>>
> >>>>>>> Hi Frieder,
> >>>>>>>
> >>>>>>>> -----Original Message-----
> >>>>>>>> From: Frieder Schrempf <frieder.schrempf@kontron.de>
> >>>>>>>> Sent: 2021年5月17日 15:17
> >>>>>>>> To: Joakim Zhang <qiangqing.zhang@nxp.com>; dl-linux-imx
> >>>>>>>> <linux-imx@nxp.com>; netdev@vger.kernel.org;
> >>>>>>>> linux-arm-kernel@lists.infradead.org
> >>>>>>>> Subject: Re: i.MX8MM Ethernet TX Bandwidth Fluctuations
> >>>>>>>>
> >>>>>>>> Hi Joakim,
> >>>>>>>>
> >>>>>>>> On 13.05.21 14:36, Joakim Zhang wrote:
> >>>>>>>>>
> >>>>>>>>> Hi Frieder,
> >>>>>>>>>
> >>>>>>>>> For NXP release kernel, I tested on i.MX8MQ/MM/MP, I can
> >>>>>>>>> reproduce on
> >>>>>>>> L5.10, and can't reproduce on L5.4.
> >>>>>>>>> According to your description, you can reproduce this issue
> >>>>>>>>> both
> >>>>>>>>> L5.4 and
> >>>>>>>> L5.10? So I need confirm with you.
> >>>>>>>>
> >>>>>>>> Thanks for looking into this. I could reproduce this on 5.4 and
> >>>>>>>> 5.10 but both kernels were official mainline kernels and
> >>>>>>>> **not** from the linux-imx downstream tree.
> >>>>>>> Ok.
> >>>>>>>
> >>>>>>>> Maybe there is some problem in the mainline tree and it got
> >>>>>>>> included in the NXP release kernel starting from L5.10?
> >>>>>>> No, this much looks like a known issue, it should always exist
> >>>>>>> after adding
> >>>>>> AVB support in mainline.
> >>>>>>>
> >>>>>>> ENET IP is not a _real_ multiple queues per my understanding,
> >>>>>>> queue
> >>>>>>> 0 is for
> >>>>>> best effort. And the queue 1&2 is for AVB stream whose default
> >>>>>> bandwidth fraction is 0.5 in driver. (i.e. 50Mbps for 100Mbps and
> >>>>>> 500Mbps
> >>>> for 1Gbps).
> >>>>>> When transmitting packets, net core will select queues randomly,
> >>>>>> which caused the tx bandwidth fluctuations. So you can change to
> >>>>>> use single queue if you care more about tx bandwidth. Or you can
> >>>>>> refer to NXP internal implementation.
> >>>>>>> e.g.
> >>>>>>> --- a/arch/arm64/boot/dts/freescale/imx8mq.dtsi
> >>>>>>> +++ b/arch/arm64/boot/dts/freescale/imx8mq.dtsi
> >>>>>>> @@ -916,8 +916,8 @@
> >>>>>>>                                          <&clk
> >>>>>> IMX8MQ_CLK_ENET_PHY_REF>;
> >>>>>>>                                 clock-names = "ipg", "ahb",
> >> "ptp",
> >>>>>>>
> >> "enet_clk_ref",
> >>>>>> "enet_out";
> >>>>>>> -                               fsl,num-tx-queues = <3>;
> >>>>>>> -                               fsl,num-rx-queues = <3>;
> >>>>>>> +                               fsl,num-tx-queues = <1>;
> >>>>>>> +                               fsl,num-rx-queues = <1>;
> >>>>>>>                                 status = "disabled";
> >>>>>>>                         };
> >>>>>>>                 };
> >>>>>>>
> >>>>>>> I hope this can help you :)
> >>>>>>
> >>>>>> Patching out the queues is probably not the right thing.
> >>>>>>
> >>>>>> for starters... Is there BQL support in this driver? It would be
> >>>>>> helpful to have on all queues.
> >>>>> There is no BQL support in this driver, and BQL may improve
> >>>>> throughput
> >>>> further, but should not be the root cause of this reported issue.
> >>>>>
> >>>>>> Also if there was a way to present it as two interfaces, rather
> >>>>>> than one, that would allow for a specific avb device to be presented.
> >>>>>>
> >>>>>> Or:
> >>>>>>
> >>>>>> Is there a standard means of signalling down the stack via the IP
> >>>>>> layer (a
> >>>> dscp?
> >>>>>> a setsockopt?) that the AVB queue is requested?
> >>>>>>
> >>>>> AFAIK, AVB is scope of VLAN, so we can queue AVB packets into
> >>>>> queue
> >>>>> 1&2
> >>>> based on VLAN-ID.
> >>>>
> >>>> I had to look up what AVB even means, but from my current
> >>>> understanding it doesn't seem right that for non-AVB packets the
> >>>> driver picks any of the three queues in a random fashion while at
> >>>> the same time knowing that queue 1 and 2 have a 50% limitation on
> >>>> the bandwidth. Shouldn't there be some way to prefer queue 0
> >>>> without needing the user to set it up or even arbitrarily limiting
> >>>> the number of
> >> queues as proposed above?
> >>>
> >>> Yes, I think we can. I look into NXP local implementation, there is
> >>> a
> >> ndo_select_queue callback.
> >>> https://source.codeaurora.org/external/imx/linux-imx/tree/drivers/net/ethernet/freescale/fec_main.c?h=lf-5.4.y#n3419
> >>> This is the version for L5.4 kernel.
> >>
> >> Yes, this looks like it could solve the issue. Would you mind
> >> preparing a patch to upstream the change in [1]? I would be happy to
> >> test (at least the non-AVB
> >> case) and review.
> >
> > Yes, I can have a try. I saw this patch has been staying in downstream tree
> for many years, and I don't know the history.
> > Anyway, I will try to upstream first to see if anyone has comments.
> 
> Thanks, that would be great. Please put me on cc if you send the patch.
Sure :-)

Best Regards,
Joakim Zhang
> Just for the record:
> 
> When I set fsl,num-tx-queues = <1>, I do see that the bandwidth-drops don't
> occur anymore. When I instead apply the queue selection patch from the
> downstream kernel, I also see that queue 0 is always picked for my untagged
> traffic. In both cases bandwidth stays just as high as expected (> 900 Mbit/s).
_______________________________________________
linux-arm-kernel mailing list
linux-arm-kernel@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-arm-kernel

^ permalink raw reply	[flat|nested] 21+ messages in thread

end of thread, other threads:[~2021-05-19 10:51 UTC | newest]

Thread overview: 21+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2021-05-06 14:45 i.MX8MM Ethernet TX Bandwidth Fluctuations Frieder Schrempf
2021-05-06 14:53 ` Dave Taht
2021-05-10 12:49   ` Frieder Schrempf
2021-05-10 15:09     ` Dave Taht
2021-05-06 19:20 ` Adam Ford
2021-05-07 15:34   ` Tim Harvey
2021-05-10 12:57     ` Frieder Schrempf
2021-05-10 12:52   ` Frieder Schrempf
2021-05-10 13:10     ` Adam Ford
2021-05-12 11:58 ` Joakim Zhang
2021-05-13 12:36   ` Joakim Zhang
2021-05-17  7:17     ` Frieder Schrempf
2021-05-17 10:22       ` Joakim Zhang
2021-05-17 12:47         ` Dave Taht
2021-05-18 12:35           ` Joakim Zhang
2021-05-18 12:55             ` Frieder Schrempf
2021-05-19  7:49               ` Joakim Zhang
2021-05-19  8:10                 ` Frieder Schrempf
2021-05-19  8:40                   ` Joakim Zhang
2021-05-19 10:12                     ` Frieder Schrempf
2021-05-19 10:47                       ` Joakim Zhang

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).