From: Frieder Schrempf <frieder.schrempf@kontron.de>
To: Joakim Zhang <qiangqing.zhang@nxp.com>, Dave Taht <dave.taht@gmail.com>
Cc: dl-linux-imx <linux-imx@nxp.com>,
	"netdev@vger.kernel.org" <netdev@vger.kernel.org>,
	"linux-arm-kernel@lists.infradead.org" 
	<linux-arm-kernel@lists.infradead.org>
Subject: Re: i.MX8MM Ethernet TX Bandwidth Fluctuations
Date: Wed, 19 May 2021 12:12:03 +0200
Message-ID: <58930c74-c889-e9d2-f30f-bc9f47119820@kontron.de>
In-Reply-To: <DB8PR04MB6795BEDCA2995C1E88E2B5B7E62B9@DB8PR04MB6795.eurprd04.prod.outlook.com>

On 19.05.21 10:40, Joakim Zhang wrote:
> 
> Hi Frieder,
> 
>> -----Original Message-----
>> From: Frieder Schrempf <frieder.schrempf@kontron.de>
>> Sent: May 19, 2021 16:10
>> To: Joakim Zhang <qiangqing.zhang@nxp.com>; Dave Taht
>> <dave.taht@gmail.com>
>> Cc: dl-linux-imx <linux-imx@nxp.com>; netdev@vger.kernel.org;
>> linux-arm-kernel@lists.infradead.org
>> Subject: Re: i.MX8MM Ethernet TX Bandwidth Fluctuations
>>
>> Hi Joakim,
>>
>> On 19.05.21 09:49, Joakim Zhang wrote:
>>>
>>> Hi Frieder,
>>>
>>>> -----Original Message-----
>>>> From: Frieder Schrempf <frieder.schrempf@kontron.de>
>>>> Sent: May 18, 2021 20:55
>>>> To: Joakim Zhang <qiangqing.zhang@nxp.com>; Dave Taht
>>>> <dave.taht@gmail.com>
>>>> Cc: dl-linux-imx <linux-imx@nxp.com>; netdev@vger.kernel.org;
>>>> linux-arm-kernel@lists.infradead.org
>>>> Subject: Re: i.MX8MM Ethernet TX Bandwidth Fluctuations
>>>>
>>>>
>>>>
>>>> On 18.05.21 14:35, Joakim Zhang wrote:
>>>>>
>>>>> Hi Dave,
>>>>>
>>>>>> -----Original Message-----
>>>>>> From: Dave Taht <dave.taht@gmail.com>
>>>>>> Sent: May 17, 2021 20:48
>>>>>> To: Joakim Zhang <qiangqing.zhang@nxp.com>
>>>>>> Cc: Frieder Schrempf <frieder.schrempf@kontron.de>; dl-linux-imx
>>>>>> <linux-imx@nxp.com>; netdev@vger.kernel.org;
>>>>>> linux-arm-kernel@lists.infradead.org
>>>>>> Subject: Re: i.MX8MM Ethernet TX Bandwidth Fluctuations
>>>>>>
>>>>>> On Mon, May 17, 2021 at 3:25 AM Joakim Zhang
>>>>>> <qiangqing.zhang@nxp.com>
>>>>>> wrote:
>>>>>>>
>>>>>>>
>>>>>>> Hi Frieder,
>>>>>>>
>>>>>>>> -----Original Message-----
>>>>>>>> From: Frieder Schrempf <frieder.schrempf@kontron.de>
>>>>>>>> Sent: May 17, 2021 15:17
>>>>>>>> To: Joakim Zhang <qiangqing.zhang@nxp.com>; dl-linux-imx
>>>>>>>> <linux-imx@nxp.com>; netdev@vger.kernel.org;
>>>>>>>> linux-arm-kernel@lists.infradead.org
>>>>>>>> Subject: Re: i.MX8MM Ethernet TX Bandwidth Fluctuations
>>>>>>>>
>>>>>>>> Hi Joakim,
>>>>>>>>
>>>>>>>> On 13.05.21 14:36, Joakim Zhang wrote:
>>>>>>>>>
>>>>>>>>> Hi Frieder,
>>>>>>>>>
>>>>>>>>> For the NXP release kernel, I tested on i.MX8MQ/MM/MP: I can
>>>>>>>>> reproduce this on L5.10, but can't reproduce it on L5.4.
>>>>>>>>> According to your description, you can reproduce this issue on
>>>>>>>>> both L5.4 and L5.10? I need to confirm that with you.
>>>>>>>>
>>>>>>>> Thanks for looking into this. I could reproduce this on 5.4 and
>>>>>>>> 5.10 but both kernels were official mainline kernels and **not**
>>>>>>>> from the linux-imx downstream tree.
>>>>>>> Ok.
>>>>>>>
>>>>>>>> Maybe there is some problem in the mainline tree and it got
>>>>>>>> included in the NXP release kernel starting from L5.10?
>>>>>>> No, this looks like a known issue; it should have existed ever
>>>>>>> since AVB support was added in mainline.
>>>>>>>
>>>>>>> The ENET IP does not implement _real_ multiple queues, per my
>>>>>>> understanding: queue 0 is for best effort, and queues 1 & 2 are for
>>>>>>> AVB streams, whose default bandwidth fraction is 0.5 in the driver
>>>>>>> (i.e. 50 Mbit/s on a 100 Mbit/s link and 500 Mbit/s on a 1 Gbit/s
>>>>>>> link). When transmitting packets, the net core selects the queue
>>>>>>> randomly, which causes the TX bandwidth fluctuations. So you can
>>>>>>> switch to a single queue if you care more about TX bandwidth, or
>>>>>>> you can refer to NXP's internal implementation.
>>>>>>> e.g.
>>>>>>> --- a/arch/arm64/boot/dts/freescale/imx8mq.dtsi
>>>>>>> +++ b/arch/arm64/boot/dts/freescale/imx8mq.dtsi
>>>>>>> @@ -916,8 +916,8 @@
>>>>>>>                                          <&clk IMX8MQ_CLK_ENET_PHY_REF>;
>>>>>>>                                 clock-names = "ipg", "ahb", "ptp",
>>>>>>>                                               "enet_clk_ref", "enet_out";
>>>>>>> -                               fsl,num-tx-queues = <3>;
>>>>>>> -                               fsl,num-rx-queues = <3>;
>>>>>>> +                               fsl,num-tx-queues = <1>;
>>>>>>> +                               fsl,num-rx-queues = <1>;
>>>>>>>                                 status = "disabled";
>>>>>>>                         };
>>>>>>>                 };
>>>>>>>
>>>>>>> I hope this can help you :)
>>>>>>
>>>>>> Patching out the queues is probably not the right thing.
>>>>>>
>>>>>> For starters... is there BQL support in this driver? It would be
>>>>>> helpful to have it on all queues.
>>>>> There is no BQL support in this driver. BQL might improve throughput
>>>>> further, but its absence should not be the root cause of this
>>>>> reported issue.
>>>>>
>>>>>> Also, if there were a way to present it as two interfaces rather
>>>>>> than one, that would allow a specific AVB device to be presented.
>>>>>>
>>>>>> Or:
>>>>>>
>>>>>> Is there a standard means of signalling down the stack via the IP
>>>>>> layer (a DSCP? a setsockopt?) that the AVB queue is requested?
>>>>>>
>>>>> AFAIK, AVB is in the scope of VLAN, so we can queue AVB packets into
>>>>> queues 1 & 2 based on the VLAN ID.
>>>>
>>>> I had to look up what AVB even means, but from my current
>>>> understanding it doesn't seem right that the driver picks any of the
>>>> three queues for non-AVB packets in a random fashion while knowing
>>>> that queues 1 and 2 have a 50% bandwidth limitation. Shouldn't there
>>>> be some way to prefer queue 0 without requiring the user to set it
>>>> up, or arbitrarily limiting the number of queues as proposed above?
>>>
>>> Yes, I think we can. Looking into the NXP local implementation, there
>>> is an ndo_select_queue callback:
>>> https://source.codeaurora.org/external/imx/linux-imx/tree/drivers/net/ethernet/freescale/fec_main.c?h=lf-5.4.y#n3419
>>> This is the version for the L5.4 kernel.
>>
>> Yes, this looks like it could solve the issue. Would you mind preparing
>> a patch to upstream the change linked above? I would be happy to test
>> (at least the non-AVB case) and review.
> 
> Yes, I can give it a try. I see this patch has been sitting in the downstream tree for many years, and I don't know its history.
> Anyway, I will try to upstream it first and see if anyone has comments.

Thanks, that would be great. Please put me on cc if you send the patch.
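
As an aside, on Dave's BQL question above: if someone wants to experiment
with adding BQL to this driver, it essentially means reporting queued bytes
at transmit time and completed bytes in the TX-done path. A rough,
hypothetical sketch follows; netdev_tx_sent_queue()/netdev_tx_completed_queue()
are the real kernel APIs, but the surrounding function names and structure
are assumptions on my part, not actual fec_main.c code:

#include <linux/netdevice.h>
#include <linux/skbuff.h>

/* Hypothetical BQL hooks for a multi-queue NIC driver, not fec code. */
static netdev_tx_t foo_xmit(struct sk_buff *skb, struct net_device *ndev)
{
	u16 q = skb_get_queue_mapping(skb);
	struct netdev_queue *nq = netdev_get_tx_queue(ndev, q);

	/* ... fill the TX descriptor and kick the hardware ... */

	netdev_tx_sent_queue(nq, skb->len);	/* account bytes for BQL */
	return NETDEV_TX_OK;
}

static void foo_tx_complete(struct net_device *ndev, u16 q,
			    unsigned int pkts, unsigned int bytes)
{
	struct netdev_queue *nq = netdev_get_tx_queue(ndev, q);

	/* After reclaiming descriptors, report completions; this may
	 * restart a queue that BQL had stopped. */
	netdev_tx_completed_queue(nq, pkts, bytes);
}

With that in place, BQL bounds how many bytes sit in each hardware ring,
which helps latency; but as Joakim said, it would not by itself fix the
queue-selection problem.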

Just for the record:

When I set fsl,num-tx-queues = <1>, the bandwidth drops don't occur anymore. When I instead apply the queue selection patch from the downstream kernel, queue 0 is always picked for my untagged traffic. In both cases the bandwidth stays as high as expected (> 900 Mbit/s).
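
For anyone following along who wants the gist of the downstream approach
without digging through the tree: as I understand it, the callback keeps
untagged traffic on queue 0 and only uses queues 1/2 for VLAN-tagged AVB
streams. A minimal sketch of that idea (hypothetical code, not the actual
NXP patch; the PCP-to-queue mapping is an assumption):

#include <linux/netdevice.h>
#include <linux/if_vlan.h>

/* Keep best-effort traffic on queue 0; reserve queues 1/2 for AVB
 * streams, selected here by the VLAN priority (PCP) bits. */
static u16 foo_select_queue(struct net_device *ndev, struct sk_buff *skb,
			    struct net_device *sb_dev)
{
	u16 vlan_tci;

	/* Untagged traffic is best effort and belongs on queue 0 */
	if (vlan_get_tag(skb, &vlan_tci))
		return 0;

	switch ((vlan_tci & VLAN_PRIO_MASK) >> VLAN_PRIO_SHIFT) {
	case 2:
		return 1;	/* assumed AVB class B priority */
	case 3:
		return 2;	/* assumed AVB class A priority */
	default:
		return 0;
	}
}

This would be hooked up via .ndo_select_queue in the driver's
net_device_ops. Which queues actually get used can be observed with
"tc -s qdisc show dev eth0", since the default mq qdisc exposes
per-queue statistics.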


Thread overview:

2021-05-06 14:45 i.MX8MM Ethernet TX Bandwidth Fluctuations Frieder Schrempf
2021-05-06 14:53 ` Dave Taht
2021-05-10 12:49   ` Frieder Schrempf
2021-05-10 15:09     ` Dave Taht
2021-05-06 19:20 ` Adam Ford
2021-05-07 15:34   ` Tim Harvey
2021-05-10 12:57     ` Frieder Schrempf
2021-05-10 12:52   ` Frieder Schrempf
2021-05-10 13:10     ` Adam Ford
2021-05-12 11:58 ` Joakim Zhang
2021-05-13 12:36   ` Joakim Zhang
2021-05-17  7:17     ` Frieder Schrempf
2021-05-17 10:22       ` Joakim Zhang
2021-05-17 12:47         ` Dave Taht
2021-05-18 12:35           ` Joakim Zhang
2021-05-18 12:55             ` Frieder Schrempf
2021-05-19  7:49               ` Joakim Zhang
2021-05-19  8:10                 ` Frieder Schrempf
2021-05-19  8:40                   ` Joakim Zhang
2021-05-19 10:12                     ` Frieder Schrempf [this message]
2021-05-19 10:47                       ` Joakim Zhang
