From: Przemek Kitszel <przemyslaw.kitszel@intel.com>
To: Jiri Pirko <jiri@resnulli.us>,
	Mateusz Polchlopek <mateusz.polchlopek@intel.com>
Cc: netdev@vger.kernel.org, Lukasz Czapnik <lukasz.czapnik@intel.com>,
	intel-wired-lan@lists.osuosl.org, horms@kernel.org
Subject: Re: [Intel-wired-lan] [PATCH iwl-next v4 4/5] ice: Add tx_scheduling_layers devlink param
Date: Mon, 19 Feb 2024 14:33:54 +0100	[thread overview]
Message-ID: <48675853-2971-42a1-9596-73d1c4517085@intel.com> (raw)
In-Reply-To: <ZdNLkJm2qr1kZCis@nanopsycho>

On 2/19/24 13:37, Jiri Pirko wrote:
> Mon, Feb 19, 2024 at 11:05:57AM CET, mateusz.polchlopek@intel.com wrote:
>> From: Lukasz Czapnik <lukasz.czapnik@intel.com>
>>
>> It was observed that Tx performance was inconsistent across all queues
>> and/or VSIs, and that it was directly connected to the existing 9-layer
>> topology of the Tx scheduler.
>>
>> Introduce a new private devlink param - tx_scheduling_layers. This
>> parameter gives the user the flexibility to choose a 5-layer transmit
>> scheduler topology, which helps to smooth out transmit performance.
>>
>> Allowed parameter values are 5 and 9.
>>
>> Example usage:
>>
>> Show:
>> devlink dev param show pci/0000:4b:00.0 name tx_scheduling_layers
>> pci/0000:4b:00.0:
>>   name tx_scheduling_layers type driver-specific
>>     values:
>>       cmode permanent value 9
>>
>> Set:
>> devlink dev param set pci/0000:4b:00.0 name tx_scheduling_layers value 5
>> cmode permanent
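As a side note, the configured value can be pulled out of the show output above with standard tools; a minimal sketch, assuming the exact output layout quoted above (the sample text is hard-coded here in place of a live `devlink` call):

```shell
# Sketch: extract the layer count from the quoted
# 'devlink dev param show' output; assumes that exact layout.
out='pci/0000:4b:00.0:
  name tx_scheduling_layers type driver-specific
    values:
      cmode permanent value 9'
# On a live system this would be: devlink dev param show ... | awk ...
layers=$(printf '%s\n' "$out" | awk '/cmode permanent value/ {print $4}')
echo "$layers"   # -> 9
```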
> 
> This is a kind of proprietary param, similar to a number that were shot

Not sure if this is the same kind of param, but it is for sure a
proprietary one.

> down for mlx5 in the past. Jakub?

I'm not that familiar with the history around mlx5, but this case is
somewhat different, at least to me: we have a performance fix for the
tree inside the FW/HW, while you (IIRC) were about to introduce some
nice and general abstraction layer that could be used by other HW
vendors too, but instead it ended up mlx-only.

> 
> Also, given this is apparently nvconfig configuration, it would
> probably be more suitable to use some provisioning tool.

TBH, we will want to add some other NVM-related params, but that does
not justify yet another tool to configure the PF. (And then there would
be a big debate about whether FW update should be moved there too, for
consistency.)

> This is related to the mlx5 misc driver.
> 
> Until we figure out the plan, this has my nack:
> 
> NAcked-by: Jiri Pirko <jiri@nvidia.com>

IMO this is an easy case, but I would like to hear from the netdev
maintainers.




Thread overview: 46+ messages
2024-02-19 10:05 [Intel-wired-lan] [PATCH iwl-next v4 0/5] ice: Support 5 layer Tx scheduler topology Mateusz Polchlopek
2024-02-19 10:05 ` [Intel-wired-lan] [PATCH iwl-next v1 1/5] ice: Support 5 layer topology Mateusz Polchlopek
2024-02-19 10:16   ` Mateusz Polchlopek
2024-02-19 10:05 ` [Intel-wired-lan] [PATCH iwl-next v4 2/5] ice: Adjust the VSI/Aggregator layers Mateusz Polchlopek
2024-02-19 10:05 ` [Intel-wired-lan] [PATCH iwl-next v4 3/5] ice: Enable switching default Tx scheduler topology Mateusz Polchlopek
2024-02-19 10:05 ` [Intel-wired-lan] [PATCH iwl-next v4 4/5] ice: Add tx_scheduling_layers devlink param Mateusz Polchlopek
2024-02-19 12:37   ` Jiri Pirko
2024-02-19 13:33     ` Przemek Kitszel [this message]
2024-02-19 17:15       ` Jiri Pirko
2024-02-21 23:38     ` Jakub Kicinski
2024-02-22 13:25       ` Mateusz Polchlopek
2024-02-22 23:07         ` Jakub Kicinski
2024-02-23  9:45           ` Jiri Pirko
2024-02-23 14:27             ` Jakub Kicinski
2024-02-25  7:18               ` Jiri Pirko
2024-02-27  2:37                 ` Jakub Kicinski
2024-02-27 12:17                   ` Jiri Pirko
2024-02-27 13:05                     ` Przemek Kitszel
2024-02-27 15:39                       ` Jiri Pirko
2024-02-27 15:41                       ` Andrew Lunn
2024-02-27 16:04                         ` Jiri Pirko
2024-02-27 20:38                           ` Andrew Lunn
2024-02-19 10:05 ` [Intel-wired-lan] [PATCH iwl-next v4 5/5] ice: Document tx_scheduling_layers parameter Mateusz Polchlopek
