From: Przemek Kitszel <przemyslaw.kitszel@intel.com>
To: Michal Wilczynski <michal.wilczynski@intel.com>
Cc: Przemek Kitszel <przemyslaw.kitszel@intel.com>,
	netdev@vger.kernel.org, alexandr.lobakin@intel.com,
	dchumak@nvidia.com, maximmi@nvidia.com, jiri@resnulli.us,
	simon.horman@corigine.com, jacob.e.keller@intel.com,
	jesse.brandeburg@intel.com
Subject: Re: [RFC PATCH net-next v4 4/6] ice: Implement devlink-rate API
Date: Thu, 22 Sep 2022 15:08:59 +0200	[thread overview]
Message-ID: <20220922130859.337985-1-przemyslaw.kitszel@intel.com>
In-Reply-To: <20220915134239.1935604-5-michal.wilczynski@intel.com>

From:   Michal Wilczynski <michal.wilczynski@intel.com>
Date:   Thu, 15 Sep 2022 15:42:37 +0200

> There is a need to support modification of the Tx scheduler topology in
> the ice driver. This will allow the user to control the Tx settings of
> each node in the internal hierarchy of nodes. A number of parameters
> are supported per node: tx_max, tx_share, tx_priority and tx_weight.
> 
> Signed-off-by: Michal Wilczynski <michal.wilczynski@intel.com>
> ---
>  drivers/net/ethernet/intel/ice/ice_devlink.c | 511 +++++++++++++++++++
>  drivers/net/ethernet/intel/ice/ice_devlink.h |   2 +
>  2 files changed, 513 insertions(+)
> 
> diff --git a/drivers/net/ethernet/intel/ice/ice_devlink.c b/drivers/net/ethernet/intel/ice/ice_devlink.c
> index e6ec20079ced..925283605b59 100644
> --- a/drivers/net/ethernet/intel/ice/ice_devlink.c
> +++ b/drivers/net/ethernet/intel/ice/ice_devlink.c
> @@ -713,6 +713,490 @@ ice_devlink_port_unsplit(struct devlink *devlink, struct devlink_port *port,
>  	return ice_devlink_port_split(devlink, port, 1, extack);
>  }
>  
> +/**
> + * ice_traverse_tx_tree - traverse Tx scheduler tree
> + * @devlink: devlink struct
> + * @node: current node, used for recursion
> + * @tc_node: tc_node struct, that is treated as a root
> + * @pf: pf struct
> + *
> + * This function traverses the Tx scheduler tree and exports the
> + * entire structure to devlink-rate.
> + */
> +static void ice_traverse_tx_tree(struct devlink *devlink, struct ice_sched_node *node,
> +				 struct ice_sched_node *tc_node, struct ice_pf *pf)
> +{
> +	struct ice_vf *vf;
> +	int i;
> +
> +	devl_lock(devlink);
> +
> +	if (node->parent == tc_node) {
> +		/* create root node */
> +		devl_rate_node_create(devlink, node, node->name, NULL);
> +	} else if (node->info.data.elem_type == ICE_AQC_ELEM_TYPE_LEAF &&
> +		   node->parent->name) {
> +		devl_rate_queue_create(devlink, node->parent->name, node->tx_queue_id, node);
> +	} else if (node->vsi_handle &&
> +		   pf->vsi[node->vsi_handle]->vf) {
> +		vf = pf->vsi[node->vsi_handle]->vf;
> +		snprintf(node->name, DEVLINK_RATE_NAME_MAX_LEN, "vport_%u", vf->devlink_port.index);
> +		if (!vf->devlink_port.devlink_rate)
> +			devl_rate_vport_create(&vf->devlink_port, node, node->parent->name);
> +	} else {
> +		devl_rate_node_create(devlink, node, node->name, node->parent->name);
> +	}
> +
> +	devl_unlock(devlink);

I would move the devlink locking into ice_devlink_rate_init_tx_topology(),
so the lock is taken only once for the whole tree walk.
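
E.g., a rough sketch of what I mean (untested; assumes the matching
devl_lock()/devl_unlock() pair is dropped from ice_traverse_tx_tree()):

	int ice_devlink_rate_init_tx_topology(struct devlink *devlink,
					      struct ice_vsi *vsi)
	{
		struct ice_port_info *pi = vsi->port_info;
		struct ice_sched_node *tc_node;
		struct ice_pf *pf = vsi->back;
		int i;

		tc_node = pi->root->children[0];
		mutex_lock(&pi->sched_lock);
		/* take the devlink lock once for the whole tree walk */
		devl_lock(devlink);
		for (i = 0; i < tc_node->num_children; i++)
			ice_traverse_tx_tree(devlink, tc_node->children[i],
					     tc_node, pf);
		devl_unlock(devlink);
		mutex_unlock(&pi->sched_lock);

		return 0;
	}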

> +
> +	for (i = 0; i < node->num_children; i++)
> +		ice_traverse_tx_tree(devlink, node->children[i], tc_node, pf);
> +}
> +
> +/**
> + * ice_devlink_rate_init_tx_topology - export Tx scheduler tree to devlink rate
> + * @devlink: devlink struct
> + * @vsi: main vsi struct
> + *
> + * This function finds a root node, then calls ice_traverse_tx_tree(),
> + * which traverses the tree and exports its contents to devlink-rate.
> + */
> +int ice_devlink_rate_init_tx_topology(struct devlink *devlink, struct ice_vsi *vsi)
> +{
> +	struct ice_port_info *pi = vsi->port_info;
> +	struct ice_sched_node *tc_node;
> +	struct ice_pf *pf = vsi->back;
> +	int i;
> +
> +	tc_node = pi->root->children[0];
> +	mutex_lock(&pi->sched_lock);
> +	for (i = 0; i < tc_node->num_children; i++)
> +		ice_traverse_tx_tree(devlink, tc_node->children[i], tc_node, pf);
> +	mutex_unlock(&pi->sched_lock);
> +
> +	return 0;
> +}

// snip

>  static int
> @@ -893,6 +1399,9 @@ void ice_devlink_register(struct ice_pf *pf)
>   */
>  void ice_devlink_unregister(struct ice_pf *pf)
>  {
> +	devl_lock(priv_to_devlink(pf));
> +	devl_rate_objects_destroy(priv_to_devlink(pf));
> +	devl_unlock(priv_to_devlink(pf));
>  	devlink_unregister(priv_to_devlink(pf));
>  }
>

Maybe it's now worth introducing a local variable for priv_to_devlink(pf)?
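
Something like (untested):

	void ice_devlink_unregister(struct ice_pf *pf)
	{
		struct devlink *devlink = priv_to_devlink(pf);

		devl_lock(devlink);
		devl_rate_objects_destroy(devlink);
		devl_unlock(devlink);
		devlink_unregister(devlink);
	}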

[...]

--Przemek

Thread overview: 35+ messages
2022-09-15 13:42 [RFC PATCH net-next v4 0/6] Implement devlink-rate API and extend it Michal Wilczynski
2022-09-15 13:42 ` [RFC PATCH net-next v4 1/6] ice: Add function for move/reconfigure TxQ AQ command Michal Wilczynski
2022-09-15 13:42 ` [RFC PATCH net-next v4 2/6] devlink: Extend devlink-rate api with queues and new parameters Michal Wilczynski
2022-09-15 15:31   ` Edward Cree
2022-09-15 18:41     ` Wilczynski, Michal
2022-09-15 21:01       ` Edward Cree
2022-09-19 13:12         ` Wilczynski, Michal
2022-09-20 11:09           ` Edward Cree
2022-09-26 11:58             ` Jiri Pirko
2022-09-28 11:53               ` Wilczynski, Michal
2022-09-29  7:08                 ` Jiri Pirko
2022-09-21 23:33       ` Jakub Kicinski
2022-09-22 11:44         ` Wilczynski, Michal
2022-09-22 12:50           ` Jakub Kicinski
2022-09-22 13:45             ` Wilczynski, Michal
2022-09-22 20:29               ` Jakub Kicinski
2022-09-23 12:11                 ` Wilczynski, Michal
2022-09-23 13:16                   ` Jakub Kicinski
2022-09-23 15:46                     ` Wilczynski, Michal
2022-09-27  0:16                       ` Jakub Kicinski
2022-09-28 12:02                         ` Wilczynski, Michal
2022-09-28 17:39                           ` Jakub Kicinski
2022-09-26 11:51       ` Jiri Pirko
2022-09-28 11:47         ` Wilczynski, Michal
2022-09-29  7:12           ` Jiri Pirko
2022-10-11 13:28             ` Wilczynski, Michal
2022-10-11 14:17               ` Jiri Pirko
2022-09-15 13:42 ` [RFC PATCH net-next v4 3/6] ice: Introduce new parameters in ice_sched_node Michal Wilczynski
2022-09-15 13:42 ` [RFC PATCH net-next v4 4/6] ice: Implement devlink-rate API Michal Wilczynski
2022-09-22 13:08   ` Przemek Kitszel [this message]
2022-09-15 13:42 ` [RFC PATCH net-next v4 5/6] ice: Export Tx scheduler configuration to devlink-rate Michal Wilczynski
2022-09-15 13:42 ` [RFC PATCH net-next v4 6/6] ice: Prevent ADQ, DCB and RDMA coexistence with Custom Tx scheduler Michal Wilczynski
2022-09-15 13:57 ` [RFC PATCH net-next v4 0/6] Implement devlink-rate API and extend it Wilczynski, Michal
2022-09-19  7:22 [RFC PATCH net-next v4 2/6] devlink: Extend devlink-rate api with queues and new parameters kernel test robot
2022-09-19  9:22 ` Dan Carpenter
