From: Mateusz Polchlopek <mateusz.polchlopek@intel.com>
To: intel-wired-lan@lists.osuosl.org
Cc: andrew@lunn.ch, jiri@resnulli.us, michal.wilczynski@intel.com,
	Mateusz Polchlopek <mateusz.polchlopek@intel.com>,
	netdev@vger.kernel.org, lukasz.czapnik@intel.com, victor.raj@intel.com,
	anthony.l.nguyen@intel.com, horms@kernel.org,
	przemyslaw.kitszel@intel.com, kuba@kernel.org
Subject: [Intel-wired-lan] [PATCH net-next v8 3/6] ice: Adjust the VSI/Aggregator layers
Date: Tue, 26 Mar 2024 10:30:39 -0400
Message-ID: <20240326143042.9240-4-mateusz.polchlopek@intel.com>
In-Reply-To: <20240326143042.9240-1-mateusz.polchlopek@intel.com>

From: Raj Victor <victor.raj@intel.com>

Adjust the VSI/Aggregator layers based on the number of logical layers
supported by the FW. Currently the VSI and Aggregator layers are fixed
based on the 9-layer scheduler tree layout.

For performance reasons, the number of layers of the scheduler tree is
changing from 9 to 5. This requires a readjustment of the
VSI/Aggregator layer values.

Signed-off-by: Raj Victor <victor.raj@intel.com>
Co-developed-by: Michal Wilczynski <michal.wilczynski@intel.com>
Signed-off-by: Michal Wilczynski <michal.wilczynski@intel.com>
Signed-off-by: Mateusz Polchlopek <mateusz.polchlopek@intel.com>
---
 drivers/net/ethernet/intel/ice/ice_sched.c | 37 +++++++++++-----------
 1 file changed, 19 insertions(+), 18 deletions(-)

diff --git a/drivers/net/ethernet/intel/ice/ice_sched.c b/drivers/net/ethernet/intel/ice/ice_sched.c
index d174a4eeb899..1ac3686328ae 100644
--- a/drivers/net/ethernet/intel/ice/ice_sched.c
+++ b/drivers/net/ethernet/intel/ice/ice_sched.c
@@ -1128,12 +1128,11 @@ u8 ice_sched_get_vsi_layer(struct ice_hw *hw)
 	 * 5 or less sw_entry_point_layer
 	 */
 	/* calculate the VSI layer based on number of layers. */
-	if (hw->num_tx_sched_layers > ICE_VSI_LAYER_OFFSET + 1) {
-		u8 layer = hw->num_tx_sched_layers - ICE_VSI_LAYER_OFFSET;
-
-		if (layer > hw->sw_entry_point_layer)
-			return layer;
-	}
+	if (hw->num_tx_sched_layers == ICE_SCHED_9_LAYERS)
+		return hw->num_tx_sched_layers - ICE_VSI_LAYER_OFFSET;
+	else if (hw->num_tx_sched_layers == ICE_SCHED_5_LAYERS)
+		/* qgroup and VSI layers are same */
+		return hw->num_tx_sched_layers - ICE_QGRP_LAYER_OFFSET;
 	return hw->sw_entry_point_layer;
 }
 
@@ -1150,13 +1149,10 @@ u8 ice_sched_get_agg_layer(struct ice_hw *hw)
 	 * 7 or less sw_entry_point_layer
 	 */
 	/* calculate the aggregator layer based on number of layers. */
-	if (hw->num_tx_sched_layers > ICE_AGG_LAYER_OFFSET + 1) {
-		u8 layer = hw->num_tx_sched_layers - ICE_AGG_LAYER_OFFSET;
-
-		if (layer > hw->sw_entry_point_layer)
-			return layer;
-	}
-	return hw->sw_entry_point_layer;
+	if (hw->num_tx_sched_layers == ICE_SCHED_9_LAYERS)
+		return hw->num_tx_sched_layers - ICE_AGG_LAYER_OFFSET;
+	else
+		return hw->sw_entry_point_layer;
 }
 
 /**
@@ -1510,10 +1506,11 @@ ice_sched_get_free_qparent(struct ice_port_info *pi, u16 vsi_handle, u8 tc,
 {
 	struct ice_sched_node *vsi_node, *qgrp_node;
 	struct ice_vsi_ctx *vsi_ctx;
+	u8 qgrp_layer, vsi_layer;
 	u16 max_children;
-	u8 qgrp_layer;
 
 	qgrp_layer = ice_sched_get_qgrp_layer(pi->hw);
+	vsi_layer = ice_sched_get_vsi_layer(pi->hw);
 	max_children = pi->hw->max_children[qgrp_layer];
 
 	vsi_ctx = ice_get_vsi_ctx(pi->hw, vsi_handle);
@@ -1524,6 +1521,12 @@ ice_sched_get_free_qparent(struct ice_port_info *pi, u16 vsi_handle, u8 tc,
 	if (!vsi_node)
 		return NULL;
 
+	/* If the queue group and VSI layer are same then queues
+	 * are all attached directly to VSI
+	 */
+	if (qgrp_layer == vsi_layer)
+		return vsi_node;
+
 	/* get the first queue group node from VSI sub-tree */
 	qgrp_node = ice_sched_get_first_node(pi, vsi_node, qgrp_layer);
 	while (qgrp_node) {
@@ -3199,7 +3202,7 @@ ice_sched_add_rl_profile(struct ice_port_info *pi,
 	u8 profile_type;
 	int status;
 
-	if (layer_num >= ICE_AQC_TOPO_MAX_LEVEL_NUM)
+	if (!pi || layer_num >= pi->hw->num_tx_sched_layers)
 		return NULL;
 	switch (rl_type) {
 	case ICE_MIN_BW:
@@ -3215,8 +3218,6 @@ ice_sched_add_rl_profile(struct ice_port_info *pi,
 		return NULL;
 	}
 
-	if (!pi)
-		return NULL;
 	hw = pi->hw;
 	list_for_each_entry(rl_prof_elem, &pi->rl_prof_list[layer_num],
 			    list_entry)
@@ -3446,7 +3447,7 @@ ice_sched_rm_rl_profile(struct ice_port_info *pi, u8 layer_num, u8 profile_type,
 	struct ice_aqc_rl_profile_info *rl_prof_elem;
 	int status = 0;
 
-	if (layer_num >= ICE_AQC_TOPO_MAX_LEVEL_NUM)
+	if (layer_num >= pi->hw->num_tx_sched_layers)
 		return -EINVAL;
 	/* Check the existing list for RL profile */
 	list_for_each_entry(rl_prof_elem, &pi->rl_prof_list[layer_num],
-- 
2.38.1
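For context, the layer arithmetic in the patched ice_sched_get_vsi_layer()
and ice_sched_get_agg_layer() can be checked in isolation. The sketch below
mirrors that logic as standalone user-space C; the offset constants
(ICE_QGRP_LAYER_OFFSET = 2, ICE_VSI_LAYER_OFFSET = 3, ICE_AGG_LAYER_OFFSET = 5)
and the sw_entry_point_layer value of 1 are assumptions taken for
illustration and should be verified against ice_sched.h in the tree the
patch applies to.

/* Minimal sketch of the layer math this patch introduces.
 * The ICE_* values below are assumed, not quoted from this series.
 */
#include <stdio.h>

#define ICE_SCHED_9_LAYERS	9
#define ICE_SCHED_5_LAYERS	5
#define ICE_QGRP_LAYER_OFFSET	2
#define ICE_VSI_LAYER_OFFSET	3
#define ICE_AGG_LAYER_OFFSET	5

/* mirrors the patched ice_sched_get_vsi_layer() */
static unsigned int vsi_layer(unsigned int num_layers, unsigned int sw_entry)
{
	if (num_layers == ICE_SCHED_9_LAYERS)
		return num_layers - ICE_VSI_LAYER_OFFSET;	/* 9 - 3 = 6 */
	if (num_layers == ICE_SCHED_5_LAYERS)
		/* queue group and VSI layers are the same */
		return num_layers - ICE_QGRP_LAYER_OFFSET;	/* 5 - 2 = 3 */
	return sw_entry;
}

/* mirrors the patched ice_sched_get_agg_layer() */
static unsigned int agg_layer(unsigned int num_layers, unsigned int sw_entry)
{
	if (num_layers == ICE_SCHED_9_LAYERS)
		return num_layers - ICE_AGG_LAYER_OFFSET;	/* 9 - 5 = 4 */
	return sw_entry;
}

int main(void)
{
	/* sw_entry_point_layer of 1 is a hypothetical example value */
	printf("9-layer tree: VSI layer %u, aggregator layer %u\n",
	       vsi_layer(ICE_SCHED_9_LAYERS, 1),
	       agg_layer(ICE_SCHED_9_LAYERS, 1));
	printf("5-layer tree: VSI layer %u, aggregator layer %u\n",
	       vsi_layer(ICE_SCHED_5_LAYERS, 1),
	       agg_layer(ICE_SCHED_5_LAYERS, 1));
	return 0;
}

Under these assumptions the 9-layer tree keeps VSI nodes at layer 6 and
aggregators at layer 4, while on the 5-layer tree the VSI layer collapses
onto the queue-group layer. That collapse is why the patched
ice_sched_get_free_qparent() can return vsi_node directly when
qgrp_layer == vsi_layer: queues attach straight to the VSI node.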