* [Intel-wired-lan] [PATCH net-next v6 0/4] ice: Support 5 layer tx scheduler topology
@ 2022-07-20 14:40 Michal Wilczynski
2022-07-20 14:40 ` [Intel-wired-lan] [PATCH net-next v6 1/4] ice: Support 5 layer topology Michal Wilczynski
` (4 more replies)
0 siblings, 5 replies; 12+ messages in thread
From: Michal Wilczynski @ 2022-07-20 14:40 UTC (permalink / raw)
To: intel-wired-lan; +Cc: Michal Wilczynski
For performance reasons there is a need to support a selectable
tx scheduler topology. Currently the firmware supports only the default
9-layer and the 5-layer topology. This patch series enables switching from
the default to the 5-layer topology if the user decides to opt in.
Lukasz Czapnik (1):
ice: Add txbalancing devlink param
v6: Added this commit
Michal Wilczynski (1):
ice: Enable switching default tx scheduler topology
v2:
- Moved definitions of scheduling layers to other commit
v3:
- Removed wrong tags
- Added blank line
- Indented a comment
- Moved parts of code to separate commit
- Removed unnecessary initializations
- Change from unnecessary devm_kmemdup to kmemdup
v5:
- Changed freeing to kfree
- Changed commit message to clarify that parameter change is not yet upstream
Raj Victor (2):
ice: Support 5 layer topology
v2:
- Added example of performance decrease in commit message
- Reworded commit message for imperative mood
- Removed unnecessary tags
- Refactored duplicated function call
- Fixed RCT
- Fixed unnecessary call to devm_kfree
- Defined constants
v3:
- Changed title
- Changes in commit description, added versions of DDP
and firmware, also added test methodology
- Removed unnecessary defines
- Added a newline for define separation
- Did s/tx/Tx in comments
- Removed newline between error check
ice: Adjust the VSI/Aggregator layers
v3:
- Added this commit
- Removed unnecessary 'else'
.../net/ethernet/intel/ice/ice_adminq_cmd.h | 31 +++
drivers/net/ethernet/intel/ice/ice_common.c | 5 +
drivers/net/ethernet/intel/ice/ice_devlink.c | 159 ++++++++++++++
.../net/ethernet/intel/ice/ice_flex_pipe.c | 202 ++++++++++++++++++
.../net/ethernet/intel/ice/ice_flex_type.h | 17 +-
.../net/ethernet/intel/ice/ice_fw_update.c | 7 +-
.../net/ethernet/intel/ice/ice_fw_update.h | 3 +
drivers/net/ethernet/intel/ice/ice_main.c | 113 ++++++++--
drivers/net/ethernet/intel/ice/ice_nvm.c | 2 +-
drivers/net/ethernet/intel/ice/ice_nvm.h | 4 +
drivers/net/ethernet/intel/ice/ice_sched.c | 35 +--
drivers/net/ethernet/intel/ice/ice_sched.h | 3 +
drivers/net/ethernet/intel/ice/ice_type.h | 1 +
13 files changed, 539 insertions(+), 43 deletions(-)
--
2.27.0
_______________________________________________
Intel-wired-lan mailing list
Intel-wired-lan@osuosl.org
https://lists.osuosl.org/mailman/listinfo/intel-wired-lan
^ permalink raw reply [flat|nested] 12+ messages in thread
* [Intel-wired-lan] [PATCH net-next v6 1/4] ice: Support 5 layer topology
2022-07-20 14:40 [Intel-wired-lan] [PATCH net-next v6 0/4] ice: Support 5 layer tx scheduler topology Michal Wilczynski
@ 2022-07-20 14:40 ` Michal Wilczynski
2022-07-20 14:40 ` [Intel-wired-lan] [PATCH net-next v6 2/4] ice: Adjust the VSI/Aggregator layers Michal Wilczynski
` (3 subsequent siblings)
4 siblings, 0 replies; 12+ messages in thread
From: Michal Wilczynski @ 2022-07-20 14:40 UTC (permalink / raw)
To: intel-wired-lan; +Cc: Raj Victor, Michal Wilczynski
From: Raj Victor <victor.raj@intel.com>
There is a performance issue reported when the number of VSIs is not a
multiple of 8. This is caused by the limit of 8 children per
node in the 9-layer topology. The BW credits are shared evenly among the
children by default. Assume one node has 8 children and another has 1.
The parent of these nodes shares the BW credits equally between them.
This penalizes the first node, which has 8 children:
the 9th VM gets more BW credits than each of the first 8 VMs.
Example:
1) With 8 VM's:
for x in 0 1 2 3 4 5 6 7;
do taskset -c ${x} netperf -P0 -H 172.68.169.125 & sleep .1 ; done
tx_queue_0_packets: 23283027
tx_queue_1_packets: 23292289
tx_queue_2_packets: 23276136
tx_queue_3_packets: 23279828
tx_queue_4_packets: 23279828
tx_queue_5_packets: 23279333
tx_queue_6_packets: 23277745
tx_queue_7_packets: 23279950
tx_queue_8_packets: 0
2) With 9 VM's:
for x in 0 1 2 3 4 5 6 7 8;
do taskset -c ${x} netperf -P0 -H 172.68.169.125 & sleep .1 ; done
tx_queue_0_packets: 24163396
tx_queue_1_packets: 24164623
tx_queue_2_packets: 24163188
tx_queue_3_packets: 24163701
tx_queue_4_packets: 24163683
tx_queue_5_packets: 24164668
tx_queue_6_packets: 23327200
tx_queue_7_packets: 24163853
tx_queue_8_packets: 91101417
On average, the queue 8 statistics show that 3.7 times more packets were
sent there than to each of the other queues.
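The imbalance follows directly from equal-split arithmetic. A minimal userspace sketch (illustrative only, not driver code; the function name is made up for this example):

```c
#include <assert.h>

/* Share of the total BW one leaf (VM queue) receives when the parent
 * splits credits equally among `nodes` intermediate nodes and each
 * node splits equally among its `children` leaves. */
static double vm_share(int nodes, int children)
{
	return 1.0 / (double)nodes / (double)children;
}
```

With two intermediate nodes, one holding 8 VMs and one holding 1, each of the first 8 VMs gets 1/16 of the credits while the 9th gets 1/2 - an 8x skew in this idealized model; the measured skew above is a milder 3.7x because other scheduler effects smooth it out.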
Starting with version 3.20, the FW has increased the max number of
children per node by reducing the number of layers from 9 to 5. Reflect
this on the driver side.
Signed-off-by: Raj Victor <victor.raj@intel.com>
Co-developed-by: Michal Wilczynski <michal.wilczynski@intel.com>
Signed-off-by: Michal Wilczynski <michal.wilczynski@intel.com>
Reviewed-by: Maciej Fijalkowski <maciej.fijalkowski@intel.com>
---
.../net/ethernet/intel/ice/ice_adminq_cmd.h | 22 ++
drivers/net/ethernet/intel/ice/ice_common.c | 3 +
.../net/ethernet/intel/ice/ice_flex_pipe.c | 201 ++++++++++++++++++
.../net/ethernet/intel/ice/ice_flex_type.h | 17 +-
drivers/net/ethernet/intel/ice/ice_sched.h | 3 +
drivers/net/ethernet/intel/ice/ice_type.h | 1 +
6 files changed, 245 insertions(+), 2 deletions(-)
diff --git a/drivers/net/ethernet/intel/ice/ice_adminq_cmd.h b/drivers/net/ethernet/intel/ice/ice_adminq_cmd.h
index 05cb9dd7035a..fe50309c5d1c 100644
--- a/drivers/net/ethernet/intel/ice/ice_adminq_cmd.h
+++ b/drivers/net/ethernet/intel/ice/ice_adminq_cmd.h
@@ -120,6 +120,7 @@ struct ice_aqc_list_caps_elem {
#define ICE_AQC_CAPS_PCIE_RESET_AVOIDANCE 0x0076
#define ICE_AQC_CAPS_POST_UPDATE_RESET_RESTRICT 0x0077
#define ICE_AQC_CAPS_NVM_MGMT 0x0080
+#define ICE_AQC_CAPS_TX_SCHED_TOPO_COMP_MODE 0x0085
u8 major_ver;
u8 minor_ver;
@@ -798,6 +799,23 @@ struct ice_aqc_get_topo {
__le32 addr_low;
};
+/* Get/Set Tx Topology (indirect 0x0418/0x0417) */
+struct ice_aqc_get_set_tx_topo {
+ u8 set_flags;
+#define ICE_AQC_TX_TOPO_FLAGS_CORRER BIT(0)
+#define ICE_AQC_TX_TOPO_FLAGS_SRC_RAM BIT(1)
+#define ICE_AQC_TX_TOPO_FLAGS_LOAD_NEW BIT(4)
+#define ICE_AQC_TX_TOPO_FLAGS_ISSUED BIT(5)
+
+ u8 get_flags;
+#define ICE_AQC_TX_TOPO_GET_RAM 2
+
+ __le16 reserved1;
+ __le32 reserved2;
+ __le32 addr_high;
+ __le32 addr_low;
+};
+
/* Update TSE (indirect 0x0403)
* Get TSE (indirect 0x0404)
* Add TSE (indirect 0x0401)
@@ -2126,6 +2144,7 @@ struct ice_aq_desc {
struct ice_aqc_get_link_topo get_link_topo;
struct ice_aqc_i2c read_i2c;
struct ice_aqc_read_i2c_resp read_i2c_resp;
+ struct ice_aqc_get_set_tx_topo get_set_tx_topo;
} params;
};
@@ -2231,6 +2250,9 @@ enum ice_adminq_opc {
ice_aqc_opc_query_sched_res = 0x0412,
ice_aqc_opc_remove_rl_profiles = 0x0415,
+ ice_aqc_opc_set_tx_topo = 0x0417,
+ ice_aqc_opc_get_tx_topo = 0x0418,
+
/* PHY commands */
ice_aqc_opc_get_phy_caps = 0x0600,
ice_aqc_opc_set_phy_cfg = 0x0601,
diff --git a/drivers/net/ethernet/intel/ice/ice_common.c b/drivers/net/ethernet/intel/ice/ice_common.c
index 9619bdb9e49a..8b65e2bfb160 100644
--- a/drivers/net/ethernet/intel/ice/ice_common.c
+++ b/drivers/net/ethernet/intel/ice/ice_common.c
@@ -2091,6 +2091,9 @@ ice_parse_common_caps(struct ice_hw *hw, struct ice_hw_common_caps *caps,
"%s: reset_restrict_support = %d\n", prefix,
caps->reset_restrict_support);
break;
+ case ICE_AQC_CAPS_TX_SCHED_TOPO_COMP_MODE:
+ caps->tx_sched_topo_comp_mode_en = (number == 1);
+ break;
default:
/* Not one of the recognized common capabilities */
found = false;
diff --git a/drivers/net/ethernet/intel/ice/ice_flex_pipe.c b/drivers/net/ethernet/intel/ice/ice_flex_pipe.c
index ada5198b5b16..7c82f05621e3 100644
--- a/drivers/net/ethernet/intel/ice/ice_flex_pipe.c
+++ b/drivers/net/ethernet/intel/ice/ice_flex_pipe.c
@@ -4,6 +4,7 @@
#include "ice_common.h"
#include "ice_flex_pipe.h"
#include "ice_flow.h"
+#include "ice_sched.h"
#include "ice.h"
/* For supporting double VLAN mode, it is necessary to enable or disable certain
@@ -1783,6 +1784,206 @@ bool ice_is_init_pkg_successful(enum ice_ddp_state state)
}
}
+/**
+ * ice_get_set_tx_topo - get or set Tx topology
+ * @hw: pointer to the HW struct
+ * @buf: pointer to Tx topology buffer
+ * @buf_size: buffer size
+ * @cd: pointer to command details structure or NULL
+ * @flags: pointer to descriptor flags
+ * @set: 0-get, 1-set topology
+ *
+ * The function will get or set Tx topology
+ */
+static int
+ice_get_set_tx_topo(struct ice_hw *hw, u8 *buf, u16 buf_size,
+ struct ice_sq_cd *cd, u8 *flags, bool set)
+{
+ struct ice_aqc_get_set_tx_topo *cmd;
+ struct ice_aq_desc desc;
+ int status;
+
+ cmd = &desc.params.get_set_tx_topo;
+ if (set) {
+ cmd->set_flags = ICE_AQC_TX_TOPO_FLAGS_ISSUED;
+ /* requested to update a new topology, not a default topology */
+ if (buf)
+ cmd->set_flags |= ICE_AQC_TX_TOPO_FLAGS_SRC_RAM |
+ ICE_AQC_TX_TOPO_FLAGS_LOAD_NEW;
+ } else {
+ cmd->get_flags = ICE_AQC_TX_TOPO_GET_RAM;
+ }
+ ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_get_tx_topo);
+ desc.flags |= cpu_to_le16(ICE_AQ_FLAG_RD);
+ status = ice_aq_send_cmd(hw, &desc, buf, buf_size, cd);
+ if (status)
+ return status;
+ /* read the return flag values (first byte) for get operation */
+ if (!set && flags)
+ *flags = desc.params.get_set_tx_topo.set_flags;
+
+ return 0;
+}
+
+/**
+ * ice_cfg_tx_topo - Initialize new Tx topology if available
+ * @hw: pointer to the HW struct
+ * @buf: pointer to Tx topology buffer
+ * @len: buffer size
+ *
+ * The function will apply the new Tx topology from the package buffer
+ * if available.
+ */
+int ice_cfg_tx_topo(struct ice_hw *hw, u8 *buf, u32 len)
+{
+ u8 *current_topo, *new_topo = NULL;
+ struct ice_run_time_cfg_seg *seg;
+ struct ice_buf_hdr *section;
+ struct ice_pkg_hdr *pkg_hdr;
+ enum ice_ddp_state state;
+ u16 i, size = 0, offset;
+ u32 reg = 0;
+ int status;
+ u8 flags;
+
+ if (!buf || !len)
+ return -EINVAL;
+
+ /* Does FW support new Tx topology mode ? */
+ if (!hw->func_caps.common_cap.tx_sched_topo_comp_mode_en) {
+ ice_debug(hw, ICE_DBG_INIT, "FW doesn't support compatibility mode\n");
+ return -EOPNOTSUPP;
+ }
+
+ current_topo = kzalloc(ICE_AQ_MAX_BUF_LEN, GFP_KERNEL);
+ if (!current_topo)
+ return -ENOMEM;
+
+ /* get the current Tx topology */
+ status = ice_get_set_tx_topo(hw, current_topo, ICE_AQ_MAX_BUF_LEN, NULL,
+ &flags, false);
+
+ kfree(current_topo);
+
+ if (status) {
+ ice_debug(hw, ICE_DBG_INIT, "Get current topology is failed\n");
+ return status;
+ }
+
+ /* Is default topology already applied ? */
+ if (!(flags & ICE_AQC_TX_TOPO_FLAGS_LOAD_NEW) &&
+ hw->num_tx_sched_layers == ICE_SCHED_9_LAYERS) {
+ ice_debug(hw, ICE_DBG_INIT, "Loaded default topology\n");
+ /* Already default topology is loaded */
+ return -EEXIST;
+ }
+
+ /* Is new topology already applied ? */
+ if ((flags & ICE_AQC_TX_TOPO_FLAGS_LOAD_NEW) &&
+ hw->num_tx_sched_layers == ICE_SCHED_5_LAYERS) {
+ ice_debug(hw, ICE_DBG_INIT, "Loaded new topology\n");
+ /* Already new topology is loaded */
+ return -EEXIST;
+ }
+
+ /* Is set topology issued already ? */
+ if (flags & ICE_AQC_TX_TOPO_FLAGS_ISSUED) {
+ ice_debug(hw, ICE_DBG_INIT, "Update Tx topology was done by another PF\n");
+ /* add a small delay before exiting */
+ for (i = 0; i < 20; i++)
+ msleep(100);
+ return -EEXIST;
+ }
+
+ /* Change the topology from new to default (5 to 9) */
+ if (!(flags & ICE_AQC_TX_TOPO_FLAGS_LOAD_NEW) &&
+ hw->num_tx_sched_layers == ICE_SCHED_5_LAYERS) {
+ ice_debug(hw, ICE_DBG_INIT, "Change topology from 5 to 9 layers\n");
+ goto update_topo;
+ }
+
+ pkg_hdr = (struct ice_pkg_hdr *)buf;
+ state = ice_verify_pkg(pkg_hdr, len);
+ if (state) {
+ ice_debug(hw, ICE_DBG_INIT, "failed to verify pkg (err: %d)\n",
+ state);
+ return -EIO;
+ }
+
+ /* find run time configuration segment */
+ seg = (struct ice_run_time_cfg_seg *)
+ ice_find_seg_in_pkg(hw, SEGMENT_TYPE_ICE_RUN_TIME_CFG, pkg_hdr);
+ if (!seg) {
+ ice_debug(hw, ICE_DBG_INIT, "5 layer topology segment is missing\n");
+ return -EIO;
+ }
+
+ if (le32_to_cpu(seg->buf_table.buf_count) < ICE_MIN_S_COUNT) {
+ ice_debug(hw, ICE_DBG_INIT, "5 layer topology segment count(%d) is wrong\n",
+ seg->buf_table.buf_count);
+ return -EIO;
+ }
+
+ section = ice_pkg_val_buf(seg->buf_table.buf_array);
+
+ if (!section || le32_to_cpu(section->section_entry[0].type) !=
+ ICE_SID_TX_5_LAYER_TOPO) {
+ ice_debug(hw, ICE_DBG_INIT, "5 layer topology section type is wrong\n");
+ return -EIO;
+ }
+
+ size = le16_to_cpu(section->section_entry[0].size);
+ offset = le16_to_cpu(section->section_entry[0].offset);
+ if (size < ICE_MIN_S_SZ || size > ICE_MAX_S_SZ) {
+ ice_debug(hw, ICE_DBG_INIT, "5 layer topology section size is wrong\n");
+ return -EIO;
+ }
+
+ /* make sure the section fits in the buffer */
+ if (offset + size > ICE_PKG_BUF_SIZE) {
+ ice_debug(hw, ICE_DBG_INIT, "5 layer topology buffer > 4K\n");
+ return -EIO;
+ }
+
+ /* Get the new topology buffer */
+ new_topo = ((u8 *)section) + offset;
+
+update_topo:
+ /* acquire global lock to make sure that set topology issued
+ * by one PF
+ */
+ status = ice_acquire_global_cfg_lock(hw, ICE_RES_WRITE);
+ if (status) {
+ ice_debug(hw, ICE_DBG_INIT, "Failed to acquire global lock\n");
+ return status;
+ }
+
+ /* check reset was triggered already or not */
+ reg = rd32(hw, GLGEN_RSTAT);
+ if (reg & GLGEN_RSTAT_DEVSTATE_M) {
+ /* Reset is in progress, re-init the hw again */
+ ice_debug(hw, ICE_DBG_INIT, "Reset is in progress. layer topology might be applied already\n");
+ ice_check_reset(hw);
+ return 0;
+ }
+
+ /* set new topology */
+ status = ice_get_set_tx_topo(hw, new_topo, size, NULL, NULL, true);
+ if (status) {
+ ice_debug(hw, ICE_DBG_INIT, "Set Tx topology is failed\n");
+ return status;
+ }
+
+ /* new topology is updated, delay 1 second before issuing the CORRER */
+ for (i = 0; i < 10; i++)
+ msleep(100);
+ ice_reset(hw, ICE_RESET_CORER);
+ /* CORER will clear the global lock, so no explicit call
+ * required for release
+ */
+ return 0;
+}
+
/**
* ice_pkg_buf_alloc
* @hw: pointer to the HW structure
diff --git a/drivers/net/ethernet/intel/ice/ice_flex_type.h b/drivers/net/ethernet/intel/ice/ice_flex_type.h
index 974d14a83b2e..ebbb5a1db8c7 100644
--- a/drivers/net/ethernet/intel/ice/ice_flex_type.h
+++ b/drivers/net/ethernet/intel/ice/ice_flex_type.h
@@ -29,8 +29,9 @@ struct ice_pkg_hdr {
/* generic segment */
struct ice_generic_seg_hdr {
-#define SEGMENT_TYPE_METADATA 0x00000001
-#define SEGMENT_TYPE_ICE 0x00000010
+#define SEGMENT_TYPE_METADATA 0x00000001
+#define SEGMENT_TYPE_ICE 0x00000010
+#define SEGMENT_TYPE_ICE_RUN_TIME_CFG 0x00000020
__le32 seg_type;
struct ice_pkg_ver seg_format_ver;
__le32 seg_size;
@@ -73,6 +74,12 @@ struct ice_buf_table {
struct ice_buf buf_array[];
};
+struct ice_run_time_cfg_seg {
+ struct ice_generic_seg_hdr hdr;
+ u8 rsvd[8];
+ struct ice_buf_table buf_table;
+};
+
/* global metadata specific segment */
struct ice_global_metadata_seg {
struct ice_generic_seg_hdr hdr;
@@ -181,6 +188,9 @@ struct ice_meta_sect {
/* The following define MUST be updated to reflect the last label section ID */
#define ICE_SID_LBL_LAST 0x80000038
+/* Label ICE runtime configuration section IDs */
+#define ICE_SID_TX_5_LAYER_TOPO 0x10
+
enum ice_block {
ICE_BLK_SW = 0,
ICE_BLK_ACL,
@@ -706,4 +716,7 @@ struct ice_meta_init_section {
__le16 offset;
struct ice_meta_init_entry entry;
};
+
+int ice_cfg_tx_topo(struct ice_hw *hw, u8 *buf, u32 len);
+
#endif /* _ICE_FLEX_TYPE_H_ */
diff --git a/drivers/net/ethernet/intel/ice/ice_sched.h b/drivers/net/ethernet/intel/ice/ice_sched.h
index 4f91577fed56..86dc0f1f4255 100644
--- a/drivers/net/ethernet/intel/ice/ice_sched.h
+++ b/drivers/net/ethernet/intel/ice/ice_sched.h
@@ -6,6 +6,9 @@
#include "ice_common.h"
+#define ICE_SCHED_5_LAYERS 5
+#define ICE_SCHED_9_LAYERS 9
+
#define ICE_QGRP_LAYER_OFFSET 2
#define ICE_VSI_LAYER_OFFSET 4
#define ICE_AGG_LAYER_OFFSET 6
diff --git a/drivers/net/ethernet/intel/ice/ice_type.h b/drivers/net/ethernet/intel/ice/ice_type.h
index f2a518a1fd94..ff793fe2a2e7 100644
--- a/drivers/net/ethernet/intel/ice/ice_type.h
+++ b/drivers/net/ethernet/intel/ice/ice_type.h
@@ -290,6 +290,7 @@ struct ice_hw_common_caps {
bool pcie_reset_avoidance;
/* Post update reset restriction */
bool reset_restrict_support;
+ bool tx_sched_topo_comp_mode_en;
};
/* IEEE 1588 TIME_SYNC specific info */
--
2.27.0
* [Intel-wired-lan] [PATCH net-next v6 2/4] ice: Adjust the VSI/Aggregator layers
2022-07-20 14:40 [Intel-wired-lan] [PATCH net-next v6 0/4] ice: Support 5 layer tx scheduler topology Michal Wilczynski
2022-07-20 14:40 ` [Intel-wired-lan] [PATCH net-next v6 1/4] ice: Support 5 layer topology Michal Wilczynski
@ 2022-07-20 14:40 ` Michal Wilczynski
2022-07-20 14:40 ` [Intel-wired-lan] [PATCH net-next v6 3/4] ice: Enable switching default tx scheduler topology Michal Wilczynski
` (2 subsequent siblings)
4 siblings, 0 replies; 12+ messages in thread
From: Michal Wilczynski @ 2022-07-20 14:40 UTC (permalink / raw)
To: intel-wired-lan; +Cc: Raj Victor, Michal Wilczynski
From: Raj Victor <victor.raj@intel.com>
Adjust the VSI/aggregator layers based on the number of logical layers
supported by the FW. Currently the VSI and aggregator layers are
fixed, based on the 9-layer scheduler tree layout. For performance
reasons the number of layers of the scheduler tree is changing from
9 to 5, which requires a readjustment of these VSI/aggregator layer values.
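The readjustment in the diff below reduces to a per-topology offset. A userspace mirror of the adjusted VSI-layer lookup, under the offsets defined in ice_sched.h (a sketch, not the driver function itself):

```c
#include <assert.h>

#define ICE_QGRP_LAYER_OFFSET	2
#define ICE_VSI_LAYER_OFFSET	4

/* Userspace mirror of the adjusted ice_sched_get_vsi_layer():
 * 9-layer tree -> VSI layer 5; 5-layer tree -> VSI layer 3
 * (qgroup and VSI layers coincide); otherwise the SW entry point. */
static unsigned char vsi_layer(unsigned char num_layers,
			       unsigned char sw_entry_point_layer)
{
	if (num_layers == 9)
		return num_layers - ICE_VSI_LAYER_OFFSET;
	if (num_layers == 5)
		return num_layers - ICE_QGRP_LAYER_OFFSET;
	return sw_entry_point_layer;
}
```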
Signed-off-by: Raj Victor <victor.raj@intel.com>
Co-developed-by: Michal Wilczynski <michal.wilczynski@intel.com>
Signed-off-by: Michal Wilczynski <michal.wilczynski@intel.com>
---
drivers/net/ethernet/intel/ice/ice_sched.c | 35 +++++++++++-----------
1 file changed, 18 insertions(+), 17 deletions(-)
diff --git a/drivers/net/ethernet/intel/ice/ice_sched.c b/drivers/net/ethernet/intel/ice/ice_sched.c
index 7947223536e3..4d9cd7aa9db4 100644
--- a/drivers/net/ethernet/intel/ice/ice_sched.c
+++ b/drivers/net/ethernet/intel/ice/ice_sched.c
@@ -1102,12 +1102,11 @@ static u8 ice_sched_get_vsi_layer(struct ice_hw *hw)
* 5 or less sw_entry_point_layer
*/
/* calculate the VSI layer based on number of layers. */
- if (hw->num_tx_sched_layers > ICE_VSI_LAYER_OFFSET + 1) {
- u8 layer = hw->num_tx_sched_layers - ICE_VSI_LAYER_OFFSET;
-
- if (layer > hw->sw_entry_point_layer)
- return layer;
- }
+ if (hw->num_tx_sched_layers == ICE_SCHED_9_LAYERS)
+ return hw->num_tx_sched_layers - ICE_VSI_LAYER_OFFSET;
+ if (hw->num_tx_sched_layers == ICE_SCHED_5_LAYERS)
+ /* qgroup and VSI layers are same */
+ return hw->num_tx_sched_layers - ICE_QGRP_LAYER_OFFSET;
return hw->sw_entry_point_layer;
}
@@ -1124,12 +1123,8 @@ static u8 ice_sched_get_agg_layer(struct ice_hw *hw)
* 7 or less sw_entry_point_layer
*/
/* calculate the aggregator layer based on number of layers. */
- if (hw->num_tx_sched_layers > ICE_AGG_LAYER_OFFSET + 1) {
- u8 layer = hw->num_tx_sched_layers - ICE_AGG_LAYER_OFFSET;
-
- if (layer > hw->sw_entry_point_layer)
- return layer;
- }
+ if (hw->num_tx_sched_layers == ICE_SCHED_9_LAYERS)
+ return hw->num_tx_sched_layers - ICE_AGG_LAYER_OFFSET;
return hw->sw_entry_point_layer;
}
@@ -1485,10 +1480,11 @@ ice_sched_get_free_qparent(struct ice_port_info *pi, u16 vsi_handle, u8 tc,
{
struct ice_sched_node *vsi_node, *qgrp_node;
struct ice_vsi_ctx *vsi_ctx;
+ u8 qgrp_layer, vsi_layer;
u16 max_children;
- u8 qgrp_layer;
qgrp_layer = ice_sched_get_qgrp_layer(pi->hw);
+ vsi_layer = ice_sched_get_vsi_layer(pi->hw);
max_children = pi->hw->max_children[qgrp_layer];
vsi_ctx = ice_get_vsi_ctx(pi->hw, vsi_handle);
@@ -1499,6 +1495,12 @@ ice_sched_get_free_qparent(struct ice_port_info *pi, u16 vsi_handle, u8 tc,
if (!vsi_node)
return NULL;
+ /* If the queue group and vsi layer are same then queues
+ * are all attached directly to VSI
+ */
+ if (qgrp_layer == vsi_layer)
+ return vsi_node;
+
/* get the first queue group node from VSI sub-tree */
qgrp_node = ice_sched_get_first_node(pi, vsi_node, qgrp_layer);
while (qgrp_node) {
@@ -3178,8 +3180,9 @@ ice_sched_add_rl_profile(struct ice_port_info *pi,
u8 profile_type;
int status;
- if (layer_num >= ICE_AQC_TOPO_MAX_LEVEL_NUM)
+ if (!pi || layer_num >= pi->hw->num_tx_sched_layers)
return NULL;
+
switch (rl_type) {
case ICE_MIN_BW:
profile_type = ICE_AQC_RL_PROFILE_TYPE_CIR;
@@ -3194,8 +3197,6 @@ ice_sched_add_rl_profile(struct ice_port_info *pi,
return NULL;
}
- if (!pi)
- return NULL;
hw = pi->hw;
list_for_each_entry(rl_prof_elem, &pi->rl_prof_list[layer_num],
list_entry)
@@ -3425,7 +3426,7 @@ ice_sched_rm_rl_profile(struct ice_port_info *pi, u8 layer_num, u8 profile_type,
struct ice_aqc_rl_profile_info *rl_prof_elem;
int status = 0;
- if (layer_num >= ICE_AQC_TOPO_MAX_LEVEL_NUM)
+ if (layer_num >= pi->hw->num_tx_sched_layers)
return -EINVAL;
/* Check the existing list for RL profile */
list_for_each_entry(rl_prof_elem, &pi->rl_prof_list[layer_num],
--
2.27.0
* [Intel-wired-lan] [PATCH net-next v6 3/4] ice: Enable switching default tx scheduler topology
2022-07-20 14:40 [Intel-wired-lan] [PATCH net-next v6 0/4] ice: Support 5 layer tx scheduler topology Michal Wilczynski
2022-07-20 14:40 ` [Intel-wired-lan] [PATCH net-next v6 1/4] ice: Support 5 layer topology Michal Wilczynski
2022-07-20 14:40 ` [Intel-wired-lan] [PATCH net-next v6 2/4] ice: Adjust the VSI/Aggregator layers Michal Wilczynski
@ 2022-07-20 14:40 ` Michal Wilczynski
2022-07-20 14:40 ` [Intel-wired-lan] [PATCH net-next v6 4/4] ice: Add txbalancing devlink param Michal Wilczynski
2022-07-20 23:17 ` [Intel-wired-lan] [PATCH net-next v6 0/4] ice: Support 5 layer tx scheduler topology Tony Nguyen
4 siblings, 0 replies; 12+ messages in thread
From: Michal Wilczynski @ 2022-07-20 14:40 UTC (permalink / raw)
To: intel-wired-lan; +Cc: Michal Wilczynski
Introduce support for changing the Tx scheduler topology, based on user
selection, from the default 9-layer to the 5-layer one. For the switch to
be successful, a new NVM (version 3.20 or newer) and DDP package (OS
Package 1.3.29 or newer) are required.
Enable the 5-layer topology switch in the init path of the driver. To
accomplish that, the upload of the DDP package needs to be delayed until
the change in Tx topology is finished. To trigger the Tx topology change,
the user selection should be changed in the NVM using devlink, and then
the platform should be rebooted.
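The error policy of the new init path can be sketched in userspace terms: only a successful topology change forces a HW re-init (a CORER was issued), while an -EIO from ice_cfg_tx_topo (DDP package without the new topology section) or any other failure keeps the current topology and still loads the DDP package. A hypothetical sketch of that decision, with made-up enum names:

```c
#include <assert.h>
#include <errno.h>

enum topo_action { TOPO_REINIT_HW, TOPO_KEEP_CURRENT };

/* Hypothetical mirror of the policy in ice_init_tx_topology():
 * 0     -> topology changed, CORER issued, re-init the HW
 * -EIO  -> DDP package lacks the new topology section, keep going
 * other -> keep the current topology, still load the DDP package */
static enum topo_action cfg_tx_topo_policy(int err)
{
	if (!err)
		return TOPO_REINIT_HW;
	return TOPO_KEEP_CURRENT;
}
```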
Signed-off-by: Michal Wilczynski <michal.wilczynski@intel.com>
---
drivers/net/ethernet/intel/ice/ice_common.c | 2 +
.../net/ethernet/intel/ice/ice_flex_pipe.c | 3 +-
drivers/net/ethernet/intel/ice/ice_main.c | 113 +++++++++++++++---
3 files changed, 98 insertions(+), 20 deletions(-)
diff --git a/drivers/net/ethernet/intel/ice/ice_common.c b/drivers/net/ethernet/intel/ice/ice_common.c
index 8b65e2bfb160..167f9d5c345a 100644
--- a/drivers/net/ethernet/intel/ice/ice_common.c
+++ b/drivers/net/ethernet/intel/ice/ice_common.c
@@ -1535,6 +1535,8 @@ ice_aq_send_cmd(struct ice_hw *hw, struct ice_aq_desc *desc, void *buf,
case ice_aqc_opc_set_port_params:
case ice_aqc_opc_get_vlan_mode_parameters:
case ice_aqc_opc_set_vlan_mode_parameters:
+ case ice_aqc_opc_set_tx_topo:
+ case ice_aqc_opc_get_tx_topo:
case ice_aqc_opc_add_recipe:
case ice_aqc_opc_recipe_to_profile:
case ice_aqc_opc_get_recipe:
diff --git a/drivers/net/ethernet/intel/ice/ice_flex_pipe.c b/drivers/net/ethernet/intel/ice/ice_flex_pipe.c
index 7c82f05621e3..02c7f3d2c027 100644
--- a/drivers/net/ethernet/intel/ice/ice_flex_pipe.c
+++ b/drivers/net/ethernet/intel/ice/ice_flex_pipe.c
@@ -1952,7 +1952,8 @@ int ice_cfg_tx_topo(struct ice_hw *hw, u8 *buf, u32 len)
/* acquire global lock to make sure that set topology issued
* by one PF
*/
- status = ice_acquire_global_cfg_lock(hw, ICE_RES_WRITE);
+ status = ice_acquire_res(hw, ICE_GLOBAL_CFG_LOCK_RES_ID, ICE_RES_WRITE,
+ ICE_GLOBAL_CFG_LOCK_TIMEOUT);
if (status) {
ice_debug(hw, ICE_DBG_INIT, "Failed to acquire global lock\n");
return status;
diff --git a/drivers/net/ethernet/intel/ice/ice_main.c b/drivers/net/ethernet/intel/ice/ice_main.c
index 313716615e98..9836bb6e12c6 100644
--- a/drivers/net/ethernet/intel/ice/ice_main.c
+++ b/drivers/net/ethernet/intel/ice/ice_main.c
@@ -4453,11 +4453,11 @@ static char *ice_get_opt_fw_name(struct ice_pf *pf)
/**
* ice_request_fw - Device initialization routine
* @pf: pointer to the PF instance
+ * @firmware: double pointer to firmware struct
*/
-static void ice_request_fw(struct ice_pf *pf)
+static int ice_request_fw(struct ice_pf *pf, const struct firmware **firmware)
{
char *opt_fw_filename = ice_get_opt_fw_name(pf);
- const struct firmware *firmware = NULL;
struct device *dev = ice_pf_to_dev(pf);
int err = 0;
@@ -4466,29 +4466,98 @@ static void ice_request_fw(struct ice_pf *pf)
* and warning messages for other errors.
*/
if (opt_fw_filename) {
- err = firmware_request_nowarn(&firmware, opt_fw_filename, dev);
- if (err) {
- kfree(opt_fw_filename);
- goto dflt_pkg_load;
- }
-
- /* request for firmware was successful. Download to device */
- ice_load_pkg(firmware, pf);
+ err = firmware_request_nowarn(firmware, opt_fw_filename, dev);
kfree(opt_fw_filename);
- release_firmware(firmware);
- return;
+ if (!err)
+ return err;
}
-dflt_pkg_load:
- err = request_firmware(&firmware, ICE_DDP_PKG_FILE, dev);
- if (err) {
+ err = request_firmware(firmware, ICE_DDP_PKG_FILE, dev);
+ if (err)
dev_err(dev, "The DDP package file was not found or could not be read. Entering Safe Mode\n");
- return;
+
+ return err;
+}
+
+/**
+ * ice_init_tx_topology - performs Tx topology initialization
+ * @hw: pointer to the hardware structure
+ * @firmware: pointer to firmware structure
+ */
+static int ice_init_tx_topology(struct ice_hw *hw,
+ const struct firmware *firmware)
+{
+ u8 num_tx_sched_layers = hw->num_tx_sched_layers;
+ struct ice_pf *pf = hw->back;
+ struct device *dev;
+ u8 *buf_copy;
+ int err;
+
+ dev = ice_pf_to_dev(pf);
+ /* ice_cfg_tx_topo buf argument is not a constant,
+ * so we have to make a copy
+ */
+ buf_copy = kmemdup(firmware->data, firmware->size, GFP_KERNEL);
+
+ err = ice_cfg_tx_topo(hw, buf_copy, firmware->size);
+ if (!err) {
+ if (hw->num_tx_sched_layers > num_tx_sched_layers)
+ dev_info(dev, "Transmit balancing feature disabled\n");
+ else
+ dev_info(dev, "Transmit balancing feature enabled\n");
+
+ /* if there was a change in topology ice_cfg_tx_topo triggered
+ * a CORER and we need to re-init hw.
+ */
+ ice_deinit_hw(hw);
+ err = ice_init_hw(hw);
+
+ /* in this case we're not allowing safe mode */
+ kfree(buf_copy);
+
+ return err;
+
+ } else if (err == -EIO) {
+ dev_info(dev, "DDP package does not support transmit balancing feature - please update to the latest DDP package and try again\n");
+ }
+
+ kfree(buf_copy);
+
+ return 0;
+}
+
+/**
+ * ice_init_ddp_config - DDP related configuration
+ * @hw: pointer to the hardware structure
+ * @pf: pointer to pf structure
+ *
+ * This function loads DDP file from the disk, then initializes tx
+ * topology. At the end DDP package is loaded on the card.
+ */
+static int ice_init_ddp_config(struct ice_hw *hw, struct ice_pf *pf)
+{
+ struct device *dev = ice_pf_to_dev(pf);
+ const struct firmware *firmware = NULL;
+ int err;
+
+ err = ice_request_fw(pf, &firmware);
+ if (err)
+ /* we can still operate in safe mode if DDP package load fails */
+ return 0;
+
+ err = ice_init_tx_topology(hw, firmware);
+ if (err) {
+ dev_err(dev, "ice_init_hw during change of tx topology failed: %d\n",
+ err);
+ release_firmware(firmware);
+ return err;
}
- /* request for firmware was successful. Download to device */
+ /* Download firmware to device */
ice_load_pkg(firmware, pf);
release_firmware(firmware);
+
+ return 0;
}
/**
@@ -4641,9 +4710,15 @@ ice_probe(struct pci_dev *pdev, const struct pci_device_id __always_unused *ent)
ice_init_feature_support(pf);
- ice_request_fw(pf);
+ err = ice_init_ddp_config(hw, pf);
+
+ /* during topology change ice_init_hw may fail */
+ if (err) {
+ err = -EIO;
+ goto err_exit_unroll;
+ }
- /* if ice_request_fw fails, ICE_FLAG_ADV_FEATURES bit won't be
+ /* if ice_init_ddp_config fails, ICE_FLAG_ADV_FEATURES bit won't be
* set in pf->state, which will cause ice_is_safe_mode to return
* true
*/
--
2.27.0
* [Intel-wired-lan] [PATCH net-next v6 4/4] ice: Add txbalancing devlink param
2022-07-20 14:40 [Intel-wired-lan] [PATCH net-next v6 0/4] ice: Support 5 layer tx scheduler topology Michal Wilczynski
` (2 preceding siblings ...)
2022-07-20 14:40 ` [Intel-wired-lan] [PATCH net-next v6 3/4] ice: Enable switching default tx scheduler topology Michal Wilczynski
@ 2022-07-20 14:40 ` Michal Wilczynski
2022-07-20 17:17 ` kernel test robot
` (2 more replies)
2022-07-20 23:17 ` [Intel-wired-lan] [PATCH net-next v6 0/4] ice: Support 5 layer tx scheduler topology Tony Nguyen
4 siblings, 3 replies; 12+ messages in thread
From: Michal Wilczynski @ 2022-07-20 14:40 UTC (permalink / raw)
To: intel-wired-lan; +Cc: Michal Wilczynski
From: Lukasz Czapnik <lukasz.czapnik@intel.com>
It was observed that Tx performance was inconsistent across all queues
and/or VSIs and that it was directly connected to the existing 9-layer
topology of the Tx scheduler.
Introduce a new private devlink param - txbalancing. This parameter gives
the user the flexibility to choose the 5-layer transmit scheduler topology,
which helps to smooth out the transmit performance.
Allowed parameter values are true (enabled) and false (disabled).
Example usage:
Show:
devlink dev param show pci/0000:4b:00.0 name txbalancing
pci/0000:4b:00.0:
name txbalancing type driver-specific
values:
cmode permanent value true
Set:
devlink dev param set pci/0000:4b:00.0 name txbalancing value true cmode
permanent
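The patch stores this choice as a single bit (BIT(4)) in the Tx topology user-selection NVM word; the read-modify-write done by ice_update_tx_topo_user_sel() amounts to the following userspace sketch (the helper name is made up for this example):

```c
#include <assert.h>

#define ICE_AQC_NVM_TX_TOPO_USER_SEL	(1u << 4)	/* BIT(4) */

/* Userspace mirror of the bit update in ice_update_tx_topo_user_sel():
 * true selects the 5-layer topology, false the default 9-layer one. */
static unsigned char tx_topo_user_sel(unsigned char data, int txbalance_ena)
{
	if (txbalance_ena)
		return data | ICE_AQC_NVM_TX_TOPO_USER_SEL;
	return data & (unsigned char)~ICE_AQC_NVM_TX_TOPO_USER_SEL;
}
```

Other bits of the selection byte are preserved, which is why the driver reads the TLV before writing it back.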
Signed-off-by: Lukasz Czapnik <lukasz.czapnik@intel.com>
Signed-off-by: Michal Wilczynski <michal.wilczynski@intel.com>
---
.../net/ethernet/intel/ice/ice_adminq_cmd.h | 9 +
drivers/net/ethernet/intel/ice/ice_devlink.c | 159 ++++++++++++++++++
.../net/ethernet/intel/ice/ice_fw_update.c | 7 +-
.../net/ethernet/intel/ice/ice_fw_update.h | 3 +
drivers/net/ethernet/intel/ice/ice_nvm.c | 2 +-
drivers/net/ethernet/intel/ice/ice_nvm.h | 4 +
6 files changed, 179 insertions(+), 5 deletions(-)
diff --git a/drivers/net/ethernet/intel/ice/ice_adminq_cmd.h b/drivers/net/ethernet/intel/ice/ice_adminq_cmd.h
index fe50309c5d1c..238cf9d4870b 100644
--- a/drivers/net/ethernet/intel/ice/ice_adminq_cmd.h
+++ b/drivers/net/ethernet/intel/ice/ice_adminq_cmd.h
@@ -1515,6 +1515,15 @@ struct ice_aqc_nvm {
};
#define ICE_AQC_NVM_START_POINT 0
+#define ICE_AQC_NVM_TX_TOPO_MOD_ID 0x14B
+
+struct ice_aqc_nvm_tx_topo_user_sel {
+ __le16 length;
+ u8 data;
+#define ICE_AQC_NVM_TX_TOPO_USER_SEL BIT(4)
+
+ u8 reserved;
+};
/* NVM Checksum Command (direct, 0x0706) */
struct ice_aqc_nvm_checksum {
diff --git a/drivers/net/ethernet/intel/ice/ice_devlink.c b/drivers/net/ethernet/intel/ice/ice_devlink.c
index 3337314a7b35..e2388ba229f7 100644
--- a/drivers/net/ethernet/intel/ice/ice_devlink.c
+++ b/drivers/net/ethernet/intel/ice/ice_devlink.c
@@ -372,6 +372,158 @@ static int ice_devlink_info_get(struct devlink *devlink,
return err;
}
+enum ice_devlink_param_id {
+ ICE_DEVLINK_PARAM_ID_BASE = DEVLINK_PARAM_GENERIC_ID_MAX,
+ ICE_DEVLINK_PARAM_ID_TX_BALANCE,
+};
+
+/**
+ * ice_get_tx_topo_user_sel - Read user's choice from flash
+ * @pf: pointer to pf structure
+ * @txbalance_ena: value read from flash will be saved here
+ *
+ * Reads user's preference for Tx Scheduler Topology Tree from PFA TLV.
+ *
+ * Returns zero when read was successful, negative values otherwise.
+ */
+int ice_get_tx_topo_user_sel(struct ice_pf *pf, bool *txbalance_ena)
+{
+ struct ice_aqc_nvm_tx_topo_user_sel usr_sel = {};
+ struct ice_hw *hw = &pf->hw;
+ int status;
+
+ status = ice_acquire_nvm(hw, ICE_RES_READ);
+ if (status)
+ return status;
+
+ status = ice_aq_read_nvm(hw, ICE_AQC_NVM_TX_TOPO_MOD_ID, 0,
+ sizeof(usr_sel), &usr_sel, true, true, NULL);
+ ice_release_nvm(hw);
+
+ *txbalance_ena = usr_sel.data & ICE_AQC_NVM_TX_TOPO_USER_SEL;
+
+ return status;
+}
+
+/**
+ * ice_update_tx_topo_user_sel - Save user's preference in flash
+ * @pf: pointer to pf structure
+ * @txbalance_ena: value to be saved in flash
+ *
+ * When txbalance_ena is set to true it means user's preference is to use
+ * five layer Tx Scheduler Topology Tree, when it is set to false then it is
+ * nine layer. This choice should be stored in PFA TLV field and should be
+ * picked up by driver, next time during init.
+ *
+ * Returns zero when save was successful, negative values otherwise.
+ */
+int
+ice_update_tx_topo_user_sel(struct ice_pf *pf, bool txbalance_ena)
+{
+ struct ice_aqc_nvm_tx_topo_user_sel usr_sel = {};
+ struct ice_hw *hw = &pf->hw;
+ int status;
+
+ status = ice_acquire_nvm(hw, ICE_RES_WRITE);
+ if (status)
+ return status;
+
+ status = ice_aq_read_nvm(hw, ICE_AQC_NVM_TX_TOPO_MOD_ID, 0,
+ sizeof(usr_sel), &usr_sel, true, true, NULL);
+ if (status)
+ goto exit_release_res;
+
+ if (txbalance_ena)
+ usr_sel.data |= ICE_AQC_NVM_TX_TOPO_USER_SEL;
+ else
+ usr_sel.data &= ~ICE_AQC_NVM_TX_TOPO_USER_SEL;
+
+ status = ice_write_one_nvm_block(pf, ICE_AQC_NVM_TX_TOPO_MOD_ID, 2,
+ sizeof(usr_sel.data), &usr_sel.data,
+ true, NULL, NULL);
+
+exit_release_res:
+ ice_release_nvm(hw);
+
+ return status;
+}
+
+/**
+ * ice_devlink_txbalance_get - Get txbalance parameter
+ * @devlink: pointer to the devlink instance
+ * @id: the parameter ID to set
+ * @ctx: context to store the parameter value
+ *
+ * Returns zero on success and negative value on failure.
+ */
+static int ice_devlink_txbalance_get(struct devlink *devlink, u32 id,
+ struct devlink_param_gset_ctx *ctx)
+{
+ struct ice_pf *pf = devlink_priv(devlink);
+ struct device *dev = ice_pf_to_dev(pf);
+ int status;
+
+ status = ice_get_tx_topo_user_sel(pf, &ctx->val.vbool);
+ if (status) {
+ dev_warn(dev, "Failed to read Tx Scheduler Tree - User Selection data from flash\n");
+ return -EIO;
+ }
+
+ return 0;
+}
+
+/**
+ * ice_devlink_txbalance_set - Set txbalance parameter
+ * @devlink: pointer to the devlink instance
+ * @id: the parameter ID to set
+ * @ctx: context to get the parameter value
+ *
+ * Returns zero on success and negative value on failure.
+ */
+static int ice_devlink_txbalance_set(struct devlink *devlink, u32 id,
+ struct devlink_param_gset_ctx *ctx)
+{
+ struct ice_pf *pf = devlink_priv(devlink);
+ struct device *dev = ice_pf_to_dev(pf);
+ int status;
+
+ status = ice_update_tx_topo_user_sel(pf, ctx->val.vbool);
+ if (status)
+ return -EIO;
+
+ dev_warn(dev, "Transmit balancing setting has been changed on this device. You must reboot the system for the change to take effect");
+
+ return 0;
+}
+
+/**
+ * ice_devlink_txbalance_validate - Validate passed txbalance parameter value
+ * @devlink: unused pointer to devlink instance
+ * @id: the parameter ID to validate
+ * @val: value to validate
+ * @extack: netlink extended ACK structure
+ *
+ * Supported values are:
+ * true - five layer, false - nine layer Tx Scheduler Topology Tree
+ *
+ * Returns zero when passed parameter value is supported. Negative value on
+ * error.
+ */
+static int ice_devlink_txbalance_validate(struct devlink *devlink, u32 id,
+ union devlink_param_value val,
+ struct netlink_ext_ack *extack)
+{
+ struct ice_pf *pf = devlink_priv(devlink);
+ struct ice_hw *hw = &pf->hw;
+
+ if (!hw->func_caps.common_cap.tx_sched_topo_comp_mode_en) {
+ NL_SET_ERR_MSG_MOD(extack, "Error: Requested feature is not supported by the FW on this device. Update the FW and run this command again.");
+ return -EOPNOTSUPP;
+ }
+
+ return 0;
+}
+
/**
* ice_devlink_reload_empr_start - Start EMP reset to activate new firmware
* @devlink: pointer to the devlink instance to reload
@@ -589,6 +741,13 @@ static const struct devlink_param ice_devlink_params[] = {
ice_devlink_enable_iw_get,
ice_devlink_enable_iw_set,
ice_devlink_enable_iw_validate),
+ DEVLINK_PARAM_DRIVER(ICE_DEVLINK_PARAM_ID_TX_BALANCE,
+ "txbalancing",
+ DEVLINK_PARAM_TYPE_BOOL,
+ BIT(DEVLINK_PARAM_CMODE_PERMANENT),
+ ice_devlink_txbalance_get,
+ ice_devlink_txbalance_set,
+ ice_devlink_txbalance_validate),
};
diff --git a/drivers/net/ethernet/intel/ice/ice_fw_update.c b/drivers/net/ethernet/intel/ice/ice_fw_update.c
index 3dc5662d62a6..2e8db018a630 100644
--- a/drivers/net/ethernet/intel/ice/ice_fw_update.c
+++ b/drivers/net/ethernet/intel/ice/ice_fw_update.c
@@ -286,10 +286,9 @@ ice_send_component_table(struct pldmfw *context, struct pldmfw_component *compon
*
* Returns: zero on success, or a negative error code on failure.
*/
-static int
-ice_write_one_nvm_block(struct ice_pf *pf, u16 module, u32 offset,
- u16 block_size, u8 *block, bool last_cmd,
- u8 *reset_level, struct netlink_ext_ack *extack)
+int ice_write_one_nvm_block(struct ice_pf *pf, u16 module, u32 offset,
+ u16 block_size, u8 *block, bool last_cmd,
+ u8 *reset_level, struct netlink_ext_ack *extack)
{
u16 completion_module, completion_retval;
struct device *dev = ice_pf_to_dev(pf);
diff --git a/drivers/net/ethernet/intel/ice/ice_fw_update.h b/drivers/net/ethernet/intel/ice/ice_fw_update.h
index 750574885716..04b200462757 100644
--- a/drivers/net/ethernet/intel/ice/ice_fw_update.h
+++ b/drivers/net/ethernet/intel/ice/ice_fw_update.h
@@ -9,5 +9,8 @@ int ice_devlink_flash_update(struct devlink *devlink,
struct netlink_ext_ack *extack);
int ice_get_pending_updates(struct ice_pf *pf, u8 *pending,
struct netlink_ext_ack *extack);
+int ice_write_one_nvm_block(struct ice_pf *pf, u16 module, u32 offset,
+ u16 block_size, u8 *block, bool last_cmd,
+ u8 *reset_level, struct netlink_ext_ack *extack);
#endif
diff --git a/drivers/net/ethernet/intel/ice/ice_nvm.c b/drivers/net/ethernet/intel/ice/ice_nvm.c
index 13cdb5ea594d..7e2c7b55899e 100644
--- a/drivers/net/ethernet/intel/ice/ice_nvm.c
+++ b/drivers/net/ethernet/intel/ice/ice_nvm.c
@@ -18,7 +18,7 @@
*
* Read the NVM using the admin queue commands (0x0701)
*/
-static int
+int
ice_aq_read_nvm(struct ice_hw *hw, u16 module_typeid, u32 offset, u16 length,
void *data, bool last_command, bool read_shadow_ram,
struct ice_sq_cd *cd)
diff --git a/drivers/net/ethernet/intel/ice/ice_nvm.h b/drivers/net/ethernet/intel/ice/ice_nvm.h
index 856d1ad4398b..84ecf45b9db6 100644
--- a/drivers/net/ethernet/intel/ice/ice_nvm.h
+++ b/drivers/net/ethernet/intel/ice/ice_nvm.h
@@ -15,6 +15,10 @@ struct ice_orom_civd_info {
int ice_acquire_nvm(struct ice_hw *hw, enum ice_aq_res_access_type access);
void ice_release_nvm(struct ice_hw *hw);
int
+ice_aq_read_nvm(struct ice_hw *hw, u16 module_typeid, u32 offset, u16 length,
+ void *data, bool last_command, bool read_shadow_ram,
+ struct ice_sq_cd *cd);
+int
ice_read_flat_nvm(struct ice_hw *hw, u32 offset, u32 *length, u8 *data,
bool read_shadow_ram);
int
--
2.27.0
_______________________________________________
Intel-wired-lan mailing list
Intel-wired-lan@osuosl.org
https://lists.osuosl.org/mailman/listinfo/intel-wired-lan
^ permalink raw reply related [flat|nested] 12+ messages in thread
* Re: [Intel-wired-lan] [PATCH net-next v6 4/4] ice: Add txbalancing devlink param
2022-07-20 14:40 ` [Intel-wired-lan] [PATCH net-next v6 4/4] ice: Add txbalancing devlink param Michal Wilczynski
@ 2022-07-20 17:17 ` kernel test robot
2022-07-20 23:17 ` Tony Nguyen
2022-07-21 1:13 ` kernel test robot
2 siblings, 0 replies; 12+ messages in thread
From: kernel test robot @ 2022-07-20 17:17 UTC (permalink / raw)
To: Michal Wilczynski, intel-wired-lan; +Cc: kbuild-all, Michal Wilczynski
Hi Michal,
Thank you for the patch! Perhaps something to improve:
[auto build test WARNING on net-next/master]
url: https://github.com/intel-lab-lkp/linux/commits/Michal-Wilczynski/ice-Support-5-layer-tx-scheduler-topology/20220720-224322
base: https://git.kernel.org/pub/scm/linux/kernel/git/davem/net-next.git 5fb859f79f4f49d9df16bac2b3a84a6fa3aaccf1
config: x86_64-randconfig-a002 (https://download.01.org/0day-ci/archive/20220721/202207210108.7ZpVcgDQ-lkp@intel.com/config)
compiler: gcc-11 (Debian 11.3.0-3) 11.3.0
reproduce (this is a W=1 build):
# https://github.com/intel-lab-lkp/linux/commit/15b804e74b266402a1af3d04b1b3106d06670c23
git remote add linux-review https://github.com/intel-lab-lkp/linux
git fetch --no-tags linux-review Michal-Wilczynski/ice-Support-5-layer-tx-scheduler-topology/20220720-224322
git checkout 15b804e74b266402a1af3d04b1b3106d06670c23
# save the config file
mkdir build_dir && cp config build_dir/.config
make W=1 O=build_dir ARCH=x86_64 SHELL=/bin/bash
If you fix the issue, kindly add following tag where applicable
Reported-by: kernel test robot <lkp@intel.com>
All warnings (new ones prefixed by >>):
>> drivers/net/ethernet/intel/ice/ice_devlink.c:389:5: warning: no previous prototype for 'ice_get_tx_topo_user_sel' [-Wmissing-prototypes]
389 | int ice_get_tx_topo_user_sel(struct ice_pf *pf, bool *txbalance_ena)
| ^~~~~~~~~~~~~~~~~~~~~~~~
>> drivers/net/ethernet/intel/ice/ice_devlink.c:421:1: warning: no previous prototype for 'ice_update_tx_topo_user_sel' [-Wmissing-prototypes]
421 | ice_update_tx_topo_user_sel(struct ice_pf *pf, bool txbalance_ena)
| ^~~~~~~~~~~~~~~~~~~~~~~~~~~
vim +/ice_get_tx_topo_user_sel +389 drivers/net/ethernet/intel/ice/ice_devlink.c
379
380 /**
381 * ice_get_tx_topo_user_sel - Read user's choice from flash
382 * @pf: pointer to pf structure
383 * @txbalance_ena: value read from flash will be saved here
384 *
385 * Reads user's preference for Tx Scheduler Topology Tree from PFA TLV.
386 *
387 * Returns zero when read was successful, negative values otherwise.
388 */
> 389 int ice_get_tx_topo_user_sel(struct ice_pf *pf, bool *txbalance_ena)
390 {
391 struct ice_aqc_nvm_tx_topo_user_sel usr_sel = {};
392 struct ice_hw *hw = &pf->hw;
393 int status;
394
395 status = ice_acquire_nvm(hw, ICE_RES_READ);
396 if (status)
397 return status;
398
399 status = ice_aq_read_nvm(hw, ICE_AQC_NVM_TX_TOPO_MOD_ID, 0,
400 sizeof(usr_sel), &usr_sel, true, true, NULL);
401 ice_release_nvm(hw);
402
403 *txbalance_ena = usr_sel.data & ICE_AQC_NVM_TX_TOPO_USER_SEL;
404
405 return status;
406 }
407
408 /**
409 * ice_update_tx_topo_user_sel - Save user's preference in flash
410 * @pf: pointer to pf structure
411 * @txbalance_ena: value to be saved in flash
412 *
413 * When txbalance_ena is set to true it means user's preference is to use
414 * five layer Tx Scheduler Topology Tree, when it is set to false then it is
415 * nine layer. This choice should be stored in PFA TLV field and should be
416 * picked up by driver, next time during init.
417 *
418 * Returns zero when save was successful, negative values otherwise.
419 */
420 int
> 421 ice_update_tx_topo_user_sel(struct ice_pf *pf, bool txbalance_ena)
422 {
423 struct ice_aqc_nvm_tx_topo_user_sel usr_sel = {};
424 struct ice_hw *hw = &pf->hw;
425 int status;
426
427 status = ice_acquire_nvm(hw, ICE_RES_WRITE);
428 if (status)
429 return status;
430
431 status = ice_aq_read_nvm(hw, ICE_AQC_NVM_TX_TOPO_MOD_ID, 0,
432 sizeof(usr_sel), &usr_sel, true, true, NULL);
433 if (status)
434 goto exit_release_res;
435
436 if (txbalance_ena)
437 usr_sel.data |= ICE_AQC_NVM_TX_TOPO_USER_SEL;
438 else
439 usr_sel.data &= ~ICE_AQC_NVM_TX_TOPO_USER_SEL;
440
441 status = ice_write_one_nvm_block(pf, ICE_AQC_NVM_TX_TOPO_MOD_ID, 2,
442 sizeof(usr_sel.data), &usr_sel.data,
443 true, NULL, NULL);
444
445 exit_release_res:
446 ice_release_nvm(hw);
447
448 return status;
449 }
450
--
0-DAY CI Kernel Test Service
https://01.org/lkp
* Re: [Intel-wired-lan] [PATCH net-next v6 0/4] ice: Support 5 layer tx scheduler topology
2022-07-20 14:40 [Intel-wired-lan] [PATCH net-next v6 0/4] ice: Support 5 layer tx scheduler topology Michal Wilczynski
` (3 preceding siblings ...)
2022-07-20 14:40 ` [Intel-wired-lan] [PATCH net-next v6 4/4] ice: Add txbalancing devlink param Michal Wilczynski
@ 2022-07-20 23:17 ` Tony Nguyen
2022-07-21 12:03 ` Wilczynski, Michal
4 siblings, 1 reply; 12+ messages in thread
From: Tony Nguyen @ 2022-07-20 23:17 UTC (permalink / raw)
To: Michal Wilczynski, intel-wired-lan
On 7/20/2022 7:40 AM, Michal Wilczynski wrote:
> For performance reasons there is a need to have support for selectable
> tx scheduler topology. Currently firmware supports only the default
> 9-layer and 5-layer topology. This patch series enables switch from
> default to 5-layer topology, if user decides to opt-in.
Are you using next-queue/dev-queue[1] to base these patches on? These
are not applying, however, they do appear to apply to net-next as lkp is
able to apply them to that tree[2]. Please use net-queue or next-queue
if you are sending patches for IWL.
[1]
https://git.kernel.org/pub/scm/linux/kernel/git/tnguy/next-queue.git/log/?h=dev-queue
[2]
https://lore.kernel.org/intel-wired-lan/202207210108.7ZpVcgDQ-lkp@intel.com/
* Re: [Intel-wired-lan] [PATCH net-next v6 4/4] ice: Add txbalancing devlink param
2022-07-20 14:40 ` [Intel-wired-lan] [PATCH net-next v6 4/4] ice: Add txbalancing devlink param Michal Wilczynski
2022-07-20 17:17 ` kernel test robot
@ 2022-07-20 23:17 ` Tony Nguyen
2022-07-21 14:46 ` Wilczynski, Michal
2022-07-21 1:13 ` kernel test robot
2 siblings, 1 reply; 12+ messages in thread
From: Tony Nguyen @ 2022-07-20 23:17 UTC (permalink / raw)
To: Michal Wilczynski, intel-wired-lan
On 7/20/2022 7:40 AM, Michal Wilczynski wrote:
> From: Lukasz Czapnik <lukasz.czapnik@intel.com>
>
> It was observed that Tx performance was inconsistent across all queues
> and/or VSIs and that it was directly connected to existing 9-layer
> topology of the Tx scheduler.
>
> Introduce new private devlink param - txbalance. This paramerer gives user
s/paramerer/parameter
> flexibility to choose the 5-layer transmit scheduler topology which helps
> to smooth out the transmit performance.
>
> Allowed parameter values are true for enabled and false for disabled.
Please document these in Documentation/networking/devlink/ice.rst
> Example usage:
>
> Show:
> devlink dev param show pci/0000:4b:00.0 name txbalancing
> pci/0000:4b:00.0:
> name txbalancing type driver-specific
> values:
> cmode permanent value true
>
> Set:
> devlink dev param set pci/0000:4b:00.0 name txbalancing value true cmode
> permanent
>
> Signed-off-by: Lukasz Czapnik <lukasz.czapnik@intel.com>
> Signed-off-by: Michal Wilczynski <michal.wilczynski@intel.com>
<snip>
> +/**
> + * ice_devlink_txbalance_get - Get txbalance parameter
> + * @devlink: pointer to the devlink instance
> + * @id: the parameter ID to set
> + * @ctx: context to store the parameter value
> + *
> + * Returns zero on success and negative value on failure.
> + */
> +static int ice_devlink_txbalance_get(struct devlink *devlink, u32 id,
> + struct devlink_param_gset_ctx *ctx)
nit: Can you use GNU style on these
static int
ice_devlink_txbalance_get(...)
> +{
> + struct ice_pf *pf = devlink_priv(devlink);
> + struct device *dev = ice_pf_to_dev(pf);
> + int status;
> +
> + status = ice_get_tx_topo_user_sel(pf, &ctx->val.vbool);
> + if (status) {
> + dev_warn(dev, "Failed to read Tx Scheduler Tree - User Selection data from flash\n");
> + return -EIO;
> + }
> +
> + return 0;
> +}
> +
As well as the lkp reported issues[1]
[1]
https://lore.kernel.org/intel-wired-lan/202207210108.7ZpVcgDQ-lkp@intel.com/
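The documentation Tony asks for above goes in the driver-specific parameter table of `Documentation/networking/devlink/ice.rst`. A sketch of the kind of entry that file expects (the wording below is illustrative, not the text that was eventually merged):

```rst
.. list-table:: Driver-specific parameters implemented
   :widths: 5 5 5 85

   * - Name
     - Type
     - Mode
     - Description
   * - ``txbalancing``
     - Boolean
     - permanent
     - When ``true``, the five-layer Tx scheduler topology is used after
       the next reboot; when ``false``, the default nine-layer topology
       is used. Requires firmware support and takes effect only after
       a reboot.
```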
* Re: [Intel-wired-lan] [PATCH net-next v6 4/4] ice: Add txbalancing devlink param
2022-07-20 14:40 ` [Intel-wired-lan] [PATCH net-next v6 4/4] ice: Add txbalancing devlink param Michal Wilczynski
@ 2022-07-21 1:13 ` kernel test robot
2022-07-20 23:17 ` Tony Nguyen
2022-07-21 1:13 ` kernel test robot
2 siblings, 0 replies; 12+ messages in thread
From: kernel test robot @ 2022-07-21 1:13 UTC (permalink / raw)
To: Michal Wilczynski, intel-wired-lan; +Cc: llvm, kbuild-all, Michal Wilczynski
Hi Michal,
Thank you for the patch! Perhaps something to improve:
[auto build test WARNING on net-next/master]
url: https://github.com/intel-lab-lkp/linux/commits/Michal-Wilczynski/ice-Support-5-layer-tx-scheduler-topology/20220720-224322
base: https://git.kernel.org/pub/scm/linux/kernel/git/davem/net-next.git 5fb859f79f4f49d9df16bac2b3a84a6fa3aaccf1
config: i386-randconfig-a013 (https://download.01.org/0day-ci/archive/20220721/202207210918.4FyECq6p-lkp@intel.com/config)
compiler: clang version 15.0.0 (https://github.com/llvm/llvm-project dd5635541cd7bbd62cd59b6694dfb759b6e9a0d8)
reproduce (this is a W=1 build):
wget https://raw.githubusercontent.com/intel/lkp-tests/master/sbin/make.cross -O ~/bin/make.cross
chmod +x ~/bin/make.cross
# https://github.com/intel-lab-lkp/linux/commit/15b804e74b266402a1af3d04b1b3106d06670c23
git remote add linux-review https://github.com/intel-lab-lkp/linux
git fetch --no-tags linux-review Michal-Wilczynski/ice-Support-5-layer-tx-scheduler-topology/20220720-224322
git checkout 15b804e74b266402a1af3d04b1b3106d06670c23
# save the config file
mkdir build_dir && cp config build_dir/.config
COMPILER_INSTALL_PATH=$HOME/0day COMPILER=clang make.cross W=1 O=build_dir ARCH=i386 SHELL=/bin/bash drivers/net/ethernet/intel/ice/
If you fix the issue, kindly add following tag where applicable
Reported-by: kernel test robot <lkp@intel.com>
All warnings (new ones prefixed by >>):
>> drivers/net/ethernet/intel/ice/ice_devlink.c:389:5: warning: no previous prototype for function 'ice_get_tx_topo_user_sel' [-Wmissing-prototypes]
int ice_get_tx_topo_user_sel(struct ice_pf *pf, bool *txbalance_ena)
^
drivers/net/ethernet/intel/ice/ice_devlink.c:389:1: note: declare 'static' if the function is not intended to be used outside of this translation unit
int ice_get_tx_topo_user_sel(struct ice_pf *pf, bool *txbalance_ena)
^
static
>> drivers/net/ethernet/intel/ice/ice_devlink.c:421:1: warning: no previous prototype for function 'ice_update_tx_topo_user_sel' [-Wmissing-prototypes]
ice_update_tx_topo_user_sel(struct ice_pf *pf, bool txbalance_ena)
^
drivers/net/ethernet/intel/ice/ice_devlink.c:420:1: note: declare 'static' if the function is not intended to be used outside of this translation unit
int
^
static
2 warnings generated.
vim +/ice_get_tx_topo_user_sel +389 drivers/net/ethernet/intel/ice/ice_devlink.c
379
380 /**
381 * ice_get_tx_topo_user_sel - Read user's choice from flash
382 * @pf: pointer to pf structure
383 * @txbalance_ena: value read from flash will be saved here
384 *
385 * Reads user's preference for Tx Scheduler Topology Tree from PFA TLV.
386 *
387 * Returns zero when read was successful, negative values otherwise.
388 */
> 389 int ice_get_tx_topo_user_sel(struct ice_pf *pf, bool *txbalance_ena)
390 {
391 struct ice_aqc_nvm_tx_topo_user_sel usr_sel = {};
392 struct ice_hw *hw = &pf->hw;
393 int status;
394
395 status = ice_acquire_nvm(hw, ICE_RES_READ);
396 if (status)
397 return status;
398
399 status = ice_aq_read_nvm(hw, ICE_AQC_NVM_TX_TOPO_MOD_ID, 0,
400 sizeof(usr_sel), &usr_sel, true, true, NULL);
401 ice_release_nvm(hw);
402
403 *txbalance_ena = usr_sel.data & ICE_AQC_NVM_TX_TOPO_USER_SEL;
404
405 return status;
406 }
407
408 /**
409 * ice_update_tx_topo_user_sel - Save user's preference in flash
410 * @pf: pointer to pf structure
411 * @txbalance_ena: value to be saved in flash
412 *
413 * When txbalance_ena is set to true it means user's preference is to use
414 * five layer Tx Scheduler Topology Tree, when it is set to false then it is
415 * nine layer. This choice should be stored in PFA TLV field and should be
416 * picked up by driver, next time during init.
417 *
418 * Returns zero when save was successful, negative values otherwise.
419 */
420 int
> 421 ice_update_tx_topo_user_sel(struct ice_pf *pf, bool txbalance_ena)
422 {
423 struct ice_aqc_nvm_tx_topo_user_sel usr_sel = {};
424 struct ice_hw *hw = &pf->hw;
425 int status;
426
427 status = ice_acquire_nvm(hw, ICE_RES_WRITE);
428 if (status)
429 return status;
430
431 status = ice_aq_read_nvm(hw, ICE_AQC_NVM_TX_TOPO_MOD_ID, 0,
432 sizeof(usr_sel), &usr_sel, true, true, NULL);
433 if (status)
434 goto exit_release_res;
435
436 if (txbalance_ena)
437 usr_sel.data |= ICE_AQC_NVM_TX_TOPO_USER_SEL;
438 else
439 usr_sel.data &= ~ICE_AQC_NVM_TX_TOPO_USER_SEL;
440
441 status = ice_write_one_nvm_block(pf, ICE_AQC_NVM_TX_TOPO_MOD_ID, 2,
442 sizeof(usr_sel.data), &usr_sel.data,
443 true, NULL, NULL);
444
445 exit_release_res:
446 ice_release_nvm(hw);
447
448 return status;
449 }
450
--
0-DAY CI Kernel Test Service
https://01.org/lkp
* Re: [Intel-wired-lan] [PATCH net-next v6 0/4] ice: Support 5 layer tx scheduler topology
2022-07-20 23:17 ` [Intel-wired-lan] [PATCH net-next v6 0/4] ice: Support 5 layer tx scheduler topology Tony Nguyen
@ 2022-07-21 12:03 ` Wilczynski, Michal
0 siblings, 0 replies; 12+ messages in thread
From: Wilczynski, Michal @ 2022-07-21 12:03 UTC (permalink / raw)
To: Tony Nguyen, intel-wired-lan
On 7/21/2022 1:17 AM, Tony Nguyen wrote:
>
>
> On 7/20/2022 7:40 AM, Michal Wilczynski wrote:
>> For performance reasons there is a need to have support for selectable
>> tx scheduler topology. Currently firmware supports only the default
>> 9-layer and 5-layer topology. This patch series enables switch from
>> default to 5-layer topology, if user decides to opt-in.
>
> Are you using next-queue/dev-queue[1] to base these patches on? These
> are not applying, however, they do appear to apply to net-next as lkp
> is able to apply them to that tree[2]. Please use net-queue or
> next-queue if you are sending patches for IWL.
>
> [1]
> https://git.kernel.org/pub/scm/linux/kernel/git/tnguy/next-queue.git/log/?h=dev-queue
> [2]
> https://lore.kernel.org/intel-wired-lan/202207210108.7ZpVcgDQ-lkp@intel.com/
Oh, sorry. I am using your repository
https://git.kernel.org/pub/scm/linux/kernel/git/tnguy/next-queue.git,
but I was rebasing against master, not dev-queue. Sorry about that.
* Re: [Intel-wired-lan] [PATCH net-next v6 4/4] ice: Add txbalancing devlink param
2022-07-20 23:17 ` Tony Nguyen
@ 2022-07-21 14:46 ` Wilczynski, Michal
0 siblings, 0 replies; 12+ messages in thread
From: Wilczynski, Michal @ 2022-07-21 14:46 UTC (permalink / raw)
To: Tony Nguyen, intel-wired-lan
On 7/21/2022 1:17 AM, Tony Nguyen wrote:
>
>
> On 7/20/2022 7:40 AM, Michal Wilczynski wrote:
>> From: Lukasz Czapnik <lukasz.czapnik@intel.com>
>>
>> It was observed that Tx performance was inconsistent across all queues
>> and/or VSIs and that it was directly connected to existing 9-layer
>> topology of the Tx scheduler.
>>
>> Introduce new private devlink param - txbalance. This paramerer gives
>> user
>
> s/paramerer/parameter
>
>> flexibility to choose the 5-layer transmit scheduler topology which
>> helps
>> to smooth out the transmit performance.
>>
>> Allowed parameter values are true for enabled and false for disabled.
>
> Please document these in Documentation/networking/devlink/ice.rst
>
>> Example usage:
>>
>> Show:
>> devlink dev param show pci/0000:4b:00.0 name txbalancing
>> pci/0000:4b:00.0:
>> name txbalancing type driver-specific
>> values:
>> cmode permanent value true
>>
>> Set:
>> devlink dev param set pci/0000:4b:00.0 name txbalancing value true cmode
>> permanent
>>
>> Signed-off-by: Lukasz Czapnik <lukasz.czapnik@intel.com>
>> Signed-off-by: Michal Wilczynski <michal.wilczynski@intel.com>
>
> <snip>
>
>> +/**
>> + * ice_devlink_txbalance_get - Get txbalance parameter
>> + * @devlink: pointer to the devlink instance
>> + * @id: the parameter ID to set
>> + * @ctx: context to store the parameter value
>> + *
>> + * Returns zero on success and negative value on failure.
>> + */
>> +static int ice_devlink_txbalance_get(struct devlink *devlink, u32 id,
>> + struct devlink_param_gset_ctx *ctx)
>
> nit: Can you use GNU style on these
>
> static int
> ice_devlink_txbalance_get(...)
>
>
>> +{
>> + struct ice_pf *pf = devlink_priv(devlink);
>> + struct device *dev = ice_pf_to_dev(pf);
>> + int status;
>> +
>> + status = ice_get_tx_topo_user_sel(pf, &ctx->val.vbool);
>> + if (status) {
>> + dev_warn(dev, "Failed to read Tx Scheduler Tree - User
>> Selection data from flash\n");
>> + return -EIO;
>> + }
>> +
>> + return 0;
>> +}
>> +
>
> As well as the lkp reported issues[1]
>
> [1]
> https://lore.kernel.org/intel-wired-lan/202207210108.7ZpVcgDQ-lkp@intel.com/
Okay, thank you. I fixed the issues and submitted a new version.
BR,
Michał
end of thread, other threads:[~2022-07-21 14:46 UTC | newest]
Thread overview: 12+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2022-07-20 14:40 [Intel-wired-lan] [PATCH net-next v6 0/4] ice: Support 5 layer tx scheduler topology Michal Wilczynski
2022-07-20 14:40 ` [Intel-wired-lan] [PATCH net-next v6 1/4] ice: Support 5 layer topology Michal Wilczynski
2022-07-20 14:40 ` [Intel-wired-lan] [PATCH net-next v6 2/4] ice: Adjust the VSI/Aggregator layers Michal Wilczynski
2022-07-20 14:40 ` [Intel-wired-lan] [PATCH net-next v6 3/4] ice: Enable switching default tx scheduler topology Michal Wilczynski
2022-07-20 14:40 ` [Intel-wired-lan] [PATCH net-next v6 4/4] ice: Add txbalancing devlink param Michal Wilczynski
2022-07-20 17:17 ` kernel test robot
2022-07-20 23:17 ` Tony Nguyen
2022-07-21 14:46 ` Wilczynski, Michal
2022-07-21 1:13 ` kernel test robot
2022-07-21 1:13 ` kernel test robot
2022-07-20 23:17 ` [Intel-wired-lan] [PATCH net-next v6 0/4] ice: Support 5 layer tx scheduler topology Tony Nguyen
2022-07-21 12:03 ` Wilczynski, Michal