* [net-next v3 00/15][pull request] 100GbE Intel Wired LAN Driver Updates 2020-11-13
From: Tony Nguyen @ 2020-11-13 21:44 UTC
  To: davem, kuba; +Cc: Tony Nguyen, netdev, sassmann

This series contains updates to ice driver only.

Bruce changes the allocation of ice_flow_prof_params from stack to heap to
avoid excessive stack usage. He also corrects a misleading comment and
silences a sparse warning that does not indicate a real problem.

Tony renames Flow Director functions to be more generic as their use
is expanded.

Real implements ACL filtering, which expands the network flow
classification rules supported by the ethtool ntuple command. ACL filtering
allows an optional mask to be specified for IP address and port fields, as
in the example below.
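
For example (command borrowed from patch 3 of this series; the interface
name is illustrative):

  ethtool -N eth0 flow-type tcp4 dst-port 8880 m 0x00ff action 10

This adds an ntuple rule that matches TCP/IPv4 traffic on destination
port 8880, subject to the given port mask, and directs matching packets
to queue 10.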

Paul allows HW initialization to continue if PHY abilities cannot be
obtained.

Jeb removes the bypassing of FW link override, and of Option ROM and
netlist information reads, for non-E810 devices, as this functionality is
now available on other devices.

Nick removes the vlan_ena field, as this information can be gathered by
checking num_vlan.

Jake joins format strings to the same line as their ice_debug calls.

Simon adds a space to fix string concatenation.

v3: Fix email address for DaveM and fix a character in the cover letter.
v2: Expand the commit message for patch 3 to show example usage/commands.
    Reduce the number of defensive checks being done.

For now, we'd like to keep these ice_status error codes in the driver, as
there are cases where specific codes correspond to specific conditions that
must be handled. There is little direct equivalence between them and
standard POSIX error codes, so those distinctions can be lost by converting
them. We are careful to convert everything to POSIX error codes before
passing results to the kernel.
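
As a minimal sketch of that boundary pattern (illustrative only;
ice_do_cfg() is a hypothetical internal helper, while ice_status_to_errno()
is the conversion helper used in patch 3):

  /* Hypothetical sketch: internal helpers keep returning the descriptive
   * ice_status codes; the POSIX conversion happens once, at the boundary
   * where results are returned to the kernel.
   */
  static int ice_example_cfg(struct ice_pf *pf)
  {
  	enum ice_status status;

  	status = ice_do_cfg(&pf->hw);	/* hypothetical internal helper */
  	if (status)
  		ice_debug(&pf->hw, ICE_DBG_INIT, "cfg failed: %d\n", status);

  	/* convert to a POSIX error code only at the kernel boundary */
  	return ice_status_to_errno(status);
  }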

We do understand your feedback on the refactoring pain. We will look into
this in the future.

The following are changes since commit e1d9d7b91302593d1951fcb12feddda6fb58a3c0:
  Merge https://git.kernel.org/pub/scm/linux/kernel/git/netdev/net
and are available in the git repository at:
  git://git.kernel.org/pub/scm/linux/kernel/git/tnguy/next-queue 100GbE

Bruce Allan (3):
  ice: cleanup stack hog
  ice: cleanup misleading comment
  ice: silence static analysis warning

Jacob Keller (1):
  ice: join format strings to same line as ice_debug

Jeb Cramer (2):
  ice: Enable Support for FW Override (E82X)
  ice: Remove gate to OROM init

Nick Nunley (1):
  ice: Remove vlan_ena from vsi structure

Paul M Stillwell Jr (1):
  ice: don't always return an error for Get PHY Abilities AQ command

Real Valiquette (5):
  ice: initialize ACL table
  ice: initialize ACL scenario
  ice: create flow profile
  ice: create ACL entry
  ice: program ACL entry

Simon Perron Caissy (1):
  ice: Add space to unknown speed

Tony Nguyen (1):
  ice: rename shared Flow Director functions

 drivers/net/ethernet/intel/ice/Makefile       |    3 +
 drivers/net/ethernet/intel/ice/ice.h          |   26 +-
 drivers/net/ethernet/intel/ice/ice_acl.c      |  482 +++++++
 drivers/net/ethernet/intel/ice/ice_acl.h      |  188 +++
 drivers/net/ethernet/intel/ice/ice_acl_ctrl.c | 1145 +++++++++++++++
 drivers/net/ethernet/intel/ice/ice_acl_main.c |  328 +++++
 .../net/ethernet/intel/ice/ice_adminq_cmd.h   |  400 +++++-
 drivers/net/ethernet/intel/ice/ice_common.c   |  108 +-
 drivers/net/ethernet/intel/ice/ice_controlq.c |   42 +-
 drivers/net/ethernet/intel/ice/ice_ethtool.c  |    8 +-
 .../net/ethernet/intel/ice/ice_ethtool_fdir.c |  448 ++++--
 drivers/net/ethernet/intel/ice/ice_fdir.c     |   15 +-
 drivers/net/ethernet/intel/ice/ice_fdir.h     |    5 +-
 .../net/ethernet/intel/ice/ice_flex_pipe.c    |   40 +-
 .../net/ethernet/intel/ice/ice_flex_pipe.h    |    7 +
 drivers/net/ethernet/intel/ice/ice_flow.c     | 1253 ++++++++++++++++-
 drivers/net/ethernet/intel/ice/ice_flow.h     |   38 +-
 .../net/ethernet/intel/ice/ice_lan_tx_rx.h    |    3 +
 drivers/net/ethernet/intel/ice/ice_lib.c      |   10 +-
 drivers/net/ethernet/intel/ice/ice_main.c     |   70 +-
 drivers/net/ethernet/intel/ice/ice_nvm.c      |   61 +-
 drivers/net/ethernet/intel/ice/ice_sched.c    |   21 +-
 drivers/net/ethernet/intel/ice/ice_switch.c   |   15 +-
 drivers/net/ethernet/intel/ice/ice_type.h     |    5 +
 24 files changed, 4355 insertions(+), 366 deletions(-)
 create mode 100644 drivers/net/ethernet/intel/ice/ice_acl.c
 create mode 100644 drivers/net/ethernet/intel/ice/ice_acl.h
 create mode 100644 drivers/net/ethernet/intel/ice/ice_acl_ctrl.c
 create mode 100644 drivers/net/ethernet/intel/ice/ice_acl_main.c

-- 
2.26.2



* [net-next v3 01/15] ice: cleanup stack hog
From: Tony Nguyen @ 2020-11-13 21:44 UTC
  To: davem, kuba
  Cc: Bruce Allan, netdev, sassmann, anthony.l.nguyen, Harikumar Bokkena

From: Bruce Allan <bruce.w.allan@intel.com>

In ice_flow_add_prof_sync(), struct ice_flow_prof_params has recently
grown in size, hogging stack space when allocated there. Hogging stack
space should be avoided; change the allocation to be on the heap when
needed.

Signed-off-by: Bruce Allan <bruce.w.allan@intel.com>
Tested-by: Harikumar Bokkena <harikumarx.bokkena@intel.com>
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
---
 drivers/net/ethernet/intel/ice/ice_flow.c | 44 +++++++++++++----------
 1 file changed, 26 insertions(+), 18 deletions(-)

diff --git a/drivers/net/ethernet/intel/ice/ice_flow.c b/drivers/net/ethernet/intel/ice/ice_flow.c
index eadc85aee389..2a92071bd7d1 100644
--- a/drivers/net/ethernet/intel/ice/ice_flow.c
+++ b/drivers/net/ethernet/intel/ice/ice_flow.c
@@ -708,37 +708,42 @@ ice_flow_add_prof_sync(struct ice_hw *hw, enum ice_block blk,
 		       struct ice_flow_seg_info *segs, u8 segs_cnt,
 		       struct ice_flow_prof **prof)
 {
-	struct ice_flow_prof_params params;
+	struct ice_flow_prof_params *params;
 	enum ice_status status;
 	u8 i;
 
 	if (!prof)
 		return ICE_ERR_BAD_PTR;
 
-	memset(&params, 0, sizeof(params));
-	params.prof = devm_kzalloc(ice_hw_to_dev(hw), sizeof(*params.prof),
-				   GFP_KERNEL);
-	if (!params.prof)
+	params = kzalloc(sizeof(*params), GFP_KERNEL);
+	if (!params)
 		return ICE_ERR_NO_MEMORY;
 
+	params->prof = devm_kzalloc(ice_hw_to_dev(hw), sizeof(*params->prof),
+				    GFP_KERNEL);
+	if (!params->prof) {
+		status = ICE_ERR_NO_MEMORY;
+		goto free_params;
+	}
+
 	/* initialize extraction sequence to all invalid (0xff) */
 	for (i = 0; i < ICE_MAX_FV_WORDS; i++) {
-		params.es[i].prot_id = ICE_PROT_INVALID;
-		params.es[i].off = ICE_FV_OFFSET_INVAL;
+		params->es[i].prot_id = ICE_PROT_INVALID;
+		params->es[i].off = ICE_FV_OFFSET_INVAL;
 	}
 
-	params.blk = blk;
-	params.prof->id = prof_id;
-	params.prof->dir = dir;
-	params.prof->segs_cnt = segs_cnt;
+	params->blk = blk;
+	params->prof->id = prof_id;
+	params->prof->dir = dir;
+	params->prof->segs_cnt = segs_cnt;
 
 	/* Make a copy of the segments that need to be persistent in the flow
 	 * profile instance
 	 */
 	for (i = 0; i < segs_cnt; i++)
-		memcpy(&params.prof->segs[i], &segs[i], sizeof(*segs));
+		memcpy(&params->prof->segs[i], &segs[i], sizeof(*segs));
 
-	status = ice_flow_proc_segs(hw, &params);
+	status = ice_flow_proc_segs(hw, params);
 	if (status) {
 		ice_debug(hw, ICE_DBG_FLOW,
 			  "Error processing a flow's packet segments\n");
@@ -746,19 +751,22 @@ ice_flow_add_prof_sync(struct ice_hw *hw, enum ice_block blk,
 	}
 
 	/* Add a HW profile for this flow profile */
-	status = ice_add_prof(hw, blk, prof_id, (u8 *)params.ptypes, params.es);
+	status = ice_add_prof(hw, blk, prof_id, (u8 *)params->ptypes,
+			      params->es);
 	if (status) {
 		ice_debug(hw, ICE_DBG_FLOW, "Error adding a HW flow profile\n");
 		goto out;
 	}
 
-	INIT_LIST_HEAD(&params.prof->entries);
-	mutex_init(&params.prof->entries_lock);
-	*prof = params.prof;
+	INIT_LIST_HEAD(&params->prof->entries);
+	mutex_init(&params->prof->entries_lock);
+	*prof = params->prof;
 
 out:
 	if (status)
-		devm_kfree(ice_hw_to_dev(hw), params.prof);
+		devm_kfree(ice_hw_to_dev(hw), params->prof);
+free_params:
+	kfree(params);
 
 	return status;
 }
-- 
2.26.2



* [net-next v3 02/15] ice: rename shared Flow Director functions
From: Tony Nguyen @ 2020-11-13 21:44 UTC
  To: davem, kuba
  Cc: Tony Nguyen, netdev, sassmann, Paul M Stillwell Jr, Harikumar Bokkena

These functions are currently used to add Flow Director filters; however,
they will be expanded to also add ACL filters. Rename the functions,
replacing 'fdir' with 'ntuple', to reflect that they are used for ntuple
filters and are not solely for Flow Director.

Co-developed-by: Paul M Stillwell Jr <paul.m.stillwell.jr@intel.com>
Signed-off-by: Paul M Stillwell Jr <paul.m.stillwell.jr@intel.com>
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
Tested-by: Harikumar Bokkena <harikumarx.bokkena@intel.com>
---
 drivers/net/ethernet/intel/ice/ice.h          |  4 +--
 drivers/net/ethernet/intel/ice/ice_ethtool.c  |  4 +--
 .../net/ethernet/intel/ice/ice_ethtool_fdir.c | 30 +++++++++----------
 3 files changed, 19 insertions(+), 19 deletions(-)

diff --git a/drivers/net/ethernet/intel/ice/ice.h b/drivers/net/ethernet/intel/ice/ice.h
index a0723831c4e4..59d3862bb7d8 100644
--- a/drivers/net/ethernet/intel/ice/ice.h
+++ b/drivers/net/ethernet/intel/ice/ice.h
@@ -592,8 +592,8 @@ int
 ice_fdir_write_fltr(struct ice_pf *pf, struct ice_fdir_fltr *input, bool add,
 		    bool is_tun);
 void ice_vsi_manage_fdir(struct ice_vsi *vsi, bool ena);
-int ice_add_fdir_ethtool(struct ice_vsi *vsi, struct ethtool_rxnfc *cmd);
-int ice_del_fdir_ethtool(struct ice_vsi *vsi, struct ethtool_rxnfc *cmd);
+int ice_add_ntuple_ethtool(struct ice_vsi *vsi, struct ethtool_rxnfc *cmd);
+int ice_del_ntuple_ethtool(struct ice_vsi *vsi, struct ethtool_rxnfc *cmd);
 int ice_get_ethtool_fdir_entry(struct ice_hw *hw, struct ethtool_rxnfc *cmd);
 int
 ice_get_fdir_fltr_ids(struct ice_hw *hw, struct ethtool_rxnfc *cmd,
diff --git a/drivers/net/ethernet/intel/ice/ice_ethtool.c b/drivers/net/ethernet/intel/ice/ice_ethtool.c
index 9e8e9531cd87..363377fe90ee 100644
--- a/drivers/net/ethernet/intel/ice/ice_ethtool.c
+++ b/drivers/net/ethernet/intel/ice/ice_ethtool.c
@@ -2652,9 +2652,9 @@ static int ice_set_rxnfc(struct net_device *netdev, struct ethtool_rxnfc *cmd)
 
 	switch (cmd->cmd) {
 	case ETHTOOL_SRXCLSRLINS:
-		return ice_add_fdir_ethtool(vsi, cmd);
+		return ice_add_ntuple_ethtool(vsi, cmd);
 	case ETHTOOL_SRXCLSRLDEL:
-		return ice_del_fdir_ethtool(vsi, cmd);
+		return ice_del_ntuple_ethtool(vsi, cmd);
 	case ETHTOOL_SRXFH:
 		return ice_set_rss_hash_opt(vsi, cmd);
 	default:
diff --git a/drivers/net/ethernet/intel/ice/ice_ethtool_fdir.c b/drivers/net/ethernet/intel/ice/ice_ethtool_fdir.c
index 2d27f66ac853..f3d2199a2b42 100644
--- a/drivers/net/ethernet/intel/ice/ice_ethtool_fdir.c
+++ b/drivers/net/ethernet/intel/ice/ice_ethtool_fdir.c
@@ -1388,7 +1388,7 @@ ice_fdir_do_rem_flow(struct ice_pf *pf, enum ice_fltr_ptype flow_type)
 }
 
 /**
- * ice_fdir_update_list_entry - add or delete a filter from the filter list
+ * ice_ntuple_update_list_entry - add or delete a filter from the filter list
  * @pf: PF structure
  * @input: filter structure
  * @fltr_idx: ethtool index of filter to modify
@@ -1396,8 +1396,8 @@ ice_fdir_do_rem_flow(struct ice_pf *pf, enum ice_fltr_ptype flow_type)
  * returns 0 on success and negative on errors
  */
 static int
-ice_fdir_update_list_entry(struct ice_pf *pf, struct ice_fdir_fltr *input,
-			   int fltr_idx)
+ice_ntuple_update_list_entry(struct ice_pf *pf, struct ice_fdir_fltr *input,
+			     int fltr_idx)
 {
 	struct ice_fdir_fltr *old_fltr;
 	struct ice_hw *hw = &pf->hw;
@@ -1429,13 +1429,13 @@ ice_fdir_update_list_entry(struct ice_pf *pf, struct ice_fdir_fltr *input,
 }
 
 /**
- * ice_del_fdir_ethtool - delete Flow Director filter
+ * ice_del_ntuple_ethtool - delete Flow Director or ACL filter
  * @vsi: pointer to target VSI
- * @cmd: command to add or delete Flow Director filter
+ * @cmd: command to add or delete the filter
  *
  * Returns 0 on success and negative values for failure
  */
-int ice_del_fdir_ethtool(struct ice_vsi *vsi, struct ethtool_rxnfc *cmd)
+int ice_del_ntuple_ethtool(struct ice_vsi *vsi, struct ethtool_rxnfc *cmd)
 {
 	struct ethtool_rx_flow_spec *fsp =
 		(struct ethtool_rx_flow_spec *)&cmd->fs;
@@ -1456,21 +1456,21 @@ int ice_del_fdir_ethtool(struct ice_vsi *vsi, struct ethtool_rxnfc *cmd)
 		return -EBUSY;
 
 	mutex_lock(&hw->fdir_fltr_lock);
-	val = ice_fdir_update_list_entry(pf, NULL, fsp->location);
+	val = ice_ntuple_update_list_entry(pf, NULL, fsp->location);
 	mutex_unlock(&hw->fdir_fltr_lock);
 
 	return val;
 }
 
 /**
- * ice_set_fdir_input_set - Set the input set for Flow Director
+ * ice_ntuple_set_input_set - Set the input set for Flow Director
  * @vsi: pointer to target VSI
  * @fsp: pointer to ethtool Rx flow specification
  * @input: filter structure
  */
 static int
-ice_set_fdir_input_set(struct ice_vsi *vsi, struct ethtool_rx_flow_spec *fsp,
-		       struct ice_fdir_fltr *input)
+ice_ntuple_set_input_set(struct ice_vsi *vsi, struct ethtool_rx_flow_spec *fsp,
+			 struct ice_fdir_fltr *input)
 {
 	u16 dest_vsi, q_index = 0;
 	struct ice_pf *pf;
@@ -1594,13 +1594,13 @@ ice_set_fdir_input_set(struct ice_vsi *vsi, struct ethtool_rx_flow_spec *fsp,
 }
 
 /**
- * ice_add_fdir_ethtool - Add/Remove Flow Director filter
+ * ice_add_ntuple_ethtool - Add/Remove Flow Director or ACL filter
  * @vsi: pointer to target VSI
- * @cmd: command to add or delete Flow Director filter
+ * @cmd: command to add or delete the filter
  *
  * Returns 0 on success and negative values for failure
  */
-int ice_add_fdir_ethtool(struct ice_vsi *vsi, struct ethtool_rxnfc *cmd)
+int ice_add_ntuple_ethtool(struct ice_vsi *vsi, struct ethtool_rxnfc *cmd)
 {
 	struct ice_rx_flow_userdef userdata;
 	struct ethtool_rx_flow_spec *fsp;
@@ -1657,7 +1657,7 @@ int ice_add_fdir_ethtool(struct ice_vsi *vsi, struct ethtool_rxnfc *cmd)
 	if (!input)
 		return -ENOMEM;
 
-	ret = ice_set_fdir_input_set(vsi, fsp, input);
+	ret = ice_ntuple_set_input_set(vsi, fsp, input);
 	if (ret)
 		goto free_input;
 
@@ -1674,7 +1674,7 @@ int ice_add_fdir_ethtool(struct ice_vsi *vsi, struct ethtool_rxnfc *cmd)
 	}
 
 	/* input struct is added to the HW filter list */
-	ice_fdir_update_list_entry(pf, input, fsp->location);
+	ice_ntuple_update_list_entry(pf, input, fsp->location);
 
 	ret = ice_fdir_write_all_fltr(pf, input, true);
 	if (ret)
-- 
2.26.2



* [net-next v3 03/15] ice: initialize ACL table
From: Tony Nguyen @ 2020-11-13 21:44 UTC
  To: davem, kuba
  Cc: Real Valiquette, netdev, sassmann, anthony.l.nguyen, Chinh Cao,
	Harikumar Bokkena

From: Real Valiquette <real.valiquette@intel.com>

ACL filtering can be utilized to expand support for ntuple rules by
allowing mask values to be specified for redirect-to-queue or drop actions.

Implement support for specifying the 'm' value of the ethtool ntuple
command for the currently supported fields (src-ip, dst-ip, src-port, and
dst-port).

For example:

ethtool -N eth0 flow-type tcp4 dst-port 8880 m 0x00ff action 10
or
ethtool -N eth0 flow-type tcp4 src-ip 192.168.0.55 m 0.0.0.255 action -1

At this time the following flow-types support mask values: tcp4, udp4,
sctp4, and ip4.

Begin implementation of ACL filters by setting up structures, AdminQ
commands, and allocation of the ACL table in the hardware.

Co-developed-by: Chinh Cao <chinh.t.cao@intel.com>
Signed-off-by: Chinh Cao <chinh.t.cao@intel.com>
Signed-off-by: Real Valiquette <real.valiquette@intel.com>
Co-developed-by: Tony Nguyen <anthony.l.nguyen@intel.com>
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
Tested-by: Harikumar Bokkena <harikumarx.bokkena@intel.com>
---
 drivers/net/ethernet/intel/ice/Makefile       |   2 +
 drivers/net/ethernet/intel/ice/ice.h          |   4 +
 drivers/net/ethernet/intel/ice/ice_acl.c      | 153 +++++++++
 drivers/net/ethernet/intel/ice/ice_acl.h      | 125 +++++++
 drivers/net/ethernet/intel/ice/ice_acl_ctrl.c | 311 ++++++++++++++++++
 .../net/ethernet/intel/ice/ice_adminq_cmd.h   | 215 +++++++++++-
 drivers/net/ethernet/intel/ice/ice_flow.h     |   2 +
 drivers/net/ethernet/intel/ice/ice_main.c     |  50 +++
 drivers/net/ethernet/intel/ice/ice_type.h     |   3 +
 9 files changed, 863 insertions(+), 2 deletions(-)
 create mode 100644 drivers/net/ethernet/intel/ice/ice_acl.c
 create mode 100644 drivers/net/ethernet/intel/ice/ice_acl.h
 create mode 100644 drivers/net/ethernet/intel/ice/ice_acl_ctrl.c

diff --git a/drivers/net/ethernet/intel/ice/Makefile b/drivers/net/ethernet/intel/ice/Makefile
index 6da4f43f2348..0747976622cf 100644
--- a/drivers/net/ethernet/intel/ice/Makefile
+++ b/drivers/net/ethernet/intel/ice/Makefile
@@ -20,6 +20,8 @@ ice-y := ice_main.o	\
 	 ice_fltr.o	\
 	 ice_fdir.o	\
 	 ice_ethtool_fdir.o \
+	 ice_acl.o	\
+	 ice_acl_ctrl.o	\
 	 ice_flex_pipe.o \
 	 ice_flow.o	\
 	 ice_devlink.o	\
diff --git a/drivers/net/ethernet/intel/ice/ice.h b/drivers/net/ethernet/intel/ice/ice.h
index 59d3862bb7d8..0ff1d71a1d88 100644
--- a/drivers/net/ethernet/intel/ice/ice.h
+++ b/drivers/net/ethernet/intel/ice/ice.h
@@ -49,6 +49,7 @@
 #include "ice_dcb.h"
 #include "ice_switch.h"
 #include "ice_common.h"
+#include "ice_flow.h"
 #include "ice_sched.h"
 #include "ice_virtchnl_pf.h"
 #include "ice_sriov.h"
@@ -100,6 +101,9 @@
 #define ICE_TX_CTX_DESC(R, i) (&(((struct ice_tx_ctx_desc *)((R)->desc))[i]))
 #define ICE_TX_FDIRDESC(R, i) (&(((struct ice_fltr_desc *)((R)->desc))[i]))
 
+#define ICE_ACL_ENTIRE_SLICE	1
+#define ICE_ACL_HALF_SLICE	2
+
 /* Macro for each VSI in a PF */
 #define ice_for_each_vsi(pf, i) \
 	for ((i) = 0; (i) < (pf)->num_alloc_vsi; (i)++)
diff --git a/drivers/net/ethernet/intel/ice/ice_acl.c b/drivers/net/ethernet/intel/ice/ice_acl.c
new file mode 100644
index 000000000000..30e2dca5d86b
--- /dev/null
+++ b/drivers/net/ethernet/intel/ice/ice_acl.c
@@ -0,0 +1,153 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright (C) 2018-2020, Intel Corporation. */
+
+#include "ice_acl.h"
+
+/**
+ * ice_aq_alloc_acl_tbl - allocate ACL table
+ * @hw: pointer to the HW struct
+ * @tbl: pointer to ice_acl_alloc_tbl struct
+ * @cd: pointer to command details structure or NULL
+ *
+ * Allocate ACL table (indirect 0x0C10)
+ */
+enum ice_status
+ice_aq_alloc_acl_tbl(struct ice_hw *hw, struct ice_acl_alloc_tbl *tbl,
+		     struct ice_sq_cd *cd)
+{
+	struct ice_aqc_acl_alloc_table *cmd;
+	struct ice_aq_desc desc;
+
+	if (!tbl->act_pairs_per_entry)
+		return ICE_ERR_PARAM;
+
+	if (tbl->act_pairs_per_entry > ICE_AQC_MAX_ACTION_MEMORIES)
+		return ICE_ERR_MAX_LIMIT;
+
+	/* If this is a concurrent table, the buffer shall be valid and
+	 * contain DependentAllocIDs, and 'num_dependent_alloc_ids' must be
+	 * valid and within limit
+	 */
+	if (tbl->concurr) {
+		if (!tbl->num_dependent_alloc_ids)
+			return ICE_ERR_PARAM;
+		if (tbl->num_dependent_alloc_ids >
+		    ICE_AQC_MAX_CONCURRENT_ACL_TBL)
+			return ICE_ERR_INVAL_SIZE;
+	}
+
+	ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_alloc_acl_tbl);
+	desc.flags |= cpu_to_le16(ICE_AQ_FLAG_RD);
+
+	cmd = &desc.params.alloc_table;
+	cmd->table_width = cpu_to_le16(tbl->width * BITS_PER_BYTE);
+	cmd->table_depth = cpu_to_le16(tbl->depth);
+	cmd->act_pairs_per_entry = tbl->act_pairs_per_entry;
+	if (tbl->concurr)
+		cmd->table_type = tbl->num_dependent_alloc_ids;
+
+	return ice_aq_send_cmd(hw, &desc, &tbl->buf, sizeof(tbl->buf), cd);
+}
+
+/**
+ * ice_aq_dealloc_acl_tbl - deallocate ACL table
+ * @hw: pointer to the HW struct
+ * @alloc_id: allocation ID of the table being released
+ * @buf: address of indirect data buffer
+ * @cd: pointer to command details structure or NULL
+ *
+ * Deallocate ACL table (indirect 0x0C11)
+ *
+ * NOTE: This command has no buffer format for the command itself, but the
+ * response format is 'struct ice_aqc_acl_generic'; pass a pointer to that
+ * struct as 'buf' and its size as 'buf_size'
+ */
+enum ice_status
+ice_aq_dealloc_acl_tbl(struct ice_hw *hw, u16 alloc_id,
+		       struct ice_aqc_acl_generic *buf, struct ice_sq_cd *cd)
+{
+	struct ice_aqc_acl_tbl_actpair *cmd;
+	struct ice_aq_desc desc;
+
+	ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_dealloc_acl_tbl);
+	cmd = &desc.params.tbl_actpair;
+	cmd->alloc_id = cpu_to_le16(alloc_id);
+
+	return ice_aq_send_cmd(hw, &desc, buf, sizeof(*buf), cd);
+}
+
+static enum ice_status
+ice_aq_acl_entry(struct ice_hw *hw, u16 opcode, u8 tcam_idx, u16 entry_idx,
+		 struct ice_aqc_acl_data *buf, struct ice_sq_cd *cd)
+{
+	struct ice_aqc_acl_entry *cmd;
+	struct ice_aq_desc desc;
+
+	ice_fill_dflt_direct_cmd_desc(&desc, opcode);
+
+	if (opcode == ice_aqc_opc_program_acl_entry)
+		desc.flags |= cpu_to_le16(ICE_AQ_FLAG_RD);
+
+	cmd = &desc.params.program_query_entry;
+	cmd->tcam_index = tcam_idx;
+	cmd->entry_index = cpu_to_le16(entry_idx);
+
+	return ice_aq_send_cmd(hw, &desc, buf, sizeof(*buf), cd);
+}
+
+/**
+ * ice_aq_program_acl_entry - program ACL entry
+ * @hw: pointer to the HW struct
+ * @tcam_idx: Updated TCAM block index
+ * @entry_idx: updated entry index
+ * @buf: address of indirect data buffer
+ * @cd: pointer to command details structure or NULL
+ *
+ * Program ACL entry (direct 0x0C20)
+ */
+enum ice_status
+ice_aq_program_acl_entry(struct ice_hw *hw, u8 tcam_idx, u16 entry_idx,
+			 struct ice_aqc_acl_data *buf, struct ice_sq_cd *cd)
+{
+	return ice_aq_acl_entry(hw, ice_aqc_opc_program_acl_entry, tcam_idx,
+				entry_idx, buf, cd);
+}
+
+/* Helper function to program/query ACL action pair */
+static enum ice_status
+ice_aq_actpair_p_q(struct ice_hw *hw, u16 opcode, u8 act_mem_idx,
+		   u16 act_entry_idx, struct ice_aqc_actpair *buf,
+		   struct ice_sq_cd *cd)
+{
+	struct ice_aqc_acl_actpair *cmd;
+	struct ice_aq_desc desc;
+
+	ice_fill_dflt_direct_cmd_desc(&desc, opcode);
+
+	if (opcode == ice_aqc_opc_program_acl_actpair)
+		desc.flags |= cpu_to_le16(ICE_AQ_FLAG_RD);
+
+	cmd = &desc.params.program_query_actpair;
+	cmd->act_mem_index = act_mem_idx;
+	cmd->act_entry_index = cpu_to_le16(act_entry_idx);
+
+	return ice_aq_send_cmd(hw, &desc, buf, sizeof(*buf), cd);
+}
+
+/**
+ * ice_aq_program_actpair - program ACL actionpair
+ * @hw: pointer to the HW struct
+ * @act_mem_idx: action memory index to program/update/query
+ * @act_entry_idx: the entry index in action memory to be programmed/updated
+ * @buf: address of indirect data buffer
+ * @cd: pointer to command details structure or NULL
+ *
+ * Program action entries (indirect 0x0C1C)
+ */
+enum ice_status
+ice_aq_program_actpair(struct ice_hw *hw, u8 act_mem_idx, u16 act_entry_idx,
+		       struct ice_aqc_actpair *buf, struct ice_sq_cd *cd)
+{
+	return ice_aq_actpair_p_q(hw, ice_aqc_opc_program_acl_actpair,
+				  act_mem_idx, act_entry_idx, buf, cd);
+}
diff --git a/drivers/net/ethernet/intel/ice/ice_acl.h b/drivers/net/ethernet/intel/ice/ice_acl.h
new file mode 100644
index 000000000000..5d39ef59ed5a
--- /dev/null
+++ b/drivers/net/ethernet/intel/ice/ice_acl.h
@@ -0,0 +1,125 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* Copyright (C) 2018-2020, Intel Corporation. */
+
+#ifndef _ICE_ACL_H_
+#define _ICE_ACL_H_
+
+#include "ice_common.h"
+
+struct ice_acl_tbl_params {
+	u16 width;	/* Select/match bytes */
+	u16 depth;	/* Number of entries */
+
+#define ICE_ACL_TBL_MAX_DEP_TBLS	15
+	u16 dep_tbls[ICE_ACL_TBL_MAX_DEP_TBLS];
+
+	u8 entry_act_pairs;	/* Action pairs per entry */
+	u8 concurr;		/* Concurrent table lookup enable */
+};
+
+struct ice_acl_act_mem {
+	u8 act_mem;
+#define ICE_ACL_ACT_PAIR_MEM_INVAL	0xff
+	u8 member_of_tcam;
+};
+
+struct ice_acl_tbl {
+	/* TCAM configuration */
+	u8 first_tcam;	/* Index of the first TCAM block */
+	u8 last_tcam;	/* Index of the last TCAM block */
+	/* Index of the first entry in the first TCAM */
+	u16 first_entry;
+	/* Index of the last entry in the last TCAM */
+	u16 last_entry;
+
+	/* List of active scenarios */
+	struct list_head scens;
+
+	struct ice_acl_tbl_params info;
+	struct ice_acl_act_mem act_mems[ICE_AQC_MAX_ACTION_MEMORIES];
+
+	/* Keep track of available 64-entry chunks in TCAMs */
+	DECLARE_BITMAP(avail, ICE_AQC_ACL_ALLOC_UNITS);
+
+	u16 id;
+};
+
+enum ice_acl_entry_prio {
+	ICE_ACL_PRIO_LOW = 0,
+	ICE_ACL_PRIO_NORMAL,
+	ICE_ACL_PRIO_HIGH,
+	ICE_ACL_MAX_PRIO
+};
+
+/* Scenario structure
+ * A scenario is a logical partition within an ACL table. It can span more
+ * than one TCAM in cascade mode to support select/mask key widths larger
+ * than the width of a TCAM. It can also span more than one TCAM in stacked
+ * mode to support larger number of entries than what a TCAM can hold. It is
+ * used to select values from selection bases (field vectors holding extract
+ * protocol header fields) to form lookup keys, and to associate action memory
+ * banks to the TCAMs used.
+ */
+struct ice_acl_scen {
+	struct list_head list_entry;
+	/* If nth bit of act_mem_bitmap is set, then nth action memory will
+	 * participate in this scenario
+	 */
+	DECLARE_BITMAP(act_mem_bitmap, ICE_AQC_MAX_ACTION_MEMORIES);
+	u16 first_idx[ICE_ACL_MAX_PRIO];
+	u16 last_idx[ICE_ACL_MAX_PRIO];
+
+	u16 id;
+	u16 start;	/* Entry offset from the start of the parent table */
+#define ICE_ACL_SCEN_MIN_WIDTH	0x3
+	u16 width;	/* Number of select/mask bytes */
+	u16 num_entry;	/* Number of scenario entries */
+	u16 end;	/* Last addressable entry from start of table */
+	u8 eff_width;	/* Available width in bytes to match */
+#define ICE_ACL_SCEN_PKT_DIR_IDX_IN_TCAM	0x2
+#define ICE_ACL_SCEN_PID_IDX_IN_TCAM		0x3
+#define ICE_ACL_SCEN_RNG_CHK_IDX_IN_TCAM	0x4
+	u8 pid_idx;	/* Byte index used to match profile ID */
+	u8 rng_chk_idx;	/* Byte index used to match range checkers result */
+	u8 pkt_dir_idx;	/* Byte index used to match packet direction */
+};
+
+/* This structure represents input fields needed to allocate ACL table */
+struct ice_acl_alloc_tbl {
+	/* Table's width in number of bytes matched */
+	u16 width;
+	/* Table's depth in number of entries. */
+	u16 depth;
+	u8 num_dependent_alloc_ids;	/* number of dependent alloc IDs */
+	u8 concurr;			/* true for concurrent table type */
+
+	/* Number of action pairs per table entry. The minimum valid
+	 * value for this field is 1 (i.e. a single pair of actions)
+	 */
+	u8 act_pairs_per_entry;
+	union {
+		struct ice_aqc_acl_alloc_table_data data_buf;
+		struct ice_aqc_acl_generic resp_buf;
+	} buf;
+};
+
+enum ice_status
+ice_acl_create_tbl(struct ice_hw *hw, struct ice_acl_tbl_params *params);
+enum ice_status ice_acl_destroy_tbl(struct ice_hw *hw);
+enum ice_status
+ice_aq_alloc_acl_tbl(struct ice_hw *hw, struct ice_acl_alloc_tbl *tbl,
+		     struct ice_sq_cd *cd);
+enum ice_status
+ice_aq_dealloc_acl_tbl(struct ice_hw *hw, u16 alloc_id,
+		       struct ice_aqc_acl_generic *buf, struct ice_sq_cd *cd);
+enum ice_status
+ice_aq_program_acl_entry(struct ice_hw *hw, u8 tcam_idx, u16 entry_idx,
+			 struct ice_aqc_acl_data *buf, struct ice_sq_cd *cd);
+enum ice_status
+ice_aq_program_actpair(struct ice_hw *hw, u8 act_mem_idx, u16 act_entry_idx,
+		       struct ice_aqc_actpair *buf, struct ice_sq_cd *cd);
+enum ice_status
+ice_aq_alloc_acl_scen(struct ice_hw *hw, u16 *scen_id,
+		      struct ice_aqc_acl_scen *buf, struct ice_sq_cd *cd);
+
+#endif /* _ICE_ACL_H_ */
diff --git a/drivers/net/ethernet/intel/ice/ice_acl_ctrl.c b/drivers/net/ethernet/intel/ice/ice_acl_ctrl.c
new file mode 100644
index 000000000000..f8f9aff91c60
--- /dev/null
+++ b/drivers/net/ethernet/intel/ice/ice_acl_ctrl.c
@@ -0,0 +1,311 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright (C) 2018-2020, Intel Corporation. */
+
+#include "ice_acl.h"
+
+/* Determine the TCAM index of entry 'e' within the ACL table */
+#define ICE_ACL_TBL_TCAM_IDX(e) ((e) / ICE_AQC_ACL_TCAM_DEPTH)
+
+/**
+ * ice_acl_init_tbl
+ * @hw: pointer to the hardware structure
+ *
+ * Initialize the ACL table by invalidating TCAM entries and action pairs.
+ */
+static enum ice_status ice_acl_init_tbl(struct ice_hw *hw)
+{
+	struct ice_aqc_actpair act_buf;
+	struct ice_aqc_acl_data buf;
+	enum ice_status status = 0;
+	struct ice_acl_tbl *tbl;
+	u8 tcam_idx, i;
+	u16 idx;
+
+	tbl = hw->acl_tbl;
+	if (!tbl)
+		return ICE_ERR_CFG;
+
+	memset(&buf, 0, sizeof(buf));
+	memset(&act_buf, 0, sizeof(act_buf));
+
+	tcam_idx = tbl->first_tcam;
+	idx = tbl->first_entry;
+	while (tcam_idx < tbl->last_tcam ||
+	       (tcam_idx == tbl->last_tcam && idx <= tbl->last_entry)) {
+		/* Use the same value for entry_key and entry_key_inv since
+		 * we are initializing the fields to 0
+		 */
+		status = ice_aq_program_acl_entry(hw, tcam_idx, idx, &buf,
+						  NULL);
+		if (status)
+			return status;
+
+		if (++idx > tbl->last_entry) {
+			tcam_idx++;
+			idx = tbl->first_entry;
+		}
+	}
+
+	for (i = 0; i < ICE_AQC_MAX_ACTION_MEMORIES; i++) {
+		u16 act_entry_idx, start, end;
+
+		if (tbl->act_mems[i].act_mem == ICE_ACL_ACT_PAIR_MEM_INVAL)
+			continue;
+
+		start = tbl->first_entry;
+		end = tbl->last_entry;
+
+		for (act_entry_idx = start; act_entry_idx <= end;
+		     act_entry_idx++) {
+			/* Invalidate all allocated action pairs */
+			status = ice_aq_program_actpair(hw, i, act_entry_idx,
+							&act_buf, NULL);
+			if (status)
+				return status;
+		}
+	}
+
+	return status;
+}
+
+/**
+ * ice_acl_assign_act_mems_to_tcam
+ * @tbl: pointer to ACL table structure
+ * @cur_tcam: Index of current TCAM. Value = 0 to (ICE_AQC_ACL_SLICES - 1)
+ * @cur_mem_idx: Index of current action memory bank. Value = 0 to
+ *		 (ICE_AQC_MAX_ACTION_MEMORIES - 1)
+ * @num_mem: Number of action memory banks for this TCAM
+ *
+ * Assign "num_mem" valid action memory banks starting at "cur_mem_idx" to
+ * the "cur_tcam" TCAM.
+ */
+static void
+ice_acl_assign_act_mems_to_tcam(struct ice_acl_tbl *tbl, u8 cur_tcam,
+				u8 *cur_mem_idx, u8 num_mem)
+{
+	u8 mem_cnt;
+
+	for (mem_cnt = 0;
+	     *cur_mem_idx < ICE_AQC_MAX_ACTION_MEMORIES && mem_cnt < num_mem;
+	     (*cur_mem_idx)++) {
+		struct ice_acl_act_mem *p_mem = &tbl->act_mems[*cur_mem_idx];
+
+		if (p_mem->act_mem == ICE_ACL_ACT_PAIR_MEM_INVAL)
+			continue;
+
+		p_mem->member_of_tcam = cur_tcam;
+
+		mem_cnt++;
+	}
+}
+
+/**
+ * ice_acl_divide_act_mems_to_tcams
+ * @tbl: pointer to ACL table structure
+ *
+ * Figure out how to divide the given action memory banks among the given
+ * TCAMs. This division is for SW bookkeeping; at the time a scenario is
+ * created, an action memory bank can be assigned to a different TCAM.
+ *
+ * For example, given a 2x2 ACL table where each table entry has 2 action
+ * memory pairs, we will have 4 TCAMs (T1,T2,T3,T4)
+ * and 4 action memory banks (A1,A2,A3,A4)
+ *	[T1 - T2] { A1 - A2 }
+ *	[T3 - T4] { A3 - A4 }
+ * When we need to create a scenario, for example a 2x1 scenario, we will
+ * use [T3,T4] in a cascaded layout, since it is a requirement that all
+ * action memory banks in a cascaded TCAM row be associated with the last
+ * TCAM. Thus, we will associate action memory banks [A3] and [A4] with
+ * TCAM [T4].
+ * For SW bookkeeping purposes, we will keep a theoretical map between
+ * TCAM [Tn] and action memory bank [An].
+ */
+static void ice_acl_divide_act_mems_to_tcams(struct ice_acl_tbl *tbl)
+{
+	u16 num_cscd, stack_level, stack_idx, min_act_mem;
+	u8 tcam_idx = tbl->first_tcam;
+	u16 max_idx_to_get_extra;
+	u8 mem_idx = 0;
+
+	/* Determine number of stacked TCAMs */
+	stack_level = DIV_ROUND_UP(tbl->info.depth, ICE_AQC_ACL_TCAM_DEPTH);
+
+	/* Determine number of cascaded TCAMs */
+	num_cscd = DIV_ROUND_UP(tbl->info.width, ICE_AQC_ACL_KEY_WIDTH_BYTES);
+
+	/* In a line of cascaded TCAM, given the number of action memory
+	 * banks per ACL table entry, we want to fairly divide these action
+	 * memory banks between these TCAMs.
+	 *
+	 * For example, there are 3 TCAMs (TCAM 3,4,5) in a line of
+	 * cascaded TCAM, and there are 7 act_mems for each ACL table entry.
+	 * The result is:
+	 *	[TCAM_3 will have 3 act_mems]
+	 *	[TCAM_4 will have 2 act_mems]
+	 *	[TCAM_5 will have 2 act_mems]
+	 */
+	min_act_mem = tbl->info.entry_act_pairs / num_cscd;
+	max_idx_to_get_extra = tbl->info.entry_act_pairs % num_cscd;
+
+	for (stack_idx = 0; stack_idx < stack_level; stack_idx++) {
+		u16 i;
+
+		for (i = 0; i < num_cscd; i++) {
+			u8 total_act_mem = min_act_mem;
+
+			if (i < max_idx_to_get_extra)
+				total_act_mem++;
+
+			ice_acl_assign_act_mems_to_tcam(tbl, tcam_idx,
+							&mem_idx,
+							total_act_mem);
+
+			tcam_idx++;
+		}
+	}
+}
+
+/**
+ * ice_acl_create_tbl
+ * @hw: pointer to the HW struct
+ * @params: parameters for the table to be created
+ *
+ * Create a LEM table for ACL usage. We are currently starting with some fixed
+ * values for the size of the table, but this will need to grow as more flow
+ * entries are added by the user level.
+ */
+enum ice_status
+ice_acl_create_tbl(struct ice_hw *hw, struct ice_acl_tbl_params *params)
+{
+	u16 width, depth, first_e, last_e, i;
+	struct ice_aqc_acl_generic *resp_buf;
+	struct ice_acl_alloc_tbl tbl_alloc;
+	struct ice_acl_tbl *tbl;
+	enum ice_status status;
+
+	if (hw->acl_tbl)
+		return ICE_ERR_ALREADY_EXISTS;
+
+	if (!params)
+		return ICE_ERR_PARAM;
+
+	/* round up the width to the next TCAM width boundary. */
+	width = roundup(params->width, (u16)ICE_AQC_ACL_KEY_WIDTH_BYTES);
+	/* depth should be provided in chunk (64 entry) increments */
+	depth = ALIGN(params->depth, ICE_ACL_ENTRY_ALLOC_UNIT);
+
+	if (params->entry_act_pairs < width / ICE_AQC_ACL_KEY_WIDTH_BYTES) {
+		params->entry_act_pairs = width / ICE_AQC_ACL_KEY_WIDTH_BYTES;
+
+		if (params->entry_act_pairs > ICE_AQC_TBL_MAX_ACTION_PAIRS)
+			params->entry_act_pairs = ICE_AQC_TBL_MAX_ACTION_PAIRS;
+	}
+
+	/* Validate that width*depth will not exceed the TCAM limit */
+	if ((DIV_ROUND_UP(depth, ICE_AQC_ACL_TCAM_DEPTH) *
+	     (width / ICE_AQC_ACL_KEY_WIDTH_BYTES)) > ICE_AQC_ACL_SLICES)
+		return ICE_ERR_MAX_LIMIT;
+
+	memset(&tbl_alloc, 0, sizeof(tbl_alloc));
+	tbl_alloc.width = width;
+	tbl_alloc.depth = depth;
+	tbl_alloc.act_pairs_per_entry = params->entry_act_pairs;
+	tbl_alloc.concurr = params->concurr;
+	/* Set dependent_alloc_id only for concurrent table type */
+	if (params->concurr) {
+		tbl_alloc.num_dependent_alloc_ids =
+			ICE_AQC_MAX_CONCURRENT_ACL_TBL;
+
+		for (i = 0; i < ICE_AQC_MAX_CONCURRENT_ACL_TBL; i++)
+			tbl_alloc.buf.data_buf.alloc_ids[i] =
+				cpu_to_le16(params->dep_tbls[i]);
+	}
+
+	/* call the AQ command to create the ACL table with these values */
+	status = ice_aq_alloc_acl_tbl(hw, &tbl_alloc, NULL);
+	if (status) {
+		if (le16_to_cpu(tbl_alloc.buf.resp_buf.alloc_id) <
+		    ICE_AQC_ALLOC_ID_LESS_THAN_4K)
+			ice_debug(hw, ICE_DBG_ACL, "Alloc ACL table failed. Unavailable resource.\n");
+		else
+			ice_debug(hw, ICE_DBG_ACL, "AQ allocation of ACL failed with error. status: %d\n",
+				  status);
+		return status;
+	}
+
+	tbl = devm_kzalloc(ice_hw_to_dev(hw), sizeof(*tbl), GFP_KERNEL);
+	if (!tbl)
+		return ICE_ERR_NO_MEMORY;
+
+	resp_buf = &tbl_alloc.buf.resp_buf;
+
+	/* Retrieve information of the allocated table */
+	tbl->id = le16_to_cpu(resp_buf->alloc_id);
+	tbl->first_tcam = resp_buf->ops.table.first_tcam;
+	tbl->last_tcam = resp_buf->ops.table.last_tcam;
+	tbl->first_entry = le16_to_cpu(resp_buf->first_entry);
+	tbl->last_entry = le16_to_cpu(resp_buf->last_entry);
+
+	tbl->info = *params;
+	tbl->info.width = width;
+	tbl->info.depth = depth;
+	hw->acl_tbl = tbl;
+
+	for (i = 0; i < ICE_AQC_MAX_ACTION_MEMORIES; i++)
+		tbl->act_mems[i].act_mem = resp_buf->act_mem[i];
+
+	/* Figure out which TCAMs these newly allocated action memories
+	 * belong to.
+	 */
+	ice_acl_divide_act_mems_to_tcams(tbl);
+
+	/* Initialize the resources allocated by invalidating all TCAM entries
+	 * and all the action pairs
+	 */
+	status = ice_acl_init_tbl(hw);
+	if (status) {
+		devm_kfree(ice_hw_to_dev(hw), tbl);
+		hw->acl_tbl = NULL;
+		ice_debug(hw, ICE_DBG_ACL, "Initialization of TCAM entries failed. status: %d\n",
+			  status);
+		return status;
+	}
+
+	first_e = (tbl->first_tcam * ICE_AQC_MAX_TCAM_ALLOC_UNITS) +
+		(tbl->first_entry / ICE_ACL_ENTRY_ALLOC_UNIT);
+	last_e = (tbl->last_tcam * ICE_AQC_MAX_TCAM_ALLOC_UNITS) +
+		(tbl->last_entry / ICE_ACL_ENTRY_ALLOC_UNIT);
+
+	/* Indicate available entries in the table */
+	bitmap_set(tbl->avail, first_e, last_e - first_e + 1);
+
+	INIT_LIST_HEAD(&tbl->scens);
+
+	return 0;
+}
+
+/**
+ * ice_acl_destroy_tbl - Destroy a previously created LEM table for ACL
+ * @hw: pointer to the HW struct
+ */
+enum ice_status ice_acl_destroy_tbl(struct ice_hw *hw)
+{
+	struct ice_aqc_acl_generic resp_buf;
+	enum ice_status status;
+
+	if (!hw->acl_tbl)
+		return ICE_ERR_DOES_NOT_EXIST;
+
+	/* call the AQ command to destroy the ACL table */
+	status = ice_aq_dealloc_acl_tbl(hw, hw->acl_tbl->id, &resp_buf, NULL);
+	if (status) {
+		ice_debug(hw, ICE_DBG_ACL, "AQ de-allocation of ACL failed. status: %d\n",
+			  status);
+		return status;
+	}
+
+	devm_kfree(ice_hw_to_dev(hw), hw->acl_tbl);
+	hw->acl_tbl = NULL;
+
+	return 0;
+}
diff --git a/drivers/net/ethernet/intel/ice/ice_adminq_cmd.h b/drivers/net/ethernet/intel/ice/ice_adminq_cmd.h
index b06fbe99d8e9..688a2069482d 100644
--- a/drivers/net/ethernet/intel/ice/ice_adminq_cmd.h
+++ b/drivers/net/ethernet/intel/ice/ice_adminq_cmd.h
@@ -327,6 +327,7 @@ struct ice_aqc_vsi_props {
 #define ICE_AQ_VSI_PROP_RXQ_MAP_VALID		BIT(6)
 #define ICE_AQ_VSI_PROP_Q_OPT_VALID		BIT(7)
 #define ICE_AQ_VSI_PROP_OUTER_UP_VALID		BIT(8)
+#define ICE_AQ_VSI_PROP_ACL_VALID		BIT(10)
 #define ICE_AQ_VSI_PROP_FLOW_DIR_VALID		BIT(11)
 #define ICE_AQ_VSI_PROP_PASID_VALID		BIT(12)
 	/* switch section */
@@ -442,8 +443,12 @@ struct ice_aqc_vsi_props {
 	u8 q_opt_reserved[3];
 	/* outer up section */
 	__le32 outer_up_table; /* same structure and defines as ingress tbl */
-	/* section 10 */
-	__le16 sect_10_reserved;
+	/* ACL section */
+	__le16 acl_def_act;
+#define ICE_AQ_VSI_ACL_DEF_RX_PROF_S	0
+#define ICE_AQ_VSI_ACL_DEF_RX_PROF_M	(0xF << ICE_AQ_VSI_ACL_DEF_RX_PROF_S)
+#define ICE_AQ_VSI_ACL_DEF_RX_TABLE_S	4
+#define ICE_AQ_VSI_ACL_DEF_RX_TABLE_M	(0xF << ICE_AQ_VSI_ACL_DEF_RX_TABLE_S)
 	/* flow director section */
 	__le16 fd_options;
 #define ICE_AQ_VSI_FD_ENABLE		BIT(0)
@@ -1612,6 +1617,200 @@ struct ice_aqc_get_set_rss_lut {
 	__le32 addr_low;
 };
 
+/* Allocate ACL table (indirect 0x0C10) */
+#define ICE_AQC_ACL_KEY_WIDTH_BYTES	5
+#define ICE_AQC_ACL_TCAM_DEPTH		512
+#define ICE_ACL_ENTRY_ALLOC_UNIT	64
+#define ICE_AQC_MAX_CONCURRENT_ACL_TBL	15
+#define ICE_AQC_MAX_ACTION_MEMORIES	20
+#define ICE_AQC_ACL_SLICES		16
+#define ICE_AQC_ALLOC_ID_LESS_THAN_4K	0x1000
+/* The ACL block supports up to 8 actions per single output. */
+#define ICE_AQC_TBL_MAX_ACTION_PAIRS	4
+
+#define ICE_AQC_MAX_TCAM_ALLOC_UNITS	(ICE_AQC_ACL_TCAM_DEPTH / \
+					 ICE_ACL_ENTRY_ALLOC_UNIT)
+#define ICE_AQC_ACL_ALLOC_UNITS		(ICE_AQC_ACL_SLICES * \
+					 ICE_AQC_MAX_TCAM_ALLOC_UNITS)
+
+struct ice_aqc_acl_alloc_table {
+	__le16 table_width;
+	__le16 table_depth;
+	u8 act_pairs_per_entry;
+	u8 table_type;
+	__le16 reserved;
+	__le32 addr_high;
+	__le32 addr_low;
+};
+
+/* Allocate ACL table command buffer format */
+struct ice_aqc_acl_alloc_table_data {
+	/* Dependent table AllocIDs. Each word in this 15 word array specifies
+	 * a dependent table AllocID according to the amount specified in the
+	 * "table_type" field. All unused words shall be set to 0xFFFF
+	 */
+#define ICE_AQC_CONCURR_ID_INVALID	0xffff
+	__le16 alloc_ids[ICE_AQC_MAX_CONCURRENT_ACL_TBL];
+};
+
+/* Deallocate ACL table (indirect 0x0C11) */
+
+/* The following structure is common and used in case of deallocation
+ * of ACL table and action-pair
+ */
+struct ice_aqc_acl_tbl_actpair {
+	/* Alloc ID of the table being released */
+	__le16 alloc_id;
+	u8 reserved[6];
+	__le32 addr_high;
+	__le32 addr_low;
+};
+
+/* This response structure is the same in case of alloc/dealloc table,
+ * alloc/dealloc action-pair
+ */
+struct ice_aqc_acl_generic {
+	/* if alloc_id is below 0x1000 then allocation failed due to
+	 * unavailable resources, else this is set by FW to identify
+	 * table allocation
+	 */
+	__le16 alloc_id;
+
+	union {
+		/* to be used only in case of alloc/dealloc table */
+		struct {
+			/* Index of the first TCAM block, otherwise set to 0xFF
+			 * for a failed allocation
+			 */
+			u8 first_tcam;
+			/* Index of the last TCAM block. This index shall be
+			 * set to the value of first_tcam for single TCAM block
+			 * allocation, otherwise set to 0xFF for a failed
+			 * allocation
+			 */
+			u8 last_tcam;
+		} table;
+		/* reserved in case of alloc/dealloc action-pair */
+		struct {
+			__le16 reserved;
+		} act_pair;
+	} ops;
+
+	/* index of first entry (in both TCAM and action memories),
+	 * otherwise set to 0xFF for a failed allocation
+	 */
+	__le16 first_entry;
+	/* index of last entry (in both TCAM and action memories),
+	 * otherwise set to 0xFF for a failed allocation
+	 */
+	__le16 last_entry;
+
+	/* Each act_mem element specifies the order of the memory
+	 * otherwise 0xFF
+	 */
+	u8 act_mem[ICE_AQC_MAX_ACTION_MEMORIES];
+};
+
+/* Update ACL scenario (direct 0x0C1B)
+ * Query ACL scenario (direct 0x0C23)
+ */
+struct ice_aqc_acl_update_query_scen {
+	__le16 scen_id;
+	u8 reserved[6];
+	__le32 addr_high;
+	__le32 addr_low;
+};
+
+/* Input buffer format in case of allocate/update ACL scenario; the same
+ * format is used for the response buffer in case of query ACL scenario.
+ * NOTE: de-allocate ACL scenario is direct command and doesn't require
+ * "buffer", hence no buffer format.
+ */
+struct ice_aqc_acl_scen {
+	struct {
+		/* Byte [x] selection for the TCAM key. This value must be set
+		 * to 0x0 for an unused TCAM.
+		 * Only bits 6..0 are used in each byte; the MSB is reserved
+		 */
+#define ICE_AQC_ACL_BYTE_SEL_BASE		0x20
+#define ICE_AQC_ACL_BYTE_SEL_BASE_PID		0x3E
+#define ICE_AQC_ACL_BYTE_SEL_BASE_PKT_DIR	ICE_AQC_ACL_BYTE_SEL_BASE
+#define ICE_AQC_ACL_BYTE_SEL_BASE_RNG_CHK	0x3F
+		u8 tcam_select[5];
+		/* TCAM Block entry masking. This value should be set to 0x0 for
+		 * unused TCAM
+		 */
+		u8 chnk_msk;
+		/* Bit 0 : masks TCAM entries 0-63
+		 * Bit 1 : masks TCAM entries 64-127
+		 * Bit 2 to 7 : follow the pattern of bit 0 and 1
+		 */
+#define ICE_AQC_ACL_ALLOC_SCE_START_CMP		BIT(0)
+#define ICE_AQC_ACL_ALLOC_SCE_START_SET		BIT(1)
+		u8 start_cmp_set;
+	} tcam_cfg[ICE_AQC_ACL_SLICES];
+
+	/* In each byte, bits 6..0: action memory association to a TCAM block;
+	 * otherwise it shall be set to 0x0 for a disabled action memory.
+	 * Bit 7: action memory enable for this scenario
+	 */
+#define ICE_AQC_ACL_SCE_ACT_MEM_EN		BIT(7)
+	u8 act_mem_cfg[ICE_AQC_MAX_ACTION_MEMORIES];
+};
+
+/* Program ACL actionpair (indirect 0x0C1C) */
+struct ice_aqc_acl_actpair {
+	/* action mem index to program/update */
+	u8 act_mem_index;
+	u8 reserved;
+	/* The entry index in action memory to be programmed/updated */
+	__le16 act_entry_index;
+	__le32 reserved2;
+	__le32 addr_high;
+	__le32 addr_low;
+};
+
+/* Input buffer format for program/query action-pair admin command */
+struct ice_acl_act_entry {
+	/* Action priority, values must be between 0..7 */
+	u8 prio;
+	/* Action meta-data identifier. This field should be set to 0x0
+	 * for a NOP action
+	 */
+	u8 mdid;
+	/* Action value */
+	__le16 value;
+};
+
+#define ICE_ACL_NUM_ACT_PER_ACT_PAIR 2
+struct ice_aqc_actpair {
+	struct ice_acl_act_entry act[ICE_ACL_NUM_ACT_PER_ACT_PAIR];
+};
+
+/* Program ACL entry (indirect 0x0C20) */
+struct ice_aqc_acl_entry {
+	u8 tcam_index; /* Updated TCAM block index */
+	u8 reserved;
+	__le16 entry_index; /* Updated entry index */
+	__le32 reserved2;
+	__le32 addr_high;
+	__le32 addr_low;
+};
+
+/* Input buffer format in case of program ACL entry and response buffer format
+ * in case of query ACL entry
+ */
+struct ice_aqc_acl_data {
+	/* Entry key and entry key invert are 40 bits wide.
+	 * Byte 0..4 : entry key and Byte 5..7 are reserved
+	 * Byte 8..12: entry key invert and Byte 13..15 are reserved
+	 */
+	struct {
+		u8 val[5];
+		u8 reserved[3];
+	} entry_key, entry_key_invert;
+};
+
 /* Add Tx LAN Queues (indirect 0x0C30) */
 struct ice_aqc_add_txqs {
 	u8 num_qgrps;
@@ -1880,6 +2079,11 @@ struct ice_aq_desc {
 		struct ice_aqc_lldp_stop_start_specific_agent lldp_agent_ctrl;
 		struct ice_aqc_get_set_rss_lut get_set_rss_lut;
 		struct ice_aqc_get_set_rss_key get_set_rss_key;
+		struct ice_aqc_acl_alloc_table alloc_table;
+		struct ice_aqc_acl_tbl_actpair tbl_actpair;
+		struct ice_aqc_acl_update_query_scen update_query_scen;
+		struct ice_aqc_acl_entry program_query_entry;
+		struct ice_aqc_acl_actpair program_query_actpair;
 		struct ice_aqc_add_txqs add_txqs;
 		struct ice_aqc_dis_txqs dis_txqs;
 		struct ice_aqc_add_get_update_free_vsi vsi_cmd;
@@ -2024,6 +2228,13 @@ enum ice_adminq_opc {
 	ice_aqc_opc_set_rss_lut				= 0x0B03,
 	ice_aqc_opc_get_rss_key				= 0x0B04,
 	ice_aqc_opc_get_rss_lut				= 0x0B05,
+	/* ACL commands */
+	ice_aqc_opc_alloc_acl_tbl			= 0x0C10,
+	ice_aqc_opc_dealloc_acl_tbl			= 0x0C11,
+	ice_aqc_opc_update_acl_scen			= 0x0C1B,
+	ice_aqc_opc_program_acl_actpair			= 0x0C1C,
+	ice_aqc_opc_program_acl_entry			= 0x0C20,
+	ice_aqc_opc_query_acl_scen			= 0x0C23,
 
 	/* Tx queue handling commands/events */
 	ice_aqc_opc_add_txqs				= 0x0C30,
diff --git a/drivers/net/ethernet/intel/ice/ice_flow.h b/drivers/net/ethernet/intel/ice/ice_flow.h
index 829f90b1e998..875e8b8f1c84 100644
--- a/drivers/net/ethernet/intel/ice/ice_flow.h
+++ b/drivers/net/ethernet/intel/ice/ice_flow.h
@@ -4,6 +4,8 @@
 #ifndef _ICE_FLOW_H_
 #define _ICE_FLOW_H_
 
+#include "ice_acl.h"
+
 #define ICE_FLOW_ENTRY_HANDLE_INVAL	0
 #define ICE_FLOW_FLD_OFF_INVAL		0xffff
 
diff --git a/drivers/net/ethernet/intel/ice/ice_main.c b/drivers/net/ethernet/intel/ice/ice_main.c
index 2dea4d0e9415..a49751f6951a 100644
--- a/drivers/net/ethernet/intel/ice/ice_main.c
+++ b/drivers/net/ethernet/intel/ice/ice_main.c
@@ -3822,6 +3822,48 @@ static enum ice_status ice_send_version(struct ice_pf *pf)
 	return ice_aq_send_driver_ver(&pf->hw, &dv, NULL);
 }
 
+/**
+ * ice_init_acl - Initializes the ACL block
+ * @pf: ptr to PF device
+ *
+ * returns 0 on success, negative on error
+ */
+static int ice_init_acl(struct ice_pf *pf)
+{
+	struct ice_acl_tbl_params params;
+	struct ice_hw *hw = &pf->hw;
+	int divider;
+
+	/* Create a single ACL table that consists of src_ip (4 bytes),
+	 * dest_ip (4 bytes), src_port (2 bytes) and dst_port (2 bytes) for a
+	 * total of 12 bytes (96 bits), which rounds up to 120-bit wide keys,
+	 * i.e. 3 TCAM slices of 5 bytes each. If the given hardware card
+	 * contains fewer than 8 PFs (ports), each PF will have its own TCAM
+	 * slices. For 8 PFs, a given slice will be shared by 2 different PFs.
+	 */
+	if (hw->dev_caps.num_funcs < 8)
+		divider = ICE_ACL_ENTIRE_SLICE;
+	else
+		divider = ICE_ACL_HALF_SLICE;
+
+	memset(&params, 0, sizeof(params));
+	params.width = ICE_AQC_ACL_KEY_WIDTH_BYTES * 3;
+	params.depth = ICE_AQC_ACL_TCAM_DEPTH / divider;
+	params.entry_act_pairs = 1;
+	params.concurr = false;
+
+	return ice_status_to_errno(ice_acl_create_tbl(hw, &params));
+}
+
+/**
+ * ice_deinit_acl - Unroll the initialization of the ACL block
+ * @pf: ptr to PF device
+ */
+static void ice_deinit_acl(struct ice_pf *pf)
+{
+	ice_acl_destroy_tbl(&pf->hw);
+}
+
 /**
  * ice_init_fdir - Initialize flow director VSI and configuration
  * @pf: pointer to the PF instance
@@ -4231,6 +4273,12 @@ ice_probe(struct pci_dev *pdev, const struct pci_device_id __always_unused *ent)
 	/* Note: Flow director init failure is non-fatal to load */
 	if (ice_init_fdir(pf))
 		dev_err(dev, "could not initialize flow director\n");
+	if (test_bit(ICE_FLAG_FD_ENA, pf->flags)) {
+		/* Note: ACL init failure is non-fatal to load */
+		err = ice_init_acl(pf);
+		if (err)
+			dev_err(dev, "Failed to initialize ACL: %d\n", err);
+	}
 
 	/* Note: DCB init failure is non-fatal to load */
 	if (ice_init_pf_dcb(pf, false)) {
@@ -4361,6 +4409,8 @@ static void ice_remove(struct pci_dev *pdev)
 
 	ice_aq_cancel_waiting_tasks(pf);
 
+	if (test_bit(ICE_FLAG_FD_ENA, pf->flags))
+		ice_deinit_acl(pf);
 	mutex_destroy(&(&pf->hw)->fdir_fltr_lock);
 	if (!ice_is_safe_mode(pf))
 		ice_remove_arfs(pf);
diff --git a/drivers/net/ethernet/intel/ice/ice_type.h b/drivers/net/ethernet/intel/ice/ice_type.h
index 2226a291a394..a1600c7e8b17 100644
--- a/drivers/net/ethernet/intel/ice/ice_type.h
+++ b/drivers/net/ethernet/intel/ice/ice_type.h
@@ -47,6 +47,7 @@ static inline u32 ice_round_to_num(u32 N, u32 R)
 #define ICE_DBG_SCHED		BIT_ULL(14)
 #define ICE_DBG_PKG		BIT_ULL(16)
 #define ICE_DBG_RES		BIT_ULL(17)
+#define ICE_DBG_ACL		BIT_ULL(18)
 #define ICE_DBG_AQ_MSG		BIT_ULL(24)
 #define ICE_DBG_AQ_DESC		BIT_ULL(25)
 #define ICE_DBG_AQ_DESC_BUF	BIT_ULL(26)
@@ -679,6 +680,8 @@ struct ice_hw {
 	struct udp_tunnel_nic_shared udp_tunnel_shared;
 	struct udp_tunnel_nic_info udp_tunnel_nic;
 
+	struct ice_acl_tbl *acl_tbl;
+
 	/* HW block tables */
 	struct ice_blk_info blk[ICE_BLK_COUNT];
 	struct mutex fl_profs_locks[ICE_BLK_COUNT];	/* lock fltr profiles */
-- 
2.26.2



* [net-next v3 04/15] ice: initialize ACL scenario
From: Tony Nguyen @ 2020-11-13 21:44 UTC
  To: davem, kuba
  Cc: Real Valiquette, netdev, sassmann, anthony.l.nguyen, Chinh Cao,
	Brijesh Behera

From: Real Valiquette <real.valiquette@intel.com>

Complete initialization of the ACL table by programming the table with an
initial scenario. The scenario stores the data for the filtering rules.
Adjust reporting of ntuple filters to include ACL filters.

Co-developed-by: Chinh Cao <chinh.t.cao@intel.com>
Signed-off-by: Chinh Cao <chinh.t.cao@intel.com>
Signed-off-by: Real Valiquette <real.valiquette@intel.com>
Co-developed-by: Tony Nguyen <anthony.l.nguyen@intel.com>
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
Tested-by: Brijesh Behera <brijeshx.behera@intel.com>
---
 drivers/net/ethernet/intel/ice/ice.h          |   1 +
 drivers/net/ethernet/intel/ice/ice_acl.c      | 112 ++++
 drivers/net/ethernet/intel/ice/ice_acl.h      |  11 +
 drivers/net/ethernet/intel/ice/ice_acl_ctrl.c | 574 ++++++++++++++++++
 .../net/ethernet/intel/ice/ice_adminq_cmd.h   |  31 +
 drivers/net/ethernet/intel/ice/ice_ethtool.c  |   4 +-
 .../net/ethernet/intel/ice/ice_ethtool_fdir.c |  47 +-
 drivers/net/ethernet/intel/ice/ice_fdir.c     |  15 +-
 drivers/net/ethernet/intel/ice/ice_fdir.h     |   5 +-
 drivers/net/ethernet/intel/ice/ice_flow.h     |   7 +
 drivers/net/ethernet/intel/ice/ice_main.c     |   9 +-
 drivers/net/ethernet/intel/ice/ice_type.h     |   2 +
 12 files changed, 799 insertions(+), 19 deletions(-)

diff --git a/drivers/net/ethernet/intel/ice/ice.h b/drivers/net/ethernet/intel/ice/ice.h
index 0ff1d71a1d88..1008a6785e55 100644
--- a/drivers/net/ethernet/intel/ice/ice.h
+++ b/drivers/net/ethernet/intel/ice/ice.h
@@ -599,6 +599,7 @@ void ice_vsi_manage_fdir(struct ice_vsi *vsi, bool ena);
 int ice_add_ntuple_ethtool(struct ice_vsi *vsi, struct ethtool_rxnfc *cmd);
 int ice_del_ntuple_ethtool(struct ice_vsi *vsi, struct ethtool_rxnfc *cmd);
 int ice_get_ethtool_fdir_entry(struct ice_hw *hw, struct ethtool_rxnfc *cmd);
+u32 ice_ntuple_get_max_fltr_cnt(struct ice_hw *hw);
 int
 ice_get_fdir_fltr_ids(struct ice_hw *hw, struct ethtool_rxnfc *cmd,
 		      u32 *rule_locs);
diff --git a/drivers/net/ethernet/intel/ice/ice_acl.c b/drivers/net/ethernet/intel/ice/ice_acl.c
index 30e2dca5d86b..7ff97917aca9 100644
--- a/drivers/net/ethernet/intel/ice/ice_acl.c
+++ b/drivers/net/ethernet/intel/ice/ice_acl.c
@@ -151,3 +151,115 @@ ice_aq_program_actpair(struct ice_hw *hw, u8 act_mem_idx, u16 act_entry_idx,
 	return ice_aq_actpair_p_q(hw, ice_aqc_opc_program_acl_actpair,
 				  act_mem_idx, act_entry_idx, buf, cd);
 }
+
+/**
+ * ice_aq_alloc_acl_scen - allocate ACL scenario
+ * @hw: pointer to the HW struct
+ * @scen_id: memory location to receive allocated scenario ID
+ * @buf: address of indirect data buffer
+ * @cd: pointer to command details structure or NULL
+ *
+ * Allocate ACL scenario (indirect 0x0C14)
+ */
+enum ice_status
+ice_aq_alloc_acl_scen(struct ice_hw *hw, u16 *scen_id,
+		      struct ice_aqc_acl_scen *buf, struct ice_sq_cd *cd)
+{
+	struct ice_aqc_acl_alloc_scen *cmd;
+	struct ice_aq_desc desc;
+	enum ice_status status;
+
+	if (!scen_id)
+		return ICE_ERR_PARAM;
+
+	ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_alloc_acl_scen);
+	desc.flags |= cpu_to_le16(ICE_AQ_FLAG_RD);
+	cmd = &desc.params.alloc_scen;
+
+	status = ice_aq_send_cmd(hw, &desc, buf, sizeof(*buf), cd);
+	if (!status)
+		*scen_id = le16_to_cpu(cmd->ops.resp.scen_id);
+
+	return status;
+}
+
+/**
+ * ice_aq_dealloc_acl_scen - deallocate ACL scenario
+ * @hw: pointer to the HW struct
+ * @scen_id: scen_id to be deallocated (input and output field)
+ * @cd: pointer to command details structure or NULL
+ *
+ * Deallocate ACL scenario (direct 0x0C15)
+ */
+enum ice_status
+ice_aq_dealloc_acl_scen(struct ice_hw *hw, u16 scen_id, struct ice_sq_cd *cd)
+{
+	struct ice_aqc_acl_dealloc_scen *cmd;
+	struct ice_aq_desc desc;
+
+	ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_dealloc_acl_scen);
+	cmd = &desc.params.dealloc_scen;
+	cmd->scen_id = cpu_to_le16(scen_id);
+
+	return ice_aq_send_cmd(hw, &desc, NULL, 0, cd);
+}
+
+/**
+ * ice_aq_update_query_scen - update or query ACL scenario
+ * @hw: pointer to the HW struct
+ * @opcode: AQ command opcode for either query or update scenario
+ * @scen_id: scen_id to be updated or queried
+ * @buf: address of indirect data buffer
+ * @cd: pointer to command details structure or NULL
+ *
+ * Calls update or query ACL scenario
+ */
+static enum ice_status
+ice_aq_update_query_scen(struct ice_hw *hw, u16 opcode, u16 scen_id,
+			 struct ice_aqc_acl_scen *buf, struct ice_sq_cd *cd)
+{
+	struct ice_aqc_acl_update_query_scen *cmd;
+	struct ice_aq_desc desc;
+
+	ice_fill_dflt_direct_cmd_desc(&desc, opcode);
+	if (opcode == ice_aqc_opc_update_acl_scen)
+		desc.flags |= cpu_to_le16(ICE_AQ_FLAG_RD);
+	cmd = &desc.params.update_query_scen;
+	cmd->scen_id = cpu_to_le16(scen_id);
+
+	return ice_aq_send_cmd(hw, &desc, buf, sizeof(*buf), cd);
+}
+
+/**
+ * ice_aq_update_acl_scen - update ACL scenario
+ * @hw: pointer to the HW struct
+ * @scen_id: scen_id to be updated
+ * @buf: address of indirect data buffer
+ * @cd: pointer to command details structure or NULL
+ *
+ * Update ACL scenario (indirect 0x0C1B)
+ */
+enum ice_status
+ice_aq_update_acl_scen(struct ice_hw *hw, u16 scen_id,
+		       struct ice_aqc_acl_scen *buf, struct ice_sq_cd *cd)
+{
+	return ice_aq_update_query_scen(hw, ice_aqc_opc_update_acl_scen,
+					scen_id, buf, cd);
+}
+
+/**
+ * ice_aq_query_acl_scen - query ACL scenario
+ * @hw: pointer to the HW struct
+ * @scen_id: scen_id to be queried
+ * @buf: address of indirect data buffer
+ * @cd: pointer to command details structure or NULL
+ *
+ * Query ACL scenario (indirect 0x0C23)
+ */
+enum ice_status
+ice_aq_query_acl_scen(struct ice_hw *hw, u16 scen_id,
+		      struct ice_aqc_acl_scen *buf, struct ice_sq_cd *cd)
+{
+	return ice_aq_update_query_scen(hw, ice_aqc_opc_query_acl_scen,
+					scen_id, buf, cd);
+}
diff --git a/drivers/net/ethernet/intel/ice/ice_acl.h b/drivers/net/ethernet/intel/ice/ice_acl.h
index 5d39ef59ed5a..9e776f3f749c 100644
--- a/drivers/net/ethernet/intel/ice/ice_acl.h
+++ b/drivers/net/ethernet/intel/ice/ice_acl.h
@@ -107,6 +107,9 @@ enum ice_status
 ice_acl_create_tbl(struct ice_hw *hw, struct ice_acl_tbl_params *params);
 enum ice_status ice_acl_destroy_tbl(struct ice_hw *hw);
 enum ice_status
+ice_acl_create_scen(struct ice_hw *hw, u16 match_width, u16 num_entries,
+		    u16 *scen_id);
+enum ice_status
 ice_aq_alloc_acl_tbl(struct ice_hw *hw, struct ice_acl_alloc_tbl *tbl,
 		     struct ice_sq_cd *cd);
 enum ice_status
@@ -121,5 +124,13 @@ ice_aq_program_actpair(struct ice_hw *hw, u8 act_mem_idx, u16 act_entry_idx,
 enum ice_status
 ice_aq_alloc_acl_scen(struct ice_hw *hw, u16 *scen_id,
 		      struct ice_aqc_acl_scen *buf, struct ice_sq_cd *cd);
+enum ice_status
+ice_aq_dealloc_acl_scen(struct ice_hw *hw, u16 scen_id, struct ice_sq_cd *cd);
+enum ice_status
+ice_aq_update_acl_scen(struct ice_hw *hw, u16 scen_id,
+		       struct ice_aqc_acl_scen *buf, struct ice_sq_cd *cd);
+enum ice_status
+ice_aq_query_acl_scen(struct ice_hw *hw, u16 scen_id,
+		      struct ice_aqc_acl_scen *buf, struct ice_sq_cd *cd);
 
 #endif /* _ICE_ACL_H_ */
diff --git a/drivers/net/ethernet/intel/ice/ice_acl_ctrl.c b/drivers/net/ethernet/intel/ice/ice_acl_ctrl.c
index f8f9aff91c60..84a96ccf40d5 100644
--- a/drivers/net/ethernet/intel/ice/ice_acl_ctrl.c
+++ b/drivers/net/ethernet/intel/ice/ice_acl_ctrl.c
@@ -6,6 +6,78 @@
 /* Determine the TCAM index of entry 'e' within the ACL table */
 #define ICE_ACL_TBL_TCAM_IDX(e) ((e) / ICE_AQC_ACL_TCAM_DEPTH)
 
+/**
+ * ice_acl_init_entry
+ * @scen: pointer to the scenario struct
+ *
+ * Initialize the scenario control structure.
+ */
+static void ice_acl_init_entry(struct ice_acl_scen *scen)
+{
+	/* low priority: start from the highest index, 25% of total entries
+	 * normal priority: start from the highest index, 50% of total entries
+	 * high priority: start from the lowest index, 25% of total entries
+	 */
+	scen->first_idx[ICE_ACL_PRIO_LOW] = scen->num_entry - 1;
+	scen->first_idx[ICE_ACL_PRIO_NORMAL] = scen->num_entry -
+		scen->num_entry / 4 - 1;
+	scen->first_idx[ICE_ACL_PRIO_HIGH] = 0;
+
+	scen->last_idx[ICE_ACL_PRIO_LOW] = scen->num_entry -
+		scen->num_entry / 4;
+	scen->last_idx[ICE_ACL_PRIO_NORMAL] = scen->num_entry / 4;
+	scen->last_idx[ICE_ACL_PRIO_HIGH] = scen->num_entry / 4 - 1;
+}
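
To make the 25/50/25 split concrete, here is a minimal standalone sketch
of the resulting index ranges for a hypothetical 512-entry scenario (LOW
and NORMAL are filled downward from their first_idx, HIGH upward):

	#include <stdio.h>

	int main(void)
	{
		unsigned int n = 512;	/* hypothetical scen->num_entry */

		/* LOW scans 511 down to 384, NORMAL 383 down to 128,
		 * HIGH 0 up to 127
		 */
		printf("LOW:    first %u, last %u\n", n - 1, n - n / 4);
		printf("NORMAL: first %u, last %u\n", n - n / 4 - 1, n / 4);
		printf("HIGH:   first %u, last %u\n", 0u, n / 4 - 1);
		return 0;
	}
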
+
+/**
+ * ice_acl_tbl_calc_end_idx
+ * @start: start index of the TCAM entry of this partition
+ * @num_entries: number of entries in this partition
+ * @width: width of a partition in number of TCAMs
+ *
+ * Calculate the end entry index for a partition with starting entry index
+ * 'start', entries 'num_entries', and width 'width'.
+ */
+static u16 ice_acl_tbl_calc_end_idx(u16 start, u16 num_entries, u16 width)
+{
+	u16 end_idx, add_entries = 0;
+
+	end_idx = start + (num_entries - 1);
+
+	/* In case that our ACL partition requires cascading TCAMs */
+	if (width > 1) {
+		u16 num_stack_level;
+
+		/* Figure out the TCAM stacked level in this ACL scenario */
+		num_stack_level = (start % ICE_AQC_ACL_TCAM_DEPTH) +
+			num_entries;
+		num_stack_level = DIV_ROUND_UP(num_stack_level,
+					       ICE_AQC_ACL_TCAM_DEPTH);
+
+		/* In this case, each entry in our ACL partition spans
+		 * multiple TCAMs. Thus, we will need to add
+		 * ((width - 1) * num_stack_level) TCAM entries to
+		 * end_idx.
+		 *
+		 * For example, assume our scenario is 2x2:
+		 *	[TCAM 0]	[TCAM 1]
+		 *	[TCAM 2]	[TCAM 3]
+		 * and that a TCAM holds 512 entries. If "start" is 500,
+		 * "num_entries" is 3 and "width" is 2, then end_idx should
+		 * be 1014, i.e. depth 502 of TCAM 1, the last TCAM of the
+		 * cascade. Before reaching this if statement, end_idx has
+		 * the value of 502. If "width" were 1, that would also be
+		 * the final value. However, width is 2 here, so we need to
+		 * add (2 - 1) * 1 * 512 entries. As a result, end_idx ends
+		 * up with the value of 1014.
+		 */
+		add_entries = (width - 1) * num_stack_level *
+			ICE_AQC_ACL_TCAM_DEPTH;
+	}
+
+	return end_idx + add_entries;
+}
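
The arithmetic above can be checked in isolation; a minimal standalone
sketch, assuming ICE_AQC_ACL_TCAM_DEPTH is 512:

	#include <assert.h>

	#define TCAM_DEPTH		512	/* assumed ICE_AQC_ACL_TCAM_DEPTH */
	#define DIV_ROUND_UP(n, d)	(((n) + (d) - 1) / (d))

	static unsigned int calc_end_idx(unsigned int start,
					 unsigned int num_entries,
					 unsigned int width)
	{
		unsigned int end_idx = start + num_entries - 1;

		if (width > 1) {
			unsigned int lvl =
				DIV_ROUND_UP(start % TCAM_DEPTH + num_entries,
					     TCAM_DEPTH);

			end_idx += (width - 1) * lvl * TCAM_DEPTH;
		}
		return end_idx;
	}

	int main(void)
	{
		/* start 500, 3 entries, 2 TCAMs wide: 502 + 512 = 1014 */
		assert(calc_end_idx(500, 3, 2) == 1014);
		return 0;
	}
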
+
 /**
  * ice_acl_init_tbl
  * @hw: pointer to the hardware structure
@@ -284,18 +356,520 @@ ice_acl_create_tbl(struct ice_hw *hw, struct ice_acl_tbl_params *params)
 	return 0;
 }
 
+/**
+ * ice_acl_alloc_partition - Allocate a partition from the ACL table
+ * @hw: pointer to the hardware structure
+ * @req: info of partition being allocated
+ */
+static enum ice_status
+ice_acl_alloc_partition(struct ice_hw *hw, struct ice_acl_scen *req)
+{
+	u16 start = 0, cnt = 0, off = 0;
+	u16 width, r_entries, row;
+	bool done = false;
+	int dir;
+
+	/* Determine the number of TCAMs each entry overlaps */
+	width = DIV_ROUND_UP(req->width, ICE_AQC_ACL_KEY_WIDTH_BYTES);
+
+	/* Check if we have enough TCAMs to accommodate the width */
+	if (width > hw->acl_tbl->last_tcam - hw->acl_tbl->first_tcam + 1)
+		return ICE_ERR_MAX_LIMIT;
+
+	/* Number of entries must be a multiple of ICE_ACL_ENTRY_ALLOC_UNIT */
+	r_entries = ALIGN(req->num_entry, ICE_ACL_ENTRY_ALLOC_UNIT);
+
+	/* To look for an available partition that can accommodate the request,
+	 * the process first logically arranges available TCAMs in rows such
+	 * that each row produces entries with the requested width. It then
+	 * scans the TCAMs' available bitmap, one bit at a time, and
+	 * accumulates contiguous available 64-entry chunks until there are
+	 * enough of them or when all TCAM configurations have been checked.
+	 *
+	 * For a width of 1 TCAM, the scanning process starts from the topmost
+	 * TCAM and goes downward. Available bitmaps are examined from LSB
+	 * to MSB.
+	 *
+	 * For width of multiple TCAMs, the process starts from the bottom-most
+	 * row of TCAMs, and goes upward. Available bitmaps are examined from
+	 * the MSB to the LSB.
+	 *
+	 * To make sure that adjacent TCAMs can be logically arranged in the
+	 * same row, the scanning process may have multiple passes. In each
+	 * pass, the first TCAM of the bottom-most row is displaced by one
+	 * additional TCAM. The width of the row and the number of TCAMs
+	 * available determine the number of passes. Once the displacement
+	 * reaches the width, the TCAM row configurations start to repeat,
+	 * at which point the process terminates.
+	 *
+	 * Available partitions can span more than one row of TCAMs.
+	 */
+	if (width == 1) {
+		row = hw->acl_tbl->first_tcam;
+		dir = 1;
+	} else {
+		/* Start with the bottom-most row, and scan for available
+		 * entries upward
+		 */
+		row = hw->acl_tbl->last_tcam + 1 - width;
+		dir = -1;
+	}
+
+	do {
+		u16 i;
+
+		/* Scan all 64-entry chunks, one chunk at a time, in the
+		 * current TCAM row
+		 */
+		for (i = 0;
+		     i < ICE_AQC_MAX_TCAM_ALLOC_UNITS && cnt < r_entries;
+		     i++) {
+			bool avail = true;
+			u16 w, p;
+
+			/* Compute the cumulative available mask across the
+			 * TCAM row to determine if the current 64-entry chunk
+			 * is available.
+			 */
+			p = dir > 0 ? i : ICE_AQC_MAX_TCAM_ALLOC_UNITS - i - 1;
+			for (w = row; w < row + width && avail; w++) {
+				u16 b;
+
+				b = (w * ICE_AQC_MAX_TCAM_ALLOC_UNITS) + p;
+				avail &= test_bit(b, hw->acl_tbl->avail);
+			}
+
+			if (!avail) {
+				cnt = 0;
+			} else {
+				/* Compute the starting index of the newly
+				 * found partition. When 'dir' is negative,
+				 * the scan proceeds upward; if so, the
+				 * starting index needs to be updated for
+				 * every available 64-entry chunk found.
+				 */
+				if (!cnt || dir < 0)
+					start = (row * ICE_AQC_ACL_TCAM_DEPTH) +
+						(p * ICE_ACL_ENTRY_ALLOC_UNIT);
+				cnt += ICE_ACL_ENTRY_ALLOC_UNIT;
+			}
+		}
+
+		if (cnt >= r_entries) {
+			req->start = start;
+			req->num_entry = r_entries;
+			req->end = ice_acl_tbl_calc_end_idx(start, r_entries,
+							    width);
+			break;
+		}
+
+		row = dir > 0 ? row + width : row - width;
+		if (row > hw->acl_tbl->last_tcam ||
+		    row < hw->acl_tbl->first_tcam) {
+			/* All rows have been checked. Increment 'off' to
+			 * yield a different TCAM configuration in which
+			 * adjacent TCAMs can alternatively be grouped into
+			 * the same row.
+			 */
+			off++;
+
+			/* However, if the new 'off' value yields previously
+			 * checked configurations, then exit.
+			 */
+			if (off >= width)
+				done = true;
+			else
+				row = dir > 0 ? off :
+					hw->acl_tbl->last_tcam + 1 - off -
+					width;
+		}
+	} while (!done);
+
+	return cnt >= r_entries ? 0 : ICE_ERR_MAX_LIMIT;
+}
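
The inner availability test is easiest to picture on its own: a 64-entry
chunk is usable only if the same slot is free in every TCAM of the
candidate row. A minimal standalone sketch, assuming 8 allocation units
per TCAM (the value implied by 512 / 64 for ICE_AQC_MAX_TCAM_ALLOC_UNITS):

	#include <stdbool.h>

	#define UNITS_PER_TCAM	8	/* assumed ICE_AQC_MAX_TCAM_ALLOC_UNITS */

	/* 'avail' holds one flag per 64-entry chunk, TCAM-major, like
	 * hw->acl_tbl->avail; 'p' is the chunk slot within a TCAM.
	 */
	static bool chunk_free(const bool *avail, unsigned int row,
			       unsigned int width, unsigned int p)
	{
		unsigned int w;

		for (w = row; w < row + width; w++)
			if (!avail[w * UNITS_PER_TCAM + p])
				return false;	/* busy somewhere in the cascade */

		return true;	/* free across the whole row */
	}

	int main(void)
	{
		bool avail[2 * UNITS_PER_TCAM] = {
			[0] = true, [UNITS_PER_TCAM] = true, /* slot 0 free */
		};

		return chunk_free(avail, 0, 2, 0) ? 0 : 1;	/* exits 0 */
	}
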
+
+/**
+ * ice_acl_fill_tcam_select
+ * @scen_buf: Pointer to the scenario buffer that needs to be populated
+ * @scen: Pointer to the available space for the scenario
+ * @tcam_idx: Index of the TCAM used for this scenario
+ * @tcam_idx_in_cascade: Local index of the TCAM in the cascade scenario
+ *
+ * For all TCAMs that participate in this scenario, fill out the tcam_select
+ * values.
+ */
+static void
+ice_acl_fill_tcam_select(struct ice_aqc_acl_scen *scen_buf,
+			 struct ice_acl_scen *scen, u16 tcam_idx,
+			 u16 tcam_idx_in_cascade)
+{
+	u16 cascade_cnt, idx;
+	u8 j;
+
+	idx = tcam_idx_in_cascade * ICE_AQC_ACL_KEY_WIDTH_BYTES;
+	cascade_cnt = DIV_ROUND_UP(scen->width, ICE_AQC_ACL_KEY_WIDTH_BYTES);
+
+	/* For each scenario, we reserve the last three bytes of the scenario
+	 * width for the profile ID, range checker, and packet direction.
+	 * Thus, the last three bytes of the last cascaded TCAM select the
+	 * 1st, 31st and 32nd byte locations of the Byte Selection Base.
+	 *
+	 * For the other bytes in the TCAMs:
+	 * For a non-cascade mode (1 TCAM wide) scenario, TCAM[x]'s Select
+	 * {0-1} selects indices 0-1 of the Byte Selection Base.
+	 * For cascade mode, the leftmost TCAM of the first cascade row
+	 * selects indices 0-4 of the Byte Selection Base; the second TCAM
+	 * in the cascade row selects indices starting with 5-n.
+	 */
+	for (j = 0; j < ICE_AQC_ACL_KEY_WIDTH_BYTES; j++) {
+		/* PKT DIR uses the 1st location of Byte Selection Base: + 1 */
+		u8 val = ICE_AQC_ACL_BYTE_SEL_BASE + 1 + idx;
+
+		if (tcam_idx_in_cascade == cascade_cnt - 1) {
+			if (j == ICE_ACL_SCEN_RNG_CHK_IDX_IN_TCAM)
+				val = ICE_AQC_ACL_BYTE_SEL_BASE_RNG_CHK;
+			else if (j == ICE_ACL_SCEN_PID_IDX_IN_TCAM)
+				val = ICE_AQC_ACL_BYTE_SEL_BASE_PID;
+			else if (j == ICE_ACL_SCEN_PKT_DIR_IDX_IN_TCAM)
+				val = ICE_AQC_ACL_BYTE_SEL_BASE_PKT_DIR;
+		}
+
+		/* If the scenario's width is greater than the width of the
+		 * Byte Selection Base, do not assign a value to
+		 * tcam_select[j]; it then keeps its default value of zero.
+		 */
+		if (val > ICE_AQC_ACL_BYTE_SEL_BASE_RNG_CHK)
+			continue;
+
+		scen_buf->tcam_cfg[tcam_idx].tcam_select[j] = val;
+
+		idx++;
+	}
+}
+
+/**
+ * ice_acl_set_scen_chnk_msk
+ * @scen_buf: Pointer to the scenario buffer that needs to be populated
+ * @scen: pointer to the available space for the scenario
+ *
+ * Set the chunk mask for the entries that will be used by this scenario
+ */
+static void
+ice_acl_set_scen_chnk_msk(struct ice_aqc_acl_scen *scen_buf,
+			  struct ice_acl_scen *scen)
+{
+	u16 tcam_idx, num_cscd, units, cnt;
+	u8 chnk_offst;
+
+	/* Determine the starting TCAM index and offset of the start entry */
+	tcam_idx = ICE_ACL_TBL_TCAM_IDX(scen->start);
+	chnk_offst = (u8)((scen->start % ICE_AQC_ACL_TCAM_DEPTH) /
+			  ICE_ACL_ENTRY_ALLOC_UNIT);
+
+	/* Entries are allocated and tracked in multiples of 64 */
+	units = scen->num_entry / ICE_ACL_ENTRY_ALLOC_UNIT;
+
+	/* Determine the number of cascaded TCAMs */
+	num_cscd = scen->width / ICE_AQC_ACL_KEY_WIDTH_BYTES;
+
+	for (cnt = 0; cnt < units; cnt++) {
+		u16 i;
+
+		/* Set the bit corresponding to this 64-entry chunk in each
+		 * TCAM of the cascade of 1 or more TCAMs that the chunk
+		 * spans. Each TCAM holds (ICE_AQC_ACL_TCAM_DEPTH /
+		 * ICE_ACL_ENTRY_ALLOC_UNIT), i.e. 8, such chunks.
+		 */
+		for (i = tcam_idx; i < tcam_idx + num_cscd; i++)
+			scen_buf->tcam_cfg[i].chnk_msk |= BIT(chnk_offst);
+
+		chnk_offst = (chnk_offst + 1) % ICE_AQC_MAX_TCAM_ALLOC_UNITS;
+		if (!chnk_offst)
+			tcam_idx += num_cscd;
+	}
+}
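
As an example, a 1-TCAM-wide scenario starting at entry 64 with 128
entries occupies chunks 1 and 2 of TCAM 0, giving a chnk_msk of 0x06. A
minimal sketch of the same bookkeeping, under the same assumed constants
(512-deep TCAMs, 64-entry units, 8 units per TCAM):

	#include <assert.h>

	int main(void)
	{
		unsigned int tcam_idx = 64 / 512;		/* row 0 */
		unsigned int chnk_offst = (64 % 512) / 64;	/* chunk 1 */
		unsigned int units = 128 / 64;			/* two chunks */
		unsigned char chnk_msk = 0;

		while (units--) {
			chnk_msk |= 1 << chnk_offst;
			chnk_offst = (chnk_offst + 1) % 8;
			if (!chnk_offst)
				tcam_idx++;	/* wrapped: next cascade row */
		}

		assert(chnk_msk == 0x06 && tcam_idx == 0);
		return 0;
	}
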
+
+/**
+ * ice_acl_assign_act_mem_for_scen
+ * @tbl: pointer to ACL table structure
+ * @scen: pointer to the scenario struct
+ * @scen_buf: pointer to the available space for the scenario
+ * @current_tcam_idx: index of the TCAM with which the action memory banks
+ *		      were associated at table creation time
+ * @target_tcam_idx: index of the TCAM with which to associate the action
+ *		     memory banks
+ */
+static void
+ice_acl_assign_act_mem_for_scen(struct ice_acl_tbl *tbl,
+				struct ice_acl_scen *scen,
+				struct ice_aqc_acl_scen *scen_buf,
+				u8 current_tcam_idx, u8 target_tcam_idx)
+{
+	u8 i;
+
+	for (i = 0; i < ICE_AQC_MAX_ACTION_MEMORIES; i++) {
+		struct ice_acl_act_mem *p_mem = &tbl->act_mems[i];
+
+		if (p_mem->act_mem == ICE_ACL_ACT_PAIR_MEM_INVAL ||
+		    p_mem->member_of_tcam != current_tcam_idx)
+			continue;
+
+		scen_buf->act_mem_cfg[i] = target_tcam_idx;
+		scen_buf->act_mem_cfg[i] |= ICE_AQC_ACL_SCE_ACT_MEM_EN;
+		set_bit(i, scen->act_mem_bitmap);
+	}
+}
+
+/**
+ * ice_acl_commit_partition - Indicate if the specified partition is active
+ * @hw: pointer to the hardware structure
+ * @scen: pointer to the scenario struct
+ * @commit: true if the partition is being committed
+ */
+static void
+ice_acl_commit_partition(struct ice_hw *hw, struct ice_acl_scen *scen,
+			 bool commit)
+{
+	u16 tcam_idx, off, num_cscd, units, cnt;
+
+	/* Determine the starting TCAM index and offset of the start entry */
+	tcam_idx = ICE_ACL_TBL_TCAM_IDX(scen->start);
+	off = (scen->start % ICE_AQC_ACL_TCAM_DEPTH) /
+		ICE_ACL_ENTRY_ALLOC_UNIT;
+
+	/* Entries are allocated and tracked in multiples of 64 */
+	units = scen->num_entry / ICE_ACL_ENTRY_ALLOC_UNIT;
+
+	/* Determine the number of cascaded TCAMs */
+	num_cscd = scen->width / ICE_AQC_ACL_KEY_WIDTH_BYTES;
+
+	for (cnt = 0; cnt < units; cnt++) {
+		u16 w;
+
+		/* Set/clear the bit corresponding to this 64-entry chunk in
+		 * each TCAM of the row of 1 or more TCAMs that the chunk spans
+		 */
+		for (w = 0; w < num_cscd; w++) {
+			u16 b;
+
+			b = ((tcam_idx + w) * ICE_AQC_MAX_TCAM_ALLOC_UNITS) +
+				off;
+			if (commit)
+				set_bit(b, hw->acl_tbl->avail);
+			else
+				clear_bit(b, hw->acl_tbl->avail);
+		}
+
+		off = (off + 1) % ICE_AQC_MAX_TCAM_ALLOC_UNITS;
+		if (!off)
+			tcam_idx += num_cscd;
+	}
+}
+
+/**
+ * ice_acl_create_scen
+ * @hw: pointer to the hardware structure
+ * @match_width: number of bytes to be matched in this scenario
+ * @num_entries: number of entries to be allocated for the scenario
+ * @scen_id: holds returned scenario ID if successful
+ */
+enum ice_status
+ice_acl_create_scen(struct ice_hw *hw, u16 match_width, u16 num_entries,
+		    u16 *scen_id)
+{
+	u8 cascade_cnt, first_tcam, last_tcam, i, k;
+	struct ice_aqc_acl_scen scen_buf;
+	struct ice_acl_scen *scen;
+	enum ice_status status;
+
+	if (!hw->acl_tbl)
+		return ICE_ERR_DOES_NOT_EXIST;
+
+	scen = devm_kzalloc(ice_hw_to_dev(hw), sizeof(*scen), GFP_KERNEL);
+	if (!scen)
+		return ICE_ERR_NO_MEMORY;
+
+	scen->start = hw->acl_tbl->first_entry;
+	scen->width = ICE_AQC_ACL_KEY_WIDTH_BYTES *
+		DIV_ROUND_UP(match_width, ICE_AQC_ACL_KEY_WIDTH_BYTES);
+	scen->num_entry = num_entries;
+
+	status = ice_acl_alloc_partition(hw, scen);
+	if (status)
+		goto out;
+
+	memset(&scen_buf, 0, sizeof(scen_buf));
+
+	/* Determine the number of cascade TCAMs, given the scenario's width */
+	cascade_cnt = DIV_ROUND_UP(scen->width, ICE_AQC_ACL_KEY_WIDTH_BYTES);
+	first_tcam = ICE_ACL_TBL_TCAM_IDX(scen->start);
+	last_tcam = ICE_ACL_TBL_TCAM_IDX(scen->end);
+
+	/* For each scenario, we reserve the last three bytes of the scenario
+	 * width for the packet direction flag, profile ID and range checker.
+	 * Thus, return to the caller the eff_width, pkt_dir_idx, rng_chk_idx
+	 * and pid_idx.
+	 */
+	scen->eff_width = cascade_cnt * ICE_AQC_ACL_KEY_WIDTH_BYTES -
+		ICE_ACL_SCEN_MIN_WIDTH;
+	scen->rng_chk_idx = (cascade_cnt - 1) * ICE_AQC_ACL_KEY_WIDTH_BYTES +
+		ICE_ACL_SCEN_RNG_CHK_IDX_IN_TCAM;
+	scen->pid_idx = (cascade_cnt - 1) * ICE_AQC_ACL_KEY_WIDTH_BYTES +
+		ICE_ACL_SCEN_PID_IDX_IN_TCAM;
+	scen->pkt_dir_idx = (cascade_cnt - 1) * ICE_AQC_ACL_KEY_WIDTH_BYTES +
+		ICE_ACL_SCEN_PKT_DIR_IDX_IN_TCAM;
+
+	/* set the chunk mask for the TCAMs */
+	ice_acl_set_scen_chnk_msk(&scen_buf, scen);
+
+	/* set the TCAM select and start_cmp and start_set bits */
+	k = first_tcam;
+	/* set the START_SET bit at the beginning of the stack */
+	scen_buf.tcam_cfg[k].start_cmp_set |= ICE_AQC_ACL_ALLOC_SCE_START_SET;
+	while (k <= last_tcam) {
+		u8 last_tcam_idx_cascade = cascade_cnt + k - 1;
+
+		/* set start_cmp for the first cascaded TCAM */
+		scen_buf.tcam_cfg[k].start_cmp_set |=
+			ICE_AQC_ACL_ALLOC_SCE_START_CMP;
+
+		/* cascade TCAMs up to the width of the scenario */
+		for (i = k; i < cascade_cnt + k; i++) {
+			ice_acl_fill_tcam_select(&scen_buf, scen, i, i - k);
+			ice_acl_assign_act_mem_for_scen(hw->acl_tbl, scen,
+							&scen_buf, i,
+							last_tcam_idx_cascade);
+		}
+
+		k = i;
+	}
+
+	/* We need to set the start_cmp bit for the unused TCAMs. */
+	i = 0;
+	while (i < first_tcam)
+		scen_buf.tcam_cfg[i++].start_cmp_set =
+					ICE_AQC_ACL_ALLOC_SCE_START_CMP;
+
+	i = last_tcam + 1;
+	while (i < ICE_AQC_ACL_SLICES)
+		scen_buf.tcam_cfg[i++].start_cmp_set =
+					ICE_AQC_ACL_ALLOC_SCE_START_CMP;
+
+	status = ice_aq_alloc_acl_scen(hw, scen_id, &scen_buf, NULL);
+	if (status) {
+		ice_debug(hw, ICE_DBG_ACL, "AQ allocation of ACL scenario failed. status: %d\n",
+			  status);
+		goto out;
+	}
+
+	scen->id = *scen_id;
+	ice_acl_commit_partition(hw, scen, false);
+	ice_acl_init_entry(scen);
+	list_add(&scen->list_entry, &hw->acl_tbl->scens);
+
+out:
+	if (status)
+		devm_kfree(ice_hw_to_dev(hw), scen);
+
+	return status;
+}
+
+/**
+ * ice_acl_destroy_scen - Destroy an ACL scenario
+ * @hw: pointer to the HW struct
+ * @scen_id: ID of the scenario to remove
+ */
+static enum ice_status ice_acl_destroy_scen(struct ice_hw *hw, u16 scen_id)
+{
+	struct ice_acl_scen *scen, *tmp_scen;
+	struct ice_flow_prof *p, *tmp;
+	enum ice_status status;
+
+	if (!hw->acl_tbl)
+		return ICE_ERR_DOES_NOT_EXIST;
+
+	/* Remove profiles that use "scen_id" scenario */
+	list_for_each_entry_safe(p, tmp, &hw->fl_profs[ICE_BLK_ACL], l_entry)
+		if (p->cfg.scen && p->cfg.scen->id == scen_id) {
+			status = ice_flow_rem_prof(hw, ICE_BLK_ACL, p->id);
+			if (status) {
+				ice_debug(hw, ICE_DBG_ACL, "ice_flow_rem_prof failed. status: %d\n",
+					  status);
+				return status;
+			}
+		}
+
+	/* Call the AQ command to destroy the targeted scenario */
+	status = ice_aq_dealloc_acl_scen(hw, scen_id, NULL);
+	if (status) {
+		ice_debug(hw, ICE_DBG_ACL, "AQ de-allocation of scenario failed. status: %d\n",
+			  status);
+		return status;
+	}
+
+	/* Remove scenario from hw->acl_tbl->scens */
+	list_for_each_entry_safe(scen, tmp_scen, &hw->acl_tbl->scens,
+				 list_entry)
+		if (scen->id == scen_id) {
+			list_del(&scen->list_entry);
+			devm_kfree(ice_hw_to_dev(hw), scen);
+		}
+
+	return 0;
+}
+
 /**
  * ice_acl_destroy_tbl - Destroy a previously created LEM table for ACL
  * @hw: pointer to the HW struct
  */
 enum ice_status ice_acl_destroy_tbl(struct ice_hw *hw)
 {
+	struct ice_acl_scen *pos_scen, *tmp_scen;
 	struct ice_aqc_acl_generic resp_buf;
+	struct ice_aqc_acl_scen buf;
 	enum ice_status status;
+	u8 i;
 
 	if (!hw->acl_tbl)
 		return ICE_ERR_DOES_NOT_EXIST;
 
+	/* Mark the TCAMs of all created scenarios to stop packet lookup and
+	 * delete the scenarios afterward
+	 */
+	list_for_each_entry_safe(pos_scen, tmp_scen, &hw->acl_tbl->scens,
+				 list_entry) {
+		status = ice_aq_query_acl_scen(hw, pos_scen->id, &buf, NULL);
+		if (status) {
+			ice_debug(hw, ICE_DBG_ACL, "ice_aq_query_acl_scen() failed. status: %d\n",
+				  status);
+			return status;
+		}
+
+		for (i = 0; i < ICE_AQC_ACL_SLICES; i++) {
+			buf.tcam_cfg[i].chnk_msk = 0;
+			buf.tcam_cfg[i].start_cmp_set =
+					ICE_AQC_ACL_ALLOC_SCE_START_CMP;
+		}
+
+		for (i = 0; i < ICE_AQC_MAX_ACTION_MEMORIES; i++)
+			buf.act_mem_cfg[i] = 0;
+
+		status = ice_aq_update_acl_scen(hw, pos_scen->id, &buf, NULL);
+		if (status) {
+			ice_debug(hw, ICE_DBG_ACL, "ice_aq_update_acl_scen() failed. status: %d\n",
+				  status);
+			return status;
+		}
+
+		status = ice_acl_destroy_scen(hw, pos_scen->id);
+		if (status) {
+			ice_debug(hw, ICE_DBG_ACL, "deletion of scenario failed. status: %d\n",
+				  status);
+			return status;
+		}
+	}
+
 	/* call the AQ command to destroy the ACL table */
 	status = ice_aq_dealloc_acl_tbl(hw, hw->acl_tbl->id, &resp_buf, NULL);
 	if (status) {
diff --git a/drivers/net/ethernet/intel/ice/ice_adminq_cmd.h b/drivers/net/ethernet/intel/ice/ice_adminq_cmd.h
index 688a2069482d..062a90248f8f 100644
--- a/drivers/net/ethernet/intel/ice/ice_adminq_cmd.h
+++ b/drivers/net/ethernet/intel/ice/ice_adminq_cmd.h
@@ -1711,6 +1711,33 @@ struct ice_aqc_acl_generic {
 	u8 act_mem[ICE_AQC_MAX_ACTION_MEMORIES];
 };
 
+/* Allocate ACL scenario (indirect 0x0C14). This command doesn't have a
+ * separate response buffer since the original command buffer gets updated
+ * with 'scen_id' on success
+ */
+struct ice_aqc_acl_alloc_scen {
+	union {
+		struct {
+			u8 reserved[8];
+		} cmd;
+		struct {
+			__le16 scen_id;
+			u8 reserved[6];
+		} resp;
+	} ops;
+	__le32 addr_high;
+	__le32 addr_low;
+};
+
+/* De-allocate ACL scenario (direct 0x0C15). This command doesn't need a
+ * separate response buffer since nothing is returned as a response
+ * except the status.
+ */
+struct ice_aqc_acl_dealloc_scen {
+	__le16 scen_id;
+	u8 reserved[14];
+};
+
 /* Update ACL scenario (direct 0x0C1B)
  * Query ACL scenario (direct 0x0C23)
  */
@@ -2081,6 +2108,8 @@ struct ice_aq_desc {
 		struct ice_aqc_get_set_rss_key get_set_rss_key;
 		struct ice_aqc_acl_alloc_table alloc_table;
 		struct ice_aqc_acl_tbl_actpair tbl_actpair;
+		struct ice_aqc_acl_alloc_scen alloc_scen;
+		struct ice_aqc_acl_dealloc_scen dealloc_scen;
 		struct ice_aqc_acl_update_query_scen update_query_scen;
 		struct ice_aqc_acl_entry program_query_entry;
 		struct ice_aqc_acl_actpair program_query_actpair;
@@ -2231,6 +2260,8 @@ enum ice_adminq_opc {
 	/* ACL commands */
 	ice_aqc_opc_alloc_acl_tbl			= 0x0C10,
 	ice_aqc_opc_dealloc_acl_tbl			= 0x0C11,
+	ice_aqc_opc_alloc_acl_scen			= 0x0C14,
+	ice_aqc_opc_dealloc_acl_scen			= 0x0C15,
 	ice_aqc_opc_update_acl_scen			= 0x0C1B,
 	ice_aqc_opc_program_acl_actpair			= 0x0C1C,
 	ice_aqc_opc_program_acl_entry			= 0x0C20,
diff --git a/drivers/net/ethernet/intel/ice/ice_ethtool.c b/drivers/net/ethernet/intel/ice/ice_ethtool.c
index 363377fe90ee..f53a6722c146 100644
--- a/drivers/net/ethernet/intel/ice/ice_ethtool.c
+++ b/drivers/net/ethernet/intel/ice/ice_ethtool.c
@@ -2689,8 +2689,8 @@ ice_get_rxnfc(struct net_device *netdev, struct ethtool_rxnfc *cmd,
 		break;
 	case ETHTOOL_GRXCLSRLCNT:
 		cmd->rule_cnt = hw->fdir_active_fltr;
-		/* report total rule count */
-		cmd->data = ice_get_fdir_cnt_all(hw);
+		/* report max rule count */
+		cmd->data = ice_ntuple_get_max_fltr_cnt(hw);
 		ret = 0;
 		break;
 	case ETHTOOL_GRXCLSRULE:
diff --git a/drivers/net/ethernet/intel/ice/ice_ethtool_fdir.c b/drivers/net/ethernet/intel/ice/ice_ethtool_fdir.c
index f3d2199a2b42..6869357624ab 100644
--- a/drivers/net/ethernet/intel/ice/ice_ethtool_fdir.c
+++ b/drivers/net/ethernet/intel/ice/ice_ethtool_fdir.c
@@ -219,6 +219,22 @@ int ice_get_ethtool_fdir_entry(struct ice_hw *hw, struct ethtool_rxnfc *cmd)
 	return ret;
 }
 
+/**
+ * ice_ntuple_get_max_fltr_cnt - return the maximum number of allowed filters
+ * @hw: hardware structure containing filter information
+ */
+u32 ice_ntuple_get_max_fltr_cnt(struct ice_hw *hw)
+{
+	int acl_cnt;
+
+	if (hw->dev_caps.num_funcs < 8)
+		acl_cnt = ICE_AQC_ACL_TCAM_DEPTH / ICE_ACL_ENTIRE_SLICE;
+	else
+		acl_cnt = ICE_AQC_ACL_TCAM_DEPTH / ICE_ACL_HALF_SLICE;
+
+	return ice_get_fdir_cnt_all(hw) + acl_cnt;
+}
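
For example, assuming ICE_AQC_ACL_TCAM_DEPTH is 512 and the slice dividers
ICE_ACL_ENTIRE_SLICE and ICE_ACL_HALF_SLICE are 1 and 2 (illustrative
values; the real constants live in ice_acl.h), a device with fewer than
eight functions would report ice_get_fdir_cnt_all(hw) + 512 to ethtool,
and one with eight or more would report ice_get_fdir_cnt_all(hw) + 256.
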
+
 /**
  * ice_get_fdir_fltr_ids - fill buffer with filter IDs of active filters
  * @hw: hardware structure containing the filter list
@@ -235,8 +251,8 @@ ice_get_fdir_fltr_ids(struct ice_hw *hw, struct ethtool_rxnfc *cmd,
 	unsigned int cnt = 0;
 	int val = 0;
 
-	/* report total rule count */
-	cmd->data = ice_get_fdir_cnt_all(hw);
+	/* report max rule count */
+	cmd->data = ice_ntuple_get_max_fltr_cnt(hw);
 
 	mutex_lock(&hw->fdir_fltr_lock);
 
@@ -265,6 +281,9 @@ ice_get_fdir_fltr_ids(struct ice_hw *hw, struct ethtool_rxnfc *cmd,
 static struct ice_fd_hw_prof *
 ice_fdir_get_hw_prof(struct ice_hw *hw, enum ice_block blk, int flow)
 {
+	if (blk == ICE_BLK_ACL && hw->acl_prof)
+		return hw->acl_prof[flow];
+
 	if (blk == ICE_BLK_FD && hw->fdir_prof)
 		return hw->fdir_prof[flow];
 
@@ -1345,11 +1364,12 @@ void ice_vsi_manage_fdir(struct ice_vsi *vsi, bool ena)
 	if (!test_and_clear_bit(ICE_FLAG_FD_ENA, pf->flags))
 		goto release_lock;
 	list_for_each_entry_safe(f_rule, tmp, &hw->fdir_list_head, fltr_node) {
-		/* ignore return value */
-		ice_fdir_write_all_fltr(pf, f_rule, false);
-		ice_fdir_update_cntrs(hw, f_rule->flow_type, false);
+		if (!f_rule->acl_fltr)
+			ice_fdir_write_all_fltr(pf, f_rule, false);
+		ice_fdir_update_cntrs(hw, f_rule->flow_type, f_rule->acl_fltr,
+				      false);
 		list_del(&f_rule->fltr_node);
-		devm_kfree(ice_hw_to_dev(hw), f_rule);
+		devm_kfree(ice_pf_to_dev(pf), f_rule);
 	}
 
 	if (hw->fdir_prof)
@@ -1358,6 +1378,12 @@ void ice_vsi_manage_fdir(struct ice_vsi *vsi, bool ena)
 			if (hw->fdir_prof[flow])
 				ice_fdir_rem_flow(hw, ICE_BLK_FD, flow);
 
+	if (hw->acl_prof)
+		for (flow = ICE_FLTR_PTYPE_NONF_NONE; flow < ICE_FLTR_PTYPE_MAX;
+		     flow++)
+			if (hw->acl_prof[flow])
+				ice_fdir_rem_flow(hw, ICE_BLK_ACL, flow);
+
 release_lock:
 	mutex_unlock(&hw->fdir_fltr_lock);
 }
@@ -1412,7 +1438,8 @@ ice_ntuple_update_list_entry(struct ice_pf *pf, struct ice_fdir_fltr *input,
 		err = ice_fdir_write_all_fltr(pf, old_fltr, false);
 		if (err)
 			return err;
-		ice_fdir_update_cntrs(hw, old_fltr->flow_type, false);
+		ice_fdir_update_cntrs(hw, old_fltr->flow_type,
+				      false, false);
 		if (!input && !hw->fdir_fltr_cnt[old_fltr->flow_type])
 			/* we just deleted the last filter of flow_type so we
 			 * should also delete the HW filter info.
@@ -1424,7 +1451,7 @@ ice_ntuple_update_list_entry(struct ice_pf *pf, struct ice_fdir_fltr *input,
 	if (!input)
 		return err;
 	ice_fdir_list_add_fltr(hw, input);
-	ice_fdir_update_cntrs(hw, input->flow_type, true);
+	ice_fdir_update_cntrs(hw, input->flow_type, input->acl_fltr, true);
 	return 0;
 }
 
@@ -1640,7 +1667,7 @@ int ice_add_ntuple_ethtool(struct ice_vsi *vsi, struct ethtool_rxnfc *cmd)
 	if (ret)
 		return ret;
 
-	if (fsp->location >= ice_get_fdir_cnt_all(hw)) {
+	if (fsp->location >= ice_ntuple_get_max_fltr_cnt(hw)) {
 		dev_err(dev, "Failed to add filter.  The maximum number of flow director filters has been reached.\n");
 		return -ENOSPC;
 	}
@@ -1683,7 +1710,7 @@ int ice_add_ntuple_ethtool(struct ice_vsi *vsi, struct ethtool_rxnfc *cmd)
 	goto release_lock;
 
 remove_sw_rule:
-	ice_fdir_update_cntrs(hw, input->flow_type, false);
+	ice_fdir_update_cntrs(hw, input->flow_type, false, false);
 	list_del(&input->fltr_node);
 release_lock:
 	mutex_unlock(&hw->fdir_fltr_lock);
diff --git a/drivers/net/ethernet/intel/ice/ice_fdir.c b/drivers/net/ethernet/intel/ice/ice_fdir.c
index 59c0c6a0f8c5..6e9d1e6f4159 100644
--- a/drivers/net/ethernet/intel/ice/ice_fdir.c
+++ b/drivers/net/ethernet/intel/ice/ice_fdir.c
@@ -718,20 +718,25 @@ void ice_fdir_list_add_fltr(struct ice_hw *hw, struct ice_fdir_fltr *fltr)
  * ice_fdir_update_cntrs - increment / decrement filter counter
  * @hw: pointer to hardware structure
  * @flow: filter flow type
+ * @acl_fltr: true indicates an ACL filter
  * @add: true implies filters added
  */
 void
-ice_fdir_update_cntrs(struct ice_hw *hw, enum ice_fltr_ptype flow, bool add)
+ice_fdir_update_cntrs(struct ice_hw *hw, enum ice_fltr_ptype flow,
+		      bool acl_fltr, bool add)
 {
 	int incr;
 
 	incr = add ? 1 : -1;
 	hw->fdir_active_fltr += incr;
-
-	if (flow == ICE_FLTR_PTYPE_NONF_NONE || flow >= ICE_FLTR_PTYPE_MAX)
+	if (flow == ICE_FLTR_PTYPE_NONF_NONE || flow >= ICE_FLTR_PTYPE_MAX) {
 		ice_debug(hw, ICE_DBG_SW, "Unknown filter type %d\n", flow);
-	else
-		hw->fdir_fltr_cnt[flow] += incr;
+	} else {
+		if (acl_fltr)
+			hw->acl_fltr_cnt[flow] += incr;
+		else
+			hw->fdir_fltr_cnt[flow] += incr;
+	}
 }
 
 /**
diff --git a/drivers/net/ethernet/intel/ice/ice_fdir.h b/drivers/net/ethernet/intel/ice/ice_fdir.h
index 1c587766daab..5cc89d91ec39 100644
--- a/drivers/net/ethernet/intel/ice/ice_fdir.h
+++ b/drivers/net/ethernet/intel/ice/ice_fdir.h
@@ -132,6 +132,8 @@ struct ice_fdir_fltr {
 	u8 fltr_status;
 	u16 cnt_index;
 	u32 fltr_id;
+	/* Set to true for an ACL filter */
+	bool acl_fltr;
 };
 
 /* Dummy packet filter definition structure */
@@ -161,6 +163,7 @@ bool ice_fdir_has_frag(enum ice_fltr_ptype flow);
 struct ice_fdir_fltr *
 ice_fdir_find_fltr_by_idx(struct ice_hw *hw, u32 fltr_idx);
 void
-ice_fdir_update_cntrs(struct ice_hw *hw, enum ice_fltr_ptype flow, bool add);
+ice_fdir_update_cntrs(struct ice_hw *hw, enum ice_fltr_ptype flow,
+		      bool acl_fltr, bool add);
 void ice_fdir_list_add_fltr(struct ice_hw *hw, struct ice_fdir_fltr *input);
 #endif /* _ICE_FDIR_H_ */
diff --git a/drivers/net/ethernet/intel/ice/ice_flow.h b/drivers/net/ethernet/intel/ice/ice_flow.h
index 875e8b8f1c84..00109262f152 100644
--- a/drivers/net/ethernet/intel/ice/ice_flow.h
+++ b/drivers/net/ethernet/intel/ice/ice_flow.h
@@ -214,6 +214,13 @@ struct ice_flow_prof {
 
 	/* software VSI handles referenced by this flow profile */
 	DECLARE_BITMAP(vsis, ICE_MAX_VSI);
+
+	union {
+		/* struct sw_recipe */
+		struct ice_acl_scen *scen;
+		/* struct fd */
+		u32 data;
+	} cfg;
 };
 
 struct ice_rss_cfg {
diff --git a/drivers/net/ethernet/intel/ice/ice_main.c b/drivers/net/ethernet/intel/ice/ice_main.c
index a49751f6951a..67bab35d590b 100644
--- a/drivers/net/ethernet/intel/ice/ice_main.c
+++ b/drivers/net/ethernet/intel/ice/ice_main.c
@@ -3832,7 +3832,9 @@ static int ice_init_acl(struct ice_pf *pf)
 {
 	struct ice_acl_tbl_params params;
 	struct ice_hw *hw = &pf->hw;
+	enum ice_status status;
 	int divider;
+	u16 scen_id;
 
 	/* Creates a single ACL table that consist of src_ip(4 byte),
 	 * dest_ip(4 byte), src_port(2 byte) and dst_port(2 byte) for a total
@@ -3852,7 +3854,12 @@ static int ice_init_acl(struct ice_pf *pf)
 	params.entry_act_pairs = 1;
 	params.concurr = false;
 
-	return ice_status_to_errno(ice_acl_create_tbl(hw, &params));
+	status = ice_acl_create_tbl(hw, &params);
+	if (status)
+		return ice_status_to_errno(status);
+
+	return ice_status_to_errno(ice_acl_create_scen(hw, params.width,
+						       params.depth, &scen_id));
 }
 
 /**
diff --git a/drivers/net/ethernet/intel/ice/ice_type.h b/drivers/net/ethernet/intel/ice/ice_type.h
index a1600c7e8b17..f5e42ba9a286 100644
--- a/drivers/net/ethernet/intel/ice/ice_type.h
+++ b/drivers/net/ethernet/intel/ice/ice_type.h
@@ -681,6 +681,8 @@ struct ice_hw {
 	struct udp_tunnel_nic_info udp_tunnel_nic;
 
 	struct ice_acl_tbl *acl_tbl;
+	struct ice_fd_hw_prof **acl_prof;
+	u16 acl_fltr_cnt[ICE_FLTR_PTYPE_MAX];
 
 	/* HW block tables */
 	struct ice_blk_info blk[ICE_BLK_COUNT];
-- 
2.26.2


^ permalink raw reply related	[flat|nested] 28+ messages in thread

* [net-next v3 05/15] ice: create flow profile
  2020-11-13 21:44 [net-next v3 00/15][pull request] 100GbE Intel Wired LAN Driver Updates 2020-11-13 Tony Nguyen
                   ` (3 preceding siblings ...)
  2020-11-13 21:44 ` [net-next v3 04/15] ice: initialize ACL scenario Tony Nguyen
@ 2020-11-13 21:44 ` Tony Nguyen
  2020-11-13 23:56   ` Alexander Duyck
  2020-11-13 21:44 ` [net-next v3 06/15] ice: create ACL entry Tony Nguyen
                   ` (9 subsequent siblings)
  14 siblings, 1 reply; 28+ messages in thread
From: Tony Nguyen @ 2020-11-13 21:44 UTC (permalink / raw)
  To: davem, kuba
  Cc: Real Valiquette, netdev, sassmann, anthony.l.nguyen, Chinh Cao,
	Brijesh Behera

From: Real Valiquette <real.valiquette@intel.com>

Implement the initial steps for creating an ACL filter to support ntuple
masks. Create a flow profile based on a given mask rule and program it to
the hardware. Though the profile is written to hardware, no actions are
associated with the profile yet.
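
Rules take the ACL path only when a field carries a partial mask. For
example, a command along these lines (interface name and values are
illustrative only):

  # ethtool -N eth0 flow-type tcp4 src-ip 192.168.0.0 m 0.0.255.255 \
	dst-port 80 action 10

would be classified as an ACL filter because of the partially masked
source address.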

Co-developed-by: Chinh Cao <chinh.t.cao@intel.com>
Signed-off-by: Chinh Cao <chinh.t.cao@intel.com>
Signed-off-by: Real Valiquette <real.valiquette@intel.com>
Co-developed-by: Tony Nguyen <anthony.l.nguyen@intel.com>
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
Tested-by: Brijesh Behera <brijeshx.behera@intel.com>
---
 drivers/net/ethernet/intel/ice/Makefile       |   1 +
 drivers/net/ethernet/intel/ice/ice.h          |   9 +
 drivers/net/ethernet/intel/ice/ice_acl_main.c | 260 ++++++++++++++++
 .../net/ethernet/intel/ice/ice_adminq_cmd.h   |  39 +++
 .../net/ethernet/intel/ice/ice_ethtool_fdir.c | 290 ++++++++++++++----
 .../net/ethernet/intel/ice/ice_flex_pipe.c    |  12 +-
 drivers/net/ethernet/intel/ice/ice_flow.c     | 178 ++++++++++-
 drivers/net/ethernet/intel/ice/ice_flow.h     |  17 +
 8 files changed, 727 insertions(+), 79 deletions(-)
 create mode 100644 drivers/net/ethernet/intel/ice/ice_acl_main.c

diff --git a/drivers/net/ethernet/intel/ice/Makefile b/drivers/net/ethernet/intel/ice/Makefile
index 0747976622cf..36a787b5ad8d 100644
--- a/drivers/net/ethernet/intel/ice/Makefile
+++ b/drivers/net/ethernet/intel/ice/Makefile
@@ -20,6 +20,7 @@ ice-y := ice_main.o	\
 	 ice_fltr.o	\
 	 ice_fdir.o	\
 	 ice_ethtool_fdir.o \
+	 ice_acl_main.o	\
 	 ice_acl.o	\
 	 ice_acl_ctrl.o	\
 	 ice_flex_pipe.o \
diff --git a/drivers/net/ethernet/intel/ice/ice.h b/drivers/net/ethernet/intel/ice/ice.h
index 1008a6785e55..d813a5c765d0 100644
--- a/drivers/net/ethernet/intel/ice/ice.h
+++ b/drivers/net/ethernet/intel/ice/ice.h
@@ -601,16 +601,25 @@ int ice_del_ntuple_ethtool(struct ice_vsi *vsi, struct ethtool_rxnfc *cmd);
 int ice_get_ethtool_fdir_entry(struct ice_hw *hw, struct ethtool_rxnfc *cmd);
 u32 ice_ntuple_get_max_fltr_cnt(struct ice_hw *hw);
 int
+ice_ntuple_l4_proto_to_port(enum ice_flow_seg_hdr l4_proto,
+			    enum ice_flow_field *src_port,
+			    enum ice_flow_field *dst_port);
+int ice_ntuple_check_ip4_seg(struct ethtool_tcpip4_spec *tcp_ip4_spec);
+int ice_ntuple_check_ip4_usr_seg(struct ethtool_usrip4_spec *usr_ip4_spec);
+int
 ice_get_fdir_fltr_ids(struct ice_hw *hw, struct ethtool_rxnfc *cmd,
 		      u32 *rule_locs);
 void ice_fdir_release_flows(struct ice_hw *hw);
 void ice_fdir_replay_flows(struct ice_hw *hw);
 void ice_fdir_replay_fltrs(struct ice_pf *pf);
 int ice_fdir_create_dflt_rules(struct ice_pf *pf);
+enum ice_fltr_ptype ice_ethtool_flow_to_fltr(int eth);
 int ice_aq_wait_for_event(struct ice_pf *pf, u16 opcode, unsigned long timeout,
 			  struct ice_rq_event_info *event);
 int ice_open(struct net_device *netdev);
 int ice_stop(struct net_device *netdev);
 void ice_service_task_schedule(struct ice_pf *pf);
+int
+ice_acl_add_rule_ethtool(struct ice_vsi *vsi, struct ethtool_rxnfc *cmd);
 
 #endif /* _ICE_H_ */
diff --git a/drivers/net/ethernet/intel/ice/ice_acl_main.c b/drivers/net/ethernet/intel/ice/ice_acl_main.c
new file mode 100644
index 000000000000..be97dfb94652
--- /dev/null
+++ b/drivers/net/ethernet/intel/ice/ice_acl_main.c
@@ -0,0 +1,260 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright (C) 2018-2020, Intel Corporation. */
+
+/* ACL support for ice */
+
+#include "ice.h"
+#include "ice_lib.h"
+
+/* Number of actions */
+#define ICE_ACL_NUM_ACT		1
+
+/**
+ * ice_acl_set_ip4_addr_seg
+ * @seg: flow segment for programming
+ *
+ * Set the IPv4 source and destination address mask for the given flow segment
+ */
+static void ice_acl_set_ip4_addr_seg(struct ice_flow_seg_info *seg)
+{
+	u16 val_loc, mask_loc;
+
+	/* IP source address */
+	val_loc = offsetof(struct ice_fdir_fltr, ip.v4.src_ip);
+	mask_loc = offsetof(struct ice_fdir_fltr, mask.v4.src_ip);
+
+	ice_flow_set_fld(seg, ICE_FLOW_FIELD_IDX_IPV4_SA, val_loc,
+			 mask_loc, ICE_FLOW_FLD_OFF_INVAL, false);
+
+	/* IP destination address */
+	val_loc = offsetof(struct ice_fdir_fltr, ip.v4.dst_ip);
+	mask_loc = offsetof(struct ice_fdir_fltr, mask.v4.dst_ip);
+
+	ice_flow_set_fld(seg, ICE_FLOW_FIELD_IDX_IPV4_DA, val_loc,
+			 mask_loc, ICE_FLOW_FLD_OFF_INVAL, false);
+}
+
+/**
+ * ice_acl_set_ip4_port_seg
+ * @seg: flow segment for programming
+ * @l4_proto: Layer 4 protocol to program
+ *
+ * Set the source and destination port for the given flow segment based on the
+ * provided layer 4 protocol
+ */
+static int
+ice_acl_set_ip4_port_seg(struct ice_flow_seg_info *seg,
+			 enum ice_flow_seg_hdr l4_proto)
+{
+	enum ice_flow_field src_port, dst_port;
+	u16 val_loc, mask_loc;
+	int err;
+
+	err = ice_ntuple_l4_proto_to_port(l4_proto, &src_port, &dst_port);
+	if (err)
+		return err;
+
+	/* Layer 4 source port */
+	val_loc = offsetof(struct ice_fdir_fltr, ip.v4.src_port);
+	mask_loc = offsetof(struct ice_fdir_fltr, mask.v4.src_port);
+
+	ice_flow_set_fld(seg, src_port, val_loc, mask_loc,
+			 ICE_FLOW_FLD_OFF_INVAL, false);
+
+	/* Layer 4 destination port */
+	val_loc = offsetof(struct ice_fdir_fltr, ip.v4.dst_port);
+	mask_loc = offsetof(struct ice_fdir_fltr, mask.v4.dst_port);
+
+	ice_flow_set_fld(seg, dst_port, val_loc, mask_loc,
+			 ICE_FLOW_FLD_OFF_INVAL, false);
+
+	return 0;
+}
+
+/**
+ * ice_acl_set_ip4_seg
+ * @seg: flow segment for programming
+ * @tcp_ip4_spec: mask data from ethtool
+ * @l4_proto: Layer 4 protocol to program
+ *
+ * Set the mask data into the flow segment to be used to program HW
+ * table based on provided L4 protocol for IPv4
+ */
+static int
+ice_acl_set_ip4_seg(struct ice_flow_seg_info *seg,
+		    struct ethtool_tcpip4_spec *tcp_ip4_spec,
+		    enum ice_flow_seg_hdr l4_proto)
+{
+	int err;
+
+	if (!seg)
+		return -EINVAL;
+
+	err = ice_ntuple_check_ip4_seg(tcp_ip4_spec);
+	if (err)
+		return err;
+
+	ICE_FLOW_SET_HDRS(seg, ICE_FLOW_SEG_HDR_IPV4 | l4_proto);
+	ice_acl_set_ip4_addr_seg(seg);
+
+	return ice_acl_set_ip4_port_seg(seg, l4_proto);
+}
+
+/**
+ * ice_acl_set_ip4_usr_seg
+ * @seg: flow segment for programming
+ * @usr_ip4_spec: ethtool userdef packet offset
+ *
+ * Set the offset data into the flow segment to be used to program HW
+ * table for IPv4
+ */
+static int
+ice_acl_set_ip4_usr_seg(struct ice_flow_seg_info *seg,
+			struct ethtool_usrip4_spec *usr_ip4_spec)
+{
+	int err;
+
+	if (!seg)
+		return -EINVAL;
+
+	err = ice_ntuple_check_ip4_usr_seg(usr_ip4_spec);
+	if (err)
+		return err;
+
+	ICE_FLOW_SET_HDRS(seg, ICE_FLOW_SEG_HDR_IPV4);
+	ice_acl_set_ip4_addr_seg(seg);
+
+	return 0;
+}
+
+/**
+ * ice_acl_check_input_set - Checks that a given ACL input set is valid
+ * @pf: ice PF structure
+ * @fsp: pointer to ethtool Rx flow specification
+ *
+ * Returns 0 on success and negative values for failure
+ */
+static int
+ice_acl_check_input_set(struct ice_pf *pf, struct ethtool_rx_flow_spec *fsp)
+{
+	struct ice_fd_hw_prof *hw_prof = NULL;
+	struct ice_flow_prof *prof = NULL;
+	struct ice_flow_seg_info *old_seg;
+	struct ice_flow_seg_info *seg;
+	enum ice_fltr_ptype fltr_type;
+	struct ice_hw *hw = &pf->hw;
+	enum ice_status status;
+	struct device *dev;
+	int err;
+
+	if (!fsp)
+		return -EINVAL;
+
+	dev = ice_pf_to_dev(pf);
+	seg = devm_kzalloc(dev, sizeof(*seg), GFP_KERNEL);
+	if (!seg)
+		return -ENOMEM;
+
+	switch (fsp->flow_type & ~FLOW_EXT) {
+	case TCP_V4_FLOW:
+		err = ice_acl_set_ip4_seg(seg, &fsp->m_u.tcp_ip4_spec,
+					  ICE_FLOW_SEG_HDR_TCP);
+		break;
+	case UDP_V4_FLOW:
+		err = ice_acl_set_ip4_seg(seg, &fsp->m_u.tcp_ip4_spec,
+					  ICE_FLOW_SEG_HDR_UDP);
+		break;
+	case SCTP_V4_FLOW:
+		err = ice_acl_set_ip4_seg(seg, &fsp->m_u.tcp_ip4_spec,
+					  ICE_FLOW_SEG_HDR_SCTP);
+		break;
+	case IPV4_USER_FLOW:
+		err = ice_acl_set_ip4_usr_seg(seg, &fsp->m_u.usr_ip4_spec);
+		break;
+	default:
+		err = -EOPNOTSUPP;
+	}
+	if (err)
+		goto err_exit;
+
+	fltr_type = ice_ethtool_flow_to_fltr(fsp->flow_type & ~FLOW_EXT);
+
+	if (!hw->acl_prof) {
+		hw->acl_prof = devm_kcalloc(dev, ICE_FLTR_PTYPE_MAX,
+					    sizeof(*hw->acl_prof), GFP_KERNEL);
+		if (!hw->acl_prof) {
+			err = -ENOMEM;
+			goto err_exit;
+		}
+	}
+	if (!hw->acl_prof[fltr_type]) {
+		hw->acl_prof[fltr_type] = devm_kzalloc(dev,
+						       sizeof(**hw->acl_prof),
+						       GFP_KERNEL);
+		if (!hw->acl_prof[fltr_type]) {
+			err = -ENOMEM;
+			goto err_acl_prof_exit;
+		}
+		hw->acl_prof[fltr_type]->cnt = 0;
+	}
+
+	hw_prof = hw->acl_prof[fltr_type];
+	old_seg = hw_prof->fdir_seg[0];
+	if (old_seg) {
+		/* This flow_type already has an input set.
+		 * If it matches the requested input set then we are
+		 * done. If it's different then it's an error.
+		 */
+		if (!memcmp(old_seg, seg, sizeof(*seg))) {
+			devm_kfree(dev, seg);
+			return 0;
+		}
+
+		err = -EINVAL;
+		goto err_acl_prof_flow_exit;
+	}
+
+	/* Add a profile for the given flow specification with no
+	 * actions (NULL) and an action count of zero.
+	 */
+	status = ice_flow_add_prof(hw, ICE_BLK_ACL, ICE_FLOW_RX, fltr_type,
+				   seg, 1, &prof);
+	if (status) {
+		err = ice_status_to_errno(status);
+		goto err_exit;
+	}
+
+	hw_prof->fdir_seg[0] = seg;
+	return 0;
+
+err_acl_prof_flow_exit:
+	devm_kfree(dev, hw->acl_prof[fltr_type]);
+err_acl_prof_exit:
+	devm_kfree(dev, hw->acl_prof);
+err_exit:
+	devm_kfree(dev, seg);
+
+	return err;
+}
+
+/**
+ * ice_acl_add_rule_ethtool - Adds an ACL rule
+ * @vsi: pointer to target VSI
+ * @cmd: command to add or delete ACL rule
+ *
+ * Returns 0 on success and negative values for failure
+ */
+int ice_acl_add_rule_ethtool(struct ice_vsi *vsi, struct ethtool_rxnfc *cmd)
+{
+	struct ethtool_rx_flow_spec *fsp;
+	struct ice_pf *pf;
+
+	if (!vsi || !cmd)
+		return -EINVAL;
+
+	pf = vsi->back;
+
+	fsp = (struct ethtool_rx_flow_spec *)&cmd->fs;
+
+	return ice_acl_check_input_set(pf, fsp);
+}
diff --git a/drivers/net/ethernet/intel/ice/ice_adminq_cmd.h b/drivers/net/ethernet/intel/ice/ice_adminq_cmd.h
index 062a90248f8f..f5fdab2b7058 100644
--- a/drivers/net/ethernet/intel/ice/ice_adminq_cmd.h
+++ b/drivers/net/ethernet/intel/ice/ice_adminq_cmd.h
@@ -234,6 +234,8 @@ struct ice_aqc_get_sw_cfg_resp_elem {
 #define ICE_AQC_RES_TYPE_FDIR_COUNTER_BLOCK		0x21
 #define ICE_AQC_RES_TYPE_FDIR_GUARANTEED_ENTRIES	0x22
 #define ICE_AQC_RES_TYPE_FDIR_SHARED_ENTRIES		0x23
+#define ICE_AQC_RES_TYPE_ACL_PROF_BLDR_PROFID		0x50
+#define ICE_AQC_RES_TYPE_ACL_PROF_BLDR_TCAM		0x51
 #define ICE_AQC_RES_TYPE_FD_PROF_BLDR_PROFID		0x58
 #define ICE_AQC_RES_TYPE_FD_PROF_BLDR_TCAM		0x59
 #define ICE_AQC_RES_TYPE_HASH_PROF_BLDR_PROFID		0x60
@@ -1814,6 +1816,43 @@ struct ice_aqc_actpair {
 	struct ice_acl_act_entry act[ICE_ACL_NUM_ACT_PER_ACT_PAIR];
 };
 
+/* The first byte of the byte selection base is reserved to keep the
+ * first byte of the field vector where the packet direction info is
+ * available. Thus we should start at index 1 of the field vector to
+ * map its entries to the byte selection base.
+ */
+#define ICE_AQC_ACL_PROF_BYTE_SEL_START_IDX	1
+#define ICE_AQC_ACL_PROF_BYTE_SEL_ELEMS		30
+
+/* Input buffer format for program profile extraction admin command and
+ * response buffer format for query profile admin command is as defined
+ * in struct ice_aqc_acl_prof_generic_frmt
+ */
+
+/* Input buffer format for program profile ranges and query profile ranges
+ * admin commands. Same format is used for response buffer in case of query
+ * profile ranges command
+ */
+struct ice_acl_rng_data {
+	/* The range checker output shall be sent when the value
+	 * related to this range checker is lower than the low boundary
+	 */
+	__be16 low_boundary;
+	/* The range checker output shall be sent when the value
+	 * related to this range checker is higher than the high boundary
+	 */
+	__be16 high_boundary;
+	/* A value of '0' in a bit shall clear the relevant bit of the
+	 * input to the range checker
+	 */
+	__be16 mask;
+};
+
+struct ice_aqc_acl_profile_ranges {
+#define ICE_AQC_ACL_PROF_RANGES_NUM_CFG 8
+	struct ice_acl_rng_data checker_cfg[ICE_AQC_ACL_PROF_RANGES_NUM_CFG];
+};
+
 /* Program ACL entry (indirect 0x0C20) */
 struct ice_aqc_acl_entry {
 	u8 tcam_index; /* Updated TCAM block index */
diff --git a/drivers/net/ethernet/intel/ice/ice_ethtool_fdir.c b/drivers/net/ethernet/intel/ice/ice_ethtool_fdir.c
index 6869357624ab..ef641bc8ca0e 100644
--- a/drivers/net/ethernet/intel/ice/ice_ethtool_fdir.c
+++ b/drivers/net/ethernet/intel/ice/ice_ethtool_fdir.c
@@ -68,7 +68,7 @@ static int ice_fltr_to_ethtool_flow(enum ice_fltr_ptype flow)
  *
  * Returns flow enum
  */
-static enum ice_fltr_ptype ice_ethtool_flow_to_fltr(int eth)
+enum ice_fltr_ptype ice_ethtool_flow_to_fltr(int eth)
 {
 	switch (eth) {
 	case TCP_V4_FLOW:
@@ -773,6 +773,56 @@ ice_create_init_fdir_rule(struct ice_pf *pf, enum ice_fltr_ptype flow)
 	return -EOPNOTSUPP;
 }
 
+/**
+ * ice_ntuple_check_ip4_seg - Check valid fields are provided for filter
+ * @tcp_ip4_spec: mask data from ethtool
+ */
+int ice_ntuple_check_ip4_seg(struct ethtool_tcpip4_spec *tcp_ip4_spec)
+{
+	if (!tcp_ip4_spec)
+		return -EINVAL;
+
+	/* make sure we don't have any empty rule */
+	if (!tcp_ip4_spec->psrc && !tcp_ip4_spec->ip4src &&
+	    !tcp_ip4_spec->pdst && !tcp_ip4_spec->ip4dst)
+		return -EINVAL;
+
+	/* filtering on TOS not supported */
+	if (tcp_ip4_spec->tos)
+		return -EOPNOTSUPP;
+
+	return 0;
+}
+
+/**
+ * ice_ntuple_l4_proto_to_port
+ * @l4_proto: Layer 4 protocol to program
+ * @src_port: source flow field value for provided l4 protocol
+ * @dst_port: destination flow field value for provided l4 protocol
+ *
+ * Set associated src and dst port for given l4 protocol
+ */
+int
+ice_ntuple_l4_proto_to_port(enum ice_flow_seg_hdr l4_proto,
+			    enum ice_flow_field *src_port,
+			    enum ice_flow_field *dst_port)
+{
+	if (l4_proto == ICE_FLOW_SEG_HDR_TCP) {
+		*src_port = ICE_FLOW_FIELD_IDX_TCP_SRC_PORT;
+		*dst_port = ICE_FLOW_FIELD_IDX_TCP_DST_PORT;
+	} else if (l4_proto == ICE_FLOW_SEG_HDR_UDP) {
+		*src_port = ICE_FLOW_FIELD_IDX_UDP_SRC_PORT;
+		*dst_port = ICE_FLOW_FIELD_IDX_UDP_DST_PORT;
+	} else if (l4_proto == ICE_FLOW_SEG_HDR_SCTP) {
+		*src_port = ICE_FLOW_FIELD_IDX_SCTP_SRC_PORT;
+		*dst_port = ICE_FLOW_FIELD_IDX_SCTP_DST_PORT;
+	} else {
+		return -EOPNOTSUPP;
+	}
+
+	return 0;
+}
+
 /**
  * ice_set_fdir_ip4_seg
  * @seg: flow segment for programming
@@ -790,28 +840,18 @@ ice_set_fdir_ip4_seg(struct ice_flow_seg_info *seg,
 		     enum ice_flow_seg_hdr l4_proto, bool *perfect_fltr)
 {
 	enum ice_flow_field src_port, dst_port;
+	int ret;
 
-	/* make sure we don't have any empty rule */
-	if (!tcp_ip4_spec->psrc && !tcp_ip4_spec->ip4src &&
-	    !tcp_ip4_spec->pdst && !tcp_ip4_spec->ip4dst)
+	if (!seg || !perfect_fltr)
 		return -EINVAL;
 
-	/* filtering on TOS not supported */
-	if (tcp_ip4_spec->tos)
-		return -EOPNOTSUPP;
+	ret = ice_ntuple_check_ip4_seg(tcp_ip4_spec);
+	if (ret)
+		return ret;
 
-	if (l4_proto == ICE_FLOW_SEG_HDR_TCP) {
-		src_port = ICE_FLOW_FIELD_IDX_TCP_SRC_PORT;
-		dst_port = ICE_FLOW_FIELD_IDX_TCP_DST_PORT;
-	} else if (l4_proto == ICE_FLOW_SEG_HDR_UDP) {
-		src_port = ICE_FLOW_FIELD_IDX_UDP_SRC_PORT;
-		dst_port = ICE_FLOW_FIELD_IDX_UDP_DST_PORT;
-	} else if (l4_proto == ICE_FLOW_SEG_HDR_SCTP) {
-		src_port = ICE_FLOW_FIELD_IDX_SCTP_SRC_PORT;
-		dst_port = ICE_FLOW_FIELD_IDX_SCTP_DST_PORT;
-	} else {
-		return -EOPNOTSUPP;
-	}
+	ret = ice_ntuple_l4_proto_to_port(l4_proto, &src_port, &dst_port);
+	if (ret)
+		return ret;
 
 	*perfect_fltr = true;
 	ICE_FLOW_SET_HDRS(seg, ICE_FLOW_SEG_HDR_IPV4 | l4_proto);
@@ -860,20 +900,14 @@ ice_set_fdir_ip4_seg(struct ice_flow_seg_info *seg,
 }
 
 /**
- * ice_set_fdir_ip4_usr_seg
- * @seg: flow segment for programming
+ * ice_ntuple_check_ip4_usr_seg - Check valid fields are provided for filter
  * @usr_ip4_spec: ethtool userdef packet offset
- * @perfect_fltr: only valid on success; returns true if perfect filter,
- *		  false if not
- *
- * Set the offset data into the flow segment to be used to program HW
- * table for IPv4
  */
-static int
-ice_set_fdir_ip4_usr_seg(struct ice_flow_seg_info *seg,
-			 struct ethtool_usrip4_spec *usr_ip4_spec,
-			 bool *perfect_fltr)
+int ice_ntuple_check_ip4_usr_seg(struct ethtool_usrip4_spec *usr_ip4_spec)
 {
+	if (!usr_ip4_spec)
+		return -EINVAL;
+
 	/* first 4 bytes of Layer 4 header */
 	if (usr_ip4_spec->l4_4_bytes)
 		return -EINVAL;
@@ -888,6 +922,33 @@ ice_set_fdir_ip4_usr_seg(struct ice_flow_seg_info *seg,
 	if (!usr_ip4_spec->ip4src && !usr_ip4_spec->ip4dst)
 		return -EINVAL;
 
+	return 0;
+}
+
+/**
+ * ice_set_fdir_ip4_usr_seg
+ * @seg: flow segment for programming
+ * @usr_ip4_spec: ethtool userdef packet offset
+ * @perfect_fltr: only set on success; returns true if perfect filter, false if
+ *		  not
+ *
+ * Set the offset data into the flow segment to be used to program HW
+ * table for IPv4
+ */
+static int
+ice_set_fdir_ip4_usr_seg(struct ice_flow_seg_info *seg,
+			 struct ethtool_usrip4_spec *usr_ip4_spec,
+			 bool *perfect_fltr)
+{
+	int ret;
+
+	if (!seg || !perfect_fltr)
+		return -EINVAL;
+
+	ret = ice_ntuple_check_ip4_usr_seg(usr_ip4_spec);
+	if (ret)
+		return ret;
+
 	*perfect_fltr = true;
 	ICE_FLOW_SET_HDRS(seg, ICE_FLOW_SEG_HDR_IPV4);
 
@@ -914,6 +975,30 @@ ice_set_fdir_ip4_usr_seg(struct ice_flow_seg_info *seg,
 	return 0;
 }
 
+/**
+ * ice_ntuple_check_ip6_seg - Check valid fields are provided for filter
+ * @tcp_ip6_spec: mask data from ethtool
+ */
+static int ice_ntuple_check_ip6_seg(struct ethtool_tcpip6_spec *tcp_ip6_spec)
+{
+	if (!tcp_ip6_spec)
+		return -EINVAL;
+
+	/* make sure we don't have any empty rule */
+	if (!memcmp(tcp_ip6_spec->ip6src, &zero_ipv6_addr_mask,
+		    sizeof(struct in6_addr)) &&
+	    !memcmp(tcp_ip6_spec->ip6dst, &zero_ipv6_addr_mask,
+		    sizeof(struct in6_addr)) &&
+	    !tcp_ip6_spec->psrc && !tcp_ip6_spec->pdst)
+		return -EINVAL;
+
+	/* filtering on TC not supported */
+	if (tcp_ip6_spec->tclass)
+		return -EOPNOTSUPP;
+
+	return 0;
+}
+
 /**
  * ice_set_fdir_ip6_seg
  * @seg: flow segment for programming
@@ -931,31 +1016,18 @@ ice_set_fdir_ip6_seg(struct ice_flow_seg_info *seg,
 		     enum ice_flow_seg_hdr l4_proto, bool *perfect_fltr)
 {
 	enum ice_flow_field src_port, dst_port;
+	int ret;
 
-	/* make sure we don't have any empty rule */
-	if (!memcmp(tcp_ip6_spec->ip6src, &zero_ipv6_addr_mask,
-		    sizeof(struct in6_addr)) &&
-	    !memcmp(tcp_ip6_spec->ip6dst, &zero_ipv6_addr_mask,
-		    sizeof(struct in6_addr)) &&
-	    !tcp_ip6_spec->psrc && !tcp_ip6_spec->pdst)
+	if (!seg || !perfect_fltr)
 		return -EINVAL;
 
-	/* filtering on TC not supported */
-	if (tcp_ip6_spec->tclass)
-		return -EOPNOTSUPP;
+	ret = ice_ntuple_check_ip6_seg(tcp_ip6_spec);
+	if (ret)
+		return ret;
 
-	if (l4_proto == ICE_FLOW_SEG_HDR_TCP) {
-		src_port = ICE_FLOW_FIELD_IDX_TCP_SRC_PORT;
-		dst_port = ICE_FLOW_FIELD_IDX_TCP_DST_PORT;
-	} else if (l4_proto == ICE_FLOW_SEG_HDR_UDP) {
-		src_port = ICE_FLOW_FIELD_IDX_UDP_SRC_PORT;
-		dst_port = ICE_FLOW_FIELD_IDX_UDP_DST_PORT;
-	} else if (l4_proto == ICE_FLOW_SEG_HDR_SCTP) {
-		src_port = ICE_FLOW_FIELD_IDX_SCTP_SRC_PORT;
-		dst_port = ICE_FLOW_FIELD_IDX_SCTP_DST_PORT;
-	} else {
-		return -EINVAL;
-	}
+	ret = ice_ntuple_l4_proto_to_port(l4_proto, &src_port, &dst_port);
+	if (ret)
+		return ret;
 
 	*perfect_fltr = true;
 	ICE_FLOW_SET_HDRS(seg, ICE_FLOW_SEG_HDR_IPV6 | l4_proto);
@@ -1006,20 +1078,15 @@ ice_set_fdir_ip6_seg(struct ice_flow_seg_info *seg,
 }
 
 /**
- * ice_set_fdir_ip6_usr_seg
- * @seg: flow segment for programming
+ * ice_ntuple_check_ip6_usr_seg - Check valid fields are provided for filter
  * @usr_ip6_spec: ethtool userdef packet offset
- * @perfect_fltr: only valid on success; returns true if perfect filter,
- *		  false if not
- *
- * Set the offset data into the flow segment to be used to program HW
- * table for IPv6
  */
 static int
-ice_set_fdir_ip6_usr_seg(struct ice_flow_seg_info *seg,
-			 struct ethtool_usrip6_spec *usr_ip6_spec,
-			 bool *perfect_fltr)
+ice_ntuple_check_ip6_usr_seg(struct ethtool_usrip6_spec *usr_ip6_spec)
 {
+	if (!usr_ip6_spec)
+		return -EINVAL;
+
 	/* filtering on Layer 4 bytes not supported */
 	if (usr_ip6_spec->l4_4_bytes)
 		return -EOPNOTSUPP;
@@ -1036,6 +1103,33 @@ ice_set_fdir_ip6_usr_seg(struct ice_flow_seg_info *seg,
 		    sizeof(struct in6_addr)))
 		return -EINVAL;
 
+	return 0;
+}
+
+/**
+ * ice_set_fdir_ip6_usr_seg
+ * @seg: flow segment for programming
+ * @usr_ip6_spec: ethtool userdef packet offset
+ * @perfect_fltr: only set on success; returns true if perfect filter, false if
+ *		  not
+ *
+ * Set the offset data into the flow segment to be used to program HW
+ * table for IPv6
+ */
+static int
+ice_set_fdir_ip6_usr_seg(struct ice_flow_seg_info *seg,
+			 struct ethtool_usrip6_spec *usr_ip6_spec,
+			 bool *perfect_fltr)
+{
+	int ret;
+
+	if (!seg || !perfect_fltr)
+		return -EINVAL;
+
+	ret = ice_ntuple_check_ip6_usr_seg(usr_ip6_spec);
+	if (ret)
+		return ret;
+
 	*perfect_fltr = true;
 	ICE_FLOW_SET_HDRS(seg, ICE_FLOW_SEG_HDR_IPV6);
 
@@ -1489,6 +1583,64 @@ int ice_del_ntuple_ethtool(struct ice_vsi *vsi, struct ethtool_rxnfc *cmd)
 	return val;
 }
 
+/**
+ * ice_is_acl_filter - Check whether a filter is an FD or an ACL filter
+ * @fsp: pointer to ethtool Rx flow specification
+ *
+ * If any field of the provided filter is using a partial mask then this is
+ * an ACL filter.
+ *
+ * Returns true if it is an ACL filter, false otherwise.
+ */
+static bool ice_is_acl_filter(struct ethtool_rx_flow_spec *fsp)
+{
+	struct ethtool_tcpip4_spec *tcp_ip4_spec;
+	struct ethtool_usrip4_spec *usr_ip4_spec;
+
+	switch (fsp->flow_type & ~FLOW_EXT) {
+	case TCP_V4_FLOW:
+	case UDP_V4_FLOW:
+	case SCTP_V4_FLOW:
+		tcp_ip4_spec = &fsp->m_u.tcp_ip4_spec;
+
+		/* IP source address */
+		if (tcp_ip4_spec->ip4src &&
+		    tcp_ip4_spec->ip4src != htonl(0xFFFFFFFF))
+			return true;
+
+		/* IP destination address */
+		if (tcp_ip4_spec->ip4dst &&
+		    tcp_ip4_spec->ip4dst != htonl(0xFFFFFFFF))
+			return true;
+
+		/* Layer 4 source port */
+		if (tcp_ip4_spec->psrc && tcp_ip4_spec->psrc != htons(0xFFFF))
+			return true;
+
+		/* Layer 4 destination port */
+		if (tcp_ip4_spec->pdst && tcp_ip4_spec->pdst != htons(0xFFFF))
+			return true;
+
+		break;
+	case IPV4_USER_FLOW:
+		usr_ip4_spec = &fsp->m_u.usr_ip4_spec;
+
+		/* IP source address */
+		if (usr_ip4_spec->ip4src &&
+		    usr_ip4_spec->ip4src != htonl(0xFFFFFFFF))
+			return true;
+
+		/* IP destination address */
+		if (usr_ip4_spec->ip4dst &&
+		    usr_ip4_spec->ip4dst != htonl(0xFFFFFFFF))
+			return true;
+
+		break;
+	}
+
+	return false;
+}
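
In kernel terms the function sees the already-normalized masks from
ethtool_rx_flow_spec, where an exact match arrives as all-ones. A minimal
userspace sketch of the same partial-mask test (the helper name is
hypothetical):

	#include <arpa/inet.h>
	#include <linux/ethtool.h>
	#include <stdbool.h>
	#include <stdio.h>

	/* mirrors the per-field test above: non-zero but not all-ones */
	static bool ip4_mask_is_partial(__be32 m)
	{
		return m && m != htonl(0xFFFFFFFF);
	}

	int main(void)
	{
		struct ethtool_rx_flow_spec fsp = { .flow_type = TCP_V4_FLOW };

		fsp.m_u.tcp_ip4_spec.ip4src = htonl(0xFFFF0000); /* /16: partial */
		fsp.m_u.tcp_ip4_spec.ip4dst = htonl(0xFFFFFFFF); /* exact match */

		/* prints "src partial: 1, dst partial: 0" */
		printf("src partial: %d, dst partial: %d\n",
		       ip4_mask_is_partial(fsp.m_u.tcp_ip4_spec.ip4src),
		       ip4_mask_is_partial(fsp.m_u.tcp_ip4_spec.ip4dst));
		return 0;
	}
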
+
 /**
  * ice_ntuple_set_input_set - Set the input set for Flow Director
  * @vsi: pointer to target VSI
@@ -1651,7 +1803,7 @@ int ice_add_ntuple_ethtool(struct ice_vsi *vsi, struct ethtool_rxnfc *cmd)
 
 	/* Do not program filters during reset */
 	if (ice_is_reset_in_progress(pf->state)) {
-		dev_err(dev, "Device is resetting - adding Flow Director filters not supported during reset\n");
+		dev_err(dev, "Device is resetting - adding ntuple filters not supported during reset\n");
 		return -EBUSY;
 	}
 
@@ -1663,15 +1815,19 @@ int ice_add_ntuple_ethtool(struct ice_vsi *vsi, struct ethtool_rxnfc *cmd)
 	if (fsp->flow_type & FLOW_MAC_EXT)
 		return -EINVAL;
 
-	ret = ice_cfg_fdir_xtrct_seq(pf, fsp, &userdata);
-	if (ret)
-		return ret;
-
 	if (fsp->location >= ice_ntuple_get_max_fltr_cnt(hw)) {
-		dev_err(dev, "Failed to add filter.  The maximum number of flow director filters has been reached.\n");
+		dev_err(dev, "Failed to add filter.  The maximum number of ntuple filters has been reached.\n");
 		return -ENOSPC;
 	}
 
+	/* ACL filter */
+	if (pf->hw.acl_tbl && ice_is_acl_filter(fsp))
+		return ice_acl_add_rule_ethtool(vsi, cmd);
+
+	ret = ice_cfg_fdir_xtrct_seq(pf, fsp, &userdata);
+	if (ret)
+		return ret;
+
 	/* return error if not an update and no available filters */
 	fltrs_needed = ice_get_open_tunnel_port(hw, &tunnel_port) ? 2 : 1;
 	if (!ice_fdir_find_fltr_by_idx(hw, fsp->location) &&
diff --git a/drivers/net/ethernet/intel/ice/ice_flex_pipe.c b/drivers/net/ethernet/intel/ice/ice_flex_pipe.c
index 9095b4d274ad..696d08e6716d 100644
--- a/drivers/net/ethernet/intel/ice/ice_flex_pipe.c
+++ b/drivers/net/ethernet/intel/ice/ice_flex_pipe.c
@@ -2409,6 +2409,9 @@ ice_find_prof_id(struct ice_hw *hw, enum ice_block blk,
 static bool ice_prof_id_rsrc_type(enum ice_block blk, u16 *rsrc_type)
 {
 	switch (blk) {
+	case ICE_BLK_ACL:
+		*rsrc_type = ICE_AQC_RES_TYPE_ACL_PROF_BLDR_PROFID;
+		break;
 	case ICE_BLK_FD:
 		*rsrc_type = ICE_AQC_RES_TYPE_FD_PROF_BLDR_PROFID;
 		break;
@@ -2429,6 +2432,9 @@ static bool ice_prof_id_rsrc_type(enum ice_block blk, u16 *rsrc_type)
 static bool ice_tcam_ent_rsrc_type(enum ice_block blk, u16 *rsrc_type)
 {
 	switch (blk) {
+	case ICE_BLK_ACL:
+		*rsrc_type = ICE_AQC_RES_TYPE_ACL_PROF_BLDR_TCAM;
+		break;
 	case ICE_BLK_FD:
 		*rsrc_type = ICE_AQC_RES_TYPE_FD_PROF_BLDR_TCAM;
 		break;
@@ -3800,7 +3806,6 @@ ice_add_prof(struct ice_hw *hw, enum ice_block blk, u64 id, u8 ptypes[],
 				 BITS_PER_BYTE) {
 			u16 ptype;
 			u8 ptg;
-			u8 m;
 
 			ptype = byte * BITS_PER_BYTE + bit;
 
@@ -3819,11 +3824,6 @@ ice_add_prof(struct ice_hw *hw, enum ice_block blk, u64 id, u8 ptypes[],
 
 			if (++prof->ptg_cnt >= ICE_MAX_PTG_PER_PROFILE)
 				break;
-
-			/* nothing left in byte, then exit */
-			m = ~(u8)((1 << (bit + 1)) - 1);
-			if (!(ptypes[byte] & m))
-				break;
 		}
 
 		bytes--;
diff --git a/drivers/net/ethernet/intel/ice/ice_flow.c b/drivers/net/ethernet/intel/ice/ice_flow.c
index 2a92071bd7d1..d2df5101ef74 100644
--- a/drivers/net/ethernet/intel/ice/ice_flow.c
+++ b/drivers/net/ethernet/intel/ice/ice_flow.c
@@ -346,6 +346,42 @@ ice_flow_proc_seg_hdrs(struct ice_flow_prof_params *params)
 	return 0;
 }
 
+/**
+ * ice_flow_xtract_pkt_flags - Create an extr sequence entry for packet flags
+ * @hw: pointer to the HW struct
+ * @params: information about the flow to be processed
+ * @flags: The value of pkt_flags[x:x] in Rx/Tx MDID metadata.
+ *
+ * This function will allocate an extraction sequence entry for a DWORD-size
+ * chunk of the packet flags.
+ */
+static enum ice_status
+ice_flow_xtract_pkt_flags(struct ice_hw *hw,
+			  struct ice_flow_prof_params *params,
+			  enum ice_flex_mdid_pkt_flags flags)
+{
+	u8 fv_words = hw->blk[params->blk].es.fvw;
+	u8 idx;
+
+	/* Make sure the number of extraction sequence entries required does not
+	 * exceed the block's capacity.
+	 */
+	if (params->es_cnt >= fv_words)
+		return ICE_ERR_MAX_LIMIT;
+
+	/* some blocks require a reversed field vector layout */
+	if (hw->blk[params->blk].es.reverse)
+		idx = fv_words - params->es_cnt - 1;
+	else
+		idx = params->es_cnt;
+
+	params->es[idx].prot_id = ICE_PROT_META_ID;
+	params->es[idx].off = flags;
+	params->es_cnt++;
+
+	return 0;
+}
+
 /**
  * ice_flow_xtract_fld - Create an extraction sequence entry for the given field
  * @hw: pointer to the HW struct
@@ -528,19 +564,29 @@ static enum ice_status
 ice_flow_create_xtrct_seq(struct ice_hw *hw,
 			  struct ice_flow_prof_params *params)
 {
-	struct ice_flow_prof *prof = params->prof;
 	enum ice_status status = 0;
 	u8 i;
 
-	for (i = 0; i < prof->segs_cnt; i++) {
-		u8 j;
+	/* For ACL, we also need to extract the direction bit (Rx,Tx) data from
+	 * packet flags
+	 */
+	if (params->blk == ICE_BLK_ACL) {
+		status = ice_flow_xtract_pkt_flags(hw, params,
+						   ICE_RX_MDID_PKT_FLAGS_15_0);
+		if (status)
+			return status;
+	}
 
-		for_each_set_bit(j, (unsigned long *)&prof->segs[i].match,
+	for (i = 0; i < params->prof->segs_cnt; i++) {
+		u64 match = params->prof->segs[i].match;
+		enum ice_flow_field j;
+
+		for_each_set_bit(j, (unsigned long *)&match,
 				 ICE_FLOW_FIELD_IDX_MAX) {
-			status = ice_flow_xtract_fld(hw, params, i,
-						     (enum ice_flow_field)j);
+			status = ice_flow_xtract_fld(hw, params, i, j);
 			if (status)
 				return status;
+			clear_bit(j, (unsigned long *)&match);
 		}
 
 		/* Process raw matching bytes */
@@ -552,6 +598,118 @@ ice_flow_create_xtrct_seq(struct ice_hw *hw,
 	return status;
 }
 
+/**
+ * ice_flow_sel_acl_scen - select an ACL scenario for a flow profile
+ * @hw: pointer to the hardware structure
+ * @params: information about the flow to be processed
+ *
+ * This function selects the best-fit scenario: the narrowest scenario
+ * whose effective width can accommodate the profile's entry length
+ */
+static enum ice_status
+ice_flow_sel_acl_scen(struct ice_hw *hw, struct ice_flow_prof_params *params)
+{
+	/* Find the best-fit scenario for the provided match width */
+	struct ice_acl_scen *cand_scen = NULL, *scen;
+
+	if (!hw->acl_tbl)
+		return ICE_ERR_DOES_NOT_EXIST;
+
+	/* Loop through each scenario and match against the scenario width
+	 * to select the specific scenario
+	 */
+	list_for_each_entry(scen, &hw->acl_tbl->scens, list_entry)
+		if (scen->eff_width >= params->entry_length &&
+		    (!cand_scen || cand_scen->eff_width > scen->eff_width))
+			cand_scen = scen;
+	if (!cand_scen)
+		return ICE_ERR_DOES_NOT_EXIST;
+
+	params->prof->cfg.scen = cand_scen;
+
+	return 0;
+}
+
+/**
+ * ice_flow_acl_def_entry_frmt - Determine the layout of flow entries
+ * @params: information about the flow to be processed
+ */
+static enum ice_status
+ice_flow_acl_def_entry_frmt(struct ice_flow_prof_params *params)
+{
+	u16 index, i, range_idx = 0;
+
+	index = ICE_AQC_ACL_PROF_BYTE_SEL_START_IDX;
+
+	for (i = 0; i < params->prof->segs_cnt; i++) {
+		struct ice_flow_seg_info *seg = &params->prof->segs[i];
+		u8 j;
+
+		for_each_set_bit(j, (unsigned long *)&seg->match,
+				 ICE_FLOW_FIELD_IDX_MAX) {
+			struct ice_flow_fld_info *fld = &seg->fields[j];
+
+			fld->entry.mask = ICE_FLOW_FLD_OFF_INVAL;
+
+			if (fld->type == ICE_FLOW_FLD_TYPE_RANGE) {
+				fld->entry.last = ICE_FLOW_FLD_OFF_INVAL;
+
+				/* Range checking only supported for single
+				 * words
+				 */
+				if (DIV_ROUND_UP(ice_flds_info[j].size +
+						 fld->xtrct.disp,
+						 BITS_PER_BYTE * 2) > 1)
+					return ICE_ERR_PARAM;
+
+				/* Ranges must define low and high values */
+				if (fld->src.val == ICE_FLOW_FLD_OFF_INVAL ||
+				    fld->src.last == ICE_FLOW_FLD_OFF_INVAL)
+					return ICE_ERR_PARAM;
+
+				fld->entry.val = range_idx++;
+			} else {
+				/* Store adjusted byte-length of field for later
+				 * use, taking into account potential
+				 * non-byte-aligned displacement
+				 */
+				fld->entry.last = DIV_ROUND_UP(ice_flds_info[j].size +
+							       (fld->xtrct.disp % BITS_PER_BYTE),
+							       BITS_PER_BYTE);
+				fld->entry.val = index;
+				index += fld->entry.last;
+			}
+		}
+
+		for (j = 0; j < seg->raws_cnt; j++) {
+			struct ice_flow_seg_fld_raw *raw = &seg->raws[j];
+
+			raw->info.entry.mask = ICE_FLOW_FLD_OFF_INVAL;
+			raw->info.entry.val = index;
+			raw->info.entry.last = raw->info.src.last;
+			index += raw->info.entry.last;
+		}
+	}
+
+	/* Currently only support using the byte selection base, which only
+	 * allows for an effective entry size of 30 bytes. Reject anything
+	 * larger.
+	 */
+	if (index > ICE_AQC_ACL_PROF_BYTE_SEL_ELEMS)
+		return ICE_ERR_PARAM;
+
+	/* Only 8 range checkers per profile, reject anything trying to use
+	 * more
+	 */
+	if (range_idx > ICE_AQC_ACL_PROF_RANGES_NUM_CFG)
+		return ICE_ERR_PARAM;
+
+	/* Store # bytes required for entry for later use */
+	params->entry_length = index - ICE_AQC_ACL_PROF_BYTE_SEL_START_IDX;
+
+	return 0;
+}
+
 /**
  * ice_flow_proc_segs - process all packet segments associated with a profile
  * @hw: pointer to the HW struct
@@ -575,6 +733,14 @@ ice_flow_proc_segs(struct ice_hw *hw, struct ice_flow_prof_params *params)
 	case ICE_BLK_RSS:
 		status = 0;
 		break;
+	case ICE_BLK_ACL:
+		status = ice_flow_acl_def_entry_frmt(params);
+		if (status)
+			return status;
+		status = ice_flow_sel_acl_scen(hw, params);
+		if (status)
+			return status;
+		break;
 	default:
 		return ICE_ERR_NOT_IMPL;
 	}
diff --git a/drivers/net/ethernet/intel/ice/ice_flow.h b/drivers/net/ethernet/intel/ice/ice_flow.h
index 00109262f152..f0cea38e8e78 100644
--- a/drivers/net/ethernet/intel/ice/ice_flow.h
+++ b/drivers/net/ethernet/intel/ice/ice_flow.h
@@ -231,6 +231,23 @@ struct ice_rss_cfg {
 	u32 packet_hdr;
 };
 
+enum ice_flow_action_type {
+	ICE_FLOW_ACT_NOP,
+	ICE_FLOW_ACT_DROP,
+	ICE_FLOW_ACT_CNTR_PKT,
+	ICE_FLOW_ACT_FWD_QUEUE,
+	ICE_FLOW_ACT_CNTR_BYTES,
+	ICE_FLOW_ACT_CNTR_PKT_BYTES,
+};
+
+struct ice_flow_action {
+	enum ice_flow_action_type type;
+	union {
+		struct ice_acl_act_entry acl_act;
+		u32 dummy;
+	} data;
+};
+
 enum ice_status
 ice_flow_add_prof(struct ice_hw *hw, enum ice_block blk, enum ice_flow_dir dir,
 		  u64 prof_id, struct ice_flow_seg_info *segs, u8 segs_cnt,
-- 
2.26.2


* [net-next v3 06/15] ice: create ACL entry
  2020-11-13 21:44 [net-next v3 00/15][pull request] 100GbE Intel Wired LAN Driver Updates 2020-11-13 Tony Nguyen
                   ` (4 preceding siblings ...)
  2020-11-13 21:44 ` [net-next v3 05/15] ice: create flow profile Tony Nguyen
@ 2020-11-13 21:44 ` Tony Nguyen
  2020-11-13 21:44 ` [net-next v3 07/15] ice: program " Tony Nguyen
                   ` (8 subsequent siblings)
  14 siblings, 0 replies; 28+ messages in thread
From: Tony Nguyen @ 2020-11-13 21:44 UTC (permalink / raw)
  To: davem, kuba
  Cc: Real Valiquette, netdev, sassmann, anthony.l.nguyen, Chinh Cao,
	Brijesh Behera

From: Real Valiquette <real.valiquette@intel.com>

Create an ACL entry for the mask match data and set the desired action.
Generate and program the associated extraction sequence.
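
As a usage-level sketch (the interface name, addresses, and queue number
are hypothetical), rules of the kind this path creates would be entered
through the standard ethtool ntuple interface; a field mask that is
neither zero nor all-ones is what steers a rule to ACL rather than Flow
Director, provided the ACL table has been initialized:

  # Forward TCP/IPv4 traffic from 192.168.0.0/16 to queue 5. The ethtool
  # "m" value gives the don't-care bits, so this is a partial match and
  # is created as an ACL entry rather than a Flow Director one.
  ethtool -N eth0 flow-type tcp4 src-ip 192.168.0.1 m 0.0.255.255 action 5

  # The same match with action -1 (RX_CLS_FLOW_DISC) maps to an ACL drop.
  ethtool -N eth0 flow-type tcp4 src-ip 192.168.0.1 m 0.0.255.255 action -1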

Co-developed-by: Chinh Cao <chinh.t.cao@intel.com>
Signed-off-by: Chinh Cao <chinh.t.cao@intel.com>
Signed-off-by: Real Valiquette <real.valiquette@intel.com>
Co-developed-by: Tony Nguyen <anthony.l.nguyen@intel.com>
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
Tested-by: Brijesh Behera <brijeshx.behera@intel.com>
---
 drivers/net/ethernet/intel/ice/ice.h          |   4 +
 drivers/net/ethernet/intel/ice/ice_acl.c      | 171 +++++
 drivers/net/ethernet/intel/ice/ice_acl.h      |  29 +
 drivers/net/ethernet/intel/ice/ice_acl_main.c |  66 +-
 .../net/ethernet/intel/ice/ice_adminq_cmd.h   | 121 +++-
 .../net/ethernet/intel/ice/ice_ethtool_fdir.c |  36 +-
 .../net/ethernet/intel/ice/ice_flex_pipe.c    |   4 +-
 .../net/ethernet/intel/ice/ice_flex_pipe.h    |   7 +
 drivers/net/ethernet/intel/ice/ice_flow.c     | 616 +++++++++++++++++-
 drivers/net/ethernet/intel/ice/ice_flow.h     |   9 +-
 .../net/ethernet/intel/ice/ice_lan_tx_rx.h    |   3 +
 11 files changed, 1040 insertions(+), 26 deletions(-)

diff --git a/drivers/net/ethernet/intel/ice/ice.h b/drivers/net/ethernet/intel/ice/ice.h
index d813a5c765d0..31eea8bd92f2 100644
--- a/drivers/net/ethernet/intel/ice/ice.h
+++ b/drivers/net/ethernet/intel/ice/ice.h
@@ -601,6 +601,10 @@ int ice_del_ntuple_ethtool(struct ice_vsi *vsi, struct ethtool_rxnfc *cmd);
 int ice_get_ethtool_fdir_entry(struct ice_hw *hw, struct ethtool_rxnfc *cmd);
 u32 ice_ntuple_get_max_fltr_cnt(struct ice_hw *hw);
 int
+ice_ntuple_set_input_set(struct ice_vsi *vsi, enum ice_block blk,
+			 struct ethtool_rx_flow_spec *fsp,
+			 struct ice_fdir_fltr *input);
+int
 ice_ntuple_l4_proto_to_port(enum ice_flow_seg_hdr l4_proto,
 			    enum ice_flow_field *src_port,
 			    enum ice_flow_field *dst_port);
diff --git a/drivers/net/ethernet/intel/ice/ice_acl.c b/drivers/net/ethernet/intel/ice/ice_acl.c
index 7ff97917aca9..767cccc3ba67 100644
--- a/drivers/net/ethernet/intel/ice/ice_acl.c
+++ b/drivers/net/ethernet/intel/ice/ice_acl.c
@@ -152,6 +152,177 @@ ice_aq_program_actpair(struct ice_hw *hw, u8 act_mem_idx, u16 act_entry_idx,
 				  act_mem_idx, act_entry_idx, buf, cd);
 }
 
+/**
+ * ice_acl_prof_aq_send - send an ACL profile AQ command
+ * @hw: pointer to the HW struct
+ * @opc: command opcode
+ * @prof_id: profile ID
+ * @buf: ptr to buffer
+ * @cd: pointer to command details structure or NULL
+ *
+ * This function sends ACL profile commands
+ */
+static enum ice_status
+ice_acl_prof_aq_send(struct ice_hw *hw, u16 opc, u8 prof_id,
+		     struct ice_aqc_acl_prof_generic_frmt *buf,
+		     struct ice_sq_cd *cd)
+{
+	struct ice_aq_desc desc;
+
+	ice_fill_dflt_direct_cmd_desc(&desc, opc);
+	desc.params.profile.profile_id = prof_id;
+	if (opc == ice_aqc_opc_program_acl_prof_extraction)
+		desc.flags |= cpu_to_le16(ICE_AQ_FLAG_RD);
+	return ice_aq_send_cmd(hw, &desc, buf, sizeof(*buf), cd);
+}
+
+/**
+ * ice_prgm_acl_prof_xtrct - program ACL profile extraction sequence
+ * @hw: pointer to the HW struct
+ * @prof_id: profile ID
+ * @buf: ptr to buffer
+ * @cd: pointer to command details structure or NULL
+ *
+ * Program ACL profile extraction (indirect 0x0C1D)
+ */
+enum ice_status
+ice_prgm_acl_prof_xtrct(struct ice_hw *hw, u8 prof_id,
+			struct ice_aqc_acl_prof_generic_frmt *buf,
+			struct ice_sq_cd *cd)
+{
+	return ice_acl_prof_aq_send(hw, ice_aqc_opc_program_acl_prof_extraction,
+				    prof_id, buf, cd);
+}
+
+/**
+ * ice_query_acl_prof - query ACL profile
+ * @hw: pointer to the HW struct
+ * @prof_id: profile ID
+ * @buf: ptr to buffer (which will contain response of this command)
+ * @cd: pointer to command details structure or NULL
+ *
+ * Query ACL profile (indirect 0x0C21)
+ */
+enum ice_status
+ice_query_acl_prof(struct ice_hw *hw, u8 prof_id,
+		   struct ice_aqc_acl_prof_generic_frmt *buf,
+		   struct ice_sq_cd *cd)
+{
+	return ice_acl_prof_aq_send(hw, ice_aqc_opc_query_acl_prof, prof_id,
+				    buf, cd);
+}
+
+/**
+ * ice_aq_acl_cntrs_chk_params - Checks ACL counter parameters
+ * @cntrs: ptr to buffer describing input and output params
+ *
+ * This function checks that the counter bank is within range for the given
+ * counter type and returns success or failure.
+ */
+static enum ice_status ice_aq_acl_cntrs_chk_params(struct ice_acl_cntrs *cntrs)
+{
+	enum ice_status status = 0;
+
+	if (!cntrs || !cntrs->amount)
+		return ICE_ERR_PARAM;
+
+	switch (cntrs->type) {
+	case ICE_AQC_ACL_CNT_TYPE_SINGLE:
+		/* Single counter type - configured to count either bytes
+		 * or packets, the valid values for byte or packet counters
+		 * shall be 0-3.
+		 */
+		if (cntrs->bank > ICE_AQC_ACL_MAX_CNT_SINGLE)
+			status = ICE_ERR_OUT_OF_RANGE;
+		break;
+	case ICE_AQC_ACL_CNT_TYPE_DUAL:
+		/* Pair counter type - counts number of bytes and packets
+		 * The valid values for byte/packet counter duals shall be 0-1
+		 */
+		if (cntrs->bank > ICE_AQC_ACL_MAX_CNT_DUAL)
+			status = ICE_ERR_OUT_OF_RANGE;
+		break;
+	default:
+		/* Unspecified counter type - Invalid or error */
+		status = ICE_ERR_PARAM;
+	}
+
+	return status;
+}
+
+/**
+ * ice_aq_alloc_acl_cntrs - allocate ACL counters
+ * @hw: pointer to the HW struct
+ * @cntrs: ptr to buffer describing input and output params
+ * @cd: pointer to command details structure or NULL
+ *
+ * Allocate ACL counters (indirect 0x0C16). This function attempts to
+ * allocate a contiguous block of counters. In case of failure, the caller
+ * can attempt to allocate a smaller chunk. The allocation is considered
+ * unsuccessful if a returned counter index is invalid; in that case it
+ * returns an error, otherwise success.
+ */
+enum ice_status
+ice_aq_alloc_acl_cntrs(struct ice_hw *hw, struct ice_acl_cntrs *cntrs,
+		       struct ice_sq_cd *cd)
+{
+	struct ice_aqc_acl_alloc_counters *cmd;
+	u16 first_cntr, last_cntr;
+	struct ice_aq_desc desc;
+	enum ice_status status;
+
+	/* check for invalid params */
+	status = ice_aq_acl_cntrs_chk_params(cntrs);
+	if (status)
+		return status;
+	ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_alloc_acl_counters);
+	cmd = &desc.params.alloc_counters;
+	cmd->counter_amount = cntrs->amount;
+	cmd->counters_type = cntrs->type;
+	cmd->bank_alloc = cntrs->bank;
+	status = ice_aq_send_cmd(hw, &desc, NULL, 0, cd);
+	if (!status) {
+		first_cntr = le16_to_cpu(cmd->ops.resp.first_counter);
+		last_cntr = le16_to_cpu(cmd->ops.resp.last_counter);
+		if (first_cntr == ICE_AQC_ACL_ALLOC_CNT_INVAL ||
+		    last_cntr == ICE_AQC_ACL_ALLOC_CNT_INVAL)
+			return ICE_ERR_OUT_OF_RANGE;
+		cntrs->first_cntr = first_cntr;
+		cntrs->last_cntr = last_cntr;
+	}
+	return status;
+}
+
+/**
+ * ice_aq_dealloc_acl_cntrs - deallocate ACL counters
+ * @hw: pointer to the HW struct
+ * @cntrs: ptr to buffer describing input and output params
+ * @cd: pointer to command details structure or NULL
+ *
+ * De-allocate ACL counters (direct 0x0C17)
+ */
+enum ice_status
+ice_aq_dealloc_acl_cntrs(struct ice_hw *hw, struct ice_acl_cntrs *cntrs,
+			 struct ice_sq_cd *cd)
+{
+	struct ice_aqc_acl_dealloc_counters *cmd;
+	struct ice_aq_desc desc;
+	enum ice_status status;
+
+	/* check for invalid params */
+	status = ice_aq_acl_cntrs_chk_params(cntrs);
+	if (status)
+		return status;
+
+	ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_dealloc_acl_counters);
+	cmd = &desc.params.dealloc_counters;
+	cmd->first_counter = cpu_to_le16(cntrs->first_cntr);
+	cmd->last_counter = cpu_to_le16(cntrs->last_cntr);
+	cmd->counters_type = cntrs->type;
+	cmd->bank_alloc = cntrs->bank;
+	return ice_aq_send_cmd(hw, &desc, NULL, 0, cd);
+}
+
 /**
  * ice_aq_alloc_acl_scen - allocate ACL scenario
  * @hw: pointer to the HW struct
diff --git a/drivers/net/ethernet/intel/ice/ice_acl.h b/drivers/net/ethernet/intel/ice/ice_acl.h
index 9e776f3f749c..8235d16bd162 100644
--- a/drivers/net/ethernet/intel/ice/ice_acl.h
+++ b/drivers/net/ethernet/intel/ice/ice_acl.h
@@ -103,6 +103,21 @@ struct ice_acl_alloc_tbl {
 	} buf;
 };
 
+/* This structure is used to communicate input and output params for
+ * [de]allocate_acl_counters
+ */
+struct ice_acl_cntrs {
+	u8 amount;
+	u8 type;
+	u8 bank;
+
+	/* The next two fields are used as output in case of alloc_acl_counters
+	 * and as input in case of dealloc_acl_counters
+	 */
+	u16 first_cntr;
+	u16 last_cntr;
+};
+
 enum ice_status
 ice_acl_create_tbl(struct ice_hw *hw, struct ice_acl_tbl_params *params);
 enum ice_status ice_acl_destroy_tbl(struct ice_hw *hw);
@@ -122,6 +137,20 @@ enum ice_status
 ice_aq_program_actpair(struct ice_hw *hw, u8 act_mem_idx, u16 act_entry_idx,
 		       struct ice_aqc_actpair *buf, struct ice_sq_cd *cd);
 enum ice_status
+ice_prgm_acl_prof_xtrct(struct ice_hw *hw, u8 prof_id,
+			struct ice_aqc_acl_prof_generic_frmt *buf,
+			struct ice_sq_cd *cd);
+enum ice_status
+ice_query_acl_prof(struct ice_hw *hw, u8 prof_id,
+		   struct ice_aqc_acl_prof_generic_frmt *buf,
+		   struct ice_sq_cd *cd);
+enum ice_status
+ice_aq_alloc_acl_cntrs(struct ice_hw *hw, struct ice_acl_cntrs *cntrs,
+		       struct ice_sq_cd *cd);
+enum ice_status
+ice_aq_dealloc_acl_cntrs(struct ice_hw *hw, struct ice_acl_cntrs *cntrs,
+			 struct ice_sq_cd *cd);
+enum ice_status
 ice_aq_alloc_acl_scen(struct ice_hw *hw, u16 *scen_id,
 		      struct ice_aqc_acl_scen *buf, struct ice_sq_cd *cd);
 enum ice_status
diff --git a/drivers/net/ethernet/intel/ice/ice_acl_main.c b/drivers/net/ethernet/intel/ice/ice_acl_main.c
index be97dfb94652..3b56194ab3fc 100644
--- a/drivers/net/ethernet/intel/ice/ice_acl_main.c
+++ b/drivers/net/ethernet/intel/ice/ice_acl_main.c
@@ -6,6 +6,9 @@
 #include "ice.h"
 #include "ice_lib.h"
 
+/* Default ACL Action priority */
+#define ICE_ACL_ACT_PRIO	3
+
 /* Number of action */
 #define ICE_ACL_NUM_ACT		1
 
@@ -246,15 +249,76 @@ ice_acl_check_input_set(struct ice_pf *pf, struct ethtool_rx_flow_spec *fsp)
  */
 int ice_acl_add_rule_ethtool(struct ice_vsi *vsi, struct ethtool_rxnfc *cmd)
 {
+	struct ice_flow_action acts[ICE_ACL_NUM_ACT];
 	struct ethtool_rx_flow_spec *fsp;
+	struct ice_fd_hw_prof *hw_prof;
+	struct ice_fdir_fltr *input;
+	enum ice_fltr_ptype flow;
+	enum ice_status status;
+	struct device *dev;
 	struct ice_pf *pf;
+	struct ice_hw *hw;
+	u64 entry_h = 0;
+	int act_cnt;
+	int ret;
 
 	if (!vsi || !cmd)
 		return -EINVAL;
 
 	pf = vsi->back;
+	hw = &pf->hw;
+	dev = ice_pf_to_dev(pf);
 
 	fsp = (struct ethtool_rx_flow_spec *)&cmd->fs;
 
-	return ice_acl_check_input_set(pf, fsp);
+	ret = ice_acl_check_input_set(pf, fsp);
+	if (ret)
+		return ret;
+
+	/* Add new rule */
+	input = devm_kzalloc(dev, sizeof(*input), GFP_KERNEL);
+	if (!input)
+		return -ENOMEM;
+
+	ret = ice_ntuple_set_input_set(vsi, ICE_BLK_ACL, fsp, input);
+	if (ret)
+		goto free_input;
+
+	memset(&acts, 0, sizeof(acts));
+	act_cnt = 1;
+	if (fsp->ring_cookie == RX_CLS_FLOW_DISC) {
+		acts[0].type = ICE_FLOW_ACT_DROP;
+		acts[0].data.acl_act.mdid = ICE_MDID_RX_PKT_DROP;
+		acts[0].data.acl_act.prio = ICE_ACL_ACT_PRIO;
+		acts[0].data.acl_act.value = cpu_to_le16(0x1);
+	} else {
+		acts[0].type = ICE_FLOW_ACT_FWD_QUEUE;
+		acts[0].data.acl_act.mdid = ICE_MDID_RX_DST_Q;
+		acts[0].data.acl_act.prio = ICE_ACL_ACT_PRIO;
+		acts[0].data.acl_act.value = cpu_to_le16(input->q_index);
+	}
+
+	flow = ice_ethtool_flow_to_fltr(fsp->flow_type & ~FLOW_EXT);
+	hw_prof = hw->acl_prof[flow];
+
+	status = ice_flow_add_entry(hw, ICE_BLK_ACL, flow, fsp->location,
+				    vsi->idx, ICE_FLOW_PRIO_NORMAL, input, acts,
+				    act_cnt, &entry_h);
+	if (status) {
+		dev_err(dev, "Could not add flow entry %d\n", flow);
+		ret = ice_status_to_errno(status);
+		goto free_input;
+	}
+
+	if (!hw_prof->cnt || vsi->idx != hw_prof->vsi_h[hw_prof->cnt - 1]) {
+		hw_prof->vsi_h[hw_prof->cnt] = vsi->idx;
+		hw_prof->entry_h[hw_prof->cnt++][0] = entry_h;
+	}
+
+	return 0;
+
+free_input:
+	devm_kfree(dev, input);
+
+	return ret;
 }
diff --git a/drivers/net/ethernet/intel/ice/ice_adminq_cmd.h b/drivers/net/ethernet/intel/ice/ice_adminq_cmd.h
index f5fdab2b7058..5449c5f6e10c 100644
--- a/drivers/net/ethernet/intel/ice/ice_adminq_cmd.h
+++ b/drivers/net/ethernet/intel/ice/ice_adminq_cmd.h
@@ -1787,6 +1787,68 @@ struct ice_aqc_acl_scen {
 	u8 act_mem_cfg[ICE_AQC_MAX_ACTION_MEMORIES];
 };
 
+/* Allocate ACL counters (indirect 0x0C16) */
+struct ice_aqc_acl_alloc_counters {
+	/* Number of contiguous counters requested. Min value is 1 and
+	 * max value is 255
+	 */
+	u8 counter_amount;
+
+	/* Counter type: 'single counter' which can be configured to count
+	 * either bytes or packets
+	 */
+#define ICE_AQC_ACL_CNT_TYPE_SINGLE	0x0
+
+	/* Counter type: 'counter pair' which counts number of bytes and number
+	 * of packets.
+	 */
+#define ICE_AQC_ACL_CNT_TYPE_DUAL	0x1
+	/* requested counter type, single/dual */
+	u8 counters_type;
+
+	/* counter bank allocation shall be 0-3 for 'byte or packet counter' */
+#define ICE_AQC_ACL_MAX_CNT_SINGLE	0x3
+	/* counter bank allocation shall be 0-1 for 'byte and packet counter dual' */
+#define ICE_AQC_ACL_MAX_CNT_DUAL	0x1
+	/* requested counter bank allocation */
+	u8 bank_alloc;
+
+	u8 reserved;
+
+	union {
+		/* Applicable only in case of command */
+		struct {
+			u8 reserved[12];
+		} cmd;
+		/* Applicable only in case of response */
+#define ICE_AQC_ACL_ALLOC_CNT_INVAL	0xFFFF
+		struct {
+			/* Index of first allocated counter. 0xFFFF in case
+			 * of unsuccessful allocation
+			 */
+			__le16 first_counter;
+			/* Index of last allocated counter. 0xFFFF in case
+			 * of unsuccessful allocation
+			 */
+			__le16 last_counter;
+			u8 rsvd[8];
+		} resp;
+	} ops;
+};
+
+/* De-allocate ACL counters (direct 0x0C17) */
+struct ice_aqc_acl_dealloc_counters {
+	/* first counter being released */
+	__le16 first_counter;
+	/* last counter being released */
+	__le16 last_counter;
+	/* requested counter type, single/dual */
+	u8 counters_type;
+	/* requested counter bank allocation */
+	u8 bank_alloc;
+	u8 reserved[10];
+};
+
 /* Program ACL actionpair (indirect 0x0C1C) */
 struct ice_aqc_acl_actpair {
 	/* action mem index to program/update */
@@ -1816,13 +1878,57 @@ struct ice_aqc_actpair {
 	struct ice_acl_act_entry act[ICE_ACL_NUM_ACT_PER_ACT_PAIR];
 };
 
-/* The first byte of the byte selection base is reserved to keep the
- * first byte of the field vector where the packet direction info is
- * available. Thus we should start at index 1 of the field vector to
- * map its entries to the byte selection base.
+/* Generic format used to describe either input or response buffer
+ * for admin commands related to ACL profile
  */
+struct ice_aqc_acl_prof_generic_frmt {
+	/* The first byte of the byte selection base is reserved to keep the
+	 * first byte of the field vector where the packet direction info is
+	 * available. Thus we should start at index 1 of the field vector to
+	 * map its entries to the byte selection base.
+	 */
 #define ICE_AQC_ACL_PROF_BYTE_SEL_START_IDX	1
+	/* In each byte:
+	 * Bit 0..5 = Byte selection for the byte selection base from the
+	 * extracted fields (expressed as byte offset in extracted fields).
+	 * Applicable values are 0..63
+	 * Bit 6..7 = Reserved
+	 */
 #define ICE_AQC_ACL_PROF_BYTE_SEL_ELEMS		30
+	u8 byte_selection[ICE_AQC_ACL_PROF_BYTE_SEL_ELEMS];
+	/* In each byte:
+	 * Bit 0..4 = Word selection for the word selection base from the
+	 * extracted fields (expressed as word offset in extracted fields).
+	 * Applicable values are 0..31
+	 * Bit 5..7 = Reserved
+	 */
+#define ICE_AQC_ACL_PROF_WORD_SEL_ELEMS		32
+	u8 word_selection[ICE_AQC_ACL_PROF_WORD_SEL_ELEMS];
+	/* In each byte:
+	 * Bit 0..3 = Double word selection for the double-word selection base
+	 * from the extracted fields (expressed as double-word offset in
+	 * extracted fields).
+	 * Applicable values are 0..15
+	 * Bit 4..7 = Reserved
+	 */
+#define ICE_AQC_ACL_PROF_DWORD_SEL_ELEMS	15
+	u8 dword_selection[ICE_AQC_ACL_PROF_DWORD_SEL_ELEMS];
+	/* Scenario numbers for individual Physical Functions (PFs) */
+#define ICE_AQC_ACL_PROF_PF_SCEN_NUM_ELEMS	8
+	u8 pf_scenario_num[ICE_AQC_ACL_PROF_PF_SCEN_NUM_ELEMS];
+};
+
+/* Program ACL profile extraction (indirect 0x0C1D)
+ * Program ACL profile ranges (indirect 0x0C1E)
+ * Query ACL profile (indirect 0x0C21)
+ * Query ACL profile ranges (indirect 0x0C22)
+ */
+struct ice_aqc_acl_profile {
+	u8 profile_id; /* Programmed/Updated profile ID */
+	u8 reserved[7];
+	__le32 addr_high;
+	__le32 addr_low;
+};
 
 /* Input buffer format for program profile extraction admin command and
  * response buffer format for query profile admin command is as defined
@@ -2150,8 +2256,11 @@ struct ice_aq_desc {
 		struct ice_aqc_acl_alloc_scen alloc_scen;
 		struct ice_aqc_acl_dealloc_scen dealloc_scen;
 		struct ice_aqc_acl_update_query_scen update_query_scen;
+		struct ice_aqc_acl_alloc_counters alloc_counters;
+		struct ice_aqc_acl_dealloc_counters dealloc_counters;
 		struct ice_aqc_acl_entry program_query_entry;
 		struct ice_aqc_acl_actpair program_query_actpair;
+		struct ice_aqc_acl_profile profile;
 		struct ice_aqc_add_txqs add_txqs;
 		struct ice_aqc_dis_txqs dis_txqs;
 		struct ice_aqc_add_get_update_free_vsi vsi_cmd;
@@ -2301,9 +2410,13 @@ enum ice_adminq_opc {
 	ice_aqc_opc_dealloc_acl_tbl			= 0x0C11,
 	ice_aqc_opc_alloc_acl_scen			= 0x0C14,
 	ice_aqc_opc_dealloc_acl_scen			= 0x0C15,
+	ice_aqc_opc_alloc_acl_counters			= 0x0C16,
+	ice_aqc_opc_dealloc_acl_counters		= 0x0C17,
 	ice_aqc_opc_update_acl_scen			= 0x0C1B,
 	ice_aqc_opc_program_acl_actpair			= 0x0C1C,
+	ice_aqc_opc_program_acl_prof_extraction		= 0x0C1D,
 	ice_aqc_opc_program_acl_entry			= 0x0C20,
+	ice_aqc_opc_query_acl_prof			= 0x0C21,
 	ice_aqc_opc_query_acl_scen			= 0x0C23,
 
 	/* Tx queue handling commands/events */
diff --git a/drivers/net/ethernet/intel/ice/ice_ethtool_fdir.c b/drivers/net/ethernet/intel/ice/ice_ethtool_fdir.c
index ef641bc8ca0e..dd495f6a4adf 100644
--- a/drivers/net/ethernet/intel/ice/ice_ethtool_fdir.c
+++ b/drivers/net/ethernet/intel/ice/ice_ethtool_fdir.c
@@ -402,7 +402,7 @@ void ice_fdir_replay_flows(struct ice_hw *hw)
 							 prof->vsi_h[0],
 							 prof->vsi_h[j],
 							 prio, prof->fdir_seg,
-							 &entry_h);
+							 NULL, 0, &entry_h);
 				if (err) {
 					dev_err(ice_hw_to_dev(hw), "Could not replay Flow Director, flow type %d\n",
 						flow);
@@ -606,14 +606,14 @@ ice_fdir_set_hw_fltr_rule(struct ice_pf *pf, struct ice_flow_seg_info *seg,
 		return ice_status_to_errno(status);
 	status = ice_flow_add_entry(hw, ICE_BLK_FD, prof_id, main_vsi->idx,
 				    main_vsi->idx, ICE_FLOW_PRIO_NORMAL,
-				    seg, &entry1_h);
+				    seg, NULL, 0, &entry1_h);
 	if (status) {
 		err = ice_status_to_errno(status);
 		goto err_prof;
 	}
 	status = ice_flow_add_entry(hw, ICE_BLK_FD, prof_id, main_vsi->idx,
 				    ctrl_vsi->idx, ICE_FLOW_PRIO_NORMAL,
-				    seg, &entry2_h);
+				    seg, NULL, 0, &entry2_h);
 	if (status) {
 		err = ice_status_to_errno(status);
 		goto err_entry;
@@ -1642,24 +1642,33 @@ static bool ice_is_acl_filter(struct ethtool_rx_flow_spec *fsp)
 }
 
 /**
- * ice_ntuple_set_input_set - Set the input set for Flow Director
+ * ice_ntuple_set_input_set - Set the input set for specified block
  * @vsi: pointer to target VSI
+ * @blk: filter block to configure
  * @fsp: pointer to ethtool Rx flow specification
  * @input: filter structure
  */
-static int
-ice_ntuple_set_input_set(struct ice_vsi *vsi, struct ethtool_rx_flow_spec *fsp,
+int
+ice_ntuple_set_input_set(struct ice_vsi *vsi, enum ice_block blk,
+			 struct ethtool_rx_flow_spec *fsp,
 			 struct ice_fdir_fltr *input)
 {
 	u16 dest_vsi, q_index = 0;
+	int flow_type, flow_mask;
 	struct ice_pf *pf;
 	struct ice_hw *hw;
-	int flow_type;
 	u8 dest_ctl;
 
 	if (!vsi || !fsp || !input)
 		return -EINVAL;
 
+	if (blk == ICE_BLK_FD)
+		flow_mask = FLOW_EXT;
+	else if (blk == ICE_BLK_ACL)
+		flow_mask = FLOW_MAC_EXT;
+	else
+		return -EINVAL;
+
 	pf = vsi->back;
 	hw = &pf->hw;
 
@@ -1671,8 +1680,9 @@ ice_ntuple_set_input_set(struct ice_vsi *vsi, struct ethtool_rx_flow_spec *fsp,
 		u8 vf = ethtool_get_flow_spec_ring_vf(fsp->ring_cookie);
 
 		if (vf) {
-			dev_err(ice_pf_to_dev(pf), "Failed to add filter. Flow director filters are not supported on VF queues.\n");
-			return -EINVAL;
+			dev_err(ice_pf_to_dev(pf), "Failed to add filter. %s filters are not supported on VF queues.\n",
+				blk == ICE_BLK_FD ? "Flow Director" : "ACL");
+			return -EINVAL;
 		}
 
 		if (ring >= vsi->num_rxq)
@@ -1684,7 +1694,7 @@ ice_ntuple_set_input_set(struct ice_vsi *vsi, struct ethtool_rx_flow_spec *fsp,
 
 	input->fltr_id = fsp->location;
 	input->q_index = q_index;
-	flow_type = fsp->flow_type & ~FLOW_EXT;
+	flow_type = fsp->flow_type & ~flow_mask;
 
 	input->dest_vsi = dest_vsi;
 	input->dest_ctl = dest_ctl;
@@ -1733,9 +1743,9 @@ ice_ntuple_set_input_set(struct ice_vsi *vsi, struct ethtool_rx_flow_spec *fsp,
 	case TCP_V6_FLOW:
 	case UDP_V6_FLOW:
 	case SCTP_V6_FLOW:
-		memcpy(input->ip.v6.dst_ip, fsp->h_u.usr_ip6_spec.ip6dst,
+		memcpy(input->ip.v6.dst_ip, fsp->h_u.tcp_ip6_spec.ip6dst,
 		       sizeof(struct in6_addr));
-		memcpy(input->ip.v6.src_ip, fsp->h_u.usr_ip6_spec.ip6src,
+		memcpy(input->ip.v6.src_ip, fsp->h_u.tcp_ip6_spec.ip6src,
 		       sizeof(struct in6_addr));
 		input->ip.v6.dst_port = fsp->h_u.tcp_ip6_spec.pdst;
 		input->ip.v6.src_port = fsp->h_u.tcp_ip6_spec.psrc;
@@ -1840,7 +1850,7 @@ int ice_add_ntuple_ethtool(struct ice_vsi *vsi, struct ethtool_rxnfc *cmd)
 	if (!input)
 		return -ENOMEM;
 
-	ret = ice_ntuple_set_input_set(vsi, fsp, input);
+	ret = ice_ntuple_set_input_set(vsi, ICE_BLK_FD, fsp, input);
 	if (ret)
 		goto free_input;
 
diff --git a/drivers/net/ethernet/intel/ice/ice_flex_pipe.c b/drivers/net/ethernet/intel/ice/ice_flex_pipe.c
index 696d08e6716d..da9797c11a8d 100644
--- a/drivers/net/ethernet/intel/ice/ice_flex_pipe.c
+++ b/drivers/net/ethernet/intel/ice/ice_flex_pipe.c
@@ -649,7 +649,7 @@ static bool ice_bits_max_set(const u8 *mask, u16 size, u16 max)
  *	dc == NULL --> dc mask is all 0's (no don't care bits)
  *	nm == NULL --> nm mask is all 0's (no never match bits)
  */
-static enum ice_status
+enum ice_status
 ice_set_key(u8 *key, u16 size, u8 *val, u8 *upd, u8 *dc, u8 *nm, u16 off,
 	    u16 len)
 {
@@ -3847,7 +3847,7 @@ ice_add_prof(struct ice_hw *hw, enum ice_block blk, u64 id, u8 ptypes[],
  * This will search for a profile tracking ID which was previously added.
  * The profile map lock should be held before calling this function.
  */
-static struct ice_prof_map *
+struct ice_prof_map *
 ice_search_prof_id(struct ice_hw *hw, enum ice_block blk, u64 id)
 {
 	struct ice_prof_map *entry = NULL;
diff --git a/drivers/net/ethernet/intel/ice/ice_flex_pipe.h b/drivers/net/ethernet/intel/ice/ice_flex_pipe.h
index 20deddb807c5..61fd8f2fc959 100644
--- a/drivers/net/ethernet/intel/ice/ice_flex_pipe.h
+++ b/drivers/net/ethernet/intel/ice/ice_flex_pipe.h
@@ -40,6 +40,13 @@ void ice_free_seg(struct ice_hw *hw);
 void ice_fill_blk_tbls(struct ice_hw *hw);
 void ice_clear_hw_tbls(struct ice_hw *hw);
 void ice_free_hw_tbls(struct ice_hw *hw);
+struct ice_prof_map *
+ice_search_prof_id(struct ice_hw *hw, enum ice_block blk, u64 id);
 enum ice_status
 ice_rem_prof(struct ice_hw *hw, enum ice_block blk, u64 id);
+
+enum ice_status
+ice_set_key(u8 *key, u16 size, u8 *val, u8 *upd, u8 *dc, u8 *nm, u16 off,
+	    u16 len);
+
 #endif /* _ICE_FLEX_PIPE_H_ */
diff --git a/drivers/net/ethernet/intel/ice/ice_flow.c b/drivers/net/ethernet/intel/ice/ice_flow.c
index d2df5101ef74..7ea94a627c5d 100644
--- a/drivers/net/ethernet/intel/ice/ice_flow.c
+++ b/drivers/net/ethernet/intel/ice/ice_flow.c
@@ -833,9 +833,161 @@ ice_dealloc_flow_entry(struct ice_hw *hw, struct ice_flow_entry *entry)
 	if (entry->entry)
 		devm_kfree(ice_hw_to_dev(hw), entry->entry);
 
+	if (entry->range_buf) {
+		devm_kfree(ice_hw_to_dev(hw), entry->range_buf);
+		entry->range_buf = NULL;
+	}
+
+	if (entry->acts) {
+		devm_kfree(ice_hw_to_dev(hw), entry->acts);
+		entry->acts = NULL;
+		entry->acts_cnt = 0;
+	}
+
 	devm_kfree(ice_hw_to_dev(hw), entry);
 }
 
+/**
+ * ice_flow_get_hw_prof - return the HW profile for a specific profile ID handle
+ * @hw: pointer to the HW struct
+ * @blk: classification stage
+ * @prof_id: the profile ID handle
+ * @hw_prof_id: pointer to variable to receive the HW profile ID
+ */
+static enum ice_status
+ice_flow_get_hw_prof(struct ice_hw *hw, enum ice_block blk, u64 prof_id,
+		     u8 *hw_prof_id)
+{
+	enum ice_status status = ICE_ERR_DOES_NOT_EXIST;
+	struct ice_prof_map *map;
+
+	mutex_lock(&hw->blk[blk].es.prof_map_lock);
+	map = ice_search_prof_id(hw, blk, prof_id);
+	if (map) {
+		*hw_prof_id = map->prof_id;
+		status = 0;
+	}
+	mutex_unlock(&hw->blk[blk].es.prof_map_lock);
+	return status;
+}
+
+#define ICE_ACL_INVALID_SCEN	0x3f
+
+/**
+ * ice_flow_acl_is_prof_in_use - Verify if the profile is associated to any PF
+ * @hw: pointer to the hardware structure
+ * @prof: pointer to flow profile
+ * @buf: destination buffer the partial extraction sequence is written to
+ *
+ * returns 0 if no PF is associated with the given profile
+ * returns ICE_ERR_IN_USE if at least one PF is associated with it
+ * returns another error code for a real error
+ */
+static enum ice_status
+ice_flow_acl_is_prof_in_use(struct ice_hw *hw, struct ice_flow_prof *prof,
+			    struct ice_aqc_acl_prof_generic_frmt *buf)
+{
+	enum ice_status status;
+	u8 prof_id = 0;
+
+	status = ice_flow_get_hw_prof(hw, ICE_BLK_ACL, prof->id, &prof_id);
+	if (status)
+		return status;
+
+	status = ice_query_acl_prof(hw, prof_id, buf, NULL);
+	if (status)
+		return status;
+
+	/* If the scenario numbers of all PFs for the given profile are
+	 * either all 0 or all ICE_ACL_INVALID_SCEN (63), the profile has
+	 * not been configured yet.
+	 */
+	if (buf->pf_scenario_num[0] == 0 && buf->pf_scenario_num[1] == 0 &&
+	    buf->pf_scenario_num[2] == 0 && buf->pf_scenario_num[3] == 0 &&
+	    buf->pf_scenario_num[4] == 0 && buf->pf_scenario_num[5] == 0 &&
+	    buf->pf_scenario_num[6] == 0 && buf->pf_scenario_num[7] == 0)
+		return 0;
+
+	if (buf->pf_scenario_num[0] == ICE_ACL_INVALID_SCEN &&
+	    buf->pf_scenario_num[1] == ICE_ACL_INVALID_SCEN &&
+	    buf->pf_scenario_num[2] == ICE_ACL_INVALID_SCEN &&
+	    buf->pf_scenario_num[3] == ICE_ACL_INVALID_SCEN &&
+	    buf->pf_scenario_num[4] == ICE_ACL_INVALID_SCEN &&
+	    buf->pf_scenario_num[5] == ICE_ACL_INVALID_SCEN &&
+	    buf->pf_scenario_num[6] == ICE_ACL_INVALID_SCEN &&
+	    buf->pf_scenario_num[7] == ICE_ACL_INVALID_SCEN)
+		return 0;
+
+	return ICE_ERR_IN_USE;
+}
+
+/**
+ * ice_flow_acl_free_act_cntr - Free ACL counters used by the rule's actions
+ * @hw: pointer to the hardware structure
+ * @acts: array of actions to be performed on a match
+ * @acts_cnt: number of actions
+ */
+static enum ice_status
+ice_flow_acl_free_act_cntr(struct ice_hw *hw, struct ice_flow_action *acts,
+			   u8 acts_cnt)
+{
+	int i;
+
+	for (i = 0; i < acts_cnt; i++) {
+		if (acts[i].type == ICE_FLOW_ACT_CNTR_PKT ||
+		    acts[i].type == ICE_FLOW_ACT_CNTR_BYTES ||
+		    acts[i].type == ICE_FLOW_ACT_CNTR_PKT_BYTES) {
+			struct ice_acl_cntrs cntrs;
+			enum ice_status status;
+
+			cntrs.bank = 0; /* Only bank0 for the moment */
+			cntrs.first_cntr =
+					le16_to_cpu(acts[i].data.acl_act.value);
+			cntrs.last_cntr =
+					le16_to_cpu(acts[i].data.acl_act.value);
+
+			if (acts[i].type == ICE_FLOW_ACT_CNTR_PKT_BYTES)
+				cntrs.type = ICE_AQC_ACL_CNT_TYPE_DUAL;
+			else
+				cntrs.type = ICE_AQC_ACL_CNT_TYPE_SINGLE;
+
+			status = ice_aq_dealloc_acl_cntrs(hw, &cntrs, NULL);
+			if (status)
+				return status;
+		}
+	}
+	return 0;
+}
+
+/**
+ * ice_flow_acl_disassoc_scen - Disassociate the scenario from the profile
+ * @hw: pointer to the hardware structure
+ * @prof: pointer to flow profile
+ *
+ * Disassociate the scenario from the profile for the current PF.
+ */
+static enum ice_status
+ice_flow_acl_disassoc_scen(struct ice_hw *hw, struct ice_flow_prof *prof)
+{
+	struct ice_aqc_acl_prof_generic_frmt buf;
+	enum ice_status status = 0;
+	u8 prof_id = 0;
+
+	memset(&buf, 0, sizeof(buf));
+
+	status = ice_flow_get_hw_prof(hw, ICE_BLK_ACL, prof->id, &prof_id);
+	if (status)
+		return status;
+
+	status = ice_query_acl_prof(hw, prof_id, &buf, NULL);
+	if (status)
+		return status;
+
+	/* Clear scenario for this PF */
+	buf.pf_scenario_num[hw->pf_id] = ICE_ACL_INVALID_SCEN;
+	return ice_prgm_acl_prof_xtrct(hw, prof_id, &buf, NULL);
+}
+
 /**
  * ice_flow_rem_entry_sync - Remove a flow entry
  * @hw: pointer to the HW struct
@@ -843,12 +995,19 @@ ice_dealloc_flow_entry(struct ice_hw *hw, struct ice_flow_entry *entry)
  * @entry: flow entry to be removed
  */
 static enum ice_status
-ice_flow_rem_entry_sync(struct ice_hw *hw, enum ice_block __always_unused blk,
+ice_flow_rem_entry_sync(struct ice_hw *hw, enum ice_block blk,
 			struct ice_flow_entry *entry)
 {
 	if (!entry)
 		return ICE_ERR_BAD_PTR;
 
+	if (blk == ICE_BLK_ACL) {
+		/* Checks if we need to release an ACL counter. */
+		if (entry->acts_cnt && entry->acts)
+			ice_flow_acl_free_act_cntr(hw, entry->acts,
+						   entry->acts_cnt);
+	}
+
 	list_del(&entry->l_entry);
 
 	ice_dealloc_flow_entry(hw, entry);
@@ -966,6 +1125,13 @@ ice_flow_rem_prof_sync(struct ice_hw *hw, enum ice_block blk,
 		mutex_unlock(&prof->entries_lock);
 	}
 
+	if (blk == ICE_BLK_ACL) {
+		/* Disassociate the scenario from the profile for the PF */
+		status = ice_flow_acl_disassoc_scen(hw, prof);
+		if (status)
+			return status;
+	}
+
 	/* Remove all hardware profiles associated with this flow profile */
 	status = ice_rem_prof(hw, blk, prof->id);
 	if (!status) {
@@ -977,6 +1143,89 @@ ice_flow_rem_prof_sync(struct ice_hw *hw, enum ice_block blk,
 	return status;
 }
 
+/**
+ * ice_flow_acl_set_xtrct_seq_fld - Populate xtrct seq for single field
+ * @buf: destination buffer the partial extraction sequence is written to
+ * @info: Info about field
+ */
+static void
+ice_flow_acl_set_xtrct_seq_fld(struct ice_aqc_acl_prof_generic_frmt *buf,
+			       struct ice_flow_fld_info *info)
+{
+	u16 dst, i;
+	u8 src;
+
+	src = info->xtrct.idx * ICE_FLOW_FV_EXTRACT_SZ +
+		info->xtrct.disp / BITS_PER_BYTE;
+	dst = info->entry.val;
+	for (i = 0; i < info->entry.last; i++)
+		/* HW stores field vector words in LE, convert words back to BE
+		 * so constructed entries will end up in network order
+		 */
+		buf->byte_selection[dst++] = src++ ^ 1;
+}
+
+/**
+ * ice_flow_acl_set_xtrct_seq - Program ACL extraction sequence
+ * @hw: pointer to the hardware structure
+ * @prof: pointer to flow profile
+ */
+static enum ice_status
+ice_flow_acl_set_xtrct_seq(struct ice_hw *hw, struct ice_flow_prof *prof)
+{
+	struct ice_aqc_acl_prof_generic_frmt buf;
+	struct ice_flow_fld_info *info;
+	enum ice_status status;
+	u8 prof_id = 0;
+	u16 i;
+
+	memset(&buf, 0, sizeof(buf));
+
+	status = ice_flow_get_hw_prof(hw, ICE_BLK_ACL, prof->id, &prof_id);
+	if (status)
+		return status;
+
+	status = ice_flow_acl_is_prof_in_use(hw, prof, &buf);
+	if (status && status != ICE_ERR_IN_USE)
+		return status;
+
+	if (!status) {
+		/* Program the profile dependent configuration. This is done
+		 * only once regardless of the number of PFs using that profile
+		 */
+		memset(&buf, 0, sizeof(buf));
+
+		for (i = 0; i < prof->segs_cnt; i++) {
+			struct ice_flow_seg_info *seg = &prof->segs[i];
+			u16 j;
+
+			for_each_set_bit(j, (unsigned long *)&seg->match,
+					 ICE_FLOW_FIELD_IDX_MAX) {
+				info = &seg->fields[j];
+
+				if (info->type == ICE_FLOW_FLD_TYPE_RANGE)
+					buf.word_selection[info->entry.val] =
+						info->xtrct.idx;
+				else
+					ice_flow_acl_set_xtrct_seq_fld(&buf,
+								       info);
+			}
+
+			for (j = 0; j < seg->raws_cnt; j++) {
+				info = &seg->raws[j].info;
+				ice_flow_acl_set_xtrct_seq_fld(&buf, info);
+			}
+		}
+
+		memset(&buf.pf_scenario_num[0], ICE_ACL_INVALID_SCEN,
+		       ICE_AQC_ACL_PROF_PF_SCEN_NUM_ELEMS);
+	}
+
+	/* Update the current PF */
+	buf.pf_scenario_num[hw->pf_id] = (u8)prof->cfg.scen->id;
+	return ice_prgm_acl_prof_xtrct(hw, prof_id, &buf, NULL);
+}
+
 /**
  * ice_flow_assoc_prof - associate a VSI with a flow profile
  * @hw: pointer to the hardware structure
@@ -994,6 +1243,11 @@ ice_flow_assoc_prof(struct ice_hw *hw, enum ice_block blk,
 	enum ice_status status = 0;
 
 	if (!test_bit(vsi_handle, prof->vsis)) {
+		if (blk == ICE_BLK_ACL) {
+			status = ice_flow_acl_set_xtrct_seq(hw, prof);
+			if (status)
+				return status;
+		}
 		status = ice_add_prof_id_flow(hw, blk,
 					      ice_get_hw_vsi_num(hw,
 								 vsi_handle),
@@ -1112,6 +1366,341 @@ ice_flow_rem_prof(struct ice_hw *hw, enum ice_block blk, u64 prof_id)
 	return status;
 }
 
+/**
+ * ice_flow_acl_check_actions - Checks the ACL rule's actions
+ * @hw: pointer to the hardware structure
+ * @acts: array of actions to be performed on a match
+ * @acts_cnt: number of actions
+ * @cnt_alloc: indicates if an ACL counter has been allocated.
+ */
+static enum ice_status
+ice_flow_acl_check_actions(struct ice_hw *hw, struct ice_flow_action *acts,
+			   u8 acts_cnt, bool *cnt_alloc)
+{
+	DECLARE_BITMAP(dup_check, ICE_AQC_TBL_MAX_ACTION_PAIRS * 2);
+	int i;
+
+	bitmap_zero(dup_check, ICE_AQC_TBL_MAX_ACTION_PAIRS * 2);
+	*cnt_alloc = false;
+
+	if (acts_cnt > ICE_FLOW_ACL_MAX_NUM_ACT)
+		return ICE_ERR_OUT_OF_RANGE;
+
+	for (i = 0; i < acts_cnt; i++) {
+		if (acts[i].type != ICE_FLOW_ACT_NOP &&
+		    acts[i].type != ICE_FLOW_ACT_DROP &&
+		    acts[i].type != ICE_FLOW_ACT_CNTR_PKT &&
+		    acts[i].type != ICE_FLOW_ACT_FWD_QUEUE)
+			return ICE_ERR_CFG;
+
+		/* If the caller wants to add two actions of the same type,
+		 * it is considered an invalid configuration.
+		 */
+		if (test_and_set_bit(acts[i].type, dup_check))
+			return ICE_ERR_PARAM;
+	}
+
+	/* Checks if ACL counters are needed. */
+	for (i = 0; i < acts_cnt; i++) {
+		if (acts[i].type == ICE_FLOW_ACT_CNTR_PKT ||
+		    acts[i].type == ICE_FLOW_ACT_CNTR_BYTES ||
+		    acts[i].type == ICE_FLOW_ACT_CNTR_PKT_BYTES) {
+			struct ice_acl_cntrs cntrs;
+			enum ice_status status;
+
+			cntrs.amount = 1;
+			cntrs.bank = 0; /* Only bank0 for the moment */
+
+			if (acts[i].type == ICE_FLOW_ACT_CNTR_PKT_BYTES)
+				cntrs.type = ICE_AQC_ACL_CNT_TYPE_DUAL;
+			else
+				cntrs.type = ICE_AQC_ACL_CNT_TYPE_SINGLE;
+
+			status = ice_aq_alloc_acl_cntrs(hw, &cntrs, NULL);
+			if (status)
+				return status;
+			/* Counter index within the bank */
+			acts[i].data.acl_act.value =
+						cpu_to_le16(cntrs.first_cntr);
+			*cnt_alloc = true;
+		}
+	}
+
+	return 0;
+}
+
+/**
+ * ice_flow_acl_frmt_entry_range - Format an ACL range checker for a given field
+ * @fld: number of the given field
+ * @info: info about field
+ * @range_buf: range checker configuration buffer
+ * @data: pointer to a data buffer containing flow entry's match values/masks
+ * @range: Input/output param indicating which range checkers are being used
+ */
+static void
+ice_flow_acl_frmt_entry_range(u16 fld, struct ice_flow_fld_info *info,
+			      struct ice_aqc_acl_profile_ranges *range_buf,
+			      u8 *data, u8 *range)
+{
+	u16 new_mask;
+
+	/* If not specified, default mask is all bits in field */
+	new_mask = (info->src.mask == ICE_FLOW_FLD_OFF_INVAL ?
+		    BIT(ice_flds_info[fld].size) - 1 :
+		    (*(u16 *)(data + info->src.mask))) << info->xtrct.disp;
+
+	/* If the mask is 0, then we don't need to worry about this input
+	 * range checker value.
+	 */
+	if (new_mask) {
+		u16 new_high =
+			(*(u16 *)(data + info->src.last)) << info->xtrct.disp;
+		u16 new_low =
+			(*(u16 *)(data + info->src.val)) << info->xtrct.disp;
+		u8 range_idx = info->entry.val;
+
+		range_buf->checker_cfg[range_idx].low_boundary =
+			cpu_to_be16(new_low);
+		range_buf->checker_cfg[range_idx].high_boundary =
+			cpu_to_be16(new_high);
+		range_buf->checker_cfg[range_idx].mask = cpu_to_be16(new_mask);
+
+		/* Indicate which range checker is being used */
+		*range |= BIT(range_idx);
+	}
+}
+
+/**
+ * ice_flow_acl_frmt_entry_fld - Partially format ACL entry for a given field
+ * @fld: number of the given field
+ * @info: info about the field
+ * @buf: buffer containing the entry
+ * @dontcare: buffer containing don't care mask for entry
+ * @data: pointer to a data buffer containing flow entry's match values/masks
+ */
+static void
+ice_flow_acl_frmt_entry_fld(u16 fld, struct ice_flow_fld_info *info, u8 *buf,
+			    u8 *dontcare, u8 *data)
+{
+	u16 dst, src, mask, k, end_disp, tmp_s = 0, tmp_m = 0;
+	bool use_mask = false;
+	u8 disp;
+
+	src = info->src.val;
+	mask = info->src.mask;
+	dst = info->entry.val - ICE_AQC_ACL_PROF_BYTE_SEL_START_IDX;
+	disp = info->xtrct.disp % BITS_PER_BYTE;
+
+	if (mask != ICE_FLOW_FLD_OFF_INVAL)
+		use_mask = true;
+
+	for (k = 0; k < info->entry.last; k++, dst++) {
+		/* Add overflow bits from previous byte */
+		buf[dst] = (tmp_s & 0xff00) >> 8;
+
+		/* If the mask is not valid, tmp_m is always zero, so this
+		 * just sets dontcare to 0 (no masked bits). If it is valid,
+		 * this pulls in the mask's overflow bits from the prev byte
+		 */
+		dontcare[dst] = (tmp_m & 0xff00) >> 8;
+
+		/* If there is displacement, the last byte will only contain
+		 * displaced data, but there is no more data to read from the
+		 * user buffer, so skip it to avoid reading beyond the end of
+		 * the user buffer
+		 */
+		if (!disp || k < info->entry.last - 1) {
+			/* Store shifted data to use in next byte */
+			tmp_s = data[src++] << disp;
+
+			/* Add current (shifted) byte */
+			buf[dst] |= tmp_s & 0xff;
+
+			/* Handle mask if valid */
+			if (use_mask) {
+				tmp_m = (~data[mask++] & 0xff) << disp;
+				dontcare[dst] |= tmp_m & 0xff;
+			}
+		}
+	}
+
+	/* Fill in don't care bits at beginning of field */
+	if (disp) {
+		dst = info->entry.val - ICE_AQC_ACL_PROF_BYTE_SEL_START_IDX;
+		for (k = 0; k < disp; k++)
+			dontcare[dst] |= BIT(k);
+	}
+
+	end_disp = (disp + ice_flds_info[fld].size) % BITS_PER_BYTE;
+
+	/* Fill in don't care bits at end of field */
+	if (end_disp) {
+		dst = info->entry.val - ICE_AQC_ACL_PROF_BYTE_SEL_START_IDX +
+		      info->entry.last - 1;
+		for (k = end_disp; k < BITS_PER_BYTE; k++)
+			dontcare[dst] |= BIT(k);
+	}
+}
+
+/**
+ * ice_flow_acl_frmt_entry - Format ACL entry
+ * @hw: pointer to the hardware structure
+ * @prof: pointer to flow profile
+ * @e: pointer to the flow entry
+ * @data: pointer to a data buffer containing flow entry's match values/masks
+ * @acts: array of actions to be performed on a match
+ * @acts_cnt: number of actions
+ *
+ * Formats the key (and key_inverse) to be matched from the data passed in,
+ * along with data from the flow profile. This key/key_inverse pair makes up
+ * the 'entry' for an ACL flow entry.
+ */
+static enum ice_status
+ice_flow_acl_frmt_entry(struct ice_hw *hw, struct ice_flow_prof *prof,
+			struct ice_flow_entry *e, u8 *data,
+			struct ice_flow_action *acts, u8 acts_cnt)
+{
+	u8 *buf = NULL, *dontcare = NULL, *key = NULL, range = 0, dir_flag_msk;
+	struct ice_aqc_acl_profile_ranges *range_buf = NULL;
+	enum ice_status status;
+	bool cnt_alloc;
+	u8 prof_id = 0;
+	u16 i, buf_sz;
+
+	status = ice_flow_get_hw_prof(hw, ICE_BLK_ACL, prof->id, &prof_id);
+	if (status)
+		return status;
+
+	/* Format the result action */
+
+	status = ice_flow_acl_check_actions(hw, acts, acts_cnt, &cnt_alloc);
+	if (status)
+		return status;
+
+	status = ICE_ERR_NO_MEMORY;
+
+	e->acts = devm_kmemdup(ice_hw_to_dev(hw), acts,
+			       acts_cnt * sizeof(*acts), GFP_KERNEL);
+	if (!e->acts)
+		goto out;
+
+	e->acts_cnt = acts_cnt;
+
+	/* Format the matching data */
+	buf_sz = prof->cfg.scen->width;
+	buf = kzalloc(buf_sz, GFP_KERNEL);
+	if (!buf)
+		goto out;
+
+	dontcare = kzalloc(buf_sz, GFP_KERNEL);
+	if (!dontcare)
+		goto out;
+
+	/* 'key' buffer will store both key and key_inverse, so it must be
+	 * twice the size of buf
+	 */
+	key = devm_kzalloc(ice_hw_to_dev(hw), buf_sz * 2, GFP_KERNEL);
+	if (!key)
+		goto out;
+
+	range_buf = devm_kzalloc(ice_hw_to_dev(hw),
+				 sizeof(struct ice_aqc_acl_profile_ranges),
+				 GFP_KERNEL);
+	if (!range_buf)
+		goto out;
+
+	/* Set don't care mask to all 1's to start, will zero out used bytes */
+	memset(dontcare, 0xff, buf_sz);
+
+	for (i = 0; i < prof->segs_cnt; i++) {
+		struct ice_flow_seg_info *seg = &prof->segs[i];
+		u8 j;
+
+		for_each_set_bit(j, (unsigned long *)&seg->match,
+				 ICE_FLOW_FIELD_IDX_MAX) {
+			struct ice_flow_fld_info *info = &seg->fields[j];
+
+			if (info->type == ICE_FLOW_FLD_TYPE_RANGE)
+				ice_flow_acl_frmt_entry_range(j, info,
+							      range_buf, data,
+							      &range);
+			else
+				ice_flow_acl_frmt_entry_fld(j, info, buf,
+							    dontcare, data);
+		}
+
+		for (j = 0; j < seg->raws_cnt; j++) {
+			struct ice_flow_fld_info *info = &seg->raws[j].info;
+			u16 dst, src, mask, k;
+			bool use_mask = false;
+
+			src = info->src.val;
+			dst = info->entry.val -
+					ICE_AQC_ACL_PROF_BYTE_SEL_START_IDX;
+			mask = info->src.mask;
+
+			if (mask != ICE_FLOW_FLD_OFF_INVAL)
+				use_mask = true;
+
+			for (k = 0; k < info->entry.last; k++, dst++) {
+				buf[dst] = data[src++];
+				if (use_mask)
+					dontcare[dst] = ~data[mask++];
+				else
+					dontcare[dst] = 0;
+			}
+		}
+	}
+
+	buf[prof->cfg.scen->pid_idx] = (u8)prof_id;
+	dontcare[prof->cfg.scen->pid_idx] = 0;
+
+	/* Format the buffer for direction flags */
+	dir_flag_msk = BIT(ICE_FLG_PKT_DIR);
+
+	if (prof->dir == ICE_FLOW_RX)
+		buf[prof->cfg.scen->pkt_dir_idx] = dir_flag_msk;
+
+	if (range) {
+		buf[prof->cfg.scen->rng_chk_idx] = range;
+		/* Mark any unused range checkers as don't care */
+		dontcare[prof->cfg.scen->rng_chk_idx] = ~range;
+		e->range_buf = range_buf;
+	} else {
+		devm_kfree(ice_hw_to_dev(hw), range_buf);
+	}
+
+	status = ice_set_key(key, buf_sz * 2, buf, NULL, dontcare, NULL, 0,
+			     buf_sz);
+	if (status)
+		goto out;
+
+	e->entry = key;
+	e->entry_sz = buf_sz * 2;
+
+out:
+	kfree(buf);
+	kfree(dontcare);
+
+	if (status && key)
+		devm_kfree(ice_hw_to_dev(hw), key);
+
+	if (status && range_buf) {
+		devm_kfree(ice_hw_to_dev(hw), range_buf);
+		e->range_buf = NULL;
+	}
+
+	if (status && e->acts) {
+		devm_kfree(ice_hw_to_dev(hw), e->acts);
+		e->acts = NULL;
+		e->acts_cnt = 0;
+	}
+
+	if (status && cnt_alloc)
+		ice_flow_acl_free_act_cntr(hw, acts, acts_cnt);
+
+	return status;
+}
 /**
  * ice_flow_add_entry - Add a flow entry
  * @hw: pointer to the HW struct
@@ -1121,17 +1710,24 @@ ice_flow_rem_prof(struct ice_hw *hw, enum ice_block blk, u64 prof_id)
  * @vsi_handle: software VSI handle for the flow entry
  * @prio: priority of the flow entry
  * @data: pointer to a data buffer containing flow entry's match values/masks
+ * @acts: array of actions to be performed on a match
+ * @acts_cnt: number of actions
  * @entry_h: pointer to buffer that receives the new flow entry's handle
  */
 enum ice_status
 ice_flow_add_entry(struct ice_hw *hw, enum ice_block blk, u64 prof_id,
 		   u64 entry_id, u16 vsi_handle, enum ice_flow_priority prio,
-		   void *data, u64 *entry_h)
+		   void *data, struct ice_flow_action *acts, u8 acts_cnt,
+		   u64 *entry_h)
 {
 	struct ice_flow_entry *e = NULL;
 	struct ice_flow_prof *prof;
 	enum ice_status status;
 
+	/* ACL entries must indicate an action */
+	if (blk == ICE_BLK_ACL && (!acts || !acts_cnt))
+		return ICE_ERR_PARAM;
+
 	/* No flow entry data is expected for RSS */
 	if (!entry_h || (!data && blk != ICE_BLK_RSS))
 		return ICE_ERR_BAD_PTR;
@@ -1168,14 +1764,24 @@ ice_flow_add_entry(struct ice_hw *hw, enum ice_block blk, u64 prof_id,
 	case ICE_BLK_FD:
 	case ICE_BLK_RSS:
 		break;
+	case ICE_BLK_ACL:
+		/* ACL will handle the entry management */
+		status = ice_flow_acl_frmt_entry(hw, prof, e, (u8 *)data, acts,
+						 acts_cnt);
+		if (status)
+			goto out;
+		break;
 	default:
 		status = ICE_ERR_NOT_IMPL;
 		goto out;
 	}
 
-	mutex_lock(&prof->entries_lock);
-	list_add(&e->l_entry, &prof->entries);
-	mutex_unlock(&prof->entries_lock);
+	if (blk != ICE_BLK_ACL) {
+		/* ACL will handle the entry management */
+		mutex_lock(&prof->entries_lock);
+		list_add(&e->l_entry, &prof->entries);
+		mutex_unlock(&prof->entries_lock);
+	}
 
 	*entry_h = ICE_FLOW_ENTRY_HNDL(e);
 
diff --git a/drivers/net/ethernet/intel/ice/ice_flow.h b/drivers/net/ethernet/intel/ice/ice_flow.h
index f0cea38e8e78..ba3ceaf30b93 100644
--- a/drivers/net/ethernet/intel/ice/ice_flow.h
+++ b/drivers/net/ethernet/intel/ice/ice_flow.h
@@ -189,11 +189,17 @@ struct ice_flow_entry {
 
 	u64 id;
 	struct ice_flow_prof *prof;
+	/* Action list */
+	struct ice_flow_action *acts;
 	/* Flow entry's content */
 	void *entry;
+	/* Range buffer (For ACL only) */
+	struct ice_aqc_acl_profile_ranges *range_buf;
 	enum ice_flow_priority priority;
 	u16 vsi_handle;
 	u16 entry_sz;
+#define ICE_FLOW_ACL_MAX_NUM_ACT	2
+	u8 acts_cnt;
 };
 
 #define ICE_FLOW_ENTRY_HNDL(e)	((u64)(uintptr_t)e)
@@ -257,7 +263,8 @@ ice_flow_rem_prof(struct ice_hw *hw, enum ice_block blk, u64 prof_id);
 enum ice_status
 ice_flow_add_entry(struct ice_hw *hw, enum ice_block blk, u64 prof_id,
 		   u64 entry_id, u16 vsi, enum ice_flow_priority prio,
-		   void *data, u64 *entry_h);
+		   void *data, struct ice_flow_action *acts, u8 acts_cnt,
+		   u64 *entry_h);
 enum ice_status
 ice_flow_rem_entry(struct ice_hw *hw, enum ice_block blk, u64 entry_h);
 void
diff --git a/drivers/net/ethernet/intel/ice/ice_lan_tx_rx.h b/drivers/net/ethernet/intel/ice/ice_lan_tx_rx.h
index 4ec24c3e813f..d2360a514e3e 100644
--- a/drivers/net/ethernet/intel/ice/ice_lan_tx_rx.h
+++ b/drivers/net/ethernet/intel/ice/ice_lan_tx_rx.h
@@ -309,6 +309,8 @@ enum ice_flex_mdid_pkt_flags {
 enum ice_flex_rx_mdid {
 	ICE_RX_MDID_FLOW_ID_LOWER	= 5,
 	ICE_RX_MDID_FLOW_ID_HIGH,
+	ICE_MDID_RX_PKT_DROP	= 8,
+	ICE_MDID_RX_DST_Q		= 12,
 	ICE_RX_MDID_SRC_VSI		= 19,
 	ICE_RX_MDID_HASH_LOW		= 56,
 	ICE_RX_MDID_HASH_HIGH,
@@ -317,6 +319,7 @@ enum ice_flex_rx_mdid {
 /* Rx/Tx Flag64 packet flag bits */
 enum ice_flg64_bits {
 	ICE_FLG_PKT_DSI		= 0,
+	ICE_FLG_PKT_DIR		= 4,
 	ICE_FLG_EVLAN_x8100	= 14,
 	ICE_FLG_EVLAN_x9100,
 	ICE_FLG_VLAN_x8100,
-- 
2.26.2


* [net-next v3 07/15] ice: program ACL entry
  2020-11-13 21:44 [net-next v3 00/15][pull request] 100GbE Intel Wired LAN Driver Updates 2020-11-13 Tony Nguyen
                   ` (5 preceding siblings ...)
  2020-11-13 21:44 ` [net-next v3 06/15] ice: create ACL entry Tony Nguyen
@ 2020-11-13 21:44 ` Tony Nguyen
  2020-11-13 21:44 ` [net-next v3 08/15] ice: don't always return an error for Get PHY Abilities AQ command Tony Nguyen
                   ` (7 subsequent siblings)
  14 siblings, 0 replies; 28+ messages in thread
From: Tony Nguyen @ 2020-11-13 21:44 UTC (permalink / raw)
  To: davem, kuba
  Cc: Real Valiquette, netdev, sassmann, anthony.l.nguyen, Chinh Cao,
	Brijesh Behera

From: Real Valiquette <real.valiquette@intel.com>

Complete the filter programming process; set the flow entry and action into
the scenario and write it to hardware. Configure the VSI for ACL filters.
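
As an illustration (interface name and rule location are hypothetical),
once this programming path is in place the resulting rules can be listed
and removed through the usual ethtool interfaces:

  # List the programmed ntuple rules (ACL and Flow Director alike)
  ethtool -n eth0

  # Delete the rule at location 5
  ethtool -N eth0 delete 5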

Co-developed-by: Chinh Cao <chinh.t.cao@intel.com>
Signed-off-by: Chinh Cao <chinh.t.cao@intel.com>
Signed-off-by: Real Valiquette <real.valiquette@intel.com>
Co-developed-by: Tony Nguyen <anthony.l.nguyen@intel.com>
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
Tested-by: Brijesh Behera <brijeshx.behera@intel.com>
---
 drivers/net/ethernet/intel/ice/ice.h          |   3 +
 drivers/net/ethernet/intel/ice/ice_acl.c      |  48 ++-
 drivers/net/ethernet/intel/ice/ice_acl.h      |  23 +
 drivers/net/ethernet/intel/ice/ice_acl_ctrl.c | 260 +++++++++++
 drivers/net/ethernet/intel/ice/ice_acl_main.c |   4 +
 .../net/ethernet/intel/ice/ice_adminq_cmd.h   |   2 +
 .../net/ethernet/intel/ice/ice_ethtool_fdir.c |  58 ++-
 drivers/net/ethernet/intel/ice/ice_flow.c     | 406 ++++++++++++++++++
 drivers/net/ethernet/intel/ice/ice_flow.h     |   3 +
 drivers/net/ethernet/intel/ice/ice_lib.c      |  10 +-
 10 files changed, 805 insertions(+), 12 deletions(-)

diff --git a/drivers/net/ethernet/intel/ice/ice.h b/drivers/net/ethernet/intel/ice/ice.h
index 31eea8bd92f2..fa24826c5af7 100644
--- a/drivers/net/ethernet/intel/ice/ice.h
+++ b/drivers/net/ethernet/intel/ice/ice.h
@@ -620,6 +620,9 @@ int ice_fdir_create_dflt_rules(struct ice_pf *pf);
 enum ice_fltr_ptype ice_ethtool_flow_to_fltr(int eth);
 int ice_aq_wait_for_event(struct ice_pf *pf, u16 opcode, unsigned long timeout,
 			  struct ice_rq_event_info *event);
+int
+ice_ntuple_update_list_entry(struct ice_pf *pf, struct ice_fdir_fltr *input,
+			     int fltr_idx);
 int ice_open(struct net_device *netdev);
 int ice_stop(struct net_device *netdev);
 void ice_service_task_schedule(struct ice_pf *pf);
diff --git a/drivers/net/ethernet/intel/ice/ice_acl.c b/drivers/net/ethernet/intel/ice/ice_acl.c
index 767cccc3ba67..a897dd9bfcde 100644
--- a/drivers/net/ethernet/intel/ice/ice_acl.c
+++ b/drivers/net/ethernet/intel/ice/ice_acl.c
@@ -171,7 +171,8 @@ ice_acl_prof_aq_send(struct ice_hw *hw, u16 opc, u8 prof_id,
 
 	ice_fill_dflt_direct_cmd_desc(&desc, opc);
 	desc.params.profile.profile_id = prof_id;
-	if (opc == ice_aqc_opc_program_acl_prof_extraction)
+	if (opc == ice_aqc_opc_program_acl_prof_extraction ||
+	    opc == ice_aqc_opc_program_acl_prof_ranges)
 		desc.flags |= cpu_to_le16(ICE_AQ_FLAG_RD);
 	return ice_aq_send_cmd(hw, &desc, buf, sizeof(*buf), cd);
 }
@@ -323,6 +324,51 @@ ice_aq_dealloc_acl_cntrs(struct ice_hw *hw, struct ice_acl_cntrs *cntrs,
 	return ice_aq_send_cmd(hw, &desc, NULL, 0, cd);
 }
 
+/**
+ * ice_prog_acl_prof_ranges - program ACL profile ranges
+ * @hw: pointer to the HW struct
+ * @prof_id: programmed or updated profile ID
+ * @buf: pointer to input buffer
+ * @cd: pointer to command details structure or NULL
+ *
+ * Program ACL profile ranges (indirect 0x0C1E)
+ */
+enum ice_status
+ice_prog_acl_prof_ranges(struct ice_hw *hw, u8 prof_id,
+			 struct ice_aqc_acl_profile_ranges *buf,
+			 struct ice_sq_cd *cd)
+{
+	struct ice_aq_desc desc;
+
+	ice_fill_dflt_direct_cmd_desc(&desc,
+				      ice_aqc_opc_program_acl_prof_ranges);
+	desc.params.profile.profile_id = prof_id;
+	desc.flags |= cpu_to_le16(ICE_AQ_FLAG_RD);
+	return ice_aq_send_cmd(hw, &desc, buf, sizeof(*buf), cd);
+}
+
+/**
+ * ice_query_acl_prof_ranges - query ACL profile ranges
+ * @hw: pointer to the HW struct
+ * @prof_id: programmed or updated profile ID
+ * @buf: pointer to response buffer
+ * @cd: pointer to command details structure or NULL
+ *
+ * Query ACL profile ranges (indirect 0x0C22)
+ */
+enum ice_status
+ice_query_acl_prof_ranges(struct ice_hw *hw, u8 prof_id,
+			  struct ice_aqc_acl_profile_ranges *buf,
+			  struct ice_sq_cd *cd)
+{
+	struct ice_aq_desc desc;
+
+	ice_fill_dflt_direct_cmd_desc(&desc,
+				      ice_aqc_opc_query_acl_prof_ranges);
+	desc.params.profile.profile_id = prof_id;
+	return ice_aq_send_cmd(hw, &desc, buf, sizeof(*buf), cd);
+}
+
 /**
  * ice_aq_alloc_acl_scen - allocate ACL scenario
  * @hw: pointer to the HW struct
diff --git a/drivers/net/ethernet/intel/ice/ice_acl.h b/drivers/net/ethernet/intel/ice/ice_acl.h
index 8235d16bd162..b0bb261b28b7 100644
--- a/drivers/net/ethernet/intel/ice/ice_acl.h
+++ b/drivers/net/ethernet/intel/ice/ice_acl.h
@@ -44,6 +44,7 @@ struct ice_acl_tbl {
 	u16 id;
 };
 
+#define ICE_MAX_ACL_TCAM_ENTRY (ICE_AQC_ACL_TCAM_DEPTH * ICE_AQC_ACL_SLICES)
 enum ice_acl_entry_prio {
 	ICE_ACL_PRIO_LOW = 0,
 	ICE_ACL_PRIO_NORMAL,
@@ -66,6 +67,11 @@ struct ice_acl_scen {
 	 * participate in this scenario
 	 */
 	DECLARE_BITMAP(act_mem_bitmap, ICE_AQC_MAX_ACTION_MEMORIES);
+
+	/* If the nth bit of entry_bitmap is set, then the nth entry
+	 * is in use in this scenario
+	 */
+	DECLARE_BITMAP(entry_bitmap, ICE_MAX_ACL_TCAM_ENTRY);
 	u16 first_idx[ICE_ACL_MAX_PRIO];
 	u16 last_idx[ICE_ACL_MAX_PRIO];
 
@@ -151,6 +157,14 @@ enum ice_status
 ice_aq_dealloc_acl_cntrs(struct ice_hw *hw, struct ice_acl_cntrs *cntrs,
 			 struct ice_sq_cd *cd);
 enum ice_status
+ice_prog_acl_prof_ranges(struct ice_hw *hw, u8 prof_id,
+			 struct ice_aqc_acl_profile_ranges *buf,
+			 struct ice_sq_cd *cd);
+enum ice_status
+ice_query_acl_prof_ranges(struct ice_hw *hw, u8 prof_id,
+			  struct ice_aqc_acl_profile_ranges *buf,
+			  struct ice_sq_cd *cd);
+enum ice_status
 ice_aq_alloc_acl_scen(struct ice_hw *hw, u16 *scen_id,
 		      struct ice_aqc_acl_scen *buf, struct ice_sq_cd *cd);
 enum ice_status
@@ -161,5 +175,14 @@ ice_aq_update_acl_scen(struct ice_hw *hw, u16 scen_id,
 enum ice_status
 ice_aq_query_acl_scen(struct ice_hw *hw, u16 scen_id,
 		      struct ice_aqc_acl_scen *buf, struct ice_sq_cd *cd);
+enum ice_status
+ice_acl_add_entry(struct ice_hw *hw, struct ice_acl_scen *scen,
+		  enum ice_acl_entry_prio prio, u8 *keys, u8 *inverts,
+		  struct ice_acl_act_entry *acts, u8 acts_cnt, u16 *entry_idx);
+enum ice_status
+ice_acl_prog_act(struct ice_hw *hw, struct ice_acl_scen *scen,
+		 struct ice_acl_act_entry *acts, u8 acts_cnt, u16 entry_idx);
+enum ice_status
+ice_acl_rem_entry(struct ice_hw *hw, struct ice_acl_scen *scen, u16 entry_idx);
 
 #endif /* _ICE_ACL_H_ */
diff --git a/drivers/net/ethernet/intel/ice/ice_acl_ctrl.c b/drivers/net/ethernet/intel/ice/ice_acl_ctrl.c
index 84a96ccf40d5..b345c0d5b710 100644
--- a/drivers/net/ethernet/intel/ice/ice_acl_ctrl.c
+++ b/drivers/net/ethernet/intel/ice/ice_acl_ctrl.c
@@ -6,6 +6,11 @@
 /* Determine the TCAM index of entry 'e' within the ACL table */
 #define ICE_ACL_TBL_TCAM_IDX(e) ((e) / ICE_AQC_ACL_TCAM_DEPTH)
 
+/* Determine the entry index within the TCAM */
+#define ICE_ACL_TBL_TCAM_ENTRY_IDX(e) ((e) % ICE_AQC_ACL_TCAM_DEPTH)
+
+#define ICE_ACL_SCEN_ENTRY_INVAL 0xFFFF
+
 /**
  * ice_acl_init_entry
  * @scen: pointer to the scenario struct
@@ -29,6 +34,56 @@ static void ice_acl_init_entry(struct ice_acl_scen *scen)
 	scen->last_idx[ICE_ACL_PRIO_HIGH] = scen->num_entry / 4 - 1;
 }
 
+/**
+ * ice_acl_scen_assign_entry_idx
+ * @scen: pointer to the scenario struct
+ * @prio: the priority of the flow entry being allocated
+ *
+ * Find the index of an available entry in the scenario
+ *
+ * Returns ICE_ACL_SCEN_ENTRY_INVAL on failure
+ * Returns the assigned index on success
+ */
+static u16
+ice_acl_scen_assign_entry_idx(struct ice_acl_scen *scen,
+			      enum ice_acl_entry_prio prio)
+{
+	u16 first_idx, last_idx, i;
+	s8 step;
+
+	if (prio >= ICE_ACL_MAX_PRIO)
+		return ICE_ACL_SCEN_ENTRY_INVAL;
+
+	first_idx = scen->first_idx[prio];
+	last_idx = scen->last_idx[prio];
+	step = first_idx <= last_idx ? 1 : -1;
+
+	for (i = first_idx; i != last_idx + step; i += step)
+		if (!test_and_set_bit(i, scen->entry_bitmap))
+			return i;
+
+	return ICE_ACL_SCEN_ENTRY_INVAL;
+}
+
+/**
+ * ice_acl_scen_free_entry_idx
+ * @scen: pointer to the scenario struct
+ * @idx: the index of the flow entry being de-allocated
+ *
+ * Mark an entry as available in the scenario
+ */
+static enum ice_status
+ice_acl_scen_free_entry_idx(struct ice_acl_scen *scen, u16 idx)
+{
+	if (idx >= scen->num_entry)
+		return ICE_ERR_MAX_LIMIT;
+
+	if (!test_and_clear_bit(idx, scen->entry_bitmap))
+		return ICE_ERR_DOES_NOT_EXIST;
+
+	return 0;
+}
+
 /**
  * ice_acl_tbl_calc_end_idx
  * @start: start index of the TCAM entry of this partition
@@ -883,3 +938,208 @@ enum ice_status ice_acl_destroy_tbl(struct ice_hw *hw)
 
 	return 0;
 }
+
+/**
+ * ice_acl_add_entry - Add a flow entry to an ACL scenario
+ * @hw: pointer to the HW struct
+ * @scen: scenario to add the entry to
+ * @prio: priority level of the entry being added
+ * @keys: buffer of the value of the key to be programmed to the ACL entry
+ * @inverts: buffer of the value of the key inverts to be programmed
+ * @acts: pointer to a buffer containing formatted actions
+ * @acts_cnt: indicates the number of actions stored in "acts"
+ * @entry_idx: returned scenario relative index of the added flow entry
+ *
+ * Given an ACL table and a scenario, add the specified key and key invert
+ * to an available entry in the specified scenario.
+ * The "keys" and "inverts" buffers must be the same size as the scenario's
+ * width
+ */
+enum ice_status
+ice_acl_add_entry(struct ice_hw *hw, struct ice_acl_scen *scen,
+		  enum ice_acl_entry_prio prio, u8 *keys, u8 *inverts,
+		  struct ice_acl_act_entry *acts, u8 acts_cnt, u16 *entry_idx)
+{
+	u8 i, entry_tcam, num_cscd, offset;
+	struct ice_aqc_acl_data buf;
+	enum ice_status status = 0;
+	u16 idx;
+
+	if (!scen)
+		return ICE_ERR_DOES_NOT_EXIST;
+
+	*entry_idx = ice_acl_scen_assign_entry_idx(scen, prio);
+	if (*entry_idx >= scen->num_entry) {
+		*entry_idx = 0;
+		return ICE_ERR_MAX_LIMIT;
+	}
+
+	/* Determine number of cascaded TCAMs */
+	num_cscd = DIV_ROUND_UP(scen->width, ICE_AQC_ACL_KEY_WIDTH_BYTES);
+
+	entry_tcam = ICE_ACL_TBL_TCAM_IDX(scen->start);
+	idx = ICE_ACL_TBL_TCAM_ENTRY_IDX(scen->start + *entry_idx);
+
+	memset(&buf, 0, sizeof(buf));
+	for (i = 0; i < num_cscd; i++) {
+		/* If the key spans more than one TCAM in the case of cascaded
+		 * TCAMs, the key and key inverts need to be properly split
+		 * among TCAMs. E.g. bytes 0 - 4 go to an index in the first
+		 * TCAM and bytes 5 - 9 go to the same index in the next TCAM,
+		 * etc.
+		 * If the entry spans more than one TCAM in a cascaded TCAM
+		 * mode, the programming of the entries in the TCAMs must be in
+		 * reversed order - the TCAM entry of the rightmost TCAM should
+		 * be programmed first; the TCAM entry of the leftmost TCAM
+		 * should be programmed last.
+		 */
+		offset = num_cscd - i - 1;
+		memcpy(&buf.entry_key.val,
+		       &keys[offset * sizeof(buf.entry_key.val)],
+		       sizeof(buf.entry_key.val));
+		memcpy(&buf.entry_key_invert.val,
+		       &inverts[offset * sizeof(buf.entry_key_invert.val)],
+		       sizeof(buf.entry_key_invert.val));
+		status = ice_aq_program_acl_entry(hw, entry_tcam + offset, idx,
+						  &buf, NULL);
+		if (status) {
+			ice_debug(hw, ICE_DBG_ACL, "aq program acl entry failed status: %d\n",
+				  status);
+			goto out;
+		}
+	}
+
+	/* Program the action memory */
+	status = ice_acl_prog_act(hw, scen, acts, acts_cnt, *entry_idx);
+
+out:
+	if (status) {
+		ice_acl_rem_entry(hw, scen, *entry_idx);
+		*entry_idx = 0;
+	}
+
+	return status;
+}
+
+/**
+ * ice_acl_prog_act - Program a scenario's action memory
+ * @hw: pointer to the HW struct
+ * @scen: scenario to add the entry to
+ * @acts: pointer to a buffer containing formatted actions
+ * @acts_cnt: indicates the number of actions stored in "acts"
+ * @entry_idx: scenario relative index of the added flow entry
+ *
+ * Program a scenario's action memory
+ */
+enum ice_status
+ice_acl_prog_act(struct ice_hw *hw, struct ice_acl_scen *scen,
+		 struct ice_acl_act_entry *acts, u8 acts_cnt,
+		 u16 entry_idx)
+{
+	u8 entry_tcam, num_cscd, i, actx_idx = 0;
+	struct ice_aqc_actpair act_buf;
+	enum ice_status status = 0;
+	u16 idx;
+
+	if (entry_idx >= scen->num_entry)
+		return ICE_ERR_MAX_LIMIT;
+
+	memset(&act_buf, 0, sizeof(act_buf));
+
+	/* Determine number of cascaded TCAMs */
+	num_cscd = DIV_ROUND_UP(scen->width, ICE_AQC_ACL_KEY_WIDTH_BYTES);
+
+	entry_tcam = ICE_ACL_TBL_TCAM_IDX(scen->start);
+	idx = ICE_ACL_TBL_TCAM_ENTRY_IDX(scen->start + entry_idx);
+
+	for_each_set_bit(i, scen->act_mem_bitmap, ICE_AQC_MAX_ACTION_MEMORIES) {
+		struct ice_acl_act_mem *mem = &hw->acl_tbl->act_mems[i];
+
+		if (actx_idx >= acts_cnt)
+			break;
+		if (mem->member_of_tcam >= entry_tcam &&
+		    mem->member_of_tcam < entry_tcam + num_cscd) {
+			memcpy(&act_buf.act[0], &acts[actx_idx],
+			       sizeof(struct ice_acl_act_entry));
+
+			if (++actx_idx < acts_cnt) {
+				memcpy(&act_buf.act[1], &acts[actx_idx],
+				       sizeof(struct ice_acl_act_entry));
+			}
+
+			status = ice_aq_program_actpair(hw, i, idx, &act_buf,
+							NULL);
+			if (status) {
+				ice_debug(hw, ICE_DBG_ACL, "program actpair failed status: %d\n",
+					  status);
+				break;
+			}
+			actx_idx++;
+		}
+	}
+
+	if (!status && actx_idx < acts_cnt)
+		status = ICE_ERR_MAX_LIMIT;
+
+	return status;
+}
+
+/**
+ * ice_acl_rem_entry - Remove a flow entry from an ACL scenario
+ * @hw: pointer to the HW struct
+ * @scen: scenario to remove the entry from
+ * @entry_idx: the scenario-relative index of the flow entry being removed
+ */
+enum ice_status
+ice_acl_rem_entry(struct ice_hw *hw, struct ice_acl_scen *scen, u16 entry_idx)
+{
+	struct ice_aqc_actpair act_buf;
+	struct ice_aqc_acl_data buf;
+	u8 entry_tcam, num_cscd, i;
+	enum ice_status status = 0;
+	u16 idx;
+
+	if (!scen)
+		return ICE_ERR_DOES_NOT_EXIST;
+
+	if (entry_idx >= scen->num_entry)
+		return ICE_ERR_MAX_LIMIT;
+
+	if (!test_bit(entry_idx, scen->entry_bitmap))
+		return ICE_ERR_DOES_NOT_EXIST;
+
+	/* Determine number of cascaded TCAMs */
+	num_cscd = DIV_ROUND_UP(scen->width, ICE_AQC_ACL_KEY_WIDTH_BYTES);
+
+	entry_tcam = ICE_ACL_TBL_TCAM_IDX(scen->start);
+	idx = ICE_ACL_TBL_TCAM_ENTRY_IDX(scen->start + entry_idx);
+
+	/* invalidate the flow entry */
+	memset(&buf, 0, sizeof(buf));
+	for (i = 0; i < num_cscd; i++) {
+		status = ice_aq_program_acl_entry(hw, entry_tcam + i, idx, &buf,
+						  NULL);
+		if (status)
+			ice_debug(hw, ICE_DBG_ACL, "AQ program ACL entry failed status: %d\n",
+				  status);
+	}
+
+	memset(&act_buf, 0, sizeof(act_buf));
+
+	for_each_set_bit(i, scen->act_mem_bitmap, ICE_AQC_MAX_ACTION_MEMORIES) {
+		struct ice_acl_act_mem *mem = &hw->acl_tbl->act_mems[i];
+
+		if (mem->member_of_tcam >= entry_tcam &&
+		    mem->member_of_tcam < entry_tcam + num_cscd) {
+			/* Invalidate allocated action pairs */
+			status = ice_aq_program_actpair(hw, i, idx, &act_buf,
+							NULL);
+			if (status)
+				ice_debug(hw, ICE_DBG_ACL, "program actpair failed status: %d\n",
+					  status);
+		}
+	}
+
+	ice_acl_scen_free_entry_idx(scen, entry_idx);
+
+	return status;
+}
diff --git a/drivers/net/ethernet/intel/ice/ice_acl_main.c b/drivers/net/ethernet/intel/ice/ice_acl_main.c
index 3b56194ab3fc..c5d6c26ddbb1 100644
--- a/drivers/net/ethernet/intel/ice/ice_acl_main.c
+++ b/drivers/net/ethernet/intel/ice/ice_acl_main.c
@@ -315,6 +315,10 @@ int ice_acl_add_rule_ethtool(struct ice_vsi *vsi, struct ethtool_rxnfc *cmd)
 		hw_prof->entry_h[hw_prof->cnt++][0] = entry_h;
 	}
 
+	input->acl_fltr = true;
+	/* input struct is added to the HW filter list */
+	ice_ntuple_update_list_entry(pf, input, fsp->location);
+
 	return 0;
 
 free_input:
diff --git a/drivers/net/ethernet/intel/ice/ice_adminq_cmd.h b/drivers/net/ethernet/intel/ice/ice_adminq_cmd.h
index 5449c5f6e10c..ddaf8df23480 100644
--- a/drivers/net/ethernet/intel/ice/ice_adminq_cmd.h
+++ b/drivers/net/ethernet/intel/ice/ice_adminq_cmd.h
@@ -2415,8 +2415,10 @@ enum ice_adminq_opc {
 	ice_aqc_opc_update_acl_scen			= 0x0C1B,
 	ice_aqc_opc_program_acl_actpair			= 0x0C1C,
 	ice_aqc_opc_program_acl_prof_extraction		= 0x0C1D,
+	ice_aqc_opc_program_acl_prof_ranges		= 0x0C1E,
 	ice_aqc_opc_program_acl_entry			= 0x0C20,
 	ice_aqc_opc_query_acl_prof			= 0x0C21,
+	ice_aqc_opc_query_acl_prof_ranges		= 0x0C22,
 	ice_aqc_opc_query_acl_scen			= 0x0C23,
 
 	/* Tx queue handling commands/events */
diff --git a/drivers/net/ethernet/intel/ice/ice_ethtool_fdir.c b/drivers/net/ethernet/intel/ice/ice_ethtool_fdir.c
index dd495f6a4adf..98261e7e7b85 100644
--- a/drivers/net/ethernet/intel/ice/ice_ethtool_fdir.c
+++ b/drivers/net/ethernet/intel/ice/ice_ethtool_fdir.c
@@ -1482,6 +1482,22 @@ void ice_vsi_manage_fdir(struct ice_vsi *vsi, bool ena)
 	mutex_unlock(&hw->fdir_fltr_lock);
 }
 
+/**
+ * ice_del_acl_ethtool - delete an ACL rule entry
+ * @hw: pointer to HW instance
+ * @fltr: filter structure
+ *
+ * returns 0 on success and negative value on error
+ */
+static int
+ice_del_acl_ethtool(struct ice_hw *hw, struct ice_fdir_fltr *fltr)
+{
+	u64 entry;
+
+	entry = ice_flow_find_entry(hw, ICE_BLK_ACL, fltr->fltr_id);
+	return ice_status_to_errno(ice_flow_rem_entry(hw, ICE_BLK_ACL, entry));
+}
+
 /**
  * ice_fdir_do_rem_flow - delete flow and possibly add perfect flow
  * @pf: PF structure
@@ -1515,7 +1531,7 @@ ice_fdir_do_rem_flow(struct ice_pf *pf, enum ice_fltr_ptype flow_type)
  *
  * returns 0 on success and negative on errors
  */
-static int
+int
 ice_ntuple_update_list_entry(struct ice_pf *pf, struct ice_fdir_fltr *input,
 			     int fltr_idx)
 {
@@ -1529,18 +1545,40 @@ ice_ntuple_update_list_entry(struct ice_pf *pf, struct ice_fdir_fltr *input,
 
 	old_fltr = ice_fdir_find_fltr_by_idx(hw, fltr_idx);
 	if (old_fltr) {
-		err = ice_fdir_write_all_fltr(pf, old_fltr, false);
-		if (err)
-			return err;
-		ice_fdir_update_cntrs(hw, old_fltr->flow_type,
-				      false, false);
-		if (!input && !hw->fdir_fltr_cnt[old_fltr->flow_type])
-			/* we just deleted the last filter of flow_type so we
-			 * should also delete the HW filter info.
+		if (!old_fltr->acl_fltr) {
+			/* FD filter */
+			err = ice_fdir_write_all_fltr(pf, old_fltr, false);
+			if (err)
+				return err;
+		} else {
+			/* ACL filter - if the input buffer is present
+			 * then this is an update and we don't want to
+			 * delete the filter from the HW; we've already
+			 * written the change to the HW at this point, so
+			 * just update the SW structures to make sure
+			 * everything is consistent. If no input then this
+			 * is a delete, so we should delete the filter from
+			 * the HW and clean up our SW structures.
 			 */
+			if (!input) {
+				err = ice_del_acl_ethtool(hw, old_fltr);
+				if (err)
+					return err;
+			}
+		}
+		ice_fdir_update_cntrs(hw, old_fltr->flow_type,
+				      old_fltr->acl_fltr, false);
+		/* Also delete the HW filter info if we have just deleted the
+		 * last filter of flow_type.
+		 */
+		if (!old_fltr->acl_fltr && !input &&
+		    !hw->fdir_fltr_cnt[old_fltr->flow_type])
 			ice_fdir_do_rem_flow(pf, old_fltr->flow_type);
+		else if (old_fltr->acl_fltr && !input &&
+			 !hw->acl_fltr_cnt[old_fltr->flow_type])
+			ice_fdir_rem_flow(hw, ICE_BLK_ACL, old_fltr->flow_type);
 		list_del(&old_fltr->fltr_node);
-		devm_kfree(ice_hw_to_dev(hw), old_fltr);
+		devm_kfree(ice_pf_to_dev(pf), old_fltr);
 	}
 	if (!input)
 		return err;
diff --git a/drivers/net/ethernet/intel/ice/ice_flow.c b/drivers/net/ethernet/intel/ice/ice_flow.c
index 7ea94a627c5d..bff0ca02f8c6 100644
--- a/drivers/net/ethernet/intel/ice/ice_flow.c
+++ b/drivers/net/ethernet/intel/ice/ice_flow.c
@@ -1002,6 +1002,16 @@ ice_flow_rem_entry_sync(struct ice_hw *hw, enum ice_block blk,
 		return ICE_ERR_BAD_PTR;
 
 	if (blk == ICE_BLK_ACL) {
+		enum ice_status status;
+
+		if (!entry->prof)
+			return ICE_ERR_BAD_PTR;
+
+		status = ice_acl_rem_entry(hw, entry->prof->cfg.scen,
+					   entry->scen_entry_idx);
+		if (status)
+			return status;
+
 		/* Checks if we need to release an ACL counter. */
 		if (entry->acts_cnt && entry->acts)
 			ice_flow_acl_free_act_cntr(hw, entry->acts,
@@ -1126,10 +1136,36 @@ ice_flow_rem_prof_sync(struct ice_hw *hw, enum ice_block blk,
 	}
 
 	if (blk == ICE_BLK_ACL) {
+		struct ice_aqc_acl_profile_ranges query_rng_buf;
+		struct ice_aqc_acl_prof_generic_frmt buf;
+		u8 prof_id = 0;
+
 		/* Disassociate the scenario from the profile for the PF */
 		status = ice_flow_acl_disassoc_scen(hw, prof);
 		if (status)
 			return status;
+
+		/* Clear the range-checker if the profile ID is no longer
+		 * used by any PF
+		 */
+		status = ice_flow_acl_is_prof_in_use(hw, prof, &buf);
+		if (status && status != ICE_ERR_IN_USE) {
+			return status;
+		} else if (!status) {
+			/* Clear the range-checker value for profile ID */
+			memset(&query_rng_buf, 0,
+			       sizeof(struct ice_aqc_acl_profile_ranges));
+
+			status = ice_flow_get_hw_prof(hw, blk, prof->id,
+						      &prof_id);
+			if (status)
+				return status;
+
+			status = ice_prog_acl_prof_ranges(hw, prof_id,
+							  &query_rng_buf, NULL);
+			if (status)
+				return status;
+		}
 	}
 
 	/* Remove all hardware profiles associated with this flow profile */
@@ -1366,6 +1402,44 @@ ice_flow_rem_prof(struct ice_hw *hw, enum ice_block blk, u64 prof_id)
 	return status;
 }
 
+/**
+ * ice_flow_find_entry - look for a flow entry using its unique ID
+ * @hw: pointer to the HW struct
+ * @blk: classification stage
+ * @entry_id: unique ID to identify this flow entry
+ *
+ * This function looks for the flow entry with the specified unique ID in all
+ * flow profiles of the specified classification stage. If the entry is found,
+ * it returns the handle to the flow entry. Otherwise, it returns
+ * ICE_FLOW_ENTRY_HANDLE_INVAL.
+ */
+u64 ice_flow_find_entry(struct ice_hw *hw, enum ice_block blk, u64 entry_id)
+{
+	struct ice_flow_entry *found = NULL;
+	struct ice_flow_prof *p;
+
+	mutex_lock(&hw->fl_profs_locks[blk]);
+
+	list_for_each_entry(p, &hw->fl_profs[blk], l_entry) {
+		struct ice_flow_entry *e;
+
+		mutex_lock(&p->entries_lock);
+		list_for_each_entry(e, &p->entries, l_entry)
+			if (e->id == entry_id) {
+				found = e;
+				break;
+			}
+		mutex_unlock(&p->entries_lock);
+
+		if (found)
+			break;
+	}
+
+	mutex_unlock(&hw->fl_profs_locks[blk]);
+
+	return found ? ICE_FLOW_ENTRY_HNDL(found) : ICE_FLOW_ENTRY_HANDLE_INVAL;
+}
+
 /**
  * ice_flow_acl_check_actions - Checks the ACL rule's actions
  * @hw: pointer to the hardware structure
@@ -1701,6 +1775,333 @@ ice_flow_acl_frmt_entry(struct ice_hw *hw, struct ice_flow_prof *prof,
 
 	return status;
 }
+
+/**
+ * ice_flow_acl_find_scen_entry_cond - Find an ACL scenario entry that matches
+ *				       the compared data.
+ * @prof: pointer to flow profile
+ * @e: pointer to the comparing flow entry
+ * @do_chg_action: decide if we want to change the ACL action
+ * @do_add_entry: decide if we want to add the new ACL entry
+ * @do_rem_entry: decide if we want to remove the current ACL entry
+ *
+ * Find an ACL scenario entry that matches the compared data. At the same
+ * time, this function also figures out:
+ * a/ If we want to change the ACL action
+ * b/ If we want to add the new ACL entry
+ * c/ If we want to remove the current ACL entry
+ */
+static struct ice_flow_entry *
+ice_flow_acl_find_scen_entry_cond(struct ice_flow_prof *prof,
+				  struct ice_flow_entry *e, bool *do_chg_action,
+				  bool *do_add_entry, bool *do_rem_entry)
+{
+	struct ice_flow_entry *p, *return_entry = NULL;
+	u8 i, j;
+
+	/* Check if:
+	 * a/ There exists an entry with same matching data, but different
+	 *    priority, then we remove this existing ACL entry. Then, we
+	 *    will add the new entry to the ACL scenario.
+	 * b/ There exists an entry with same matching data, priority, and
+	 *    result action, then we do nothing.
+	 * c/ There exists an entry with same matching data, priority, but
+	 *    different action, then we only change the entry's action.
+	 * d/ Else, we add this new entry to the ACL scenario.
+	 */
+	*do_chg_action = false;
+	*do_add_entry = true;
+	*do_rem_entry = false;
+	list_for_each_entry(p, &prof->entries, l_entry) {
+		if (memcmp(p->entry, e->entry, p->entry_sz))
+			continue;
+
+		/* From this point, we have the same matching_data. */
+		*do_add_entry = false;
+		return_entry = p;
+
+		if (p->priority != e->priority) {
+			/* matching data && !priority */
+			*do_add_entry = true;
+			*do_rem_entry = true;
+			break;
+		}
+
+		/* From this point, we will have matching_data && priority */
+		if (p->acts_cnt != e->acts_cnt)
+			*do_chg_action = true;
+		for (i = 0; i < p->acts_cnt; i++) {
+			bool found_not_match = false;
+
+			for (j = 0; j < e->acts_cnt; j++)
+				if (memcmp(&p->acts[i], &e->acts[j],
+					   sizeof(struct ice_flow_action))) {
+					found_not_match = true;
+					break;
+				}
+
+			if (found_not_match) {
+				*do_chg_action = true;
+				break;
+			}
+		}
+
+		/* (do_chg_action = true) means :
+		 *    matching_data && priority && !result_action
+		 * (do_chg_action = false) means :
+		 *    matching_data && priority && result_action
+		 */
+		break;
+	}
+
+	return return_entry;
+}
+
+/**
+ * ice_flow_acl_convert_to_acl_prio - Convert to ACL priority
+ * @p: flow priority
+ */
+static enum ice_acl_entry_prio
+ice_flow_acl_convert_to_acl_prio(enum ice_flow_priority p)
+{
+	enum ice_acl_entry_prio acl_prio;
+
+	switch (p) {
+	case ICE_FLOW_PRIO_LOW:
+		acl_prio = ICE_ACL_PRIO_LOW;
+		break;
+	case ICE_FLOW_PRIO_NORMAL:
+		acl_prio = ICE_ACL_PRIO_NORMAL;
+		break;
+	case ICE_FLOW_PRIO_HIGH:
+		acl_prio = ICE_ACL_PRIO_HIGH;
+		break;
+	default:
+		acl_prio = ICE_ACL_PRIO_NORMAL;
+		break;
+	}
+
+	return acl_prio;
+}
+
+/**
+ * ice_flow_acl_union_rng_chk - Perform union operation between two
+ *                              range checker buffers
+ * @dst_buf: pointer to destination range checker buffer
+ * @src_buf: pointer to source range checker buffer
+ *
+ * Compute the union of the dst_buf and src_buf range checker buffers,
+ * saving the result back to dst_buf
+ */
+static enum ice_status
+ice_flow_acl_union_rng_chk(struct ice_aqc_acl_profile_ranges *dst_buf,
+			   struct ice_aqc_acl_profile_ranges *src_buf)
+{
+	u8 i, j;
+
+	if (!dst_buf || !src_buf)
+		return ICE_ERR_BAD_PTR;
+
+	for (i = 0; i < ICE_AQC_ACL_PROF_RANGES_NUM_CFG; i++) {
+		struct ice_acl_rng_data *cfg_data = NULL, *in_data;
+		bool will_populate = false;
+
+		in_data = &src_buf->checker_cfg[i];
+
+		if (!in_data->mask)
+			break;
+
+		for (j = 0; j < ICE_AQC_ACL_PROF_RANGES_NUM_CFG; j++) {
+			cfg_data = &dst_buf->checker_cfg[j];
+
+			if (!cfg_data->mask ||
+			    !memcmp(cfg_data, in_data,
+				    sizeof(struct ice_acl_rng_data))) {
+				will_populate = true;
+				break;
+			}
+		}
+
+		if (will_populate) {
+			memcpy(cfg_data, in_data,
+			       sizeof(struct ice_acl_rng_data));
+		} else {
+			/* No available slot left to program range checker */
+			return ICE_ERR_MAX_LIMIT;
+		}
+	}
+
+	return 0;
+}
+
+/**
+ * ice_flow_acl_add_scen_entry_sync - Add entry to ACL scenario sync
+ * @hw: pointer to the hardware structure
+ * @prof: pointer to flow profile
+ * @entry: double pointer to the flow entry
+ *
+ * Look at the entries currently added to the corresponding ACL scenario,
+ * then perform matching logic to decide whether to add, modify, or do
+ * nothing with this new entry.
+ */
+static enum ice_status
+ice_flow_acl_add_scen_entry_sync(struct ice_hw *hw, struct ice_flow_prof *prof,
+				 struct ice_flow_entry **entry)
+{
+	bool do_add_entry, do_rem_entry, do_chg_action, do_chg_rng_chk;
+	struct ice_aqc_acl_profile_ranges query_rng_buf, cfg_rng_buf;
+	struct ice_acl_act_entry *acts = NULL;
+	struct ice_flow_entry *exist;
+	enum ice_status status = 0;
+	struct ice_flow_entry *e;
+	u8 i;
+
+	if (!entry || !(*entry) || !prof)
+		return ICE_ERR_BAD_PTR;
+
+	e = *entry;
+
+	do_chg_rng_chk = false;
+	if (e->range_buf) {
+		u8 prof_id = 0;
+
+		status = ice_flow_get_hw_prof(hw, ICE_BLK_ACL, prof->id,
+					      &prof_id);
+		if (status)
+			return status;
+
+		/* Query the current range-checker value in FW */
+		status = ice_query_acl_prof_ranges(hw, prof_id, &query_rng_buf,
+						   NULL);
+		if (status)
+			return status;
+		memcpy(&cfg_rng_buf, &query_rng_buf,
+		       sizeof(struct ice_aqc_acl_profile_ranges));
+
+		/* Generate the new range-checker value */
+		status = ice_flow_acl_union_rng_chk(&cfg_rng_buf, e->range_buf);
+		if (status)
+			return status;
+
+		/* Reconfigure the range check if the buffer is changed. */
+		do_chg_rng_chk = false;
+		if (memcmp(&query_rng_buf, &cfg_rng_buf,
+			   sizeof(struct ice_aqc_acl_profile_ranges))) {
+			status = ice_prog_acl_prof_ranges(hw, prof_id,
+							  &cfg_rng_buf, NULL);
+			if (status)
+				return status;
+
+			do_chg_rng_chk = true;
+		}
+	}
+
+	/* Figure out if we want to change the ACL action and/or
+	 * add the new ACL entry and/or remove the current ACL entry
+	 */
+	exist = ice_flow_acl_find_scen_entry_cond(prof, e, &do_chg_action,
+						  &do_add_entry, &do_rem_entry);
+
+	if (do_rem_entry) {
+		status = ice_flow_rem_entry_sync(hw, ICE_BLK_ACL, exist);
+		if (status)
+			return status;
+	}
+
+	/* Prepare the result action buffer */
+	acts = kcalloc(e->entry_sz, sizeof(struct ice_acl_act_entry),
+		       GFP_KERNEL);
+	if (!acts)
+		return ICE_ERR_NO_MEMORY;
+
+	for (i = 0; i < e->acts_cnt; i++)
+		memcpy(&acts[i], &e->acts[i].data.acl_act,
+		       sizeof(struct ice_acl_act_entry));
+
+	if (do_add_entry) {
+		enum ice_acl_entry_prio prio;
+		u8 *keys, *inverts;
+		u16 entry_idx;
+
+		keys = (u8 *)e->entry;
+		inverts = keys + (e->entry_sz / 2);
+		prio = ice_flow_acl_convert_to_acl_prio(e->priority);
+
+		status = ice_acl_add_entry(hw, prof->cfg.scen, prio, keys,
+					   inverts, acts, e->acts_cnt,
+					   &entry_idx);
+		if (status)
+			goto out;
+
+		e->scen_entry_idx = entry_idx;
+		list_add(&e->l_entry, &prof->entries);
+	} else {
+		if (do_chg_action) {
+			/* For the action memory info, update the SW's copy of
+			 * the existing entry with e's action memory info
+			 */
+			devm_kfree(ice_hw_to_dev(hw), exist->acts);
+			exist->acts_cnt = e->acts_cnt;
+			exist->acts = devm_kcalloc(ice_hw_to_dev(hw),
+						   exist->acts_cnt,
+						   sizeof(struct ice_flow_action),
+						   GFP_KERNEL);
+			if (!exist->acts) {
+				status = ICE_ERR_NO_MEMORY;
+				goto out;
+			}
+
+			memcpy(exist->acts, e->acts,
+			       sizeof(struct ice_flow_action) * e->acts_cnt);
+
+			status = ice_acl_prog_act(hw, prof->cfg.scen, acts,
+						  e->acts_cnt,
+						  exist->scen_entry_idx);
+			if (status)
+				goto out;
+		}
+
+		if (do_chg_rng_chk) {
+			/* In this case, we want to update the range checker
+			 * information of the existing entry
+			 */
+			status = ice_flow_acl_union_rng_chk(exist->range_buf,
+							    e->range_buf);
+			if (status)
+				goto out;
+		}
+
+		/* As we don't add the new entry to our SW DB, deallocate its
+		 * memory and return the existing entry to the caller
+		 */
+		ice_dealloc_flow_entry(hw, e);
+		*entry = exist;
+	}
+out:
+	kfree(acts);
+
+	return status;
+}
+
+/**
+ * ice_flow_acl_add_scen_entry - Add entry to ACL scenario
+ * @hw: pointer to the hardware structure
+ * @prof: pointer to flow profile
+ * @e: double pointer to the flow entry
+ */
+static enum ice_status
+ice_flow_acl_add_scen_entry(struct ice_hw *hw, struct ice_flow_prof *prof,
+			    struct ice_flow_entry **e)
+{
+	enum ice_status status;
+
+	mutex_lock(&prof->entries_lock);
+	status = ice_flow_acl_add_scen_entry_sync(hw, prof, e);
+	mutex_unlock(&prof->entries_lock);
+
+	return status;
+}
+
 /**
  * ice_flow_add_entry - Add a flow entry
  * @hw: pointer to the HW struct
@@ -1770,6 +2171,11 @@ ice_flow_add_entry(struct ice_hw *hw, enum ice_block blk, u64 prof_id,
 						 acts_cnt);
 		if (status)
 			goto out;
+
+		status = ice_flow_acl_add_scen_entry(hw, prof, &e);
+		if (status)
+			goto out;
+
 		break;
 	default:
 		status = ICE_ERR_NOT_IMPL;
diff --git a/drivers/net/ethernet/intel/ice/ice_flow.h b/drivers/net/ethernet/intel/ice/ice_flow.h
index ba3ceaf30b93..31c690051e05 100644
--- a/drivers/net/ethernet/intel/ice/ice_flow.h
+++ b/drivers/net/ethernet/intel/ice/ice_flow.h
@@ -198,6 +198,8 @@ struct ice_flow_entry {
 	enum ice_flow_priority priority;
 	u16 vsi_handle;
 	u16 entry_sz;
+	/* Entry index in the ACL's scenario */
+	u16 scen_entry_idx;
 #define ICE_FLOW_ACL_MAX_NUM_ACT	2
 	u8 acts_cnt;
 };
@@ -260,6 +262,7 @@ ice_flow_add_prof(struct ice_hw *hw, enum ice_block blk, enum ice_flow_dir dir,
 		  struct ice_flow_prof **prof);
 enum ice_status
 ice_flow_rem_prof(struct ice_hw *hw, enum ice_block blk, u64 prof_id);
+u64 ice_flow_find_entry(struct ice_hw *hw, enum ice_block blk, u64 entry_id);
 enum ice_status
 ice_flow_add_entry(struct ice_hw *hw, enum ice_block blk, u64 prof_id,
 		   u64 entry_id, u16 vsi, enum ice_flow_priority prio,
diff --git a/drivers/net/ethernet/intel/ice/ice_lib.c b/drivers/net/ethernet/intel/ice/ice_lib.c
index 3df67486d42d..df38fa8a0c7c 100644
--- a/drivers/net/ethernet/intel/ice/ice_lib.c
+++ b/drivers/net/ethernet/intel/ice/ice_lib.c
@@ -855,7 +855,7 @@ static void ice_set_fd_vsi_ctx(struct ice_vsi_ctx *ctxt, struct ice_vsi *vsi)
 	if (vsi->type != ICE_VSI_PF && vsi->type != ICE_VSI_CTRL)
 		return;
 
-	val = ICE_AQ_VSI_PROP_FLOW_DIR_VALID;
+	val = ICE_AQ_VSI_PROP_FLOW_DIR_VALID | ICE_AQ_VSI_PROP_ACL_VALID;
 	ctxt->info.valid_sections |= cpu_to_le16(val);
 	dflt_q = 0;
 	dflt_q_group = 0;
@@ -885,6 +885,14 @@ static void ice_set_fd_vsi_ctx(struct ice_vsi_ctx *ctxt, struct ice_vsi *vsi)
 	val |= ((dflt_q_prio << ICE_AQ_VSI_FD_DEF_PRIORITY_S) &
 		ICE_AQ_VSI_FD_DEF_PRIORITY_M);
 	ctxt->info.fd_report_opt = cpu_to_le16(val);
+
+#define ICE_ACL_RX_PROF_MISS_CNTR ((2 << ICE_AQ_VSI_ACL_DEF_RX_PROF_S) & \
+				   ICE_AQ_VSI_ACL_DEF_RX_PROF_M)
+#define ICE_ACL_RX_TBL_MISS_CNTR ((3 << ICE_AQ_VSI_ACL_DEF_RX_TABLE_S) & \
+				  ICE_AQ_VSI_ACL_DEF_RX_TABLE_M)
+
+	val = ICE_ACL_RX_PROF_MISS_CNTR | ICE_ACL_RX_TBL_MISS_CNTR;
+	ctxt->info.acl_def_act = cpu_to_le16(val);
 }
 
 /**
-- 
2.26.2


* [net-next v3 08/15] ice: don't always return an error for Get PHY Abilities AQ command
  2020-11-13 21:44 [net-next v3 00/15][pull request] 100GbE Intel Wired LAN Driver Updates 2020-11-13 Tony Nguyen
                   ` (6 preceding siblings ...)
  2020-11-13 21:44 ` [net-next v3 07/15] ice: program " Tony Nguyen
@ 2020-11-13 21:44 ` Tony Nguyen
  2020-11-14  1:25   ` Alexander Duyck
  2020-11-13 21:44 ` [net-next v3 09/15] ice: Enable Support for FW Override (E82X) Tony Nguyen
                   ` (6 subsequent siblings)
  14 siblings, 1 reply; 28+ messages in thread
From: Tony Nguyen @ 2020-11-13 21:44 UTC (permalink / raw)
  To: davem, kuba
  Cc: Paul M Stillwell Jr, netdev, sassmann, anthony.l.nguyen, Aaron Brown

From: Paul M Stillwell Jr <paul.m.stillwell.jr@intel.com>

There are times when the driver shouldn't return an error when the Get
PHY abilities AQ command (0x0600) returns an error. Instead, the driver
should log that the error occurred and continue on. This allows the
driver to load even though the AQ command failed. The user can then
later determine the reason for the failure and correct it.

Signed-off-by: Paul M Stillwell Jr <paul.m.stillwell.jr@intel.com>
Tested-by: Aaron Brown <aaron.f.brown@intel.com>
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
---
 drivers/net/ethernet/intel/ice/ice_common.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/net/ethernet/intel/ice/ice_common.c b/drivers/net/ethernet/intel/ice/ice_common.c
index 7db5fd977367..3c600808d0da 100644
--- a/drivers/net/ethernet/intel/ice/ice_common.c
+++ b/drivers/net/ethernet/intel/ice/ice_common.c
@@ -925,7 +925,7 @@ enum ice_status ice_init_hw(struct ice_hw *hw)
 				     ICE_AQC_REPORT_TOPO_CAP, pcaps, NULL);
 	devm_kfree(ice_hw_to_dev(hw), pcaps);
 	if (status)
-		goto err_unroll_sched;
+		ice_debug(hw, ICE_DBG_PHY, "Get PHY capabilities failed, continuing anyway\n");
 
 	/* Initialize port_info struct with link information */
 	status = ice_aq_get_link_info(hw->port_info, false, NULL, NULL);
-- 
2.26.2


* [net-next v3 09/15] ice: Enable Support for FW Override (E82X)
  2020-11-13 21:44 [net-next v3 00/15][pull request] 100GbE Intel Wired LAN Driver Updates 2020-11-13 Tony Nguyen
                   ` (7 preceding siblings ...)
  2020-11-13 21:44 ` [net-next v3 08/15] ice: don't always return an error for Get PHY Abilities AQ command Tony Nguyen
@ 2020-11-13 21:44 ` Tony Nguyen
  2020-11-13 21:44 ` [net-next v3 10/15] ice: Remove gate to OROM init Tony Nguyen
                   ` (5 subsequent siblings)
  14 siblings, 0 replies; 28+ messages in thread
From: Tony Nguyen @ 2020-11-13 21:44 UTC (permalink / raw)
  To: davem, kuba; +Cc: Jeb Cramer, netdev, sassmann, anthony.l.nguyen, Aaron Brown

From: Jeb Cramer <jeb.j.cramer@intel.com>

The driver is able to override the firmware when it comes to supporting
a more lenient link mode.  This feature was limited to E810 devices.  It
is now extended to E82X devices.
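
A hedged sketch of the consumer side (the surrounding code and
apply_ldo_settings() are illustrative, not from this patch); the
FW API version check is now the only gate:

	struct ice_link_default_override_tlv ldo;

	if (ice_fw_supports_link_override(hw)) {
		/* Honored on E82X as well as E810 after this change */
		if (!ice_get_link_default_override(&ldo, pi))
			apply_ldo_settings(&ldo);	/* placeholder */
	}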

Signed-off-by: Jeb Cramer <jeb.j.cramer@intel.com>
Tested-by: Aaron Brown <aaron.f.brown@intel.com>
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
---
 drivers/net/ethernet/intel/ice/ice_common.c | 4 ----
 1 file changed, 4 deletions(-)

diff --git a/drivers/net/ethernet/intel/ice/ice_common.c b/drivers/net/ethernet/intel/ice/ice_common.c
index 3c600808d0da..59459b6ccb9c 100644
--- a/drivers/net/ethernet/intel/ice/ice_common.c
+++ b/drivers/net/ethernet/intel/ice/ice_common.c
@@ -4261,10 +4261,6 @@ ice_sched_query_elem(struct ice_hw *hw, u32 node_teid,
  */
 bool ice_fw_supports_link_override(struct ice_hw *hw)
 {
-	/* Currently, only supported for E810 devices */
-	if (hw->mac_type != ICE_MAC_E810)
-		return false;
-
 	if (hw->api_maj_ver == ICE_FW_API_LINK_OVERRIDE_MAJ) {
 		if (hw->api_min_ver > ICE_FW_API_LINK_OVERRIDE_MIN)
 			return true;
-- 
2.26.2


* [net-next v3 10/15] ice: Remove gate to OROM init
  2020-11-13 21:44 [net-next v3 00/15][pull request] 100GbE Intel Wired LAN Driver Updates 2020-11-13 Tony Nguyen
                   ` (8 preceding siblings ...)
  2020-11-13 21:44 ` [net-next v3 09/15] ice: Enable Support for FW Override (E82X) Tony Nguyen
@ 2020-11-13 21:44 ` Tony Nguyen
  2020-11-13 21:44 ` [net-next v3 11/15] ice: Remove vlan_ena from vsi structure Tony Nguyen
                   ` (4 subsequent siblings)
  14 siblings, 0 replies; 28+ messages in thread
From: Tony Nguyen @ 2020-11-13 21:44 UTC (permalink / raw)
  To: davem, kuba; +Cc: Jeb Cramer, netdev, sassmann, anthony.l.nguyen, Aaron Brown

From: Jeb Cramer <jeb.j.cramer@intel.com>

Remove the gate that prevents the OROM and netlist info from being
populated.  The NVM now has the appropriate section for software to
reference the versioning info.

Signed-off-by: Jeb Cramer <jeb.j.cramer@intel.com>
Tested-by: Aaron Brown <aaron.f.brown@intel.com>
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
---
 drivers/net/ethernet/intel/ice/ice_nvm.c | 26 ------------------------
 1 file changed, 26 deletions(-)

diff --git a/drivers/net/ethernet/intel/ice/ice_nvm.c b/drivers/net/ethernet/intel/ice/ice_nvm.c
index 5903a36763de..cd442a5415d1 100644
--- a/drivers/net/ethernet/intel/ice/ice_nvm.c
+++ b/drivers/net/ethernet/intel/ice/ice_nvm.c
@@ -634,32 +634,6 @@ enum ice_status ice_init_nvm(struct ice_hw *hw)
 		return status;
 	}
 
-	switch (hw->device_id) {
-	/* the following devices do not have boot_cfg_tlv yet */
-	case ICE_DEV_ID_E823C_BACKPLANE:
-	case ICE_DEV_ID_E823C_QSFP:
-	case ICE_DEV_ID_E823C_SFP:
-	case ICE_DEV_ID_E823C_10G_BASE_T:
-	case ICE_DEV_ID_E823C_SGMII:
-	case ICE_DEV_ID_E822C_BACKPLANE:
-	case ICE_DEV_ID_E822C_QSFP:
-	case ICE_DEV_ID_E822C_10G_BASE_T:
-	case ICE_DEV_ID_E822C_SGMII:
-	case ICE_DEV_ID_E822C_SFP:
-	case ICE_DEV_ID_E822L_BACKPLANE:
-	case ICE_DEV_ID_E822L_SFP:
-	case ICE_DEV_ID_E822L_10G_BASE_T:
-	case ICE_DEV_ID_E822L_SGMII:
-	case ICE_DEV_ID_E823L_BACKPLANE:
-	case ICE_DEV_ID_E823L_SFP:
-	case ICE_DEV_ID_E823L_10G_BASE_T:
-	case ICE_DEV_ID_E823L_1GBE:
-	case ICE_DEV_ID_E823L_QSFP:
-		return status;
-	default:
-		break;
-	}
-
 	status = ice_get_orom_ver_info(hw);
 	if (status) {
 		ice_debug(hw, ICE_DBG_INIT, "Failed to read Option ROM info.\n");
-- 
2.26.2


* [net-next v3 11/15] ice: Remove vlan_ena from vsi structure
  2020-11-13 21:44 [net-next v3 00/15][pull request] 100GbE Intel Wired LAN Driver Updates 2020-11-13 Tony Nguyen
                   ` (9 preceding siblings ...)
  2020-11-13 21:44 ` [net-next v3 10/15] ice: Remove gate to OROM init Tony Nguyen
@ 2020-11-13 21:44 ` Tony Nguyen
  2020-11-13 21:44 ` [net-next v3 12/15] ice: cleanup misleading comment Tony Nguyen
                   ` (3 subsequent siblings)
  14 siblings, 0 replies; 28+ messages in thread
From: Tony Nguyen @ 2020-11-13 21:44 UTC (permalink / raw)
  To: davem, kuba; +Cc: Nick Nunley, netdev, sassmann, anthony.l.nguyen, Aaron Brown

From: Nick Nunley <nicholas.d.nunley@intel.com>

vlan_ena was introduced to track whether VLAN filters are enabled on
the device, but
1) checking for num_vlan > 1 already gives us this information, and is
currently used in this way throughout the code
2) the logic for vlan_ena is broken when multiple VLANs are active

Just remove vlan_ena and use num_vlan instead.
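
The replacement test, as used throughout the diff below (the strict
"> 1" presumably accounts for the default VLAN-0 filter that every VSI
carries, so a count above one means a real VLAN filter is installed):

	if (vsi->num_vlan > 1)
		promisc_m = ICE_MCAST_VLAN_PROMISC_BITS;
	else
		promisc_m = ICE_MCAST_PROMISC_BITS;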

Signed-off-by: Nick Nunley <nicholas.d.nunley@intel.com>
Tested-by: Aaron Brown <aaron.f.brown@intel.com>
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
---
 drivers/net/ethernet/intel/ice/ice.h      |  1 -
 drivers/net/ethernet/intel/ice/ice_main.c | 11 ++++-------
 2 files changed, 4 insertions(+), 8 deletions(-)

diff --git a/drivers/net/ethernet/intel/ice/ice.h b/drivers/net/ethernet/intel/ice/ice.h
index fa24826c5af7..f3aff465b483 100644
--- a/drivers/net/ethernet/intel/ice/ice.h
+++ b/drivers/net/ethernet/intel/ice/ice.h
@@ -308,7 +308,6 @@ struct ice_vsi {
 	u8 irqs_ready:1;
 	u8 current_isup:1;		 /* Sync 'link up' logging */
 	u8 stat_offsets_loaded:1;
-	u8 vlan_ena:1;
 	u16 num_vlan;
 
 	/* queue information */
diff --git a/drivers/net/ethernet/intel/ice/ice_main.c b/drivers/net/ethernet/intel/ice/ice_main.c
index 67bab35d590b..166d177bf91a 100644
--- a/drivers/net/ethernet/intel/ice/ice_main.c
+++ b/drivers/net/ethernet/intel/ice/ice_main.c
@@ -224,7 +224,7 @@ static int ice_cfg_promisc(struct ice_vsi *vsi, u8 promisc_m, bool set_promisc)
 	if (vsi->type != ICE_VSI_PF)
 		return 0;
 
-	if (vsi->vlan_ena) {
+	if (vsi->num_vlan > 1) {
 		status = ice_set_vlan_vsi_promisc(hw, vsi->idx, promisc_m,
 						  set_promisc);
 	} else {
@@ -326,7 +326,7 @@ static int ice_vsi_sync_fltr(struct ice_vsi *vsi)
 	/* check for changes in promiscuous modes */
 	if (changed_flags & IFF_ALLMULTI) {
 		if (vsi->current_netdev_flags & IFF_ALLMULTI) {
-			if (vsi->vlan_ena)
+			if (vsi->num_vlan > 1)
 				promisc_m = ICE_MCAST_VLAN_PROMISC_BITS;
 			else
 				promisc_m = ICE_MCAST_PROMISC_BITS;
@@ -340,7 +340,7 @@ static int ice_vsi_sync_fltr(struct ice_vsi *vsi)
 			}
 		} else {
 			/* !(vsi->current_netdev_flags & IFF_ALLMULTI) */
-			if (vsi->vlan_ena)
+			if (vsi->num_vlan > 1)
 				promisc_m = ICE_MCAST_VLAN_PROMISC_BITS;
 			else
 				promisc_m = ICE_MCAST_PROMISC_BITS;
@@ -3116,10 +3116,8 @@ ice_vlan_rx_add_vid(struct net_device *netdev, __always_unused __be16 proto,
 	 * packets aren't pruned by the device's internal switch on Rx
 	 */
 	ret = ice_vsi_add_vlan(vsi, vid, ICE_FWD_TO_VSI);
-	if (!ret) {
-		vsi->vlan_ena = true;
+	if (!ret)
 		set_bit(ICE_VSI_FLAG_VLAN_FLTR_CHANGED, vsi->flags);
-	}
 
 	return ret;
 }
@@ -3158,7 +3156,6 @@ ice_vlan_rx_kill_vid(struct net_device *netdev, __always_unused __be16 proto,
 	if (vsi->num_vlan == 1 && ice_vsi_is_vlan_pruning_ena(vsi))
 		ret = ice_cfg_vlan_pruning(vsi, false, false);
 
-	vsi->vlan_ena = false;
 	set_bit(ICE_VSI_FLAG_VLAN_FLTR_CHANGED, vsi->flags);
 	return ret;
 }
-- 
2.26.2


* [net-next v3 12/15] ice: cleanup misleading comment
  2020-11-13 21:44 [net-next v3 00/15][pull request] 100GbE Intel Wired LAN Driver Updates 2020-11-13 Tony Nguyen
                   ` (10 preceding siblings ...)
  2020-11-13 21:44 ` [net-next v3 11/15] ice: Remove vlan_ena from vsi structure Tony Nguyen
@ 2020-11-13 21:44 ` Tony Nguyen
  2020-11-13 21:44 ` [net-next v3 13/15] ice: silence static analysis warning Tony Nguyen
                   ` (2 subsequent siblings)
  14 siblings, 0 replies; 28+ messages in thread
From: Tony Nguyen @ 2020-11-13 21:44 UTC (permalink / raw)
  To: davem, kuba; +Cc: Bruce Allan, netdev, sassmann, anthony.l.nguyen, Aaron Brown

From: Bruce Allan <bruce.w.allan@intel.com>

The maximum Admin Queue buffer size and NVM shadow RAM sector size are both
4 Kilobytes. Some comments refer to those as 4Kb which can be confused with
4 Kilobits. Update the comments to use the commonly used KB symbol instead.
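
To make the shared 4KB limit concrete, a worked example (values assumed;
ICE_AQ_MAX_BUF_LEN is the 4096-byte AQ buffer limit these comments
describe):

	/* Flat NVM read of 6000 bytes starting at byte offset 3000:
	 *
	 *   chunk 1: sector_offset = 3000 % 4096 = 3000
	 *            read_size = min(4096 - 3000, 6000 - 0)    = 1096
	 *   chunk 2: sector_offset = 4096 % 4096 = 0
	 *            read_size = min(4096 - 0,    6000 - 1096) = 4096
	 *   chunk 3: read_size = 6000 - 1096 - 4096            = 808
	 *
	 * No single AdminQ read exceeds 4KB, and no read crosses a
	 * Shadow RAM sector boundary.
	 */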

Signed-off-by: Bruce Allan <bruce.w.allan@intel.com>
Tested-by: Aaron Brown <aaron.f.brown@intel.com>
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
---
 drivers/net/ethernet/intel/ice/ice_nvm.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/drivers/net/ethernet/intel/ice/ice_nvm.c b/drivers/net/ethernet/intel/ice/ice_nvm.c
index cd442a5415d1..dc0d82c844ad 100644
--- a/drivers/net/ethernet/intel/ice/ice_nvm.c
+++ b/drivers/net/ethernet/intel/ice/ice_nvm.c
@@ -55,7 +55,7 @@ ice_aq_read_nvm(struct ice_hw *hw, u16 module_typeid, u32 offset, u16 length,
  *
  * Reads a portion of the NVM, as a flat memory space. This function correctly
  * breaks read requests across Shadow RAM sectors and ensures that no single
- * read request exceeds the maximum 4Kb read for a single AdminQ command.
+ * read request exceeds the maximum 4KB read for a single AdminQ command.
  *
  * Returns a status code on failure. Note that the data pointer may be
  * partially updated if some reads succeed before a failure.
@@ -81,10 +81,10 @@ ice_read_flat_nvm(struct ice_hw *hw, u32 offset, u32 *length, u8 *data,
 	do {
 		u32 read_size, sector_offset;
 
-		/* ice_aq_read_nvm cannot read more than 4Kb at a time.
+		/* ice_aq_read_nvm cannot read more than 4KB at a time.
 		 * Additionally, a read from the Shadow RAM may not cross over
 		 * a sector boundary. Conveniently, the sector size is also
-		 * 4Kb.
+		 * 4KB.
 		 */
 		sector_offset = offset % ICE_AQ_MAX_BUF_LEN;
 		read_size = min_t(u32, ICE_AQ_MAX_BUF_LEN - sector_offset,
-- 
2.26.2


* [net-next v3 13/15] ice: silence static analysis warning
  2020-11-13 21:44 [net-next v3 00/15][pull request] 100GbE Intel Wired LAN Driver Updates 2020-11-13 Tony Nguyen
                   ` (11 preceding siblings ...)
  2020-11-13 21:44 ` [net-next v3 12/15] ice: cleanup misleading comment Tony Nguyen
@ 2020-11-13 21:44 ` Tony Nguyen
  2020-11-13 21:44 ` [net-next v3 14/15] ice: join format strings to same line as ice_debug Tony Nguyen
  2020-11-13 21:44 ` [net-next v3 15/15] ice: Add space to unknown speed Tony Nguyen
  14 siblings, 0 replies; 28+ messages in thread
From: Tony Nguyen @ 2020-11-13 21:44 UTC (permalink / raw)
  To: davem, kuba; +Cc: Bruce Allan, netdev, sassmann, anthony.l.nguyen, Aaron Brown

From: Bruce Allan <bruce.w.allan@intel.com>

sparse warns about casts to/from restricted types, which is not
an actual problem; silence the warning.
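
For context, a sketch of the idiom being annotated (data_local's __le16
type and the le16_to_cpu() conversion are inferred from the word-read
helper this hunk touches):

	__le16 data_local;
	u32 bytes = sizeof(u16);
	enum ice_status status;

	/* The NVM word is read as raw bytes into an endian-annotated
	 * variable; sparse flags the cast between the restricted __le16
	 * and plain u8 *, and __force marks it as intentional.
	 */
	status = ice_read_flat_nvm(hw, offset * sizeof(u16), &bytes,
				   (__force u8 *)&data_local, true);
	if (!status)
		*data = le16_to_cpu(data_local);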

Signed-off-by: Bruce Allan <bruce.w.allan@intel.com>
Tested-by: Aaron Brown <aaron.f.brown@intel.com>
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
---
 drivers/net/ethernet/intel/ice/ice_nvm.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/net/ethernet/intel/ice/ice_nvm.c b/drivers/net/ethernet/intel/ice/ice_nvm.c
index dc0d82c844ad..0d1092cbc927 100644
--- a/drivers/net/ethernet/intel/ice/ice_nvm.c
+++ b/drivers/net/ethernet/intel/ice/ice_nvm.c
@@ -196,7 +196,7 @@ ice_read_sr_word_aq(struct ice_hw *hw, u16 offset, u16 *data)
 	 * Shadow RAM sector restrictions necessary when reading from the NVM.
 	 */
 	status = ice_read_flat_nvm(hw, offset * sizeof(u16), &bytes,
-				   (u8 *)&data_local, true);
+				   (__force u8 *)&data_local, true);
 	if (status)
 		return status;
 
-- 
2.26.2


* [net-next v3 14/15] ice: join format strings to same line as ice_debug
  2020-11-13 21:44 [net-next v3 00/15][pull request] 100GbE Intel Wired LAN Driver Updates 2020-11-13 Tony Nguyen
                   ` (12 preceding siblings ...)
  2020-11-13 21:44 ` [net-next v3 13/15] ice: silence static analysis warning Tony Nguyen
@ 2020-11-13 21:44 ` Tony Nguyen
  2020-11-13 21:44 ` [net-next v3 15/15] ice: Add space to unknown speed Tony Nguyen
  14 siblings, 0 replies; 28+ messages in thread
From: Tony Nguyen @ 2020-11-13 21:44 UTC (permalink / raw)
  To: davem, kuba; +Cc: Jacob Keller, netdev, sassmann, anthony.l.nguyen, Aaron Brown

From: Jacob Keller <jacob.e.keller@intel.com>

When printing messages with ice_debug, align the printed string to the
origin line of the message in order to ease debugging and tracking
messages back to their source.
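
The first hunk below, shown as a before/after pair (the grep benefit is
an inference from the commit message, not part of the patch):

	/* Before: the string wraps, so grepping for the message text
	 * does not land on the ice_debug call site
	 */
	ice_debug(hw, ICE_DBG_SCHED,
		  "Failed to get scheduler allocated resources\n");

	/* After: the string stays on the ice_debug line, even past
	 * 80 columns, so grep finds the call directly
	 */
	ice_debug(hw, ICE_DBG_SCHED, "Failed to get scheduler allocated resources\n");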

Signed-off-by: Jacob Keller <jacob.e.keller@intel.com>
Tested-by: Aaron Brown <aaron.f.brown@intel.com>
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
---
 drivers/net/ethernet/intel/ice/ice_common.c   | 102 ++++++------------
 drivers/net/ethernet/intel/ice/ice_controlq.c |  42 +++-----
 .../net/ethernet/intel/ice/ice_flex_pipe.c    |  24 ++---
 drivers/net/ethernet/intel/ice/ice_flow.c     |   9 +-
 drivers/net/ethernet/intel/ice/ice_nvm.c      |  27 ++---
 drivers/net/ethernet/intel/ice/ice_sched.c    |  21 ++--
 drivers/net/ethernet/intel/ice/ice_switch.c   |  15 +--
 7 files changed, 80 insertions(+), 160 deletions(-)

diff --git a/drivers/net/ethernet/intel/ice/ice_common.c b/drivers/net/ethernet/intel/ice/ice_common.c
index 59459b6ccb9c..625f54d1edcf 100644
--- a/drivers/net/ethernet/intel/ice/ice_common.c
+++ b/drivers/net/ethernet/intel/ice/ice_common.c
@@ -904,8 +904,7 @@ enum ice_status ice_init_hw(struct ice_hw *hw)
 	/* Query the allocated resources for Tx scheduler */
 	status = ice_sched_query_res_alloc(hw);
 	if (status) {
-		ice_debug(hw, ICE_DBG_SCHED,
-			  "Failed to get scheduler allocated resources\n");
+		ice_debug(hw, ICE_DBG_SCHED, "Failed to get scheduler allocated resources\n");
 		goto err_unroll_alloc;
 	}
 
@@ -1044,8 +1043,7 @@ enum ice_status ice_check_reset(struct ice_hw *hw)
 	}
 
 	if (cnt == grst_timeout) {
-		ice_debug(hw, ICE_DBG_INIT,
-			  "Global reset polling failed to complete.\n");
+		ice_debug(hw, ICE_DBG_INIT, "Global reset polling failed to complete.\n");
 		return ICE_ERR_RESET_FAILED;
 	}
 
@@ -1063,16 +1061,14 @@ enum ice_status ice_check_reset(struct ice_hw *hw)
 	for (cnt = 0; cnt < ICE_PF_RESET_WAIT_COUNT; cnt++) {
 		reg = rd32(hw, GLNVM_ULD) & uld_mask;
 		if (reg == uld_mask) {
-			ice_debug(hw, ICE_DBG_INIT,
-				  "Global reset processes done. %d\n", cnt);
+			ice_debug(hw, ICE_DBG_INIT, "Global reset processes done. %d\n", cnt);
 			break;
 		}
 		mdelay(10);
 	}
 
 	if (cnt == ICE_PF_RESET_WAIT_COUNT) {
-		ice_debug(hw, ICE_DBG_INIT,
-			  "Wait for Reset Done timed out. GLNVM_ULD = 0x%x\n",
+		ice_debug(hw, ICE_DBG_INIT, "Wait for Reset Done timed out. GLNVM_ULD = 0x%x\n",
 			  reg);
 		return ICE_ERR_RESET_FAILED;
 	}
@@ -1124,8 +1120,7 @@ static enum ice_status ice_pf_reset(struct ice_hw *hw)
 	}
 
 	if (cnt == ICE_PF_RESET_WAIT_COUNT) {
-		ice_debug(hw, ICE_DBG_INIT,
-			  "PF reset polling failed to complete.\n");
+		ice_debug(hw, ICE_DBG_INIT, "PF reset polling failed to complete.\n");
 		return ICE_ERR_RESET_FAILED;
 	}
 
@@ -1578,8 +1573,7 @@ ice_acquire_res(struct ice_hw *hw, enum ice_aq_res_ids res,
 		goto ice_acquire_res_exit;
 
 	if (status)
-		ice_debug(hw, ICE_DBG_RES,
-			  "resource %d acquire type %d failed.\n", res, access);
+		ice_debug(hw, ICE_DBG_RES, "resource %d acquire type %d failed.\n", res, access);
 
 	/* If necessary, poll until the current lock owner timeouts */
 	timeout = time_left;
@@ -1602,11 +1596,9 @@ ice_acquire_res(struct ice_hw *hw, enum ice_aq_res_ids res,
 ice_acquire_res_exit:
 	if (status == ICE_ERR_AQ_NO_WORK) {
 		if (access == ICE_RES_WRITE)
-			ice_debug(hw, ICE_DBG_RES,
-				  "resource indicates no work to do.\n");
+			ice_debug(hw, ICE_DBG_RES, "resource indicates no work to do.\n");
 		else
-			ice_debug(hw, ICE_DBG_RES,
-				  "Warning: ICE_ERR_AQ_NO_WORK not expected\n");
+			ice_debug(hw, ICE_DBG_RES, "Warning: ICE_ERR_AQ_NO_WORK not expected\n");
 	}
 	return status;
 }
@@ -1792,66 +1784,53 @@ ice_parse_common_caps(struct ice_hw *hw, struct ice_hw_common_caps *caps,
 	switch (cap) {
 	case ICE_AQC_CAPS_VALID_FUNCTIONS:
 		caps->valid_functions = number;
-		ice_debug(hw, ICE_DBG_INIT,
-			  "%s: valid_functions (bitmap) = %d\n", prefix,
+		ice_debug(hw, ICE_DBG_INIT, "%s: valid_functions (bitmap) = %d\n", prefix,
 			  caps->valid_functions);
 		break;
 	case ICE_AQC_CAPS_SRIOV:
 		caps->sr_iov_1_1 = (number == 1);
-		ice_debug(hw, ICE_DBG_INIT,
-			  "%s: sr_iov_1_1 = %d\n", prefix,
+		ice_debug(hw, ICE_DBG_INIT, "%s: sr_iov_1_1 = %d\n", prefix,
 			  caps->sr_iov_1_1);
 		break;
 	case ICE_AQC_CAPS_DCB:
 		caps->dcb = (number == 1);
 		caps->active_tc_bitmap = logical_id;
 		caps->maxtc = phys_id;
-		ice_debug(hw, ICE_DBG_INIT,
-			  "%s: dcb = %d\n", prefix, caps->dcb);
-		ice_debug(hw, ICE_DBG_INIT,
-			  "%s: active_tc_bitmap = %d\n", prefix,
+		ice_debug(hw, ICE_DBG_INIT, "%s: dcb = %d\n", prefix, caps->dcb);
+		ice_debug(hw, ICE_DBG_INIT, "%s: active_tc_bitmap = %d\n", prefix,
 			  caps->active_tc_bitmap);
-		ice_debug(hw, ICE_DBG_INIT,
-			  "%s: maxtc = %d\n", prefix, caps->maxtc);
+		ice_debug(hw, ICE_DBG_INIT, "%s: maxtc = %d\n", prefix, caps->maxtc);
 		break;
 	case ICE_AQC_CAPS_RSS:
 		caps->rss_table_size = number;
 		caps->rss_table_entry_width = logical_id;
-		ice_debug(hw, ICE_DBG_INIT,
-			  "%s: rss_table_size = %d\n", prefix,
+		ice_debug(hw, ICE_DBG_INIT, "%s: rss_table_size = %d\n", prefix,
 			  caps->rss_table_size);
-		ice_debug(hw, ICE_DBG_INIT,
-			  "%s: rss_table_entry_width = %d\n", prefix,
+		ice_debug(hw, ICE_DBG_INIT, "%s: rss_table_entry_width = %d\n", prefix,
 			  caps->rss_table_entry_width);
 		break;
 	case ICE_AQC_CAPS_RXQS:
 		caps->num_rxq = number;
 		caps->rxq_first_id = phys_id;
-		ice_debug(hw, ICE_DBG_INIT,
-			  "%s: num_rxq = %d\n", prefix,
+		ice_debug(hw, ICE_DBG_INIT, "%s: num_rxq = %d\n", prefix,
 			  caps->num_rxq);
-		ice_debug(hw, ICE_DBG_INIT,
-			  "%s: rxq_first_id = %d\n", prefix,
+		ice_debug(hw, ICE_DBG_INIT, "%s: rxq_first_id = %d\n", prefix,
 			  caps->rxq_first_id);
 		break;
 	case ICE_AQC_CAPS_TXQS:
 		caps->num_txq = number;
 		caps->txq_first_id = phys_id;
-		ice_debug(hw, ICE_DBG_INIT,
-			  "%s: num_txq = %d\n", prefix,
+		ice_debug(hw, ICE_DBG_INIT, "%s: num_txq = %d\n", prefix,
 			  caps->num_txq);
-		ice_debug(hw, ICE_DBG_INIT,
-			  "%s: txq_first_id = %d\n", prefix,
+		ice_debug(hw, ICE_DBG_INIT, "%s: txq_first_id = %d\n", prefix,
 			  caps->txq_first_id);
 		break;
 	case ICE_AQC_CAPS_MSIX:
 		caps->num_msix_vectors = number;
 		caps->msix_vector_first_id = phys_id;
-		ice_debug(hw, ICE_DBG_INIT,
-			  "%s: num_msix_vectors = %d\n", prefix,
+		ice_debug(hw, ICE_DBG_INIT, "%s: num_msix_vectors = %d\n", prefix,
 			  caps->num_msix_vectors);
-		ice_debug(hw, ICE_DBG_INIT,
-			  "%s: msix_vector_first_id = %d\n", prefix,
+		ice_debug(hw, ICE_DBG_INIT, "%s: msix_vector_first_id = %d\n", prefix,
 			  caps->msix_vector_first_id);
 		break;
 	case ICE_AQC_CAPS_PENDING_NVM_VER:
@@ -1904,8 +1883,7 @@ ice_recalc_port_limited_caps(struct ice_hw *hw, struct ice_hw_common_caps *caps)
 	if (hw->dev_caps.num_funcs > 4) {
 		/* Max 4 TCs per port */
 		caps->maxtc = 4;
-		ice_debug(hw, ICE_DBG_INIT,
-			  "reducing maxtc to %d (based on #ports)\n",
+		ice_debug(hw, ICE_DBG_INIT, "reducing maxtc to %d (based on #ports)\n",
 			  caps->maxtc);
 	}
 }
@@ -1973,11 +1951,9 @@ ice_parse_fdir_func_caps(struct ice_hw *hw, struct ice_hw_func_caps *func_p)
 		GLQF_FD_SIZE_FD_BSIZE_S;
 	func_p->fd_fltr_best_effort = val;
 
-	ice_debug(hw, ICE_DBG_INIT,
-		  "func caps: fd_fltr_guar = %d\n",
+	ice_debug(hw, ICE_DBG_INIT, "func caps: fd_fltr_guar = %d\n",
 		  func_p->fd_fltr_guar);
-	ice_debug(hw, ICE_DBG_INIT,
-		  "func caps: fd_fltr_best_effort = %d\n",
+	ice_debug(hw, ICE_DBG_INIT, "func caps: fd_fltr_best_effort = %d\n",
 		  func_p->fd_fltr_best_effort);
 }
 
@@ -2026,8 +2002,7 @@ ice_parse_func_caps(struct ice_hw *hw, struct ice_hw_func_caps *func_p,
 		default:
 			/* Don't list common capabilities as unknown */
 			if (!found)
-				ice_debug(hw, ICE_DBG_INIT,
-					  "func caps: unknown capability[%d]: 0x%x\n",
+				ice_debug(hw, ICE_DBG_INIT, "func caps: unknown capability[%d]: 0x%x\n",
 					  i, cap);
 			break;
 		}
@@ -2160,8 +2135,7 @@ ice_parse_dev_caps(struct ice_hw *hw, struct ice_hw_dev_caps *dev_p,
 		default:
 			/* Don't list common capabilities as unknown */
 			if (!found)
-				ice_debug(hw, ICE_DBG_INIT,
-					  "dev caps: unknown capability[%d]: 0x%x\n",
+				ice_debug(hw, ICE_DBG_INIT, "dev caps: unknown capability[%d]: 0x%x\n",
 					  i, cap);
 			break;
 		}
@@ -2618,8 +2592,7 @@ ice_aq_set_phy_cfg(struct ice_hw *hw, struct ice_port_info *pi,
 
 	/* Ensure that only valid bits of cfg->caps can be turned on. */
 	if (cfg->caps & ~ICE_AQ_PHY_ENA_VALID_MASK) {
-		ice_debug(hw, ICE_DBG_PHY,
-			  "Invalid bit is set in ice_aqc_set_phy_cfg_data->caps : 0x%x\n",
+		ice_debug(hw, ICE_DBG_PHY, "Invalid bit is set in ice_aqc_set_phy_cfg_data->caps : 0x%x\n",
 			  cfg->caps);
 
 		cfg->caps &= ICE_AQ_PHY_ENA_VALID_MASK;
@@ -3067,8 +3040,7 @@ enum ice_status ice_get_link_status(struct ice_port_info *pi, bool *link_up)
 		status = ice_update_link_info(pi);
 
 		if (status)
-			ice_debug(pi->hw, ICE_DBG_LINK,
-				  "get link status error, status = %d\n",
+			ice_debug(pi->hw, ICE_DBG_LINK, "get link status error, status = %d\n",
 				  status);
 	}
 
@@ -3793,8 +3765,7 @@ ice_set_ctx(struct ice_hw *hw, u8 *src_ctx, u8 *dest_ctx,
 		 * of the endianness of the machine.
 		 */
 		if (ce_info[f].width > (ce_info[f].size_of * BITS_PER_BYTE)) {
-			ice_debug(hw, ICE_DBG_QCTX,
-				  "Field %d width of %d bits larger than size of %d byte(s) ... skipping write\n",
+			ice_debug(hw, ICE_DBG_QCTX, "Field %d width of %d bits larger than size of %d byte(s) ... skipping write\n",
 				  f, ce_info[f].width, ce_info[f].size_of);
 			continue;
 		}
@@ -4292,8 +4263,7 @@ ice_get_link_default_override(struct ice_link_default_override_tlv *ldo,
 	status = ice_get_pfa_module_tlv(hw, &tlv, &tlv_len,
 					ICE_SR_LINK_DEFAULT_OVERRIDE_PTR);
 	if (status) {
-		ice_debug(hw, ICE_DBG_INIT,
-			  "Failed to read link override TLV.\n");
+		ice_debug(hw, ICE_DBG_INIT, "Failed to read link override TLV.\n");
 		return status;
 	}
 
@@ -4304,8 +4274,7 @@ ice_get_link_default_override(struct ice_link_default_override_tlv *ldo,
 	/* link options first */
 	status = ice_read_sr_word(hw, tlv_start, &buf);
 	if (status) {
-		ice_debug(hw, ICE_DBG_INIT,
-			  "Failed to read override link options.\n");
+		ice_debug(hw, ICE_DBG_INIT, "Failed to read override link options.\n");
 		return status;
 	}
 	ldo->options = buf & ICE_LINK_OVERRIDE_OPT_M;
@@ -4316,8 +4285,7 @@ ice_get_link_default_override(struct ice_link_default_override_tlv *ldo,
 	offset = tlv_start + ICE_SR_PFA_LINK_OVERRIDE_FEC_OFFSET;
 	status = ice_read_sr_word(hw, offset, &buf);
 	if (status) {
-		ice_debug(hw, ICE_DBG_INIT,
-			  "Failed to read override phy config.\n");
+		ice_debug(hw, ICE_DBG_INIT, "Failed to read override phy config.\n");
 		return status;
 	}
 	ldo->fec_options = buf & ICE_LINK_OVERRIDE_FEC_OPT_M;
@@ -4327,8 +4295,7 @@ ice_get_link_default_override(struct ice_link_default_override_tlv *ldo,
 	for (i = 0; i < ICE_SR_PFA_LINK_OVERRIDE_PHY_WORDS; i++) {
 		status = ice_read_sr_word(hw, (offset + i), &buf);
 		if (status) {
-			ice_debug(hw, ICE_DBG_INIT,
-				  "Failed to read override link options.\n");
+			ice_debug(hw, ICE_DBG_INIT, "Failed to read override link options.\n");
 			return status;
 		}
 		/* shift 16 bits at a time to fill 64 bits */
@@ -4341,8 +4308,7 @@ ice_get_link_default_override(struct ice_link_default_override_tlv *ldo,
 	for (i = 0; i < ICE_SR_PFA_LINK_OVERRIDE_PHY_WORDS; i++) {
 		status = ice_read_sr_word(hw, (offset + i), &buf);
 		if (status) {
-			ice_debug(hw, ICE_DBG_INIT,
-				  "Failed to read override link options.\n");
+			ice_debug(hw, ICE_DBG_INIT, "Failed to read override link options.\n");
 			return status;
 		}
 		/* shift 16 bits at a time to fill 64 bits */
diff --git a/drivers/net/ethernet/intel/ice/ice_controlq.c b/drivers/net/ethernet/intel/ice/ice_controlq.c
index 1f46a7828be8..4db12d1f5808 100644
--- a/drivers/net/ethernet/intel/ice/ice_controlq.c
+++ b/drivers/net/ethernet/intel/ice/ice_controlq.c
@@ -717,8 +717,7 @@ enum ice_status ice_init_all_ctrlq(struct ice_hw *hw)
 		if (status != ICE_ERR_AQ_FW_CRITICAL)
 			break;
 
-		ice_debug(hw, ICE_DBG_AQ_MSG,
-			  "Retry Admin Queue init due to FW critical error\n");
+		ice_debug(hw, ICE_DBG_AQ_MSG, "Retry Admin Queue init due to FW critical error\n");
 		ice_shutdown_ctrlq(hw, ICE_CTL_Q_ADMIN);
 		msleep(ICE_CTL_Q_ADMIN_INIT_MSEC);
 	} while (retry++ < ICE_CTL_Q_ADMIN_INIT_TIMEOUT);
@@ -813,8 +812,7 @@ static u16 ice_clean_sq(struct ice_hw *hw, struct ice_ctl_q_info *cq)
 	details = ICE_CTL_Q_DETAILS(*sq, ntc);
 
 	while (rd32(hw, cq->sq.head) != ntc) {
-		ice_debug(hw, ICE_DBG_AQ_MSG,
-			  "ntc %d head %d.\n", ntc, rd32(hw, cq->sq.head));
+		ice_debug(hw, ICE_DBG_AQ_MSG, "ntc %d head %d.\n", ntc, rd32(hw, cq->sq.head));
 		memset(desc, 0, sizeof(*desc));
 		memset(details, 0, sizeof(*details));
 		ntc++;
@@ -852,8 +850,7 @@ static void ice_debug_cq(struct ice_hw *hw, void *desc, void *buf, u16 buf_len)
 
 	len = le16_to_cpu(cq_desc->datalen);
 
-	ice_debug(hw, ICE_DBG_AQ_DESC,
-		  "CQ CMD: opcode 0x%04X, flags 0x%04X, datalen 0x%04X, retval 0x%04X\n",
+	ice_debug(hw, ICE_DBG_AQ_DESC, "CQ CMD: opcode 0x%04X, flags 0x%04X, datalen 0x%04X, retval 0x%04X\n",
 		  le16_to_cpu(cq_desc->opcode),
 		  le16_to_cpu(cq_desc->flags),
 		  le16_to_cpu(cq_desc->datalen), le16_to_cpu(cq_desc->retval));
@@ -925,8 +922,7 @@ ice_sq_send_cmd(struct ice_hw *hw, struct ice_ctl_q_info *cq,
 	cq->sq_last_status = ICE_AQ_RC_OK;
 
 	if (!cq->sq.count) {
-		ice_debug(hw, ICE_DBG_AQ_MSG,
-			  "Control Send queue not initialized.\n");
+		ice_debug(hw, ICE_DBG_AQ_MSG, "Control Send queue not initialized.\n");
 		status = ICE_ERR_AQ_EMPTY;
 		goto sq_send_command_error;
 	}
@@ -938,8 +934,7 @@ ice_sq_send_cmd(struct ice_hw *hw, struct ice_ctl_q_info *cq,
 
 	if (buf) {
 		if (buf_size > cq->sq_buf_size) {
-			ice_debug(hw, ICE_DBG_AQ_MSG,
-				  "Invalid buffer size for Control Send queue: %d.\n",
+			ice_debug(hw, ICE_DBG_AQ_MSG, "Invalid buffer size for Control Send queue: %d.\n",
 				  buf_size);
 			status = ICE_ERR_INVAL_SIZE;
 			goto sq_send_command_error;
@@ -952,8 +947,7 @@ ice_sq_send_cmd(struct ice_hw *hw, struct ice_ctl_q_info *cq,
 
 	val = rd32(hw, cq->sq.head);
 	if (val >= cq->num_sq_entries) {
-		ice_debug(hw, ICE_DBG_AQ_MSG,
-			  "head overrun at %d in the Control Send Queue ring\n",
+		ice_debug(hw, ICE_DBG_AQ_MSG, "head overrun at %d in the Control Send Queue ring\n",
 			  val);
 		status = ICE_ERR_AQ_EMPTY;
 		goto sq_send_command_error;
@@ -971,8 +965,7 @@ ice_sq_send_cmd(struct ice_hw *hw, struct ice_ctl_q_info *cq,
 	 * called in a separate thread in case of asynchronous completions.
 	 */
 	if (ice_clean_sq(hw, cq) == 0) {
-		ice_debug(hw, ICE_DBG_AQ_MSG,
-			  "Error: Control Send Queue is full.\n");
+		ice_debug(hw, ICE_DBG_AQ_MSG, "Error: Control Send Queue is full.\n");
 		status = ICE_ERR_AQ_FULL;
 		goto sq_send_command_error;
 	}
@@ -1000,8 +993,7 @@ ice_sq_send_cmd(struct ice_hw *hw, struct ice_ctl_q_info *cq,
 	}
 
 	/* Debug desc and buffer */
-	ice_debug(hw, ICE_DBG_AQ_DESC,
-		  "ATQ: Control Send queue desc and buffer:\n");
+	ice_debug(hw, ICE_DBG_AQ_DESC, "ATQ: Control Send queue desc and buffer:\n");
 
 	ice_debug_cq(hw, (void *)desc_on_ring, buf, buf_size);
 
@@ -1026,8 +1018,7 @@ ice_sq_send_cmd(struct ice_hw *hw, struct ice_ctl_q_info *cq,
 			u16 copy_size = le16_to_cpu(desc->datalen);
 
 			if (copy_size > buf_size) {
-				ice_debug(hw, ICE_DBG_AQ_MSG,
-					  "Return len %d > than buf len %d\n",
+				ice_debug(hw, ICE_DBG_AQ_MSG, "Return len %d > than buf len %d\n",
 					  copy_size, buf_size);
 				status = ICE_ERR_AQ_ERROR;
 			} else {
@@ -1036,8 +1027,7 @@ ice_sq_send_cmd(struct ice_hw *hw, struct ice_ctl_q_info *cq,
 		}
 		retval = le16_to_cpu(desc->retval);
 		if (retval) {
-			ice_debug(hw, ICE_DBG_AQ_MSG,
-				  "Control Send Queue command 0x%04X completed with error 0x%X\n",
+			ice_debug(hw, ICE_DBG_AQ_MSG, "Control Send Queue command 0x%04X completed with error 0x%X\n",
 				  le16_to_cpu(desc->opcode),
 				  retval);
 
@@ -1050,8 +1040,7 @@ ice_sq_send_cmd(struct ice_hw *hw, struct ice_ctl_q_info *cq,
 		cq->sq_last_status = (enum ice_aq_err)retval;
 	}
 
-	ice_debug(hw, ICE_DBG_AQ_MSG,
-		  "ATQ: desc and buffer writeback:\n");
+	ice_debug(hw, ICE_DBG_AQ_MSG, "ATQ: desc and buffer writeback:\n");
 
 	ice_debug_cq(hw, (void *)desc, buf, buf_size);
 
@@ -1067,8 +1056,7 @@ ice_sq_send_cmd(struct ice_hw *hw, struct ice_ctl_q_info *cq,
 			ice_debug(hw, ICE_DBG_AQ_MSG, "Critical FW error.\n");
 			status = ICE_ERR_AQ_FW_CRITICAL;
 		} else {
-			ice_debug(hw, ICE_DBG_AQ_MSG,
-				  "Control Send Queue Writeback timeout.\n");
+			ice_debug(hw, ICE_DBG_AQ_MSG, "Control Send Queue Writeback timeout.\n");
 			status = ICE_ERR_AQ_TIMEOUT;
 		}
 	}
@@ -1124,8 +1112,7 @@ ice_clean_rq_elem(struct ice_hw *hw, struct ice_ctl_q_info *cq,
 	mutex_lock(&cq->rq_lock);
 
 	if (!cq->rq.count) {
-		ice_debug(hw, ICE_DBG_AQ_MSG,
-			  "Control Receive queue not initialized.\n");
+		ice_debug(hw, ICE_DBG_AQ_MSG, "Control Receive queue not initialized.\n");
 		ret_code = ICE_ERR_AQ_EMPTY;
 		goto clean_rq_elem_err;
 	}
@@ -1147,8 +1134,7 @@ ice_clean_rq_elem(struct ice_hw *hw, struct ice_ctl_q_info *cq,
 	flags = le16_to_cpu(desc->flags);
 	if (flags & ICE_AQ_FLAG_ERR) {
 		ret_code = ICE_ERR_AQ_ERROR;
-		ice_debug(hw, ICE_DBG_AQ_MSG,
-			  "Control Receive Queue Event 0x%04X received with error 0x%X\n",
+		ice_debug(hw, ICE_DBG_AQ_MSG, "Control Receive Queue Event 0x%04X received with error 0x%X\n",
 			  le16_to_cpu(desc->opcode),
 			  cq->rq_last_status);
 	}
diff --git a/drivers/net/ethernet/intel/ice/ice_flex_pipe.c b/drivers/net/ethernet/intel/ice/ice_flex_pipe.c
index da9797c11a8d..eb11df8deae8 100644
--- a/drivers/net/ethernet/intel/ice/ice_flex_pipe.c
+++ b/drivers/net/ethernet/intel/ice/ice_flex_pipe.c
@@ -709,8 +709,7 @@ ice_acquire_global_cfg_lock(struct ice_hw *hw,
 	if (!status)
 		mutex_lock(&ice_global_cfg_lock_sw);
 	else if (status == ICE_ERR_AQ_NO_WORK)
-		ice_debug(hw, ICE_DBG_PKG,
-			  "Global config lock: No work to do\n");
+		ice_debug(hw, ICE_DBG_PKG, "Global config lock: No work to do\n");
 
 	return status;
 }
@@ -909,8 +908,7 @@ ice_update_pkg(struct ice_hw *hw, struct ice_buf *bufs, u32 count)
 					   last, &offset, &info, NULL);
 
 		if (status) {
-			ice_debug(hw, ICE_DBG_PKG,
-				  "Update pkg failed: err %d off %d inf %d\n",
+			ice_debug(hw, ICE_DBG_PKG, "Update pkg failed: err %d off %d inf %d\n",
 				  status, offset, info);
 			break;
 		}
@@ -988,8 +986,7 @@ ice_dwnld_cfg_bufs(struct ice_hw *hw, struct ice_buf *bufs, u32 count)
 		/* Save AQ status from download package */
 		hw->pkg_dwnld_status = hw->adminq.sq_last_status;
 		if (status) {
-			ice_debug(hw, ICE_DBG_PKG,
-				  "Pkg download failed: err %d off %d inf %d\n",
+			ice_debug(hw, ICE_DBG_PKG, "Pkg download failed: err %d off %d inf %d\n",
 				  status, offset, info);
 
 			break;
@@ -1083,8 +1080,7 @@ ice_init_pkg_info(struct ice_hw *hw, struct ice_pkg_hdr *pkg_hdr)
 			  meta_seg->pkg_ver.update, meta_seg->pkg_ver.draft,
 			  meta_seg->pkg_name);
 	} else {
-		ice_debug(hw, ICE_DBG_INIT,
-			  "Did not find metadata segment in driver package\n");
+		ice_debug(hw, ICE_DBG_INIT, "Did not find metadata segment in driver package\n");
 		return ICE_ERR_CFG;
 	}
 
@@ -1101,8 +1097,7 @@ ice_init_pkg_info(struct ice_hw *hw, struct ice_pkg_hdr *pkg_hdr)
 			  seg_hdr->seg_format_ver.draft,
 			  seg_hdr->seg_id);
 	} else {
-		ice_debug(hw, ICE_DBG_INIT,
-			  "Did not find ice segment in driver package\n");
+		ice_debug(hw, ICE_DBG_INIT, "Did not find ice segment in driver package\n");
 		return ICE_ERR_CFG;
 	}
 
@@ -1318,8 +1313,7 @@ ice_chk_pkg_compat(struct ice_hw *hw, struct ice_pkg_hdr *ospkg,
 		    (*seg)->hdr.seg_format_ver.minor >
 			pkg->pkg_info[i].ver.minor) {
 			status = ICE_ERR_FW_DDP_MISMATCH;
-			ice_debug(hw, ICE_DBG_INIT,
-				  "OS package is not compatible with NVM.\n");
+			ice_debug(hw, ICE_DBG_INIT, "OS package is not compatible with NVM.\n");
 		}
 		/* done processing NVM package so break */
 		break;
@@ -1387,8 +1381,7 @@ enum ice_status ice_init_pkg(struct ice_hw *hw, u8 *buf, u32 len)
 	ice_init_pkg_hints(hw, seg);
 	status = ice_download_pkg(hw, seg);
 	if (status == ICE_ERR_AQ_NO_WORK) {
-		ice_debug(hw, ICE_DBG_INIT,
-			  "package previously loaded - no work.\n");
+		ice_debug(hw, ICE_DBG_INIT, "package previously loaded - no work.\n");
 		status = 0;
 	}
 
@@ -3267,8 +3260,7 @@ ice_has_prof_vsig(struct ice_hw *hw, enum ice_block blk, u16 vsig, u64 hdl)
 		if (ent->profile_cookie == hdl)
 			return true;
 
-	ice_debug(hw, ICE_DBG_INIT,
-		  "Characteristic list for VSI group %d not found.\n",
+	ice_debug(hw, ICE_DBG_INIT, "Characteristic list for VSI group %d not found.\n",
 		  vsig);
 	return false;
 }
diff --git a/drivers/net/ethernet/intel/ice/ice_flow.c b/drivers/net/ethernet/intel/ice/ice_flow.c
index bff0ca02f8c6..a4c4857b933e 100644
--- a/drivers/net/ethernet/intel/ice/ice_flow.c
+++ b/drivers/net/ethernet/intel/ice/ice_flow.c
@@ -1080,8 +1080,7 @@ ice_flow_add_prof_sync(struct ice_hw *hw, enum ice_block blk,
 
 	status = ice_flow_proc_segs(hw, params);
 	if (status) {
-		ice_debug(hw, ICE_DBG_FLOW,
-			  "Error processing a flow's packet segments\n");
+		ice_debug(hw, ICE_DBG_FLOW, "Error processing a flow's packet segments\n");
 		goto out;
 	}
 
@@ -1291,8 +1290,7 @@ ice_flow_assoc_prof(struct ice_hw *hw, enum ice_block blk,
 		if (!status)
 			set_bit(vsi_handle, prof->vsis);
 		else
-			ice_debug(hw, ICE_DBG_FLOW,
-				  "HW profile add failed, %d\n",
+			ice_debug(hw, ICE_DBG_FLOW, "HW profile add failed, %d\n",
 				  status);
 	}
 
@@ -1323,8 +1321,7 @@ ice_flow_disassoc_prof(struct ice_hw *hw, enum ice_block blk,
 		if (!status)
 			clear_bit(vsi_handle, prof->vsis);
 		else
-			ice_debug(hw, ICE_DBG_FLOW,
-				  "HW profile remove failed, %d\n",
+			ice_debug(hw, ICE_DBG_FLOW, "HW profile remove failed, %d\n",
 				  status);
 	}
 
diff --git a/drivers/net/ethernet/intel/ice/ice_nvm.c b/drivers/net/ethernet/intel/ice/ice_nvm.c
index 0d1092cbc927..f729cd0c6224 100644
--- a/drivers/net/ethernet/intel/ice/ice_nvm.c
+++ b/drivers/net/ethernet/intel/ice/ice_nvm.c
@@ -73,8 +73,7 @@ ice_read_flat_nvm(struct ice_hw *hw, u32 offset, u32 *length, u8 *data,
 
 	/* Verify the length of the read if this is for the Shadow RAM */
 	if (read_shadow_ram && ((offset + inlen) > (hw->nvm.sr_words * 2u))) {
-		ice_debug(hw, ICE_DBG_NVM,
-			  "NVM error: requested offset is beyond Shadow RAM limit\n");
+		ice_debug(hw, ICE_DBG_NVM, "NVM error: requested offset is beyond Shadow RAM limit\n");
 		return ICE_ERR_PARAM;
 	}
 
@@ -397,8 +396,7 @@ static enum ice_status ice_get_orom_ver_info(struct ice_hw *hw)
 	status = ice_get_pfa_module_tlv(hw, &boot_cfg_tlv, &boot_cfg_tlv_len,
 					ICE_SR_BOOT_CFG_PTR);
 	if (status) {
-		ice_debug(hw, ICE_DBG_INIT,
-			  "Failed to read Boot Configuration Block TLV.\n");
+		ice_debug(hw, ICE_DBG_INIT, "Failed to read Boot Configuration Block TLV.\n");
 		return status;
 	}
 
@@ -406,8 +404,7 @@ static enum ice_status ice_get_orom_ver_info(struct ice_hw *hw)
 	 * (Combo Image Version High and Combo Image Version Low)
 	 */
 	if (boot_cfg_tlv_len < 2) {
-		ice_debug(hw, ICE_DBG_INIT,
-			  "Invalid Boot Configuration Block TLV size.\n");
+		ice_debug(hw, ICE_DBG_INIT, "Invalid Boot Configuration Block TLV size.\n");
 		return ICE_ERR_INVAL_SIZE;
 	}
 
@@ -542,14 +539,12 @@ static enum ice_status ice_discover_flash_size(struct ice_hw *hw)
 		status = ice_read_flat_nvm(hw, offset, &len, &data, false);
 		if (status == ICE_ERR_AQ_ERROR &&
 		    hw->adminq.sq_last_status == ICE_AQ_RC_EINVAL) {
-			ice_debug(hw, ICE_DBG_NVM,
-				  "%s: New upper bound of %u bytes\n",
+			ice_debug(hw, ICE_DBG_NVM, "%s: New upper bound of %u bytes\n",
 				  __func__, offset);
 			status = 0;
 			max_size = offset;
 		} else if (!status) {
-			ice_debug(hw, ICE_DBG_NVM,
-				  "%s: New lower bound of %u bytes\n",
+			ice_debug(hw, ICE_DBG_NVM, "%s: New lower bound of %u bytes\n",
 				  __func__, offset);
 			min_size = offset;
 		} else {
@@ -558,8 +553,7 @@ static enum ice_status ice_discover_flash_size(struct ice_hw *hw)
 		}
 	}
 
-	ice_debug(hw, ICE_DBG_NVM,
-		  "Predicted flash size is %u bytes\n", max_size);
+	ice_debug(hw, ICE_DBG_NVM, "Predicted flash size is %u bytes\n", max_size);
 
 	hw->nvm.flash_size = max_size;
 
@@ -600,15 +594,13 @@ enum ice_status ice_init_nvm(struct ice_hw *hw)
 	} else {
 		/* Blank programming mode */
 		nvm->blank_nvm_mode = true;
-		ice_debug(hw, ICE_DBG_NVM,
-			  "NVM init error: unsupported blank mode.\n");
+		ice_debug(hw, ICE_DBG_NVM, "NVM init error: unsupported blank mode.\n");
 		return ICE_ERR_NVM_BLANK_MODE;
 	}
 
 	status = ice_read_sr_word(hw, ICE_SR_NVM_DEV_STARTER_VER, &ver);
 	if (status) {
-		ice_debug(hw, ICE_DBG_INIT,
-			  "Failed to read DEV starter version.\n");
+		ice_debug(hw, ICE_DBG_INIT, "Failed to read DEV starter version.\n");
 		return status;
 	}
 	nvm->major_ver = (ver & ICE_NVM_VER_HI_MASK) >> ICE_NVM_VER_HI_SHIFT;
@@ -629,8 +621,7 @@ enum ice_status ice_init_nvm(struct ice_hw *hw)
 
 	status = ice_discover_flash_size(hw);
 	if (status) {
-		ice_debug(hw, ICE_DBG_NVM,
-			  "NVM init error: failed to discover flash size.\n");
+		ice_debug(hw, ICE_DBG_NVM, "NVM init error: failed to discover flash size.\n");
 		return status;
 	}
 
diff --git a/drivers/net/ethernet/intel/ice/ice_sched.c b/drivers/net/ethernet/intel/ice/ice_sched.c
index 44a228530253..f0912e44d4ad 100644
--- a/drivers/net/ethernet/intel/ice/ice_sched.c
+++ b/drivers/net/ethernet/intel/ice/ice_sched.c
@@ -164,8 +164,7 @@ ice_sched_add_node(struct ice_port_info *pi, u8 layer,
 	parent = ice_sched_find_node_by_teid(pi->root,
 					     le32_to_cpu(info->parent_teid));
 	if (!parent) {
-		ice_debug(hw, ICE_DBG_SCHED,
-			  "Parent Node not found for parent_teid=0x%x\n",
+		ice_debug(hw, ICE_DBG_SCHED, "Parent Node not found for parent_teid=0x%x\n",
 			  le32_to_cpu(info->parent_teid));
 		return ICE_ERR_PARAM;
 	}
@@ -704,8 +703,7 @@ static void ice_sched_clear_rl_prof(struct ice_port_info *pi)
 			rl_prof_elem->prof_id_ref = 0;
 			status = ice_sched_del_rl_profile(hw, rl_prof_elem);
 			if (status) {
-				ice_debug(hw, ICE_DBG_SCHED,
-					  "Remove rl profile failed\n");
+				ice_debug(hw, ICE_DBG_SCHED, "Remove rl profile failed\n");
 				/* On error, free mem required */
 				list_del(&rl_prof_elem->list_entry);
 				devm_kfree(ice_hw_to_dev(hw), rl_prof_elem);
@@ -863,8 +861,7 @@ ice_sched_add_elems(struct ice_port_info *pi, struct ice_sched_node *tc_node,
 	for (i = 0; i < num_nodes; i++) {
 		status = ice_sched_add_node(pi, layer, &buf->generic[i]);
 		if (status) {
-			ice_debug(hw, ICE_DBG_SCHED,
-				  "add nodes in SW DB failed status =%d\n",
+			ice_debug(hw, ICE_DBG_SCHED, "add nodes in SW DB failed status =%d\n",
 				  status);
 			break;
 		}
@@ -872,8 +869,7 @@ ice_sched_add_elems(struct ice_port_info *pi, struct ice_sched_node *tc_node,
 		teid = le32_to_cpu(buf->generic[i].node_teid);
 		new_node = ice_sched_find_node_by_teid(parent, teid);
 		if (!new_node) {
-			ice_debug(hw, ICE_DBG_SCHED,
-				  "Node is missing for teid =%d\n", teid);
+			ice_debug(hw, ICE_DBG_SCHED, "Node is missing for teid =%d\n", teid);
 			break;
 		}
 
@@ -1830,8 +1826,7 @@ ice_sched_rm_vsi_cfg(struct ice_port_info *pi, u16 vsi_handle, u8 owner)
 			continue;
 
 		if (ice_sched_is_leaf_node_present(vsi_node)) {
-			ice_debug(pi->hw, ICE_DBG_SCHED,
-				  "VSI has leaf nodes in TC %d\n", i);
+			ice_debug(pi->hw, ICE_DBG_SCHED, "VSI has leaf nodes in TC %d\n", i);
 			status = ICE_ERR_IN_USE;
 			goto exit_sched_rm_vsi_cfg;
 		}
@@ -1896,8 +1891,7 @@ static void ice_sched_rm_unused_rl_prof(struct ice_port_info *pi)
 		list_for_each_entry_safe(rl_prof_elem, rl_prof_tmp,
 					 &pi->rl_prof_list[ln], list_entry) {
 			if (!ice_sched_del_rl_profile(pi->hw, rl_prof_elem))
-				ice_debug(pi->hw, ICE_DBG_SCHED,
-					  "Removed rl profile\n");
+				ice_debug(pi->hw, ICE_DBG_SCHED, "Removed rl profile\n");
 		}
 	}
 }
@@ -2441,8 +2435,7 @@ ice_sched_rm_rl_profile(struct ice_port_info *pi, u8 layer_num, u8 profile_type,
 			/* Remove old profile ID from database */
 			status = ice_sched_del_rl_profile(pi->hw, rl_prof_elem);
 			if (status && status != ICE_ERR_IN_USE)
-				ice_debug(pi->hw, ICE_DBG_SCHED,
-					  "Remove rl profile failed\n");
+				ice_debug(pi->hw, ICE_DBG_SCHED, "Remove rl profile failed\n");
 			break;
 		}
 	if (status == ICE_ERR_IN_USE)
diff --git a/drivers/net/ethernet/intel/ice/ice_switch.c b/drivers/net/ethernet/intel/ice/ice_switch.c
index c3a6c41385ee..c33612132ddf 100644
--- a/drivers/net/ethernet/intel/ice/ice_switch.c
+++ b/drivers/net/ethernet/intel/ice/ice_switch.c
@@ -537,8 +537,7 @@ ice_init_port_info(struct ice_port_info *pi, u16 vsi_port_num, u8 type,
 		pi->dflt_rx_vsi_num = ICE_DFLT_VSI_INVAL;
 		break;
 	default:
-		ice_debug(pi->hw, ICE_DBG_SW,
-			  "incorrect VSI/port type received\n");
+		ice_debug(pi->hw, ICE_DBG_SW, "incorrect VSI/port type received\n");
 		break;
 	}
 }
@@ -1476,8 +1475,7 @@ ice_rem_update_vsi_list(struct ice_hw *hw, u16 vsi_handle,
 		tmp_fltr_info.vsi_handle = rem_vsi_handle;
 		status = ice_update_pkt_fwd_rule(hw, &tmp_fltr_info);
 		if (status) {
-			ice_debug(hw, ICE_DBG_SW,
-				  "Failed to update pkt fwd rule to FWD_TO_VSI on HW VSI %d, error %d\n",
+			ice_debug(hw, ICE_DBG_SW, "Failed to update pkt fwd rule to FWD_TO_VSI on HW VSI %d, error %d\n",
 				  tmp_fltr_info.fwd_id.hw_vsi_id, status);
 			return status;
 		}
@@ -1493,8 +1491,7 @@ ice_rem_update_vsi_list(struct ice_hw *hw, u16 vsi_handle,
 		/* Remove the VSI list since it is no longer used */
 		status = ice_remove_vsi_list_rule(hw, vsi_list_id, lkup_type);
 		if (status) {
-			ice_debug(hw, ICE_DBG_SW,
-				  "Failed to remove VSI list %d, error %d\n",
+			ice_debug(hw, ICE_DBG_SW, "Failed to remove VSI list %d, error %d\n",
 				  vsi_list_id, status);
 			return status;
 		}
@@ -1853,8 +1850,7 @@ ice_add_vlan_internal(struct ice_hw *hw, struct ice_fltr_list_entry *f_entry)
 		 */
 		if (v_list_itr->vsi_count > 1 &&
 		    v_list_itr->vsi_list_info->ref_cnt > 1) {
-			ice_debug(hw, ICE_DBG_SW,
-				  "Invalid configuration: Optimization to reuse VSI list with more than one VSI is not being done yet\n");
+			ice_debug(hw, ICE_DBG_SW, "Invalid configuration: Optimization to reuse VSI list with more than one VSI is not being done yet\n");
 			status = ICE_ERR_CFG;
 			goto exit;
 		}
@@ -2740,8 +2736,7 @@ ice_free_res_cntr(struct ice_hw *hw, u8 type, u8 alloc_shared, u16 num_items,
 	status = ice_aq_alloc_free_res(hw, 1, buf, buf_len,
 				       ice_aqc_opc_free_res, NULL);
 	if (status)
-		ice_debug(hw, ICE_DBG_SW,
-			  "counter resource could not be freed\n");
+		ice_debug(hw, ICE_DBG_SW, "counter resource could not be freed\n");
 
 	kfree(buf);
 	return status;
-- 
2.26.2


^ permalink raw reply related	[flat|nested] 28+ messages in thread

* [net-next v3 15/15] ice: Add space to unknown speed
  2020-11-13 21:44 [net-next v3 00/15][pull request] 100GbE Intel Wired LAN Driver Updates 2020-11-13 Tony Nguyen
                   ` (13 preceding siblings ...)
  2020-11-13 21:44 ` [net-next v3 14/15] ice: join format strings to same line as ice_debug Tony Nguyen
@ 2020-11-13 21:44 ` Tony Nguyen
  14 siblings, 0 replies; 28+ messages in thread
From: Tony Nguyen @ 2020-11-13 21:44 UTC (permalink / raw)
  To: davem, kuba
  Cc: Simon Perron Caissy, netdev, sassmann, anthony.l.nguyen, Aaron Brown

From: Simon Perron Caissy <simon.perron.caissy@intel.com>

Add a space to the end of the 'Unknown' string in order to avoid
concatenation with the 'bps' string when formatting the netdev log
message (which would otherwise read 'Unknownbps').

Signed-off-by: Simon Perron Caissy <simon.perron.caissy@intel.com>
Tested-by: Aaron Brown <aaron.f.brown@intel.com>
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
---
 drivers/net/ethernet/intel/ice/ice_main.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/net/ethernet/intel/ice/ice_main.c b/drivers/net/ethernet/intel/ice/ice_main.c
index 166d177bf91a..37c1dc70b27b 100644
--- a/drivers/net/ethernet/intel/ice/ice_main.c
+++ b/drivers/net/ethernet/intel/ice/ice_main.c
@@ -667,7 +667,7 @@ void ice_print_link_msg(struct ice_vsi *vsi, bool isup)
 		speed = "100 M";
 		break;
 	default:
-		speed = "Unknown";
+		speed = "Unknown ";
 		break;
 	}
 
-- 
2.26.2


^ permalink raw reply related	[flat|nested] 28+ messages in thread

* Re: [net-next v3 05/15] ice: create flow profile
  2020-11-13 21:44 ` [net-next v3 05/15] ice: create flow profile Tony Nguyen
@ 2020-11-13 23:56   ` Alexander Duyck
  2020-11-21  0:42     ` Nguyen, Anthony L
  0 siblings, 1 reply; 28+ messages in thread
From: Alexander Duyck @ 2020-11-13 23:56 UTC (permalink / raw)
  To: Tony Nguyen
  Cc: David Miller, Jakub Kicinski, Real Valiquette, Netdev,
	Stefan Assmann, Chinh Cao, Brijesh Behera

On Fri, Nov 13, 2020 at 1:46 PM Tony Nguyen <anthony.l.nguyen@intel.com> wrote:
>
> From: Real Valiquette <real.valiquette@intel.com>
>
> Implement the initial steps for creating an ACL filter to support ntuple
> masks. Create a flow profile based on a given mask rule and program it to
> the hardware. Though the profile is written to hardware, no actions are
> associated with the profile yet.
>
> Co-developed-by: Chinh Cao <chinh.t.cao@intel.com>
> Signed-off-by: Chinh Cao <chinh.t.cao@intel.com>
> Signed-off-by: Real Valiquette <real.valiquette@intel.com>
> Co-developed-by: Tony Nguyen <anthony.l.nguyen@intel.com>
> Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
> Tested-by: Brijesh Behera <brijeshx.behera@intel.com>

So I see two big issues with the patch.

First it looks like there is an anti-pattern of defensive NULL pointer
checks throughout. Those can probably all go since all of the callers
either use the pointer, or verify it is non-NULL before calling the
function in question.

In addition, the mask handling doesn't look right to me. It is calling
out a partial mask as being the only time you need an ACL, and I would
think it is any time you don't have a full mask for all ports/addresses,
since a flow director rule normally pulls in the full 4 tuple based on
ice_ntuple_set_input_set().

I commented on what I saw below.

Thanks.

- Alex

> ---
>  drivers/net/ethernet/intel/ice/Makefile       |   1 +
>  drivers/net/ethernet/intel/ice/ice.h          |   9 +
>  drivers/net/ethernet/intel/ice/ice_acl_main.c | 260 ++++++++++++++++
>  .../net/ethernet/intel/ice/ice_adminq_cmd.h   |  39 +++
>  .../net/ethernet/intel/ice/ice_ethtool_fdir.c | 290 ++++++++++++++----
>  .../net/ethernet/intel/ice/ice_flex_pipe.c    |  12 +-
>  drivers/net/ethernet/intel/ice/ice_flow.c     | 178 ++++++++++-
>  drivers/net/ethernet/intel/ice/ice_flow.h     |  17 +
>  8 files changed, 727 insertions(+), 79 deletions(-)
>  create mode 100644 drivers/net/ethernet/intel/ice/ice_acl_main.c
>
> diff --git a/drivers/net/ethernet/intel/ice/Makefile b/drivers/net/ethernet/intel/ice/Makefile
> index 0747976622cf..36a787b5ad8d 100644
> --- a/drivers/net/ethernet/intel/ice/Makefile
> +++ b/drivers/net/ethernet/intel/ice/Makefile
> @@ -20,6 +20,7 @@ ice-y := ice_main.o   \
>          ice_fltr.o     \
>          ice_fdir.o     \
>          ice_ethtool_fdir.o \
> +        ice_acl_main.o \
>          ice_acl.o      \
>          ice_acl_ctrl.o \
>          ice_flex_pipe.o \
> diff --git a/drivers/net/ethernet/intel/ice/ice.h b/drivers/net/ethernet/intel/ice/ice.h
> index 1008a6785e55..d813a5c765d0 100644
> --- a/drivers/net/ethernet/intel/ice/ice.h
> +++ b/drivers/net/ethernet/intel/ice/ice.h
> @@ -601,16 +601,25 @@ int ice_del_ntuple_ethtool(struct ice_vsi *vsi, struct ethtool_rxnfc *cmd);
>  int ice_get_ethtool_fdir_entry(struct ice_hw *hw, struct ethtool_rxnfc *cmd);
>  u32 ice_ntuple_get_max_fltr_cnt(struct ice_hw *hw);
>  int
> +ice_ntuple_l4_proto_to_port(enum ice_flow_seg_hdr l4_proto,
> +                           enum ice_flow_field *src_port,
> +                           enum ice_flow_field *dst_port);
> +int ice_ntuple_check_ip4_seg(struct ethtool_tcpip4_spec *tcp_ip4_spec);
> +int ice_ntuple_check_ip4_usr_seg(struct ethtool_usrip4_spec *usr_ip4_spec);
> +int
>  ice_get_fdir_fltr_ids(struct ice_hw *hw, struct ethtool_rxnfc *cmd,
>                       u32 *rule_locs);
>  void ice_fdir_release_flows(struct ice_hw *hw);
>  void ice_fdir_replay_flows(struct ice_hw *hw);
>  void ice_fdir_replay_fltrs(struct ice_pf *pf);
>  int ice_fdir_create_dflt_rules(struct ice_pf *pf);
> +enum ice_fltr_ptype ice_ethtool_flow_to_fltr(int eth);
>  int ice_aq_wait_for_event(struct ice_pf *pf, u16 opcode, unsigned long timeout,
>                           struct ice_rq_event_info *event);
>  int ice_open(struct net_device *netdev);
>  int ice_stop(struct net_device *netdev);
>  void ice_service_task_schedule(struct ice_pf *pf);
> +int
> +ice_acl_add_rule_ethtool(struct ice_vsi *vsi, struct ethtool_rxnfc *cmd);
>
>  #endif /* _ICE_H_ */
> diff --git a/drivers/net/ethernet/intel/ice/ice_acl_main.c b/drivers/net/ethernet/intel/ice/ice_acl_main.c
> new file mode 100644
> index 000000000000..be97dfb94652
> --- /dev/null
> +++ b/drivers/net/ethernet/intel/ice/ice_acl_main.c
> @@ -0,0 +1,260 @@
> +// SPDX-License-Identifier: GPL-2.0
> +/* Copyright (C) 2018-2020, Intel Corporation. */
> +
> +/* ACL support for ice */
> +
> +#include "ice.h"
> +#include "ice_lib.h"
> +
> +/* Number of action */
> +#define ICE_ACL_NUM_ACT                1
> +
> +/**
> + * ice_acl_set_ip4_addr_seg
> + * @seg: flow segment for programming
> + *
> + * Set the IPv4 source and destination address mask for the given flow segment
> + */
> +static void ice_acl_set_ip4_addr_seg(struct ice_flow_seg_info *seg)
> +{
> +       u16 val_loc, mask_loc;
> +
> +       /* IP source address */
> +       val_loc = offsetof(struct ice_fdir_fltr, ip.v4.src_ip);
> +       mask_loc = offsetof(struct ice_fdir_fltr, mask.v4.src_ip);
> +
> +       ice_flow_set_fld(seg, ICE_FLOW_FIELD_IDX_IPV4_SA, val_loc,
> +                        mask_loc, ICE_FLOW_FLD_OFF_INVAL, false);
> +
> +       /* IP destination address */
> +       val_loc = offsetof(struct ice_fdir_fltr, ip.v4.dst_ip);
> +       mask_loc = offsetof(struct ice_fdir_fltr, mask.v4.dst_ip);
> +
> +       ice_flow_set_fld(seg, ICE_FLOW_FIELD_IDX_IPV4_DA, val_loc,
> +                        mask_loc, ICE_FLOW_FLD_OFF_INVAL, false);
> +}
> +
> +/**
> + * ice_acl_set_ip4_port_seg
> + * @seg: flow segment for programming
> + * @l4_proto: Layer 4 protocol to program
> + *
> + * Set the source and destination port for the given flow segment based on the
> + * provided layer 4 protocol
> + */
> +static int
> +ice_acl_set_ip4_port_seg(struct ice_flow_seg_info *seg,
> +                        enum ice_flow_seg_hdr l4_proto)
> +{
> +       enum ice_flow_field src_port, dst_port;
> +       u16 val_loc, mask_loc;
> +       int err;
> +
> +       err = ice_ntuple_l4_proto_to_port(l4_proto, &src_port, &dst_port);
> +       if (err)
> +               return err;
> +
> +       /* Layer 4 source port */
> +       val_loc = offsetof(struct ice_fdir_fltr, ip.v4.src_port);
> +       mask_loc = offsetof(struct ice_fdir_fltr, mask.v4.src_port);
> +
> +       ice_flow_set_fld(seg, src_port, val_loc, mask_loc,
> +                        ICE_FLOW_FLD_OFF_INVAL, false);
> +
> +       /* Layer 4 destination port */
> +       val_loc = offsetof(struct ice_fdir_fltr, ip.v4.dst_port);
> +       mask_loc = offsetof(struct ice_fdir_fltr, mask.v4.dst_port);
> +
> +       ice_flow_set_fld(seg, dst_port, val_loc, mask_loc,
> +                        ICE_FLOW_FLD_OFF_INVAL, false);
> +
> +       return 0;
> +}
> +
> +/**
> + * ice_acl_set_ip4_seg
> + * @seg: flow segment for programming
> + * @tcp_ip4_spec: mask data from ethtool
> + * @l4_proto: Layer 4 protocol to program
> + *
> + * Set the mask data into the flow segment to be used to program HW
> + * table based on provided L4 protocol for IPv4
> + */
> +static int
> +ice_acl_set_ip4_seg(struct ice_flow_seg_info *seg,
> +                   struct ethtool_tcpip4_spec *tcp_ip4_spec,
> +                   enum ice_flow_seg_hdr l4_proto)
> +{
> +       int err;
> +
> +       if (!seg)
> +               return -EINVAL;
> +

Unnecessary NULL pointer check. This is a static function and all
callers check the value before calling this function. You would likely
be better off just letting a NULL pointer dereference catch this.
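
Roughly what I mean, as a contrived sketch (hypothetical helper names;
only the ICE_FLOW_SET_HDRS() usage is taken from the patch):

/* Defensive variant: the NULL check can only hide a caller bug,
 * since every caller already guarantees seg is non-NULL.
 */
static int set_seg_defensive(struct ice_flow_seg_info *seg)
{
        if (!seg)
                return -EINVAL;
        ICE_FLOW_SET_HDRS(seg, ICE_FLOW_SEG_HDR_IPV4);
        return 0;
}

/* Trusting variant: a NULL pointer faults right at the dereference,
 * and the oops points at the buggy caller instead of being silently
 * converted into -EINVAL.
 */
static int set_seg_trusting(struct ice_flow_seg_info *seg)
{
        ICE_FLOW_SET_HDRS(seg, ICE_FLOW_SEG_HDR_IPV4);
        return 0;
}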

> +       err = ice_ntuple_check_ip4_seg(tcp_ip4_spec);
> +       if (err)
> +               return err;
> +
> +       ICE_FLOW_SET_HDRS(seg, ICE_FLOW_SEG_HDR_IPV4 | l4_proto);
> +       ice_acl_set_ip4_addr_seg(seg);
> +
> +       return ice_acl_set_ip4_port_seg(seg, l4_proto);
> +}
> +
> +/**
> + * ice_acl_set_ip4_usr_seg
> + * @seg: flow segment for programming
> + * @usr_ip4_spec: ethtool userdef packet offset
> + *
> + * Set the offset data into the flow segment to be used to program HW
> + * table for IPv4
> + */
> +static int
> +ice_acl_set_ip4_usr_seg(struct ice_flow_seg_info *seg,
> +                       struct ethtool_usrip4_spec *usr_ip4_spec)
> +{
> +       int err;
> +
> +       if (!seg)
> +               return -EINVAL;
> +

and here..

> +       err = ice_ntuple_check_ip4_usr_seg(usr_ip4_spec);
> +       if (err)
> +               return err;
> +
> +       ICE_FLOW_SET_HDRS(seg, ICE_FLOW_SEG_HDR_IPV4);
> +       ice_acl_set_ip4_addr_seg(seg);
> +
> +       return 0;
> +}
> +
> +/**
> + * ice_acl_check_input_set - Checks that a given ACL input set is valid
> + * @pf: ice PF structure
> + * @fsp: pointer to ethtool Rx flow specification
> + *
> + * Returns 0 on success and negative values for failure
> + */
> +static int
> +ice_acl_check_input_set(struct ice_pf *pf, struct ethtool_rx_flow_spec *fsp)
> +{
> +       struct ice_fd_hw_prof *hw_prof = NULL;
> +       struct ice_flow_prof *prof = NULL;
> +       struct ice_flow_seg_info *old_seg;
> +       struct ice_flow_seg_info *seg;
> +       enum ice_fltr_ptype fltr_type;
> +       struct ice_hw *hw = &pf->hw;
> +       enum ice_status status;
> +       struct device *dev;
> +       int err;
> +
> +       if (!fsp)
> +               return -EINVAL;
> +

and here...

> +       dev = ice_pf_to_dev(pf);
> +       seg = devm_kzalloc(dev, sizeof(*seg), GFP_KERNEL);
> +       if (!seg)
> +               return -ENOMEM;
> +

This check of seg covers the next 4 functions. So all of the functions
you call out below don't need to check for seg being NULL as you
already did it here.

> +       switch (fsp->flow_type & ~FLOW_EXT) {
> +       case TCP_V4_FLOW:
> +               err = ice_acl_set_ip4_seg(seg, &fsp->m_u.tcp_ip4_spec,
> +                                         ICE_FLOW_SEG_HDR_TCP);
> +               break;
> +       case UDP_V4_FLOW:
> +               err = ice_acl_set_ip4_seg(seg, &fsp->m_u.tcp_ip4_spec,
> +                                         ICE_FLOW_SEG_HDR_UDP);
> +               break;
> +       case SCTP_V4_FLOW:
> +               err = ice_acl_set_ip4_seg(seg, &fsp->m_u.tcp_ip4_spec,
> +                                         ICE_FLOW_SEG_HDR_SCTP);
> +               break;
> +       case IPV4_USER_FLOW:
> +               err = ice_acl_set_ip4_usr_seg(seg, &fsp->m_u.usr_ip4_spec);
> +               break;
> +       default:
> +               err = -EOPNOTSUPP;
> +       }
> +       if (err)
> +               goto err_exit;
> +
> +       fltr_type = ice_ethtool_flow_to_fltr(fsp->flow_type & ~FLOW_EXT);
> +
> +       if (!hw->acl_prof) {
> +               hw->acl_prof = devm_kcalloc(dev, ICE_FLTR_PTYPE_MAX,
> +                                           sizeof(*hw->acl_prof), GFP_KERNEL);
> +               if (!hw->acl_prof) {
> +                       err = -ENOMEM;
> +                       goto err_exit;
> +               }
> +       }
> +       if (!hw->acl_prof[fltr_type]) {
> +               hw->acl_prof[fltr_type] = devm_kzalloc(dev,
> +                                                      sizeof(**hw->acl_prof),
> +                                                      GFP_KERNEL);
> +               if (!hw->acl_prof[fltr_type]) {
> +                       err = -ENOMEM;
> +                       goto err_acl_prof_exit;
> +               }
> +               hw->acl_prof[fltr_type]->cnt = 0;
> +       }
> +
> +       hw_prof = hw->acl_prof[fltr_type];
> +       old_seg = hw_prof->fdir_seg[0];
> +       if (old_seg) {
> +               /* This flow_type already has an input set.
> +                * If it matches the requested input set then we are
> +                * done. If it's different then it's an error.
> +                */
> +               if (!memcmp(old_seg, seg, sizeof(*seg))) {
> +                       devm_kfree(dev, seg);
> +                       return 0;
> +               }
> +
> +               err = -EINVAL;
> +               goto err_acl_prof_flow_exit;
> +       }
> +
> +       /* Adding a profile for the given flow specification with no
> +        * actions (NULL) and zero actions 0.
> +        */
> +       status = ice_flow_add_prof(hw, ICE_BLK_ACL, ICE_FLOW_RX, fltr_type,
> +                                  seg, 1, &prof);
> +       if (status) {
> +               err = ice_status_to_errno(status);
> +               goto err_exit;
> +       }
> +
> +       hw_prof->fdir_seg[0] = seg;
> +       return 0;
> +
> +err_acl_prof_flow_exit:
> +       devm_kfree(dev, hw->acl_prof[fltr_type]);
> +err_acl_prof_exit:
> +       devm_kfree(dev, hw->acl_prof);
> +err_exit:
> +       devm_kfree(dev, seg);
> +
> +       return err;
> +}
> +
> +/**
> + * ice_acl_add_rule_ethtool - Adds an ACL rule
> + * @vsi: pointer to target VSI
> + * @cmd: command to add or delete ACL rule
> + *
> + * Returns 0 on success and negative values for failure
> + */
> +int ice_acl_add_rule_ethtool(struct ice_vsi *vsi, struct ethtool_rxnfc *cmd)
> +{
> +       struct ethtool_rx_flow_spec *fsp;
> +       struct ice_pf *pf;
> +
> +       if (!vsi || !cmd)
> +               return -EINVAL;
> +

This is unneeded. The one caller of this has already referenced cmd
and vsi in multiple spots so this is a redundant check.

Actually, there are so many redundant NULL pointer checks throughout
that, rather than me going through and calling out each one, I would
suggest looking at every function that starts with one of these checks
and evaluating whether you really need it.

<snip>

> +       ret = ice_ntuple_check_ip6_usr_seg(usr_ip6_spec);
> +       if (ret)
> +               return ret;
> +
>         *perfect_fltr = true;
>         ICE_FLOW_SET_HDRS(seg, ICE_FLOW_SEG_HDR_IPV6);
>
> @@ -1489,6 +1583,64 @@ int ice_del_ntuple_ethtool(struct ice_vsi *vsi, struct ethtool_rxnfc *cmd)
>         return val;
>  }
>
> +/**
> + * ice_is_acl_filter - Checks if it's a FD or ACL filter
> + * @fsp: pointer to ethtool Rx flow specification
> + *
> + * If any field of the provided filter is using a partial mask then this is
> + * an ACL filter.
> + *

I'm not sure this logic is correct. Can the flow director rules handle
a field that is removed? Last I knew it couldn't. If that is the case
you should be using ACL for any case in which a full mask is not
provided. So in your tests below you could probably drop the check for
zero as I don't think that is a valid case in which flow director
would work.
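
To make the cases concrete (made-up interface and addresses; note that
ethtool's 'm' masks mark don't-care bits, so the kernel sees their
complement):

# full mask on src-ip: a plain flow director match
ethtool -N eth0 flow-type tcp4 src-ip 192.168.0.1 dst-port 80 action 4

# src-ip dropped entirely: kernel-side mask is 0, still flow director
ethtool -N eth0 flow-type tcp4 src-ip 192.168.0.1 m 255.255.255.255 \
        dst-port 80 action 4

# partial mask (match the /24 only): the one case routed to ACL today
ethtool -N eth0 flow-type tcp4 src-ip 192.168.0.1 m 0.0.0.255 \
        dst-port 80 action 4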

> + * Returns true if ACL filter otherwise false.
> + */
> +static bool ice_is_acl_filter(struct ethtool_rx_flow_spec *fsp)
> +{
> +       struct ethtool_tcpip4_spec *tcp_ip4_spec;
> +       struct ethtool_usrip4_spec *usr_ip4_spec;
> +
> +       switch (fsp->flow_type & ~FLOW_EXT) {
> +       case TCP_V4_FLOW:
> +       case UDP_V4_FLOW:
> +       case SCTP_V4_FLOW:
> +               tcp_ip4_spec = &fsp->m_u.tcp_ip4_spec;
> +
> +               /* IP source address */
> +               if (tcp_ip4_spec->ip4src &&
> +                   tcp_ip4_spec->ip4src != htonl(0xFFFFFFFF))
> +                       return true;
> +
> +               /* IP destination address */
> +               if (tcp_ip4_spec->ip4dst &&
> +                   tcp_ip4_spec->ip4dst != htonl(0xFFFFFFFF))
> +                       return true;
> +

Instead of testing this up here you could just skip the break and fall
through since the source and destination IP addresses occupy the same
spots on usr_ip4_spec and tcp_ip4_spec. You could probably also just
use tcp_ip4_spec for the entire test.
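
Something like this -- untested sketch using the kernel's fallthrough
keyword, just to show the shape:

struct ethtool_tcpip4_spec *tcp_ip4_spec = &fsp->m_u.tcp_ip4_spec;

switch (fsp->flow_type & ~FLOW_EXT) {
case TCP_V4_FLOW:
case UDP_V4_FLOW:
case SCTP_V4_FLOW:
        /* Layer 4 source/destination ports */
        if (tcp_ip4_spec->psrc && tcp_ip4_spec->psrc != htons(0xFFFF))
                return true;
        if (tcp_ip4_spec->pdst && tcp_ip4_spec->pdst != htons(0xFFFF))
                return true;
        fallthrough;
case IPV4_USER_FLOW:
        /* ip4src/ip4dst sit at the same offsets in tcp_ip4_spec and
         * usr_ip4_spec, so one pointer covers all four flow types
         */
        if (tcp_ip4_spec->ip4src &&
            tcp_ip4_spec->ip4src != htonl(0xFFFFFFFF))
                return true;
        if (tcp_ip4_spec->ip4dst &&
            tcp_ip4_spec->ip4dst != htonl(0xFFFFFFFF))
                return true;
        break;
}

return false;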

> +               /* Layer 4 source port */
> +               if (tcp_ip4_spec->psrc && tcp_ip4_spec->psrc != htons(0xFFFF))
> +                       return true;
> +
> +               /* Layer 4 destination port */
> +               if (tcp_ip4_spec->pdst && tcp_ip4_spec->pdst != htons(0xFFFF))
> +                       return true;
> +
> +               break;
> +       case IPV4_USER_FLOW:
> +               usr_ip4_spec = &fsp->m_u.usr_ip4_spec;
> +
> +               /* IP source address */
> +               if (usr_ip4_spec->ip4src &&
> +                   usr_ip4_spec->ip4src != htonl(0xFFFFFFFF))
> +                       return true;
> +
> +               /* IP destination address */
> +               if (usr_ip4_spec->ip4dst &&
> +                   usr_ip4_spec->ip4dst != htonl(0xFFFFFFFF))
> +                       return true;
> +
> +               break;
> +       }
> +
> +       return false;
> +}
> +

^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [net-next v3 08/15] ice: don't always return an error for Get PHY Abilities AQ command
  2020-11-13 21:44 ` [net-next v3 08/15] ice: don't always return an error for Get PHY Abilities AQ command Tony Nguyen
@ 2020-11-14  1:25   ` Alexander Duyck
  2020-11-21  0:43     ` Nguyen, Anthony L
  0 siblings, 1 reply; 28+ messages in thread
From: Alexander Duyck @ 2020-11-14  1:25 UTC (permalink / raw)
  To: Tony Nguyen
  Cc: David Miller, Jakub Kicinski, Paul M Stillwell Jr, Netdev,
	Stefan Assmann, Aaron Brown

On Fri, Nov 13, 2020 at 1:49 PM Tony Nguyen <anthony.l.nguyen@intel.com> wrote:
>
> From: Paul M Stillwell Jr <paul.m.stillwell.jr@intel.com>
>
> There are times when the driver shouldn't return an error when the Get
> PHY abilities AQ command (0x0600) returns an error. Instead the driver
> should log that the error occurred and continue on. This allows the
> driver to load even though the AQ command failed. The user can then
> later determine the reason for the failure and correct it.
>
> Signed-off-by: Paul M Stillwell Jr <paul.m.stillwell.jr@intel.com>
> Tested-by: Aaron Brown <aaron.f.brown@intel.com>
> Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
> ---
>  drivers/net/ethernet/intel/ice/ice_common.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/drivers/net/ethernet/intel/ice/ice_common.c b/drivers/net/ethernet/intel/ice/ice_common.c
> index 7db5fd977367..3c600808d0da 100644
> --- a/drivers/net/ethernet/intel/ice/ice_common.c
> +++ b/drivers/net/ethernet/intel/ice/ice_common.c
> @@ -925,7 +925,7 @@ enum ice_status ice_init_hw(struct ice_hw *hw)
>                                      ICE_AQC_REPORT_TOPO_CAP, pcaps, NULL);
>         devm_kfree(ice_hw_to_dev(hw), pcaps);
>         if (status)
> -               goto err_unroll_sched;
> +               ice_debug(hw, ICE_DBG_PHY, "Get PHY capabilities failed, continuing anyway\n");
>
>         /* Initialize port_info struct with link information */
>         status = ice_aq_get_link_info(hw->port_info, false, NULL, NULL);
> --
> 2.26.2
>

If we are expecting the user to correct things then we should be
putting out a warning via dev_warn() rather than a debug message.
Otherwise the user is just going to have to come back through and turn
on debugging and reload the driver in order to figure out what is
going on. In my mind it should get the same treatment as an outdated
NVM image in ice_aq_ver_check().
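
i.e. something along these lines -- a sketch only, assuming the call in
the quoted hunk is ice_aq_get_phy_caps():

status = ice_aq_get_phy_caps(hw->port_info, false,
                             ICE_AQC_REPORT_TOPO_CAP, pcaps, NULL);
devm_kfree(ice_hw_to_dev(hw), pcaps);
if (status)
        dev_warn(ice_hw_to_dev(hw),
                 "Get PHY capabilities failed, status %d, continuing anyway\n",
                 status);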

^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [net-next v3 05/15] ice: create flow profile
  2020-11-13 23:56   ` Alexander Duyck
@ 2020-11-21  0:42     ` Nguyen, Anthony L
  2020-11-21  1:49       ` Alexander Duyck
  0 siblings, 1 reply; 28+ messages in thread
From: Nguyen, Anthony L @ 2020-11-21  0:42 UTC (permalink / raw)
  To: alexander.duyck
  Cc: Cao, Chinh T, davem, kuba, Behera, BrijeshX, Valiquette, Real,
	sassmann, netdev

On Fri, 2020-11-13 at 15:56 -0800, Alexander Duyck wrote:
> On Fri, Nov 13, 2020 at 1:46 PM Tony Nguyen <
> anthony.l.nguyen@intel.com> wrote:
> > 
> > From: Real Valiquette <real.valiquette@intel.com>
> > 
> > Implement the initial steps for creating an ACL filter to support
> > ntuple
> > masks. Create a flow profile based on a given mask rule and program
> > it to
> > the hardware. Though the profile is written to hardware, no actions
> > are
> > associated with the profile yet.
> > 
> > Co-developed-by: Chinh Cao <chinh.t.cao@intel.com>
> > Signed-off-by: Chinh Cao <chinh.t.cao@intel.com>
> > Signed-off-by: Real Valiquette <real.valiquette@intel.com>
> > Co-developed-by: Tony Nguyen <anthony.l.nguyen@intel.com>
> > Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
> > Tested-by: Brijesh Behera <brijeshx.behera@intel.com>
> 
> So I see two big issues with the patch.
> 
> First it looks like there is an anti-pattern of defensive NULL
> pointer
> checks throughout. Those can probably all go since all of the callers
> either use the pointer, or verify it is non-NULL before calling the
> function in question.

I'm removing those checks that you pointed out and some others as well.

> 
> In addition, the mask handling doesn't look right to me. It is calling
> out a partial mask as being the only time you need an ACL, and I would
> think it is any time you don't have a full mask for all ports/addresses,
> since a flow director rule normally pulls in the full 4 tuple based on
> ice_ntuple_set_input_set().

Commented below as well.

<snip>

> > +/**
> > + * ice_is_acl_filter - Checks if it's a FD or ACL filter
> > + * @fsp: pointer to ethtool Rx flow specification
> > + *
> > + * If any field of the provided filter is using a partial mask
> > then this is
> > + * an ACL filter.
> > + *
> 
> I'm not sure this logic is correct. Can the flow director rules
> handle
> a field that is removed? Last I knew it couldn't. If that is the case
> you should be using ACL for any case in which a full mask is not
> provided. So in your tests below you could probably drop the check
> for
> zero as I don't think that is a valid case in which flow director
> would work.
> 

I'm not sure what you meant by a field that is removed, but Flow
Director can handle reduced input sets. Flow Director is able to handle
0 mask, full mask, and less than 4 tuples. ACL is needed/used only when
a partial mask rule is requested.


> > + * Returns true if ACL filter otherwise false.
> > + */
> > +static bool ice_is_acl_filter(struct ethtool_rx_flow_spec *fsp)
> > +{
> > +       struct ethtool_tcpip4_spec *tcp_ip4_spec;
> > +       struct ethtool_usrip4_spec *usr_ip4_spec;
> > +
> > +       switch (fsp->flow_type & ~FLOW_EXT) {
> > +       case TCP_V4_FLOW:
> > +       case UDP_V4_FLOW:
> > +       case SCTP_V4_FLOW:
> > +               tcp_ip4_spec = &fsp->m_u.tcp_ip4_spec;
> > +
> > +               /* IP source address */
> > +               if (tcp_ip4_spec->ip4src &&
> > +                   tcp_ip4_spec->ip4src != htonl(0xFFFFFFFF))
> > +                       return true;
> > +
> > +               /* IP destination address */
> > +               if (tcp_ip4_spec->ip4dst &&
> > +                   tcp_ip4_spec->ip4dst != htonl(0xFFFFFFFF))
> > +                       return true;
> > +
> 
> Instead of testing this up here you could just skip the break and
> fall
> through since the source and destination IP addresses occupy the same
> spots on usr_ip4_spec and tcp_ip4_spec. You could probably also just
> use tcp_ip4_spec for the entire test.

Will make this change.

Thanks,
Tony

^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [net-next v3 08/15] ice: don't always return an error for Get PHY Abilities AQ command
  2020-11-14  1:25   ` Alexander Duyck
@ 2020-11-21  0:43     ` Nguyen, Anthony L
  0 siblings, 0 replies; 28+ messages in thread
From: Nguyen, Anthony L @ 2020-11-21  0:43 UTC (permalink / raw)
  To: alexander.duyck
  Cc: davem, kuba, Brown, Aaron F, netdev, Stillwell Jr, Paul M, sassmann

On Fri, 2020-11-13 at 17:25 -0800, Alexander Duyck wrote:
> On Fri, Nov 13, 2020 at 1:49 PM Tony Nguyen <
> anthony.l.nguyen@intel.com> wrote:
> > 
> > From: Paul M Stillwell Jr <paul.m.stillwell.jr@intel.com>
> > 
> > There are times when the driver shouldn't return an error when the
> > Get
> > PHY abilities AQ command (0x0600) returns an error. Instead the
> > driver
> > should log that the error occurred and continue on. This allows the
> > driver to load even though the AQ command failed. The user can then
> > later determine the reason for the failure and correct it.
> > 
> > Signed-off-by: Paul M Stillwell Jr <paul.m.stillwell.jr@intel.com>
> > Tested-by: Aaron Brown <aaron.f.brown@intel.com>
> > Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
> > ---
> >  drivers/net/ethernet/intel/ice/ice_common.c | 2 +-
> >  1 file changed, 1 insertion(+), 1 deletion(-)
> > 
> > diff --git a/drivers/net/ethernet/intel/ice/ice_common.c
> > b/drivers/net/ethernet/intel/ice/ice_common.c
> > index 7db5fd977367..3c600808d0da 100644
> > --- a/drivers/net/ethernet/intel/ice/ice_common.c
> > +++ b/drivers/net/ethernet/intel/ice/ice_common.c
> > @@ -925,7 +925,7 @@ enum ice_status ice_init_hw(struct ice_hw *hw)
> >                                      ICE_AQC_REPORT_TOPO_CAP,
> > pcaps, NULL);
> >         devm_kfree(ice_hw_to_dev(hw), pcaps);
> >         if (status)
> > -               goto err_unroll_sched;
> > +               ice_debug(hw, ICE_DBG_PHY, "Get PHY capabilities
> > failed, continuing anyway\n");
> > 
> >         /* Initialize port_info struct with link information */
> >         status = ice_aq_get_link_info(hw->port_info, false, NULL,
> > NULL);
> > --
> > 2.26.2
> > 
> 
> If we are expecting the user to correct things then we should be
> putting out a warning via dev_warn() rather than a debug message.
> Otherwise the user is just going to have to come back through and
> turn
> on debugging and reload the driver in order to figure out what is
> going on. In my mind it should get the same treatment as an outdated
> NVM image in ice_aq_ver_check().

Will change this to a dev_warn().

Thanks,
Tony

^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [net-next v3 05/15] ice: create flow profile
  2020-11-21  0:42     ` Nguyen, Anthony L
@ 2020-11-21  1:49       ` Alexander Duyck
  2020-11-23 23:21         ` Jesse Brandeburg
  0 siblings, 1 reply; 28+ messages in thread
From: Alexander Duyck @ 2020-11-21  1:49 UTC (permalink / raw)
  To: Nguyen, Anthony L
  Cc: Cao, Chinh T, davem, kuba, Behera, BrijeshX, Valiquette, Real,
	sassmann, netdev

On Fri, Nov 20, 2020 at 4:42 PM Nguyen, Anthony L
<anthony.l.nguyen@intel.com> wrote:
>
> On Fri, 2020-11-13 at 15:56 -0800, Alexander Duyck wrote:
> > On Fri, Nov 13, 2020 at 1:46 PM Tony Nguyen <
> > anthony.l.nguyen@intel.com> wrote:
> > >
> > > From: Real Valiquette <real.valiquette@intel.com>
> > >
> > > Implement the initial steps for creating an ACL filter to support
> > > ntuple
> > > masks. Create a flow profile based on a given mask rule and program
> > > it to
> > > the hardware. Though the profile is written to hardware, no actions
> > > are
> > > associated with the profile yet.
> > >
> > > Co-developed-by: Chinh Cao <chinh.t.cao@intel.com>
> > > Signed-off-by: Chinh Cao <chinh.t.cao@intel.com>
> > > Signed-off-by: Real Valiquette <real.valiquette@intel.com>
> > > Co-developed-by: Tony Nguyen <anthony.l.nguyen@intel.com>
> > > Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
> > > Tested-by: Brijesh Behera <brijeshx.behera@intel.com>
> >
> > So I see two big issues with the patch.
> >
> > First it looks like there is an anti-pattern of defensive NULL
> > pointer
> > checks throughout. Those can probably all go since all of the callers
> > either use the pointer, or verify it is non-NULL before calling the
> > function in question.
>
> I'm removing those checks that you pointed out and some others as well.
>
> >
> > In addition, the mask handling doesn't look right to me. It is calling
> > out a partial mask as being the only time you need an ACL, and I would
> > think it is any time you don't have a full mask for all ports/addresses,
> > since a flow director rule normally pulls in the full 4 tuple based on
> > ice_ntuple_set_input_set().
>
> Commented below as well.
>
> <snip>
>
> > > +/**
> > > + * ice_is_acl_filter - Checks if it's a FD or ACL filter
> > > + * @fsp: pointer to ethtool Rx flow specification
> > > + *
> > > + * If any field of the provided filter is using a partial mask
> > > then this is
> > > + * an ACL filter.
> > > + *
> >
> > I'm not sure this logic is correct. Can the flow director rules
> > handle
> > a field that is removed? Last I knew it couldn't. If that is the case
> > you should be using ACL for any case in which a full mask is not
> > provided. So in your tests below you could probably drop the check
> > for
> > zero as I don't think that is a valid case in which flow director
> > would work.
> >
>
> I'm not sure what you meant by a field that is removed, but Flow
> Director can handle reduced input sets. Flow Director is able to handle
> 0 mask, full mask, and less than 4 tuples. ACL is needed/used only when
> a partial mask rule is requested.

So, historically speaking, with flow director you are only allowed one
mask, because the mask determines the inputs used to generate the hash
that identifies the flow; changing those inputs per rule would break
the hash mapping, which is why the single mask applies to all flows.

Normally this ends up meaning that you have to do what we did in ixgbe:
disable ATR and only allow one mask for all inputs. I believe for i40e
they required that you always use a full 4 tuple, and I didn't see
something like that here. As such you may want to double check that you
can have a mix of flow director rules using 1, 2, 3, and 4 tuples, as
last I knew you couldn't. Basically, if a field was included it had to
be included for all the rules on the port or device, depending on how
the tables are set up.
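
For example, it would be worth verifying that rules like these can
coexist on the same port (made-up interface and addresses):

# full 4-tuple rule
ethtool -N eth0 flow-type tcp4 src-ip 10.0.0.1 dst-ip 10.0.0.2 \
        src-port 1024 dst-port 80 action 2

# 2-tuple rule alongside it; last I knew this mix wasn't allowed
ethtool -N eth0 flow-type tcp4 dst-ip 10.0.0.2 dst-port 80 action 3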

Thanks.

- Alex

^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [net-next v3 05/15] ice: create flow profile
  2020-11-21  1:49       ` Alexander Duyck
@ 2020-11-23 23:21         ` Jesse Brandeburg
  2020-11-24  1:11           ` Alexander Duyck
  0 siblings, 1 reply; 28+ messages in thread
From: Jesse Brandeburg @ 2020-11-23 23:21 UTC (permalink / raw)
  To: Alexander Duyck
  Cc: Nguyen, Anthony L, Cao, Chinh T, davem, kuba, Behera, BrijeshX,
	Valiquette, Real, sassmann, netdev

Alexander Duyck wrote:

> > > I'm not sure this logic is correct. Can the flow director rules
> > > handle
> > > a field that is removed? Last I knew it couldn't. If that is the case
> > > you should be using ACL for any case in which a full mask is not
> > > provided. So in your tests below you could probably drop the check
> > > for
> > > zero as I don't think that is a valid case in which flow director
> > > would work.
> > >
> >
> > I'm not sure what you meant by a field that is removed, but Flow
> > Director can handle reduced input sets. Flow Director is able to handle
> > 0 mask, full mask, and less than 4 tuples. ACL is needed/used only when
> > a partial mask rule is requested.
> 
> So, historically speaking, with flow director you are only allowed one
> mask, because the mask determines the inputs used to generate the hash
> that identifies the flow; changing those inputs per rule would break
> the hash mapping, which is why the single mask applies to all flows.
> 
> Normally this ends up meaning that you have to do what we did in ixgbe:
> disable ATR and only allow one mask for all inputs. I believe for i40e
> they required that you always use a full 4 tuple, and I didn't see
> something like that here. As such you may want to double check that you
> can have a mix of flow director rules using 1, 2, 3, and 4 tuples, as
> last I knew you couldn't. Basically, if a field was included it had to
> be included for all the rules on the port or device, depending on how
> the tables are set up.

The ice driver hardware is quite a bit more capable than the ixgbe or
i40e hardware, and uses a limited set of ACL rules to support different
sets of masks. We have some limits on the number of masks and the
number of fields that we can simultaneously support, but I think
that is pretty normal for limited hardware resources.

Let's just say that if the code doesn't work on an E810 card then we
messed up and we'll have to fix it. :-)

Thanks for the review! Hope this helps...

^ permalink raw reply	[flat|nested] 28+ messages in thread

* Re: [net-next v3 05/15] ice: create flow profile
  2020-11-23 23:21         ` Jesse Brandeburg
@ 2020-11-24  1:11           ` Alexander Duyck
  2020-12-08 16:58             ` Nguyen, Anthony L
  0 siblings, 1 reply; 28+ messages in thread
From: Alexander Duyck @ 2020-11-24  1:11 UTC (permalink / raw)
  To: Jesse Brandeburg
  Cc: Nguyen, Anthony L, Cao, Chinh T, davem, kuba, Behera, BrijeshX,
	Valiquette, Real, sassmann, netdev

On Mon, Nov 23, 2020 at 3:21 PM Jesse Brandeburg
<jesse.brandeburg@intel.com> wrote:
>
> Alexander Duyck wrote:
>
> > > > I'm not sure this logic is correct. Can the flow director rules
> > > > handle
> > > > a field that is removed? Last I knew it couldn't. If that is the case
> > > > you should be using ACL for any case in which a full mask is not
> > > > provided. So in your tests below you could probably drop the check
> > > > for
> > > > zero as I don't think that is a valid case in which flow director
> > > > would work.
> > > >
> > >
> > > I'm not sure what you meant by a field that is removed, but Flow
> > > Director can handle reduced input sets. Flow Director is able to handle
> > > 0 mask, full mask, and less than 4 tuples. ACL is needed/used only when
> > > a partial mask rule is requested.
> >
> > So historically speaking with flow director you are only allowed one
> > mask because it determines the inputs used to generate the hash that
> > identifies the flow. So you are only allowed one mask for all flows
> > because changing those inputs would break the hash mapping.
> >
> > Normally this ends up meaning that you have to do like what we did in
> > ixgbe and disable ATR and only allow one mask for all inputs. I
> > believe for i40e they required that you always use a full 4 tuple. I
> > didn't see something like that here. As such you may want to double
> > check that you can have a mix of flow director rules that are using 1
> > tuple, 2 tuples, 3 tuples, and 4 tuples as last I knew you couldn't.
> > Basically if you had fields included they had to be included for all
> > the rules on the port or device depending on how the tables are set
> > up.
>
> The ice driver hardware is quite a bit more capable than the ixgbe or
> i40e hardware, and uses a limited set of ACL rules to support different
> sets of masks. We have some limits on the number of masks and the
> number of fields that we can simultaneously support, but I think
> that is pretty normal for limited hardware resources.
>
> Let's just say that if the code doesn't work on an E810 card then we
> messed up and we'll have to fix it. :-)
>
> Thanks for the review! Hope this helps...

I gather all that. The issue was the code in ice_is_acl_filter().
Basically, if a field is dropped completely (zero mask), that will
not trigger the rule to be considered an ACL rule.

So for example I could define 4 rules, one that ignores the IPv4
source, one that ignores the IPv4 destination, one that ignores the
TCP source port, and one that ignores the TCP destination port. With
the current code all 4 of those rules would be considered to be
non-ACL rules because the mask is 0 and not partial. If I do the same
thing and ignore all but one bit then they are all ACL rules. In
addition I don't see anything telling flow director it can ignore
certain inputs over verifying the mask so I am assuming that the
previously mentioned rules that drop entire fields would likely not
work with Flow Director.

Anyway, I just wanted to point that out, as it will be an issue going
forward, and it seems easy to fix by simply rejecting rules where the
required flow director fields are not entirely masked in.
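
To make that concrete, here is a minimal standalone sketch of the
classification I have in mind (hypothetical names, not the actual
ice_is_acl_filter(); "mask" here is in the bits-that-must-match
sense):

#include <stdint.h>
#include <stdio.h>

enum rule_class { RULE_FD, RULE_ACL, RULE_REJECT };

/*
 * Classify one 16-bit field (e.g. a TCP port) by its match mask:
 * all bits set  -> exact match, fine for Flow Director
 * no bits set   -> field dropped entirely; reject it rather than
 *                  silently treating it as a Flow Director rule
 * anything else -> partial mask, needs ACL
 */
static enum rule_class classify_field(uint16_t match_mask)
{
	if (match_mask == 0xFFFF)
		return RULE_FD;
	if (match_mask == 0x0000)
		return RULE_REJECT;
	return RULE_ACL;
}

int main(void)
{
	const char *names[] = { "FD", "ACL", "REJECT" };
	uint16_t masks[] = { 0xFFFF, 0x0000, 0xFFFE };
	int i;

	for (i = 0; i < 3; i++)
		printf("mask 0x%04x -> %s\n", (unsigned)masks[i],
		       names[classify_field(masks[i])]);
	return 0;
}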

- Alex

* Re: [net-next v3 05/15] ice: create flow profile
  2020-11-24  1:11           ` Alexander Duyck
@ 2020-12-08 16:58             ` Nguyen, Anthony L
  2020-12-08 19:00               ` Alexander Duyck
  0 siblings, 1 reply; 28+ messages in thread
From: Nguyen, Anthony L @ 2020-12-08 16:58 UTC (permalink / raw)
  To: Brandeburg, Jesse, alexander.duyck
  Cc: Cao, Chinh T, davem, kuba, Behera, BrijeshX, Valiquette, Real,
	sassmann, netdev

On Mon, 2020-11-23 at 17:11 -0800, Alexander Duyck wrote:
> On Mon, Nov 23, 2020 at 3:21 PM Jesse Brandeburg
> <jesse.brandeburg@intel.com> wrote:
> > 
> > Alexander Duyck wrote:
> > 
> > > > > I'm not sure this logic is correct. Can the flow director
> > > > > rules
> > > > > handle
> > > > > a field that is removed? Last I knew it couldn't. If that is
> > > > > the case
> > > > > you should be using ACL for any case in which a full mask is
> > > > > not
> > > > > provided. So in your tests below you could probably drop the
> > > > > check
> > > > > for
> > > > > zero as I don't think that is a valid case in which flow
> > > > > director
> > > > > would work.
> > > > > 
> > > > 
> > > > I'm not sure what you meant by a field that is removed, but
> > > > Flow
> > > > Director can handle reduced input sets. Flow Director is able
> > > > to handle
> > > > 0 mask, full mask, and less than 4 tuples. ACL is needed/used
> > > > only when
> > > > a partial mask rule is requested.
> > > 
> > > So historically speaking with flow director you are only allowed
> > > one
> > > mask because it determines the inputs used to generate the hash
> > > that
> > > identifies the flow. So you are only allowed one mask for all
> > > flows
> > > because changing those inputs would break the hash mapping.
> > > 
> > > Normally this ends up meaning that you have to do like what we
> > > did in
> > > ixgbe and disable ATR and only allow one mask for all inputs. I
> > > believe for i40e they required that you always use a full 4
> > > tuple. I
> > > didn't see something like that here. As such you may want to
> > > double
> > > check that you can have a mix of flow director rules that are
> > > using 1
> > > tuple, 2 tuples, 3 tuples, and 4 tuples as last I knew you
> > > couldn't.
> > > Basically if you had fields included they had to be included for
> > > all
> > > the rules on the port or device depending on how the tables are
> > > set
> > > up.
> > 
> > The ice driver hardware is quite a bit more capable than the ixgbe
> > or
> > i40e hardware, and uses a limited set of ACL rules to support
> > different
> > sets of masks. We have some limits on the number of masks and the
> > number of fields that we can simultaneously support, but I think
> > that is pretty normal for limited hardware resources.
> > 
> > Let's just say that if the code doesn't work on an E810 card then
> > we
> > messed up and we'll have to fix it. :-)
> > 
> > Thanks for the review! Hope this helps...
> 
> I gather all that. The issue was the code in ice_is_acl_filter().
> Basically, if a field is dropped completely (zero mask), that will
> not trigger the rule to be considered an ACL rule.
> 
> So for example I could define 4 rules, one that ignores the IPv4
> source, one that ignores the IPv4 destination, one that ignores the
> TCP source port, and one that ignores the TCP destination port.

We have the limitation that you can use one input set at a time so any
of these rules could be created but they couldn't exist concurrently.

> With
> the current code all 4 of those rules would be considered to be
> non-ACL rules because the mask is 0 and not partial.

Correct. I did this to test Flow Director:

'ethtool -N ens801f0 flow-type tcp4 src-ip 192.168.0.10 dst-ip
192.168.0.20 src-port 8500 action 10' and sent traffic matching this.
Traffic correctly went to queue 10.

> If I do the same
> thing and ignore all but one bit then they are all ACL rules.

Also correct. I did as follows:

'ethtool -N ens801f0 flow-type tcp4 src-ip 192.168.0.10 dst-ip
192.168.0.20 src-port 9000 m 0x1 action 15'

Sending traffic to port 9000 and 9001, traffic went to queue 15
Sending traffic to port 8000 and 9002, traffic went to other queues
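
For reference, ethtool's ntuple "m" marks don't-care bits, which is
why both 9000 and 9001 hit the rule above. A tiny standalone sketch
of that arithmetic (illustrative only, not driver code):

#include <stdint.h>
#include <stdio.h>

/* A rule on port P with don't-care mask M matches any port Q where
 * (Q & ~M) == (P & ~M). */
static int rule_matches(uint16_t rule_port, uint16_t m, uint16_t pkt_port)
{
	return (pkt_port & ~m) == (rule_port & ~m);
}

int main(void)
{
	printf("%d\n", rule_matches(9000, 0x1, 9000)); /* 1 -> queue 15 */
	printf("%d\n", rule_matches(9000, 0x1, 9001)); /* 1 -> queue 15 */
	printf("%d\n", rule_matches(9000, 0x1, 9002)); /* 0 */
	printf("%d\n", rule_matches(9000, 0x1, 8000)); /* 0 */
	return 0;
}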

Thanks,
Tony

> In
> addition I don't see anything telling flow director it can ignore
> certain inputs over verifying the mask so I am assuming that the
> previously mentioned rules that drop entire fields would likely not
> work with Flow Director.
> 
> Anyway, I just wanted to point that out, as it will be an issue
> going forward, and it seems easy to fix by simply rejecting rules
> where the required flow director fields are not entirely masked in.
> 
> - Alex

* Re: [net-next v3 05/15] ice: create flow profile
  2020-12-08 16:58             ` Nguyen, Anthony L
@ 2020-12-08 19:00               ` Alexander Duyck
  2020-12-08 22:01                 ` Nguyen, Anthony L
  0 siblings, 1 reply; 28+ messages in thread
From: Alexander Duyck @ 2020-12-08 19:00 UTC (permalink / raw)
  To: Nguyen, Anthony L
  Cc: Brandeburg, Jesse, Cao, Chinh T, davem, kuba, Behera, BrijeshX,
	Valiquette, Real, sassmann, netdev

On Tue, Dec 8, 2020 at 8:58 AM Nguyen, Anthony L
<anthony.l.nguyen@intel.com> wrote:
>
> On Mon, 2020-11-23 at 17:11 -0800, Alexander Duyck wrote:
> > On Mon, Nov 23, 2020 at 3:21 PM Jesse Brandeburg
> > <jesse.brandeburg@intel.com> wrote:
> > >
> > > Alexander Duyck wrote:
> > >
> > > > > > I'm not sure this logic is correct. Can the flow director
> > > > > > rules
> > > > > > handle
> > > > > > a field that is removed? Last I knew it couldn't. If that is
> > > > > > the case
> > > > > > you should be using ACL for any case in which a full mask is
> > > > > > not
> > > > > > provided. So in your tests below you could probably drop the
> > > > > > check
> > > > > > for
> > > > > > zero as I don't think that is a valid case in which flow
> > > > > > director
> > > > > > would work.
> > > > > >
> > > > >
> > > > > I'm not sure what you meant by a field that is removed, but
> > > > > Flow
> > > > > Director can handle reduced input sets. Flow Director is able
> > > > > to handle
> > > > > 0 mask, full mask, and less than 4 tuples. ACL is needed/used
> > > > > only when
> > > > > a partial mask rule is requested.
> > > >
> > > > So historically speaking with flow director you are only allowed
> > > > one
> > > > mask because it determines the inputs used to generate the hash
> > > > that
> > > > identifies the flow. So you are only allowed one mask for all
> > > > flows
> > > > because changing those inputs would break the hash mapping.
> > > >
> > > > Normally this ends up meaning that you have to do like what we
> > > > did in
> > > > ixgbe and disable ATR and only allow one mask for all inputs. I
> > > > believe for i40e they required that you always use a full 4
> > > > tuple. I
> > > > didn't see something like that here. As such you may want to
> > > > double
> > > > check that you can have a mix of flow director rules that are
> > > > using 1
> > > > tuple, 2 tuples, 3 tuples, and 4 tuples as last I knew you
> > > > couldn't.
> > > > Basically if you had fields included they had to be included for
> > > > all
> > > > the rules on the port or device depending on how the tables are
> > > > set
> > > > up.
> > >
> > > The ice driver hardware is quite a bit more capable than the ixgbe
> > > or
> > > i40e hardware, and uses a limited set of ACL rules to support
> > > different
> > > sets of masks. We have some limits on the number of masks and the
> > > number of fields that we can simultaneously support, but I think
> > > that is pretty normal for limited hardware resources.
> > >
> > > Let's just say that if the code doesn't work on an E810 card then
> > > we
> > > messed up and we'll have to fix it. :-)
> > >
> > > Thanks for the review! Hope this helps...
> >
> > I gather all that. The issue was the code in ice_is_acl_filter().
> > Basically, if a field is dropped completely (zero mask), that will
> > not trigger the rule to be considered an ACL rule.
> >
> > So for example I could define 4 rules, one that ignores the IPv4
> > source, one that ignores the IPv4 destination, one that ignores the
> > TCP source port, and one that ignores the TCP destination port.
>
> We have the limitation that you can use one input set at a time so any
> of these rules could be created but they couldn't exist concurrently.

No, I get that. The question I have is what happens if you try to
add a second input set. With ixgbe we triggered an error for trying
to change input sets. I'm wondering if you trigger an error on adding
a different input set or if you just invalidate the existing rules.

> > With
> > the current code all 4 of those rules would be considered to be
> > non-ACL rules because the mask is 0 and not partial.
>
> Correct. I did this to test Flow Director:
>
> 'ethtool -N ens801f0 flow-type tcp4 src-ip 192.168.0.10 dst-ip
> 192.168.0.20 src-port 8500 action 10' and sent traffic matching this.
> Traffic correctly went to queue 10.

So a better question here is what happens if you do a rule with
src-port 8500, and a second rule with dst-port 8500? Does the second
rule fail or does it invalidate the first. If it invalidates the first
then that would be a bug.

> > If I do the same
> > thing and ignore all but one bit then they are all ACL rules.
>
> Also correct. I did as follows:
>
> 'ethtool -N ens801f0 flow-type tcp4 src-ip 192.168.0.10 dst-ip
> 192.168.0.20 src-port 9000 m 0x1 action 15'
>
> Sending traffic to port 9000 and 9001, traffic went to queue 15
> Sending traffic to port 8000 and 9002, traffic went to other queues

The test here is to set up two rules and verify each of them and one
case that fails both. Same thing for the test above. Basically we
should be able to program multiple ACL rules with different masks and
that shouldn't be an issue up to some limit I would imagine. Same
thing for flow director rules. After the first you should not be able
to provide a flow director rule with a different input mask.

- Alex

* Re: [net-next v3 05/15] ice: create flow profile
  2020-12-08 19:00               ` Alexander Duyck
@ 2020-12-08 22:01                 ` Nguyen, Anthony L
  2020-12-08 22:22                   ` Alexander Duyck
  0 siblings, 1 reply; 28+ messages in thread
From: Nguyen, Anthony L @ 2020-12-08 22:01 UTC (permalink / raw)
  To: alexander.duyck
  Cc: Brandeburg, Jesse, sassmann, Cao, Chinh T, kuba, netdev,
	Valiquette, Real, Behera, BrijeshX, davem

On Tue, 2020-12-08 at 11:00 -0800, Alexander Duyck wrote:
> On Tue, Dec 8, 2020 at 8:58 AM Nguyen, Anthony L
> <anthony.l.nguyen@intel.com> wrote:
> > 
> > On Mon, 2020-11-23 at 17:11 -0800, Alexander Duyck wrote:
> > > On Mon, Nov 23, 2020 at 3:21 PM Jesse Brandeburg
> > > <jesse.brandeburg@intel.com> wrote:
> > > > 
> > > > Alexander Duyck wrote:
> > > > 
> > > > > > > I'm not sure this logic is correct. Can the flow director
> > > > > > > rules
> > > > > > > handle
> > > > > > > a field that is removed? Last I knew it couldn't. If that
> > > > > > > is
> > > > > > > the case
> > > > > > > you should be using ACL for any case in which a full mask
> > > > > > > is
> > > > > > > not
> > > > > > > provided. So in your tests below you could probably drop
> > > > > > > the
> > > > > > > check
> > > > > > > for
> > > > > > > zero as I don't think that is a valid case in which flow
> > > > > > > director
> > > > > > > would work.
> > > > > > > 
> > > > > > 
> > > > > > I'm not sure what you meant by a field that is removed, but
> > > > > > Flow
> > > > > > Director can handle reduced input sets. Flow Director is
> > > > > > able
> > > > > > to handle
> > > > > > 0 mask, full mask, and less than 4 tuples. ACL is
> > > > > > needed/used
> > > > > > only when
> > > > > > a partial mask rule is requested.
> > > > > 
> > > > > So historically speaking with flow director you are only
> > > > > allowed
> > > > > one
> > > > > mask because it determines the inputs used to generate the
> > > > > hash
> > > > > that
> > > > > identifies the flow. So you are only allowed one mask for all
> > > > > flows
> > > > > because changing those inputs would break the hash mapping.
> > > > > 
> > > > > Normally this ends up meaning that you have to do like what
> > > > > we
> > > > > did in
> > > > > ixgbe and disable ATR and only allow one mask for all inputs.
> > > > > I
> > > > > believe for i40e they required that you always use a full 4
> > > > > tuple. I
> > > > > didn't see something like that here. As such you may want to
> > > > > double
> > > > > check that you can have a mix of flow director rules that are
> > > > > using 1
> > > > > tuple, 2 tuples, 3 tuples, and 4 tuples as last I knew you
> > > > > couldn't.
> > > > > Basically if you had fields included they had to be included
> > > > > for
> > > > > all
> > > > > the rules on the port or device depending on how the tables
> > > > > are
> > > > > set
> > > > > up.
> > > > 
> > > > The ice driver hardware is quite a bit more capable than the
> > > > ixgbe
> > > > or
> > > > i40e hardware, and uses a limited set of ACL rules to support
> > > > different
> > > > sets of masks. We have some limits on the number of masks and
> > > > the
> > > > number of fields that we can simultaneously support, but I
> > > > think
> > > > that is pretty normal for limited hardware resources.
> > > > 
> > > > Let's just say that if the code doesn't work on an E810 card
> > > > then
> > > > we
> > > > messed up and we'll have to fix it. :-)
> > > > 
> > > > Thanks for the review! Hope this helps...
> > > 
> > > I gather all that. The issue was the code in ice_is_acl_filter().
> > > Basically, if a field is dropped completely (zero mask), that
> > > will not trigger the rule to be considered an ACL rule.
> > > 
> > > So for example I could define 4 rules, one that ignores the IPv4
> > > source, one that ignores the IPv4 destination, one that ignores
> > > the
> > > TCP source port, and one that ignores the TCP destination port.
> > 
> > We have the limitation that you can use one input set at a time so
> > any
> > of these rules could be created but they couldn't exist
> > concurrently.
> 
> No, I get that. The question I have is what happens if you try to
> add a second input set. With ixgbe we triggered an error for trying
> to change input sets. I'm wondering if you trigger an error on adding
> a different input set or if you just invalidate the existing rules.
> 
> > > With
> > > the current code all 4 of those rules would be considered to be
> > > non-ACL rules because the mask is 0 and not partial.
> > 
> > Correct. I did this to test Flow Director:
> > 
> > 'ethtool -N ens801f0 flow-type tcp4 src-ip 192.168.0.10 dst-ip
> > 192.168.0.20 src-port 8500 action 10' and sent traffic matching
> > this.
> > Traffic correctly went to queue 10.
> 
> So a better question here is what happens if you do a rule with
> src-port 8500, and a second rule with dst-port 8500? Does the second
> rule fail or does it invalidate the first. If it invalidates the
> first
> then that would be a bug.

The second rule fails and a message is output to dmesg.

ethtool -N ens801f0 flow-type tcp4 src-ip 192.168.0.10 dst-ip
192.168.0.20 dst-port 8500 action 10
rmgr: Cannot insert RX class rule: Operation not supported

dmesg:
ice 0000:81:00.0: Failed to add filter.  Flow director filters on each
port must have the same input set.

> > > If I do the same
> > > thing and ignore all but one bit then they are all ACL rules.
> > 
> > Also correct. I did as follows:
> > 
> > 'ethtool -N ens801f0 flow-type tcp4 src-ip 192.168.0.10 dst-ip
> > 192.168.0.20 src-port 9000 m 0x1 action 15'
> > 
> > Sending traffic to port 9000 and 9001, traffic went to queue 15
> > Sending traffic to port 8000 and 9002, traffic went to other
> > queues
> 
> The test here is to set up two rules and verify each of them and one
> case that fails both. Same thing for the test above. Basically we
> should be able to program multiple ACL rules with different masks and
> that shouldn't be an issue up to some limit I would imagine. Same
> thing for flow director rules. After the first you should not be able
> to provide a flow director rule with a different input mask.

I did this:

ethtool -N ens801f0 flow-type tcp4 src-ip 192.168.0.10 dst-ip
192.168.0.20 src-port 9000 m 0x1 action 15
ethtool -N ens801f0 flow-type tcp4 src-ip 192.168.0.10 dst-ip
192.168.0.20 src-port 8000 m 0x2 action 20

Sending traffic to port 9000 and 9001 goes to queue 15
Sending traffic to port 8000 and 8002 goes to queue 20
Sending traffic to port 8001 and 8500 goes to neither of the queues

Thanks,
Tony


* Re: [net-next v3 05/15] ice: create flow profile
  2020-12-08 22:01                 ` Nguyen, Anthony L
@ 2020-12-08 22:22                   ` Alexander Duyck
  2020-12-09 18:23                     ` Nguyen, Anthony L
  0 siblings, 1 reply; 28+ messages in thread
From: Alexander Duyck @ 2020-12-08 22:22 UTC (permalink / raw)
  To: Nguyen, Anthony L
  Cc: Brandeburg, Jesse, sassmann, Cao, Chinh T, kuba, netdev,
	Valiquette, Real, Behera, BrijeshX, davem

On Tue, Dec 8, 2020 at 2:01 PM Nguyen, Anthony L
<anthony.l.nguyen@intel.com> wrote:
>
> On Tue, 2020-12-08 at 11:00 -0800, Alexander Duyck wrote:
> > On Tue, Dec 8, 2020 at 8:58 AM Nguyen, Anthony L
> > <anthony.l.nguyen@intel.com> wrote:
> > >
> > > On Mon, 2020-11-23 at 17:11 -0800, Alexander Duyck wrote:
> > > > On Mon, Nov 23, 2020 at 3:21 PM Jesse Brandeburg
> > > > <jesse.brandeburg@intel.com> wrote:
> > > > >
> > > > > Alexander Duyck wrote:
> > > > >
> > > > > > > > I'm not sure this logic is correct. Can the flow director
> > > > > > > > rules
> > > > > > > > handle
> > > > > > > > a field that is removed? Last I knew it couldn't. If that
> > > > > > > > is
> > > > > > > > the case
> > > > > > > > you should be using ACL for any case in which a full mask
> > > > > > > > is
> > > > > > > > not
> > > > > > > > provided. So in your tests below you could probably drop
> > > > > > > > the
> > > > > > > > check
> > > > > > > > for
> > > > > > > > zero as I don't think that is a valid case in which flow
> > > > > > > > director
> > > > > > > > would work.
> > > > > > > >
> > > > > > >
> > > > > > > I'm not sure what you meant by a field that is removed, but
> > > > > > > Flow
> > > > > > > Director can handle reduced input sets. Flow Director is
> > > > > > > able
> > > > > > > to handle
> > > > > > > 0 mask, full mask, and less than 4 tuples. ACL is
> > > > > > > needed/used
> > > > > > > only when
> > > > > > > a partial mask rule is requested.
> > > > > >
> > > > > > So historically speaking with flow director you are only
> > > > > > allowed
> > > > > > one
> > > > > > mask because it determines the inputs used to generate the
> > > > > > hash
> > > > > > that
> > > > > > identifies the flow. So you are only allowed one mask for all
> > > > > > flows
> > > > > > because changing those inputs would break the hash mapping.
> > > > > >
> > > > > > Normally this ends up meaning that you have to do like what
> > > > > > we
> > > > > > did in
> > > > > > ixgbe and disable ATR and only allow one mask for all inputs.
> > > > > > I
> > > > > > believe for i40e they required that you always use a full 4
> > > > > > tuple. I
> > > > > > didn't see something like that here. As such you may want to
> > > > > > double
> > > > > > check that you can have a mix of flow director rules that are
> > > > > > using 1
> > > > > > tuple, 2 tuples, 3 tuples, and 4 tuples as last I knew you
> > > > > > couldn't.
> > > > > > Basically if you had fields included they had to be included
> > > > > > for
> > > > > > all
> > > > > > the rules on the port or device depending on how the tables
> > > > > > are
> > > > > > set
> > > > > > up.
> > > > >
> > > > > The ice driver hardware is quite a bit more capable than the
> > > > > ixgbe
> > > > > or
> > > > > i40e hardware, and uses a limited set of ACL rules to support
> > > > > different
> > > > > sets of masks. We have some limits on the number of masks and
> > > > > the
> > > > > number of fields that we can simultaneously support, but I
> > > > > think
> > > > > that is pretty normal for limited hardware resources.
> > > > >
> > > > > Let's just say that if the code doesn't work on an E810 card
> > > > > then
> > > > > we
> > > > > messed up and we'll have to fix it. :-)
> > > > >
> > > > > Thanks for the review! Hope this helps...
> > > >
> > > > I gather all that. The issue was the code in ice_is_acl_filter().
> > > > Basically, if a field is dropped completely (zero mask), that
> > > > will not trigger the rule to be considered an ACL rule.
> > > >
> > > > So for example I could define 4 rules, one that ignores the IPv4
> > > > source, one that ignores the IPv4 destination, one that ignores
> > > > the
> > > > TCP source port, and one that ignores the TCP destination port.
> > >
> > > We have the limitation that you can use one input set at a time so
> > > any
> > > of these rules could be created but they couldn't exist
> > > concurrently.
> >
> > No, I get that. The question I have is what happens if you try to
> > add a second input set. With ixgbe we triggered an error for trying
> > to change input sets. I'm wondering if you trigger an error on adding
> > a different input set or if you just invalidate the existing rules.
> >
> > > > With
> > > > the current code all 4 of those rules would be considered to be
> > > > non-ACL rules because the mask is 0 and not partial.
> > >
> > > Correct. I did this to test Flow Director:
> > >
> > > 'ethtool -N ens801f0 flow-type tcp4 src-ip 192.168.0.10 dst-ip
> > > 192.168.0.20 src-port 8500 action 10' and sent traffic matching
> > > this.
> > > Traffic correctly went to queue 10.
> >
> > So a better question here is what happens if you do a rule with
> > src-port 8500, and a second rule with dst-port 8500? Does the second
> > rule fail or does it invalidate the first. If it invalidates the
> > first
> > then that would be a bug.
>
> The second rule fails and a message is output to dmesg.
>
> ethtool -N ens801f0 flow-type tcp4 src-ip 192.168.0.10 dst-ip
> 192.168.0.20 dst-port 8500 action 10
> rmgr: Cannot insert RX class rule: Operation not supported

Ugh. I really don't like the choice of EOPNOTSUPP as the return value
for the conflicting-mask case. It really should have been something
like EBUSY or EINVAL, since you are trying to overwrite an
already-written mask in order to change the field configuration.
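
Roughly, what I would expect is something like the following (made-up
state and names, just a sketch of the error-code choice, not the
driver's actual code):

#include <errno.h>
#include <stdint.h>
#include <stdio.h>

struct fd_state {
	uint32_t input_set;	/* 0 == nothing programmed yet */
};

static int fd_add_rule(struct fd_state *st, uint32_t input_set)
{
	/* A conflicting input set is a "busy" condition, not an
	 * unsupported operation. */
	if (st->input_set && st->input_set != input_set)
		return -EBUSY;
	st->input_set = input_set;
	return 0;
}

int main(void)
{
	struct fd_state st = { 0 };

	printf("first rule:  %d\n", fd_add_rule(&st, 0x0F)); /* 0 */
	printf("second rule: %d\n", fd_add_rule(&st, 0x0B)); /* -EBUSY */
	return 0;
}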

> dmesg:
> ice 0000:81:00.0: Failed to add filter.  Flow director filters on each
> port must have the same input set.

Okay, so this is the behavior you see with Flow Director. If you don't
apply a partial mask it fails to add the second rule.

> > > > If I do the same
> > > > thing and ignore all but one bit then they are all ACL rules.
> > >
> > > Also correct. I did as follows:
> > >
> > > 'ethtool -N ens801f0 flow-type tcp4 src-ip 192.168.0.10 dst-ip
> > > 192.168.0.20 src-port 9000 m 0x1 action 15'
> > >
> > > Sending traffic to port 9000 and 9001, traffic went to queue 15
> > > Sending traffic to port 8000 and 9002, traffic went to other
> > > queues
> >
> > The test here is to set up two rules and verify each of them and one
> > case that fails both. Same thing for the test above. Basically we
> > should be able to program multiple ACL rules with different masks and
> > that shouldn't be an issue up to some limit I would imagine. Same
> > thing for flow director rules. After the first you should not be able
> > to provide a flow director rule with a different input mask.
>
> I did this:
>
> ethtool -N ens801f0 flow-type tcp4 src-ip 192.168.0.10 dst-ip
> 192.168.0.20 src-port 9000 m 0x1 action 15
> ethtool -N ens801f0 flow-type tcp4 src-ip 192.168.0.10 dst-ip
> 192.168.0.20 src-port 8000 m 0x2 action 20
>
> Sending traffic to port 9000 and 9001 goes to queue 15
> Sending traffic to port 8000 and 8002 goes to queue 20
> Sending traffic to port 8001 and 8500 goes to neither of the queues

Doing the same thing with a mask works. I could add src-port with a
mask in one rule, and I could add dst-port with a mask in another. Can
you see the inconsistency here?

I would argue that you need some logic that checks whether a rule is
going to hit the input set issue and, if so, falls back to applying
it as an ACL rule. Otherwise you are significantly hampering the
usefulness of this filter type. It doesn't make sense that dropping a
field entirely causes a rule to fail to be added while masking a
single bit in that same field makes it valid. From the user's point
of view the rules come across as arbitrary, which would make this a
nightmare to use.
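
As a rough sketch of the kind of fallback I have in mind (every name
and limit below is made up for illustration, and the real driver
would have to track this per port):

#include <errno.h>
#include <stdint.h>
#include <stdio.h>

#define ACL_MAX_RULES 4	/* made-up resource limit */

struct filter_state {
	uint32_t fd_input_set;	/* 0 == no FD input set programmed yet */
	int acl_rules_used;
};

/* Stubs standing in for the real programming paths. */
static int program_fd(uint32_t input_set)
{
	printf("programmed as FD rule (input set 0x%x)\n",
	       (unsigned)input_set);
	return 0;
}

static int program_acl(uint32_t input_set)
{
	printf("programmed as ACL rule (input set 0x%x)\n",
	       (unsigned)input_set);
	return 0;
}

static int add_ntuple_rule(struct filter_state *st, uint32_t input_set)
{
	/* First rule, or same input set as before: FD can take it. */
	if (!st->fd_input_set || st->fd_input_set == input_set) {
		st->fd_input_set = input_set;
		return program_fd(input_set);
	}
	/* Conflicting input set: fall back to ACL instead of failing. */
	if (st->acl_rules_used < ACL_MAX_RULES) {
		st->acl_rules_used++;
		return program_acl(input_set);
	}
	return -ENOSPC;	/* out of ACL resources */
}

int main(void)
{
	struct filter_state st = { 0, 0 };

	add_ntuple_rule(&st, 0x0F);	/* 4-tuple: FD */
	add_ntuple_rule(&st, 0x0F);	/* same input set: still FD */
	add_ntuple_rule(&st, 0x07);	/* different input set: ACL */
	return 0;
}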

* Re: [net-next v3 05/15] ice: create flow profile
  2020-12-08 22:22                   ` Alexander Duyck
@ 2020-12-09 18:23                     ` Nguyen, Anthony L
  0 siblings, 0 replies; 28+ messages in thread
From: Nguyen, Anthony L @ 2020-12-09 18:23 UTC (permalink / raw)
  To: alexander.duyck
  Cc: Brandeburg, Jesse, sassmann, Cao, Chinh T, kuba, netdev,
	Valiquette, Real, Behera, BrijeshX, davem

On Tue, 2020-12-08 at 14:22 -0800, Alexander Duyck wrote:
> On Tue, Dec 8, 2020 at 2:01 PM Nguyen, Anthony L
> <anthony.l.nguyen@intel.com> wrote:
> > 
> > On Tue, 2020-12-08 at 11:00 -0800, Alexander Duyck wrote:
> > > On Tue, Dec 8, 2020 at 8:58 AM Nguyen, Anthony L
> > > <anthony.l.nguyen@intel.com> wrote:
> > > > 
> > > > On Mon, 2020-11-23 at 17:11 -0800, Alexander Duyck wrote:
> > > > > On Mon, Nov 23, 2020 at 3:21 PM Jesse Brandeburg
> > > > > <jesse.brandeburg@intel.com> wrote:
> > > > > > 
> > > > > > Alexander Duyck wrote:
> > > > > > 
> > > > > > > > > I'm not sure this logic is correct. Can the flow
> > > > > > > > > director
> > > > > > > > > rules
> > > > > > > > > handle
> > > > > > > > > a field that is removed? Last I knew it couldn't. If
> > > > > > > > > that
> > > > > > > > > is
> > > > > > > > > the case
> > > > > > > > > you should be using ACL for any case in which a full
> > > > > > > > > mask
> > > > > > > > > is
> > > > > > > > > not
> > > > > > > > > provided. So in your tests below you could probably
> > > > > > > > > drop
> > > > > > > > > the
> > > > > > > > > check
> > > > > > > > > for
> > > > > > > > > zero as I don't think that is a valid case in which
> > > > > > > > > flow
> > > > > > > > > director
> > > > > > > > > would work.
> > > > > > > > > 
> > > > > > > > 
> > > > > > > > I'm not sure what you meant by a field that is removed,
> > > > > > > > but
> > > > > > > > Flow
> > > > > > > > Director can handle reduced input sets. Flow Director
> > > > > > > > is
> > > > > > > > able
> > > > > > > > to handle
> > > > > > > > 0 mask, full mask, and less than 4 tuples. ACL is
> > > > > > > > needed/used
> > > > > > > > only when
> > > > > > > > a partial mask rule is requested.
> > > > > > > 
> > > > > > > So historically speaking with flow director you are only
> > > > > > > allowed
> > > > > > > one
> > > > > > > mask because it determines the inputs used to generate
> > > > > > > the
> > > > > > > hash
> > > > > > > that
> > > > > > > identifies the flow. So you are only allowed one mask for
> > > > > > > all
> > > > > > > flows
> > > > > > > because changing those inputs would break the hash
> > > > > > > mapping.
> > > > > > > 
> > > > > > > Normally this ends up meaning that you have to do like
> > > > > > > what
> > > > > > > we
> > > > > > > did in
> > > > > > > ixgbe and disable ATR and only allow one mask for all
> > > > > > > inputs.
> > > > > > > I
> > > > > > > believe for i40e they required that you always use a full
> > > > > > > 4
> > > > > > > tuple. I
> > > > > > > didn't see something like that here. As such you may want
> > > > > > > to
> > > > > > > double
> > > > > > > check that you can have a mix of flow director rules that
> > > > > > > are
> > > > > > > using 1
> > > > > > > tuple, 2 tuples, 3 tuples, and 4 tuples as last I knew
> > > > > > > you
> > > > > > > couldn't.
> > > > > > > Basically if you had fields included they had to be
> > > > > > > included
> > > > > > > for
> > > > > > > all
> > > > > > > the rules on the port or device depending on how the
> > > > > > > tables
> > > > > > > are
> > > > > > > set
> > > > > > > up.
> > > > > > 
> > > > > > The ice driver hardware is quite a bit more capable than
> > > > > > the
> > > > > > ixgbe
> > > > > > or
> > > > > > i40e hardware, and uses a limited set of ACL rules to
> > > > > > support
> > > > > > different
> > > > > > sets of masks. We have some limits on the number of masks
> > > > > > and
> > > > > > the
> > > > > > number of fields that we can simultaneously support, but I
> > > > > > think
> > > > > > that is pretty normal for limited hardware resources.
> > > > > > 
> > > > > > Let's just say that if the code doesn't work on an E810
> > > > > > card
> > > > > > then
> > > > > > we
> > > > > > messed up and we'll have to fix it. :-)
> > > > > > 
> > > > > > Thanks for the review! Hope this helps...
> > > > > 
> > > > > I gather all that. The issue was the code in
> > > > > ice_is_acl_filter(). Basically, if a field is dropped
> > > > > completely (zero mask), that will not trigger the rule to
> > > > > be considered an ACL rule.
> > > > > 
> > > > > So for example I could define 4 rules, one that ignores the
> > > > > IPv4
> > > > > source, one that ignores the IPv4 destination, one that
> > > > > ignores
> > > > > the
> > > > > TCP source port, and one that ignores the TCP destination
> > > > > port.
> > > > 
> > > > We have the limitation that you can use one input set at a time
> > > > so
> > > > any
> > > > of these rules could be created but they couldn't exist
> > > > concurrently.
> > > 
> > > No, I get that. The question I have is what happens if you try to
> > > add a second input set. With ixgbe we triggered an error for
> > > trying
> > > to change input sets. I'm wondering if you trigger an error on
> > > adding
> > > a different input set or if you just invalidate the existing
> > > rules.
> > > 
> > > > > With
> > > > > the current code all 4 of those rules would be considered to
> > > > > be
> > > > > non-ACL rules because the mask is 0 and not partial.
> > > > 
> > > > Correct. I did this to test Flow Director:
> > > > 
> > > > 'ethtool -N ens801f0 flow-type tcp4 src-ip 192.168.0.10 dst-ip
> > > > 192.168.0.20 src-port 8500 action 10' and sent traffic matching
> > > > this.
> > > > Traffic correctly went to queue 10.
> > > 
> > > So a better question here is what happens if you do a rule with
> > > src-port 8500, and a second rule with dst-port 8500? Does the
> > > second
> > > rule fail or does it invalidate the first. If it invalidates the
> > > first
> > > then that would be a bug.
> > 
> > The second rule fails and a message is output to dmesg.
> > 
> > ethtool -N ens801f0 flow-type tcp4 src-ip 192.168.0.10 dst-ip
> > 192.168.0.20 dst-port 8500 action 10
> > rmgr: Cannot insert RX class rule: Operation not supported
> 
> Ugh. I really don't like the choice of EOPNOTSUPP as the return
> value for the conflicting-mask case. It really should have been
> something like EBUSY or EINVAL, since you are trying to overwrite an
> already-written mask in order to change the field configuration.
> 
> > dmesg:
> > ice 0000:81:00.0: Failed to add filter.  Flow director filters on
> > each
> > port must have the same input set.
> 
> Okay, so this is the behavior you see with Flow Director. If you
> don't
> apply a partial mask it fails to add the second rule.
> 
> > > > > If I do the same
> > > > > thing and ignore all but one bit then they are all ACL rules.
> > > > 
> > > > Also correct. I did as follows:
> > > > 
> > > > 'ethtool -N ens801f0 flow-type tcp4 src-ip 192.168.0.10 dst-ip
> > > > 192.168.0.20 src-port 9000 m 0x1 action 15'
> > > > 
> > > > Sending traffic to port 9000 and 9001, traffic went to queue
> > > > 15
> > > > Sending traffic to port 8000 and 9002, traffic went to other
> > > > queues
> > > 
> > > The test here is to set up two rules and verify each of them and
> > > one
> > > case that fails both. Same thing for the test above. Basically we
> > > should be able to program multiple ACL rules with different masks
> > > and
> > > that shouldn't be an issue up to some limit I would imagine. Same
> > > thing for flow director rules. After the first you should not be
> > > able
> > > to provide a flow director rule with a different input mask.
> > 
> > I did this:
> > 
> > ethtool -N ens801f0 flow-type tcp4 src-ip 192.168.0.10 dst-ip
> > 192.168.0.20 src-port 9000 m 0x1 action 15
> > ethtool -N ens801f0 flow-type tcp4 src-ip 192.168.0.10 dst-ip
> > 192.168.0.20 src-port 8000 m 0x2 action 20
> > 
> > Sending traffic to port 9000 and 9001 goes to queue 15
> > Sending traffic to port 8000 and 8002 goes to queue 20
> > Sending traffic to port 8001 and 8500 goes to neither of the queues
> 
> Doing the same thing with a mask works. I could add src-port with a
> mask in one rule, and I could add dst-port with a mask in another.
> Can
> you see the inconsistency here?

Thanks for the reviews, Alex. I see your point. I'm going to drop the
ACL patches from this series and send the other patches while we look
into this.

-Tony

Thread overview: 28+ messages
2020-11-13 21:44 [net-next v3 00/15][pull request] 100GbE Intel Wired LAN Driver Updates 2020-11-13 Tony Nguyen
2020-11-13 21:44 ` [net-next v3 01/15] ice: cleanup stack hog Tony Nguyen
2020-11-13 21:44 ` [net-next v3 02/15] ice: rename shared Flow Director functions Tony Nguyen
2020-11-13 21:44 ` [net-next v3 03/15] ice: initialize ACL table Tony Nguyen
2020-11-13 21:44 ` [net-next v3 04/15] ice: initialize ACL scenario Tony Nguyen
2020-11-13 21:44 ` [net-next v3 05/15] ice: create flow profile Tony Nguyen
2020-11-13 23:56   ` Alexander Duyck
2020-11-21  0:42     ` Nguyen, Anthony L
2020-11-21  1:49       ` Alexander Duyck
2020-11-23 23:21         ` Jesse Brandeburg
2020-11-24  1:11           ` Alexander Duyck
2020-12-08 16:58             ` Nguyen, Anthony L
2020-12-08 19:00               ` Alexander Duyck
2020-12-08 22:01                 ` Nguyen, Anthony L
2020-12-08 22:22                   ` Alexander Duyck
2020-12-09 18:23                     ` Nguyen, Anthony L
2020-11-13 21:44 ` [net-next v3 06/15] ice: create ACL entry Tony Nguyen
2020-11-13 21:44 ` [net-next v3 07/15] ice: program " Tony Nguyen
2020-11-13 21:44 ` [net-next v3 08/15] ice: don't always return an error for Get PHY Abilities AQ command Tony Nguyen
2020-11-14  1:25   ` Alexander Duyck
2020-11-21  0:43     ` Nguyen, Anthony L
2020-11-13 21:44 ` [net-next v3 09/15] ice: Enable Support for FW Override (E82X) Tony Nguyen
2020-11-13 21:44 ` [net-next v3 10/15] ice: Remove gate to OROM init Tony Nguyen
2020-11-13 21:44 ` [net-next v3 11/15] ice: Remove vlan_ena from vsi structure Tony Nguyen
2020-11-13 21:44 ` [net-next v3 12/15] ice: cleanup misleading comment Tony Nguyen
2020-11-13 21:44 ` [net-next v3 13/15] ice: silence static analysis warning Tony Nguyen
2020-11-13 21:44 ` [net-next v3 14/15] ice: join format strings to same line as ice_debug Tony Nguyen
2020-11-13 21:44 ` [net-next v3 15/15] ice: Add space to unknown speed Tony Nguyen
