* [Intel-wired-lan] [PATCH S55 01/14] ice: warn about potentially malicious VFs
@ 2021-03-02 18:12 Tony Nguyen
  2021-03-02 18:12 ` [Intel-wired-lan] [PATCH S55 02/14] ice: Allow ignoring opcodes on specific VF Tony Nguyen
                   ` (13 more replies)
  0 siblings, 14 replies; 28+ messages in thread
From: Tony Nguyen @ 2021-03-02 18:12 UTC (permalink / raw)
  To: intel-wired-lan

From: Vignesh Sridhar <vignesh.sridhar@intel.com>

Attempt to detect malicious VFs and, if one is suspected, log the
information but keep going so the user can take any desired action.

Potentially malicious VFs are identified by checking whether a VF transmits
too many messages via the PF-VF mailbox, which could overflow the channel
and result in denial of service. Detection is done by taking a static
snapshot of the mailbox buffer, traversing it, and tracking the messages
sent by each VF.
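The per-VF accounting described above reduces to a counter per VF compared against a fixed threshold while a static snapshot of the mailbox is traversed. A minimal userspace sketch of that bookkeeping follows; the threshold constant mirrors the patch, but `MAX_VFS`, `mbx_note_msg()` and `mbx_clear_vf()` are hypothetical helpers for illustration, not driver API:

```c
#include <assert.h>
#include <stdbool.h>

#define ICE_ASYNC_VF_MSG_THRESHOLD 63	/* same limit as the patch */
#define MAX_VFS 8			/* illustrative VF count */

static unsigned int vf_cntr[MAX_VFS];	/* per-VF message counters */

/* Record one mailbox message from vf_id; return true once the VF
 * reaches the asynchronous-message threshold and should be flagged.
 */
static bool mbx_note_msg(unsigned int vf_id)
{
	if (vf_id >= MAX_VFS)
		return false;
	return ++vf_cntr[vf_id] >= ICE_ASYNC_VF_MSG_THRESHOLD;
}

/* On VF reset, clear its counter so a fresh VF with the same ID is
 * not blamed for stale traffic (mirrors ice_mbx_clear_malvf()).
 */
static void mbx_clear_vf(unsigned int vf_id)
{
	if (vf_id < MAX_VFS)
		vf_cntr[vf_id] = 0;
}
```

The 63rd message from a VF trips the threshold; clearing the counter on reset lets a replacement VF with the same ID start clean.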

Co-developed-by: Yashaswini Raghuram Prathivadi Bhayankaram <yashaswini.raghuram.prathivadi.bhayankaram@intel.com>
Signed-off-by: Yashaswini Raghuram Prathivadi Bhayankaram <yashaswini.raghuram.prathivadi.bhayankaram@intel.com>
Co-developed-by: Paul M Stillwell Jr <paul.m.stillwell.jr@intel.com>
Signed-off-by: Paul M Stillwell Jr <paul.m.stillwell.jr@intel.com>
Co-developed-by: Brett Creeley <brett.creeley@intel.com>
Signed-off-by: Brett Creeley <brett.creeley@intel.com>
Signed-off-by: Vignesh Sridhar <vignesh.sridhar@intel.com>
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
---
 drivers/net/ethernet/intel/ice/ice.h          |   1 +
 drivers/net/ethernet/intel/ice/ice_main.c     |   7 +-
 drivers/net/ethernet/intel/ice/ice_sriov.c    | 400 +++++++++++++++++-
 drivers/net/ethernet/intel/ice/ice_sriov.h    |  20 +-
 drivers/net/ethernet/intel/ice/ice_type.h     |  75 ++++
 .../net/ethernet/intel/ice/ice_virtchnl_pf.c  |  92 +++-
 .../net/ethernet/intel/ice/ice_virtchnl_pf.h  |  12 +
 7 files changed, 603 insertions(+), 4 deletions(-)

diff --git a/drivers/net/ethernet/intel/ice/ice.h b/drivers/net/ethernet/intel/ice/ice.h
index 07d4715e1bcd..ca94b01626d2 100644
--- a/drivers/net/ethernet/intel/ice/ice.h
+++ b/drivers/net/ethernet/intel/ice/ice.h
@@ -414,6 +414,7 @@ struct ice_pf {
 	u16 num_msix_per_vf;
 	/* used to ratelimit the MDD event logging */
 	unsigned long last_printed_mdd_jiffies;
+	DECLARE_BITMAP(malvfs, ICE_MAX_VF_COUNT);
 	DECLARE_BITMAP(state, __ICE_STATE_NBITS);
 	DECLARE_BITMAP(flags, ICE_PF_FLAGS_NBITS);
 	unsigned long *avail_txqs;	/* bitmap to track PF Tx queue usage */
diff --git a/drivers/net/ethernet/intel/ice/ice_main.c b/drivers/net/ethernet/intel/ice/ice_main.c
index 5b66b27a98aa..9cf876e420c9 100644
--- a/drivers/net/ethernet/intel/ice/ice_main.c
+++ b/drivers/net/ethernet/intel/ice/ice_main.c
@@ -1210,6 +1210,10 @@ static int __ice_clean_ctrlq(struct ice_pf *pf, enum ice_ctl_q q_type)
 	case ICE_CTL_Q_MAILBOX:
 		cq = &hw->mailboxq;
 		qtype = "Mailbox";
+		/* we are going to try to detect a malicious VF, so set the
+		 * state to begin detection
+		 */
+		hw->mbx_snapshot.mbx_buf.state = ICE_MAL_VF_DETECT_STATE_NEW_SNAPSHOT;
 		break;
 	default:
 		dev_warn(dev, "Unknown control queue type 0x%x\n", q_type);
@@ -1291,7 +1295,8 @@ static int __ice_clean_ctrlq(struct ice_pf *pf, enum ice_ctl_q q_type)
 			ice_vf_lan_overflow_event(pf, &event);
 			break;
 		case ice_mbx_opc_send_msg_to_pf:
-			ice_vc_process_vf_msg(pf, &event);
+			if (!ice_is_malicious_vf(pf, &event, i, pending))
+				ice_vc_process_vf_msg(pf, &event);
 			break;
 		case ice_aqc_opc_fw_logging:
 			ice_output_fw_log(hw, &event.desc, event.msg_buf);
diff --git a/drivers/net/ethernet/intel/ice/ice_sriov.c b/drivers/net/ethernet/intel/ice/ice_sriov.c
index 554f567476f3..aa11d07793d4 100644
--- a/drivers/net/ethernet/intel/ice/ice_sriov.c
+++ b/drivers/net/ethernet/intel/ice/ice_sriov.c
@@ -2,7 +2,6 @@
 /* Copyright (c) 2018, Intel Corporation. */
 
 #include "ice_common.h"
-#include "ice_adminq_cmd.h"
 #include "ice_sriov.h"
 
 /**
@@ -132,3 +131,402 @@ u32 ice_conv_link_speed_to_virtchnl(bool adv_link_support, u16 link_speed)
 
 	return speed;
 }
+
+/* The mailbox overflow detection algorithm helps to check if there
+ * is a possibility of a malicious VF transmitting too many MBX messages to the
+ * PF.
+ * 1. The mailbox snapshot structure, ice_mbx_snapshot, is initialized during
+ * driver initialization in ice_init_hw() using ice_mbx_init_snapshot().
+ * The struct ice_mbx_snapshot helps to track and traverse a static window of
+ * messages within the mailbox queue while looking for a malicious VF.
+ *
+ * 2. When the caller starts processing its mailbox queue in response to an
+ * interrupt, the structure ice_mbx_snapshot is expected to be cleared before
+ * the algorithm can be run for the first time for that interrupt. This can be
+ * done via ice_mbx_reset_snapshot().
+ *
+ * 3. For every message read by the caller from the MBX Queue, the caller must
+ * call the detection algorithm's entry function ice_mbx_vf_state_handler().
+ * Before every call to ice_mbx_vf_state_handler() the struct ice_mbx_data is
+ * filled as it is required to be passed to the algorithm.
+ *
+ * 4. Every time a message is read from the MBX queue, a VF ID is received
+ * and passed to the state handler. The boolean output is_malvf of the state
+ * handler ice_mbx_vf_state_handler() indicates to the caller whether this
+ * VF is malicious or not.
+ *
+ * 5. When a VF is identified as malicious, the caller can notify the system
+ * administrator. The caller can invoke ice_mbx_report_malvf() to determine
+ * whether a malicious VF should be reported. This function requires the
+ * caller to maintain a global bitmap tracking all malicious VFs and to pass
+ * that bitmap to ice_mbx_report_malvf() along with the VF ID identified as
+ * malicious by ice_mbx_vf_state_handler().
+ *
+ * 6. The global bitmap maintained by the PF can be cleared completely if the
+ * PF is in reset, or the bit corresponding to a VF can be cleared if that VF
+ * is in reset. When a VF is shut down and brought back up, we assume the new
+ * VF is not malicious and will report it again if it is later found to be
+ * malicious.
+ *
+ * 7. The function ice_mbx_reset_snapshot() is called to reset the information
+ * in ice_mbx_snapshot for every new mailbox interrupt handled.
+ *
+ * 8. The memory allocated for variables in ice_mbx_snapshot is de-allocated
+ * when driver is unloaded.
+ */
+#define ICE_RQ_DATA_MASK(rq_data) ((rq_data) & PF_MBX_ARQH_ARQH_M)
+/* Use the maximum value of an unsigned 16-bit integer (0xFFFF) to indicate
+ * that the max-messages check must be ignored by the algorithm.
+ */
+#define ICE_IGNORE_MAX_MSG_CNT	0xFFFF
+
+/**
+ * ice_mbx_traverse - Pass through mailbox snapshot
+ * @hw: pointer to the HW struct
+ * @new_state: new algorithm state
+ *
+ * Traversing the mailbox static snapshot without checking
+ * for malicious VFs.
+ */
+static void
+ice_mbx_traverse(struct ice_hw *hw,
+		 enum ice_mbx_snapshot_state *new_state)
+{
+	struct ice_mbx_snap_buffer_data *snap_buf;
+	u32 num_iterations;
+
+	snap_buf = &hw->mbx_snapshot.mbx_buf;
+
+	/* As the mailbox buffer is circular, apply a mask to the
+	 * incremented iteration count.
+	 */
+	num_iterations = ICE_RQ_DATA_MASK(++snap_buf->num_iterations);
+
+	/* Checking either of the below conditions to exit snapshot traversal:
+	 * Condition-1: If the number of iterations in the mailbox is equal to
+	 * the mailbox head which would indicate that we have reached the end
+	 * of the static snapshot.
+	 * Condition-2: If the maximum messages serviced in the mailbox for a
+	 * given interrupt is the highest possible value then there is no need
+	 * to check if the number of messages processed is equal to it. If not
+	 * check if the number of messages processed is greater than or equal
+	 * to the maximum number of mailbox entries serviced in current work item.
+	 */
+	if (num_iterations == snap_buf->head ||
+	    (snap_buf->max_num_msgs_mbx < ICE_IGNORE_MAX_MSG_CNT &&
+	     ++snap_buf->num_msg_proc >= snap_buf->max_num_msgs_mbx))
+		*new_state = ICE_MAL_VF_DETECT_STATE_NEW_SNAPSHOT;
+}
+
+/**
+ * ice_mbx_detect_malvf - Detect malicious VF in snapshot
+ * @hw: pointer to the HW struct
+ * @vf_id: relative virtual function ID
+ * @new_state: new algorithm state
+ * @is_malvf: boolean output to indicate if VF is malicious
+ *
+ * This function tracks the number of asynchronous messages
+ * sent per VF and marks the VF as malicious if it exceeds
+ * the permissible number of messages to send.
+ */
+static enum ice_status
+ice_mbx_detect_malvf(struct ice_hw *hw, u16 vf_id,
+		     enum ice_mbx_snapshot_state *new_state,
+		     bool *is_malvf)
+{
+	struct ice_mbx_snapshot *snap = &hw->mbx_snapshot;
+
+	if (vf_id >= snap->mbx_vf.vfcntr_len)
+		return ICE_ERR_OUT_OF_RANGE;
+
+	/* increment the message count in the VF array */
+	snap->mbx_vf.vf_cntr[vf_id]++;
+
+	if (snap->mbx_vf.vf_cntr[vf_id] >= ICE_ASYNC_VF_MSG_THRESHOLD)
+		*is_malvf = true;
+
+	/* continue to iterate through the mailbox snapshot */
+	ice_mbx_traverse(hw, new_state);
+
+	return 0;
+}
+
+/**
+ * ice_mbx_reset_snapshot - Reset mailbox snapshot structure
+ * @snap: pointer to mailbox snapshot structure in the ice_hw struct
+ *
+ * Reset the mailbox snapshot structure and clear VF counter array.
+ */
+static void ice_mbx_reset_snapshot(struct ice_mbx_snapshot *snap)
+{
+	u32 vfcntr_len;
+
+	if (!snap || !snap->mbx_vf.vf_cntr)
+		return;
+
+	/* Clear VF counters. */
+	vfcntr_len = snap->mbx_vf.vfcntr_len;
+	if (vfcntr_len)
+		memset(snap->mbx_vf.vf_cntr, 0,
+		       (vfcntr_len * sizeof(*snap->mbx_vf.vf_cntr)));
+
+	/* Reset mailbox snapshot for a new capture. */
+	memset(&snap->mbx_buf, 0, sizeof(snap->mbx_buf));
+	snap->mbx_buf.state = ICE_MAL_VF_DETECT_STATE_NEW_SNAPSHOT;
+}
+
+/**
+ * ice_mbx_vf_state_handler - Handle states of the overflow algorithm
+ * @hw: pointer to the HW struct
+ * @mbx_data: pointer to structure containing mailbox data
+ * @vf_id: relative virtual function (VF) ID
+ * @is_malvf: boolean output to indicate if VF is malicious
+ *
+ * The function serves as an entry point for the malicious VF
+ * detection algorithm by handling the different states and state
+ * transitions of the algorithm:
+ * New snapshot: This state is entered when creating a new static
+ * snapshot. The data from any previous mailbox snapshot is
+ * cleared and a new capture of the mailbox head and tail is
+ * logged. This will be the new static snapshot to detect
+ * asynchronous messages sent by VFs. On capturing the snapshot
+ * and depending on whether the number of pending messages in that
+ * snapshot exceeds the watermark value, the state machine enters
+ * traverse or detect states.
+ * Traverse: If pending message count is below watermark then iterate
+ * through the snapshot without any action on VF.
+ * Detect: If pending message count exceeds watermark traverse
+ * the static snapshot and look for a malicious VF.
+ */
+enum ice_status
+ice_mbx_vf_state_handler(struct ice_hw *hw,
+			 struct ice_mbx_data *mbx_data, u16 vf_id,
+			 bool *is_malvf)
+{
+	struct ice_mbx_snapshot *snap = &hw->mbx_snapshot;
+	struct ice_mbx_snap_buffer_data *snap_buf;
+	struct ice_ctl_q_info *cq = &hw->mailboxq;
+	enum ice_mbx_snapshot_state new_state;
+	enum ice_status status = 0;
+
+	if (!is_malvf || !mbx_data)
+		return ICE_ERR_BAD_PTR;
+
+	/* When entering the mailbox state machine assume that the VF
+	 * is not malicious until detected.
+	 */
+	*is_malvf = false;
+
+	/* The number of messages that can be processed while servicing the
+	 * current interrupt must be strictly greater than the defined AVF
+	 * message threshold.
+	 */
+	if (mbx_data->max_num_msgs_mbx <= ICE_ASYNC_VF_MSG_THRESHOLD)
+		return ICE_ERR_INVAL_SIZE;
+
+	/* The watermark value must not be less than the threshold limit set
+	 * for the number of asynchronous messages a VF can send to the
+	 * mailbox, nor greater than the maximum number of messages in the
+	 * mailbox serviced in the current interrupt.
+	 */
+	if (mbx_data->async_watermark_val < ICE_ASYNC_VF_MSG_THRESHOLD ||
+	    mbx_data->async_watermark_val > mbx_data->max_num_msgs_mbx)
+		return ICE_ERR_PARAM;
+
+	new_state = ICE_MAL_VF_DETECT_STATE_INVALID;
+	snap_buf = &snap->mbx_buf;
+
+	switch (snap_buf->state) {
+	case ICE_MAL_VF_DETECT_STATE_NEW_SNAPSHOT:
+		/* Clear any previously held data in mailbox snapshot structure. */
+		ice_mbx_reset_snapshot(snap);
+
+		/* Collect the pending ARQ count, number of messages processed and
+		 * the maximum number of messages allowed to be processed from the
+		 * Mailbox for current interrupt.
+		 */
+		snap_buf->num_pending_arq = mbx_data->num_pending_arq;
+		snap_buf->num_msg_proc = mbx_data->num_msg_proc;
+		snap_buf->max_num_msgs_mbx = mbx_data->max_num_msgs_mbx;
+
+		/* Capture a new static snapshot of the mailbox by logging the
+		 * head and tail of snapshot and set num_iterations to the tail
+		 * value to mark the start of the iteration through the snapshot.
+		 */
+		snap_buf->head = ICE_RQ_DATA_MASK(cq->rq.next_to_clean +
+						  mbx_data->num_pending_arq);
+		snap_buf->tail = ICE_RQ_DATA_MASK(cq->rq.next_to_clean - 1);
+		snap_buf->num_iterations = snap_buf->tail;
+
+		/* The pending ARQ message count returned by ice_clean_rq_elem
+		 * is the difference between the head and tail of the mailbox
+		 * queue. Comparing this value against the watermark helps
+		 * detect potentially malicious VFs.
+		 */
+		if (snap_buf->num_pending_arq >=
+		    mbx_data->async_watermark_val) {
+			new_state = ICE_MAL_VF_DETECT_STATE_DETECT;
+			status = ice_mbx_detect_malvf(hw, vf_id, &new_state, is_malvf);
+		} else {
+			new_state = ICE_MAL_VF_DETECT_STATE_TRAVERSE;
+			ice_mbx_traverse(hw, &new_state);
+		}
+		break;
+
+	case ICE_MAL_VF_DETECT_STATE_TRAVERSE:
+		new_state = ICE_MAL_VF_DETECT_STATE_TRAVERSE;
+		ice_mbx_traverse(hw, &new_state);
+		break;
+
+	case ICE_MAL_VF_DETECT_STATE_DETECT:
+		new_state = ICE_MAL_VF_DETECT_STATE_DETECT;
+		status = ice_mbx_detect_malvf(hw, vf_id, &new_state, is_malvf);
+		break;
+
+	default:
+		new_state = ICE_MAL_VF_DETECT_STATE_INVALID;
+		status = ICE_ERR_CFG;
+	}
+
+	snap_buf->state = new_state;
+
+	return status;
+}
+
+/**
+ * ice_mbx_report_malvf - Track and note malicious VF
+ * @hw: pointer to the HW struct
+ * @all_malvfs: all malicious VFs tracked by PF
+ * @bitmap_len: length of bitmap in bits
+ * @vf_id: relative virtual function ID of the malicious VF
+ * @report_malvf: boolean to indicate if malicious VF must be reported
+ *
+ * This function will update a bitmap that keeps track of the malicious
+ * VFs attached to the PF. A malicious VF must be reported only once between
+ * VF resets (or driver loads), so the function checks the input vf_id
+ * against the bitmap to verify whether the VF has already been detected in
+ * a previous mailbox iteration.
+ */
+enum ice_status
+ice_mbx_report_malvf(struct ice_hw *hw, unsigned long *all_malvfs,
+		     u16 bitmap_len, u16 vf_id, bool *report_malvf)
+{
+	if (!all_malvfs || !report_malvf)
+		return ICE_ERR_PARAM;
+
+	*report_malvf = false;
+
+	if (bitmap_len < hw->mbx_snapshot.mbx_vf.vfcntr_len)
+		return ICE_ERR_INVAL_SIZE;
+
+	if (vf_id >= bitmap_len)
+		return ICE_ERR_OUT_OF_RANGE;
+
+	/* If this VF is not already marked in the bitmap, set its bit and report it */
+	if (!test_and_set_bit(vf_id, all_malvfs))
+		*report_malvf = true;
+
+	return 0;
+}
+
+/**
+ * ice_mbx_clear_malvf - Clear VF bitmap and counter for VF ID
+ * @snap: pointer to the mailbox snapshot structure
+ * @all_malvfs: all malicious VFs tracked by PF
+ * @bitmap_len: length of bitmap in bits
+ * @vf_id: relative virtual function ID of the malicious VF
+ *
+ * In case of a VF reset, this function can be called to clear
+ * the bit corresponding to the VF ID in the bitmap tracking all
+ * malicious VFs attached to the PF. The function also clears the
+ * VF counter array at the index of the VF ID. This is to ensure
+ * that the new VF loaded is not considered malicious before going
+ * through the overflow detection algorithm.
+ */
+enum ice_status
+ice_mbx_clear_malvf(struct ice_mbx_snapshot *snap, unsigned long *all_malvfs,
+		    u16 bitmap_len, u16 vf_id)
+{
+	if (!snap || !all_malvfs)
+		return ICE_ERR_PARAM;
+
+	if (bitmap_len < snap->mbx_vf.vfcntr_len)
+		return ICE_ERR_INVAL_SIZE;
+
+	/* Ensure VF ID value is not larger than bitmap or VF counter length */
+	if (vf_id >= bitmap_len || vf_id >= snap->mbx_vf.vfcntr_len)
+		return ICE_ERR_OUT_OF_RANGE;
+
+	/* Clear VF ID bit in the bitmap tracking malicious VFs attached to PF */
+	clear_bit(vf_id, all_malvfs);
+
+	/* Clear the VF counter in the mailbox snapshot for this VF ID. This
+	 * ensures that, if a VF is unloaded and a new one is brought up with
+	 * the same VF ID while a snapshot is in the traverse or detect state,
+	 * the counter does not increment on top of stale values in the
+	 * mailbox overflow detection algorithm.
+	 */
+	snap->mbx_vf.vf_cntr[vf_id] = 0;
+
+	return 0;
+}
+
+/**
+ * ice_mbx_init_snapshot - Initialize mailbox snapshot structure
+ * @hw: pointer to the hardware structure
+ * @vf_count: number of VFs allocated on a PF
+ *
+ * Clear the mailbox snapshot structure and allocate memory
+ * for the VF counter array based on the number of VFs allocated
+ * on that PF.
+ *
+ * Assumption: This function will assume ice_get_caps() has already been
+ * called to ensure that the vf_count can be compared against the number
+ * of VFs supported as defined in the functional capabilities of the device.
+ */
+enum ice_status ice_mbx_init_snapshot(struct ice_hw *hw, u16 vf_count)
+{
+	struct ice_mbx_snapshot *snap = &hw->mbx_snapshot;
+
+	/* Ensure that the number of VFs allocated is non-zero and
+	 * is not greater than the number of supported VFs defined in
+	 * the functional capabilities of the PF.
+	 */
+	if (!vf_count || vf_count > hw->func_caps.num_allocd_vfs)
+		return ICE_ERR_INVAL_SIZE;
+
+	snap->mbx_vf.vf_cntr = devm_kcalloc(ice_hw_to_dev(hw), vf_count,
+					    sizeof(*snap->mbx_vf.vf_cntr),
+					    GFP_KERNEL);
+	if (!snap->mbx_vf.vf_cntr)
+		return ICE_ERR_NO_MEMORY;
+
+	/* Set the VF counter length to the number of VFs allocated per the
+	 * PF's functional capabilities.
+	 */
+	snap->mbx_vf.vfcntr_len = vf_count;
+
+	/* Clear mbx_buf in the mailbox snapshot structure and set the
+	 * mailbox snapshot state to begin a new capture.
+	 */
+	memset(&snap->mbx_buf, 0, sizeof(snap->mbx_buf));
+	snap->mbx_buf.state = ICE_MAL_VF_DETECT_STATE_NEW_SNAPSHOT;
+
+	return 0;
+}
+
+/**
+ * ice_mbx_deinit_snapshot - Free mailbox snapshot structure
+ * @hw: pointer to the hardware structure
+ *
+ * Clear the mailbox snapshot structure and free the VF counter array.
+ */
+void ice_mbx_deinit_snapshot(struct ice_hw *hw)
+{
+	struct ice_mbx_snapshot *snap = &hw->mbx_snapshot;
+
+	/* Free VF counter array and reset VF counter length */
+	devm_kfree(ice_hw_to_dev(hw), snap->mbx_vf.vf_cntr);
+	snap->mbx_vf.vfcntr_len = 0;
+
+	/* Clear mbx_buf in the mailbox snapshot structure */
+	memset(&snap->mbx_buf, 0, sizeof(snap->mbx_buf));
+}
diff --git a/drivers/net/ethernet/intel/ice/ice_sriov.h b/drivers/net/ethernet/intel/ice/ice_sriov.h
index 3d78a0795138..161dc55d9e9c 100644
--- a/drivers/net/ethernet/intel/ice/ice_sriov.h
+++ b/drivers/net/ethernet/intel/ice/ice_sriov.h
@@ -4,7 +4,14 @@
 #ifndef _ICE_SRIOV_H_
 #define _ICE_SRIOV_H_
 
-#include "ice_common.h"
+#include "ice_type.h"
+#include "ice_controlq.h"
+
+/* Defining the mailbox message threshold as 63 asynchronous
+ * pending messages. Normal VF functionality does not require
+ * sending more than 63 asynchronous pending messages.
+ */
+#define ICE_ASYNC_VF_MSG_THRESHOLD	63
 
 #ifdef CONFIG_PCI_IOV
 enum ice_status
@@ -12,6 +19,17 @@ ice_aq_send_msg_to_vf(struct ice_hw *hw, u16 vfid, u32 v_opcode, u32 v_retval,
 		      u8 *msg, u16 msglen, struct ice_sq_cd *cd);
 
 u32 ice_conv_link_speed_to_virtchnl(bool adv_link_support, u16 link_speed);
+enum ice_status
+ice_mbx_vf_state_handler(struct ice_hw *hw, struct ice_mbx_data *mbx_data,
+			 u16 vf_id, bool *is_mal_vf);
+enum ice_status
+ice_mbx_clear_malvf(struct ice_mbx_snapshot *snap, unsigned long *all_malvfs,
+		    u16 bitmap_len, u16 vf_id);
+enum ice_status ice_mbx_init_snapshot(struct ice_hw *hw, u16 vf_count);
+void ice_mbx_deinit_snapshot(struct ice_hw *hw);
+enum ice_status
+ice_mbx_report_malvf(struct ice_hw *hw, unsigned long *all_malvfs,
+		     u16 bitmap_len, u16 vf_id, bool *report_malvf);
 #else /* CONFIG_PCI_IOV */
 static inline enum ice_status
 ice_aq_send_msg_to_vf(struct ice_hw __always_unused *hw,
diff --git a/drivers/net/ethernet/intel/ice/ice_type.h b/drivers/net/ethernet/intel/ice/ice_type.h
index eae7ba73731e..420fd487fd57 100644
--- a/drivers/net/ethernet/intel/ice/ice_type.h
+++ b/drivers/net/ethernet/intel/ice/ice_type.h
@@ -643,6 +643,80 @@ struct ice_fw_log_cfg {
 	struct ice_fw_log_evnt evnts[ICE_AQC_FW_LOG_ID_MAX];
 };
 
+/* Enum defining the different states of the mailbox snapshot in the
+ * PF-VF mailbox overflow detection algorithm. The snapshot can be in
+ * states:
+ * 1. ICE_MAL_VF_DETECT_STATE_NEW_SNAPSHOT - generate a new static snapshot
+ * within the mailbox buffer.
+ * 2. ICE_MAL_VF_DETECT_STATE_TRAVERSE - iterate through the mailbox snapshot.
+ * 3. ICE_MAL_VF_DETECT_STATE_DETECT - track the messages sent per VF via the
+ * mailbox and mark any VFs sending more messages than the threshold limit set.
+ * 4. ICE_MAL_VF_DETECT_STATE_INVALID - Invalid mailbox state set to 0xFFFFFFFF.
+ */
+enum ice_mbx_snapshot_state {
+	ICE_MAL_VF_DETECT_STATE_NEW_SNAPSHOT = 0,
+	ICE_MAL_VF_DETECT_STATE_TRAVERSE,
+	ICE_MAL_VF_DETECT_STATE_DETECT,
+	ICE_MAL_VF_DETECT_STATE_INVALID = 0xFFFFFFFF,
+};
+
+/* Structure to hold information of the static snapshot and the mailbox
+ * buffer data used to generate and track the snapshot.
+ * 1. state: the state of the mailbox snapshot in the malicious VF
+ * detection state handler ice_mbx_vf_state_handler()
+ * 2. head: head of the mailbox snapshot in a circular mailbox buffer
+ * 3. tail: tail of the mailbox snapshot in a circular mailbox buffer
+ * 4. num_iterations: number of messages traversed in circular mailbox buffer
+ * 5. num_msg_proc: number of messages processed in mailbox
+ * 6. num_pending_arq: number of pending asynchronous messages
+ * 7. max_num_msgs_mbx: maximum messages in mailbox for currently
+ * serviced work item or interrupt.
+ */
+struct ice_mbx_snap_buffer_data {
+	enum ice_mbx_snapshot_state state;
+	u32 head;
+	u32 tail;
+	u32 num_iterations;
+	u16 num_msg_proc;
+	u16 num_pending_arq;
+	u16 max_num_msgs_mbx;
+};
+
+/* Structure to track messages sent by VFs on mailbox:
+ * 1. vf_cntr: a counter array of VFs to track the number of
+ * asynchronous messages sent by each VF
+ * 2. vfcntr_len: number of entries in VF counter array
+ */
+struct ice_mbx_vf_counter {
+	u32 *vf_cntr;
+	u32 vfcntr_len;
+};
+
+/* Structure to hold data relevant to the captured static snapshot
+ * of the PF-VF mailbox.
+ */
+struct ice_mbx_snapshot {
+	struct ice_mbx_snap_buffer_data mbx_buf;
+	struct ice_mbx_vf_counter mbx_vf;
+};
+
+/* Structure to hold data to be used for capturing or updating a
+ * static snapshot.
+ * 1. num_msg_proc: number of messages processed in mailbox
+ * 2. num_pending_arq: number of pending asynchronous messages
+ * 3. max_num_msgs_mbx: maximum messages in mailbox for currently
+ * serviced work item or interrupt.
+ * 4. async_watermark_val: an upper threshold set by the caller to determine
+ * if the pending ARQ count is large enough to assume the possibility of a
+ * malicious VF.
+ */
+struct ice_mbx_data {
+	u16 num_msg_proc;
+	u16 num_pending_arq;
+	u16 max_num_msgs_mbx;
+	u16 async_watermark_val;
+};
+
 /* Port hardware description */
 struct ice_hw {
 	u8 __iomem *hw_addr;
@@ -774,6 +848,7 @@ struct ice_hw {
 	DECLARE_BITMAP(fdir_perfect_fltr, ICE_FLTR_PTYPE_MAX);
 	struct mutex rss_locks;	/* protect RSS configuration */
 	struct list_head rss_list_head;
+	struct ice_mbx_snapshot mbx_snapshot;
 };
 
 /* Statistics collected by each port, VSI, VEB, and S-channel */
diff --git a/drivers/net/ethernet/intel/ice/ice_virtchnl_pf.c b/drivers/net/ethernet/intel/ice/ice_virtchnl_pf.c
index 6be6a54eb29c..0da9c84ed30f 100644
--- a/drivers/net/ethernet/intel/ice/ice_virtchnl_pf.c
+++ b/drivers/net/ethernet/intel/ice/ice_virtchnl_pf.c
@@ -424,6 +424,12 @@ void ice_free_vfs(struct ice_pf *pf)
 			wr32(hw, GLGEN_VFLRSTAT(reg_idx), BIT(bit_idx));
 		}
 	}
+
+	/* clear malicious info if the VFs are getting released */
+	for (i = 0; i < tmp; i++)
+		if (ice_mbx_clear_malvf(&hw->mbx_snapshot, pf->malvfs, ICE_MAX_VF_COUNT, i))
+			dev_dbg(dev, "failed to clear malicious VF state for VF %u\n", i);
+
 	clear_bit(__ICE_VF_DIS, pf->state);
 	clear_bit(ICE_FLAG_SRIOV_ENA, pf->flags);
 }
@@ -1262,6 +1268,11 @@ bool ice_reset_all_vfs(struct ice_pf *pf, bool is_vflr)
 	if (!pf->num_alloc_vfs)
 		return false;
 
+	/* clear all malicious info if the VFs are getting reset */
+	ice_for_each_vf(pf, i)
+		if (ice_mbx_clear_malvf(&hw->mbx_snapshot, pf->malvfs, ICE_MAX_VF_COUNT, i))
+			dev_dbg(dev, "failed to clear malicious VF state for VF %u\n", i);
+
 	/* If VFs have been disabled, there is no need to reset */
 	if (test_and_set_bit(__ICE_VF_DIS, pf->state))
 		return false;
@@ -1447,6 +1458,10 @@ bool ice_reset_vf(struct ice_vf *vf, bool is_vflr)
 
 	ice_vf_post_vsi_rebuild(vf);
 
+	/* if the VF has been reset allow it to come up again */
+	if (ice_mbx_clear_malvf(&hw->mbx_snapshot, pf->malvfs, ICE_MAX_VF_COUNT, vf->vf_id))
+		dev_dbg(dev, "failed to clear malicious VF state for VF %u\n", vf->vf_id);
+
 	return true;
 }
 
@@ -1779,6 +1794,7 @@ int ice_sriov_configure(struct pci_dev *pdev, int num_vfs)
 {
 	struct ice_pf *pf = pci_get_drvdata(pdev);
 	struct device *dev = ice_pf_to_dev(pf);
+	enum ice_status status;
 	int err;
 
 	err = ice_check_sriov_allowed(pf);
@@ -1787,6 +1803,7 @@ int ice_sriov_configure(struct pci_dev *pdev, int num_vfs)
 
 	if (!num_vfs) {
 		if (!pci_vfs_assigned(pdev)) {
+			ice_mbx_deinit_snapshot(&pf->hw);
 			ice_free_vfs(pf);
 			if (pf->lag)
 				ice_enable_lag(pf->lag);
@@ -1797,9 +1814,15 @@ int ice_sriov_configure(struct pci_dev *pdev, int num_vfs)
 		return -EBUSY;
 	}
 
+	status = ice_mbx_init_snapshot(&pf->hw, num_vfs);
+	if (status)
+		return ice_status_to_errno(status);
+
 	err = ice_pci_sriov_ena(pf, num_vfs);
-	if (err)
+	if (err) {
+		ice_mbx_deinit_snapshot(&pf->hw);
 		return err;
+	}
 
 	if (pf->lag)
 		ice_disable_lag(pf->lag);
@@ -4382,3 +4405,70 @@ void ice_restore_all_vfs_msi_state(struct pci_dev *pdev)
 		}
 	}
 }
+
+/**
+ * ice_is_malicious_vf - helper function to detect a malicious VF
+ * @pf: ptr to struct ice_pf
+ * @event: pointer to the AQ event
+ * @num_msg_proc: the number of messages processed so far
+ * @num_msg_pending: the number of messages pending in the admin queue
+ */
+bool
+ice_is_malicious_vf(struct ice_pf *pf, struct ice_rq_event_info *event,
+		    u16 num_msg_proc, u16 num_msg_pending)
+{
+	s16 vf_id = le16_to_cpu(event->desc.retval);
+	struct device *dev = ice_pf_to_dev(pf);
+	struct ice_mbx_data mbxdata;
+	enum ice_status status;
+	bool malvf = false;
+	struct ice_vf *vf;
+
+	if (ice_validate_vf_id(pf, vf_id))
+		return false;
+
+	vf = &pf->vf[vf_id];
+	/* Check if VF is disabled. */
+	if (test_bit(ICE_VF_STATE_DIS, vf->vf_states))
+		return false;
+
+	mbxdata.num_msg_proc = num_msg_proc;
+	mbxdata.num_pending_arq = num_msg_pending;
+	mbxdata.max_num_msgs_mbx = pf->hw.mailboxq.num_rq_entries;
+#define ICE_MBX_OVERFLOW_WATERMARK 64
+	mbxdata.async_watermark_val = ICE_MBX_OVERFLOW_WATERMARK;
+
+	/* check to see if we have a malicious VF */
+	status = ice_mbx_vf_state_handler(&pf->hw, &mbxdata, vf_id, &malvf);
+	if (status)
+		return false;
+
+	if (malvf) {
+		bool report_vf = false;
+
+		/* if the VF is malicious and we haven't let the user
+		 * know about it, then let them know now
+		 */
+		status = ice_mbx_report_malvf(&pf->hw, pf->malvfs,
+					      ICE_MAX_VF_COUNT, vf_id,
+					      &report_vf);
+		if (status)
+			dev_dbg(dev, "Error reporting malicious VF\n");
+
+		if (report_vf) {
+			struct ice_vsi *pf_vsi = ice_get_main_vsi(pf);
+
+			if (pf_vsi)
+				dev_warn(dev, "VF MAC %pM on PF MAC %pM is generating asynchronous messages and may be overflowing the PF message queue. Please see the Adapter User Guide for more information\n",
+					 &vf->dev_lan_addr.addr[0],
+					 pf_vsi->netdev->dev_addr);
+		}
+
+		return true;
+	}
+
+	/* if there was an error in detection or the VF is not malicious then
+	 * return false
+	 */
+	return false;
+}
diff --git a/drivers/net/ethernet/intel/ice/ice_virtchnl_pf.h b/drivers/net/ethernet/intel/ice/ice_virtchnl_pf.h
index afd5c22015e1..53391ac1f068 100644
--- a/drivers/net/ethernet/intel/ice/ice_virtchnl_pf.h
+++ b/drivers/net/ethernet/intel/ice/ice_virtchnl_pf.h
@@ -126,6 +126,9 @@ void ice_vc_notify_reset(struct ice_pf *pf);
 bool ice_reset_all_vfs(struct ice_pf *pf, bool is_vflr);
 bool ice_reset_vf(struct ice_vf *vf, bool is_vflr);
 void ice_restore_all_vfs_msi_state(struct pci_dev *pdev);
+bool
+ice_is_malicious_vf(struct ice_pf *pf, struct ice_rq_event_info *event,
+		    u16 num_msg_proc, u16 num_msg_pending);
 
 int
 ice_set_vf_port_vlan(struct net_device *netdev, int vf_id, u16 vlan_id, u8 qos,
@@ -165,6 +168,15 @@ bool ice_vc_isvalid_vsi_id(struct ice_vf *vf, u16 vsi_id);
 #define ice_print_vf_rx_mdd_event(vf) do {} while (0)
 #define ice_restore_all_vfs_msi_state(pdev) do {} while (0)
 
+static inline bool
+ice_is_malicious_vf(struct ice_pf __always_unused *pf,
+		    struct ice_rq_event_info __always_unused *event,
+		    u16 __always_unused num_msg_proc,
+		    u16 __always_unused num_msg_pending)
+{
+	return false;
+}
+
 static inline bool
 ice_reset_all_vfs(struct ice_pf __always_unused *pf,
 		  bool __always_unused is_vflr)
-- 
2.20.1



* [Intel-wired-lan] [PATCH S55 02/14] ice: Allow ignoring opcodes on specific VF
  2021-03-02 18:12 [Intel-wired-lan] [PATCH S55 01/14] ice: warn about potentially malicious VFs Tony Nguyen
@ 2021-03-02 18:12 ` Tony Nguyen
  2021-04-21 18:48   ` Jankowski, Konrad0
  2021-03-02 18:12 ` [Intel-wired-lan] [PATCH S55 03/14] ice: Add Support for XPS Tony Nguyen
                   ` (12 subsequent siblings)
  13 siblings, 1 reply; 28+ messages in thread
From: Tony Nguyen @ 2021-03-02 18:12 UTC (permalink / raw)
  To: intel-wired-lan

From: Michal Swiatkowski <michal.swiatkowski@intel.com>

Declare a bitmap of allowed commands per VF. Initialize a default
opcode list that should always be supported. Declare an array of
supported opcodes for each capability used in the virtchnl code.

Allowlist or denylist an opcode by setting (allow) or clearing (deny)
its corresponding bit in the allowed bitmap.
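The allow/deny mechanism described above is essentially a per-VF bitmap indexed by virtchnl opcode. A minimal userspace sketch under that assumption; `opc_allowed` and the helper names are illustrative, not the driver's actual structures:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

#define VIRTCHNL_OP_MAX 64	/* illustrative opcode space */

static uint64_t opc_allowed;	/* one bit per opcode; set => allowed */

/* Allowlist an opcode by setting its bit. */
static void vf_allow_opc(unsigned int opc)
{
	if (opc < VIRTCHNL_OP_MAX)
		opc_allowed |= 1ULL << opc;
}

/* Denylist an opcode by clearing its bit. */
static void vf_deny_opc(unsigned int opc)
{
	if (opc < VIRTCHNL_OP_MAX)
		opc_allowed &= ~(1ULL << opc);
}

/* A message is processed only if its opcode bit is set. */
static bool vf_opc_is_allowed(unsigned int opc)
{
	return opc < VIRTCHNL_OP_MAX && (opc_allowed & (1ULL << opc));
}
```

Enabling a capability would set the bits for its opcode group and disabling would clear them, so unexpected opcodes from a VF are simply rejected.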

Signed-off-by: Michal Swiatkowski <michal.swiatkowski@intel.com>
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
---
 drivers/net/ethernet/intel/ice/Makefile       |   2 +-
 .../intel/ice/ice_virtchnl_allowlist.c        | 165 ++++++++++++++++++
 .../intel/ice/ice_virtchnl_allowlist.h        |  13 ++
 .../net/ethernet/intel/ice/ice_virtchnl_pf.c  |  18 ++
 .../net/ethernet/intel/ice/ice_virtchnl_pf.h  |   1 +
 include/linux/avf/virtchnl.h                  |   1 +
 6 files changed, 199 insertions(+), 1 deletion(-)
 create mode 100644 drivers/net/ethernet/intel/ice/ice_virtchnl_allowlist.c
 create mode 100644 drivers/net/ethernet/intel/ice/ice_virtchnl_allowlist.h

diff --git a/drivers/net/ethernet/intel/ice/Makefile b/drivers/net/ethernet/intel/ice/Makefile
index f391691e2c7e..dc24ce7d1c1e 100644
--- a/drivers/net/ethernet/intel/ice/Makefile
+++ b/drivers/net/ethernet/intel/ice/Makefile
@@ -26,7 +26,7 @@ ice-y := ice_main.o	\
 	 ice_fw_update.o \
 	 ice_lag.o	\
 	 ice_ethtool.o
-ice-$(CONFIG_PCI_IOV) += ice_virtchnl_pf.o ice_sriov.o ice_virtchnl_fdir.o
+ice-$(CONFIG_PCI_IOV) += ice_virtchnl_pf.o ice_sriov.o ice_virtchnl_allowlist.o ice_virtchnl_fdir.o
 ice-$(CONFIG_DCB) += ice_dcb.o ice_dcb_nl.o ice_dcb_lib.o
 ice-$(CONFIG_RFS_ACCEL) += ice_arfs.o
 ice-$(CONFIG_XDP_SOCKETS) += ice_xsk.o
diff --git a/drivers/net/ethernet/intel/ice/ice_virtchnl_allowlist.c b/drivers/net/ethernet/intel/ice/ice_virtchnl_allowlist.c
new file mode 100644
index 000000000000..64b1314d4761
--- /dev/null
+++ b/drivers/net/ethernet/intel/ice/ice_virtchnl_allowlist.c
@@ -0,0 +1,165 @@
+// SPDX-License-Identifier: GPL-2.0
+/* Copyright (C) 2018-2021, Intel Corporation. */
+
+#include "ice_virtchnl_allowlist.h"
+
+/* The purpose of this file is to share functionality to allowlist or denylist
+ * opcodes used in PF <-> VF communication. Groups of opcodes:
+ * - default -> should always be allowed after creating a VF,
+ *   default_allowlist_opcodes
+ * - opcodes needed by the VF to work correctly, but not associated with caps ->
+ *   should be allowed after successful VF resources allocation,
+ *   working_allowlist_opcodes
+ * - opcodes needed by VF when caps are activated
+ *
+ * Caps that don't use new opcodes (no opcodes should be allowed):
+ * - VIRTCHNL_VF_OFFLOAD_RSS_AQ
+ * - VIRTCHNL_VF_OFFLOAD_RSS_REG
+ * - VIRTCHNL_VF_OFFLOAD_WB_ON_ITR
+ * - VIRTCHNL_VF_OFFLOAD_CRC
+ * - VIRTCHNL_VF_OFFLOAD_RX_POLLING
+ * - VIRTCHNL_VF_OFFLOAD_RSS_PCTYPE_V2
+ * - VIRTCHNL_VF_OFFLOAD_ENCAP
+ * - VIRTCHNL_VF_OFFLOAD_ENCAP_CSUM
+ * - VIRTCHNL_VF_OFFLOAD_RX_ENCAP_CSUM
+ * - VIRTCHNL_VF_OFFLOAD_USO
+ */
+
+/* default opcodes to communicate with VF */
+static const u32 default_allowlist_opcodes[] = {
+	VIRTCHNL_OP_GET_VF_RESOURCES, VIRTCHNL_OP_VERSION, VIRTCHNL_OP_RESET_VF,
+};
+
+/* opcodes supported after successful VIRTCHNL_OP_GET_VF_RESOURCES */
+static const u32 working_allowlist_opcodes[] = {
+	VIRTCHNL_OP_CONFIG_TX_QUEUE, VIRTCHNL_OP_CONFIG_RX_QUEUE,
+	VIRTCHNL_OP_CONFIG_VSI_QUEUES, VIRTCHNL_OP_CONFIG_IRQ_MAP,
+	VIRTCHNL_OP_ENABLE_QUEUES, VIRTCHNL_OP_DISABLE_QUEUES,
+	VIRTCHNL_OP_GET_STATS, VIRTCHNL_OP_EVENT,
+};
+
+/* VIRTCHNL_VF_OFFLOAD_L2 */
+static const u32 l2_allowlist_opcodes[] = {
+	VIRTCHNL_OP_ADD_ETH_ADDR, VIRTCHNL_OP_DEL_ETH_ADDR,
+	VIRTCHNL_OP_CONFIG_PROMISCUOUS_MODE,
+};
+
+/* VIRTCHNL_VF_OFFLOAD_REQ_QUEUES */
+static const u32 req_queues_allowlist_opcodes[] = {
+	VIRTCHNL_OP_REQUEST_QUEUES,
+};
+
+/* VIRTCHNL_VF_OFFLOAD_VLAN */
+static const u32 vlan_allowlist_opcodes[] = {
+	VIRTCHNL_OP_ADD_VLAN, VIRTCHNL_OP_DEL_VLAN,
+	VIRTCHNL_OP_ENABLE_VLAN_STRIPPING, VIRTCHNL_OP_DISABLE_VLAN_STRIPPING,
+};
+
+/* VIRTCHNL_VF_OFFLOAD_RSS_PF */
+static const u32 rss_pf_allowlist_opcodes[] = {
+	VIRTCHNL_OP_CONFIG_RSS_KEY, VIRTCHNL_OP_CONFIG_RSS_LUT,
+	VIRTCHNL_OP_GET_RSS_HENA_CAPS, VIRTCHNL_OP_SET_RSS_HENA,
+};
+
+/* VIRTCHNL_VF_OFFLOAD_FDIR_PF */
+static const u32 fdir_pf_allowlist_opcodes[] = {
+	VIRTCHNL_OP_ADD_FDIR_FILTER, VIRTCHNL_OP_DEL_FDIR_FILTER,
+};
+
+struct allowlist_opcode_info {
+	const u32 *opcodes;
+	size_t size;
+};
+
+#define BIT_INDEX(caps) (HWEIGHT((caps) - 1))
+#define ALLOW_ITEM(caps, list) \
+	[BIT_INDEX(caps)] = { \
+		.opcodes = list, \
+		.size = ARRAY_SIZE(list) \
+	}
+static const struct allowlist_opcode_info allowlist_opcodes[] = {
+	ALLOW_ITEM(VIRTCHNL_VF_OFFLOAD_L2, l2_allowlist_opcodes),
+	ALLOW_ITEM(VIRTCHNL_VF_OFFLOAD_REQ_QUEUES, req_queues_allowlist_opcodes),
+	ALLOW_ITEM(VIRTCHNL_VF_OFFLOAD_VLAN, vlan_allowlist_opcodes),
+	ALLOW_ITEM(VIRTCHNL_VF_OFFLOAD_RSS_PF, rss_pf_allowlist_opcodes),
+	ALLOW_ITEM(VIRTCHNL_VF_OFFLOAD_FDIR_PF, fdir_pf_allowlist_opcodes),
+};
+
+/**
+ * ice_vc_is_opcode_allowed - check if this opcode is allowed on this VF
+ * @vf: pointer to VF structure
+ * @opcode: virtchnl opcode
+ *
+ * Return true if message is allowed on this VF
+ */
+bool ice_vc_is_opcode_allowed(struct ice_vf *vf, u32 opcode)
+{
+	if (opcode >= VIRTCHNL_OP_MAX)
+		return false;
+
+	return test_bit(opcode, vf->opcodes_allowlist);
+}
+
+/**
+ * ice_vc_allowlist_opcodes - allowlist selected opcodes
+ * @vf: pointer to VF structure
+ * @opcodes: array of opcodes to allowlist
+ * @size: size of opcodes array
+ *
+ * Function should be called to allowlist opcodes on VF.
+ */
+static void
+ice_vc_allowlist_opcodes(struct ice_vf *vf, const u32 *opcodes, size_t size)
+{
+	unsigned int i;
+
+	for (i = 0; i < size; i++)
+		set_bit(opcodes[i], vf->opcodes_allowlist);
+}
+
+/**
+ * ice_vc_clear_allowlist - clear all allowlist opcodes
+ * @vf: pointer to VF structure
+ */
+static void ice_vc_clear_allowlist(struct ice_vf *vf)
+{
+	bitmap_zero(vf->opcodes_allowlist, VIRTCHNL_OP_MAX);
+}
+
+/**
+ * ice_vc_set_default_allowlist - allowlist default opcodes for VF
+ * @vf: pointer to VF structure
+ */
+void ice_vc_set_default_allowlist(struct ice_vf *vf)
+{
+	ice_vc_clear_allowlist(vf);
+	ice_vc_allowlist_opcodes(vf, default_allowlist_opcodes,
+				 ARRAY_SIZE(default_allowlist_opcodes));
+}
+
+/**
+ * ice_vc_set_working_allowlist - allowlist opcodes needed by VF to work
+ * @vf: pointer to VF structure
+ *
+ * Allowlist opcodes that aren't associated with specific caps, but
+ * are needed by VF to work.
+ */
+void ice_vc_set_working_allowlist(struct ice_vf *vf)
+{
+	ice_vc_allowlist_opcodes(vf, working_allowlist_opcodes,
+				 ARRAY_SIZE(working_allowlist_opcodes));
+}
+
+/**
+ * ice_vc_set_caps_allowlist - allowlist VF opcodes according to caps
+ * @vf: pointer to VF structure
+ */
+void ice_vc_set_caps_allowlist(struct ice_vf *vf)
+{
+	unsigned long caps = vf->driver_caps;
+	unsigned int i;
+
+	for_each_set_bit(i, &caps, ARRAY_SIZE(allowlist_opcodes))
+		ice_vc_allowlist_opcodes(vf, allowlist_opcodes[i].opcodes,
+					 allowlist_opcodes[i].size);
+}
diff --git a/drivers/net/ethernet/intel/ice/ice_virtchnl_allowlist.h b/drivers/net/ethernet/intel/ice/ice_virtchnl_allowlist.h
new file mode 100644
index 000000000000..c33bc6ac3f54
--- /dev/null
+++ b/drivers/net/ethernet/intel/ice/ice_virtchnl_allowlist.h
@@ -0,0 +1,13 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* Copyright (C) 2018-2021, Intel Corporation. */
+
+#ifndef _ICE_VIRTCHNL_ALLOWLIST_H_
+#define _ICE_VIRTCHNL_ALLOWLIST_H_
+#include "ice.h"
+
+bool ice_vc_is_opcode_allowed(struct ice_vf *vf, u32 opcode);
+
+void ice_vc_set_default_allowlist(struct ice_vf *vf);
+void ice_vc_set_working_allowlist(struct ice_vf *vf);
+void ice_vc_set_caps_allowlist(struct ice_vf *vf);
+#endif
diff --git a/drivers/net/ethernet/intel/ice/ice_virtchnl_pf.c b/drivers/net/ethernet/intel/ice/ice_virtchnl_pf.c
index 0da9c84ed30f..f09367eb242a 100644
--- a/drivers/net/ethernet/intel/ice/ice_virtchnl_pf.c
+++ b/drivers/net/ethernet/intel/ice/ice_virtchnl_pf.c
@@ -5,6 +5,7 @@
 #include "ice_base.h"
 #include "ice_lib.h"
 #include "ice_fltr.h"
+#include "ice_virtchnl_allowlist.h"
 
 /**
  * ice_validate_vf_id - helper to check if VF ID is valid
@@ -1317,6 +1318,9 @@ bool ice_reset_all_vfs(struct ice_pf *pf, bool is_vflr)
 	ice_for_each_vf(pf, v) {
 		vf = &pf->vf[v];
 
+		vf->driver_caps = 0;
+		ice_vc_set_default_allowlist(vf);
+
 		ice_vf_fdir_exit(vf);
 		/* clean VF control VSI when resetting VFs since it should be
 		 * setup only when iAVF creates its first FDIR rule.
@@ -1421,6 +1425,9 @@ bool ice_reset_vf(struct ice_vf *vf, bool is_vflr)
 		usleep_range(10, 20);
 	}
 
+	vf->driver_caps = 0;
+	ice_vc_set_default_allowlist(vf);
+
 	/* Display a warning if VF didn't manage to reset in time, but need to
 	 * continue on with the operation.
 	 */
@@ -1633,6 +1640,7 @@ static void ice_set_dflt_settings_vfs(struct ice_pf *pf)
 		set_bit(ICE_VIRTCHNL_VF_CAP_L2, &vf->vf_caps);
 		vf->spoofchk = true;
 		vf->num_vf_qs = pf->num_qps_per_vf;
+		ice_vc_set_default_allowlist(vf);
 
 		/* ctrl_vsi_idx will be set to a valid value only when iAVF
 		 * creates its first fdir rule.
@@ -2135,6 +2143,9 @@ static int ice_vc_get_vf_res_msg(struct ice_vf *vf, u8 *msg)
 	/* match guest capabilities */
 	vf->driver_caps = vfres->vf_cap_flags;
 
+	ice_vc_set_caps_allowlist(vf);
+	ice_vc_set_working_allowlist(vf);
+
 	set_bit(ICE_VF_STATE_ACTIVE, vf->vf_states);
 
 err:
@@ -3964,6 +3975,13 @@ void ice_vc_process_vf_msg(struct ice_pf *pf, struct ice_rq_event_info *event)
 			err = -EINVAL;
 	}
 
+	if (!ice_vc_is_opcode_allowed(vf, v_opcode)) {
+		ice_vc_send_msg_to_vf(vf, v_opcode,
+				      VIRTCHNL_STATUS_ERR_NOT_SUPPORTED, NULL,
+				      0);
+		return;
+	}
+
 error_handler:
 	if (err) {
 		ice_vc_send_msg_to_vf(vf, v_opcode, VIRTCHNL_STATUS_ERR_PARAM,
diff --git a/drivers/net/ethernet/intel/ice/ice_virtchnl_pf.h b/drivers/net/ethernet/intel/ice/ice_virtchnl_pf.h
index 53391ac1f068..77ff0023f7be 100644
--- a/drivers/net/ethernet/intel/ice/ice_virtchnl_pf.h
+++ b/drivers/net/ethernet/intel/ice/ice_virtchnl_pf.h
@@ -110,6 +110,7 @@ struct ice_vf {
 	u16 num_vf_qs;			/* num of queue configured per VF */
 	struct ice_mdd_vf_events mdd_rx_events;
 	struct ice_mdd_vf_events mdd_tx_events;
+	DECLARE_BITMAP(opcodes_allowlist, VIRTCHNL_OP_MAX);
 };
 
 #ifdef CONFIG_PCI_IOV
diff --git a/include/linux/avf/virtchnl.h b/include/linux/avf/virtchnl.h
index e3d5ecf7cf41..228b90ef3361 100644
--- a/include/linux/avf/virtchnl.h
+++ b/include/linux/avf/virtchnl.h
@@ -139,6 +139,7 @@ enum virtchnl_ops {
 	/* opcode 34 - 46 are reserved */
 	VIRTCHNL_OP_ADD_FDIR_FILTER = 47,
 	VIRTCHNL_OP_DEL_FDIR_FILTER = 48,
+	VIRTCHNL_OP_MAX,
 };
 
 /* These macros are used to generate compilation errors if a structure/union
-- 
2.20.1



* [Intel-wired-lan] [PATCH S55 03/14] ice: Add Support for XPS
  2021-03-02 18:12 [Intel-wired-lan] [PATCH S55 01/14] ice: warn about potentially malicious VFs Tony Nguyen
  2021-03-02 18:12 ` [Intel-wired-lan] [PATCH S55 02/14] ice: Allow ignoring opcodes on specific VF Tony Nguyen
@ 2021-03-02 18:12 ` Tony Nguyen
  2021-03-12 22:13   ` Brelinski, TonyX
  2021-03-02 18:12 ` [Intel-wired-lan] [PATCH S55 04/14] ice: Delay netdev registration Tony Nguyen
                   ` (11 subsequent siblings)
  13 siblings, 1 reply; 28+ messages in thread
From: Tony Nguyen @ 2021-03-02 18:12 UTC (permalink / raw)
  To: intel-wired-lan

From: Benita Bose <benita.bose@intel.com>

Enable and configure XPS (Transmit Packet Steering). The added driver
code sets up the XPS map, which the kernel then uses for queue
selection during Tx.
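
The diff below guards the XPS setup with test_and_set_bit() so it runs only once per ring and later VSI rebuilds don't overwrite user settings. A minimal userspace sketch of that init-once pattern, with a C11 atomic_flag standing in for the kernel bit op and a counter standing in for the real netif_set_xps_queue() work (names are illustrative):

```c
#include <stdatomic.h>

/* stand-in for the ICE_TX_XPS_INIT_DONE bit in ring->xps_state */
struct tx_ring_state {
	atomic_flag xps_init_done;
	int xps_config_calls; /* counts how often the real work ran */
};

/* configure XPS only on the first call; later calls are no-ops so a
 * rebuild doesn't clobber whatever the user wrote via sysfs */
static void cfg_xps_tx_ring(struct tx_ring_state *ring)
{
	if (atomic_flag_test_and_set(&ring->xps_init_done))
		return; /* already initialized */
	ring->xps_config_calls++; /* the netif_set_xps_queue() call would go here */
}
```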

Signed-off-by: Benita Bose <benita.bose@intel.com>
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
---
Note: Emails to Benita will bounce as she has since left Intel
---
 drivers/net/ethernet/intel/ice/ice_base.c | 23 +++++++++++++++++++++++
 drivers/net/ethernet/intel/ice/ice_txrx.h |  6 ++++++
 2 files changed, 29 insertions(+)

diff --git a/drivers/net/ethernet/intel/ice/ice_base.c b/drivers/net/ethernet/intel/ice/ice_base.c
index a99658462e29..c28cfd85587f 100644
--- a/drivers/net/ethernet/intel/ice/ice_base.c
+++ b/drivers/net/ethernet/intel/ice/ice_base.c
@@ -214,6 +214,26 @@ static u16 ice_calc_q_handle(struct ice_vsi *vsi, struct ice_ring *ring, u8 tc)
 	return ring->q_index - vsi->tc_cfg.tc_info[tc].qoffset;
 }
 
+/**
+ * ice_cfg_xps_tx_ring - Configure XPS for a Tx ring
+ * @ring: The Tx ring to configure
+ *
+ * This enables/disables XPS for a given Tx descriptor ring
+ * based on the TCs enabled for the VSI that ring belongs to.
+ */
+static void ice_cfg_xps_tx_ring(struct ice_ring *ring)
+{
+	if (!ring->q_vector || !ring->netdev)
+		return;
+
+	/* We only initialize XPS once, so as not to overwrite user settings */
+	if (test_and_set_bit(ICE_TX_XPS_INIT_DONE, ring->xps_state))
+		return;
+
+	netif_set_xps_queue(ring->netdev, &ring->q_vector->affinity_mask,
+			    ring->q_index);
+}
+
 /**
  * ice_setup_tx_ctx - setup a struct ice_tlan_ctx instance
  * @ring: The Tx ring to configure
@@ -672,6 +692,9 @@ ice_vsi_cfg_txq(struct ice_vsi *vsi, struct ice_ring *ring,
 	u16 pf_q;
 	u8 tc;
 
+	/* Configure XPS */
+	ice_cfg_xps_tx_ring(ring);
+
 	pf_q = ring->reg_idx;
 	ice_setup_tx_ctx(ring, &tlan_ctx, pf_q);
 	/* copy context contents into the qg_buf */
diff --git a/drivers/net/ethernet/intel/ice/ice_txrx.h b/drivers/net/ethernet/intel/ice/ice_txrx.h
index 9d14bac658df..f72b154b03c5 100644
--- a/drivers/net/ethernet/intel/ice/ice_txrx.h
+++ b/drivers/net/ethernet/intel/ice/ice_txrx.h
@@ -195,6 +195,11 @@ struct ice_rxq_stats {
 	u64 gro_dropped; /* GRO returned dropped */
 };
 
+enum ice_ring_state_t {
+	ICE_TX_XPS_INIT_DONE,
+	ICE_TX_NBITS,
+};
+
 /* this enum matches hardware bits and is meant to be used by DYN_CTLN
  * registers and QINT registers or more generally anywhere in the manual
  * mentioning ITR_INDX, ITR_NONE cannot be used as an index 'n' into any
@@ -294,6 +299,7 @@ struct ice_ring {
 	};
 
 	struct rcu_head rcu;		/* to avoid race on free */
+	DECLARE_BITMAP(xps_state, ICE_TX_NBITS);	/* XPS Config State */
 	struct bpf_prog *xdp_prog;
 	struct xsk_buff_pool *xsk_pool;
 	u16 rx_offset;
-- 
2.20.1



* [Intel-wired-lan] [PATCH S55 04/14] ice: Delay netdev registration
  2021-03-02 18:12 [Intel-wired-lan] [PATCH S55 01/14] ice: warn about potentially malicious VFs Tony Nguyen
  2021-03-02 18:12 ` [Intel-wired-lan] [PATCH S55 02/14] ice: Allow ignoring opcodes on specific VF Tony Nguyen
  2021-03-02 18:12 ` [Intel-wired-lan] [PATCH S55 03/14] ice: Add Support for XPS Tony Nguyen
@ 2021-03-02 18:12 ` Tony Nguyen
  2021-03-10 23:44   ` Brelinski, TonyX
  2021-03-02 18:12 ` [Intel-wired-lan] [PATCH S55 05/14] ice: Update to use package info from ice segment Tony Nguyen
                   ` (10 subsequent siblings)
  13 siblings, 1 reply; 28+ messages in thread
From: Tony Nguyen @ 2021-03-02 18:12 UTC (permalink / raw)
  To: intel-wired-lan

From: Anirudh Venkataramanan <anirudh.venkataramanan@intel.com>

Once a netdev is registered, the corresponding network interface can
be immediately used by userspace utilities (like say NetworkManager).
This can be problematic if the driver technically isn't fully up yet.

Move netdev registration to the end of probe, as by this time the
driver data structures and device will be initialized as expected.

However, delaying netdev registration causes a failure in the aRFS flow
where the netdev->reg_state == NETREG_REGISTERED condition is checked.
It's not clear why this check was added to begin with, so remove it.
Local testing didn't indicate any issues with this change.

The state bit check in ice_open was put in as a stop-gap measure to
prevent a premature interface up operation. This is no longer needed,
so remove it.

Signed-off-by: Anirudh Venkataramanan <anirudh.venkataramanan@intel.com>
---
 drivers/net/ethernet/intel/ice/ice_arfs.c |  6 +-
 drivers/net/ethernet/intel/ice/ice_main.c | 93 +++++++++++------------
 2 files changed, 47 insertions(+), 52 deletions(-)

diff --git a/drivers/net/ethernet/intel/ice/ice_arfs.c b/drivers/net/ethernet/intel/ice/ice_arfs.c
index 6560acd76c94..88d98c9e5f91 100644
--- a/drivers/net/ethernet/intel/ice/ice_arfs.c
+++ b/drivers/net/ethernet/intel/ice/ice_arfs.c
@@ -581,8 +581,7 @@ void ice_free_cpu_rx_rmap(struct ice_vsi *vsi)
 		return;
 
 	netdev = vsi->netdev;
-	if (!netdev || !netdev->rx_cpu_rmap ||
-	    netdev->reg_state != NETREG_REGISTERED)
+	if (!netdev || !netdev->rx_cpu_rmap)
 		return;
 
 	free_irq_cpu_rmap(netdev->rx_cpu_rmap);
@@ -604,8 +603,7 @@ int ice_set_cpu_rx_rmap(struct ice_vsi *vsi)
 
 	pf = vsi->back;
 	netdev = vsi->netdev;
-	if (!pf || !netdev || !vsi->num_q_vectors ||
-	    vsi->netdev->reg_state != NETREG_REGISTERED)
+	if (!pf || !netdev || !vsi->num_q_vectors)
 		return -EINVAL;
 
 	netdev_dbg(netdev, "Setup CPU RMAP: vsi type 0x%x, ifname %s, q_vectors %d\n",
diff --git a/drivers/net/ethernet/intel/ice/ice_main.c b/drivers/net/ethernet/intel/ice/ice_main.c
index 9cf876e420c9..862abd9ed660 100644
--- a/drivers/net/ethernet/intel/ice/ice_main.c
+++ b/drivers/net/ethernet/intel/ice/ice_main.c
@@ -140,21 +140,10 @@ static int ice_init_mac_fltr(struct ice_pf *pf)
 
 	perm_addr = vsi->port_info->mac.perm_addr;
 	status = ice_fltr_add_mac_and_broadcast(vsi, perm_addr, ICE_FWD_TO_VSI);
-	if (!status)
-		return 0;
-
-	/* We aren't useful with no MAC filters, so unregister if we
-	 * had an error
-	 */
-	if (vsi->netdev->reg_state == NETREG_REGISTERED) {
-		dev_err(ice_pf_to_dev(pf), "Could not add MAC filters error %s. Unregistering device\n",
-			ice_stat_str(status));
-		unregister_netdev(vsi->netdev);
-		free_netdev(vsi->netdev);
-		vsi->netdev = NULL;
-	}
+	if (status)
+		return -EIO;
 
-	return -EIO;
+	return 0;
 }
 
 /**
@@ -2986,18 +2975,11 @@ static int ice_cfg_netdev(struct ice_vsi *vsi)
 	struct ice_netdev_priv *np;
 	struct net_device *netdev;
 	u8 mac_addr[ETH_ALEN];
-	int err;
-
-	err = ice_devlink_create_port(vsi);
-	if (err)
-		return err;
 
 	netdev = alloc_etherdev_mqs(sizeof(*np), vsi->alloc_txq,
 				    vsi->alloc_rxq);
-	if (!netdev) {
-		err = -ENOMEM;
-		goto err_destroy_devlink_port;
-	}
+	if (!netdev)
+		return -ENOMEM;
 
 	vsi->netdev = netdev;
 	np = netdev_priv(netdev);
@@ -3025,25 +3007,7 @@ static int ice_cfg_netdev(struct ice_vsi *vsi)
 	netdev->min_mtu = ETH_MIN_MTU;
 	netdev->max_mtu = ICE_MAX_MTU;
 
-	err = register_netdev(vsi->netdev);
-	if (err)
-		goto err_free_netdev;
-
-	devlink_port_type_eth_set(&vsi->devlink_port, vsi->netdev);
-
-	netif_carrier_off(vsi->netdev);
-
-	/* make sure transmit queues start off as stopped */
-	netif_tx_stop_all_queues(vsi->netdev);
-
 	return 0;
-
-err_free_netdev:
-	free_netdev(vsi->netdev);
-	vsi->netdev = NULL;
-err_destroy_devlink_port:
-	ice_devlink_destroy_port(vsi);
-	return err;
 }
 
 /**
@@ -3241,8 +3205,6 @@ static int ice_setup_pf_sw(struct ice_pf *pf)
 	if (vsi) {
 		ice_napi_del(vsi);
 		if (vsi->netdev) {
-			if (vsi->netdev->reg_state == NETREG_REGISTERED)
-				unregister_netdev(vsi->netdev);
 			free_netdev(vsi->netdev);
 			vsi->netdev = NULL;
 		}
@@ -3995,6 +3957,40 @@ static void ice_print_wake_reason(struct ice_pf *pf)
 	dev_info(ice_pf_to_dev(pf), "Wake reason: %s", wake_str);
 }
 
+/**
+ * ice_register_netdev - register netdev and devlink port
+ * @pf: pointer to the PF struct
+ */
+static int ice_register_netdev(struct ice_pf *pf)
+{
+	struct ice_vsi *vsi;
+	int err = 0;
+
+	vsi = ice_get_main_vsi(pf);
+	if (!vsi || !vsi->netdev)
+		return -EIO;
+
+	err = register_netdev(vsi->netdev);
+	if (err)
+		goto err_register_netdev;
+
+	netif_carrier_off(vsi->netdev);
+	netif_tx_stop_all_queues(vsi->netdev);
+	err = ice_devlink_create_port(vsi);
+	if (err)
+		goto err_devlink_create;
+
+	devlink_port_type_eth_set(&vsi->devlink_port, vsi->netdev);
+
+	return 0;
+err_devlink_create:
+	unregister_netdev(vsi->netdev);
+err_register_netdev:
+	free_netdev(vsi->netdev);
+	vsi->netdev = NULL;
+	return err;
+}
+
 /**
  * ice_probe - Device initialization routine
  * @pdev: PCI device information struct
@@ -4273,10 +4269,16 @@ ice_probe(struct pci_dev *pdev, const struct pci_device_id __always_unused *ent)
 	pcie_print_link_status(pf->pdev);
 
 probe_done:
+	err = ice_register_netdev(pf);
+	if (err)
+		goto err_netdev_reg;
+
 	/* ready to go, so clear down state bit */
 	clear_bit(__ICE_DOWN, pf->state);
+
 	return 0;
 
+err_netdev_reg:
 err_send_version_unroll:
 	ice_vsi_release_all(pf);
 err_alloc_sw_unroll:
@@ -6679,11 +6681,6 @@ int ice_open_internal(struct net_device *netdev)
 		return -EIO;
 	}
 
-	if (test_bit(__ICE_DOWN, pf->state)) {
-		netdev_err(netdev, "device is not ready yet\n");
-		return -EBUSY;
-	}
-
 	netif_carrier_off(netdev);
 
 	pi = vsi->port_info;
-- 
2.20.1



* [Intel-wired-lan] [PATCH S55 05/14] ice: Update to use package info from ice segment
  2021-03-02 18:12 [Intel-wired-lan] [PATCH S55 01/14] ice: warn about potentially malicious VFs Tony Nguyen
                   ` (2 preceding siblings ...)
  2021-03-02 18:12 ` [Intel-wired-lan] [PATCH S55 04/14] ice: Delay netdev registration Tony Nguyen
@ 2021-03-02 18:12 ` Tony Nguyen
  2021-03-11  0:02   ` Brelinski, TonyX
  2021-03-02 18:12 ` [Intel-wired-lan] [PATCH S55 06/14] ice: handle increasing Tx or Rx ring sizes Tony Nguyen
                   ` (9 subsequent siblings)
  13 siblings, 1 reply; 28+ messages in thread
From: Tony Nguyen @ 2021-03-02 18:12 UTC (permalink / raw)
  To: intel-wired-lan

From: Dan Nowlin <dan.nowlin@intel.com>

There are two package versions in the package binary. Today, these two
version numbers are the same. However, in the future that may change.

Update code to use the package info from the ice segment metadata
section, which is the package information that is actually downloaded to
the firmware during the download package process.

Signed-off-by: Dan Nowlin <dan.nowlin@intel.com>
---
 .../net/ethernet/intel/ice/ice_adminq_cmd.h   |  1 +
 .../net/ethernet/intel/ice/ice_flex_pipe.c    | 40 ++++++++++---------
 .../net/ethernet/intel/ice/ice_flex_type.h    |  9 +++++
 drivers/net/ethernet/intel/ice/ice_type.h     |  8 ++--
 4 files changed, 36 insertions(+), 22 deletions(-)

diff --git a/drivers/net/ethernet/intel/ice/ice_adminq_cmd.h b/drivers/net/ethernet/intel/ice/ice_adminq_cmd.h
index 770c99a5d181..8c22d0cda153 100644
--- a/drivers/net/ethernet/intel/ice/ice_adminq_cmd.h
+++ b/drivers/net/ethernet/intel/ice/ice_adminq_cmd.h
@@ -1807,6 +1807,7 @@ struct ice_pkg_ver {
 };
 
 #define ICE_PKG_NAME_SIZE	32
+#define ICE_SEG_ID_SIZE		28
 #define ICE_SEG_NAME_SIZE	28
 
 struct ice_aqc_get_pkg_info {
diff --git a/drivers/net/ethernet/intel/ice/ice_flex_pipe.c b/drivers/net/ethernet/intel/ice/ice_flex_pipe.c
index 88a0c2daf29f..01d6a64a5a27 100644
--- a/drivers/net/ethernet/intel/ice/ice_flex_pipe.c
+++ b/drivers/net/ethernet/intel/ice/ice_flex_pipe.c
@@ -1063,32 +1063,36 @@ ice_download_pkg(struct ice_hw *hw, struct ice_seg *ice_seg)
 static enum ice_status
 ice_init_pkg_info(struct ice_hw *hw, struct ice_pkg_hdr *pkg_hdr)
 {
-	struct ice_global_metadata_seg *meta_seg;
 	struct ice_generic_seg_hdr *seg_hdr;
 
 	if (!pkg_hdr)
 		return ICE_ERR_PARAM;
 
-	meta_seg = (struct ice_global_metadata_seg *)
-		   ice_find_seg_in_pkg(hw, SEGMENT_TYPE_METADATA, pkg_hdr);
-	if (meta_seg) {
-		hw->pkg_ver = meta_seg->pkg_ver;
-		memcpy(hw->pkg_name, meta_seg->pkg_name, sizeof(hw->pkg_name));
+	seg_hdr = ice_find_seg_in_pkg(hw, SEGMENT_TYPE_ICE, pkg_hdr);
+	if (seg_hdr) {
+		struct ice_meta_sect *meta;
+		struct ice_pkg_enum state;
+
+		memset(&state, 0, sizeof(state));
+
+		/* Get package information from the Metadata Section */
+		meta = ice_pkg_enum_section((struct ice_seg *)seg_hdr, &state,
+					    ICE_SID_METADATA);
+		if (!meta) {
+			ice_debug(hw, ICE_DBG_INIT, "Did not find ice metadata section in package\n");
+			return ICE_ERR_CFG;
+		}
+
+		hw->pkg_ver = meta->ver;
+		memcpy(hw->pkg_name, meta->name, sizeof(meta->name));
 
 		ice_debug(hw, ICE_DBG_PKG, "Pkg: %d.%d.%d.%d, %s\n",
-			  meta_seg->pkg_ver.major, meta_seg->pkg_ver.minor,
-			  meta_seg->pkg_ver.update, meta_seg->pkg_ver.draft,
-			  meta_seg->pkg_name);
-	} else {
-		ice_debug(hw, ICE_DBG_INIT, "Did not find metadata segment in driver package\n");
-		return ICE_ERR_CFG;
-	}
+			  meta->ver.major, meta->ver.minor, meta->ver.update,
+			  meta->ver.draft, meta->name);
 
-	seg_hdr = ice_find_seg_in_pkg(hw, SEGMENT_TYPE_ICE, pkg_hdr);
-	if (seg_hdr) {
-		hw->ice_pkg_ver = seg_hdr->seg_format_ver;
-		memcpy(hw->ice_pkg_name, seg_hdr->seg_id,
-		       sizeof(hw->ice_pkg_name));
+		hw->ice_seg_fmt_ver = seg_hdr->seg_format_ver;
+		memcpy(hw->ice_seg_id, seg_hdr->seg_id,
+		       sizeof(hw->ice_seg_id));
 
 		ice_debug(hw, ICE_DBG_PKG, "Ice Seg: %d.%d.%d.%d, %s\n",
 			  seg_hdr->seg_format_ver.major,
diff --git a/drivers/net/ethernet/intel/ice/ice_flex_type.h b/drivers/net/ethernet/intel/ice/ice_flex_type.h
index 2221ae3b22f6..bc20cff7ab9d 100644
--- a/drivers/net/ethernet/intel/ice/ice_flex_type.h
+++ b/drivers/net/ethernet/intel/ice/ice_flex_type.h
@@ -109,6 +109,7 @@ struct ice_buf_hdr {
 	(ent_sz))
 
 /* ice package section IDs */
+#define ICE_SID_METADATA		1
 #define ICE_SID_XLT0_SW			10
 #define ICE_SID_XLT_KEY_BUILDER_SW	11
 #define ICE_SID_XLT1_SW			12
@@ -117,6 +118,14 @@ struct ice_buf_hdr {
 #define ICE_SID_PROFID_REDIR_SW		15
 #define ICE_SID_FLD_VEC_SW		16
 #define ICE_SID_CDID_KEY_BUILDER_SW	17
+
+struct ice_meta_sect {
+	struct ice_pkg_ver ver;
+#define ICE_META_SECT_NAME_SIZE	28
+	char name[ICE_META_SECT_NAME_SIZE];
+	__le32 track_id;
+};
+
 #define ICE_SID_CDID_REDIR_SW		18
 
 #define ICE_SID_XLT0_ACL		20
diff --git a/drivers/net/ethernet/intel/ice/ice_type.h b/drivers/net/ethernet/intel/ice/ice_type.h
index 420fd487fd57..8545cba987b1 100644
--- a/drivers/net/ethernet/intel/ice/ice_type.h
+++ b/drivers/net/ethernet/intel/ice/ice_type.h
@@ -805,13 +805,13 @@ struct ice_hw {
 
 	enum ice_aq_err pkg_dwnld_status;
 
-	/* Driver's package ver - (from the Metadata seg) */
+	/* Driver's package ver - (from the Ice Metadata section) */
 	struct ice_pkg_ver pkg_ver;
 	u8 pkg_name[ICE_PKG_NAME_SIZE];
 
-	/* Driver's Ice package version (from the Ice seg) */
-	struct ice_pkg_ver ice_pkg_ver;
-	u8 ice_pkg_name[ICE_PKG_NAME_SIZE];
+	/* Driver's Ice segment format version and ID (from the Ice seg) */
+	struct ice_pkg_ver ice_seg_fmt_ver;
+	u8 ice_seg_id[ICE_SEG_ID_SIZE];
 
 	/* Pointer to the ice segment */
 	struct ice_seg *seg;
-- 
2.20.1



* [Intel-wired-lan] [PATCH S55 06/14] ice: handle increasing Tx or Rx ring sizes
  2021-03-02 18:12 [Intel-wired-lan] [PATCH S55 01/14] ice: warn about potentially malicious VFs Tony Nguyen
                   ` (3 preceding siblings ...)
  2021-03-02 18:12 ` [Intel-wired-lan] [PATCH S55 05/14] ice: Update to use package info from ice segment Tony Nguyen
@ 2021-03-02 18:12 ` Tony Nguyen
  2021-03-11  0:27   ` Brelinski, TonyX
  2021-03-02 18:12 ` [Intel-wired-lan] [PATCH S55 07/14] ice: use kernel definitions for IANA protocol ports and ether-types Tony Nguyen
                   ` (8 subsequent siblings)
  13 siblings, 1 reply; 28+ messages in thread
From: Tony Nguyen @ 2021-03-02 18:12 UTC (permalink / raw)
  To: intel-wired-lan

From: Paul M Stillwell Jr <paul.m.stillwell.jr@intel.com>

There is an issue when the number of Tx or Rx rings is increased using
'ethtool -L ...': the new rings don't get the correct ITR values,
because when we rebuild the VSI we don't know that some of the
rings may be new.

Fix this by looking at the original number of rings and
determining if the rings in ice_vsi_rebuild_set_coalesce()
were not present in the original rings received in
ice_vsi_rebuild_get_coalesce().

Also change the code to return an error if we can't allocate
memory for the coalesce data in ice_vsi_rebuild().
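
The per-ring restore decision the patch implements can be sketched as a small pure function (illustrative names; `stored[]` models the ice_coalesce_stored array with the valid flags this patch adds, and "old_count" models the number of entries captured before the rebuild):

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

struct stored_itr {
	uint16_t itr;
	bool valid; /* was this ring present before the rebuild? */
};

/* Pick the ITR to program for ring i after a rebuild:
 * - if ring i existed before (i < old_count and its entry is valid),
 *   restore its own previous setting;
 * - otherwise the ring is new, so fall back to the first element,
 *   assumed to reflect a global "ethtool -C" style setting.
 */
static uint16_t restore_itr(const struct stored_itr *stored,
			    size_t old_count, size_t i)
{
	if (i < old_count && stored[i].valid)
		return stored[i].itr;
	return stored[0].itr;
}
```

The driver applies this choice separately per Tx and Rx ring container, which is why the patch tracks tx_valid and rx_valid independently.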

Signed-off-by: Paul M Stillwell Jr <paul.m.stillwell.jr@intel.com>
---
 drivers/net/ethernet/intel/ice/ice_lib.c  | 123 ++++++++++++++++------
 drivers/net/ethernet/intel/ice/ice_txrx.h |   2 +
 2 files changed, 92 insertions(+), 33 deletions(-)

diff --git a/drivers/net/ethernet/intel/ice/ice_lib.c b/drivers/net/ethernet/intel/ice/ice_lib.c
index 1fad88b369d3..a305dc2c3c10 100644
--- a/drivers/net/ethernet/intel/ice/ice_lib.c
+++ b/drivers/net/ethernet/intel/ice/ice_lib.c
@@ -2862,38 +2862,46 @@ int ice_vsi_release(struct ice_vsi *vsi)
 }
 
 /**
- * ice_vsi_rebuild_update_coalesce - set coalesce for a q_vector
+ * ice_vsi_rebuild_update_coalesce_intrl - set interrupt rate limit for a q_vector
  * @q_vector: pointer to q_vector which is being updated
- * @coalesce: pointer to array of struct with stored coalesce
+ * @stored_intrl_setting: original INTRL setting
  *
  * Set coalesce param in q_vector and update these parameters in HW.
  */
 static void
-ice_vsi_rebuild_update_coalesce(struct ice_q_vector *q_vector,
-				struct ice_coalesce_stored *coalesce)
+ice_vsi_rebuild_update_coalesce_intrl(struct ice_q_vector *q_vector,
+				      u16 stored_intrl_setting)
 {
-	struct ice_ring_container *rx_rc = &q_vector->rx;
-	struct ice_ring_container *tx_rc = &q_vector->tx;
 	struct ice_hw *hw = &q_vector->vsi->back->hw;
 
-	tx_rc->itr_setting = coalesce->itr_tx;
-	rx_rc->itr_setting = coalesce->itr_rx;
-
-	/* dynamic ITR values will be updated during Tx/Rx */
-	if (!ITR_IS_DYNAMIC(tx_rc->itr_setting))
-		wr32(hw, GLINT_ITR(tx_rc->itr_idx, q_vector->reg_idx),
-		     ITR_REG_ALIGN(tx_rc->itr_setting) >>
-		     ICE_ITR_GRAN_S);
-	if (!ITR_IS_DYNAMIC(rx_rc->itr_setting))
-		wr32(hw, GLINT_ITR(rx_rc->itr_idx, q_vector->reg_idx),
-		     ITR_REG_ALIGN(rx_rc->itr_setting) >>
-		     ICE_ITR_GRAN_S);
-
-	q_vector->intrl = coalesce->intrl;
+	q_vector->intrl = stored_intrl_setting;
 	wr32(hw, GLINT_RATE(q_vector->reg_idx),
 	     ice_intrl_usec_to_reg(q_vector->intrl, hw->intrl_gran));
 }
 
+/**
+ * ice_vsi_rebuild_update_coalesce_itr - set coalesce for a q_vector
+ * @q_vector: pointer to q_vector which is being updated
+ * @rc: pointer to ring container
+ * @stored_itr_setting: original ITR setting
+ *
+ * Set coalesce param in q_vector and update these parameters in HW.
+ */
+static void
+ice_vsi_rebuild_update_coalesce_itr(struct ice_q_vector *q_vector,
+				    struct ice_ring_container *rc,
+				    u16 stored_itr_setting)
+{
+	struct ice_hw *hw = &q_vector->vsi->back->hw;
+
+	rc->itr_setting = stored_itr_setting;
+
+	/* dynamic ITR values will be updated during Tx/Rx */
+	if (!ITR_IS_DYNAMIC(rc->itr_setting))
+		wr32(hw, GLINT_ITR(rc->itr_idx, q_vector->reg_idx),
+		     ITR_REG_ALIGN(rc->itr_setting) >> ICE_ITR_GRAN_S);
+}
+
 /**
  * ice_vsi_rebuild_get_coalesce - get coalesce from all q_vectors
  * @vsi: VSI connected with q_vectors
@@ -2913,6 +2921,11 @@ ice_vsi_rebuild_get_coalesce(struct ice_vsi *vsi,
 		coalesce[i].itr_tx = q_vector->tx.itr_setting;
 		coalesce[i].itr_rx = q_vector->rx.itr_setting;
 		coalesce[i].intrl = q_vector->intrl;
+
+		if (i < vsi->num_txq)
+			coalesce[i].tx_valid = true;
+		if (i < vsi->num_rxq)
+			coalesce[i].rx_valid = true;
 	}
 
 	return vsi->num_q_vectors;
@@ -2937,17 +2950,59 @@ ice_vsi_rebuild_set_coalesce(struct ice_vsi *vsi,
 	if ((size && !coalesce) || !vsi)
 		return;
 
-	for (i = 0; i < size && i < vsi->num_q_vectors; i++)
-		ice_vsi_rebuild_update_coalesce(vsi->q_vectors[i],
-						&coalesce[i]);
-
-	/* number of q_vectors increased, so assume coalesce settings were
-	 * changed globally (i.e. ethtool -C eth0 instead of per-queue) and use
-	 * the previous settings from q_vector 0 for all of the new q_vectors
+	/* There are a couple of cases that have to be handled here:
+	 *   1. The case where the number of queue vectors stays the same, but
+	 *      the number of Tx or Rx rings changes (the first for loop)
+	 *   2. The case where the number of queue vectors increased (the
+	 *      second for loop)
 	 */
-	for (; i < vsi->num_q_vectors; i++)
-		ice_vsi_rebuild_update_coalesce(vsi->q_vectors[i],
-						&coalesce[0]);
+	for (i = 0; i < size && i < vsi->num_q_vectors; i++) {
+		/* There are 2 cases to handle here and they are the same for
+		 * both Tx and Rx:
+		 *   if the entry was valid previously (coalesce[i].[tr]x_valid
+		 *   and the loop variable is less than the number of rings
+		 *   allocated, then write the previous values
+		 *
+		 *   if the entry was not valid previously, but the number of
+		 *   rings is less than are allocated (this means the number of
+		 *   rings increased from previously), then write out the
+		 *   values in the first element
+		 */
+		if (i < vsi->alloc_rxq && coalesce[i].rx_valid)
+			ice_vsi_rebuild_update_coalesce_itr(vsi->q_vectors[i],
+							    &vsi->q_vectors[i]->rx,
+							    coalesce[i].itr_rx);
+		else if (i < vsi->alloc_rxq)
+			ice_vsi_rebuild_update_coalesce_itr(vsi->q_vectors[i],
+							    &vsi->q_vectors[i]->rx,
+							    coalesce[0].itr_rx);
+
+		if (i < vsi->alloc_txq && coalesce[i].tx_valid)
+			ice_vsi_rebuild_update_coalesce_itr(vsi->q_vectors[i],
+							    &vsi->q_vectors[i]->tx,
+							    coalesce[i].itr_tx);
+		else if (i < vsi->alloc_txq)
+			ice_vsi_rebuild_update_coalesce_itr(vsi->q_vectors[i],
+							    &vsi->q_vectors[i]->tx,
+							    coalesce[0].itr_tx);
+
+		ice_vsi_rebuild_update_coalesce_intrl(vsi->q_vectors[i],
+						      coalesce[i].intrl);
+	}
+
+	/* the number of queue vectors increased so write whatever is in
+	 * the first element
+	 */
+	for (; i < vsi->num_q_vectors; i++) {
+		ice_vsi_rebuild_update_coalesce_itr(vsi->q_vectors[i],
+						    &vsi->q_vectors[i]->tx,
+						    coalesce[0].itr_tx);
+		ice_vsi_rebuild_update_coalesce_itr(vsi->q_vectors[i],
+						    &vsi->q_vectors[i]->rx,
+						    coalesce[0].itr_rx);
+		ice_vsi_rebuild_update_coalesce_intrl(vsi->q_vectors[i],
+						      coalesce[0].intrl);
+	}
 }
 
 /**
@@ -2976,9 +3031,11 @@ int ice_vsi_rebuild(struct ice_vsi *vsi, bool init_vsi)
 
 	coalesce = kcalloc(vsi->num_q_vectors,
 			   sizeof(struct ice_coalesce_stored), GFP_KERNEL);
-	if (coalesce)
-		prev_num_q_vectors = ice_vsi_rebuild_get_coalesce(vsi,
-								  coalesce);
+	if (!coalesce)
+		return -ENOMEM;
+
+	prev_num_q_vectors = ice_vsi_rebuild_get_coalesce(vsi, coalesce);
+
 	ice_rm_vsi_lan_cfg(vsi->port_info, vsi->idx);
 	ice_vsi_free_q_vectors(vsi);
 
diff --git a/drivers/net/ethernet/intel/ice/ice_txrx.h b/drivers/net/ethernet/intel/ice/ice_txrx.h
index f72b154b03c5..a13eb8cf1731 100644
--- a/drivers/net/ethernet/intel/ice/ice_txrx.h
+++ b/drivers/net/ethernet/intel/ice/ice_txrx.h
@@ -359,6 +359,8 @@ struct ice_coalesce_stored {
 	u16 itr_tx;
 	u16 itr_rx;
 	u8 intrl;
+	u8 tx_valid;
+	u8 rx_valid;
 };
 
 /* iterator for handling rings in ring container */
-- 
2.20.1


^ permalink raw reply related	[flat|nested] 28+ messages in thread

* [Intel-wired-lan] [PATCH S55 07/14] ice: use kernel definitions for IANA protocol ports and ether-types
  2021-03-02 18:12 [Intel-wired-lan] [PATCH S55 01/14] ice: warn about potentially malicious VFs Tony Nguyen
                   ` (4 preceding siblings ...)
  2021-03-02 18:12 ` [Intel-wired-lan] [PATCH S55 06/14] ice: handle increasing Tx or Rx ring sizes Tony Nguyen
@ 2021-03-02 18:12 ` Tony Nguyen
  2021-03-19 22:43   ` Brelinski, TonyX
  2021-03-02 18:12 ` [Intel-wired-lan] [PATCH S55 08/14] ice: change link misconfiguration message Tony Nguyen
                   ` (7 subsequent siblings)
  13 siblings, 1 reply; 28+ messages in thread
From: Tony Nguyen @ 2021-03-02 18:12 UTC (permalink / raw)
  To: intel-wired-lan

From: Bruce Allan <bruce.w.allan@intel.com>

The well-known IANA protocol port 3260 (iscsi-target 0x0cbc) and the
ether-types 0x8906 (ETH_P_FCOE) and 0x8914 (ETH_P_FIP) are already defined
in kernel header files.  Use those definitions instead of open-coding the
same.

Signed-off-by: Bruce Allan <bruce.w.allan@intel.com>
---
 drivers/net/ethernet/intel/ice/ice.h         | 3 +++
 drivers/net/ethernet/intel/ice/ice_dcb.c     | 8 ++++----
 drivers/net/ethernet/intel/ice/ice_dcb_lib.c | 2 +-
 drivers/net/ethernet/intel/ice/ice_type.h    | 3 ---
 4 files changed, 8 insertions(+), 8 deletions(-)

diff --git a/drivers/net/ethernet/intel/ice/ice.h b/drivers/net/ethernet/intel/ice/ice.h
index ca94b01626d2..a0233962a672 100644
--- a/drivers/net/ethernet/intel/ice/ice.h
+++ b/drivers/net/ethernet/intel/ice/ice.h
@@ -44,6 +44,9 @@
 #include <net/gre.h>
 #include <net/udp_tunnel.h>
 #include <net/vxlan.h>
+#if IS_ENABLED(CONFIG_DCB)
+#include <scsi/iscsi_proto.h>
+#endif /* CONFIG_DCB */
 #include "ice_devids.h"
 #include "ice_type.h"
 #include "ice_txrx.h"
diff --git a/drivers/net/ethernet/intel/ice/ice_dcb.c b/drivers/net/ethernet/intel/ice/ice_dcb.c
index 211ac6f907ad..356fc0f025a2 100644
--- a/drivers/net/ethernet/intel/ice/ice_dcb.c
+++ b/drivers/net/ethernet/intel/ice/ice_dcb.c
@@ -804,7 +804,7 @@ ice_cee_to_dcb_cfg(struct ice_aqc_get_cee_dcb_cfg_resp *cee_cfg,
 			ice_aqc_cee_app_mask = ICE_AQC_CEE_APP_FCOE_M;
 			ice_aqc_cee_app_shift = ICE_AQC_CEE_APP_FCOE_S;
 			ice_app_sel_type = ICE_APP_SEL_ETHTYPE;
-			ice_app_prot_id_type = ICE_APP_PROT_ID_FCOE;
+			ice_app_prot_id_type = ETH_P_FCOE;
 		} else if (i == 1) {
 			/* iSCSI APP */
 			ice_aqc_cee_status_mask = ICE_AQC_CEE_ISCSI_STATUS_M;
@@ -812,14 +812,14 @@ ice_cee_to_dcb_cfg(struct ice_aqc_get_cee_dcb_cfg_resp *cee_cfg,
 			ice_aqc_cee_app_mask = ICE_AQC_CEE_APP_ISCSI_M;
 			ice_aqc_cee_app_shift = ICE_AQC_CEE_APP_ISCSI_S;
 			ice_app_sel_type = ICE_APP_SEL_TCPIP;
-			ice_app_prot_id_type = ICE_APP_PROT_ID_ISCSI;
+			ice_app_prot_id_type = ISCSI_LISTEN_PORT;
 
 			for (j = 0; j < cmp_dcbcfg->numapps; j++) {
 				u16 prot_id = cmp_dcbcfg->app[j].prot_id;
 				u8 sel = cmp_dcbcfg->app[j].selector;
 
 				if  (sel == ICE_APP_SEL_TCPIP &&
-				     (prot_id == ICE_APP_PROT_ID_ISCSI ||
+				     (prot_id == ISCSI_LISTEN_PORT ||
 				      prot_id == ICE_APP_PROT_ID_ISCSI_860)) {
 					ice_app_prot_id_type = prot_id;
 					break;
@@ -832,7 +832,7 @@ ice_cee_to_dcb_cfg(struct ice_aqc_get_cee_dcb_cfg_resp *cee_cfg,
 			ice_aqc_cee_app_mask = ICE_AQC_CEE_APP_FIP_M;
 			ice_aqc_cee_app_shift = ICE_AQC_CEE_APP_FIP_S;
 			ice_app_sel_type = ICE_APP_SEL_ETHTYPE;
-			ice_app_prot_id_type = ICE_APP_PROT_ID_FIP;
+			ice_app_prot_id_type = ETH_P_FIP;
 		}
 
 		status = (tlv_status & ice_aqc_cee_status_mask) >>
diff --git a/drivers/net/ethernet/intel/ice/ice_dcb_lib.c b/drivers/net/ethernet/intel/ice/ice_dcb_lib.c
index 1e8f71ffc8ce..df02cffdf209 100644
--- a/drivers/net/ethernet/intel/ice/ice_dcb_lib.c
+++ b/drivers/net/ethernet/intel/ice/ice_dcb_lib.c
@@ -563,7 +563,7 @@ static int ice_dcb_sw_dflt_cfg(struct ice_pf *pf, bool ets_willing, bool locked)
 	dcbcfg->numapps = 1;
 	dcbcfg->app[0].selector = ICE_APP_SEL_ETHTYPE;
 	dcbcfg->app[0].priority = 3;
-	dcbcfg->app[0].prot_id = ICE_APP_PROT_ID_FCOE;
+	dcbcfg->app[0].prot_id = ETH_P_FCOE;
 
 	ret = ice_pf_dcb_cfg(pf, dcbcfg, locked);
 	kfree(dcbcfg);
diff --git a/drivers/net/ethernet/intel/ice/ice_type.h b/drivers/net/ethernet/intel/ice/ice_type.h
index 8545cba987b1..208989b5629d 100644
--- a/drivers/net/ethernet/intel/ice/ice_type.h
+++ b/drivers/net/ethernet/intel/ice/ice_type.h
@@ -561,10 +561,7 @@ struct ice_dcb_app_priority_table {
 #define ICE_TLV_STATUS_OPER	0x1
 #define ICE_TLV_STATUS_SYNC	0x2
 #define ICE_TLV_STATUS_ERR	0x4
-#define ICE_APP_PROT_ID_FCOE	0x8906
-#define ICE_APP_PROT_ID_ISCSI	0x0cbc
 #define ICE_APP_PROT_ID_ISCSI_860 0x035c
-#define ICE_APP_PROT_ID_FIP	0x8914
 #define ICE_APP_SEL_ETHTYPE	0x1
 #define ICE_APP_SEL_TCPIP	0x2
 #define ICE_CEE_APP_SEL_ETHTYPE	0x0
-- 
2.20.1



* [Intel-wired-lan] [PATCH S55 08/14] ice: change link misconfiguration message
  2021-03-02 18:12 [Intel-wired-lan] [PATCH S55 01/14] ice: warn about potentially malicious VFs Tony Nguyen
                   ` (5 preceding siblings ...)
  2021-03-02 18:12 ` [Intel-wired-lan] [PATCH S55 07/14] ice: use kernel definitions for IANA protocol ports and ether-types Tony Nguyen
@ 2021-03-02 18:12 ` Tony Nguyen
  2021-03-11  0:31   ` Brelinski, TonyX
  2021-03-02 18:12 ` [Intel-wired-lan] [PATCH S55 09/14] ice: remove unnecessary duplicated AQ command flag setting Tony Nguyen
                   ` (6 subsequent siblings)
  13 siblings, 1 reply; 28+ messages in thread
From: Tony Nguyen @ 2021-03-02 18:12 UTC (permalink / raw)
  To: intel-wired-lan

From: Paul Greenwalt <paul.greenwalt@intel.com>

Change the link misconfiguration message, since the configuration
could be intentional on the user's part.

Signed-off-by: Paul Greenwalt <paul.greenwalt@intel.com>
---
 drivers/net/ethernet/intel/ice/ice_main.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/net/ethernet/intel/ice/ice_main.c b/drivers/net/ethernet/intel/ice/ice_main.c
index 862abd9ed660..73e328b0680b 100644
--- a/drivers/net/ethernet/intel/ice/ice_main.c
+++ b/drivers/net/ethernet/intel/ice/ice_main.c
@@ -598,7 +598,7 @@ static void ice_print_topo_conflict(struct ice_vsi *vsi)
 	case ICE_AQ_LINK_TOPO_UNREACH_PRT:
 	case ICE_AQ_LINK_TOPO_UNDRUTIL_PRT:
 	case ICE_AQ_LINK_TOPO_UNDRUTIL_MEDIA:
-		netdev_info(vsi->netdev, "Possible mis-configuration of the Ethernet port detected, please use the Intel(R) Ethernet Port Configuration Tool application to address the issue.\n");
+		netdev_info(vsi->netdev, "Potential misconfiguration of the Ethernet port detected. If it was not intended, please use the Intel(R) Ethernet Port Configuration Tool to address the issue.\n");
 		break;
 	case ICE_AQ_LINK_TOPO_UNSUPP_MEDIA:
 		netdev_info(vsi->netdev, "Rx/Tx is disabled on this device because an unsupported module type was detected. Refer to the Intel(R) Ethernet Adapters and Devices User Guide for a list of supported modules.\n");
-- 
2.20.1



* [Intel-wired-lan] [PATCH S55 09/14] ice: remove unnecessary duplicated AQ command flag setting
  2021-03-02 18:12 [Intel-wired-lan] [PATCH S55 01/14] ice: warn about potentially malicious VFs Tony Nguyen
                   ` (6 preceding siblings ...)
  2021-03-02 18:12 ` [Intel-wired-lan] [PATCH S55 08/14] ice: change link misconfiguration message Tony Nguyen
@ 2021-03-02 18:12 ` Tony Nguyen
  2021-03-11  0:33   ` Brelinski, TonyX
  2021-03-02 18:12 ` [Intel-wired-lan] [PATCH S55 10/14] ice: Check for bail out condition early Tony Nguyen
                   ` (5 subsequent siblings)
  13 siblings, 1 reply; 28+ messages in thread
From: Tony Nguyen @ 2021-03-02 18:12 UTC (permalink / raw)
  To: intel-wired-lan

From: Bruce Allan <bruce.w.allan@intel.com>

Commit a012dca9f7a2 ("ice: add ethtool -m support for reading i2c eeprom
modules") unnecessarily added the ICE_AQ_FLAG_BUF flag to the descriptor
when sending the indirect Read/Write SFF EEPROM AQ command. The flag is
already added later in the code flow for all indirect AQ commands, i.e.
commands that provide an additional data buffer.

Signed-off-by: Bruce Allan <bruce.w.allan@intel.com>
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
---
 drivers/net/ethernet/intel/ice/ice_common.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/net/ethernet/intel/ice/ice_common.c b/drivers/net/ethernet/intel/ice/ice_common.c
index a20edf1538a0..105504c8cfe7 100644
--- a/drivers/net/ethernet/intel/ice/ice_common.c
+++ b/drivers/net/ethernet/intel/ice/ice_common.c
@@ -3186,7 +3186,7 @@ ice_aq_sff_eeprom(struct ice_hw *hw, u16 lport, u8 bus_addr,
 
 	ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_sff_eeprom);
 	cmd = &desc.params.read_write_sff_param;
-	desc.flags = cpu_to_le16(ICE_AQ_FLAG_RD | ICE_AQ_FLAG_BUF);
+	desc.flags = cpu_to_le16(ICE_AQ_FLAG_RD);
 	cmd->lport_num = (u8)(lport & 0xff);
 	cmd->lport_num_valid = (u8)((lport >> 8) & 0x01);
 	cmd->i2c_bus_addr = cpu_to_le16(((bus_addr >> 1) &
-- 
2.20.1



* [Intel-wired-lan] [PATCH S55 10/14] ice: Check for bail out condition early
  2021-03-02 18:12 [Intel-wired-lan] [PATCH S55 01/14] ice: warn about potentially malicious VFs Tony Nguyen
                   ` (7 preceding siblings ...)
  2021-03-02 18:12 ` [Intel-wired-lan] [PATCH S55 09/14] ice: remove unnecessary duplicated AQ command flag setting Tony Nguyen
@ 2021-03-02 18:12 ` Tony Nguyen
  2021-03-11 22:52   ` Brelinski, TonyX
  2021-03-02 18:12 ` [Intel-wired-lan] [PATCH S55 11/14] ice: correct memory allocation call Tony Nguyen
                   ` (4 subsequent siblings)
  13 siblings, 1 reply; 28+ messages in thread
From: Tony Nguyen @ 2021-03-02 18:12 UTC (permalink / raw)
  To: intel-wired-lan

From: Anirudh Venkataramanan <anirudh.venkataramanan@intel.com>

Check for the bail-out condition before calling ice_aq_sff_eeprom().

Signed-off-by: Anirudh Venkataramanan <anirudh.venkataramanan@intel.com>
---
 drivers/net/ethernet/intel/ice/ice_ethtool.c | 10 +++++-----
 1 file changed, 5 insertions(+), 5 deletions(-)

diff --git a/drivers/net/ethernet/intel/ice/ice_ethtool.c b/drivers/net/ethernet/intel/ice/ice_ethtool.c
index d6a9fd912971..26a7be2f7193 100644
--- a/drivers/net/ethernet/intel/ice/ice_ethtool.c
+++ b/drivers/net/ethernet/intel/ice/ice_ethtool.c
@@ -3958,14 +3958,14 @@ ice_get_module_eeprom(struct net_device *netdev,
 	u8 value = 0;
 	u8 page = 0;
 
-	status = ice_aq_sff_eeprom(hw, 0, addr, offset, page, 0,
-				   &value, 1, 0, NULL);
-	if (status)
-		return -EIO;
-
 	if (!ee || !ee->len || !data)
 		return -EINVAL;
 
+	status = ice_aq_sff_eeprom(hw, 0, addr, offset, page, 0, &value, 1, 0,
+				   NULL);
+	if (status)
+		return -EIO;
+
 	if (value == ICE_MODULE_TYPE_SFP)
 		is_sfp = true;
 
-- 
2.20.1



* [Intel-wired-lan] [PATCH S55 11/14] ice: correct memory allocation call
  2021-03-02 18:12 [Intel-wired-lan] [PATCH S55 01/14] ice: warn about potentially malicious VFs Tony Nguyen
                   ` (8 preceding siblings ...)
  2021-03-02 18:12 ` [Intel-wired-lan] [PATCH S55 10/14] ice: Check for bail out condition early Tony Nguyen
@ 2021-03-02 18:12 ` Tony Nguyen
  2021-03-11 22:53   ` Brelinski, TonyX
  2021-03-02 18:12 ` [Intel-wired-lan] [PATCH S55 12/14] ice: rename ptype bitmap Tony Nguyen
                   ` (3 subsequent siblings)
  13 siblings, 1 reply; 28+ messages in thread
From: Tony Nguyen @ 2021-03-02 18:12 UTC (permalink / raw)
  To: intel-wired-lan

From: Bruce Allan <bruce.w.allan@intel.com>

Use *malloc() instead of *calloc() when allocating only a single object as
opposed to an array of objects.

Signed-off-by: Bruce Allan <bruce.w.allan@intel.com>
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
---
 drivers/net/ethernet/intel/ice/ice_switch.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/net/ethernet/intel/ice/ice_switch.c b/drivers/net/ethernet/intel/ice/ice_switch.c
index 834cbd3f7b31..357d3073d814 100644
--- a/drivers/net/ethernet/intel/ice/ice_switch.c
+++ b/drivers/net/ethernet/intel/ice/ice_switch.c
@@ -920,7 +920,7 @@ ice_create_vsi_list_map(struct ice_hw *hw, u16 *vsi_handle_arr, u16 num_vsi,
 	struct ice_vsi_list_map_info *v_map;
 	int i;
 
-	v_map = devm_kcalloc(ice_hw_to_dev(hw), 1, sizeof(*v_map), GFP_KERNEL);
+	v_map = devm_kzalloc(ice_hw_to_dev(hw), sizeof(*v_map), GFP_KERNEL);
 	if (!v_map)
 		return NULL;
 
-- 
2.20.1



* [Intel-wired-lan] [PATCH S55 12/14] ice: rename ptype bitmap
  2021-03-02 18:12 [Intel-wired-lan] [PATCH S55 01/14] ice: warn about potentially malicious VFs Tony Nguyen
                   ` (9 preceding siblings ...)
  2021-03-02 18:12 ` [Intel-wired-lan] [PATCH S55 11/14] ice: correct memory allocation call Tony Nguyen
@ 2021-03-02 18:12 ` Tony Nguyen
  2021-03-11 22:55   ` Brelinski, TonyX
  2021-03-02 18:12 ` [Intel-wired-lan] [PATCH S55 13/14] ice: Advertise virtchnl UDP segmentation offload capability Tony Nguyen
                   ` (2 subsequent siblings)
  13 siblings, 1 reply; 28+ messages in thread
From: Tony Nguyen @ 2021-03-02 18:12 UTC (permalink / raw)
  To: intel-wired-lan

From: Qi Zhang <qi.z.zhang@intel.com>

Rename all ptype bitmaps to follow the ice_ptypes_xxx prefix.

Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
---
 drivers/net/ethernet/intel/ice/ice_flow.c | 16 ++++++++--------
 1 file changed, 8 insertions(+), 8 deletions(-)

diff --git a/drivers/net/ethernet/intel/ice/ice_flow.c b/drivers/net/ethernet/intel/ice/ice_flow.c
index ceb802ab76c0..2eec44ef2346 100644
--- a/drivers/net/ethernet/intel/ice/ice_flow.c
+++ b/drivers/net/ethernet/intel/ice/ice_flow.c
@@ -237,7 +237,7 @@ static const u32 ice_ptypes_ipv6_il[] = {
 };
 
 /* Packet types for packets with an Outer/First/Single IPv4 header - no L4 */
-static const u32 ice_ipv4_ofos_no_l4[] = {
+static const u32 ice_ptypes_ipv4_ofos_no_l4[] = {
 	0x10C00000, 0x04000800, 0x00000000, 0x00000000,
 	0x00000000, 0x00000000, 0x00000000, 0x00000000,
 	0x00000000, 0x00000000, 0x00000000, 0x00000000,
@@ -261,7 +261,7 @@ static const u32 ice_ptypes_arp_of[] = {
 };
 
 /* Packet types for packets with an Innermost/Last IPv4 header - no L4 */
-static const u32 ice_ipv4_il_no_l4[] = {
+static const u32 ice_ptypes_ipv4_il_no_l4[] = {
 	0x60000000, 0x18043008, 0x80000002, 0x6010c021,
 	0x00000008, 0x00000000, 0x00000000, 0x00000000,
 	0x00000000, 0x00000000, 0x00000000, 0x00000000,
@@ -273,7 +273,7 @@ static const u32 ice_ipv4_il_no_l4[] = {
 };
 
 /* Packet types for packets with an Outer/First/Single IPv6 header - no L4 */
-static const u32 ice_ipv6_ofos_no_l4[] = {
+static const u32 ice_ptypes_ipv6_ofos_no_l4[] = {
 	0x00000000, 0x00000000, 0x43000000, 0x10002000,
 	0x00000000, 0x00000000, 0x00000000, 0x00000000,
 	0x00000000, 0x00000000, 0x00000000, 0x00000000,
@@ -285,7 +285,7 @@ static const u32 ice_ipv6_ofos_no_l4[] = {
 };
 
 /* Packet types for packets with an Innermost/Last IPv6 header - no L4 */
-static const u32 ice_ipv6_il_no_l4[] = {
+static const u32 ice_ptypes_ipv6_il_no_l4[] = {
 	0x00000000, 0x02180430, 0x0000010c, 0x086010c0,
 	0x00000430, 0x00000000, 0x00000000, 0x00000000,
 	0x00000000, 0x00000000, 0x00000000, 0x00000000,
@@ -749,8 +749,8 @@ ice_flow_proc_seg_hdrs(struct ice_flow_prof_params *params)
 				   ICE_FLOW_PTYPE_MAX);
 		} else if ((hdrs & ICE_FLOW_SEG_HDR_IPV4) &&
 			   !(hdrs & ICE_FLOW_SEG_HDRS_L4_MASK_NO_OTHER)) {
-			src = !i ? (const unsigned long *)ice_ipv4_ofos_no_l4 :
-				(const unsigned long *)ice_ipv4_il_no_l4;
+			src = !i ? (const unsigned long *)ice_ptypes_ipv4_ofos_no_l4 :
+				(const unsigned long *)ice_ptypes_ipv4_il_no_l4;
 			bitmap_and(params->ptypes, params->ptypes, src,
 				   ICE_FLOW_PTYPE_MAX);
 		} else if (hdrs & ICE_FLOW_SEG_HDR_IPV4) {
@@ -760,8 +760,8 @@ ice_flow_proc_seg_hdrs(struct ice_flow_prof_params *params)
 				   ICE_FLOW_PTYPE_MAX);
 		} else if ((hdrs & ICE_FLOW_SEG_HDR_IPV6) &&
 			   !(hdrs & ICE_FLOW_SEG_HDRS_L4_MASK_NO_OTHER)) {
-			src = !i ? (const unsigned long *)ice_ipv6_ofos_no_l4 :
-				(const unsigned long *)ice_ipv6_il_no_l4;
+			src = !i ? (const unsigned long *)ice_ptypes_ipv6_ofos_no_l4 :
+				(const unsigned long *)ice_ptypes_ipv6_il_no_l4;
 			bitmap_and(params->ptypes, params->ptypes, src,
 				   ICE_FLOW_PTYPE_MAX);
 		} else if (hdrs & ICE_FLOW_SEG_HDR_IPV6) {
-- 
2.20.1



* [Intel-wired-lan] [PATCH S55 13/14] ice: Advertise virtchnl UDP segmentation offload capability
  2021-03-02 18:12 [Intel-wired-lan] [PATCH S55 01/14] ice: warn about potentially malicious VFs Tony Nguyen
                   ` (10 preceding siblings ...)
  2021-03-02 18:12 ` [Intel-wired-lan] [PATCH S55 12/14] ice: rename ptype bitmap Tony Nguyen
@ 2021-03-02 18:12 ` Tony Nguyen
  2021-04-16 12:27   ` Jankowski, Konrad0
  2021-03-02 18:12 ` [Intel-wired-lan] [PATCH S55 14/14] iavf: add support for UDP Segmentation Offload Tony Nguyen
  2021-04-21 18:49 ` [Intel-wired-lan] [PATCH S55 01/14] ice: warn about potentially malicious VFs Jankowski, Konrad0
  13 siblings, 1 reply; 28+ messages in thread
From: Tony Nguyen @ 2021-03-02 18:12 UTC (permalink / raw)
  To: intel-wired-lan

From: Brett Creeley <brett.creeley@intel.com>

As the hardware is capable of supporting UDP segmentation offload, add a
capability bit to virtchnl.h to communicate this and have the driver
advertise its support.

Suggested-by: Jesse Brandeburg <jesse.brandeburg@intel.com>
Signed-off-by: Brett Creeley <brett.creeley@intel.com>
---
 drivers/net/ethernet/intel/ice/ice_virtchnl_pf.c | 3 +++
 include/linux/avf/virtchnl.h                     | 1 +
 2 files changed, 4 insertions(+)

diff --git a/drivers/net/ethernet/intel/ice/ice_virtchnl_pf.c b/drivers/net/ethernet/intel/ice/ice_virtchnl_pf.c
index f09367eb242a..43d309aa9efe 100644
--- a/drivers/net/ethernet/intel/ice/ice_virtchnl_pf.c
+++ b/drivers/net/ethernet/intel/ice/ice_virtchnl_pf.c
@@ -2126,6 +2126,9 @@ static int ice_vc_get_vf_res_msg(struct ice_vf *vf, u8 *msg)
 	if (vf->driver_caps & VIRTCHNL_VF_CAP_ADV_LINK_SPEED)
 		vfres->vf_cap_flags |= VIRTCHNL_VF_CAP_ADV_LINK_SPEED;
 
+	if (vf->driver_caps & VIRTCHNL_VF_OFFLOAD_USO)
+		vfres->vf_cap_flags |= VIRTCHNL_VF_OFFLOAD_USO;
+
 	vfres->num_vsis = 1;
 	/* Tx and Rx queue are equal for VF */
 	vfres->num_queue_pairs = vsi->num_txq;
diff --git a/include/linux/avf/virtchnl.h b/include/linux/avf/virtchnl.h
index 228b90ef3361..ddcff6219b61 100644
--- a/include/linux/avf/virtchnl.h
+++ b/include/linux/avf/virtchnl.h
@@ -252,6 +252,7 @@ VIRTCHNL_CHECK_STRUCT_LEN(16, virtchnl_vsi_resource);
 #define VIRTCHNL_VF_OFFLOAD_RX_ENCAP_CSUM	0X00400000
 #define VIRTCHNL_VF_OFFLOAD_ADQ			0X00800000
 #define VIRTCHNL_VF_OFFLOAD_FDIR_PF		0X10000000
+#define VIRTCHNL_VF_OFFLOAD_USO			0X02000000
 
 /* Define below the capability flags that are not offloads */
 #define VIRTCHNL_VF_CAP_ADV_LINK_SPEED		0x00000080
-- 
2.20.1



* [Intel-wired-lan] [PATCH S55 14/14] iavf: add support for UDP Segmentation Offload
  2021-03-02 18:12 [Intel-wired-lan] [PATCH S55 01/14] ice: warn about potentially malicious VFs Tony Nguyen
                   ` (11 preceding siblings ...)
  2021-03-02 18:12 ` [Intel-wired-lan] [PATCH S55 13/14] ice: Advertise virtchnl UDP segmentation offload capability Tony Nguyen
@ 2021-03-02 18:12 ` Tony Nguyen
  2021-04-16 12:28   ` Jankowski, Konrad0
  2021-04-21 18:49 ` [Intel-wired-lan] [PATCH S55 01/14] ice: warn about potentially malicious VFs Jankowski, Konrad0
  13 siblings, 1 reply; 28+ messages in thread
From: Tony Nguyen @ 2021-03-02 18:12 UTC (permalink / raw)
  To: intel-wired-lan

From: Brett Creeley <brett.creeley@intel.com>

Add code to support UDP segmentation offload (USO) for
hardware that supports it.

Suggested-by: Jesse Brandeburg <jesse.brandeburg@intel.com>
Signed-off-by: Brett Creeley <brett.creeley@intel.com>
---
 drivers/net/ethernet/intel/iavf/iavf_main.c     |  2 ++
 drivers/net/ethernet/intel/iavf/iavf_txrx.c     | 15 +++++++++++----
 drivers/net/ethernet/intel/iavf/iavf_virtchnl.c |  1 +
 3 files changed, 14 insertions(+), 4 deletions(-)

diff --git a/drivers/net/ethernet/intel/iavf/iavf_main.c b/drivers/net/ethernet/intel/iavf/iavf_main.c
index 84430a304f3e..5a7ebbf0fb42 100644
--- a/drivers/net/ethernet/intel/iavf/iavf_main.c
+++ b/drivers/net/ethernet/intel/iavf/iavf_main.c
@@ -3560,6 +3560,8 @@ int iavf_process_config(struct iavf_adapter *adapter)
 	/* Enable cloud filter if ADQ is supported */
 	if (vfres->vf_cap_flags & VIRTCHNL_VF_OFFLOAD_ADQ)
 		hw_features |= NETIF_F_HW_TC;
+	if (vfres->vf_cap_flags & VIRTCHNL_VF_OFFLOAD_USO)
+		hw_features |= NETIF_F_GSO_UDP_L4;
 
 	netdev->hw_features |= hw_features;
 
diff --git a/drivers/net/ethernet/intel/iavf/iavf_txrx.c b/drivers/net/ethernet/intel/iavf/iavf_txrx.c
index ffaf2742a2e0..23a51ebe13c5 100644
--- a/drivers/net/ethernet/intel/iavf/iavf_txrx.c
+++ b/drivers/net/ethernet/intel/iavf/iavf_txrx.c
@@ -1905,13 +1905,20 @@ static int iavf_tso(struct iavf_tx_buffer *first, u8 *hdr_len,
 
 	/* determine offset of inner transport header */
 	l4_offset = l4.hdr - skb->data;
-
 	/* remove payload length from inner checksum */
 	paylen = skb->len - l4_offset;
-	csum_replace_by_diff(&l4.tcp->check, (__force __wsum)htonl(paylen));
 
-	/* compute length of segmentation header */
-	*hdr_len = (l4.tcp->doff * 4) + l4_offset;
+	if (skb_shinfo(skb)->gso_type & SKB_GSO_UDP_L4) {
+		csum_replace_by_diff(&l4.udp->check,
+				     (__force __wsum)htonl(paylen));
+		/* compute length of UDP segmentation header */
+		*hdr_len = (u8)sizeof(l4.udp) + l4_offset;
+	} else {
+		csum_replace_by_diff(&l4.tcp->check,
+				     (__force __wsum)htonl(paylen));
+		/* compute length of TCP segmentation header */
+		*hdr_len = (u8)((l4.tcp->doff * 4) + l4_offset);
+	}
 
 	/* pull values out of skb_shinfo */
 	gso_size = skb_shinfo(skb)->gso_size;
diff --git a/drivers/net/ethernet/intel/iavf/iavf_virtchnl.c b/drivers/net/ethernet/intel/iavf/iavf_virtchnl.c
index b41dff44f65b..3d7643ea8d46 100644
--- a/drivers/net/ethernet/intel/iavf/iavf_virtchnl.c
+++ b/drivers/net/ethernet/intel/iavf/iavf_virtchnl.c
@@ -140,6 +140,7 @@ int iavf_send_vf_config_msg(struct iavf_adapter *adapter)
 	       VIRTCHNL_VF_OFFLOAD_ENCAP_CSUM |
 	       VIRTCHNL_VF_OFFLOAD_REQ_QUEUES |
 	       VIRTCHNL_VF_OFFLOAD_ADQ |
+	       VIRTCHNL_VF_OFFLOAD_USO |
 	       VIRTCHNL_VF_OFFLOAD_FDIR_PF |
 	       VIRTCHNL_VF_CAP_ADV_LINK_SPEED;
 
-- 
2.20.1



* [Intel-wired-lan] [PATCH S55 04/14] ice: Delay netdev registration
  2021-03-02 18:12 ` [Intel-wired-lan] [PATCH S55 04/14] ice: Delay netdev registration Tony Nguyen
@ 2021-03-10 23:44   ` Brelinski, TonyX
  0 siblings, 0 replies; 28+ messages in thread
From: Brelinski, TonyX @ 2021-03-10 23:44 UTC (permalink / raw)
  To: intel-wired-lan

> -----Original Message-----
> From: Intel-wired-lan <intel-wired-lan-bounces@osuosl.org> On Behalf Of
> Tony Nguyen
> Sent: Tuesday, March 2, 2021 10:12 AM
> To: intel-wired-lan at lists.osuosl.org
> Subject: [Intel-wired-lan] [PATCH S55 04/14] ice: Delay netdev registration
> 
> From: Anirudh Venkataramanan <anirudh.venkataramanan@intel.com>
> 
> Once a netdev is registered, the corresponding network interface can be
> immediately used by userspace utilities (like say NetworkManager).
> This can be problematic if the driver technically isn't fully up yet.
> 
> Move netdev registration to the end of probe, as by this time the driver data
> structures and device will be initialized as expected.
> 
> However, delaying netdev registration causes a failure in the aRFS flow
> where netdev->reg_state == NETREG_REGISTERED condition is checked. It's
> not clear why this check was added to begin with, so remove it.
> Local testing didn't indicate any issues with this change.
> 
> The state bit check in ice_open was put in as a stop-gap measure to prevent
> a premature interface up operation. This is no longer needed, so remove it.
> 
> Signed-off-by: Anirudh Venkataramanan
> <anirudh.venkataramanan@intel.com>
> ---
>  drivers/net/ethernet/intel/ice/ice_arfs.c |  6 +-
> drivers/net/ethernet/intel/ice/ice_main.c | 93 +++++++++++------------
>  2 files changed, 47 insertions(+), 52 deletions(-)

Tested-by: Tony Brelinski <tonyx.brelinski@intel.com> A Contingent Worker at Intel




* [Intel-wired-lan] [PATCH S55 05/14] ice: Update to use package info from ice segment
  2021-03-02 18:12 ` [Intel-wired-lan] [PATCH S55 05/14] ice: Update to use package info from ice segment Tony Nguyen
@ 2021-03-11  0:02   ` Brelinski, TonyX
  0 siblings, 0 replies; 28+ messages in thread
From: Brelinski, TonyX @ 2021-03-11  0:02 UTC (permalink / raw)
  To: intel-wired-lan

> -----Original Message-----
> From: Intel-wired-lan <intel-wired-lan-bounces@osuosl.org> On Behalf Of
> Tony Nguyen
> Sent: Tuesday, March 2, 2021 10:12 AM
> To: intel-wired-lan at lists.osuosl.org
> Subject: [Intel-wired-lan] [PATCH S55 05/14] ice: Update to use package info
> from ice segment
> 
> From: Dan Nowlin <dan.nowlin@intel.com>
> 
> There are two package versions in the package binary. Today, these two
> version numbers are the same. However, in the future that may change.
> 
> Update code to use the package info from the ice segment metadata
> section, which is the package information that is actually downloaded to the
> firmware during the download package process.
> 
> Signed-off-by: Dan Nowlin <dan.nowlin@intel.com>
> ---
>  .../net/ethernet/intel/ice/ice_adminq_cmd.h   |  1 +
>  .../net/ethernet/intel/ice/ice_flex_pipe.c    | 40 ++++++++++---------
>  .../net/ethernet/intel/ice/ice_flex_type.h    |  9 +++++
>  drivers/net/ethernet/intel/ice/ice_type.h     |  8 ++--
>  4 files changed, 36 insertions(+), 22 deletions(-)

Tested-by: Tony Brelinski <tonyx.brelinski@intel.com> A Contingent Worker at Intel




* [Intel-wired-lan] [PATCH S55 06/14] ice: handle increasing Tx or Rx ring sizes
  2021-03-02 18:12 ` [Intel-wired-lan] [PATCH S55 06/14] ice: handle increasing Tx or Rx ring sizes Tony Nguyen
@ 2021-03-11  0:27   ` Brelinski, TonyX
  0 siblings, 0 replies; 28+ messages in thread
From: Brelinski, TonyX @ 2021-03-11  0:27 UTC (permalink / raw)
  To: intel-wired-lan

> -----Original Message-----
> From: Intel-wired-lan <intel-wired-lan-bounces@osuosl.org> On Behalf Of
> Tony Nguyen
> Sent: Tuesday, March 2, 2021 10:12 AM
> To: intel-wired-lan at lists.osuosl.org
> Subject: [Intel-wired-lan] [PATCH S55 06/14] ice: handle increasing Tx or Rx
> ring sizes
> 
> From: Paul M Stillwell Jr <paul.m.stillwell.jr@intel.com>
> 
> There is an issue when the Tx or Rx ring size increases using 'ethtool -L ...'
> where the new rings don't get the correct ITR values because when we
> rebuild the VSI we don't know that some of the rings may be new.
> 
> Fix this by looking at the original number of rings and determining if the rings
> in ice_vsi_rebuild_set_coalesce() were not present in the original rings
> received in ice_vsi_rebuild_get_coalesce().
> 
> Also change the code to return an error if we can't allocate memory for the
> coalesce data in ice_vsi_rebuild().
> 
> Signed-off-by: Paul M Stillwell Jr <paul.m.stillwell.jr@intel.com>
> ---
>  drivers/net/ethernet/intel/ice/ice_lib.c  | 123 ++++++++++++++++------
>  drivers/net/ethernet/intel/ice/ice_txrx.h |   2 +
>  2 files changed, 92 insertions(+), 33 deletions(-)

Tested-by: Tony Brelinski <tonyx.brelinski@intel.com> A Contingent Worker at Intel




* [Intel-wired-lan] [PATCH S55 08/14] ice: change link misconfiguration message
  2021-03-02 18:12 ` [Intel-wired-lan] [PATCH S55 08/14] ice: change link misconfiguration message Tony Nguyen
@ 2021-03-11  0:31   ` Brelinski, TonyX
  0 siblings, 0 replies; 28+ messages in thread
From: Brelinski, TonyX @ 2021-03-11  0:31 UTC (permalink / raw)
  To: intel-wired-lan

> -----Original Message-----
> From: Intel-wired-lan <intel-wired-lan-bounces@osuosl.org> On Behalf Of
> Tony Nguyen
> Sent: Tuesday, March 2, 2021 10:12 AM
> To: intel-wired-lan at lists.osuosl.org
> Subject: [Intel-wired-lan] [PATCH S55 08/14] ice: change link misconfiguration
> message
> 
> From: Paul Greenwalt <paul.greenwalt@intel.com>
> 
> Change link misconfiguration message since the configuration could be
> intended by the user.
> 
> Signed-off-by: Paul Greenwalt <paul.greenwalt@intel.com>
> ---
>  drivers/net/ethernet/intel/ice/ice_main.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)

Tested-by: Tony Brelinski <tonyx.brelinski@intel.com> A Contingent Worker at Intel




* [Intel-wired-lan] [PATCH S55 09/14] ice: remove unnecessary duplicated AQ command flag setting
  2021-03-02 18:12 ` [Intel-wired-lan] [PATCH S55 09/14] ice: remove unnecessary duplicated AQ command flag setting Tony Nguyen
@ 2021-03-11  0:33   ` Brelinski, TonyX
  0 siblings, 0 replies; 28+ messages in thread
From: Brelinski, TonyX @ 2021-03-11  0:33 UTC (permalink / raw)
  To: intel-wired-lan

> -----Original Message-----
> From: Intel-wired-lan <intel-wired-lan-bounces@osuosl.org> On Behalf Of
> Tony Nguyen
> Sent: Tuesday, March 2, 2021 10:12 AM
> To: intel-wired-lan at lists.osuosl.org
> Subject: [Intel-wired-lan] [PATCH S55 09/14] ice: remove unnecessary
> duplicated AQ command flag setting
> 
> From: Bruce Allan <bruce.w.allan@intel.com>
> 
> Commit a012dca9f7a2 ("ice: add ethtool -m support for reading i2c eeprom
> modules") unnecessarily added the ICE_AQ_FLAG_BUF flag to the descriptor
> when sending the indirect Read/Write SFF EEPROM AQ command. The flag is
> already added later in the code flow for all indirect AQ commands, i.e.
> commands that provide an additional data buffer.
> 
> Signed-off-by: Bruce Allan <bruce.w.allan@intel.com>
> Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
> ---
>  drivers/net/ethernet/intel/ice/ice_common.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)

Tested-by: Tony Brelinski <tonyx.brelinski@intel.com> A Contingent Worker at Intel




* [Intel-wired-lan] [PATCH S55 10/14] ice: Check for bail out condition early
  2021-03-02 18:12 ` [Intel-wired-lan] [PATCH S55 10/14] ice: Check for bail out condition early Tony Nguyen
@ 2021-03-11 22:52   ` Brelinski, TonyX
  0 siblings, 0 replies; 28+ messages in thread
From: Brelinski, TonyX @ 2021-03-11 22:52 UTC (permalink / raw)
  To: intel-wired-lan

> -----Original Message-----
> From: Intel-wired-lan <intel-wired-lan-bounces@osuosl.org> On Behalf Of
> Tony Nguyen
> Sent: Tuesday, March 2, 2021 10:12 AM
> To: intel-wired-lan at lists.osuosl.org
> Subject: [Intel-wired-lan] [PATCH S55 10/14] ice: Check for bail out condition
> early
> 
> From: Anirudh Venkataramanan <anirudh.venkataramanan@intel.com>
> 
> Check for bail out condition before calling ice_aq_sff_eeprom
> 
> Signed-off-by: Anirudh Venkataramanan
> <anirudh.venkataramanan@intel.com>
> ---
>  drivers/net/ethernet/intel/ice/ice_ethtool.c | 10 +++++-----
>  1 file changed, 5 insertions(+), 5 deletions(-)

Tested-by: Tony Brelinski <tonyx.brelinski@intel.com> A Contingent Worker at Intel




* [Intel-wired-lan] [PATCH S55 11/14] ice: correct memory allocation call
  2021-03-02 18:12 ` [Intel-wired-lan] [PATCH S55 11/14] ice: correct memory allocation call Tony Nguyen
@ 2021-03-11 22:53   ` Brelinski, TonyX
  0 siblings, 0 replies; 28+ messages in thread
From: Brelinski, TonyX @ 2021-03-11 22:53 UTC (permalink / raw)
  To: intel-wired-lan

> -----Original Message-----
> From: Intel-wired-lan <intel-wired-lan-bounces@osuosl.org> On Behalf Of
> Tony Nguyen
> Sent: Tuesday, March 2, 2021 10:12 AM
> To: intel-wired-lan at lists.osuosl.org
> Subject: [Intel-wired-lan] [PATCH S55 11/14] ice: correct memory allocation
> call
> 
> From: Bruce Allan <bruce.w.allan@intel.com>
> 
> Use *malloc() instead of *calloc() when allocating only a single object as
> opposed to an array of objects.
> 
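The rule behind this one-liner: use a zeroed single-object allocation (kernel: kzalloc) for one object, and reserve the array form (kernel: kcalloc), whose count-times-size multiply is overflow-checked, for N objects. A userspace sketch with malloc/calloc as stand-ins; the struct and names are illustrative.

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

struct recipe { int id; int priority; };

/* one object: single-object allocation, zeroed (kernel: kzalloc) */
static struct recipe *recipe_alloc_one(void)
{
    struct recipe *r = malloc(sizeof(*r));
    if (r)
        memset(r, 0, sizeof(*r));
    return r;
}

/* array of n objects: calloc checks n * size for overflow (kernel: kcalloc) */
static struct recipe *recipe_alloc_array(size_t n)
{
    return calloc(n, sizeof(struct recipe));
}
```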
> Signed-off-by: Bruce Allan <bruce.w.allan@intel.com>
> Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
> ---
>  drivers/net/ethernet/intel/ice/ice_switch.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)

Tested-by: Tony Brelinski <tonyx.brelinski@intel.com> A Contingent Worker at Intel




* [Intel-wired-lan] [PATCH S55 12/14] ice: rename ptype bitmap
  2021-03-02 18:12 ` [Intel-wired-lan] [PATCH S55 12/14] ice: rename ptype bitmap Tony Nguyen
@ 2021-03-11 22:55   ` Brelinski, TonyX
  0 siblings, 0 replies; 28+ messages in thread
From: Brelinski, TonyX @ 2021-03-11 22:55 UTC (permalink / raw)
  To: intel-wired-lan

> -----Original Message-----
> From: Intel-wired-lan <intel-wired-lan-bounces@osuosl.org> On Behalf Of
> Tony Nguyen
> Sent: Tuesday, March 2, 2021 10:12 AM
> To: intel-wired-lan at lists.osuosl.org
> Subject: [Intel-wired-lan] [PATCH S55 12/14] ice: rename ptype bitmap
> 
> From: Qi Zhang <qi.z.zhang@intel.com>
> 
> Align all ptype bitmaps to follow the ice_ptypes_xxx prefix.
> 
> Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
> ---
>  drivers/net/ethernet/intel/ice/ice_flow.c | 16 ++++++++--------
>  1 file changed, 8 insertions(+), 8 deletions(-)

Tested-by: Tony Brelinski <tonyx.brelinski@intel.com> A Contingent Worker at Intel




* [Intel-wired-lan] [PATCH S55 03/14] ice: Add Support for XPS
  2021-03-02 18:12 ` [Intel-wired-lan] [PATCH S55 03/14] ice: Add Support for XPS Tony Nguyen
@ 2021-03-12 22:13   ` Brelinski, TonyX
  0 siblings, 0 replies; 28+ messages in thread
From: Brelinski, TonyX @ 2021-03-12 22:13 UTC (permalink / raw)
  To: intel-wired-lan

> -----Original Message-----
> From: Intel-wired-lan <intel-wired-lan-bounces@osuosl.org> On Behalf Of
> Tony Nguyen
> Sent: Tuesday, March 2, 2021 10:12 AM
> To: intel-wired-lan at lists.osuosl.org
> Subject: [Intel-wired-lan] [PATCH S55 03/14] ice: Add Support for XPS
> 
> From: Benita Bose <benita.bose@intel.com>
> 
> Enable and configure XPS. The driver code implemented sets up the Transmit
> Packet Steering Map, which in turn will be used by the kernel in queue
> selection during Tx.
> 
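For context, a minimal model of what an XPS map buys the stack: each Tx queue advertises a CPU mask, and queue selection prefers a queue whose mask contains the transmitting CPU. The real driver builds the map with netif_set_xps_queue(); everything below is an illustrative userspace stand-in, limited to 64 CPUs.

```c
#include <assert.h>
#include <stdint.h>

/* queue_cpus[q] is the CPU mask advertised by Tx queue q */
static int xps_pick_queue(const uint64_t *queue_cpus, int nqueues, int cpu)
{
    for (int q = 0; q < nqueues; q++)
        if (queue_cpus[q] & (UINT64_C(1) << cpu))
            return q;
    return cpu % nqueues;  /* fallback: no queue claims this CPU */
}
```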
> Signed-off-by: Benita Bose <benita.bose@intel.com>
> Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
> ---
> Note: Emails to Benita will bounce as she has since left Intel
> ---
>  drivers/net/ethernet/intel/ice/ice_base.c | 23 +++++++++++++++++++++++
>  drivers/net/ethernet/intel/ice/ice_txrx.h |  6 ++++++
>  2 files changed, 29 insertions(+)

Tested-by: Tony Brelinski <tonyx.brelinski@intel.com> A Contingent Worker at Intel




* [Intel-wired-lan] [PATCH S55 07/14] ice: use kernel definitions for IANA protocol ports and ether-types
  2021-03-02 18:12 ` [Intel-wired-lan] [PATCH S55 07/14] ice: use kernel definitions for IANA protocol ports and ether-types Tony Nguyen
@ 2021-03-19 22:43   ` Brelinski, TonyX
  0 siblings, 0 replies; 28+ messages in thread
From: Brelinski, TonyX @ 2021-03-19 22:43 UTC (permalink / raw)
  To: intel-wired-lan

> -----Original Message-----
> From: Intel-wired-lan <intel-wired-lan-bounces@osuosl.org> On Behalf Of
> Tony Nguyen
> Sent: Tuesday, March 2, 2021 10:12 AM
> To: intel-wired-lan at lists.osuosl.org
> Subject: [Intel-wired-lan] [PATCH S55 07/14] ice: use kernel definitions for
> IANA protocol ports and ether-types
> 
> From: Bruce Allan <bruce.w.allan@intel.com>
> 
> The well-known IANA protocol port 3260 (iscsi-target 0x0cbc) and the ether-
> types 0x8906 (ETH_P_FCOE) and 0x8914 (ETH_P_FIP) are already defined in
> kernel header files.  Use those definitions instead of open-coding the same.
> 
> Signed-off-by: Bruce Allan <bruce.w.allan@intel.com>
> ---
>  drivers/net/ethernet/intel/ice/ice.h         | 3 +++
>  drivers/net/ethernet/intel/ice/ice_dcb.c     | 8 ++++----
>  drivers/net/ethernet/intel/ice/ice_dcb_lib.c | 2 +-
>  drivers/net/ethernet/intel/ice/ice_type.h    | 3 ---
>  4 files changed, 8 insertions(+), 8 deletions(-)

Tested-by: Tony Brelinski <tonyx.brelinski@intel.com> A Contingent Worker at Intel




* [Intel-wired-lan] [PATCH S55 13/14] ice: Advertise virtchnl UDP segmentation offload capability
  2021-03-02 18:12 ` [Intel-wired-lan] [PATCH S55 13/14] ice: Advertise virtchnl UDP segmentation offload capability Tony Nguyen
@ 2021-04-16 12:27   ` Jankowski, Konrad0
  0 siblings, 0 replies; 28+ messages in thread
From: Jankowski, Konrad0 @ 2021-04-16 12:27 UTC (permalink / raw)
  To: intel-wired-lan


> -----Original Message-----
> From: Intel-wired-lan <intel-wired-lan-bounces@osuosl.org> On Behalf Of
> Tony Nguyen
> Sent: wtorek, 2 marca 2021 19:12
> To: intel-wired-lan at lists.osuosl.org
> Subject: [Intel-wired-lan] [PATCH S55 13/14] ice: Advertise virtchnl UDP
> segmentation offload capability
> 
> From: Brett Creeley <brett.creeley@intel.com>
> 
> As the hardware is capable of supporting UDP segmentation offload, add a
> capability bit to virtchnl.h to communicate this and have the driver advertise
> its support.
> 
> Suggested-by: Jesse Brandeburg <jesse.brandeburg@intel.com>
> Signed-off-by: Brett Creeley <brett.creeley@intel.com>
> ---
>  drivers/net/ethernet/intel/ice/ice_virtchnl_pf.c | 3 +++
>  include/linux/avf/virtchnl.h                     | 1 +
>  2 files changed, 4 insertions(+)
> 
> diff --git a/drivers/net/ethernet/intel/ice/ice_virtchnl_pf.c
> b/drivers/net/ethernet/intel/ice/ice_virtchnl_pf.c
> index f09367eb242a..43d309aa9efe 100644
> --- a/drivers/net/ethernet/intel/ice/ice_virtchnl_pf.c
> +++ b/drivers/net/ethernet/intel/ice/ice_virtchnl_pf.c
> @@ -2126,6 +2126,9 @@ static int ice_vc_get_vf_res_msg(struct ice_vf *vf, u8 *msg)
>  	if (vf->driver_caps & VIRTCHNL_VF_CAP_ADV_LINK_SPEED)
>  		vfres->vf_cap_flags |= VIRTCHNL_VF_CAP_ADV_LINK_SPEED;
> 
> +	if (vf->driver_caps & VIRTCHNL_VF_OFFLOAD_USO)
> +		vfres->vf_cap_flags |= VIRTCHNL_VF_OFFLOAD_USO;
> +
>  	vfres->num_vsis = 1;
>  	/* Tx and Rx queue are equal for VF */
>  	vfres->num_queue_pairs = vsi->num_txq;
> diff --git a/include/linux/avf/virtchnl.h b/include/linux/avf/virtchnl.h
> index 228b90ef3361..ddcff6219b61 100644
> --- a/include/linux/avf/virtchnl.h
> +++ b/include/linux/avf/virtchnl.h
> @@ -252,6 +252,7 @@ VIRTCHNL_CHECK_STRUCT_LEN(16, virtchnl_vsi_resource);
>  #define VIRTCHNL_VF_OFFLOAD_RX_ENCAP_CSUM	0X00400000
>  #define VIRTCHNL_VF_OFFLOAD_ADQ			0X00800000
>  #define VIRTCHNL_VF_OFFLOAD_FDIR_PF		0X10000000
> +#define VIRTCHNL_VF_OFFLOAD_USO			0X02000000
> 
>  /* Define below the capability flags that are not offloads */
>  #define VIRTCHNL_VF_CAP_ADV_LINK_SPEED		0x00000080
> --
> 2.20.1
> 
Tested-by: Konrad Jankowski <konrad0.jankowski@intel.com>


* [Intel-wired-lan] [PATCH S55 14/14] iavf: add support for UDP Segmentation Offload
  2021-03-02 18:12 ` [Intel-wired-lan] [PATCH S55 14/14] iavf: add support for UDP Segmentation Offload Tony Nguyen
@ 2021-04-16 12:28   ` Jankowski, Konrad0
  0 siblings, 0 replies; 28+ messages in thread
From: Jankowski, Konrad0 @ 2021-04-16 12:28 UTC (permalink / raw)
  To: intel-wired-lan



> -----Original Message-----
> From: Intel-wired-lan <intel-wired-lan-bounces@osuosl.org> On Behalf Of
> Tony Nguyen
> Sent: wtorek, 2 marca 2021 19:12
> To: intel-wired-lan at lists.osuosl.org
> Subject: [Intel-wired-lan] [PATCH S55 14/14] iavf: add support for UDP
> Segmentation Offload
> 
> From: Brett Creeley <brett.creeley@intel.com>
> 
> Add code to support UDP segmentation offload (USO) for hardware that
> supports it.
> 
> Suggested-by: Jesse Brandeburg <jesse.brandeburg@intel.com>
> Signed-off-by: Brett Creeley <brett.creeley@intel.com>
> ---
>  drivers/net/ethernet/intel/iavf/iavf_main.c     |  2 ++
>  drivers/net/ethernet/intel/iavf/iavf_txrx.c     | 15 +++++++++++----
>  drivers/net/ethernet/intel/iavf/iavf_virtchnl.c |  1 +
>  3 files changed, 14 insertions(+), 4 deletions(-)
> 
> diff --git a/drivers/net/ethernet/intel/iavf/iavf_main.c
> b/drivers/net/ethernet/intel/iavf/iavf_main.c
> index 84430a304f3e..5a7ebbf0fb42 100644
> --- a/drivers/net/ethernet/intel/iavf/iavf_main.c
> +++ b/drivers/net/ethernet/intel/iavf/iavf_main.c
> @@ -3560,6 +3560,8 @@ int iavf_process_config(struct iavf_adapter
> *adapter)
>  	/* Enable cloud filter if ADQ is supported */
>  	if (vfres->vf_cap_flags & VIRTCHNL_VF_OFFLOAD_ADQ)
>  		hw_features |= NETIF_F_HW_TC;
> +	if (vfres->vf_cap_flags & VIRTCHNL_VF_OFFLOAD_USO)
> +		hw_features |= NETIF_F_GSO_UDP_L4;
> 
>  	netdev->hw_features |= hw_features;
> 
> diff --git a/drivers/net/ethernet/intel/iavf/iavf_txrx.c
> b/drivers/net/ethernet/intel/iavf/iavf_txrx.c
> index ffaf2742a2e0..23a51ebe13c5 100644
> --- a/drivers/net/ethernet/intel/iavf/iavf_txrx.c
> +++ b/drivers/net/ethernet/intel/iavf/iavf_txrx.c
> @@ -1905,13 +1905,20 @@ static int iavf_tso(struct iavf_tx_buffer *first, u8 *hdr_len,
> 
>  	/* determine offset of inner transport header */
>  	l4_offset = l4.hdr - skb->data;
> -
>  	/* remove payload length from inner checksum */
>  	paylen = skb->len - l4_offset;
> -	csum_replace_by_diff(&l4.tcp->check, (__force __wsum)htonl(paylen));
> 
> -	/* compute length of segmentation header */
> -	*hdr_len = (l4.tcp->doff * 4) + l4_offset;
> +	if (skb_shinfo(skb)->gso_type & SKB_GSO_UDP_L4) {
> +		csum_replace_by_diff(&l4.udp->check,
> +				     (__force __wsum)htonl(paylen));
> +		/* compute length of UDP segmentation header */
> +		*hdr_len = (u8)sizeof(l4.udp) + l4_offset;
> +	} else {
> +		csum_replace_by_diff(&l4.tcp->check,
> +				     (__force __wsum)htonl(paylen));
> +		/* compute length of TCP segmentation header */
> +		*hdr_len = (u8)((l4.tcp->doff * 4) + l4_offset);
> +	}
> 
>  	/* pull values out of skb_shinfo */
>  	gso_size = skb_shinfo(skb)->gso_size;
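The hunk above splits the segment-header-length computation: for USO the L4 header is the fixed 8-byte UDP header, while TSO uses the variable TCP header length (doff * 4). A userspace model of just that arithmetic (names illustrative):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

#define UDP_HDR_LEN 8  /* sizeof(struct udphdr) */

/* hdr_len as computed in the hunk: l4_offset plus the L4 header size,
 * which is fixed for UDP and doff * 4 for TCP */
static uint8_t gso_hdr_len(bool udp_l4, uint32_t l4_offset, uint32_t tcp_doff)
{
    if (udp_l4)
        return (uint8_t)(UDP_HDR_LEN + l4_offset);
    return (uint8_t)(tcp_doff * 4 + l4_offset);
}
```

One observation on the quoted code: `sizeof(l4.udp)` takes the size of the union's pointer member rather than `sizeof(struct udphdr)`; the two appear to coincide at 8 bytes only on 64-bit builds, so the spelled-out struct size would be the safer form.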
> diff --git a/drivers/net/ethernet/intel/iavf/iavf_virtchnl.c
> b/drivers/net/ethernet/intel/iavf/iavf_virtchnl.c
> index b41dff44f65b..3d7643ea8d46 100644
> --- a/drivers/net/ethernet/intel/iavf/iavf_virtchnl.c
> +++ b/drivers/net/ethernet/intel/iavf/iavf_virtchnl.c
> @@ -140,6 +140,7 @@ int iavf_send_vf_config_msg(struct iavf_adapter
> *adapter)
>  	       VIRTCHNL_VF_OFFLOAD_ENCAP_CSUM |
>  	       VIRTCHNL_VF_OFFLOAD_REQ_QUEUES |
>  	       VIRTCHNL_VF_OFFLOAD_ADQ |
> +	       VIRTCHNL_VF_OFFLOAD_USO |
>  	       VIRTCHNL_VF_OFFLOAD_FDIR_PF |
>  	       VIRTCHNL_VF_CAP_ADV_LINK_SPEED;
> 
> --
> 2.20.1
> 
Tested-by: Konrad Jankowski <konrad0.jankowski@intel.com>


* [Intel-wired-lan] [PATCH S55 02/14] ice: Allow ignoring opcodes on specific VF
  2021-03-02 18:12 ` [Intel-wired-lan] [PATCH S55 02/14] ice: Allow ignoring opcodes on specific VF Tony Nguyen
@ 2021-04-21 18:48   ` Jankowski, Konrad0
  0 siblings, 0 replies; 28+ messages in thread
From: Jankowski, Konrad0 @ 2021-04-21 18:48 UTC (permalink / raw)
  To: intel-wired-lan



> -----Original Message-----
> From: Intel-wired-lan <intel-wired-lan-bounces@osuosl.org> On Behalf Of
> Tony Nguyen
> Sent: wtorek, 2 marca 2021 19:12
> To: intel-wired-lan at lists.osuosl.org
> Subject: [Intel-wired-lan] [PATCH S55 02/14] ice: Allow ignoring opcodes on
> specific VF
> 
> From: Michal Swiatkowski <michal.swiatkowski@intel.com>
> 
> Declare bitmap of allowed commands on VF. Initialize default opcodes list
> that should be always supported. Declare array of supported opcodes for
> each caps used in virtchnl code.
> 
> Change allowed bitmap by setting or clearing corresponding bit to allowlist
> (bit set) or denylist (bit clear).
> 
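The mechanism described above reduces to a per-VF bitmap with one bit per virtchnl opcode: cleared on reset (deny everything), extended per capability, and consulted before processing each message. A self-contained userspace model, with an illustrative opcode space in place of VIRTCHNL_OP_MAX:

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

#define OP_MAX 64  /* stand-in for VIRTCHNL_OP_MAX */

struct vf_allowlist {
    uint64_t bits;  /* one bit per opcode */
};

/* deny everything, as done when a VF is created or reset */
static void allowlist_clear(struct vf_allowlist *al)
{
    al->bits = 0;
}

/* allow a group of opcodes, e.g. the default or per-capability sets */
static void allowlist_add(struct vf_allowlist *al, const uint32_t *ops, size_t n)
{
    for (size_t i = 0; i < n; i++)
        al->bits |= UINT64_C(1) << ops[i];
}

/* mirrors the allow check: out-of-range opcodes are always denied */
static bool allowlist_test(const struct vf_allowlist *al, uint32_t op)
{
    if (op >= OP_MAX)
        return false;
    return (al->bits >> op) & 1;
}
```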
> Signed-off-by: Michal Swiatkowski <michal.swiatkowski@intel.com>
> Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
> ---
>  drivers/net/ethernet/intel/ice/Makefile       |   2 +-
>  .../intel/ice/ice_virtchnl_allowlist.c        | 165 ++++++++++++++++++
>  .../intel/ice/ice_virtchnl_allowlist.h        |  13 ++
>  .../net/ethernet/intel/ice/ice_virtchnl_pf.c  |  18 ++
>  .../net/ethernet/intel/ice/ice_virtchnl_pf.h  |   1 +
>  include/linux/avf/virtchnl.h                  |   1 +
>  6 files changed, 199 insertions(+), 1 deletion(-)  create mode 100644
> drivers/net/ethernet/intel/ice/ice_virtchnl_allowlist.c
>  create mode 100644 drivers/net/ethernet/intel/ice/ice_virtchnl_allowlist.h
> 
> diff --git a/drivers/net/ethernet/intel/ice/Makefile
> b/drivers/net/ethernet/intel/ice/Makefile
> index f391691e2c7e..dc24ce7d1c1e 100644
> --- a/drivers/net/ethernet/intel/ice/Makefile
> +++ b/drivers/net/ethernet/intel/ice/Makefile
> @@ -26,7 +26,7 @@ ice-y := ice_main.o	\
>  	 ice_fw_update.o \
>  	 ice_lag.o	\
>  	 ice_ethtool.o
> -ice-$(CONFIG_PCI_IOV) += ice_virtchnl_pf.o ice_sriov.o ice_virtchnl_fdir.o
> +ice-$(CONFIG_PCI_IOV) += ice_virtchnl_pf.o ice_sriov.o ice_virtchnl_allowlist.o ice_virtchnl_fdir.o
>  ice-$(CONFIG_DCB) += ice_dcb.o ice_dcb_nl.o ice_dcb_lib.o
>  ice-$(CONFIG_RFS_ACCEL) += ice_arfs.o
>  ice-$(CONFIG_XDP_SOCKETS) += ice_xsk.o
> diff --git a/drivers/net/ethernet/intel/ice/ice_virtchnl_allowlist.c
> b/drivers/net/ethernet/intel/ice/ice_virtchnl_allowlist.c
> new file mode 100644
> index 000000000000..64b1314d4761
> --- /dev/null
> +++ b/drivers/net/ethernet/intel/ice/ice_virtchnl_allowlist.c
> @@ -0,0 +1,165 @@
> +// SPDX-License-Identifier: GPL-2.0
> +/* Copyright (C) 2018-2021, Intel Corporation. */
> +
> +#include "ice_virtchnl_allowlist.h"
> +
> +/* Purpose of this file is to share functionality to allowlist or denylist
> + * opcodes used in PF <-> VF communication. Group of opcodes:
> + * - default -> should be always allowed after creating VF,
> + *   default_allowlist_opcodes
> + * - opcodes needed by VF to work correctly, but not associated with caps ->
> + *   should be allowed after successful VF resources allocation,
> + *   working_allowlist_opcodes
> + * - opcodes needed by VF when caps are activated
> + *
> + * Caps that don't use new opcodes (no opcodes should be allowed):
> + * - VIRTCHNL_VF_OFFLOAD_RSS_AQ
> + * - VIRTCHNL_VF_OFFLOAD_RSS_REG
> + * - VIRTCHNL_VF_OFFLOAD_WB_ON_ITR
> + * - VIRTCHNL_VF_OFFLOAD_CRC
> + * - VIRTCHNL_VF_OFFLOAD_RX_POLLING
> + * - VIRTCHNL_VF_OFFLOAD_RSS_PCTYPE_V2
> + * - VIRTCHNL_VF_OFFLOAD_ENCAP
> + * - VIRTCHNL_VF_OFFLOAD_ENCAP_CSUM
> + * - VIRTCHNL_VF_OFFLOAD_RX_ENCAP_CSUM
> + * - VIRTCHNL_VF_OFFLOAD_USO
> + */
> +
> +/* default opcodes to communicate with VF */
> +static const u32 default_allowlist_opcodes[] = {
> +	VIRTCHNL_OP_GET_VF_RESOURCES, VIRTCHNL_OP_VERSION,
> +	VIRTCHNL_OP_RESET_VF,
> +};
> +
> +/* opcodes supported after successful VIRTCHNL_OP_GET_VF_RESOURCES */
> +static const u32 working_allowlist_opcodes[] = {
> +	VIRTCHNL_OP_CONFIG_TX_QUEUE, VIRTCHNL_OP_CONFIG_RX_QUEUE,
> +	VIRTCHNL_OP_CONFIG_VSI_QUEUES, VIRTCHNL_OP_CONFIG_IRQ_MAP,
> +	VIRTCHNL_OP_ENABLE_QUEUES, VIRTCHNL_OP_DISABLE_QUEUES,
> +	VIRTCHNL_OP_GET_STATS, VIRTCHNL_OP_EVENT,
> +};
> +
> +/* VIRTCHNL_VF_OFFLOAD_L2 */
> +static const u32 l2_allowlist_opcodes[] = {
> +	VIRTCHNL_OP_ADD_ETH_ADDR, VIRTCHNL_OP_DEL_ETH_ADDR,
> +	VIRTCHNL_OP_CONFIG_PROMISCUOUS_MODE,
> +};
> +
> +/* VIRTCHNL_VF_OFFLOAD_REQ_QUEUES */
> +static const u32 req_queues_allowlist_opcodes[] = {
> +	VIRTCHNL_OP_REQUEST_QUEUES,
> +};
> +
> +/* VIRTCHNL_VF_OFFLOAD_VLAN */
> +static const u32 vlan_allowlist_opcodes[] = {
> +	VIRTCHNL_OP_ADD_VLAN, VIRTCHNL_OP_DEL_VLAN,
> +	VIRTCHNL_OP_ENABLE_VLAN_STRIPPING, VIRTCHNL_OP_DISABLE_VLAN_STRIPPING,
> +};
> +
> +/* VIRTCHNL_VF_OFFLOAD_RSS_PF */
> +static const u32 rss_pf_allowlist_opcodes[] = {
> +	VIRTCHNL_OP_CONFIG_RSS_KEY, VIRTCHNL_OP_CONFIG_RSS_LUT,
> +	VIRTCHNL_OP_GET_RSS_HENA_CAPS, VIRTCHNL_OP_SET_RSS_HENA,
> +};
> +
> +/* VIRTCHNL_VF_OFFLOAD_FDIR_PF */
> +static const u32 fdir_pf_allowlist_opcodes[] = {
> +	VIRTCHNL_OP_ADD_FDIR_FILTER, VIRTCHNL_OP_DEL_FDIR_FILTER,
> +};
> +
> +struct allowlist_opcode_info {
> +	const u32 *opcodes;
> +	size_t size;
> +};
> +
> +#define BIT_INDEX(caps) (HWEIGHT((caps) - 1))
> +#define ALLOW_ITEM(caps, list) \
> +	[BIT_INDEX(caps)] = { \
> +		.opcodes = list, \
> +		.size = ARRAY_SIZE(list) \
> +	}
> +static const struct allowlist_opcode_info allowlist_opcodes[] = {
> +	ALLOW_ITEM(VIRTCHNL_VF_OFFLOAD_L2, l2_allowlist_opcodes),
> +	ALLOW_ITEM(VIRTCHNL_VF_OFFLOAD_REQ_QUEUES, req_queues_allowlist_opcodes),
> +	ALLOW_ITEM(VIRTCHNL_VF_OFFLOAD_VLAN, vlan_allowlist_opcodes),
> +	ALLOW_ITEM(VIRTCHNL_VF_OFFLOAD_RSS_PF, rss_pf_allowlist_opcodes),
> +	ALLOW_ITEM(VIRTCHNL_VF_OFFLOAD_FDIR_PF, fdir_pf_allowlist_opcodes),
> +};
> +
> +/**
> + * ice_vc_is_opcode_allowed - check if this opcode is allowed on this VF
> + * @vf: pointer to VF structure
> + * @opcode: virtchnl opcode
> + *
> + * Return true if message is allowed on this VF
> + */
> +bool ice_vc_is_opcode_allowed(struct ice_vf *vf, u32 opcode)
> +{
> +	if (opcode >= VIRTCHNL_OP_MAX)
> +		return false;
> +
> +	return test_bit(opcode, vf->opcodes_allowlist);
> +}
> +
> +/**
> + * ice_vc_allowlist_opcodes - allowlist selected opcodes
> + * @vf: pointer to VF structure
> + * @opcodes: array of opcodes to allowlist
> + * @size: size of opcodes array
> + *
> + * Function should be called to allowlist opcodes on VF.
> + */
> +static void
> +ice_vc_allowlist_opcodes(struct ice_vf *vf, const u32 *opcodes, size_t size)
> +{
> +	unsigned int i;
> +
> +	for (i = 0; i < size; i++)
> +		set_bit(opcodes[i], vf->opcodes_allowlist);
> +}
> +
> +/**
> + * ice_vc_clear_allowlist - clear all allowlist opcodes
> + * @vf: pointer to VF structure
> + */
> +static void ice_vc_clear_allowlist(struct ice_vf *vf)
> +{
> +	bitmap_zero(vf->opcodes_allowlist, VIRTCHNL_OP_MAX);
> +}
> +
> +/**
> + * ice_vc_set_default_allowlist - allowlist default opcodes for VF
> + * @vf: pointer to VF structure
> + */
> +void ice_vc_set_default_allowlist(struct ice_vf *vf)
> +{
> +	ice_vc_clear_allowlist(vf);
> +	ice_vc_allowlist_opcodes(vf, default_allowlist_opcodes,
> +				 ARRAY_SIZE(default_allowlist_opcodes));
> +}
> +
> +/**
> + * ice_vc_set_working_allowlist - allowlist opcodes needed by VF to work
> + * @vf: pointer to VF structure
> + *
> + * Allowlist opcodes that aren't associated with specific caps, but
> + * are needed by VF to work.
> + */
> +void ice_vc_set_working_allowlist(struct ice_vf *vf)
> +{
> +	ice_vc_allowlist_opcodes(vf, working_allowlist_opcodes,
> +				 ARRAY_SIZE(working_allowlist_opcodes));
> +}
> +
> +/**
> + * ice_vc_set_caps_allowlist - allowlist VF opcodes according to caps
> + * @vf: pointer to VF structure
> + */
> +void ice_vc_set_caps_allowlist(struct ice_vf *vf)
> +{
> +	unsigned long caps = vf->driver_caps;
> +	unsigned int i;
> +
> +	for_each_set_bit(i, &caps, ARRAY_SIZE(allowlist_opcodes))
> +		ice_vc_allowlist_opcodes(vf, allowlist_opcodes[i].opcodes,
> +					 allowlist_opcodes[i].size);
> +}
> diff --git a/drivers/net/ethernet/intel/ice/ice_virtchnl_allowlist.h
> b/drivers/net/ethernet/intel/ice/ice_virtchnl_allowlist.h
> new file mode 100644
> index 000000000000..c33bc6ac3f54
> --- /dev/null
> +++ b/drivers/net/ethernet/intel/ice/ice_virtchnl_allowlist.h
> @@ -0,0 +1,13 @@
> +/* SPDX-License-Identifier: GPL-2.0 */
> +/* Copyright (C) 2018-2021, Intel Corporation. */
> +
> +#ifndef _ICE_VIRTCHNL_ALLOWLIST_H_
> +#define _ICE_VIRTCHNL_ALLOWLIST_H_
> +#include "ice.h"
> +
> +bool ice_vc_is_opcode_allowed(struct ice_vf *vf, u32 opcode);
> +
> +void ice_vc_set_default_allowlist(struct ice_vf *vf);
> +void ice_vc_set_working_allowlist(struct ice_vf *vf);
> +void ice_vc_set_caps_allowlist(struct ice_vf *vf);
> +
> +#endif /* _ICE_VIRTCHNL_ALLOWLIST_H_ */
> diff --git a/drivers/net/ethernet/intel/ice/ice_virtchnl_pf.c
> b/drivers/net/ethernet/intel/ice/ice_virtchnl_pf.c
> index 0da9c84ed30f..f09367eb242a 100644
> --- a/drivers/net/ethernet/intel/ice/ice_virtchnl_pf.c
> +++ b/drivers/net/ethernet/intel/ice/ice_virtchnl_pf.c
> @@ -5,6 +5,7 @@
>  #include "ice_base.h"
>  #include "ice_lib.h"
>  #include "ice_fltr.h"
> +#include "ice_virtchnl_allowlist.h"
> 
>  /**
>   * ice_validate_vf_id - helper to check if VF ID is valid
> @@ -1317,6 +1318,9 @@ bool ice_reset_all_vfs(struct ice_pf *pf, bool is_vflr)
>  	ice_for_each_vf(pf, v) {
>  		vf = &pf->vf[v];
> 
> +		vf->driver_caps = 0;
> +		ice_vc_set_default_allowlist(vf);
> +
>  		ice_vf_fdir_exit(vf);
>  		/* clean VF control VSI when resetting VFs since it should be
>  		 * setup only when iAVF creates its first FDIR rule.
> @@ -1421,6 +1425,9 @@ bool ice_reset_vf(struct ice_vf *vf, bool is_vflr)
>  		usleep_range(10, 20);
>  	}
> 
> +	vf->driver_caps = 0;
> +	ice_vc_set_default_allowlist(vf);
> +
>  	/* Display a warning if VF didn't manage to reset in time, but need to
>  	 * continue on with the operation.
>  	 */
> @@ -1633,6 +1640,7 @@ static void ice_set_dflt_settings_vfs(struct ice_pf *pf)
>  		set_bit(ICE_VIRTCHNL_VF_CAP_L2, &vf->vf_caps);
>  		vf->spoofchk = true;
>  		vf->num_vf_qs = pf->num_qps_per_vf;
> +		ice_vc_set_default_allowlist(vf);
> 
>  		/* ctrl_vsi_idx will be set to a valid value only when iAVF
>  		 * creates its first fdir rule.
> @@ -2135,6 +2143,9 @@ static int ice_vc_get_vf_res_msg(struct ice_vf *vf, u8 *msg)
>  	/* match guest capabilities */
>  	vf->driver_caps = vfres->vf_cap_flags;
> 
> +	ice_vc_set_caps_allowlist(vf);
> +	ice_vc_set_working_allowlist(vf);
> +
>  	set_bit(ICE_VF_STATE_ACTIVE, vf->vf_states);
> 
>  err:
> @@ -3964,6 +3975,13 @@ void ice_vc_process_vf_msg(struct ice_pf *pf, struct ice_rq_event_info *event)
>  			err = -EINVAL;
>  	}
> 
> +	if (!ice_vc_is_opcode_allowed(vf, v_opcode)) {
> +		ice_vc_send_msg_to_vf(vf, v_opcode,
> +				      VIRTCHNL_STATUS_ERR_NOT_SUPPORTED,
> +				      NULL, 0);
> +		return;
> +	}
> +
>  error_handler:
>  	if (err) {
>  		ice_vc_send_msg_to_vf(vf, v_opcode, VIRTCHNL_STATUS_ERR_PARAM,
> diff --git a/drivers/net/ethernet/intel/ice/ice_virtchnl_pf.h b/drivers/net/ethernet/intel/ice/ice_virtchnl_pf.h
> index 53391ac1f068..77ff0023f7be 100644
> --- a/drivers/net/ethernet/intel/ice/ice_virtchnl_pf.h
> +++ b/drivers/net/ethernet/intel/ice/ice_virtchnl_pf.h
> @@ -110,6 +110,7 @@ struct ice_vf {
>  	u16 num_vf_qs;			/* num of queue configured per VF */
>  	struct ice_mdd_vf_events mdd_rx_events;
>  	struct ice_mdd_vf_events mdd_tx_events;
> +	DECLARE_BITMAP(opcodes_allowlist, VIRTCHNL_OP_MAX);
>  };
> 
>  #ifdef CONFIG_PCI_IOV
> diff --git a/include/linux/avf/virtchnl.h b/include/linux/avf/virtchnl.h
> index e3d5ecf7cf41..228b90ef3361 100644
> --- a/include/linux/avf/virtchnl.h
> +++ b/include/linux/avf/virtchnl.h
> @@ -139,6 +139,7 @@ enum virtchnl_ops {
>  	/* opcode 34 - 46 are reserved */
>  	VIRTCHNL_OP_ADD_FDIR_FILTER = 47,
>  	VIRTCHNL_OP_DEL_FDIR_FILTER = 48,
> +	VIRTCHNL_OP_MAX,
>  };
> 
>  /* These macros are used to generate compilation errors if a structure/union

Tested-by: Konrad Jankowski <konrad0.jankowski@intel.com>


* [Intel-wired-lan] [PATCH S55 01/14] ice: warn about potentially malicious VFs
  2021-03-02 18:12 [Intel-wired-lan] [PATCH S55 01/14] ice: warn about potentially malicious VFs Tony Nguyen
                   ` (12 preceding siblings ...)
  2021-03-02 18:12 ` [Intel-wired-lan] [PATCH S55 14/14] iavf: add support for UDP Segmentation Offload Tony Nguyen
@ 2021-04-21 18:49 ` Jankowski, Konrad0
  13 siblings, 0 replies; 28+ messages in thread
From: Jankowski, Konrad0 @ 2021-04-21 18:49 UTC (permalink / raw)
  To: intel-wired-lan



> -----Original Message-----
> From: Intel-wired-lan <intel-wired-lan-bounces@osuosl.org> On Behalf Of
> Tony Nguyen
> Sent: wtorek, 2 marca 2021 19:12
> To: intel-wired-lan at lists.osuosl.org
> Subject: [Intel-wired-lan] [PATCH S55 01/14] ice: warn about potentially
> malicious VFs
> 
> From: Vignesh Sridhar <vignesh.sridhar@intel.com>
> 
> Attempt to detect malicious VFs and, if suspected, log the information but
> keep going to allow the user to take any desired actions.
> 
> Potentially malicious VFs are identified by checking if the VFs are transmitting
> too many messages via the PF-VF mailbox which could cause an overflow of
> this channel resulting in denial of service. This is done by creating a snapshot
> or static capture of the mailbox buffer which can be traversed and in which
> the messages sent by VFs are tracked.
> 
> Co-developed-by: Yashaswini Raghuram Prathivadi Bhayankaram
> <yashaswini.raghuram.prathivadi.bhayankaram@intel.com>
> Signed-off-by: Yashaswini Raghuram Prathivadi Bhayankaram
> <yashaswini.raghuram.prathivadi.bhayankaram@intel.com>
> Co-developed-by: Paul M Stillwell Jr <paul.m.stillwell.jr@intel.com>
> Signed-off-by: Paul M Stillwell Jr <paul.m.stillwell.jr@intel.com>
> Co-developed-by: Brett Creeley <brett.creeley@intel.com>
> Signed-off-by: Brett Creeley <brett.creeley@intel.com>
> Signed-off-by: Vignesh Sridhar <vignesh.sridhar@intel.com>
> Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
> ---
>  drivers/net/ethernet/intel/ice/ice.h          |   1 +
>  drivers/net/ethernet/intel/ice/ice_main.c     |   7 +-
>  drivers/net/ethernet/intel/ice/ice_sriov.c    | 400 +++++++++++++++++-
>  drivers/net/ethernet/intel/ice/ice_sriov.h    |  20 +-
>  drivers/net/ethernet/intel/ice/ice_type.h     |  75 ++++
>  .../net/ethernet/intel/ice/ice_virtchnl_pf.c  |  92 +++-
> .../net/ethernet/intel/ice/ice_virtchnl_pf.h  |  12 +
>  7 files changed, 603 insertions(+), 4 deletions(-)
> 
> diff --git a/drivers/net/ethernet/intel/ice/ice.h
> b/drivers/net/ethernet/intel/ice/ice.h
> index 07d4715e1bcd..ca94b01626d2 100644
> --- a/drivers/net/ethernet/intel/ice/ice.h
> +++ b/drivers/net/ethernet/intel/ice/ice.h
> @@ -414,6 +414,7 @@ struct ice_pf {
>  	u16 num_msix_per_vf;
>  	/* used to ratelimit the MDD event logging */
>  	unsigned long last_printed_mdd_jiffies;
> +	DECLARE_BITMAP(malvfs, ICE_MAX_VF_COUNT);
>  	DECLARE_BITMAP(state, __ICE_STATE_NBITS);
>  	DECLARE_BITMAP(flags, ICE_PF_FLAGS_NBITS);
>  	unsigned long *avail_txqs;	/* bitmap to track PF Tx queue usage */
> diff --git a/drivers/net/ethernet/intel/ice/ice_main.c
> b/drivers/net/ethernet/intel/ice/ice_main.c
> index 5b66b27a98aa..9cf876e420c9 100644
> --- a/drivers/net/ethernet/intel/ice/ice_main.c
> +++ b/drivers/net/ethernet/intel/ice/ice_main.c
> @@ -1210,6 +1210,10 @@ static int __ice_clean_ctrlq(struct ice_pf *pf, enum ice_ctl_q q_type)
>  	case ICE_CTL_Q_MAILBOX:
>  		cq = &hw->mailboxq;
>  		qtype = "Mailbox";
> +		/* we are going to try to detect a malicious VF, so set the
> +		 * state to begin detection
> +		 */
> +		hw->mbx_snapshot.mbx_buf.state = ICE_MAL_VF_DETECT_STATE_NEW_SNAPSHOT;
>  		break;
>  	default:
>  		dev_warn(dev, "Unknown control queue type 0x%x\n", q_type);
> @@ -1291,7 +1295,8 @@ static int __ice_clean_ctrlq(struct ice_pf *pf, enum ice_ctl_q q_type)
>  			ice_vf_lan_overflow_event(pf, &event);
>  			break;
>  		case ice_mbx_opc_send_msg_to_pf:
> -			ice_vc_process_vf_msg(pf, &event);
> +			if (!ice_is_malicious_vf(pf, &event, i, pending))
> +				ice_vc_process_vf_msg(pf, &event);
>  			break;
>  		case ice_aqc_opc_fw_logging:
>  			ice_output_fw_log(hw, &event.desc, event.msg_buf);
> diff --git a/drivers/net/ethernet/intel/ice/ice_sriov.c b/drivers/net/ethernet/intel/ice/ice_sriov.c
> index 554f567476f3..aa11d07793d4 100644
> --- a/drivers/net/ethernet/intel/ice/ice_sriov.c
> +++ b/drivers/net/ethernet/intel/ice/ice_sriov.c
> @@ -2,7 +2,6 @@
>  /* Copyright (c) 2018, Intel Corporation. */
> 
>  #include "ice_common.h"
> -#include "ice_adminq_cmd.h"
>  #include "ice_sriov.h"
> 
>  /**
> @@ -132,3 +131,402 @@ u32 ice_conv_link_speed_to_virtchnl(bool adv_link_support, u16 link_speed)
> 
>  	return speed;
>  }
> +
> +/* The mailbox overflow detection algorithm helps to check if there
> + * is a possibility of a malicious VF transmitting too many MBX messages
> + * to the PF.
> + * 1. The mailbox snapshot structure, ice_mbx_snapshot, is initialized during
> + * driver initialization in ice_init_hw() using ice_mbx_init_snapshot().
> + * The struct ice_mbx_snapshot helps to track and traverse a static window of
> + * messages within the mailbox queue while looking for a malicious VF.
> + *
> + * 2. When the caller starts processing its mailbox queue in response to an
> + * interrupt, the structure ice_mbx_snapshot is expected to be cleared before
> + * the algorithm can be run for the first time for that interrupt. This can be
> + * done via ice_mbx_reset_snapshot().
> + *
> + * 3. For every message read by the caller from the MBX Queue, the caller must
> + * call the detection algorithm's entry function ice_mbx_vf_state_handler().
> + * Before every call to ice_mbx_vf_state_handler() the struct ice_mbx_data is
> + * filled as it is required to be passed to the algorithm.
> + *
> + * 4. Every time a message is read from the MBX queue, a VFId is received which
> + * is passed to the state handler. The boolean output is_malvf of the state
> + * handler ice_mbx_vf_state_handler() serves as an indicator to the caller
> + * whether this VF is malicious or not.
> + *
> + * 5. When a VF is identified to be malicious, the caller can send a message
> + * to the system administrator. The caller can invoke ice_mbx_report_malvf()
> + * to help determine if a malicious VF is to be reported or not. This function
> + * requires the caller to maintain a global bitmap to track all malicious VFs
> + * and pass that to ice_mbx_report_malvf() along with the VFID which was
> + * identified to be malicious by ice_mbx_vf_state_handler().
> + *
> + * 6. The global bitmap maintained by PF can be cleared completely if PF is in
> + * reset or the bit corresponding to a VF can be cleared if that VF is in reset.
> + * When a VF is shut down and brought back up, we assume that the new VF
> + * brought up is not malicious and hence report it if found malicious.
> + *
> + * 7. The function ice_mbx_reset_snapshot() is called to reset the information
> + * in ice_mbx_snapshot for every new mailbox interrupt handled.
> + *
> + * 8. The memory allocated for variables in ice_mbx_snapshot is de-allocated
> + * when driver is unloaded.
> + */
> +#define ICE_RQ_DATA_MASK(rq_data) ((rq_data) & PF_MBX_ARQH_ARQH_M)
> +/* Using the highest value for an unsigned 16-bit value 0xFFFF to
> + * indicate that the max messages check must be ignored in the
> + * algorithm
> + */
> +#define ICE_IGNORE_MAX_MSG_CNT	0xFFFF
> +
> +/**
> + * ice_mbx_traverse - Pass through mailbox snapshot
> + * @hw: pointer to the HW struct
> + * @new_state: new algorithm state
> + *
> + * Traversing the mailbox static snapshot without checking
> + * for malicious VFs.
> + */
> +static void
> +ice_mbx_traverse(struct ice_hw *hw,
> +		 enum ice_mbx_snapshot_state *new_state)
> +{
> +	struct ice_mbx_snap_buffer_data *snap_buf;
> +	u32 num_iterations;
> +
> +	snap_buf = &hw->mbx_snapshot.mbx_buf;
> +
> +	/* As mailbox buffer is circular, applying a mask
> +	 * on the incremented iteration count.
> +	 */
> +	num_iterations = ICE_RQ_DATA_MASK(++snap_buf->num_iterations);
> +
> +	/* Checking either of the below conditions to exit snapshot traversal:
> +	 * Condition-1: If the number of iterations in the mailbox is equal
> +	 * to the mailbox head, which would indicate that we have reached the
> +	 * end of the static snapshot.
> +	 * Condition-2: If the maximum messages serviced in the mailbox for a
> +	 * given interrupt is the highest possible value then there is no
> +	 * need to check if the number of messages processed is equal to it.
> +	 * If not, check if the number of messages processed is greater than
> +	 * or equal to the maximum number of mailbox entries serviced in the
> +	 * current work item.
> +	 */
> +	if (num_iterations == snap_buf->head ||
> +	    (snap_buf->max_num_msgs_mbx < ICE_IGNORE_MAX_MSG_CNT &&
> +	     ++snap_buf->num_msg_proc >= snap_buf->max_num_msgs_mbx))
> +		*new_state = ICE_MAL_VF_DETECT_STATE_NEW_SNAPSHOT;
> +}
> +
> +/**
> + * ice_mbx_detect_malvf - Detect malicious VF in snapshot
> + * @hw: pointer to the HW struct
> + * @vf_id: relative virtual function ID
> + * @new_state: new algorithm state
> + * @is_malvf: boolean output to indicate if VF is malicious
> + *
> + * This function tracks the number of asynchronous messages
> + * sent per VF and marks the VF as malicious if it exceeds
> + * the permissible number of messages to send.
> + */
> +static enum ice_status
> +ice_mbx_detect_malvf(struct ice_hw *hw, u16 vf_id,
> +		     enum ice_mbx_snapshot_state *new_state,
> +		     bool *is_malvf)
> +{
> +	struct ice_mbx_snapshot *snap = &hw->mbx_snapshot;
> +
> +	if (vf_id >= snap->mbx_vf.vfcntr_len)
> +		return ICE_ERR_OUT_OF_RANGE;
> +
> +	/* increment the message count in the VF array */
> +	snap->mbx_vf.vf_cntr[vf_id]++;
> +
> +	if (snap->mbx_vf.vf_cntr[vf_id] >= ICE_ASYNC_VF_MSG_THRESHOLD)
> +		*is_malvf = true;
> +
> +	/* continue to iterate through the mailbox snapshot */
> +	ice_mbx_traverse(hw, new_state);
> +
> +	return 0;
> +}
> +
> +/**
> + * ice_mbx_reset_snapshot - Reset mailbox snapshot structure
> + * @snap: pointer to mailbox snapshot structure in the ice_hw struct
> + *
> + * Reset the mailbox snapshot structure and clear VF counter array.
> + */
> +static void ice_mbx_reset_snapshot(struct ice_mbx_snapshot *snap)
> +{
> +	u32 vfcntr_len;
> +
> +	if (!snap || !snap->mbx_vf.vf_cntr)
> +		return;
> +
> +	/* Clear VF counters. */
> +	vfcntr_len = snap->mbx_vf.vfcntr_len;
> +	if (vfcntr_len)
> +		memset(snap->mbx_vf.vf_cntr, 0,
> +		       (vfcntr_len * sizeof(*snap->mbx_vf.vf_cntr)));
> +
> +	/* Reset mailbox snapshot for a new capture. */
> +	memset(&snap->mbx_buf, 0, sizeof(snap->mbx_buf));
> +	snap->mbx_buf.state = ICE_MAL_VF_DETECT_STATE_NEW_SNAPSHOT;
> +}
> +
> +/**
> + * ice_mbx_vf_state_handler - Handle states of the overflow algorithm
> + * @hw: pointer to the HW struct
> + * @mbx_data: pointer to structure containing mailbox data
> + * @vf_id: relative virtual function (VF) ID
> + * @is_malvf: boolean output to indicate if VF is malicious
> + *
> + * The function serves as an entry point for the malicious VF
> + * detection algorithm by handling the different states and state
> + * transitions of the algorithm:
> + * New snapshot: This state is entered when creating a new static
> + * snapshot. The data from any previous mailbox snapshot is
> + * cleared and a new capture of the mailbox head and tail is
> + * logged. This will be the new static snapshot to detect
> + * asynchronous messages sent by VFs. On capturing the snapshot
> + * and depending on whether the number of pending messages in that
> + * snapshot exceed the watermark value, the state machine enters
> + * traverse or detect states.
> + * Traverse: If pending message count is below watermark then iterate
> + * through the snapshot without any action on VF.
> + * Detect: If pending message count exceeds watermark traverse
> + * the static snapshot and look for a malicious VF.
> + */
> +enum ice_status
> +ice_mbx_vf_state_handler(struct ice_hw *hw,
> +			 struct ice_mbx_data *mbx_data, u16 vf_id,
> +			 bool *is_malvf)
> +{
> +	struct ice_mbx_snapshot *snap = &hw->mbx_snapshot;
> +	struct ice_mbx_snap_buffer_data *snap_buf;
> +	struct ice_ctl_q_info *cq = &hw->mailboxq;
> +	enum ice_mbx_snapshot_state new_state;
> +	enum ice_status status = 0;
> +
> +	if (!is_malvf || !mbx_data)
> +		return ICE_ERR_BAD_PTR;
> +
> +	/* When entering the mailbox state machine assume that the VF
> +	 * is not malicious until detected.
> +	 */
> +	*is_malvf = false;
> +
> +	/* Checking if max messages allowed to be processed while servicing
> +	 * the current interrupt is not less than the defined AVF message
> +	 * threshold.
> +	 */
> +	if (mbx_data->max_num_msgs_mbx <= ICE_ASYNC_VF_MSG_THRESHOLD)
> +		return ICE_ERR_INVAL_SIZE;
> +
> +	/* The watermark value should not be less than the threshold limit
> +	 * set for the number of asynchronous messages a VF can send to the
> +	 * mailbox, nor should it be greater than the maximum number of
> +	 * messages in the mailbox serviced in the current interrupt.
> +	 */
> +	if (mbx_data->async_watermark_val < ICE_ASYNC_VF_MSG_THRESHOLD ||
> +	    mbx_data->async_watermark_val > mbx_data->max_num_msgs_mbx)
> +		return ICE_ERR_PARAM;
> +
> +	new_state = ICE_MAL_VF_DETECT_STATE_INVALID;
> +	snap_buf = &snap->mbx_buf;
> +
> +	switch (snap_buf->state) {
> +	case ICE_MAL_VF_DETECT_STATE_NEW_SNAPSHOT:
> +		/* Clear any previously held data in mailbox snapshot
> +		 * structure.
> +		 */
> +		ice_mbx_reset_snapshot(snap);
> +
> +		/* Collect the pending ARQ count, number of messages
> +		 * processed and the maximum number of messages allowed to
> +		 * be processed from the mailbox for the current interrupt.
> +		 */
> +		snap_buf->num_pending_arq = mbx_data->num_pending_arq;
> +		snap_buf->num_msg_proc = mbx_data->num_msg_proc;
> +		snap_buf->max_num_msgs_mbx = mbx_data->max_num_msgs_mbx;
> +
> +		/* Capture a new static snapshot of the mailbox by logging the
> +		 * head and tail of the snapshot and set num_iterations to the
> +		 * tail value to mark the start of the iteration through the
> +		 * snapshot.
> +		 */
> +		snap_buf->head = ICE_RQ_DATA_MASK(cq->rq.next_to_clean +
> +						  mbx_data->num_pending_arq);
> +		snap_buf->tail = ICE_RQ_DATA_MASK(cq->rq.next_to_clean - 1);
> +		snap_buf->num_iterations = snap_buf->tail;
> +
> +		/* Pending ARQ messages returned by ice_clean_rq_elem
> +		 * is the difference between the head and tail of the
> +		 * mailbox queue. Comparing this value against the watermark
> +		 * helps to check if we potentially have malicious VFs.
> +		 */
> +		if (snap_buf->num_pending_arq >=
> +		    mbx_data->async_watermark_val) {
> +			new_state = ICE_MAL_VF_DETECT_STATE_DETECT;
> +			status = ice_mbx_detect_malvf(hw, vf_id, &new_state,
> +						      is_malvf);
> +		} else {
> +			new_state = ICE_MAL_VF_DETECT_STATE_TRAVERSE;
> +			ice_mbx_traverse(hw, &new_state);
> +		}
> +		break;
> +		break;
> +
> +	case ICE_MAL_VF_DETECT_STATE_TRAVERSE:
> +		new_state = ICE_MAL_VF_DETECT_STATE_TRAVERSE;
> +		ice_mbx_traverse(hw, &new_state);
> +		break;
> +
> +	case ICE_MAL_VF_DETECT_STATE_DETECT:
> +		new_state = ICE_MAL_VF_DETECT_STATE_DETECT;
> +		status = ice_mbx_detect_malvf(hw, vf_id, &new_state,
> +					      is_malvf);
> +		break;
> +
> +	default:
> +		new_state = ICE_MAL_VF_DETECT_STATE_INVALID;
> +		status = ICE_ERR_CFG;
> +	}
> +
> +	snap_buf->state = new_state;
> +
> +	return status;
> +}
> +
> +/**
> + * ice_mbx_report_malvf - Track and note malicious VF
> + * @hw: pointer to the HW struct
> + * @all_malvfs: all malicious VFs tracked by PF
> + * @bitmap_len: length of bitmap in bits
> + * @vf_id: relative virtual function ID of the malicious VF
> + * @report_malvf: boolean to indicate if malicious VF must be reported
> + *
> + * This function will update a bitmap that keeps track of the malicious
> + * VFs attached to the PF. A malicious VF must be reported only once if
> + * discovered between VF resets or loading so the function checks
> + * the input vf_id against the bitmap to verify if the VF has been
> + * detected in any previous mailbox iterations.
> + */
> +enum ice_status
> +ice_mbx_report_malvf(struct ice_hw *hw, unsigned long *all_malvfs,
> +		     u16 bitmap_len, u16 vf_id, bool *report_malvf)
> +{
> +	if (!all_malvfs || !report_malvf)
> +		return ICE_ERR_PARAM;
> +
> +	*report_malvf = false;
> +
> +	if (bitmap_len < hw->mbx_snapshot.mbx_vf.vfcntr_len)
> +		return ICE_ERR_INVAL_SIZE;
> +
> +	if (vf_id >= bitmap_len)
> +		return ICE_ERR_OUT_OF_RANGE;
> +
> +	/* If the VF is not already marked in the bitmap, set its bit and
> +	 * flag it to be reported.
> +	 */
> +	if (!test_and_set_bit(vf_id, all_malvfs))
> +		*report_malvf = true;
> +
> +	return 0;
> +}
> +
> +/**
> + * ice_mbx_clear_malvf - Clear VF bitmap and counter for VF ID
> + * @snap: pointer to the mailbox snapshot structure
> + * @all_malvfs: all malicious VFs tracked by PF
> + * @bitmap_len: length of bitmap in bits
> + * @vf_id: relative virtual function ID of the malicious VF
> + *
> + * In case of a VF reset, this function can be called to clear
> + * the bit corresponding to the VF ID in the bitmap tracking all
> + * malicious VFs attached to the PF. The function also clears the
> + * VF counter array at the index of the VF ID. This is to ensure
> + * that the new VF loaded is not considered malicious before going
> + * through the overflow detection algorithm.
> + */
> +enum ice_status
> +ice_mbx_clear_malvf(struct ice_mbx_snapshot *snap, unsigned long *all_malvfs,
> +		    u16 bitmap_len, u16 vf_id)
> +{
> +	if (!snap || !all_malvfs)
> +		return ICE_ERR_PARAM;
> +
> +	if (bitmap_len < snap->mbx_vf.vfcntr_len)
> +		return ICE_ERR_INVAL_SIZE;
> +
> +	/* Ensure VF ID value is not larger than bitmap or VF counter length */
> +	if (vf_id >= bitmap_len || vf_id >= snap->mbx_vf.vfcntr_len)
> +		return ICE_ERR_OUT_OF_RANGE;
> +
> +	/* Clear VF ID bit in the bitmap tracking malicious VFs attached to
> +	 * the PF.
> +	 */
> +	clear_bit(vf_id, all_malvfs);
> +
> +	/* Clear the VF counter in the mailbox snapshot structure for that
> +	 * VF ID. This is to ensure that if a VF is unloaded and a new one
> +	 * brought back up with the same VF ID for a snapshot currently in
> +	 * traversal or detect state, the counter for that VF ID does not
> +	 * increment on top of existing values in the mailbox overflow
> +	 * detection algorithm.
> +	 */
> +	snap->mbx_vf.vf_cntr[vf_id] = 0;
> +
> +	return 0;
> +}
> +
> +/**
> + * ice_mbx_init_snapshot - Initialize mailbox snapshot structure
> + * @hw: pointer to the hardware structure
> + * @vf_count: number of VFs allocated on a PF
> + *
> + * Clear the mailbox snapshot structure and allocate memory
> + * for the VF counter array based on the number of VFs allocated
> + * on that PF.
> + *
> + * Assumption: This function will assume ice_get_caps() has already
> + * been called to ensure that the vf_count can be compared against the
> + * number of VFs supported as defined in the functional capabilities of
> + * the device.
> + */
> +enum ice_status ice_mbx_init_snapshot(struct ice_hw *hw, u16 vf_count)
> +{
> +	struct ice_mbx_snapshot *snap = &hw->mbx_snapshot;
> +
> +	/* Ensure that the number of VFs allocated is non-zero and
> +	 * is not greater than the number of supported VFs defined in
> +	 * the functional capabilities of the PF.
> +	 */
> +	if (!vf_count || vf_count > hw->func_caps.num_allocd_vfs)
> +		return ICE_ERR_INVAL_SIZE;
> +
> +	snap->mbx_vf.vf_cntr = devm_kcalloc(ice_hw_to_dev(hw), vf_count,
> +					    sizeof(*snap->mbx_vf.vf_cntr),
> +					    GFP_KERNEL);
> +	if (!snap->mbx_vf.vf_cntr)
> +		return ICE_ERR_NO_MEMORY;
> +
> +	/* Setting the VF counter length to the number of allocated
> +	 * VFs for given PF's functional capabilities.
> +	 */
> +	snap->mbx_vf.vfcntr_len = vf_count;
> +
> +	/* Clear mbx_buf in the mailbox snapshot structure and set the
> +	 * mailbox snapshot state to a new capture.
> +	 */
> +	memset(&snap->mbx_buf, 0, sizeof(snap->mbx_buf));
> +	snap->mbx_buf.state = ICE_MAL_VF_DETECT_STATE_NEW_SNAPSHOT;
> +
> +	return 0;
> +}
> +
> +/**
> + * ice_mbx_deinit_snapshot - Free mailbox snapshot structure
> + * @hw: pointer to the hardware structure
> + *
> + * Clear the mailbox snapshot structure and free the VF counter array.
> + */
> +void ice_mbx_deinit_snapshot(struct ice_hw *hw)
> +{
> +	struct ice_mbx_snapshot *snap = &hw->mbx_snapshot;
> +
> +	/* Free VF counter array and reset VF counter length */
> +	devm_kfree(ice_hw_to_dev(hw), snap->mbx_vf.vf_cntr);
> +	snap->mbx_vf.vfcntr_len = 0;
> +
> +	/* Clear mbx_buf in the mailbox snapshot structure */
> +	memset(&snap->mbx_buf, 0, sizeof(snap->mbx_buf));
> +}
> diff --git a/drivers/net/ethernet/intel/ice/ice_sriov.h b/drivers/net/ethernet/intel/ice/ice_sriov.h
> index 3d78a0795138..161dc55d9e9c 100644
> --- a/drivers/net/ethernet/intel/ice/ice_sriov.h
> +++ b/drivers/net/ethernet/intel/ice/ice_sriov.h
> @@ -4,7 +4,14 @@
>  #ifndef _ICE_SRIOV_H_
>  #define _ICE_SRIOV_H_
> 
> -#include "ice_common.h"
> +#include "ice_type.h"
> +#include "ice_controlq.h"
> +
> +/* Defining the mailbox message threshold as 63 asynchronous
> + * pending messages. Normal VF functionality does not require
> + * sending more than 63 asynchronous pending messages.
> + */
> +#define ICE_ASYNC_VF_MSG_THRESHOLD	63
> 
>  #ifdef CONFIG_PCI_IOV
>  enum ice_status
> @@ -12,6 +19,17 @@ ice_aq_send_msg_to_vf(struct ice_hw *hw, u16 vfid, u32 v_opcode, u32 v_retval,
>  		      u8 *msg, u16 msglen, struct ice_sq_cd *cd);
> 
>  u32 ice_conv_link_speed_to_virtchnl(bool adv_link_support, u16 link_speed);
> +enum ice_status
> +ice_mbx_vf_state_handler(struct ice_hw *hw, struct ice_mbx_data *mbx_data,
> +			 u16 vf_id, bool *is_mal_vf);
> +enum ice_status
> +ice_mbx_clear_malvf(struct ice_mbx_snapshot *snap, unsigned long *all_malvfs,
> +		    u16 bitmap_len, u16 vf_id);
> +enum ice_status ice_mbx_init_snapshot(struct ice_hw *hw, u16 vf_count);
> +void ice_mbx_deinit_snapshot(struct ice_hw *hw);
> +enum ice_status
> +ice_mbx_report_malvf(struct ice_hw *hw, unsigned long *all_malvfs,
> +		     u16 bitmap_len, u16 vf_id, bool *report_malvf);
>  #else /* CONFIG_PCI_IOV */
>  static inline enum ice_status
>  ice_aq_send_msg_to_vf(struct ice_hw __always_unused *hw,
> diff --git a/drivers/net/ethernet/intel/ice/ice_type.h b/drivers/net/ethernet/intel/ice/ice_type.h
> index eae7ba73731e..420fd487fd57 100644
> --- a/drivers/net/ethernet/intel/ice/ice_type.h
> +++ b/drivers/net/ethernet/intel/ice/ice_type.h
> @@ -643,6 +643,80 @@ struct ice_fw_log_cfg {
>  	struct ice_fw_log_evnt evnts[ICE_AQC_FW_LOG_ID_MAX];
>  };
> 
> +/* Enum defining the different states of the mailbox snapshot in the
> + * PF-VF mailbox overflow detection algorithm. The snapshot can be in
> + * states:
> + * 1. ICE_MAL_VF_DETECT_STATE_NEW_SNAPSHOT - generate a new static
> + * snapshot within the mailbox buffer.
> + * 2. ICE_MAL_VF_DETECT_STATE_TRAVERSE - iterate through the mailbox
> + * snapshot
> + * 3. ICE_MAL_VF_DETECT_STATE_DETECT - track the messages sent per VF
> + * via the mailbox and mark any VFs sending more messages than the
> + * threshold limit set.
> + * 4. ICE_MAL_VF_DETECT_STATE_INVALID - Invalid mailbox state set to
> + * 0xFFFFFFFF.
> + */
> +enum ice_mbx_snapshot_state {
> +	ICE_MAL_VF_DETECT_STATE_NEW_SNAPSHOT = 0,
> +	ICE_MAL_VF_DETECT_STATE_TRAVERSE,
> +	ICE_MAL_VF_DETECT_STATE_DETECT,
> +	ICE_MAL_VF_DETECT_STATE_INVALID = 0xFFFFFFFF,
> +};
> +
> +/* Structure to hold information of the static snapshot and the mailbox
> + * buffer data used to generate and track the snapshot.
> + * 1. state: the state of the mailbox snapshot in the malicious VF
> + * detection state handler ice_mbx_vf_state_handler()
> + * 2. head: head of the mailbox snapshot in a circular mailbox buffer
> + * 3. tail: tail of the mailbox snapshot in a circular mailbox buffer
> + * 4. num_iterations: number of messages traversed in circular mailbox
> + * buffer
> + * 5. num_msg_proc: number of messages processed in mailbox
> + * 6. num_pending_arq: number of pending asynchronous messages
> + * 7. max_num_msgs_mbx: maximum messages in mailbox for currently
> + * serviced work item or interrupt.
> + */
> +struct ice_mbx_snap_buffer_data {
> +	enum ice_mbx_snapshot_state state;
> +	u32 head;
> +	u32 tail;
> +	u32 num_iterations;
> +	u16 num_msg_proc;
> +	u16 num_pending_arq;
> +	u16 max_num_msgs_mbx;
> +};
> +
> +/* Structure to track messages sent by VFs on mailbox:
> + * 1. vf_cntr: a counter array of VFs to track the number of
> + * asynchronous messages sent by each VF
> + * 2. vfcntr_len: number of entries in VF counter array
> + */
> +struct ice_mbx_vf_counter {
> +	u32 *vf_cntr;
> +	u32 vfcntr_len;
> +};
> +
> +/* Structure to hold data relevant to the captured static snapshot
> + * of the PF-VF mailbox.
> + */
> +struct ice_mbx_snapshot {
> +	struct ice_mbx_snap_buffer_data mbx_buf;
> +	struct ice_mbx_vf_counter mbx_vf;
> +};
> +
> +/* Structure to hold data to be used for capturing or updating a
> + * static snapshot.
> + * 1. num_msg_proc: number of messages processed in mailbox
> + * 2. num_pending_arq: number of pending asynchronous messages
> + * 3. max_num_msgs_mbx: maximum messages in mailbox for currently
> + * serviced work item or interrupt.
> + * 4. async_watermark_val: an upper threshold set by the caller to
> + * determine if the pending ARQ count is large enough to assume that
> + * there is the possibility of a malicious VF.
> + */
> +struct ice_mbx_data {
> +	u16 num_msg_proc;
> +	u16 num_pending_arq;
> +	u16 max_num_msgs_mbx;
> +	u16 async_watermark_val;
> +};
> +
>  /* Port hardware description */
>  struct ice_hw {
>  	u8 __iomem *hw_addr;
> @@ -774,6 +848,7 @@ struct ice_hw {
>  	DECLARE_BITMAP(fdir_perfect_fltr, ICE_FLTR_PTYPE_MAX);
>  	struct mutex rss_locks;	/* protect RSS configuration */
>  	struct list_head rss_list_head;
> +	struct ice_mbx_snapshot mbx_snapshot;
>  };
> 
>  /* Statistics collected by each port, VSI, VEB, and S-channel */
> diff --git a/drivers/net/ethernet/intel/ice/ice_virtchnl_pf.c b/drivers/net/ethernet/intel/ice/ice_virtchnl_pf.c
> index 6be6a54eb29c..0da9c84ed30f 100644
> --- a/drivers/net/ethernet/intel/ice/ice_virtchnl_pf.c
> +++ b/drivers/net/ethernet/intel/ice/ice_virtchnl_pf.c
> @@ -424,6 +424,12 @@ void ice_free_vfs(struct ice_pf *pf)
>  			wr32(hw, GLGEN_VFLRSTAT(reg_idx), BIT(bit_idx));
>  		}
>  	}
> +
> +	/* clear malicious info if the VFs are getting released */
> +	for (i = 0; i < tmp; i++)
> +		if (ice_mbx_clear_malvf(&hw->mbx_snapshot, pf->malvfs,
> +					ICE_MAX_VF_COUNT, i))
> +			dev_dbg(dev, "failed to clear malicious VF state for VF %u\n", i);
> +
>  	clear_bit(__ICE_VF_DIS, pf->state);
>  	clear_bit(ICE_FLAG_SRIOV_ENA, pf->flags);
>  }
> 
> @@ -1262,6 +1268,11 @@ bool ice_reset_all_vfs(struct ice_pf *pf, bool is_vflr)
>  	if (!pf->num_alloc_vfs)
>  		return false;
> 
> +	/* clear all malicious info if the VFs are getting reset */
> +	ice_for_each_vf(pf, i)
> +		if (ice_mbx_clear_malvf(&hw->mbx_snapshot, pf->malvfs,
> +					ICE_MAX_VF_COUNT, i))
> +			dev_dbg(dev, "failed to clear malicious VF state for VF %u\n", i);
> +
>  	/* If VFs have been disabled, there is no need to reset */
>  	if (test_and_set_bit(__ICE_VF_DIS, pf->state))
>  		return false;
> @@ -1447,6 +1458,10 @@ bool ice_reset_vf(struct ice_vf *vf, bool is_vflr)
> 
>  	ice_vf_post_vsi_rebuild(vf);
> 
> +	/* if the VF has been reset allow it to come up again */
> +	if (ice_mbx_clear_malvf(&hw->mbx_snapshot, pf->malvfs,
> +				ICE_MAX_VF_COUNT, vf->vf_id))
> +		dev_dbg(dev, "failed to clear malicious VF state for VF %u\n",
> +			vf->vf_id);
> +
>  	return true;
>  }
> 
> @@ -1779,6 +1794,7 @@ int ice_sriov_configure(struct pci_dev *pdev, int num_vfs)
>  {
>  	struct ice_pf *pf = pci_get_drvdata(pdev);
>  	struct device *dev = ice_pf_to_dev(pf);
> +	enum ice_status status;
>  	int err;
> 
>  	err = ice_check_sriov_allowed(pf);
> @@ -1787,6 +1803,7 @@ int ice_sriov_configure(struct pci_dev *pdev, int num_vfs)
> 
>  	if (!num_vfs) {
>  		if (!pci_vfs_assigned(pdev)) {
> +			ice_mbx_deinit_snapshot(&pf->hw);
>  			ice_free_vfs(pf);
>  			if (pf->lag)
>  				ice_enable_lag(pf->lag);
> @@ -1797,9 +1814,15 @@ int ice_sriov_configure(struct pci_dev *pdev, int num_vfs)
>  		return -EBUSY;
>  	}
> 
> +	status = ice_mbx_init_snapshot(&pf->hw, num_vfs);
> +	if (status)
> +		return ice_status_to_errno(status);
> +
>  	err = ice_pci_sriov_ena(pf, num_vfs);
> -	if (err)
> +	if (err) {
> +		ice_mbx_deinit_snapshot(&pf->hw);
>  		return err;
> +	}
> 
>  	if (pf->lag)
>  		ice_disable_lag(pf->lag);
> @@ -4382,3 +4405,70 @@ void ice_restore_all_vfs_msi_state(struct pci_dev *pdev)
>  		}
>  	}
>  }
> +
> +/**
> + * ice_is_malicious_vf - helper function to detect a malicious VF
> + * @pf: ptr to struct ice_pf
> + * @event: pointer to the AQ event
> + * @num_msg_proc: the number of messages processed so far
> + * @num_msg_pending: the number of messages pending in the admin queue
> + */
> +bool
> +ice_is_malicious_vf(struct ice_pf *pf, struct ice_rq_event_info *event,
> +		    u16 num_msg_proc, u16 num_msg_pending)
> +{
> +	s16 vf_id = le16_to_cpu(event->desc.retval);
> +	struct device *dev = ice_pf_to_dev(pf);
> +	struct ice_mbx_data mbxdata;
> +	enum ice_status status;
> +	bool malvf = false;
> +	struct ice_vf *vf;
> +
> +	if (ice_validate_vf_id(pf, vf_id))
> +		return false;
> +
> +	vf = &pf->vf[vf_id];
> +	/* Check if VF is disabled. */
> +	if (test_bit(ICE_VF_STATE_DIS, vf->vf_states))
> +		return false;
> +
> +	mbxdata.num_msg_proc = num_msg_proc;
> +	mbxdata.num_pending_arq = num_msg_pending;
> +	mbxdata.max_num_msgs_mbx = pf->hw.mailboxq.num_rq_entries;
> +#define ICE_MBX_OVERFLOW_WATERMARK 64
> +	mbxdata.async_watermark_val = ICE_MBX_OVERFLOW_WATERMARK;
> +
> +	/* check to see if we have a malicious VF */
> +	status = ice_mbx_vf_state_handler(&pf->hw, &mbxdata, vf_id, &malvf);
> +	if (status)
> +		return false;
> +
> +	if (malvf) {
> +		bool report_vf = false;
> +
> +		/* if the VF is malicious and we haven't let the user
> +		 * know about it, then let them know now
> +		 */
> +		status = ice_mbx_report_malvf(&pf->hw, pf->malvfs,
> +					      ICE_MAX_VF_COUNT, vf_id,
> +					      &report_vf);
> +		if (status)
> +			dev_dbg(dev, "Error reporting malicious VF\n");
> +
> +		if (report_vf) {
> +			struct ice_vsi *pf_vsi = ice_get_main_vsi(pf);
> +
> +			if (pf_vsi)
> +				dev_warn(dev, "VF MAC %pM on PF MAC %pM is generating asynchronous messages and may be overflowing the PF message queue. Please see the Adapter User Guide for more information\n",
> +					 &vf->dev_lan_addr.addr[0],
> +					 pf_vsi->netdev->dev_addr);
> +		}
> +
> +		return true;
> +	}
> +
> +	/* if there was an error in detection or the VF is not malicious then
> +	 * return false
> +	 */
> +	return false;
> +}
> diff --git a/drivers/net/ethernet/intel/ice/ice_virtchnl_pf.h b/drivers/net/ethernet/intel/ice/ice_virtchnl_pf.h
> index afd5c22015e1..53391ac1f068 100644
> --- a/drivers/net/ethernet/intel/ice/ice_virtchnl_pf.h
> +++ b/drivers/net/ethernet/intel/ice/ice_virtchnl_pf.h
> @@ -126,6 +126,9 @@ void ice_vc_notify_reset(struct ice_pf *pf);
>  bool ice_reset_all_vfs(struct ice_pf *pf, bool is_vflr);
>  bool ice_reset_vf(struct ice_vf *vf, bool is_vflr);
>  void ice_restore_all_vfs_msi_state(struct pci_dev *pdev);
> +bool
> +ice_is_malicious_vf(struct ice_pf *pf, struct ice_rq_event_info *event,
> +		    u16 num_msg_proc, u16 num_msg_pending);
> 
>  int
>  ice_set_vf_port_vlan(struct net_device *netdev, int vf_id, u16 vlan_id, u8 qos,
> @@ -165,6 +168,15 @@ bool ice_vc_isvalid_vsi_id(struct ice_vf *vf, u16 vsi_id);
>  #define ice_print_vf_rx_mdd_event(vf) do {} while (0)
>  #define ice_restore_all_vfs_msi_state(pdev) do {} while (0)
> 
> +static inline bool
> +ice_is_malicious_vf(struct ice_pf __always_unused *pf,
> +		    struct ice_rq_event_info __always_unused *event,
> +		    u16 __always_unused num_msg_proc,
> +		    u16 __always_unused num_msg_pending)
> +{
> +	return false;
> +}
> +
>  static inline bool
>  ice_reset_all_vfs(struct ice_pf __always_unused *pf,
>  		  bool __always_unused is_vflr)

Tested-by: Konrad Jankowski <konrad0.jankowski@intel.com>

