* [Intel-wired-lan] [PATCH S17 00/17] Implementation updates for ice
@ 2019-02-28 23:25 Anirudh Venkataramanan
  2019-02-28 23:25 ` [Intel-wired-lan] [PATCH S17 01/17] ice: Calculate ITR increment based on direct calculation Anirudh Venkataramanan
                   ` (16 more replies)
  0 siblings, 17 replies; 35+ messages in thread
From: Anirudh Venkataramanan @ 2019-02-28 23:25 UTC (permalink / raw)
  To: intel-wired-lan

Akeem G Abodunrin (2):
  ice: Return configuration error without queue to disable
  ice: Fix issue when adding more than allowed VLANs

Anirudh Venkataramanan (1):
  ice: Create framework for VSI queue context

Brett Creeley (8):
  ice: Calculate ITR increment based on direct calculation
  ice: Reduce scope of variable in ice_vsi_cfg_rxqs
  ice: Use ice_for_each_q_vector macro where possible
  ice: Add ability to update rx-usecs-high
  ice: Remove unnecessary wait when disabling/enabling Rx queues
  ice: Add reg_idx variable in ice_q_vector structure
  ice: Refactor link event flow
  ice: Use dev_err when ice_cfg_vsi_lan fails

Bruce Allan (1):
  ice: Resolve static analysis reported issue

Jesse Brandeburg (1):
  ice: Use pf instead of vsi-back

Maciej Fijalkowski (1):
  ice: Validate ring existence and its q_vector per VSI

Md Fahad Iqbal Polash (1):
  ice: Remove runtime change of PFINT_OICR_ENA register

Paul Greenwalt (1):
  ice: Add 52 byte RSS hash key support

Tony Nguyen (1):
  ice: Add missing PHY type to link settings

 drivers/net/ethernet/intel/ice/ice.h             |  23 +-
 drivers/net/ethernet/intel/ice/ice_adminq_cmd.h  |   3 +
 drivers/net/ethernet/intel/ice/ice_common.c      |  91 +++++--
 drivers/net/ethernet/intel/ice/ice_common.h      |  11 +-
 drivers/net/ethernet/intel/ice/ice_ethtool.c     |  32 ++-
 drivers/net/ethernet/intel/ice/ice_lib.c         | 308 ++++++++++++++---------
 drivers/net/ethernet/intel/ice/ice_lib.h         |   1 +
 drivers/net/ethernet/intel/ice/ice_main.c        | 129 +++++-----
 drivers/net/ethernet/intel/ice/ice_sched.c       |  54 +++-
 drivers/net/ethernet/intel/ice/ice_switch.c      |  22 ++
 drivers/net/ethernet/intel/ice/ice_switch.h      |   9 +
 drivers/net/ethernet/intel/ice/ice_txrx.c        | 137 +++++-----
 drivers/net/ethernet/intel/ice/ice_txrx.h        |   1 +
 drivers/net/ethernet/intel/ice/ice_virtchnl_pf.c |  30 +--
 14 files changed, 541 insertions(+), 310 deletions(-)

-- 
2.14.5



* [Intel-wired-lan] [PATCH S17 01/17] ice: Calculate ITR increment based on direct calculation
  2019-02-28 23:25 [Intel-wired-lan] [PATCH S17 00/17] Implementation updates for ice Anirudh Venkataramanan
@ 2019-02-28 23:25 ` Anirudh Venkataramanan
  2019-03-08  0:26   ` Bowers, AndrewX
  2019-02-28 23:25 ` [Intel-wired-lan] [PATCH S17 02/17] ice: Create framework for VSI queue context Anirudh Venkataramanan
                   ` (15 subsequent siblings)
  16 siblings, 1 reply; 35+ messages in thread
From: Anirudh Venkataramanan @ 2019-02-28 23:25 UTC (permalink / raw)
  To: intel-wired-lan

From: Brett Creeley <brett.creeley@intel.com>

Currently, when calculating how much to increment ITR by inside
ice_update_itr(), we do some estimations and intermediate
calculations. Instead of estimating, do the calculation directly.
This gives a more accurate value and makes the code easier for the
next person to understand and update.

Also, remove the halving of the ITR value when latency driven,
because the ITR values are already very low at 100Gbps speeds.
This should help reach the desired ITR value faster.
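
For reference, the per-speed constants (17, 34, 43, 68, 85, 170) used in
the new ice_adjust_itr_by_size_and_speed() below come from
wmem_default * bits_per_byte * usecs_per_sec / link_rate, as described
in the function's comment. A minimal standalone sketch (not driver code)
that reproduces them, assuming wmem_default is 212992 bytes as stated in
that comment:

#include <stdio.h>

int main(void)
{
	unsigned long long wmem_default = 212992, usecs_per_sec = 1000000;
	unsigned long long gbps[] = { 100, 50, 40, 25, 20, 10 };
	unsigned int i;

	for (i = 0; i < 6; i++) {
		unsigned long long rate = gbps[i] * 1000000000ULL;
		/* wmem_default * bits_per_byte * usecs_per_sec / rate,
		 * rounded to the nearest integer
		 */
		unsigned long long coeff =
			(wmem_default * 8 * usecs_per_sec + rate / 2) / rate;

		printf("%llu Gbps -> %llu\n", gbps[i], coeff);
	}
	return 0;
}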

Signed-off-by: Brett Creeley <brett.creeley@intel.com>
Signed-off-by: Anirudh Venkataramanan <anirudh.venkataramanan@intel.com>
---
 drivers/net/ethernet/intel/ice/ice_txrx.c | 135 ++++++++++++++----------------
 1 file changed, 63 insertions(+), 72 deletions(-)

diff --git a/drivers/net/ethernet/intel/ice/ice_txrx.c b/drivers/net/ethernet/intel/ice/ice_txrx.c
index 4b08cb3be28e..fabee3e59eff 100644
--- a/drivers/net/ethernet/intel/ice/ice_txrx.c
+++ b/drivers/net/ethernet/intel/ice/ice_txrx.c
@@ -1097,19 +1097,69 @@ static int ice_clean_rx_irq(struct ice_ring *rx_ring, int budget)
 	return failure ? budget : (int)total_rx_pkts;
 }
 
-static unsigned int ice_itr_divisor(struct ice_port_info *pi)
+/**
+ * ice_adjust_itr_by_size_and_speed - Adjust ITR based on current traffic
+ * @port_info: port_info structure containing the current link speed
+ * @avg_pkt_size: average size of Tx or Rx packets based on clean routine
+ * @itr: itr value to update
+ *
+ * Calculate how big of an increment should be applied to the ITR value passed
+ * in based on wmem_default, SKB overhead, ethernet overhead, and the current
+ * link speed.
+ *
+ * The following is a calculation derived from:
+ *  wmem_default / (size + overhead) = desired_pkts_per_int
+ *  rate / bits_per_byte / (size + ethernet overhead) = pkt_rate
+ *  (desired_pkt_rate / pkt_rate) * usecs_per_sec = ITR value
+ *
+ * Assuming wmem_default is 212992 and overhead is 640 bytes per
+ * packet, (256 skb, 64 headroom, 320 shared info), we can reduce the
+ * formula down to:
+ *
+ *	 wmem_default * bits_per_byte * usecs_per_sec   pkt_size + 24
+ * ITR = -------------------------------------------- * --------------
+ *			     rate			pkt_size + 640
+ */
+static unsigned int
+ice_adjust_itr_by_size_and_speed(struct ice_port_info *port_info,
+				 unsigned int avg_pkt_size,
+				 unsigned int itr)
 {
-	switch (pi->phy.link_info.link_speed) {
+	switch (port_info->phy.link_info.link_speed) {
+	case ICE_AQ_LINK_SPEED_100GB:
+		itr += DIV_ROUND_UP(17 * (avg_pkt_size + 24),
+				    avg_pkt_size + 640);
+		break;
+	case ICE_AQ_LINK_SPEED_50GB:
+		itr += DIV_ROUND_UP(34 * (avg_pkt_size + 24),
+				    avg_pkt_size + 640);
+		break;
 	case ICE_AQ_LINK_SPEED_40GB:
-		return ICE_ITR_ADAPTIVE_MIN_INC * 1024;
+		itr += DIV_ROUND_UP(43 * (avg_pkt_size + 24),
+				    avg_pkt_size + 640);
+		break;
 	case ICE_AQ_LINK_SPEED_25GB:
+		itr += DIV_ROUND_UP(68 * (avg_pkt_size + 24),
+				    avg_pkt_size + 640);
+		break;
 	case ICE_AQ_LINK_SPEED_20GB:
-		return ICE_ITR_ADAPTIVE_MIN_INC * 512;
-	case ICE_AQ_LINK_SPEED_100MB:
-		return ICE_ITR_ADAPTIVE_MIN_INC * 32;
+		itr += DIV_ROUND_UP(85 * (avg_pkt_size + 24),
+				    avg_pkt_size + 640);
+		break;
+	case ICE_AQ_LINK_SPEED_10GB:
+		/* fall through */
 	default:
-		return ICE_ITR_ADAPTIVE_MIN_INC * 256;
+		itr += DIV_ROUND_UP(170 * (avg_pkt_size + 24),
+				    avg_pkt_size + 640);
+		break;
 	}
+
+	if ((itr & ICE_ITR_MASK) > ICE_ITR_ADAPTIVE_MAX_USECS) {
+		itr &= ICE_ITR_ADAPTIVE_LATENCY;
+		itr += ICE_ITR_ADAPTIVE_MAX_USECS;
+	}
+
+	return itr;
 }
 
 /**
@@ -1128,8 +1178,8 @@ static unsigned int ice_itr_divisor(struct ice_port_info *pi)
 static void
 ice_update_itr(struct ice_q_vector *q_vector, struct ice_ring_container *rc)
 {
-	unsigned int avg_wire_size, packets, bytes, itr;
 	unsigned long next_update = jiffies;
+	unsigned int packets, bytes, itr;
 	bool container_is_rx;
 
 	if (!rc->ring || !ITR_IS_DYNAMIC(rc->itr_setting))
@@ -1174,7 +1224,7 @@ ice_update_itr(struct ice_q_vector *q_vector, struct ice_ring_container *rc)
 		if (packets && packets < 4 && bytes < 9000 &&
 		    (q_vector->tx.target_itr & ICE_ITR_ADAPTIVE_LATENCY)) {
 			itr = ICE_ITR_ADAPTIVE_LATENCY;
-			goto adjust_by_size;
+			goto adjust_by_size_and_speed;
 		}
 	} else if (packets < 4) {
 		/* If we have Tx and Rx ITR maxed and Tx ITR is running in
@@ -1242,70 +1292,11 @@ ice_update_itr(struct ice_q_vector *q_vector, struct ice_ring_container *rc)
 	 */
 	itr = ICE_ITR_ADAPTIVE_BULK;
 
-adjust_by_size:
-	/* If packet counts are 256 or greater we can assume we have a gross
-	 * overestimation of what the rate should be. Instead of trying to fine
-	 * tune it just use the formula below to try and dial in an exact value
-	 * gives the current packet size of the frame.
-	 */
-	avg_wire_size = bytes / packets;
+adjust_by_size_and_speed:
 
-	/* The following is a crude approximation of:
-	 *  wmem_default / (size + overhead) = desired_pkts_per_int
-	 *  rate / bits_per_byte / (size + ethernet overhead) = pkt_rate
-	 *  (desired_pkt_rate / pkt_rate) * usecs_per_sec = ITR value
-	 *
-	 * Assuming wmem_default is 212992 and overhead is 640 bytes per
-	 * packet, (256 skb, 64 headroom, 320 shared info), we can reduce the
-	 * formula down to
-	 *
-	 *  (170 * (size + 24)) / (size + 640) = ITR
-	 *
-	 * We first do some math on the packet size and then finally bitshift
-	 * by 8 after rounding up. We also have to account for PCIe link speed
-	 * difference as ITR scales based on this.
-	 */
-	if (avg_wire_size <= 60) {
-		/* Start at 250k ints/sec */
-		avg_wire_size = 4096;
-	} else if (avg_wire_size <= 380) {
-		/* 250K ints/sec to 60K ints/sec */
-		avg_wire_size *= 40;
-		avg_wire_size += 1696;
-	} else if (avg_wire_size <= 1084) {
-		/* 60K ints/sec to 36K ints/sec */
-		avg_wire_size *= 15;
-		avg_wire_size += 11452;
-	} else if (avg_wire_size <= 1980) {
-		/* 36K ints/sec to 30K ints/sec */
-		avg_wire_size *= 5;
-		avg_wire_size += 22420;
-	} else {
-		/* plateau@a limit of 30K ints/sec */
-		avg_wire_size = 32256;
-	}
-
-	/* If we are in low latency mode halve our delay which doubles the
-	 * rate to somewhere between 100K to 16K ints/sec
-	 */
-	if (itr & ICE_ITR_ADAPTIVE_LATENCY)
-		avg_wire_size >>= 1;
-
-	/* Resultant value is 256 times larger than it needs to be. This
-	 * gives us room to adjust the value as needed to either increase
-	 * or decrease the value based on link speeds of 10G, 2.5G, 1G, etc.
-	 *
-	 * Use addition as we have already recorded the new latency flag
-	 * for the ITR value.
-	 */
-	itr += DIV_ROUND_UP(avg_wire_size,
-			    ice_itr_divisor(q_vector->vsi->port_info)) *
-	       ICE_ITR_ADAPTIVE_MIN_INC;
-
-	if ((itr & ICE_ITR_MASK) > ICE_ITR_ADAPTIVE_MAX_USECS) {
-		itr &= ICE_ITR_ADAPTIVE_LATENCY;
-		itr += ICE_ITR_ADAPTIVE_MAX_USECS;
-	}
+	/* based on checks above packets cannot be 0 so division is safe */
+	itr = ice_adjust_itr_by_size_and_speed(q_vector->vsi->port_info,
+					       bytes / packets, itr);
 
 clear_counts:
 	/* write back value */
-- 
2.14.5



* [Intel-wired-lan] [PATCH S17 02/17] ice: Create framework for VSI queue context
  2019-02-28 23:25 [Intel-wired-lan] [PATCH S17 00/17] Implementation updates for ice Anirudh Venkataramanan
  2019-02-28 23:25 ` [Intel-wired-lan] [PATCH S17 01/17] ice: Calculate ITR increment based on direct calculation Anirudh Venkataramanan
@ 2019-02-28 23:25 ` Anirudh Venkataramanan
  2019-03-08  0:26   ` Bowers, AndrewX
  2019-02-28 23:25 ` [Intel-wired-lan] [PATCH S17 03/17] ice: Return configuration error without queue to disable Anirudh Venkataramanan
                   ` (14 subsequent siblings)
  16 siblings, 1 reply; 35+ messages in thread
From: Anirudh Venkataramanan @ 2019-02-28 23:25 UTC (permalink / raw)
  To: intel-wired-lan

This patch introduces a framework to store queue-specific information
in VSI queue contexts. Currently, the VSI queue context (represented by
struct ice_q_ctx) only has q_handle as a member. Future patches will
extend this structure to hold more queue-specific information.
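
At its core (a simplified standalone model, not the patch itself; the
real lookup is ice_get_lan_q_ctx() in the diff below, and MAX_TC stands
in for ICE_MAX_TRAFFIC_CLASS), the framework is a per-VSI, per-TC array
of queue contexts indexed by the software queue handle:

#include <stdio.h>
#include <stdlib.h>

#define MAX_TC 8	/* stands in for ICE_MAX_TRAFFIC_CLASS */

struct q_ctx {
	unsigned short q_handle;
};

struct vsi_ctx {
	unsigned short num_lan_q_entries[MAX_TC];
	struct q_ctx *lan_q_ctx[MAX_TC];
};

/* mirrors the bounds-checked lookup done by ice_get_lan_q_ctx() */
static struct q_ctx *get_lan_q_ctx(struct vsi_ctx *vsi, int tc,
				   unsigned short q_handle)
{
	if (!vsi->lan_q_ctx[tc] || q_handle >= vsi->num_lan_q_entries[tc])
		return NULL;
	return &vsi->lan_q_ctx[tc][q_handle];
}

int main(void)
{
	struct vsi_ctx vsi = { 0 };

	/* four LAN queues on TC 0, as ice_alloc_lan_q_ctx() would allocate */
	vsi.num_lan_q_entries[0] = 4;
	vsi.lan_q_ctx[0] = calloc(4, sizeof(struct q_ctx));

	printf("handle 2 -> %p\n", (void *)get_lan_q_ctx(&vsi, 0, 2));
	printf("handle 7 -> %p\n", (void *)get_lan_q_ctx(&vsi, 0, 7)); /* NULL */

	free(vsi.lan_q_ctx[0]);
	return 0;
}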

Signed-off-by: Anirudh Venkataramanan <anirudh.venkataramanan@intel.com>
---
 drivers/net/ethernet/intel/ice/ice_common.c      | 62 +++++++++++++--
 drivers/net/ethernet/intel/ice/ice_common.h      | 11 +--
 drivers/net/ethernet/intel/ice/ice_lib.c         | 99 ++++++++++++++----------
 drivers/net/ethernet/intel/ice/ice_sched.c       | 54 +++++++++++--
 drivers/net/ethernet/intel/ice/ice_switch.c      | 22 ++++++
 drivers/net/ethernet/intel/ice/ice_switch.h      |  9 +++
 drivers/net/ethernet/intel/ice/ice_virtchnl_pf.c |  4 +-
 7 files changed, 205 insertions(+), 56 deletions(-)

diff --git a/drivers/net/ethernet/intel/ice/ice_common.c b/drivers/net/ethernet/intel/ice/ice_common.c
index 2937c6be1aee..0a57b726b1f0 100644
--- a/drivers/net/ethernet/intel/ice/ice_common.c
+++ b/drivers/net/ethernet/intel/ice/ice_common.c
@@ -2790,11 +2790,36 @@ ice_set_ctx(u8 *src_ctx, u8 *dest_ctx, const struct ice_ctx_ele *ce_info)
 	return 0;
 }
 
+/**
+ * ice_get_lan_q_ctx - get the lan queue context for the given VSI and TC
+ * @hw: pointer to the hw struct
+ * @vsi_handle: software VSI handle
+ * @tc: tc number
+ * @q_handle: software queue handle
+ */
+static struct ice_q_ctx *
+ice_get_lan_q_ctx(struct ice_hw *hw, u16 vsi_handle, u8 tc, u16 q_handle)
+{
+	struct ice_vsi_ctx *vsi;
+	struct ice_q_ctx *q_ctx;
+
+	vsi = ice_get_vsi_ctx(hw, vsi_handle);
+	if (!vsi)
+		return NULL;
+	if (q_handle >= vsi->num_lan_q_entries[tc])
+		return NULL;
+	if (!vsi->lan_q_ctx[tc])
+		return NULL;
+	q_ctx = vsi->lan_q_ctx[tc];
+	return &q_ctx[q_handle];
+}
+
 /**
  * ice_ena_vsi_txq
  * @pi: port information structure
  * @vsi_handle: software VSI handle
  * @tc: TC number
+ * @q_handle: software queue handle
  * @num_qgrps: Number of added queue groups
  * @buf: list of queue groups to be added
  * @buf_size: size of buffer for indirect command
@@ -2803,12 +2828,13 @@ ice_set_ctx(u8 *src_ctx, u8 *dest_ctx, const struct ice_ctx_ele *ce_info)
  * This function adds one LAN queue
  */
 enum ice_status
-ice_ena_vsi_txq(struct ice_port_info *pi, u16 vsi_handle, u8 tc, u8 num_qgrps,
-		struct ice_aqc_add_tx_qgrp *buf, u16 buf_size,
+ice_ena_vsi_txq(struct ice_port_info *pi, u16 vsi_handle, u8 tc, u16 q_handle,
+		u8 num_qgrps, struct ice_aqc_add_tx_qgrp *buf, u16 buf_size,
 		struct ice_sq_cd *cd)
 {
 	struct ice_aqc_txsched_elem_data node = { 0 };
 	struct ice_sched_node *parent;
+	struct ice_q_ctx *q_ctx;
 	enum ice_status status;
 	struct ice_hw *hw;
 
@@ -2825,6 +2851,14 @@ ice_ena_vsi_txq(struct ice_port_info *pi, u16 vsi_handle, u8 tc, u8 num_qgrps,
 
 	mutex_lock(&pi->sched_lock);
 
+	q_ctx = ice_get_lan_q_ctx(hw, vsi_handle, tc, q_handle);
+	if (!q_ctx) {
+		ice_debug(hw, ICE_DBG_SCHED, "Enaq: invalid queue handle %d\n",
+			  q_handle);
+		status = ICE_ERR_PARAM;
+		goto ena_txq_exit;
+	}
+
 	/* find a parent node */
 	parent = ice_sched_get_free_qparent(pi, vsi_handle, tc,
 					    ICE_SCHED_NODE_OWNER_LAN);
@@ -2851,7 +2885,7 @@ ice_ena_vsi_txq(struct ice_port_info *pi, u16 vsi_handle, u8 tc, u8 num_qgrps,
 	/* add the LAN queue */
 	status = ice_aq_add_lan_txq(hw, num_qgrps, buf, buf_size, cd);
 	if (status) {
-		ice_debug(hw, ICE_DBG_SCHED, "enable Q %d failed %d\n",
+		ice_debug(hw, ICE_DBG_SCHED, "enable queue %d failed %d\n",
 			  le16_to_cpu(buf->txqs[0].txq_id),
 			  hw->adminq.sq_last_status);
 		goto ena_txq_exit;
@@ -2859,6 +2893,7 @@ ice_ena_vsi_txq(struct ice_port_info *pi, u16 vsi_handle, u8 tc, u8 num_qgrps,
 
 	node.node_teid = buf->txqs[0].q_teid;
 	node.data.elem_type = ICE_AQC_ELEM_TYPE_LEAF;
+	q_ctx->q_handle = q_handle;
 
 	/* add a leaf node into schduler tree queue layer */
 	status = ice_sched_add_node(pi, hw->num_tx_sched_layers - 1, &node);
@@ -2871,7 +2906,10 @@ ice_ena_vsi_txq(struct ice_port_info *pi, u16 vsi_handle, u8 tc, u8 num_qgrps,
 /**
  * ice_dis_vsi_txq
  * @pi: port information structure
+ * @vsi_handle: software VSI handle
+ * @tc: TC number
  * @num_queues: number of queues
+ * @q_handles: pointer to software queue handle array
  * @q_ids: pointer to the q_id array
  * @q_teids: pointer to queue node teids
  * @rst_src: if called due to reset, specifies the reset source
@@ -2881,12 +2919,14 @@ ice_ena_vsi_txq(struct ice_port_info *pi, u16 vsi_handle, u8 tc, u8 num_qgrps,
  * This function removes queues and their corresponding nodes in SW DB
  */
 enum ice_status
-ice_dis_vsi_txq(struct ice_port_info *pi, u8 num_queues, u16 *q_ids,
-		u32 *q_teids, enum ice_disq_rst_src rst_src, u16 vmvf_num,
+ice_dis_vsi_txq(struct ice_port_info *pi, u16 vsi_handle, u8 tc, u8 num_queues,
+		u16 *q_handles, u16 *q_ids, u32 *q_teids,
+		enum ice_disq_rst_src rst_src, u16 vmvf_num,
 		struct ice_sq_cd *cd)
 {
 	enum ice_status status = ICE_ERR_DOES_NOT_EXIST;
 	struct ice_aqc_dis_txq_item qg_list;
+	struct ice_q_ctx *q_ctx;
 	u16 i;
 
 	if (!pi || pi->port_state != ICE_SCHED_PORT_STATE_READY)
@@ -2909,6 +2949,17 @@ ice_dis_vsi_txq(struct ice_port_info *pi, u8 num_queues, u16 *q_ids,
 		node = ice_sched_find_node_by_teid(pi->root, q_teids[i]);
 		if (!node)
 			continue;
+		q_ctx = ice_get_lan_q_ctx(pi->hw, vsi_handle, tc, q_handles[i]);
+		if (!q_ctx) {
+			ice_debug(pi->hw, ICE_DBG_SCHED, "invalid queue handle%d\n",
+				  q_handles[i]);
+			continue;
+		}
+		if (q_ctx->q_handle != q_handles[i]) {
+			ice_debug(pi->hw, ICE_DBG_SCHED, "Err:handles %d %d\n",
+				  q_ctx->q_handle, q_handles[i]);
+			continue;
+		}
 		qg_list.parent_teid = node->info.parent_teid;
 		qg_list.num_qs = 1;
 		qg_list.q_id[0] = cpu_to_le16(q_ids[i]);
@@ -2919,6 +2970,7 @@ ice_dis_vsi_txq(struct ice_port_info *pi, u8 num_queues, u16 *q_ids,
 		if (status)
 			break;
 		ice_free_sched_node(pi, node);
+		q_ctx->q_handle = ICE_INVAL_Q_HANDLE;
 	}
 	mutex_unlock(&pi->sched_lock);
 	return status;
diff --git a/drivers/net/ethernet/intel/ice/ice_common.h b/drivers/net/ethernet/intel/ice/ice_common.h
index faefc45e4a1e..f1ddebf45231 100644
--- a/drivers/net/ethernet/intel/ice/ice_common.h
+++ b/drivers/net/ethernet/intel/ice/ice_common.h
@@ -99,15 +99,16 @@ ice_aq_set_port_id_led(struct ice_port_info *pi, bool is_orig_mode,
 		       struct ice_sq_cd *cd);
 
 enum ice_status
-ice_dis_vsi_txq(struct ice_port_info *pi, u8 num_queues, u16 *q_ids,
-		u32 *q_teids, enum ice_disq_rst_src rst_src, u16 vmvf_num,
-		struct ice_sq_cd *cmd_details);
+ice_dis_vsi_txq(struct ice_port_info *pi, u16 vsi_handle, u8 tc, u8 num_queues,
+		u16 *q_handle, u16 *q_ids, u32 *q_teids,
+		enum ice_disq_rst_src rst_src, u16 vmvf_num,
+		struct ice_sq_cd *cd);
 enum ice_status
 ice_cfg_vsi_lan(struct ice_port_info *pi, u16 vsi_handle, u8 tc_bitmap,
 		u16 *max_lanqs);
 enum ice_status
-ice_ena_vsi_txq(struct ice_port_info *pi, u16 vsi_handle, u8 tc, u8 num_qgrps,
-		struct ice_aqc_add_tx_qgrp *buf, u16 buf_size,
+ice_ena_vsi_txq(struct ice_port_info *pi, u16 vsi_handle, u8 tc, u16 q_handle,
+		u8 num_qgrps, struct ice_aqc_add_tx_qgrp *buf, u16 buf_size,
 		struct ice_sq_cd *cd);
 enum ice_status ice_replay_vsi(struct ice_hw *hw, u16 vsi_handle);
 void ice_replay_post(struct ice_hw *hw);
diff --git a/drivers/net/ethernet/intel/ice/ice_lib.c b/drivers/net/ethernet/intel/ice/ice_lib.c
index f31129e4e9cf..fa8ebd8a10ce 100644
--- a/drivers/net/ethernet/intel/ice/ice_lib.c
+++ b/drivers/net/ethernet/intel/ice/ice_lib.c
@@ -1715,8 +1715,8 @@ ice_vsi_cfg_txqs(struct ice_vsi *vsi, struct ice_ring **rings, int offset)
 			rings[q_idx]->tail =
 				pf->hw.hw_addr + QTX_COMM_DBELL(pf_q);
 			status = ice_ena_vsi_txq(vsi->port_info, vsi->idx, tc,
-						 num_q_grps, qg_buf, buf_len,
-						 NULL);
+						 i, num_q_grps, qg_buf,
+						 buf_len, NULL);
 			if (status) {
 				dev_err(&vsi->back->pdev->dev,
 					"Failed to set LAN Tx queue context, error: %d\n",
@@ -2033,10 +2033,10 @@ ice_vsi_stop_tx_rings(struct ice_vsi *vsi, enum ice_disq_rst_src rst_src,
 {
 	struct ice_pf *pf = vsi->back;
 	struct ice_hw *hw = &pf->hw;
+	int tc, q_idx = 0, err = 0;
+	u16 *q_ids, *q_handles, i;
 	enum ice_status status;
 	u32 *q_teids, val;
-	u16 *q_ids, i;
-	int err = 0;
 
 	if (vsi->num_txq > ICE_LAN_TXQ_MAX_QDIS)
 		return -EINVAL;
@@ -2053,50 +2053,71 @@ ice_vsi_stop_tx_rings(struct ice_vsi *vsi, enum ice_disq_rst_src rst_src,
 		goto err_alloc_q_ids;
 	}
 
-	/* set up the Tx queue list to be disabled */
-	ice_for_each_txq(vsi, i) {
-		u16 v_idx;
+	q_handles = devm_kcalloc(&pf->pdev->dev, vsi->num_txq,
+				 sizeof(*q_handles), GFP_KERNEL);
+	if (!q_handles) {
+		err = -ENOMEM;
+		goto err_alloc_q_handles;
+	}
 
-		if (!rings || !rings[i] || !rings[i]->q_vector) {
-			err = -EINVAL;
-			goto err_out;
-		}
+	/* set up the Tx queue list to be disabled for each enabled TC */
+	ice_for_each_traffic_class(tc) {
+		if (!(vsi->tc_cfg.ena_tc & BIT(tc)))
+			break;
+
+		for (i = 0; i < vsi->tc_cfg.tc_info[tc].qcount_tx; i++) {
+			u16 v_idx;
+
+			if (!rings || !rings[i] || !rings[i]->q_vector) {
+				err = -EINVAL;
+				goto err_out;
+			}
 
-		q_ids[i] = vsi->txq_map[i + offset];
-		q_teids[i] = rings[i]->txq_teid;
+			q_ids[i] = vsi->txq_map[q_idx + offset];
+			q_teids[i] = rings[q_idx]->txq_teid;
+			q_handles[i] = i;
 
-		/* clear cause_ena bit for disabled queues */
-		val = rd32(hw, QINT_TQCTL(rings[i]->reg_idx));
-		val &= ~QINT_TQCTL_CAUSE_ENA_M;
-		wr32(hw, QINT_TQCTL(rings[i]->reg_idx), val);
+			/* clear cause_ena bit for disabled queues */
+			val = rd32(hw, QINT_TQCTL(rings[i]->reg_idx));
+			val &= ~QINT_TQCTL_CAUSE_ENA_M;
+			wr32(hw, QINT_TQCTL(rings[i]->reg_idx), val);
 
-		/* software is expected to wait for 100 ns */
-		ndelay(100);
+			/* software is expected to wait for 100 ns */
+			ndelay(100);
 
-		/* trigger a software interrupt for the vector associated to
-		 * the queue to schedule NAPI handler
+			/* trigger a software interrupt for the vector
+			 * associated to the queue to schedule NAPI handler
+			 */
+			v_idx = rings[i]->q_vector->v_idx;
+			wr32(hw, GLINT_DYN_CTL(vsi->hw_base_vector + v_idx),
+			     GLINT_DYN_CTL_SWINT_TRIG_M |
+			     GLINT_DYN_CTL_INTENA_MSK_M);
+			q_idx++;
+		}
+		status = ice_dis_vsi_txq(vsi->port_info, vsi->idx, tc,
+					 vsi->num_txq, q_handles, q_ids,
+					 q_teids, rst_src, rel_vmvf_num, NULL);
+
+		/* if the disable queue command was exercised during an active
+		 * reset flow, ICE_ERR_RESET_ONGOING is returned. This is not
+		 * an error as the reset operation disables queues at the
+		 * hardware level anyway.
 		 */
-		v_idx = rings[i]->q_vector->v_idx;
-		wr32(hw, GLINT_DYN_CTL(vsi->hw_base_vector + v_idx),
-		     GLINT_DYN_CTL_SWINT_TRIG_M | GLINT_DYN_CTL_INTENA_MSK_M);
-	}
-	status = ice_dis_vsi_txq(vsi->port_info, vsi->num_txq, q_ids, q_teids,
-				 rst_src, rel_vmvf_num, NULL);
-	/* if the disable queue command was exercised during an active reset
-	 * flow, ICE_ERR_RESET_ONGOING is returned. This is not an error as
-	 * the reset operation disables queues at the hardware level anyway.
-	 */
-	if (status == ICE_ERR_RESET_ONGOING) {
-		dev_info(&pf->pdev->dev,
-			 "Reset in progress. LAN Tx queues already disabled\n");
-	} else if (status) {
-		dev_err(&pf->pdev->dev,
-			"Failed to disable LAN Tx queues, error: %d\n",
-			status);
-		err = -ENODEV;
+		if (status == ICE_ERR_RESET_ONGOING) {
+			dev_dbg(&pf->pdev->dev,
+				"Reset in progress. LAN Tx queues already disabled\n");
+		} else if (status) {
+			dev_err(&pf->pdev->dev,
+				"Failed to disable LAN Tx queues, error: %d\n",
+				status);
+			err = -ENODEV;
+		}
 	}
 
 err_out:
+	devm_kfree(&pf->pdev->dev, q_handles);
+
+err_alloc_q_handles:
 	devm_kfree(&pf->pdev->dev, q_ids);
 
 err_alloc_q_ids:
diff --git a/drivers/net/ethernet/intel/ice/ice_sched.c b/drivers/net/ethernet/intel/ice/ice_sched.c
index 124feaf0e730..8d49f83be7a5 100644
--- a/drivers/net/ethernet/intel/ice/ice_sched.c
+++ b/drivers/net/ethernet/intel/ice/ice_sched.c
@@ -532,6 +532,50 @@ ice_sched_suspend_resume_elems(struct ice_hw *hw, u8 num_nodes, u32 *node_teids,
 	return status;
 }
 
+/**
+ * ice_alloc_lan_q_ctx - allocate LAN queue contexts for the given VSI and TC
+ * @hw: pointer to the HW struct
+ * @vsi_handle: VSI handle
+ * @tc: TC number
+ * @new_numqs: number of queues
+ */
+static enum ice_status
+ice_alloc_lan_q_ctx(struct ice_hw *hw, u16 vsi_handle, u8 tc, u16 new_numqs)
+{
+	struct ice_vsi_ctx *vsi_ctx;
+	struct ice_q_ctx *q_ctx;
+
+	vsi_ctx = ice_get_vsi_ctx(hw, vsi_handle);
+	if (!vsi_ctx)
+		return ICE_ERR_PARAM;
+	/* allocate LAN queue contexts */
+	if (!vsi_ctx->lan_q_ctx[tc]) {
+		vsi_ctx->lan_q_ctx[tc] = devm_kcalloc(ice_hw_to_dev(hw),
+						      new_numqs,
+						      sizeof(*q_ctx),
+						      GFP_KERNEL);
+		if (!vsi_ctx->lan_q_ctx[tc])
+			return ICE_ERR_NO_MEMORY;
+		vsi_ctx->num_lan_q_entries[tc] = new_numqs;
+		return 0;
+	}
+	/* num queues are increased, update the queue contexts */
+	if (new_numqs > vsi_ctx->num_lan_q_entries[tc]) {
+		u16 prev_num = vsi_ctx->num_lan_q_entries[tc];
+
+		q_ctx = devm_kcalloc(ice_hw_to_dev(hw), new_numqs,
+				     sizeof(*q_ctx), GFP_KERNEL);
+		if (!q_ctx)
+			return ICE_ERR_NO_MEMORY;
+		memcpy(q_ctx, vsi_ctx->lan_q_ctx[tc],
+		       prev_num * sizeof(*q_ctx));
+		devm_kfree(ice_hw_to_dev(hw), vsi_ctx->lan_q_ctx[tc]);
+		vsi_ctx->lan_q_ctx[tc] = q_ctx;
+		vsi_ctx->num_lan_q_entries[tc] = new_numqs;
+	}
+	return 0;
+}
+
 /**
  * ice_sched_clear_agg - clears the aggregator related information
  * @hw: pointer to the hardware structure
@@ -1403,14 +1447,14 @@ ice_sched_update_vsi_child_nodes(struct ice_port_info *pi, u16 vsi_handle,
 	if (!vsi_ctx)
 		return ICE_ERR_PARAM;
 
-	if (owner == ICE_SCHED_NODE_OWNER_LAN)
-		prev_numqs = vsi_ctx->sched.max_lanq[tc];
-	else
-		return ICE_ERR_PARAM;
-
+	prev_numqs = vsi_ctx->sched.max_lanq[tc];
 	/* num queues are not changed or less than the previous number */
 	if (new_numqs <= prev_numqs)
 		return status;
+	status = ice_alloc_lan_q_ctx(hw, vsi_handle, tc, new_numqs);
+	if (status)
+		return status;
+
 	if (new_numqs)
 		ice_sched_calc_vsi_child_nodes(hw, new_numqs, new_num_nodes);
 	/* Keep the max number of queue configuration all the time. Update the
diff --git a/drivers/net/ethernet/intel/ice/ice_switch.c b/drivers/net/ethernet/intel/ice/ice_switch.c
index 87d87ae3f551..b7975971403d 100644
--- a/drivers/net/ethernet/intel/ice/ice_switch.c
+++ b/drivers/net/ethernet/intel/ice/ice_switch.c
@@ -328,6 +328,27 @@ ice_save_vsi_ctx(struct ice_hw *hw, u16 vsi_handle, struct ice_vsi_ctx *vsi)
 	hw->vsi_ctx[vsi_handle] = vsi;
 }
 
+/**
+ * ice_clear_vsi_q_ctx - clear VSI queue contexts for all TCs
+ * @hw: pointer to the hw struct
+ * @vsi_handle: VSI handle
+ */
+static void ice_clear_vsi_q_ctx(struct ice_hw *hw, u16 vsi_handle)
+{
+	struct ice_vsi_ctx *vsi;
+	u8 i;
+
+	vsi = ice_get_vsi_ctx(hw, vsi_handle);
+	if (!vsi)
+		return;
+	ice_for_each_traffic_class(i) {
+		if (vsi->lan_q_ctx[i]) {
+			devm_kfree(ice_hw_to_dev(hw), vsi->lan_q_ctx[i]);
+			vsi->lan_q_ctx[i] = NULL;
+		}
+	}
+}
+
 /**
  * ice_clear_vsi_ctx - clear the VSI context entry
  * @hw: pointer to the HW struct
@@ -341,6 +362,7 @@ static void ice_clear_vsi_ctx(struct ice_hw *hw, u16 vsi_handle)
 
 	vsi = ice_get_vsi_ctx(hw, vsi_handle);
 	if (vsi) {
+		ice_clear_vsi_q_ctx(hw, vsi_handle);
 		devm_kfree(ice_hw_to_dev(hw), vsi);
 		hw->vsi_ctx[vsi_handle] = NULL;
 	}
diff --git a/drivers/net/ethernet/intel/ice/ice_switch.h b/drivers/net/ethernet/intel/ice/ice_switch.h
index 64a2fecfce20..88eb4be4d5a4 100644
--- a/drivers/net/ethernet/intel/ice/ice_switch.h
+++ b/drivers/net/ethernet/intel/ice/ice_switch.h
@@ -9,6 +9,12 @@
 #define ICE_SW_CFG_MAX_BUF_LEN 2048
 #define ICE_DFLT_VSI_INVAL 0xff
 #define ICE_VSI_INVAL_ID 0xffff
+#define ICE_INVAL_Q_HANDLE 0xFFFF
+
+/* VSI queue context structure */
+struct ice_q_ctx {
+	u16  q_handle;
+};
 
 /* VSI context structure for add/get/update/free operations */
 struct ice_vsi_ctx {
@@ -20,6 +27,8 @@ struct ice_vsi_ctx {
 	struct ice_sched_vsi_info sched;
 	u8 alloc_from_pool;
 	u8 vf_num;
+	u16 num_lan_q_entries[ICE_MAX_TRAFFIC_CLASS];
+	struct ice_q_ctx *lan_q_ctx[ICE_MAX_TRAFFIC_CLASS];
 };
 
 enum ice_sw_fwd_act_type {
diff --git a/drivers/net/ethernet/intel/ice/ice_virtchnl_pf.c b/drivers/net/ethernet/intel/ice/ice_virtchnl_pf.c
index e562ea15b79b..789b6f10b381 100644
--- a/drivers/net/ethernet/intel/ice/ice_virtchnl_pf.c
+++ b/drivers/net/ethernet/intel/ice/ice_virtchnl_pf.c
@@ -996,8 +996,8 @@ static bool ice_reset_vf(struct ice_vf *vf, bool is_vflr)
 		/* Call Disable LAN Tx queue AQ call even when queues are not
 		 * enabled. This is needed for successful completiom of VFR
 		 */
-		ice_dis_vsi_txq(vsi->port_info, 0, NULL, NULL, ICE_VF_RESET,
-				vf->vf_id, NULL);
+		ice_dis_vsi_txq(vsi->port_info, vsi->idx, 0, 0, NULL, NULL,
+				NULL, ICE_VF_RESET, vf->vf_id, NULL);
 	}
 
 	hw = &pf->hw;
-- 
2.14.5



* [Intel-wired-lan] [PATCH S17 03/17] ice: Return configuration error without queue to disable
  2019-02-28 23:25 [Intel-wired-lan] [PATCH S17 00/17] Implementation updates for ice Anirudh Venkataramanan
  2019-02-28 23:25 ` [Intel-wired-lan] [PATCH S17 01/17] ice: Calculate ITR increment based on direct calculation Anirudh Venkataramanan
  2019-02-28 23:25 ` [Intel-wired-lan] [PATCH S17 02/17] ice: Create framework for VSI queue context Anirudh Venkataramanan
@ 2019-02-28 23:25 ` Anirudh Venkataramanan
  2019-03-08  0:27   ` Bowers, AndrewX
  2019-02-28 23:25 ` [Intel-wired-lan] [PATCH S17 04/17] ice: Resolve static analysis reported issue Anirudh Venkataramanan
                   ` (13 subsequent siblings)
  16 siblings, 1 reply; 35+ messages in thread
From: Anirudh Venkataramanan @ 2019-02-28 23:25 UTC (permalink / raw)
  To: intel-wired-lan

From: Akeem G Abodunrin <akeem.g.abodunrin@intel.com>

If there is no queue to disable, return the appropriate configuration
error earlier, without acquiring the lock.

Signed-off-by: Akeem G Abodunrin <akeem.g.abodunrin@intel.com>
Signed-off-by: Anirudh Venkataramanan <anirudh.venkataramanan@intel.com>
---
 drivers/net/ethernet/intel/ice/ice_common.c | 17 ++++++++++-------
 1 file changed, 10 insertions(+), 7 deletions(-)

diff --git a/drivers/net/ethernet/intel/ice/ice_common.c b/drivers/net/ethernet/intel/ice/ice_common.c
index 0a57b726b1f0..9e2171b93d77 100644
--- a/drivers/net/ethernet/intel/ice/ice_common.c
+++ b/drivers/net/ethernet/intel/ice/ice_common.c
@@ -2932,14 +2932,17 @@ ice_dis_vsi_txq(struct ice_port_info *pi, u16 vsi_handle, u8 tc, u8 num_queues,
 	if (!pi || pi->port_state != ICE_SCHED_PORT_STATE_READY)
 		return ICE_ERR_CFG;
 
-	/* if queue is disabled already yet the disable queue command has to be
-	 * sent to complete the VF reset, then call ice_aq_dis_lan_txq without
-	 * any queue information
-	 */
 
-	if (!num_queues && rst_src)
-		return ice_aq_dis_lan_txq(pi->hw, 0, NULL, 0, rst_src, vmvf_num,
-					  NULL);
+	if (!num_queues) {
+		/* if queue is disabled already yet the disable queue command
+		 * has to be sent to complete the VF reset, then call
+		 * ice_aq_dis_lan_txq without any queue information
+		 */
+		if (rst_src)
+			return ice_aq_dis_lan_txq(pi->hw, 0, NULL, 0, rst_src,
+						  vmvf_num, NULL);
+		return ICE_ERR_CFG;
+	}
 
 	mutex_lock(&pi->sched_lock);
 
-- 
2.14.5



* [Intel-wired-lan] [PATCH S17 04/17] ice: Resolve static analysis reported issue
  2019-02-28 23:25 [Intel-wired-lan] [PATCH S17 00/17] Implementation updates for ice Anirudh Venkataramanan
                   ` (2 preceding siblings ...)
  2019-02-28 23:25 ` [Intel-wired-lan] [PATCH S17 03/17] ice: Return configuration error without queue to disable Anirudh Venkataramanan
@ 2019-02-28 23:25 ` Anirudh Venkataramanan
  2019-03-08  0:27   ` Bowers, AndrewX
  2019-02-28 23:25 ` [Intel-wired-lan] [PATCH S17 05/17] ice: Reduce scope of variable in ice_vsi_cfg_rxqs Anirudh Venkataramanan
                   ` (12 subsequent siblings)
  16 siblings, 1 reply; 35+ messages in thread
From: Anirudh Venkataramanan @ 2019-02-28 23:25 UTC (permalink / raw)
  To: intel-wired-lan

From: Bruce Allan <bruce.w.allan@intel.com>

Static analysis points out that the default case in the switch statement
in ice_get_itr_intrl_gran() is an infeasible condition, making the
default case unreachable. Remove it, and since the function can then
only return success, change it to return void and update its only
caller accordingly.

Signed-off-by: Bruce Allan <bruce.w.allan@intel.com>
Signed-off-by: Anirudh Venkataramanan <anirudh.venkataramanan@intel.com>
---
[Anirudh Venkataramanan <anirudh.venkataramanan@intel.com> cleaned up commit message]
---
 drivers/net/ethernet/intel/ice/ice_common.c | 12 ++----------
 1 file changed, 2 insertions(+), 10 deletions(-)

diff --git a/drivers/net/ethernet/intel/ice/ice_common.c b/drivers/net/ethernet/intel/ice/ice_common.c
index 9e2171b93d77..977e2a50dc93 100644
--- a/drivers/net/ethernet/intel/ice/ice_common.c
+++ b/drivers/net/ethernet/intel/ice/ice_common.c
@@ -647,7 +647,7 @@ void ice_output_fw_log(struct ice_hw *hw, struct ice_aq_desc *desc, void *buf)
  * Determines the itr/intrl granularities based on the maximum aggregate
  * bandwidth according to the device's configuration during power-on.
  */
-static enum ice_status ice_get_itr_intrl_gran(struct ice_hw *hw)
+static void ice_get_itr_intrl_gran(struct ice_hw *hw)
 {
 	u8 max_agg_bw = (rd32(hw, GL_PWR_MODE_CTL) &
 			 GL_PWR_MODE_CTL_CAR_MAX_BW_M) >>
@@ -664,13 +664,7 @@ static enum ice_status ice_get_itr_intrl_gran(struct ice_hw *hw)
 		hw->itr_gran = ICE_ITR_GRAN_MAX_25;
 		hw->intrl_gran = ICE_INTRL_GRAN_MAX_25;
 		break;
-	default:
-		ice_debug(hw, ICE_DBG_INIT,
-			  "Failed to determine itr/intrl granularity\n");
-		return ICE_ERR_CFG;
 	}
-
-	return 0;
 }
 
 /**
@@ -697,9 +691,7 @@ enum ice_status ice_init_hw(struct ice_hw *hw)
 	if (status)
 		return status;
 
-	status = ice_get_itr_intrl_gran(hw);
-	if (status)
-		return status;
+	ice_get_itr_intrl_gran(hw);
 
 	status = ice_init_all_ctrlq(hw);
 	if (status)
-- 
2.14.5



* [Intel-wired-lan] [PATCH S17 05/17] ice: Reduce scope of variable in ice_vsi_cfg_rxqs
  2019-02-28 23:25 [Intel-wired-lan] [PATCH S17 00/17] Implementation updates for ice Anirudh Venkataramanan
                   ` (3 preceding siblings ...)
  2019-02-28 23:25 ` [Intel-wired-lan] [PATCH S17 04/17] ice: Resolve static analysis reported issue Anirudh Venkataramanan
@ 2019-02-28 23:25 ` Anirudh Venkataramanan
  2019-03-08  0:28   ` Bowers, AndrewX
  2019-02-28 23:25 ` [Intel-wired-lan] [PATCH S17 06/17] ice: Validate ring existence and its q_vector per VSI Anirudh Venkataramanan
                   ` (11 subsequent siblings)
  16 siblings, 1 reply; 35+ messages in thread
From: Anirudh Venkataramanan @ 2019-02-28 23:25 UTC (permalink / raw)
  To: intel-wired-lan

From: Brett Creeley <brett.creeley@intel.com>

Reduce the scope of the variable 'err' to inside the for loop instead
of using it as a second loop condition. Also, while here, improve the
error message printed when we fail to configure an Rx queue.

Signed-off-by: Brett Creeley <brett.creeley@intel.com>
Signed-off-by: Anirudh Venkataramanan <anirudh.venkataramanan@intel.com>
---
 drivers/net/ethernet/intel/ice/ice_lib.c | 18 +++++++++++-------
 1 file changed, 11 insertions(+), 7 deletions(-)

diff --git a/drivers/net/ethernet/intel/ice/ice_lib.c b/drivers/net/ethernet/intel/ice/ice_lib.c
index fa8ebd8a10ce..61bb9e92f6ce 100644
--- a/drivers/net/ethernet/intel/ice/ice_lib.c
+++ b/drivers/net/ethernet/intel/ice/ice_lib.c
@@ -1641,7 +1641,6 @@ int ice_vsi_kill_vlan(struct ice_vsi *vsi, u16 vid)
  */
 int ice_vsi_cfg_rxqs(struct ice_vsi *vsi)
 {
-	int err = 0;
 	u16 i;
 
 	if (vsi->type == ICE_VSI_VF)
@@ -1656,14 +1655,19 @@ int ice_vsi_cfg_rxqs(struct ice_vsi *vsi)
 	vsi->rx_buf_len = ICE_RXBUF_2048;
 setup_rings:
 	/* set up individual rings */
-	for (i = 0; i < vsi->num_rxq && !err; i++)
-		err = ice_setup_rx_ctx(vsi->rx_rings[i]);
+	for (i = 0; i < vsi->num_rxq; i++) {
+		int err;
 
-	if (err) {
-		dev_err(&vsi->back->pdev->dev, "ice_setup_rx_ctx failed\n");
-		return -EIO;
+		err = ice_setup_rx_ctx(vsi->rx_rings[i]);
+		if (err) {
+			dev_err(&vsi->back->pdev->dev,
+				"ice_setup_rx_ctx failed for RxQ %d, err %d\n",
+				i, err);
+			return err;
+		}
 	}
-	return err;
+
+	return 0;
 }
 
 /**
-- 
2.14.5



* [Intel-wired-lan] [PATCH S17 06/17] ice: Validate ring existence and its q_vector per VSI
  2019-02-28 23:25 [Intel-wired-lan] [PATCH S17 00/17] Implementation updates for ice Anirudh Venkataramanan
                   ` (4 preceding siblings ...)
  2019-02-28 23:25 ` [Intel-wired-lan] [PATCH S17 05/17] ice: Reduce scope of variable in ice_vsi_cfg_rxqs Anirudh Venkataramanan
@ 2019-02-28 23:25 ` Anirudh Venkataramanan
  2019-03-08  0:28   ` Bowers, AndrewX
  2019-02-28 23:25 ` [Intel-wired-lan] [PATCH S17 07/17] ice: Use ice_for_each_q_vector macro where possible Anirudh Venkataramanan
                   ` (10 subsequent siblings)
  16 siblings, 1 reply; 35+ messages in thread
From: Anirudh Venkataramanan @ 2019-02-28 23:25 UTC (permalink / raw)
  To: intel-wired-lan

From: Maciej Fijalkowski <maciej.fijalkowski@intel.com>

When stopping Tx rings, we use 'i' as a ring array index to check
whether the ice_ring exists and has a q_vector assigned. This only
checks rings within a given TC, but we need to go through every ring
in the VSI. Use 'q_idx' instead.

Signed-off-by: Maciej Fijalkowski <maciej.fijalkowski@intel.com>
Signed-off-by: Anirudh Venkataramanan <anirudh.venkataramanan@intel.com>
---
[Anirudh Venkataramanan <anirudh.venkataramanan@intel.com> cleaned up commit message]
---
 drivers/net/ethernet/intel/ice/ice_lib.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/drivers/net/ethernet/intel/ice/ice_lib.c b/drivers/net/ethernet/intel/ice/ice_lib.c
index 61bb9e92f6ce..57b2873a6123 100644
--- a/drivers/net/ethernet/intel/ice/ice_lib.c
+++ b/drivers/net/ethernet/intel/ice/ice_lib.c
@@ -2072,7 +2072,8 @@ ice_vsi_stop_tx_rings(struct ice_vsi *vsi, enum ice_disq_rst_src rst_src,
 		for (i = 0; i < vsi->tc_cfg.tc_info[tc].qcount_tx; i++) {
 			u16 v_idx;
 
-			if (!rings || !rings[i] || !rings[i]->q_vector) {
+			if (!rings || !rings[q_idx] ||
+			    !rings[q_idx]->q_vector) {
 				err = -EINVAL;
 				goto err_out;
 			}
-- 
2.14.5



* [Intel-wired-lan] [PATCH S17 07/17] ice: Use ice_for_each_q_vector macro where possible
  2019-02-28 23:25 [Intel-wired-lan] [PATCH S17 00/17] Implementation updates for ice Anirudh Venkataramanan
                   ` (5 preceding siblings ...)
  2019-02-28 23:25 ` [Intel-wired-lan] [PATCH S17 06/17] ice: Validate ring existence and its q_vector per VSI Anirudh Venkataramanan
@ 2019-02-28 23:25 ` Anirudh Venkataramanan
  2019-03-08  0:29   ` Bowers, AndrewX
  2019-02-28 23:25 ` [Intel-wired-lan] [PATCH S17 08/17] ice: Add 52 byte RSS hash key support Anirudh Venkataramanan
                   ` (9 subsequent siblings)
  16 siblings, 1 reply; 35+ messages in thread
From: Anirudh Venkataramanan @ 2019-02-28 23:25 UTC (permalink / raw)
  To: intel-wired-lan

From: Brett Creeley <brett.creeley@intel.com>

There are many places in the code where we do the following:

for (i = 0; i < vsi->num_q_vectors; i++)

Instead use the macro mentioned in the commit title:

ice_for_each_q_vector(vsi, i)
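
For reference, the macro itself lives in ice.h and is presumably defined
along these lines (check the header for the authoritative version):

#define ice_for_each_q_vector(vsi, i) \
	for ((i) = 0; (i) < (vsi)->num_q_vectors; (i)++)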

Signed-off-by: Brett Creeley <brett.creeley@intel.com>
Signed-off-by: Anirudh Venkataramanan <anirudh.venkataramanan@intel.com>
---
 drivers/net/ethernet/intel/ice/ice_lib.c  |  6 +++---
 drivers/net/ethernet/intel/ice/ice_main.c | 10 +++++-----
 2 files changed, 8 insertions(+), 8 deletions(-)

diff --git a/drivers/net/ethernet/intel/ice/ice_lib.c b/drivers/net/ethernet/intel/ice/ice_lib.c
index 57b2873a6123..e75d8c4fadc6 100644
--- a/drivers/net/ethernet/intel/ice/ice_lib.c
+++ b/drivers/net/ethernet/intel/ice/ice_lib.c
@@ -1054,7 +1054,7 @@ void ice_vsi_free_q_vectors(struct ice_vsi *vsi)
 {
 	int v_idx;
 
-	for (v_idx = 0; v_idx < vsi->num_q_vectors; v_idx++)
+	ice_for_each_q_vector(vsi, v_idx)
 		ice_free_q_vector(vsi, v_idx);
 }
 
@@ -2409,7 +2409,7 @@ void ice_vsi_free_irq(struct ice_vsi *vsi)
 			return;
 
 		vsi->irqs_ready = false;
-		for (i = 0; i < vsi->num_q_vectors; i++) {
+		ice_for_each_q_vector(vsi, i) {
 			u16 vector = i + base;
 			int irq_num;
 
@@ -2633,7 +2633,7 @@ void ice_vsi_dis_irq(struct ice_vsi *vsi)
 			wr32(hw, GLINT_DYN_CTL(i), 0);
 
 		ice_flush(hw);
-		for (i = 0; i < vsi->num_q_vectors; i++)
+		ice_for_each_q_vector(vsi, i)
 			synchronize_irq(pf->msix_entries[i + base].vector);
 	}
 }
diff --git a/drivers/net/ethernet/intel/ice/ice_main.c b/drivers/net/ethernet/intel/ice/ice_main.c
index 94105f0a0b48..260ab69c2c00 100644
--- a/drivers/net/ethernet/intel/ice/ice_main.c
+++ b/drivers/net/ethernet/intel/ice/ice_main.c
@@ -1338,7 +1338,7 @@ static int ice_vsi_ena_irq(struct ice_vsi *vsi)
 	if (test_bit(ICE_FLAG_MSIX_ENA, pf->flags)) {
 		int i;
 
-		for (i = 0; i < vsi->num_q_vectors; i++)
+		ice_for_each_q_vector(vsi, i)
 			ice_irq_dynamic_ena(hw, vsi, vsi->q_vectors[i]);
 	}
 
@@ -1705,7 +1705,7 @@ void ice_napi_del(struct ice_vsi *vsi)
 	if (!vsi->netdev)
 		return;
 
-	for (v_idx = 0; v_idx < vsi->num_q_vectors; v_idx++)
+	ice_for_each_q_vector(vsi, v_idx)
 		netif_napi_del(&vsi->q_vectors[v_idx]->napi);
 }
 
@@ -1724,7 +1724,7 @@ static void ice_napi_add(struct ice_vsi *vsi)
 	if (!vsi->netdev)
 		return;
 
-	for (v_idx = 0; v_idx < vsi->num_q_vectors; v_idx++)
+	ice_for_each_q_vector(vsi, v_idx)
 		netif_napi_add(vsi->netdev, &vsi->q_vectors[v_idx]->napi,
 			       ice_napi_poll, NAPI_POLL_WEIGHT);
 }
@@ -2960,7 +2960,7 @@ static void ice_napi_enable_all(struct ice_vsi *vsi)
 	if (!vsi->netdev)
 		return;
 
-	for (q_idx = 0; q_idx < vsi->num_q_vectors; q_idx++) {
+	ice_for_each_q_vector(vsi, q_idx)  {
 		struct ice_q_vector *q_vector = vsi->q_vectors[q_idx];
 
 		if (q_vector->rx.ring || q_vector->tx.ring)
@@ -3334,7 +3334,7 @@ static void ice_napi_disable_all(struct ice_vsi *vsi)
 	if (!vsi->netdev)
 		return;
 
-	for (q_idx = 0; q_idx < vsi->num_q_vectors; q_idx++) {
+	ice_for_each_q_vector(vsi, q_idx) {
 		struct ice_q_vector *q_vector = vsi->q_vectors[q_idx];
 
 		if (q_vector->rx.ring || q_vector->tx.ring)
-- 
2.14.5



* [Intel-wired-lan] [PATCH S17 08/17] ice: Add 52 byte RSS hash key support
  2019-02-28 23:25 [Intel-wired-lan] [PATCH S17 00/17] Implementation updates for ice Anirudh Venkataramanan
                   ` (6 preceding siblings ...)
  2019-02-28 23:25 ` [Intel-wired-lan] [PATCH S17 07/17] ice: Use ice_for_each_q_vector macro where possible Anirudh Venkataramanan
@ 2019-02-28 23:25 ` Anirudh Venkataramanan
  2019-03-08  0:29   ` Bowers, AndrewX
  2019-02-28 23:25 ` [Intel-wired-lan] [PATCH S17 09/17] ice: Add ability to update rx-usecs-high Anirudh Venkataramanan
                   ` (8 subsequent siblings)
  16 siblings, 1 reply; 35+ messages in thread
From: Anirudh Venkataramanan @ 2019-02-28 23:25 UTC (permalink / raw)
  To: intel-wired-lan

From: Paul Greenwalt <paul.greenwalt@intel.com>

Add support for setting a 52 byte RSS hash key.
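
For context, the 52 bytes are simply the sum of the two admin queue key
sizes used below: the 0x28 (40 byte) standard RSS key plus the 0xC
(12 byte) extended hash key, i.e. 40 + 12 = 52, which is what the new
ICE_GET_SET_RSS_KEY_EXTEND_KEY_SIZE macro expresses.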

Signed-off-by: Paul Greenwalt <paul.greenwalt@intel.com>
Signed-off-by: Anirudh Venkataramanan <anirudh.venkataramanan@intel.com>
---
 drivers/net/ethernet/intel/ice/ice_adminq_cmd.h |  3 +++
 drivers/net/ethernet/intel/ice/ice_lib.c        | 12 +++++-------
 2 files changed, 8 insertions(+), 7 deletions(-)

diff --git a/drivers/net/ethernet/intel/ice/ice_adminq_cmd.h b/drivers/net/ethernet/intel/ice/ice_adminq_cmd.h
index 583f92d4db4c..6ef083002f5b 100644
--- a/drivers/net/ethernet/intel/ice/ice_adminq_cmd.h
+++ b/drivers/net/ethernet/intel/ice/ice_adminq_cmd.h
@@ -1291,6 +1291,9 @@ struct ice_aqc_get_set_rss_key {
 
 #define ICE_AQC_GET_SET_RSS_KEY_DATA_RSS_KEY_SIZE	0x28
 #define ICE_AQC_GET_SET_RSS_KEY_DATA_HASH_KEY_SIZE	0xC
+#define ICE_GET_SET_RSS_KEY_EXTEND_KEY_SIZE \
+				(ICE_AQC_GET_SET_RSS_KEY_DATA_RSS_KEY_SIZE + \
+				 ICE_AQC_GET_SET_RSS_KEY_DATA_HASH_KEY_SIZE)
 
 struct ice_aqc_get_set_rss_keys {
 	u8 standard_rss_key[ICE_AQC_GET_SET_RSS_KEY_DATA_RSS_KEY_SIZE];
diff --git a/drivers/net/ethernet/intel/ice/ice_lib.c b/drivers/net/ethernet/intel/ice/ice_lib.c
index e75d8c4fadc6..982a3a9e9b8d 100644
--- a/drivers/net/ethernet/intel/ice/ice_lib.c
+++ b/drivers/net/ethernet/intel/ice/ice_lib.c
@@ -1394,7 +1394,6 @@ int ice_vsi_manage_rss_lut(struct ice_vsi *vsi, bool ena)
  */
 static int ice_vsi_cfg_rss_lut_key(struct ice_vsi *vsi)
 {
-	u8 seed[ICE_AQC_GET_SET_RSS_KEY_DATA_RSS_KEY_SIZE];
 	struct ice_aqc_get_set_rss_keys *key;
 	struct ice_pf *pf = vsi->back;
 	enum ice_status status;
@@ -1429,13 +1428,12 @@ static int ice_vsi_cfg_rss_lut_key(struct ice_vsi *vsi)
 	}
 
 	if (vsi->rss_hkey_user)
-		memcpy(seed, vsi->rss_hkey_user,
-		       ICE_AQC_GET_SET_RSS_KEY_DATA_RSS_KEY_SIZE);
+		memcpy(key,
+		       (struct ice_aqc_get_set_rss_keys *)vsi->rss_hkey_user,
+		       ICE_GET_SET_RSS_KEY_EXTEND_KEY_SIZE);
 	else
-		netdev_rss_key_fill((void *)seed,
-				    ICE_AQC_GET_SET_RSS_KEY_DATA_RSS_KEY_SIZE);
-	memcpy(&key->standard_rss_key, seed,
-	       ICE_AQC_GET_SET_RSS_KEY_DATA_RSS_KEY_SIZE);
+		netdev_rss_key_fill((void *)key,
+				    ICE_GET_SET_RSS_KEY_EXTEND_KEY_SIZE);
 
 	status = ice_aq_set_rss_key(&pf->hw, vsi->idx, key);
 
-- 
2.14.5



* [Intel-wired-lan] [PATCH S17 09/17] ice: Add ability to update rx-usecs-high
  2019-02-28 23:25 [Intel-wired-lan] [PATCH S17 00/17] Implementation updates for ice Anirudh Venkataramanan
                   ` (7 preceding siblings ...)
  2019-02-28 23:25 ` [Intel-wired-lan] [PATCH S17 08/17] ice: Add 52 byte RSS hash key support Anirudh Venkataramanan
@ 2019-02-28 23:25 ` Anirudh Venkataramanan
  2019-03-08  0:33   ` Bowers, AndrewX
  2019-02-28 23:25 ` [Intel-wired-lan] [PATCH S17 10/17] ice: Remove unnecessary wait when disabling/enabling Rx queues Anirudh Venkataramanan
                   ` (7 subsequent siblings)
  16 siblings, 1 reply; 35+ messages in thread
From: Anirudh Venkataramanan @ 2019-02-28 23:25 UTC (permalink / raw)
  To: intel-wired-lan

From: Brett Creeley <brett.creeley@intel.com>

Currently the driver accepts rx-usecs-high values, but when querying
the device for rx-usecs-high the value does not stick, because support
for it was never actually implemented. Add code to allow the user to
change rx-usecs-high and use it to set the q_vector's intrl value.
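
For example (interface name hypothetical), a user should then be able
to run "ethtool -C eth0 rx-usecs-high 50" and see the value reported
back by "ethtool -c eth0", provided the requested value is 0 or within
the [intrl_gran, ICE_MAX_INTRL] range enforced below.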

Signed-off-by: Brett Creeley <brett.creeley@intel.com>
Signed-off-by: Anirudh Venkataramanan <anirudh.venkataramanan@intel.com>
---
 drivers/net/ethernet/intel/ice/ice_ethtool.c | 31 +++++++++++++++++++++++++++-
 drivers/net/ethernet/intel/ice/ice_lib.c     |  2 +-
 drivers/net/ethernet/intel/ice/ice_lib.h     |  1 +
 drivers/net/ethernet/intel/ice/ice_txrx.h    |  1 +
 4 files changed, 33 insertions(+), 2 deletions(-)

diff --git a/drivers/net/ethernet/intel/ice/ice_ethtool.c b/drivers/net/ethernet/intel/ice/ice_ethtool.c
index 64a4c4456ba0..f995ed599cd9 100644
--- a/drivers/net/ethernet/intel/ice/ice_ethtool.c
+++ b/drivers/net/ethernet/intel/ice/ice_ethtool.c
@@ -2228,12 +2228,18 @@ static int
 ice_get_rc_coalesce(struct ethtool_coalesce *ec, enum ice_container_type c_type,
 		    struct ice_ring_container *rc)
 {
-	struct ice_pf *pf = rc->ring->vsi->back;
+	struct ice_pf *pf;
+
+	if (!rc->ring)
+		return -EINVAL;
+
+	pf = rc->ring->vsi->back;
 
 	switch (c_type) {
 	case ICE_RX_CONTAINER:
 		ec->use_adaptive_rx_coalesce = ITR_IS_DYNAMIC(rc->itr_setting);
 		ec->rx_coalesce_usecs = rc->itr_setting & ~ICE_ITR_DYNAMIC;
+		ec->rx_coalesce_usecs_high = rc->ring->q_vector->intrl;
 		break;
 	case ICE_TX_CONTAINER:
 		ec->use_adaptive_tx_coalesce = ITR_IS_DYNAMIC(rc->itr_setting);
@@ -2342,6 +2348,23 @@ ice_set_rc_coalesce(enum ice_container_type c_type, struct ethtool_coalesce *ec,
 
 	switch (c_type) {
 	case ICE_RX_CONTAINER:
+		if (ec->rx_coalesce_usecs_high > ICE_MAX_INTRL ||
+		    (ec->rx_coalesce_usecs_high &&
+		     ec->rx_coalesce_usecs_high < pf->hw.intrl_gran)) {
+			netdev_info(vsi->netdev,
+				    "Invalid value, rx-usecs-high valid values are 0 (disabled), %d-%d\n",
+				    pf->hw.intrl_gran, ICE_MAX_INTRL);
+			return -EINVAL;
+		}
+
+		if (ec->rx_coalesce_usecs_high != rc->ring->q_vector->intrl) {
+			rc->ring->q_vector->intrl = ec->rx_coalesce_usecs_high;
+			wr32(&pf->hw, GLINT_RATE(vsi->hw_base_vector +
+						 rc->ring->q_vector->v_idx),
+			     ice_intrl_usec_to_reg(ec->rx_coalesce_usecs_high,
+						   pf->hw.intrl_gran));
+		}
+
 		if (ec->rx_coalesce_usecs != itr_setting &&
 		    ec->use_adaptive_rx_coalesce) {
 			netdev_info(vsi->netdev,
@@ -2364,6 +2387,12 @@ ice_set_rc_coalesce(enum ice_container_type c_type, struct ethtool_coalesce *ec,
 		}
 		break;
 	case ICE_TX_CONTAINER:
+		if (ec->tx_coalesce_usecs_high) {
+			netdev_info(vsi->netdev,
+				    "setting tx-usecs-high is not supported\n");
+			return -EINVAL;
+		}
+
 		if (ec->tx_coalesce_usecs != itr_setting &&
 		    ec->use_adaptive_tx_coalesce) {
 			netdev_info(vsi->netdev,
diff --git a/drivers/net/ethernet/intel/ice/ice_lib.c b/drivers/net/ethernet/intel/ice/ice_lib.c
index 982a3a9e9b8d..4c6ecc25aaa0 100644
--- a/drivers/net/ethernet/intel/ice/ice_lib.c
+++ b/drivers/net/ethernet/intel/ice/ice_lib.c
@@ -1764,7 +1764,7 @@ int ice_vsi_cfg_lan_txqs(struct ice_vsi *vsi)
  * This function converts a decimal interrupt rate limit in usecs to the format
  * expected by firmware.
  */
-static u32 ice_intrl_usec_to_reg(u8 intrl, u8 gran)
+u32 ice_intrl_usec_to_reg(u8 intrl, u8 gran)
 {
 	u32 val = intrl / gran;
 
diff --git a/drivers/net/ethernet/intel/ice/ice_lib.h b/drivers/net/ethernet/intel/ice/ice_lib.h
index 714ace077796..a91d3553cc89 100644
--- a/drivers/net/ethernet/intel/ice/ice_lib.h
+++ b/drivers/net/ethernet/intel/ice/ice_lib.h
@@ -80,4 +80,5 @@ void ice_vsi_free_tx_rings(struct ice_vsi *vsi);
 
 int ice_vsi_manage_rss_lut(struct ice_vsi *vsi, bool ena);
 
+u32 ice_intrl_usec_to_reg(u8 intrl, u8 gran);
 #endif /* !_ICE_LIB_H_ */
diff --git a/drivers/net/ethernet/intel/ice/ice_txrx.h b/drivers/net/ethernet/intel/ice/ice_txrx.h
index c75d9fd12a68..66e05032ee56 100644
--- a/drivers/net/ethernet/intel/ice/ice_txrx.h
+++ b/drivers/net/ethernet/intel/ice/ice_txrx.h
@@ -142,6 +142,7 @@ enum ice_rx_dtype {
 #define ICE_ITR_ADAPTIVE_BULK		0x0000
 
 #define ICE_DFLT_INTRL	0
+#define ICE_MAX_INTRL	236
 
 /* Legacy or Advanced Mode Queue */
 #define ICE_TX_ADVANCED	0
-- 
2.14.5



* [Intel-wired-lan] [PATCH S17 10/17] ice: Remove unnecessary wait when disabling/enabling Rx queues
  2019-02-28 23:25 [Intel-wired-lan] [PATCH S17 00/17] Implementation updates for ice Anirudh Venkataramanan
                   ` (8 preceding siblings ...)
  2019-02-28 23:25 ` [Intel-wired-lan] [PATCH S17 09/17] ice: Add ability to update rx-usecs-high Anirudh Venkataramanan
@ 2019-02-28 23:25 ` Anirudh Venkataramanan
  2019-03-08  0:34   ` Bowers, AndrewX
  2019-02-28 23:25 ` [Intel-wired-lan] [PATCH S17 11/17] ice: Fix issue when adding more than allowed VLANs Anirudh Venkataramanan
                   ` (6 subsequent siblings)
  16 siblings, 1 reply; 35+ messages in thread
From: Anirudh Venkataramanan @ 2019-02-28 23:25 UTC (permalink / raw)
  To: intel-wired-lan

From: Brett Creeley <brett.creeley@intel.com>

In ice_vsi_ctrl_rx_rings() we unnecessarily wait for QRX_CTRL_QENA_REQ
and QRX_CTRL_QENA_STAT to be the same value before disabling each Rx
queue. There is no reason to do this, so remove the wait loop; we
already have a wait loop after disabling/enabling the Rx queue through
the QRX_CTRL register to make sure it gets successfully
disabled/enabled.

Signed-off-by: Brett Creeley <brett.creeley@intel.com>
Signed-off-by: Anirudh Venkataramanan <anirudh.venkataramanan@intel.com>
---
 drivers/net/ethernet/intel/ice/ice_lib.c | 10 ++--------
 1 file changed, 2 insertions(+), 8 deletions(-)

diff --git a/drivers/net/ethernet/intel/ice/ice_lib.c b/drivers/net/ethernet/intel/ice/ice_lib.c
index 4c6ecc25aaa0..8e0a23e6b563 100644
--- a/drivers/net/ethernet/intel/ice/ice_lib.c
+++ b/drivers/net/ethernet/intel/ice/ice_lib.c
@@ -197,19 +197,13 @@ static int ice_vsi_ctrl_rx_rings(struct ice_vsi *vsi, bool ena)
 {
 	struct ice_pf *pf = vsi->back;
 	struct ice_hw *hw = &pf->hw;
-	int i, j, ret = 0;
+	int i, ret = 0;
 
 	for (i = 0; i < vsi->num_rxq; i++) {
 		int pf_q = vsi->rxq_map[i];
 		u32 rx_reg;
 
-		for (j = 0; j < ICE_Q_WAIT_MAX_RETRY; j++) {
-			rx_reg = rd32(hw, QRX_CTRL(pf_q));
-			if (((rx_reg >> QRX_CTRL_QENA_REQ_S) & 1) ==
-			    ((rx_reg >> QRX_CTRL_QENA_STAT_S) & 1))
-				break;
-			usleep_range(1000, 2000);
-		}
+		rx_reg = rd32(hw, QRX_CTRL(pf_q));
 
 		/* Skip if the queue is already in the requested state */
 		if (ena == !!(rx_reg & QRX_CTRL_QENA_STAT_M))
-- 
2.14.5



* [Intel-wired-lan] [PATCH S17 11/17] ice: Fix issue when adding more than allowed VLANs
  2019-02-28 23:25 [Intel-wired-lan] [PATCH S17 00/17] Implementation updates for ice Anirudh Venkataramanan
                   ` (9 preceding siblings ...)
  2019-02-28 23:25 ` [Intel-wired-lan] [PATCH S17 10/17] ice: Remove unnecessary wait when disabling/enabling Rx queues Anirudh Venkataramanan
@ 2019-02-28 23:25 ` Anirudh Venkataramanan
  2019-03-08  0:34   ` Bowers, AndrewX
  2019-02-28 23:25 ` [Intel-wired-lan] [PATCH S17 12/17] ice: Remove runtime change of PFINT_OICR_ENA register Anirudh Venkataramanan
                   ` (5 subsequent siblings)
  16 siblings, 1 reply; 35+ messages in thread
From: Anirudh Venkataramanan @ 2019-02-28 23:25 UTC (permalink / raw)
  To: intel-wired-lan

From: Akeem G Abodunrin <akeem.g.abodunrin@intel.com>

This patch fixes an issue with non-trusted VFs being able to add more
than the permitted number of VLANs, by adding a check in
ice_vc_process_vlan_msg. Also, don't return an error in this case, as
the VF does not need to know that it is not trusted.

Also rework ice_vsi_kill_vlan to use the right types.

Signed-off-by: Akeem G Abodunrin <akeem.g.abodunrin@intel.com>
Signed-off-by: Anirudh Venkataramanan <anirudh.venkataramanan@intel.com>
---
[Anirudh Venkataramanan <anirudh.venkataramanan@intel.com> cleaned up commit message]
---
 drivers/net/ethernet/intel/ice/ice_lib.c         | 15 +++++++++------
 drivers/net/ethernet/intel/ice/ice_virtchnl_pf.c | 13 ++++++++++++-
 2 files changed, 21 insertions(+), 7 deletions(-)

diff --git a/drivers/net/ethernet/intel/ice/ice_lib.c b/drivers/net/ethernet/intel/ice/ice_lib.c
index 8e0a23e6b563..6d9571c8826d 100644
--- a/drivers/net/ethernet/intel/ice/ice_lib.c
+++ b/drivers/net/ethernet/intel/ice/ice_lib.c
@@ -1598,7 +1598,8 @@ int ice_vsi_kill_vlan(struct ice_vsi *vsi, u16 vid)
 	struct ice_fltr_list_entry *list;
 	struct ice_pf *pf = vsi->back;
 	LIST_HEAD(tmp_add_list);
-	int status = 0;
+	enum ice_status status;
+	int err = 0;
 
 	list = devm_kzalloc(&pf->pdev->dev, sizeof(*list), GFP_KERNEL);
 	if (!list)
@@ -1614,14 +1615,16 @@ int ice_vsi_kill_vlan(struct ice_vsi *vsi, u16 vid)
 	INIT_LIST_HEAD(&list->list_entry);
 	list_add(&list->list_entry, &tmp_add_list);
 
-	if (ice_remove_vlan(&pf->hw, &tmp_add_list)) {
-		dev_err(&pf->pdev->dev, "Error removing VLAN %d on vsi %i\n",
-			vid, vsi->vsi_num);
-		status = -EIO;
+	status = ice_remove_vlan(&pf->hw, &tmp_add_list);
+	if (status) {
+		dev_err(&pf->pdev->dev,
+			"Error removing VLAN %d on vsi %i error: %d\n",
+			vid, vsi->vsi_num, status);
+		err = -EIO;
 	}
 
 	ice_free_fltr_list(&pf->pdev->dev, &tmp_add_list);
-	return status;
+	return err;
 }
 
 /**
diff --git a/drivers/net/ethernet/intel/ice/ice_virtchnl_pf.c b/drivers/net/ethernet/intel/ice/ice_virtchnl_pf.c
index 789b6f10b381..f52f0fc52f46 100644
--- a/drivers/net/ethernet/intel/ice/ice_virtchnl_pf.c
+++ b/drivers/net/ethernet/intel/ice/ice_virtchnl_pf.c
@@ -2329,7 +2329,6 @@ static int ice_vc_process_vlan_msg(struct ice_vf *vf, u8 *msg, bool add_v)
 		/* There is no need to let VF know about being not trusted,
 		 * so we can just return success message here
 		 */
-		v_ret = VIRTCHNL_STATUS_ERR_PARAM;
 		goto error_param;
 	}
 
@@ -2370,6 +2369,18 @@ static int ice_vc_process_vlan_msg(struct ice_vf *vf, u8 *msg, bool add_v)
 		for (i = 0; i < vfl->num_elements; i++) {
 			u16 vid = vfl->vlan_id[i];
 
+			if (!ice_is_vf_trusted(vf) &&
+			    vf->num_vlan >= ICE_MAX_VLAN_PER_VF) {
+				dev_info(&pf->pdev->dev,
+					 "VF-%d is not trusted, switch the VF to trusted mode, in order to add more VLAN addresses\n",
+					 vf->vf_id);
+				/* There is no need to let VF know about being
+				 * not trusted, so we can just return success
+				 * message here as well.
+				 */
+				goto error_param;
+			}
+
 			if (ice_vsi_add_vlan(vsi, vid)) {
 				v_ret = VIRTCHNL_STATUS_ERR_PARAM;
 				goto error_param;
-- 
2.14.5



* [Intel-wired-lan] [PATCH S17 12/17] ice: Remove runtime change of PFINT_OICR_ENA register
  2019-02-28 23:25 [Intel-wired-lan] [PATCH S17 00/17] Implementation updates for ice Anirudh Venkataramanan
                   ` (10 preceding siblings ...)
  2019-02-28 23:25 ` [Intel-wired-lan] [PATCH S17 11/17] ice: Fix issue when adding more than allowed VLANs Anirudh Venkataramanan
@ 2019-02-28 23:25 ` Anirudh Venkataramanan
  2019-03-08  0:35   ` Bowers, AndrewX
  2019-02-28 23:25 ` [Intel-wired-lan] [PATCH S17 13/17] ice: Add reg_idx variable in ice_q_vector structure Anirudh Venkataramanan
                   ` (4 subsequent siblings)
  16 siblings, 1 reply; 35+ messages in thread
From: Anirudh Venkataramanan @ 2019-02-28 23:25 UTC (permalink / raw)
  To: intel-wired-lan

From: Md Fahad Iqbal Polash <md.fahad.iqbal.polash@intel.com>

Changing the PFINT_OICR_ENA register at runtime is unnecessary.
The handlers should always clear the atomic bit for their task
as they start, because this ensures that any late interrupt will
either 1) re-set the bit, or 2) be handled directly in the
"already running" task handler.
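
The handlers now follow the same minimal pattern (a sketch mirroring
the hunks below, shown here for the MDD case):

	/* at the top of ice_handle_mdd_event() */
	if (!test_and_clear_bit(__ICE_MDD_EVENT_PENDING, pf->state))
		return;

	/* ...handle the event; no PFINT_OICR_ENA read-modify-write is
	 * needed afterwards to re-arm the cause
	 */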

Signed-off-by: Md Fahad Iqbal Polash <md.fahad.iqbal.polash@intel.com>
Signed-off-by: Anirudh Venkataramanan <anirudh.venkataramanan@intel.com>
---
 drivers/net/ethernet/intel/ice/ice_main.c        | 13 ++-----------
 drivers/net/ethernet/intel/ice/ice_virtchnl_pf.c | 13 +------------
 2 files changed, 3 insertions(+), 23 deletions(-)

diff --git a/drivers/net/ethernet/intel/ice/ice_main.c b/drivers/net/ethernet/intel/ice/ice_main.c
index 260ab69c2c00..94b2aa6b4c3d 100644
--- a/drivers/net/ethernet/intel/ice/ice_main.c
+++ b/drivers/net/ethernet/intel/ice/ice_main.c
@@ -1096,7 +1096,7 @@ static void ice_handle_mdd_event(struct ice_pf *pf)
 	u32 reg;
 	int i;
 
-	if (!test_bit(__ICE_MDD_EVENT_PENDING, pf->state))
+	if (!test_and_clear_bit(__ICE_MDD_EVENT_PENDING, pf->state))
 		return;
 
 	/* find what triggered the MDD event */
@@ -1229,12 +1229,6 @@ static void ice_handle_mdd_event(struct ice_pf *pf)
 		}
 	}
 
-	/* re-enable MDD interrupt cause */
-	clear_bit(__ICE_MDD_EVENT_PENDING, pf->state);
-	reg = rd32(hw, PFINT_OICR_ENA);
-	reg |= PFINT_OICR_MAL_DETECT_M;
-	wr32(hw, PFINT_OICR_ENA, reg);
-	ice_flush(hw);
 }
 
 /**
@@ -1523,7 +1517,7 @@ static irqreturn_t ice_misc_intr(int __always_unused irq, void *data)
 			rd32(hw, PFHMC_ERRORDATA));
 	}
 
-	/* Report and mask off any remaining unexpected interrupts */
+	/* Report any remaining unexpected interrupts */
 	oicr &= ena_mask;
 	if (oicr) {
 		dev_dbg(&pf->pdev->dev, "unhandled interrupt oicr=0x%08x\n",
@@ -1537,12 +1531,9 @@ static irqreturn_t ice_misc_intr(int __always_unused irq, void *data)
 			set_bit(__ICE_PFR_REQ, pf->state);
 			ice_service_task_schedule(pf);
 		}
-		ena_mask &= ~oicr;
 	}
 	ret = IRQ_HANDLED;
 
-	/* re-enable interrupt causes that are not handled during this pass */
-	wr32(hw, PFINT_OICR_ENA, ena_mask);
 	if (!test_bit(__ICE_DOWN, pf->state)) {
 		ice_service_task_schedule(pf);
 		ice_irq_dynamic_ena(hw, NULL, NULL);
diff --git a/drivers/net/ethernet/intel/ice/ice_virtchnl_pf.c b/drivers/net/ethernet/intel/ice/ice_virtchnl_pf.c
index f52f0fc52f46..abc958788267 100644
--- a/drivers/net/ethernet/intel/ice/ice_virtchnl_pf.c
+++ b/drivers/net/ethernet/intel/ice/ice_virtchnl_pf.c
@@ -1273,21 +1273,10 @@ void ice_process_vflr_event(struct ice_pf *pf)
 	int vf_id;
 	u32 reg;
 
-	if (!test_bit(__ICE_VFLR_EVENT_PENDING, pf->state) ||
+	if (!test_and_clear_bit(__ICE_VFLR_EVENT_PENDING, pf->state) ||
 	    !pf->num_alloc_vfs)
 		return;
 
-	/* Re-enable the VFLR interrupt cause here, before looking for which
-	 * VF got reset. Otherwise, if another VF gets a reset while the
-	 * first one is being processed, that interrupt will be lost, and
-	 * that VF will be stuck in reset forever.
-	 */
-	reg = rd32(hw, PFINT_OICR_ENA);
-	reg |= PFINT_OICR_VFLR_M;
-	wr32(hw, PFINT_OICR_ENA, reg);
-	ice_flush(hw);
-
-	clear_bit(__ICE_VFLR_EVENT_PENDING, pf->state);
 	for (vf_id = 0; vf_id < pf->num_alloc_vfs; vf_id++) {
 		struct ice_vf *vf = &pf->vf[vf_id];
 		u32 reg_idx, bit_idx;
-- 
2.14.5


^ permalink raw reply related	[flat|nested] 35+ messages in thread

* [Intel-wired-lan] [PATCH S17 13/17] ice: Add reg_idx variable in ice_q_vector structure
  2019-02-28 23:25 [Intel-wired-lan] [PATCH S17 00/17] Implementation updates for ice Anirudh Venkataramanan
                   ` (11 preceding siblings ...)
  2019-02-28 23:25 ` [Intel-wired-lan] [PATCH S17 12/17] ice: Remove runtime change of PFINT_OICR_ENA register Anirudh Venkataramanan
@ 2019-02-28 23:25 ` Anirudh Venkataramanan
  2019-03-08  0:35   ` Bowers, AndrewX
  2019-02-28 23:26 ` [Intel-wired-lan] [PATCH S17 14/17] ice: Add missing PHY type to link settings Anirudh Venkataramanan
                   ` (3 subsequent siblings)
  16 siblings, 1 reply; 35+ messages in thread
From: Anirudh Venkataramanan @ 2019-02-28 23:25 UTC (permalink / raw)
  To: intel-wired-lan

From: Brett Creeley <brett.creeley@intel.com>

Every time we want to re-enable interrupts and/or write to a register
that requires an interrupt vector's hardware index we do the following:

vsi->hw_base_vector + q_vector->v_idx

This is a wasteful operation, especially in the hot path. Fix this by
adding a u16 reg_idx member to the ice_q_vector structure and making
the necessary changes to use it.
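
As a rough illustration (mirroring the hunks below), the register index
is now computed once at vector setup and reused directly on the hot
path:

	/* setup time, in ice_vsi_set_q_vectors_reg_idx() */
	q_vector->reg_idx = q_vector->v_idx + vsi->hw_base_vector;

	/* hot path, e.g. when re-enabling the vector's interrupt */
	wr32(hw, GLINT_DYN_CTL(q_vector->reg_idx), itr_val);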

Signed-off-by: Brett Creeley <brett.creeley@intel.com>
Signed-off-by: Anirudh Venkataramanan <anirudh.venkataramanan@intel.com>
---
 drivers/net/ethernet/intel/ice/ice.h      |  3 +-
 drivers/net/ethernet/intel/ice/ice_lib.c  | 84 ++++++++++++++++++++++++-------
 drivers/net/ethernet/intel/ice/ice_main.c | 13 +++--
 drivers/net/ethernet/intel/ice/ice_txrx.c |  2 +-
 4 files changed, 76 insertions(+), 26 deletions(-)

diff --git a/drivers/net/ethernet/intel/ice/ice.h b/drivers/net/ethernet/intel/ice/ice.h
index 878a75182d6d..d66aad49bfd4 100644
--- a/drivers/net/ethernet/intel/ice/ice.h
+++ b/drivers/net/ethernet/intel/ice/ice.h
@@ -297,6 +297,7 @@ struct ice_q_vector {
 	struct ice_vsi *vsi;
 
 	u16 v_idx;			/* index in the vsi->q_vector array. */
+	u16 reg_idx;
 	u8 num_ring_rx;			/* total number of Rx rings in vector */
 	u8 num_ring_tx;			/* total number of Tx rings in vector */
 	u8 itr_countdown;		/* when 0 should adjust adaptive ITR */
@@ -403,7 +404,7 @@ static inline void
 ice_irq_dynamic_ena(struct ice_hw *hw, struct ice_vsi *vsi,
 		    struct ice_q_vector *q_vector)
 {
-	u32 vector = (vsi && q_vector) ? vsi->hw_base_vector + q_vector->v_idx :
+	u32 vector = (vsi && q_vector) ? q_vector->reg_idx :
 				((struct ice_pf *)hw->back)->hw_oicr_idx;
 	int itr = ICE_ITR_NONE;
 	u32 val;
diff --git a/drivers/net/ethernet/intel/ice/ice_lib.c b/drivers/net/ethernet/intel/ice/ice_lib.c
index 6d9571c8826d..399905396134 100644
--- a/drivers/net/ethernet/intel/ice/ice_lib.c
+++ b/drivers/net/ethernet/intel/ice/ice_lib.c
@@ -1805,13 +1805,12 @@ static void ice_cfg_itr_gran(struct ice_hw *hw)
  * ice_cfg_itr - configure the initial interrupt throttle values
  * @hw: pointer to the HW structure
  * @q_vector: interrupt vector that's being configured
- * @vector: HW vector index to apply the interrupt throttling to
  *
  * Configure interrupt throttling values for the ring containers that are
  * associated with the interrupt vector passed in.
  */
 static void
-ice_cfg_itr(struct ice_hw *hw, struct ice_q_vector *q_vector, u16 vector)
+ice_cfg_itr(struct ice_hw *hw, struct ice_q_vector *q_vector)
 {
 	ice_cfg_itr_gran(hw);
 
@@ -1825,7 +1824,7 @@ ice_cfg_itr(struct ice_hw *hw, struct ice_q_vector *q_vector, u16 vector)
 		rc->target_itr = ITR_TO_REG(rc->itr_setting);
 		rc->next_update = jiffies + 1;
 		rc->current_itr = rc->target_itr;
-		wr32(hw, GLINT_ITR(rc->itr_idx, vector),
+		wr32(hw, GLINT_ITR(rc->itr_idx, q_vector->reg_idx),
 		     ITR_REG_ALIGN(rc->current_itr) >> ICE_ITR_GRAN_S);
 	}
 
@@ -1839,7 +1838,7 @@ ice_cfg_itr(struct ice_hw *hw, struct ice_q_vector *q_vector, u16 vector)
 		rc->target_itr = ITR_TO_REG(rc->itr_setting);
 		rc->next_update = jiffies + 1;
 		rc->current_itr = rc->target_itr;
-		wr32(hw, GLINT_ITR(rc->itr_idx, vector),
+		wr32(hw, GLINT_ITR(rc->itr_idx, q_vector->reg_idx),
 		     ITR_REG_ALIGN(rc->current_itr) >> ICE_ITR_GRAN_S);
 	}
 }
@@ -1851,17 +1850,17 @@ ice_cfg_itr(struct ice_hw *hw, struct ice_q_vector *q_vector, u16 vector)
 void ice_vsi_cfg_msix(struct ice_vsi *vsi)
 {
 	struct ice_pf *pf = vsi->back;
-	u16 vector = vsi->hw_base_vector;
 	struct ice_hw *hw = &pf->hw;
 	u32 txq = 0, rxq = 0;
 	int i, q;
 
-	for (i = 0; i < vsi->num_q_vectors; i++, vector++) {
+	for (i = 0; i < vsi->num_q_vectors; i++) {
 		struct ice_q_vector *q_vector = vsi->q_vectors[i];
+		u16 reg_idx = q_vector->reg_idx;
 
-		ice_cfg_itr(hw, q_vector, vector);
+		ice_cfg_itr(hw, q_vector);
 
-		wr32(hw, GLINT_RATE(vector),
+		wr32(hw, GLINT_RATE(reg_idx),
 		     ice_intrl_usec_to_reg(q_vector->intrl, hw->intrl_gran));
 
 		/* Both Transmit Queue Interrupt Cause Control register
@@ -1886,7 +1885,7 @@ void ice_vsi_cfg_msix(struct ice_vsi *vsi)
 			else
 				val = QINT_TQCTL_CAUSE_ENA_M |
 				      (itr_idx << QINT_TQCTL_ITR_INDX_S)  |
-				      (vector << QINT_TQCTL_MSIX_INDX_S);
+				      (reg_idx << QINT_TQCTL_MSIX_INDX_S);
 			wr32(hw, QINT_TQCTL(vsi->txq_map[txq]), val);
 			txq++;
 		}
@@ -1902,7 +1901,7 @@ void ice_vsi_cfg_msix(struct ice_vsi *vsi)
 			else
 				val = QINT_RQCTL_CAUSE_ENA_M |
 				      (itr_idx << QINT_RQCTL_ITR_INDX_S)  |
-				      (vector << QINT_RQCTL_MSIX_INDX_S);
+				      (reg_idx << QINT_RQCTL_MSIX_INDX_S);
 			wr32(hw, QINT_RQCTL(vsi->rxq_map[rxq]), val);
 			rxq++;
 		}
@@ -2065,8 +2064,6 @@ ice_vsi_stop_tx_rings(struct ice_vsi *vsi, enum ice_disq_rst_src rst_src,
 			break;
 
 		for (i = 0; i < vsi->tc_cfg.tc_info[tc].qcount_tx; i++) {
-			u16 v_idx;
-
 			if (!rings || !rings[q_idx] ||
 			    !rings[q_idx]->q_vector) {
 				err = -EINVAL;
@@ -2088,8 +2085,7 @@ ice_vsi_stop_tx_rings(struct ice_vsi *vsi, enum ice_disq_rst_src rst_src,
 			/* trigger a software interrupt for the vector
 			 * associated to the queue to schedule NAPI handler
 			 */
-			v_idx = rings[i]->q_vector->v_idx;
-			wr32(hw, GLINT_DYN_CTL(vsi->hw_base_vector + v_idx),
+			wr32(hw, GLINT_DYN_CTL(rings[i]->q_vector->reg_idx),
 			     GLINT_DYN_CTL_SWINT_TRIG_M |
 			     GLINT_DYN_CTL_INTENA_MSK_M);
 			q_idx++;
@@ -2208,6 +2204,44 @@ static void ice_vsi_set_tc_cfg(struct ice_vsi *vsi)
 	vsi->tc_cfg.numtc = ice_dcb_get_num_tc(cfg);
 }
 
+/**
+ * ice_vsi_set_q_vectors_reg_idx - set the HW register index for all q_vectors
+ * @vsi: VSI to set the q_vectors register index on
+ */
+static int
+ice_vsi_set_q_vectors_reg_idx(struct ice_vsi *vsi)
+{
+	u16 i;
+
+	if (!vsi || !vsi->q_vectors)
+		return -EINVAL;
+
+	ice_for_each_q_vector(vsi, i) {
+		struct ice_q_vector *q_vector = vsi->q_vectors[i];
+
+		if (!q_vector) {
+			dev_err(&vsi->back->pdev->dev,
+				"Failed to set reg_idx on q_vector %d VSI %d\n",
+				i, vsi->vsi_num);
+			goto clear_reg_idx;
+		}
+
+		q_vector->reg_idx = q_vector->v_idx + vsi->hw_base_vector;
+	}
+
+	return 0;
+
+clear_reg_idx:
+	ice_for_each_q_vector(vsi, i) {
+		struct ice_q_vector *q_vector = vsi->q_vectors[i];
+
+		if (q_vector)
+			q_vector->reg_idx = 0;
+	}
+
+	return -EINVAL;
+}
+
 /**
  * ice_vsi_setup - Set up a VSI by a given type
  * @pf: board private structure
@@ -2273,6 +2307,10 @@ ice_vsi_setup(struct ice_pf *pf, struct ice_port_info *pi,
 		if (ret)
 			goto unroll_alloc_q_vector;
 
+		ret = ice_vsi_set_q_vectors_reg_idx(vsi);
+		if (ret)
+			goto unroll_vector_base;
+
 		ret = ice_vsi_alloc_rings(vsi);
 		if (ret)
 			goto unroll_vector_base;
@@ -2311,6 +2349,10 @@ ice_vsi_setup(struct ice_pf *pf, struct ice_port_info *pi,
 		} else {
 			vsi->hw_base_vector = pf->vf[vf_id].first_vector_idx;
 		}
+		ret = ice_vsi_set_q_vectors_reg_idx(vsi);
+		if (ret)
+			goto unroll_vector_base;
+
 		pf->q_left_tx -= vsi->alloc_txq;
 		pf->q_left_rx -= vsi->alloc_rxq;
 		break;
@@ -2623,11 +2665,11 @@ void ice_vsi_dis_irq(struct ice_vsi *vsi)
 
 	/* disable each interrupt */
 	if (test_bit(ICE_FLAG_MSIX_ENA, pf->flags)) {
-		for (i = vsi->hw_base_vector;
-		     i < (vsi->num_q_vectors + vsi->hw_base_vector); i++)
-			wr32(hw, GLINT_DYN_CTL(i), 0);
+		ice_for_each_q_vector(vsi, i)
+			wr32(hw, GLINT_DYN_CTL(vsi->q_vectors[i]->reg_idx), 0);
 
 		ice_flush(hw);
+
 		ice_for_each_q_vector(vsi, i)
 			synchronize_irq(pf->msix_entries[i + base].vector);
 	}
@@ -2780,6 +2822,10 @@ int ice_vsi_rebuild(struct ice_vsi *vsi)
 		if (ret)
 			goto err_vectors;
 
+		ret = ice_vsi_set_q_vectors_reg_idx(vsi);
+		if (ret)
+			goto err_vectors;
+
 		ret = ice_vsi_alloc_rings(vsi);
 		if (ret)
 			goto err_vectors;
@@ -2801,6 +2847,10 @@ int ice_vsi_rebuild(struct ice_vsi *vsi)
 		if (ret)
 			goto err_vectors;
 
+		ret = ice_vsi_set_q_vectors_reg_idx(vsi);
+		if (ret)
+			goto err_vectors;
+
 		ret = ice_vsi_alloc_rings(vsi);
 		if (ret)
 			goto err_vectors;
diff --git a/drivers/net/ethernet/intel/ice/ice_main.c b/drivers/net/ethernet/intel/ice/ice_main.c
index 94b2aa6b4c3d..a0d2c337fede 100644
--- a/drivers/net/ethernet/intel/ice/ice_main.c
+++ b/drivers/net/ethernet/intel/ice/ice_main.c
@@ -1592,23 +1592,23 @@ static void ice_free_irq_msix_misc(struct ice_pf *pf)
 /**
  * ice_ena_ctrlq_interrupts - enable control queue interrupts
  * @hw: pointer to HW structure
- * @v_idx: HW vector index to associate the control queue interrupts with
+ * @reg_idx: HW vector index to associate the control queue interrupts with
  */
-static void ice_ena_ctrlq_interrupts(struct ice_hw *hw, u16 v_idx)
+static void ice_ena_ctrlq_interrupts(struct ice_hw *hw, u16 reg_idx)
 {
 	u32 val;
 
-	val = ((v_idx & PFINT_OICR_CTL_MSIX_INDX_M) |
+	val = ((reg_idx & PFINT_OICR_CTL_MSIX_INDX_M) |
 	       PFINT_OICR_CTL_CAUSE_ENA_M);
 	wr32(hw, PFINT_OICR_CTL, val);
 
 	/* enable Admin queue Interrupt causes */
-	val = ((v_idx & PFINT_FW_CTL_MSIX_INDX_M) |
+	val = ((reg_idx & PFINT_FW_CTL_MSIX_INDX_M) |
 	       PFINT_FW_CTL_CAUSE_ENA_M);
 	wr32(hw, PFINT_FW_CTL, val);
 
 	/* enable Mailbox queue Interrupt causes */
-	val = ((v_idx & PFINT_MBX_CTL_MSIX_INDX_M) |
+	val = ((reg_idx & PFINT_MBX_CTL_MSIX_INDX_M) |
 	       PFINT_MBX_CTL_CAUSE_ENA_M);
 	wr32(hw, PFINT_MBX_CTL, val);
 
@@ -4214,8 +4214,7 @@ static void ice_tx_timeout(struct net_device *netdev)
 		/* Read interrupt register */
 		if (test_bit(ICE_FLAG_MSIX_ENA, pf->flags))
 			val = rd32(hw,
-				   GLINT_DYN_CTL(tx_ring->q_vector->v_idx +
-						 tx_ring->vsi->hw_base_vector));
+				   GLINT_DYN_CTL(tx_ring->q_vector->reg_idx));
 
 		netdev_info(netdev, "tx_timeout: VSI_num: %d, Q %d, NTC: 0x%x, HW_HEAD: 0x%x, NTU: 0x%x, INT: 0x%x\n",
 			    vsi->vsi_num, hung_queue, tx_ring->next_to_clean,
diff --git a/drivers/net/ethernet/intel/ice/ice_txrx.c b/drivers/net/ethernet/intel/ice/ice_txrx.c
index fabee3e59eff..99f1a18af4ac 100644
--- a/drivers/net/ethernet/intel/ice/ice_txrx.c
+++ b/drivers/net/ethernet/intel/ice/ice_txrx.c
@@ -1391,7 +1391,7 @@ ice_update_ena_itr(struct ice_vsi *vsi, struct ice_q_vector *q_vector)
 
 	if (!test_bit(__ICE_DOWN, vsi->state))
 		wr32(&vsi->back->hw,
-		     GLINT_DYN_CTL(vsi->hw_base_vector + q_vector->v_idx),
+		     GLINT_DYN_CTL(q_vector->reg_idx),
 		     itr_val);
 }
 
-- 
2.14.5


^ permalink raw reply related	[flat|nested] 35+ messages in thread

* [Intel-wired-lan] [PATCH S17 14/17] ice: Add missing PHY type to link settings
  2019-02-28 23:25 [Intel-wired-lan] [PATCH S17 00/17] Implementation updates for ice Anirudh Venkataramanan
                   ` (12 preceding siblings ...)
  2019-02-28 23:25 ` [Intel-wired-lan] [PATCH S17 13/17] ice: Add reg_idx variable in ice_q_vector structure Anirudh Venkataramanan
@ 2019-02-28 23:26 ` Anirudh Venkataramanan
  2019-03-08  0:36   ` Bowers, AndrewX
  2019-02-28 23:26 ` [Intel-wired-lan] [PATCH S17 15/17] ice: Refactor link event flow Anirudh Venkataramanan
                   ` (2 subsequent siblings)
  16 siblings, 1 reply; 35+ messages in thread
From: Anirudh Venkataramanan @ 2019-02-28 23:26 UTC (permalink / raw)
  To: intel-wired-lan

From: Tony Nguyen <anthony.l.nguyen@intel.com>

The PHY type ICE_PHY_TYPE_LOW_25G_AUI_C2C is missing from
ice_get_settings_link_up(), which causes a warning message
for an unrecognized PHY. Add the PHY type to correctly set
the settings and avoid the warning message.

Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
Signed-off-by: Anirudh Venkataramanan <anirudh.venkataramanan@intel.com>
---
 drivers/net/ethernet/intel/ice/ice_ethtool.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/drivers/net/ethernet/intel/ice/ice_ethtool.c b/drivers/net/ethernet/intel/ice/ice_ethtool.c
index f995ed599cd9..0bfe696d8077 100644
--- a/drivers/net/ethernet/intel/ice/ice_ethtool.c
+++ b/drivers/net/ethernet/intel/ice/ice_ethtool.c
@@ -1034,6 +1034,7 @@ ice_get_settings_link_up(struct ethtool_link_ksettings *ks,
 						     25000baseCR_Full);
 		break;
 	case ICE_PHY_TYPE_LOW_25G_AUI_AOC_ACC:
+	case ICE_PHY_TYPE_LOW_25G_AUI_C2C:
 		ethtool_link_ksettings_add_link_mode(ks, supported,
 						     25000baseCR_Full);
 		break;
-- 
2.14.5


^ permalink raw reply related	[flat|nested] 35+ messages in thread

* [Intel-wired-lan] [PATCH S17 15/17] ice: Refactor link event flow
  2019-02-28 23:25 [Intel-wired-lan] [PATCH S17 00/17] Implementation updates for ice Anirudh Venkataramanan
                   ` (13 preceding siblings ...)
  2019-02-28 23:26 ` [Intel-wired-lan] [PATCH S17 14/17] ice: Add missing PHY type to link settings Anirudh Venkataramanan
@ 2019-02-28 23:26 ` Anirudh Venkataramanan
  2019-03-08  0:36   ` Bowers, AndrewX
  2019-02-28 23:26 ` [Intel-wired-lan] [PATCH S17 16/17] ice: Use dev_err when ice_cfg_vsi_lan fails Anirudh Venkataramanan
  2019-02-28 23:26 ` [Intel-wired-lan] [PATCH S17 17/17] ice: Use pf instead of vsi-back Anirudh Venkataramanan
  16 siblings, 1 reply; 35+ messages in thread
From: Anirudh Venkataramanan @ 2019-02-28 23:26 UTC (permalink / raw)
  To: intel-wired-lan

From: Brett Creeley <brett.creeley@intel.com>

Currently the link event flow works, but it can be much better.
Refactor the link event flow to make it cleaner and clearer
about what is going on.
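
As part of the cleanup, the link state now comes straight from the ARQ
event buffer and is applied only to the PF VSI, found via the new
ice_find_vsi_by_type() helper. Roughly (a sketch condensed from the
hunks below):

	/* ice_handle_link_event() */
	link_data = (struct ice_aqc_get_link_status_data *)event->msg_buf;
	status = ice_link_event(pf, pf->hw.port_info,
				!!(link_data->link_info & ICE_AQ_LINK_UP),
				le16_to_cpu(link_data->link_speed));

	/* ice_link_event() acts on the PF VSI only */
	vsi = ice_find_vsi_by_type(pf, ICE_VSI_PF);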

Signed-off-by: Brett Creeley <brett.creeley@intel.com>
Signed-off-by: Anirudh Venkataramanan <anirudh.venkataramanan@intel.com>
---
[Anirudh Venkataramanan <anirudh.venkataramanan@intel.com> cleaned up commit message]
---
 drivers/net/ethernet/intel/ice/ice.h      | 20 +++++++
 drivers/net/ethernet/intel/ice/ice_main.c | 93 +++++++++++++++----------------
 2 files changed, 65 insertions(+), 48 deletions(-)

diff --git a/drivers/net/ethernet/intel/ice/ice.h b/drivers/net/ethernet/intel/ice/ice.h
index d66aad49bfd4..804d12c2f1df 100644
--- a/drivers/net/ethernet/intel/ice/ice.h
+++ b/drivers/net/ethernet/intel/ice/ice.h
@@ -420,6 +420,26 @@ ice_irq_dynamic_ena(struct ice_hw *hw, struct ice_vsi *vsi,
 	wr32(hw, GLINT_DYN_CTL(vector), val);
 }
 
+/**
+ * ice_find_vsi_by_type - Find and return VSI of a given type
+ * @pf: PF to search for VSI
+ * @type: Value indicating type of VSI we are looking for
+ */
+static inline struct ice_vsi *
+ice_find_vsi_by_type(struct ice_pf *pf, enum ice_vsi_type type)
+{
+	int i;
+
+	for (i = 0; i < pf->num_alloc_vsi; i++) {
+		struct ice_vsi *vsi = pf->vsi[i];
+
+		if (vsi && vsi->type == type)
+			return vsi;
+	}
+
+	return NULL;
+}
+
 void ice_set_ethtool_ops(struct net_device *netdev);
 int ice_up(struct ice_vsi *vsi);
 int ice_down(struct ice_vsi *vsi);
diff --git a/drivers/net/ethernet/intel/ice/ice_main.c b/drivers/net/ethernet/intel/ice/ice_main.c
index a0d2c337fede..d57ac8ce9e42 100644
--- a/drivers/net/ethernet/intel/ice/ice_main.c
+++ b/drivers/net/ethernet/intel/ice/ice_main.c
@@ -590,6 +590,9 @@ void ice_print_link_msg(struct ice_vsi *vsi, bool isup)
 	const char *speed;
 	const char *fc;
 
+	if (!vsi)
+		return;
+
 	if (vsi->current_isup == isup)
 		return;
 
@@ -659,15 +662,16 @@ void ice_print_link_msg(struct ice_vsi *vsi, bool isup)
  */
 static void ice_vsi_link_event(struct ice_vsi *vsi, bool link_up)
 {
-	if (!vsi || test_bit(__ICE_DOWN, vsi->state))
+	if (!vsi)
+		return;
+
+	if (test_bit(__ICE_DOWN, vsi->state) || !vsi->netdev)
 		return;
 
 	if (vsi->type == ICE_VSI_PF) {
-		if (!vsi->netdev) {
-			dev_dbg(&vsi->back->pdev->dev,
-				"vsi->netdev is not initialized!\n");
+		if (link_up == netif_carrier_ok(vsi->netdev))
 			return;
-		}
+
 		if (link_up) {
 			netif_carrier_on(vsi->netdev);
 			netif_tx_wake_all_queues(vsi->netdev);
@@ -682,61 +686,51 @@ static void ice_vsi_link_event(struct ice_vsi *vsi, bool link_up)
  * ice_link_event - process the link event
  * @pf: pf that the link event is associated with
  * @pi: port_info for the port that the link event is associated with
+ * @link_up: true if the physical link is up and false if it is down
+ * @link_speed: current link speed received from the link event
  *
- * Returns -EIO if ice_get_link_status() fails
- * Returns 0 on success
+ * Returns 0 on success and negative on failure
  */
 static int
-ice_link_event(struct ice_pf *pf, struct ice_port_info *pi)
+ice_link_event(struct ice_pf *pf, struct ice_port_info *pi, bool link_up,
+	       u16 link_speed)
 {
-	u8 new_link_speed, old_link_speed;
 	struct ice_phy_info *phy_info;
-	bool new_link_same_as_old;
-	bool new_link, old_link;
-	u8 lport;
-	u16 v;
+	struct ice_vsi *vsi;
+	u16 old_link_speed;
+	bool old_link;
+	int result;
 
 	phy_info = &pi->phy;
 	phy_info->link_info_old = phy_info->link_info;
-	/* Force ice_get_link_status() to update link info */
-	phy_info->get_link_info = true;
 
-	old_link = (phy_info->link_info_old.link_info & ICE_AQ_LINK_UP);
+	old_link = !!(phy_info->link_info_old.link_info & ICE_AQ_LINK_UP);
 	old_link_speed = phy_info->link_info_old.link_speed;
 
-	lport = pi->lport;
-	if (ice_get_link_status(pi, &new_link)) {
+	/* update the link info structures and re-enable link events,
+	 * don't bail on failure due to other book keeping needed
+	 */
+	result = ice_update_link_info(pi);
+	if (result)
 		dev_dbg(&pf->pdev->dev,
-			"Could not get link status for port %d\n", lport);
-		return -EIO;
-	}
-
-	new_link_speed = phy_info->link_info.link_speed;
-
-	new_link_same_as_old = (new_link == old_link &&
-				new_link_speed == old_link_speed);
+			"Failed to update link status and re-enable link events for port %d\n",
+			pi->lport);
 
-	ice_for_each_vsi(pf, v) {
-		struct ice_vsi *vsi = pf->vsi[v];
+	/* if the old link up/down and speed is the same as the new */
+	if (link_up == old_link && link_speed == old_link_speed)
+		return result;
 
-		if (!vsi || !vsi->port_info)
-			continue;
+	vsi = ice_find_vsi_by_type(pf, ICE_VSI_PF);
+	if (!vsi || !vsi->port_info)
+		return -EINVAL;
 
-		if (new_link_same_as_old &&
-		    (test_bit(__ICE_DOWN, vsi->state) ||
-		    new_link == netif_carrier_ok(vsi->netdev)))
-			continue;
+	ice_vsi_link_event(vsi, link_up);
+	ice_print_link_msg(vsi, link_up);
 
-		if (vsi->port_info->lport == lport) {
-			ice_print_link_msg(vsi, new_link);
-			ice_vsi_link_event(vsi, new_link);
-		}
-	}
-
-	if (!new_link_same_as_old && pf->num_alloc_vfs)
+	if (pf->num_alloc_vfs)
 		ice_vc_notify_link_state(pf);
 
-	return 0;
+	return result;
 }
 
 /**
@@ -801,20 +795,23 @@ static int ice_init_link_events(struct ice_port_info *pi)
 /**
  * ice_handle_link_event - handle link event via ARQ
  * @pf: pf that the link event is associated with
- *
- * Return -EINVAL if port_info is null
- * Return status on success
+ * @event: event structure containing link status info
  */
-static int ice_handle_link_event(struct ice_pf *pf)
+static int
+ice_handle_link_event(struct ice_pf *pf, struct ice_rq_event_info *event)
 {
+	struct ice_aqc_get_link_status_data *link_data;
 	struct ice_port_info *port_info;
 	int status;
 
+	link_data = (struct ice_aqc_get_link_status_data *)event->msg_buf;
 	port_info = pf->hw.port_info;
 	if (!port_info)
 		return -EINVAL;
 
-	status = ice_link_event(pf, port_info);
+	status = ice_link_event(pf, port_info,
+				!!(link_data->link_info & ICE_AQ_LINK_UP),
+				le16_to_cpu(link_data->link_speed));
 	if (status)
 		dev_dbg(&pf->pdev->dev,
 			"Could not process link event, error %d\n", status);
@@ -926,7 +923,7 @@ static int __ice_clean_ctrlq(struct ice_pf *pf, enum ice_ctl_q q_type)
 
 		switch (opcode) {
 		case ice_aqc_opc_get_link_status:
-			if (ice_handle_link_event(pf))
+			if (ice_handle_link_event(pf, &event))
 				dev_err(&pf->pdev->dev,
 					"Could not handle link event\n");
 			break;
-- 
2.14.5


^ permalink raw reply related	[flat|nested] 35+ messages in thread

* [Intel-wired-lan] [PATCH S17 16/17] ice: Use dev_err when ice_cfg_vsi_lan fails
  2019-02-28 23:25 [Intel-wired-lan] [PATCH S17 00/17] Implementation updates for ice Anirudh Venkataramanan
                   ` (14 preceding siblings ...)
  2019-02-28 23:26 ` [Intel-wired-lan] [PATCH S17 15/17] ice: Refactor link event flow Anirudh Venkataramanan
@ 2019-02-28 23:26 ` Anirudh Venkataramanan
  2019-03-08  0:36   ` Bowers, AndrewX
  2019-02-28 23:26 ` [Intel-wired-lan] [PATCH S17 17/17] ice: Use pf instead of vsi-back Anirudh Venkataramanan
  16 siblings, 1 reply; 35+ messages in thread
From: Anirudh Venkataramanan @ 2019-02-28 23:26 UTC (permalink / raw)
  To: intel-wired-lan

From: Brett Creeley <brett.creeley@intel.com>

dev_err makes more sense than dev_info when this call fails.

Signed-off-by: Brett Creeley <brett.creeley@intel.com>
Signed-off-by: Anirudh Venkataramanan <anirudh.venkataramanan@intel.com>
---
 drivers/net/ethernet/intel/ice/ice_lib.c | 9 ++++++---
 1 file changed, 6 insertions(+), 3 deletions(-)

diff --git a/drivers/net/ethernet/intel/ice/ice_lib.c b/drivers/net/ethernet/intel/ice/ice_lib.c
index 399905396134..49c75371af08 100644
--- a/drivers/net/ethernet/intel/ice/ice_lib.c
+++ b/drivers/net/ethernet/intel/ice/ice_lib.c
@@ -2368,7 +2368,9 @@ ice_vsi_setup(struct ice_pf *pf, struct ice_port_info *pi,
 	ret = ice_cfg_vsi_lan(vsi->port_info, vsi->idx, vsi->tc_cfg.ena_tc,
 			      max_txqs);
 	if (ret) {
-		dev_info(&pf->pdev->dev, "Failed VSI lan queue config\n");
+		dev_err(&pf->pdev->dev,
+			"VSI %d failed lan queue config, error %d\n",
+			vsi->vsi_num, ret);
 		goto unroll_vector_base;
 	}
 
@@ -2869,8 +2871,9 @@ int ice_vsi_rebuild(struct ice_vsi *vsi)
 	ret = ice_cfg_vsi_lan(vsi->port_info, vsi->idx, vsi->tc_cfg.ena_tc,
 			      max_txqs);
 	if (ret) {
-		dev_info(&vsi->back->pdev->dev,
-			 "Failed VSI lan queue config\n");
+		dev_err(&pf->pdev->dev,
+			"VSI %d failed lan queue config, error %d\n",
+			vsi->vsi_num, ret);
 		goto err_vectors;
 	}
 	return 0;
-- 
2.14.5


^ permalink raw reply related	[flat|nested] 35+ messages in thread

* [Intel-wired-lan] [PATCH S17 17/17] ice: Use pf instead of vsi-back
  2019-02-28 23:25 [Intel-wired-lan] [PATCH S17 00/17] Implementation updates for ice Anirudh Venkataramanan
                   ` (15 preceding siblings ...)
  2019-02-28 23:26 ` [Intel-wired-lan] [PATCH S17 16/17] ice: Use dev_err when ice_cfg_vsi_lan fails Anirudh Venkataramanan
@ 2019-02-28 23:26 ` Anirudh Venkataramanan
  2019-03-08  0:37   ` Bowers, AndrewX
  16 siblings, 1 reply; 35+ messages in thread
From: Anirudh Venkataramanan @ 2019-02-28 23:26 UTC (permalink / raw)
  To: intel-wired-lan

From: Jesse Brandeburg <jesse.brandeburg@intel.com>

Many times in our functions we have a local variable pf, which is
equivalent to vsi->back. Just use pf consistently instead of vsi->back
where available.
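
The conversion is mechanical; a representative before/after taken from
one of the hunks below:

	/* before */
	dev_warn(&vsi->back->pdev->dev, "Unknown VSI type %d\n",
		 vsi->type);

	/* after, with struct ice_pf *pf = vsi->back already in scope */
	dev_warn(&pf->pdev->dev, "Unknown VSI type %d\n", vsi->type);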

Signed-off-by: Jesse Brandeburg <jesse.brandeburg@intel.com>
Signed-off-by: Anirudh Venkataramanan <anirudh.venkataramanan@intel.com>
---
[Anirudh Venkataramanan <anirudh.venkataramanan@intel.com> cleaned up commit message]
---
 drivers/net/ethernet/intel/ice/ice_lib.c | 60 ++++++++++++++++----------------
 1 file changed, 30 insertions(+), 30 deletions(-)

diff --git a/drivers/net/ethernet/intel/ice/ice_lib.c b/drivers/net/ethernet/intel/ice/ice_lib.c
index 49c75371af08..bda6ade755c3 100644
--- a/drivers/net/ethernet/intel/ice/ice_lib.c
+++ b/drivers/net/ethernet/intel/ice/ice_lib.c
@@ -301,7 +301,6 @@ static void ice_vsi_set_num_desc(struct ice_vsi *vsi)
 static void ice_vsi_set_num_qs(struct ice_vsi *vsi, u16 vf_id)
 {
 	struct ice_pf *pf = vsi->back;
-
 	struct ice_vf *vf = NULL;
 
 	if (vsi->type == ICE_VSI_VF)
@@ -325,8 +324,7 @@ static void ice_vsi_set_num_qs(struct ice_vsi *vsi, u16 vf_id)
 		vsi->num_q_vectors = pf->num_vf_msix - 1;
 		break;
 	default:
-		dev_warn(&vsi->back->pdev->dev, "Unknown VSI type %d\n",
-			 vsi->type);
+		dev_warn(&pf->pdev->dev, "Unknown VSI type %d\n", vsi->type);
 		break;
 	}
 
@@ -573,7 +571,7 @@ static int __ice_vsi_get_qs_contig(struct ice_qs_cfg *qs_cfg)
 
 /**
  * __ice_vsi_get_qs_sc - Assign a scattered queues from PF to VSI
- * @qs_cfg: gathered variables needed for PF->VSI queues assignment
+ * @qs_cfg: gathered variables needed for pf->vsi queues assignment
  *
  * Return 0 on success and -ENOMEM in case of no left space in PF queue bitmap
  */
@@ -917,6 +915,9 @@ static void ice_vsi_setup_q_map(struct ice_vsi *vsi, struct ice_vsi_ctx *ctxt)
 static void ice_set_rss_vsi_ctx(struct ice_vsi_ctx *ctxt, struct ice_vsi *vsi)
 {
 	u8 lut_type, hash_type;
+	struct ice_pf *pf;
+
+	pf = vsi->back;
 
 	switch (vsi->type) {
 	case ICE_VSI_PF:
@@ -930,8 +931,7 @@ static void ice_set_rss_vsi_ctx(struct ice_vsi_ctx *ctxt, struct ice_vsi *vsi)
 		hash_type = ICE_AQ_VSI_Q_OPT_RSS_TPLZ;
 		break;
 	default:
-		dev_warn(&vsi->back->pdev->dev, "Unknown VSI type %d\n",
-			 vsi->type);
+		dev_warn(&pf->pdev->dev, "Unknown VSI type %d\n", vsi->type);
 		return;
 	}
 
@@ -1018,10 +1018,11 @@ static int ice_vsi_init(struct ice_vsi *vsi)
 static void ice_free_q_vector(struct ice_vsi *vsi, int v_idx)
 {
 	struct ice_q_vector *q_vector;
+	struct ice_pf *pf = vsi->back;
 	struct ice_ring *ring;
 
 	if (!vsi->q_vectors[v_idx]) {
-		dev_dbg(&vsi->back->pdev->dev, "Queue vector at index %d not found\n",
+		dev_dbg(&pf->pdev->dev, "Queue vector at index %d not found\n",
 			v_idx);
 		return;
 	}
@@ -1036,7 +1037,7 @@ static void ice_free_q_vector(struct ice_vsi *vsi, int v_idx)
 	if (vsi->netdev)
 		netif_napi_del(&q_vector->napi);
 
-	devm_kfree(&vsi->back->pdev->dev, q_vector);
+	devm_kfree(&pf->pdev->dev, q_vector);
 	vsi->q_vectors[v_idx] = NULL;
 }
 
@@ -1188,8 +1189,7 @@ static int ice_vsi_setup_vector_base(struct ice_vsi *vsi)
 						  num_q_vectors, vsi->idx);
 		break;
 	default:
-		dev_warn(&vsi->back->pdev->dev, "Unknown VSI type %d\n",
-			 vsi->type);
+		dev_warn(&pf->pdev->dev, "Unknown VSI type %d\n", vsi->type);
 		break;
 	}
 
@@ -1198,7 +1198,7 @@ static int ice_vsi_setup_vector_base(struct ice_vsi *vsi)
 			"Failed to get tracking for %d HW vectors for VSI %d, err=%d\n",
 			num_q_vectors, vsi->vsi_num, vsi->hw_base_vector);
 		if (vsi->type != ICE_VSI_VF) {
-			ice_free_res(vsi->back->sw_irq_tracker,
+			ice_free_res(pf->sw_irq_tracker,
 				     vsi->sw_base_vector, vsi->idx);
 			pf->num_avail_sw_msix += num_q_vectors;
 		}
@@ -1409,13 +1409,13 @@ static int ice_vsi_cfg_rss_lut_key(struct ice_vsi *vsi)
 				    vsi->rss_table_size);
 
 	if (status) {
-		dev_err(&vsi->back->pdev->dev,
+		dev_err(&pf->pdev->dev,
 			"set_rss_lut failed, error %d\n", status);
 		err = -EIO;
 		goto ice_vsi_cfg_rss_exit;
 	}
 
-	key = devm_kzalloc(&vsi->back->pdev->dev, sizeof(*key), GFP_KERNEL);
+	key = devm_kzalloc(&pf->pdev->dev, sizeof(*key), GFP_KERNEL);
 	if (!key) {
 		err = -ENOMEM;
 		goto ice_vsi_cfg_rss_exit;
@@ -1432,7 +1432,7 @@ static int ice_vsi_cfg_rss_lut_key(struct ice_vsi *vsi)
 	status = ice_aq_set_rss_key(&pf->hw, vsi->idx, key);
 
 	if (status) {
-		dev_err(&vsi->back->pdev->dev, "set_rss_key failed, error %d\n",
+		dev_err(&pf->pdev->dev, "set_rss_key failed, error %d\n",
 			status);
 		err = -EIO;
 	}
@@ -1717,7 +1717,7 @@ ice_vsi_cfg_txqs(struct ice_vsi *vsi, struct ice_ring **rings, int offset)
 						 i, num_q_grps, qg_buf,
 						 buf_len, NULL);
 			if (status) {
-				dev_err(&vsi->back->pdev->dev,
+				dev_err(&pf->pdev->dev,
 					"Failed to set LAN Tx queue context, error: %d\n",
 					status);
 				err = -ENODEV;
@@ -2148,12 +2148,14 @@ int ice_cfg_vlan_pruning(struct ice_vsi *vsi, bool ena, bool vlan_promisc)
 {
 	struct ice_vsi_ctx *ctxt;
 	struct device *dev;
+	struct ice_pf *pf;
 	int status;
 
 	if (!vsi)
 		return -EINVAL;
 
-	dev = &vsi->back->pdev->dev;
+	pf = vsi->back;
+	dev = &pf->pdev->dev;
 	ctxt = devm_kzalloc(dev, sizeof(*ctxt), GFP_KERNEL);
 	if (!ctxt)
 		return -ENOMEM;
@@ -2177,11 +2179,11 @@ int ice_cfg_vlan_pruning(struct ice_vsi *vsi, bool ena, bool vlan_promisc)
 			cpu_to_le16(ICE_AQ_VSI_PROP_SECURITY_VALID |
 				    ICE_AQ_VSI_PROP_SW_VALID);
 
-	status = ice_update_vsi(&vsi->back->hw, vsi->idx, ctxt, NULL);
+	status = ice_update_vsi(&pf->hw, vsi->idx, ctxt, NULL);
 	if (status) {
 		netdev_err(vsi->netdev, "%sabling VLAN pruning on VSI handle: %d, VSI HW ID: %d failed, err = %d, aq_err = %d\n",
 			   ena ? "En" : "Dis", vsi->idx, vsi->vsi_num, status,
-			   vsi->back->hw.adminq.sq_last_status);
+			   pf->hw.adminq.sq_last_status);
 		goto err_out;
 	}
 
@@ -2378,10 +2380,10 @@ ice_vsi_setup(struct ice_pf *pf, struct ice_port_info *pi,
 
 unroll_vector_base:
 	/* reclaim SW interrupts back to the common pool */
-	ice_free_res(vsi->back->sw_irq_tracker, vsi->sw_base_vector, vsi->idx);
+	ice_free_res(pf->sw_irq_tracker, vsi->sw_base_vector, vsi->idx);
 	pf->num_avail_sw_msix += vsi->num_q_vectors;
 	/* reclaim HW interrupt back to the common pool */
-	ice_free_res(vsi->back->hw_irq_tracker, vsi->hw_base_vector, vsi->idx);
+	ice_free_res(pf->hw_irq_tracker, vsi->hw_base_vector, vsi->idx);
 	pf->num_avail_hw_msix += vsi->num_q_vectors;
 unroll_alloc_q_vector:
 	ice_vsi_free_q_vectors(vsi);
@@ -2718,18 +2720,16 @@ int ice_vsi_release(struct ice_vsi *vsi)
 	/* reclaim interrupt vectors back to PF */
 	if (vsi->type != ICE_VSI_VF) {
 		/* reclaim SW interrupts back to the common pool */
-		ice_free_res(vsi->back->sw_irq_tracker, vsi->sw_base_vector,
-			     vsi->idx);
+		ice_free_res(pf->sw_irq_tracker, vsi->sw_base_vector, vsi->idx);
 		pf->num_avail_sw_msix += vsi->num_q_vectors;
 		/* reclaim HW interrupts back to the common pool */
-		ice_free_res(vsi->back->hw_irq_tracker, vsi->hw_base_vector,
-			     vsi->idx);
+		ice_free_res(pf->hw_irq_tracker, vsi->hw_base_vector, vsi->idx);
 		pf->num_avail_hw_msix += vsi->num_q_vectors;
 	} else if (test_bit(ICE_VF_STATE_CFG_INTR, vf->vf_states)) {
 		/* Reclaim VF resources back only while freeing all VFs or
 		 * vector reassignment is requested
 		 */
-		ice_free_res(vsi->back->hw_irq_tracker, vf->first_vector_idx,
+		ice_free_res(pf->hw_irq_tracker, vf->first_vector_idx,
 			     vsi->idx);
 		pf->num_avail_hw_msix += pf->num_vf_msix;
 	}
@@ -2798,7 +2798,7 @@ int ice_vsi_rebuild(struct ice_vsi *vsi)
 
 	ice_vsi_clear_rings(vsi);
 	ice_vsi_free_arrays(vsi, false);
-	ice_dev_onetime_setup(&vsi->back->hw);
+	ice_dev_onetime_setup(&pf->hw);
 	if (vsi->type == ICE_VSI_VF)
 		ice_vsi_set_num_qs(vsi, vf->vf_id);
 	else
@@ -2837,7 +2837,7 @@ int ice_vsi_rebuild(struct ice_vsi *vsi)
 		 * receive traffic on first queue. Hence no need to capture
 		 * return value
 		 */
-		if (test_bit(ICE_FLAG_RSS_ENA, vsi->back->flags))
+		if (test_bit(ICE_FLAG_RSS_ENA, pf->flags))
 			ice_vsi_cfg_rss_lut_key(vsi);
 		break;
 	case ICE_VSI_VF:
@@ -2857,8 +2857,8 @@ int ice_vsi_rebuild(struct ice_vsi *vsi)
 		if (ret)
 			goto err_vectors;
 
-		vsi->back->q_left_tx -= vsi->alloc_txq;
-		vsi->back->q_left_rx -= vsi->alloc_rxq;
+		pf->q_left_tx -= vsi->alloc_txq;
+		pf->q_left_rx -= vsi->alloc_rxq;
 		break;
 	default:
 		break;
@@ -2889,7 +2889,7 @@ int ice_vsi_rebuild(struct ice_vsi *vsi)
 	}
 err_vsi:
 	ice_vsi_clear(vsi);
-	set_bit(__ICE_RESET_FAILED, vsi->back->state);
+	set_bit(__ICE_RESET_FAILED, pf->state);
 	return ret;
 }
 
-- 
2.14.5


^ permalink raw reply related	[flat|nested] 35+ messages in thread

* [Intel-wired-lan] [PATCH S17 01/17] ice: Calculate ITR increment based on direct calculation
  2019-02-28 23:25 ` [Intel-wired-lan] [PATCH S17 01/17] ice: Calculate ITR increment based on direct calculation Anirudh Venkataramanan
@ 2019-03-08  0:26   ` Bowers, AndrewX
  0 siblings, 0 replies; 35+ messages in thread
From: Bowers, AndrewX @ 2019-03-08  0:26 UTC (permalink / raw)
  To: intel-wired-lan

> -----Original Message-----
> From: Intel-wired-lan [mailto:intel-wired-lan-bounces at osuosl.org] On
> Behalf Of Anirudh Venkataramanan
> Sent: Thursday, February 28, 2019 3:26 PM
> To: intel-wired-lan at lists.osuosl.org
> Subject: [Intel-wired-lan] [PATCH S17 01/17] ice: Calculate ITR increment
> based on direct calculation
> 
> From: Brett Creeley <brett.creeley@intel.com>
> 
> Currently when calculating how much to increment ITR by inside of
> ice_update_itr() we do some estimations and intermediate calculations.
> Instead of doing estimations, just do the calculation directly. This allows for a
> more accurate value and it makes it easier for the next person to understand
> and update.
> 
> Also, remove the dividing the ITR value by 2 when latency driven because the
> ITR values are already so low for 100Gbps speed. This should help get to the
> desired ITR value faster.
> 
> Signed-off-by: Brett Creeley <brett.creeley@intel.com>
> Signed-off-by: Anirudh Venkataramanan
> <anirudh.venkataramanan@intel.com>
> ---
>  drivers/net/ethernet/intel/ice/ice_txrx.c | 135 ++++++++++++++-------------
> ---
>  1 file changed, 63 insertions(+), 72 deletions(-)

Tested-by: Andrew Bowers <andrewx.bowers@intel.com>



^ permalink raw reply	[flat|nested] 35+ messages in thread

* [Intel-wired-lan] [PATCH S17 02/17] ice: Create framework for VSI queue context
  2019-02-28 23:25 ` [Intel-wired-lan] [PATCH S17 02/17] ice: Create framework for VSI queue context Anirudh Venkataramanan
@ 2019-03-08  0:26   ` Bowers, AndrewX
  0 siblings, 0 replies; 35+ messages in thread
From: Bowers, AndrewX @ 2019-03-08  0:26 UTC (permalink / raw)
  To: intel-wired-lan

> -----Original Message-----
> From: Intel-wired-lan [mailto:intel-wired-lan-bounces at osuosl.org] On
> Behalf Of Anirudh Venkataramanan
> Sent: Thursday, February 28, 2019 3:26 PM
> To: intel-wired-lan at lists.osuosl.org
> Subject: [Intel-wired-lan] [PATCH S17 02/17] ice: Create framework for VSI
> queue context
> 
> This patch introduces a framework to store queue specific information in VSI
> queue contexts. Currently VSI queue context (represented by struct
> ice_q_ctx) only has q_handle as a member. In future patches, this structure
> will be updated to hold queue specific information.
> 
> Signed-off-by: Anirudh Venkataramanan
> <anirudh.venkataramanan@intel.com>
> ---
>  drivers/net/ethernet/intel/ice/ice_common.c      | 62 +++++++++++++--
>  drivers/net/ethernet/intel/ice/ice_common.h      | 11 +--
>  drivers/net/ethernet/intel/ice/ice_lib.c         | 99 ++++++++++++++----------
>  drivers/net/ethernet/intel/ice/ice_sched.c       | 54 +++++++++++--
>  drivers/net/ethernet/intel/ice/ice_switch.c      | 22 ++++++
>  drivers/net/ethernet/intel/ice/ice_switch.h      |  9 +++
>  drivers/net/ethernet/intel/ice/ice_virtchnl_pf.c |  4 +-
>  7 files changed, 205 insertions(+), 56 deletions(-)

Tested-by: Andrew Bowers <andrewx.bowers@intel.com>



^ permalink raw reply	[flat|nested] 35+ messages in thread

* [Intel-wired-lan] [PATCH S17 03/17] ice: Return configuration error without queue to disable
  2019-02-28 23:25 ` [Intel-wired-lan] [PATCH S17 03/17] ice: Return configuration error without queue to disable Anirudh Venkataramanan
@ 2019-03-08  0:27   ` Bowers, AndrewX
  0 siblings, 0 replies; 35+ messages in thread
From: Bowers, AndrewX @ 2019-03-08  0:27 UTC (permalink / raw)
  To: intel-wired-lan

> -----Original Message-----
> From: Intel-wired-lan [mailto:intel-wired-lan-bounces at osuosl.org] On
> Behalf Of Anirudh Venkataramanan
> Sent: Thursday, February 28, 2019 3:26 PM
> To: intel-wired-lan at lists.osuosl.org
> Subject: [Intel-wired-lan] [PATCH S17 03/17] ice: Return configuration error
> without queue to disable
> 
> From: Akeem G Abodunrin <akeem.g.abodunrin@intel.com>
> 
> If there is no queue to disable, return appropriate configuration error earlier
> without acquiring the lock.
> 
> Signed-off-by: Akeem G Abodunrin <akeem.g.abodunrin@intel.com>
> Signed-off-by: Anirudh Venkataramanan
> <anirudh.venkataramanan@intel.com>
> ---
>  drivers/net/ethernet/intel/ice/ice_common.c | 17 ++++++++++-------
>  1 file changed, 10 insertions(+), 7 deletions(-)

Tested-by: Andrew Bowers <andrewx.bowers@intel.com>



^ permalink raw reply	[flat|nested] 35+ messages in thread

* [Intel-wired-lan] [PATCH S17 04/17] ice: Resolve static analysis reported issue
  2019-02-28 23:25 ` [Intel-wired-lan] [PATCH S17 04/17] ice: Resolve static analysis reported issue Anirudh Venkataramanan
@ 2019-03-08  0:27   ` Bowers, AndrewX
  0 siblings, 0 replies; 35+ messages in thread
From: Bowers, AndrewX @ 2019-03-08  0:27 UTC (permalink / raw)
  To: intel-wired-lan

> -----Original Message-----
> From: Intel-wired-lan [mailto:intel-wired-lan-bounces at osuosl.org] On
> Behalf Of Anirudh Venkataramanan
> Sent: Thursday, February 28, 2019 3:26 PM
> To: intel-wired-lan at lists.osuosl.org
> Subject: [Intel-wired-lan] [PATCH S17 04/17] ice: Resolve static analysis
> reported issue
> 
> From: Bruce Allan <bruce.w.allan@intel.com>
> 
> Static analysis points out the default case in the switch statement in
> ice_get_itr_intrl_gran() is an infeasible condition causing the default case
> statement to be unreachable.  Remove it and since the function no longer
> returns anything but success, change it to just return void and update the
> only call to it accordingly.
> 
> Signed-off-by: Bruce Allan <bruce.w.allan@intel.com>
> Signed-off-by: Anirudh Venkataramanan
> <anirudh.venkataramanan@intel.com>
> ---
> [Anirudh Venkataramanan <anirudh.venkataramanan@intel.com> cleaned
> up commit message]
> ---
>  drivers/net/ethernet/intel/ice/ice_common.c | 12 ++----------
>  1 file changed, 2 insertions(+), 10 deletions(-)

Tested-by: Andrew Bowers <andrewx.bowers@intel.com>



^ permalink raw reply	[flat|nested] 35+ messages in thread

* [Intel-wired-lan] [PATCH S17 05/17] ice: Reduce scope of variable in ice_vsi_cfg_rxqs
  2019-02-28 23:25 ` [Intel-wired-lan] [PATCH S17 05/17] ice: Reduce scope of variable in ice_vsi_cfg_rxqs Anirudh Venkataramanan
@ 2019-03-08  0:28   ` Bowers, AndrewX
  0 siblings, 0 replies; 35+ messages in thread
From: Bowers, AndrewX @ 2019-03-08  0:28 UTC (permalink / raw)
  To: intel-wired-lan

> -----Original Message-----
> From: Intel-wired-lan [mailto:intel-wired-lan-bounces at osuosl.org] On
> Behalf Of Anirudh Venkataramanan
> Sent: Thursday, February 28, 2019 3:26 PM
> To: intel-wired-lan at lists.osuosl.org
> Subject: [Intel-wired-lan] [PATCH S17 05/17] ice: Reduce scope of variable in
> ice_vsi_cfg_rxqs
> 
> From: Brett Creeley <brett.creeley@intel.com>
> 
> Reduce scope of the variable 'err' to inside the for loop instead of using it as a
> second looping conditional. Also while here, improve the debug message if
> we fail to configure a Rx queue.
> 
> Signed-off-by: Brett Creeley <brett.creeley@intel.com>
> Signed-off-by: Anirudh Venkataramanan
> <anirudh.venkataramanan@intel.com>
> ---
>  drivers/net/ethernet/intel/ice/ice_lib.c | 18 +++++++++++-------
>  1 file changed, 11 insertions(+), 7 deletions(-)

Tested-by: Andrew Bowers <andrewx.bowers@intel.com>



^ permalink raw reply	[flat|nested] 35+ messages in thread

* [Intel-wired-lan] [PATCH S17 06/17] ice: Validate ring existence and its q_vector per VSI
  2019-02-28 23:25 ` [Intel-wired-lan] [PATCH S17 06/17] ice: Validate ring existence and its q_vector per VSI Anirudh Venkataramanan
@ 2019-03-08  0:28   ` Bowers, AndrewX
  0 siblings, 0 replies; 35+ messages in thread
From: Bowers, AndrewX @ 2019-03-08  0:28 UTC (permalink / raw)
  To: intel-wired-lan

> -----Original Message-----
> From: Intel-wired-lan [mailto:intel-wired-lan-bounces at osuosl.org] On
> Behalf Of Anirudh Venkataramanan
> Sent: Thursday, February 28, 2019 3:26 PM
> To: intel-wired-lan at lists.osuosl.org
> Subject: [Intel-wired-lan] [PATCH S17 06/17] ice: Validate ring existence and
> its q_vector per VSI
> 
> From: Maciej Fijalkowski <maciej.fijalkowski@intel.com>
> 
> When stopping Tx rings, we use 'i' as an ring array index for looking up
> whether the ice_ring exists and have assigned a q_vector. This checks rings
> only within a given TC and we need to go through every ring in VSI. Use
> 'q_idx' instead.
> 
> Signed-off-by: Maciej Fijalkowski <maciej.fijalkowski@intel.com>
> Signed-off-by: Anirudh Venkataramanan
> <anirudh.venkataramanan@intel.com>
> ---
> [Anirudh Venkataramanan <anirudh.venkataramanan@intel.com> cleaned
> up commit message]
> ---
>  drivers/net/ethernet/intel/ice/ice_lib.c | 3 ++-
>  1 file changed, 2 insertions(+), 1 deletion(-)

Tested-by: Andrew Bowers <andrewx.bowers@intel.com>



^ permalink raw reply	[flat|nested] 35+ messages in thread

* [Intel-wired-lan] [PATCH S17 07/17] ice: Use ice_for_each_q_vector macro where possible
  2019-02-28 23:25 ` [Intel-wired-lan] [PATCH S17 07/17] ice: Use ice_for_each_q_vector macro where possible Anirudh Venkataramanan
@ 2019-03-08  0:29   ` Bowers, AndrewX
  0 siblings, 0 replies; 35+ messages in thread
From: Bowers, AndrewX @ 2019-03-08  0:29 UTC (permalink / raw)
  To: intel-wired-lan

> -----Original Message-----
> From: Intel-wired-lan [mailto:intel-wired-lan-bounces at osuosl.org] On
> Behalf Of Anirudh Venkataramanan
> Sent: Thursday, February 28, 2019 3:26 PM
> To: intel-wired-lan at lists.osuosl.org
> Subject: [Intel-wired-lan] [PATCH S17 07/17] ice: Use ice_for_each_q_vector
> macro where possible
> 
> From: Brett Creeley <brett.creeley@intel.com>
> 
> There are many places in the code where we do the following:
> 
> for (i = 0; i < vsi->num_q_vectors; i++)
> 
> Instead use the macro mentioned in the commit title:
> 
> ice_for_each_q_vector(vsi, i)
> 
> Signed-off-by: Brett Creeley <brett.creeley@intel.com>
> Signed-off-by: Anirudh Venkataramanan
> <anirudh.venkataramanan@intel.com>
> ---
>  drivers/net/ethernet/intel/ice/ice_lib.c  |  6 +++---
> drivers/net/ethernet/intel/ice/ice_main.c | 10 +++++-----
>  2 files changed, 8 insertions(+), 8 deletions(-)

Tested-by: Andrew Bowers <andrewx.bowers@intel.com>



^ permalink raw reply	[flat|nested] 35+ messages in thread

* [Intel-wired-lan] [PATCH S17 08/17] ice: Add 52 byte RSS hash key support
  2019-02-28 23:25 ` [Intel-wired-lan] [PATCH S17 08/17] ice: Add 52 byte RSS hash key support Anirudh Venkataramanan
@ 2019-03-08  0:29   ` Bowers, AndrewX
  0 siblings, 0 replies; 35+ messages in thread
From: Bowers, AndrewX @ 2019-03-08  0:29 UTC (permalink / raw)
  To: intel-wired-lan

> -----Original Message-----
> From: Intel-wired-lan [mailto:intel-wired-lan-bounces at osuosl.org] On
> Behalf Of Anirudh Venkataramanan
> Sent: Thursday, February 28, 2019 3:26 PM
> To: intel-wired-lan at lists.osuosl.org
> Subject: [Intel-wired-lan] [PATCH S17 08/17] ice: Add 52 byte RSS hash key
> support
> 
> From: Paul Greenwalt <paul.greenwalt@intel.com>
> 
> Add support to set 52 byte RSS hash key.
> 
> Signed-off-by: Paul Greenwalt <paul.greenwalt@intel.com>
> Signed-off-by: Anirudh Venkataramanan
> <anirudh.venkataramanan@intel.com>
> ---
>  drivers/net/ethernet/intel/ice/ice_adminq_cmd.h |  3 +++
>  drivers/net/ethernet/intel/ice/ice_lib.c        | 12 +++++-------
>  2 files changed, 8 insertions(+), 7 deletions(-)

Tested-by: Andrew Bowers <andrewx.bowers@intel.com>



^ permalink raw reply	[flat|nested] 35+ messages in thread

* [Intel-wired-lan] [PATCH S17 09/17] ice: Add ability to update rx-usecs-high
  2019-02-28 23:25 ` [Intel-wired-lan] [PATCH S17 09/17] ice: Add ability to update rx-usecs-high Anirudh Venkataramanan
@ 2019-03-08  0:33   ` Bowers, AndrewX
  0 siblings, 0 replies; 35+ messages in thread
From: Bowers, AndrewX @ 2019-03-08  0:33 UTC (permalink / raw)
  To: intel-wired-lan

> -----Original Message-----
> From: Intel-wired-lan [mailto:intel-wired-lan-bounces at osuosl.org] On
> Behalf Of Anirudh Venkataramanan
> Sent: Thursday, February 28, 2019 3:26 PM
> To: intel-wired-lan at lists.osuosl.org
> Subject: [Intel-wired-lan] [PATCH S17 09/17] ice: Add ability to update rx-
> usecs-high
> 
> From: Brett Creeley <brett.creeley@intel.com>
> 
> Currently the driver allows rx-usecs-high values to be set, but when querying
> the device for rx-usecs-high the value does not stick. This is because it was
> not yet implemented.
> Add code to allow the user to change rx-usecs-high and use this to set the
> q_vector's intrl value.
> 
> Signed-off-by: Brett Creeley <brett.creeley@intel.com>
> Signed-off-by: Anirudh Venkataramanan
> <anirudh.venkataramanan@intel.com>
> ---
>  drivers/net/ethernet/intel/ice/ice_ethtool.c | 31
> +++++++++++++++++++++++++++-
>  drivers/net/ethernet/intel/ice/ice_lib.c     |  2 +-
>  drivers/net/ethernet/intel/ice/ice_lib.h     |  1 +
>  drivers/net/ethernet/intel/ice/ice_txrx.h    |  1 +
>  4 files changed, 33 insertions(+), 2 deletions(-)

Tested-by: Andrew Bowers <andrewx.bowers@intel.com>



^ permalink raw reply	[flat|nested] 35+ messages in thread

* [Intel-wired-lan] [PATCH S17 10/17] ice: Remove unnecessary wait when disabling/enabling Rx queues
  2019-02-28 23:25 ` [Intel-wired-lan] [PATCH S17 10/17] ice: Remove unnecessary wait when disabling/enabling Rx queues Anirudh Venkataramanan
@ 2019-03-08  0:34   ` Bowers, AndrewX
  0 siblings, 0 replies; 35+ messages in thread
From: Bowers, AndrewX @ 2019-03-08  0:34 UTC (permalink / raw)
  To: intel-wired-lan

> -----Original Message-----
> From: Intel-wired-lan [mailto:intel-wired-lan-bounces at osuosl.org] On
> Behalf Of Anirudh Venkataramanan
> Sent: Thursday, February 28, 2019 3:26 PM
> To: intel-wired-lan at lists.osuosl.org
> Subject: [Intel-wired-lan] [PATCH S17 10/17] ice: Remove unnecessary wait
> when disabling/enabling Rx queues
> 
> From: Brett Creeley <brett.creeley@intel.com>
> 
> In ice_vsi_ctrl_rx_rings() we are unnecessarily waiting for
> QRX_CTRL_QENA_REQ and QRX_CTRL_QENA_STAT to be the same value
> prior to disabling each Rx queue. There is no reason to do this so remove this
> wait loop as we already have a wait loop after disabling/enabling the Rx
> queue through the QRX_CTRL register to make sure it gets successfully
> disabled/enabled.
> 
> Signed-off-by: Brett Creeley <brett.creeley@intel.com>
> Signed-off-by: Anirudh Venkataramanan
> <anirudh.venkataramanan@intel.com>
> ---
>  drivers/net/ethernet/intel/ice/ice_lib.c | 10 ++--------
>  1 file changed, 2 insertions(+), 8 deletions(-)

Tested-by: Andrew Bowers <andrewx.bowers@intel.com>



^ permalink raw reply	[flat|nested] 35+ messages in thread

* [Intel-wired-lan] [PATCH S17 11/17] ice: Fix issue when adding more than allowed VLANs
  2019-02-28 23:25 ` [Intel-wired-lan] [PATCH S17 11/17] ice: Fix issue when adding more than allowed VLANs Anirudh Venkataramanan
@ 2019-03-08  0:34   ` Bowers, AndrewX
  0 siblings, 0 replies; 35+ messages in thread
From: Bowers, AndrewX @ 2019-03-08  0:34 UTC (permalink / raw)
  To: intel-wired-lan

> -----Original Message-----
> From: Intel-wired-lan [mailto:intel-wired-lan-bounces at osuosl.org] On
> Behalf Of Anirudh Venkataramanan
> Sent: Thursday, February 28, 2019 3:26 PM
> To: intel-wired-lan at lists.osuosl.org
> Subject: [Intel-wired-lan] [PATCH S17 11/17] ice: Fix issue when adding more
> than allowed VLANs
> 
> From: Akeem G Abodunrin <akeem.g.abodunrin@intel.com>
> 
> This patch fixes issue with non trusted VFs being able to add more than
> permitted number of VLANs by adding a check in ice_vc_process_vlan_msg.
> Also don't return an error in this case as the VF does not need to know that it
> is not trusted.
> 
> Also rework ice_vsi_kill_vlan to use the right types.
> 
> Signed-off-by: Akeem G Abodunrin <akeem.g.abodunrin@intel.com>
> Signed-off-by: Anirudh Venkataramanan
> <anirudh.venkataramanan@intel.com>
> ---
> [Anirudh Venkataramanan <anirudh.venkataramanan@intel.com> cleaned
> up commit message]
> ---
>  drivers/net/ethernet/intel/ice/ice_lib.c         | 15 +++++++++------
>  drivers/net/ethernet/intel/ice/ice_virtchnl_pf.c | 13 ++++++++++++-
>  2 files changed, 21 insertions(+), 7 deletions(-)

Tested-by: Andrew Bowers <andrewx.bowers@intel.com>



^ permalink raw reply	[flat|nested] 35+ messages in thread

* [Intel-wired-lan] [PATCH S17 12/17] ice: Remove runtime change of PFINT_OICR_ENA register
  2019-02-28 23:25 ` [Intel-wired-lan] [PATCH S17 12/17] ice: Remove runtime change of PFINT_OICR_ENA register Anirudh Venkataramanan
@ 2019-03-08  0:35   ` Bowers, AndrewX
  0 siblings, 0 replies; 35+ messages in thread
From: Bowers, AndrewX @ 2019-03-08  0:35 UTC (permalink / raw)
  To: intel-wired-lan

> -----Original Message-----
> From: Intel-wired-lan [mailto:intel-wired-lan-bounces at osuosl.org] On
> Behalf Of Anirudh Venkataramanan
> Sent: Thursday, February 28, 2019 3:26 PM
> To: intel-wired-lan at lists.osuosl.org
> Subject: [Intel-wired-lan] [PATCH S17 12/17] ice: Remove runtime change of
> PFINT_OICR_ENA register
> 
> From: Md Fahad Iqbal Polash <md.fahad.iqbal.polash@intel.com>
> 
> Runtime change of PFINT_OICR_ENA register is unnecessary.
> The handlers should always clear the atomic bit for each task as they start,
> because it will make sure that any late interrupt will either 1) re-set the bit, or
> 2) be handled directly in the "already running" task handler.
> 
> Signed-off-by: Md Fahad Iqbal Polash <md.fahad.iqbal.polash@intel.com>
> Signed-off-by: Anirudh Venkataramanan
> <anirudh.venkataramanan@intel.com>
> ---
>  drivers/net/ethernet/intel/ice/ice_main.c        | 13 ++-----------
>  drivers/net/ethernet/intel/ice/ice_virtchnl_pf.c | 13 +------------
>  2 files changed, 3 insertions(+), 23 deletions(-)

Tested-by: Andrew Bowers <andrewx.bowers@intel.com>



^ permalink raw reply	[flat|nested] 35+ messages in thread

* [Intel-wired-lan] [PATCH S17 13/17] ice: Add reg_idx variable in ice_q_vector structure
  2019-02-28 23:25 ` [Intel-wired-lan] [PATCH S17 13/17] ice: Add reg_idx variable in ice_q_vector structure Anirudh Venkataramanan
@ 2019-03-08  0:35   ` Bowers, AndrewX
  0 siblings, 0 replies; 35+ messages in thread
From: Bowers, AndrewX @ 2019-03-08  0:35 UTC (permalink / raw)
  To: intel-wired-lan

> -----Original Message-----
> From: Intel-wired-lan [mailto:intel-wired-lan-bounces at osuosl.org] On
> Behalf Of Anirudh Venkataramanan
> Sent: Thursday, February 28, 2019 3:26 PM
> To: intel-wired-lan at lists.osuosl.org
> Subject: [Intel-wired-lan] [PATCH S17 13/17] ice: Add reg_idx variable in
> ice_q_vector structure
> 
> From: Brett Creeley <brett.creeley@intel.com>
> 
> Every time we want to re-enable interrupts and/or write to a register that
> requires an interrupt vector's hardware index we do the following:
> 
> vsi->hw_base_vector + q_vector->v_idx
> 
> This is a wasteful operation, especially in the hot path. Fix this by adding a u16
> reg_idx member to the ice_q_vector structure and make the necessary
> changes to make this work.
> 
> Signed-off-by: Brett Creeley <brett.creeley@intel.com>
> Signed-off-by: Anirudh Venkataramanan
> <anirudh.venkataramanan@intel.com>
> ---
>  drivers/net/ethernet/intel/ice/ice.h      |  3 +-
>  drivers/net/ethernet/intel/ice/ice_lib.c  | 84
> ++++++++++++++++++++++++-------
> drivers/net/ethernet/intel/ice/ice_main.c | 13 +++--
> drivers/net/ethernet/intel/ice/ice_txrx.c |  2 +-
>  4 files changed, 76 insertions(+), 26 deletions(-)

Tested-by: Andrew Bowers <andrewx.bowers@intel.com>



^ permalink raw reply	[flat|nested] 35+ messages in thread

* [Intel-wired-lan] [PATCH S17 14/17] ice: Add missing PHY type to link settings
  2019-02-28 23:26 ` [Intel-wired-lan] [PATCH S17 14/17] ice: Add missing PHY type to link settings Anirudh Venkataramanan
@ 2019-03-08  0:36   ` Bowers, AndrewX
  0 siblings, 0 replies; 35+ messages in thread
From: Bowers, AndrewX @ 2019-03-08  0:36 UTC (permalink / raw)
  To: intel-wired-lan

> -----Original Message-----
> From: Intel-wired-lan [mailto:intel-wired-lan-bounces at osuosl.org] On
> Behalf Of Anirudh Venkataramanan
> Sent: Thursday, February 28, 2019 3:26 PM
> To: intel-wired-lan at lists.osuosl.org
> Subject: [Intel-wired-lan] [PATCH S17 14/17] ice: Add missing PHY type to link
> settings
> 
> From: Tony Nguyen <anthony.l.nguyen@intel.com>
> 
> The PHY type ICE_PHY_TYPE_LOW_25G_AUI_C2C is missing from
> ice_get_settings_link_up() which is causing a warning message for
> unrecognized PHY.  Add the PHY type to correctly set the settings and avoid
> the warning message.
> 
> Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
> Signed-off-by: Anirudh Venkataramanan
> <anirudh.venkataramanan@intel.com>
> ---
>  drivers/net/ethernet/intel/ice/ice_ethtool.c | 1 +
>  1 file changed, 1 insertion(+)

Tested-by: Andrew Bowers <andrewx.bowers@intel.com>



^ permalink raw reply	[flat|nested] 35+ messages in thread

* [Intel-wired-lan] [PATCH S17 15/17] ice: Refactor link event flow
  2019-02-28 23:26 ` [Intel-wired-lan] [PATCH S17 15/17] ice: Refactor link event flow Anirudh Venkataramanan
@ 2019-03-08  0:36   ` Bowers, AndrewX
  0 siblings, 0 replies; 35+ messages in thread
From: Bowers, AndrewX @ 2019-03-08  0:36 UTC (permalink / raw)
  To: intel-wired-lan

> -----Original Message-----
> From: Intel-wired-lan [mailto:intel-wired-lan-bounces at osuosl.org] On
> Behalf Of Anirudh Venkataramanan
> Sent: Thursday, February 28, 2019 3:26 PM
> To: intel-wired-lan at lists.osuosl.org
> Subject: [Intel-wired-lan] [PATCH S17 15/17] ice: Refactor link event flow
> 
> From: Brett Creeley <brett.creeley@intel.com>
> 
> Currently the link event flow works, but it can be much better.
> Refactor the link event flow to make it cleaner and clearer about what is
> going on.
> 
> Signed-off-by: Brett Creeley <brett.creeley@intel.com>
> Signed-off-by: Anirudh Venkataramanan
> <anirudh.venkataramanan@intel.com>
> ---
> [Anirudh Venkataramanan <anirudh.venkataramanan@intel.com> cleaned
> up commit message]
> ---
>  drivers/net/ethernet/intel/ice/ice.h      | 20 +++++++
>  drivers/net/ethernet/intel/ice/ice_main.c | 93 +++++++++++++++------------
> ----
>  2 files changed, 65 insertions(+), 48 deletions(-)

Tested-by: Andrew Bowers <andrewx.bowers@intel.com>



^ permalink raw reply	[flat|nested] 35+ messages in thread

* [Intel-wired-lan] [PATCH S17 16/17] ice: Use dev_err when ice_cfg_vsi_lan fails
  2019-02-28 23:26 ` [Intel-wired-lan] [PATCH S17 16/17] ice: Use dev_err when ice_cfg_vsi_lan fails Anirudh Venkataramanan
@ 2019-03-08  0:36   ` Bowers, AndrewX
  0 siblings, 0 replies; 35+ messages in thread
From: Bowers, AndrewX @ 2019-03-08  0:36 UTC (permalink / raw)
  To: intel-wired-lan

> -----Original Message-----
> From: Intel-wired-lan [mailto:intel-wired-lan-bounces at osuosl.org] On
> Behalf Of Anirudh Venkataramanan
> Sent: Thursday, February 28, 2019 3:26 PM
> To: intel-wired-lan at lists.osuosl.org
> Subject: [Intel-wired-lan] [PATCH S17 16/17] ice: Use dev_err when
> ice_cfg_vsi_lan fails
> 
> From: Brett Creeley <brett.creeley@intel.com>
> 
> dev_err makes more sense than dev_info when this call fails.
> 
> Signed-off-by: Brett Creeley <brett.creeley@intel.com>
> Signed-off-by: Anirudh Venkataramanan
> <anirudh.venkataramanan@intel.com>
> ---
>  drivers/net/ethernet/intel/ice/ice_lib.c | 9 ++++++---
>  1 file changed, 6 insertions(+), 3 deletions(-)
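
For context, the pattern this changes looks roughly like the following
(function, message text and error label here are sketches, not the exact
hunk):

  status = ice_cfg_vsi_lan(vsi->port_info, vsi->idx, vsi->tc_cfg.ena_tc,
                           max_txqs);
  if (status) {
          /* a failed LAN queue config is an error, not informational */
          dev_err(&pf->pdev->dev,
                  "VSI %d failed lan queue config, error %d\n",
                  vsi->vsi_num, status);
          goto err_cfg;
  }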

Tested-by: Andrew Bowers <andrewx.bowers@intel.com>



^ permalink raw reply	[flat|nested] 35+ messages in thread

* [Intel-wired-lan] [PATCH S17 17/17] ice: Use pf instead of vsi-back
  2019-02-28 23:26 ` [Intel-wired-lan] [PATCH S17 17/17] ice: Use pf instead of vsi-back Anirudh Venkataramanan
@ 2019-03-08  0:37   ` Bowers, AndrewX
  0 siblings, 0 replies; 35+ messages in thread
From: Bowers, AndrewX @ 2019-03-08  0:37 UTC (permalink / raw)
  To: intel-wired-lan

> -----Original Message-----
> From: Intel-wired-lan [mailto:intel-wired-lan-bounces at osuosl.org] On
> Behalf Of Anirudh Venkataramanan
> Sent: Thursday, February 28, 2019 3:26 PM
> To: intel-wired-lan at lists.osuosl.org
> Subject: [Intel-wired-lan] [PATCH S17 17/17] ice: Use pf instead of vsi-back
> 
> From: Jesse Brandeburg <jesse.brandeburg@intel.com>
> 
> Many of our functions already have a local variable pf, which is equivalent
> to vsi->back. Just use pf consistently instead of vsi->back where it is
> available.
> 
> Signed-off-by: Jesse Brandeburg <jesse.brandeburg@intel.com>
> Signed-off-by: Anirudh Venkataramanan
> <anirudh.venkataramanan@intel.com>
> ---
> [Anirudh Venkataramanan <anirudh.venkataramanan@intel.com> cleaned
> up commit message]
> ---
>  drivers/net/ethernet/intel/ice/ice_lib.c | 60 ++++++++++++++++-------------
> ---
>  1 file changed, 30 insertions(+), 30 deletions(-)
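
As a minimal illustration of the cleanup (the dev_dbg() call is made up;
only the pf vs. vsi->back pattern is the point):

  struct ice_pf *pf = vsi->back;

  /* before */
  dev_dbg(&vsi->back->pdev->dev, "VSI %d ...\n", vsi->vsi_num);

  /* after */
  dev_dbg(&pf->pdev->dev, "VSI %d ...\n", vsi->vsi_num);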

Tested-by: Andrew Bowers <andrewx.bowers@intel.com>



^ permalink raw reply	[flat|nested] 35+ messages in thread

end of thread, other threads:[~2019-03-08  0:37 UTC | newest]

Thread overview: 35+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2019-02-28 23:25 [Intel-wired-lan] [PATCH S17 00/17] Implementation updates for ice Anirudh Venkataramanan
2019-02-28 23:25 ` [Intel-wired-lan] [PATCH S17 01/17] ice: Calculate ITR increment based on direct calculation Anirudh Venkataramanan
2019-03-08  0:26   ` Bowers, AndrewX
2019-02-28 23:25 ` [Intel-wired-lan] [PATCH S17 02/17] ice: Create framework for VSI queue context Anirudh Venkataramanan
2019-03-08  0:26   ` Bowers, AndrewX
2019-02-28 23:25 ` [Intel-wired-lan] [PATCH S17 03/17] ice: Return configuration error without queue to disable Anirudh Venkataramanan
2019-03-08  0:27   ` Bowers, AndrewX
2019-02-28 23:25 ` [Intel-wired-lan] [PATCH S17 04/17] ice: Resolve static analysis reported issue Anirudh Venkataramanan
2019-03-08  0:27   ` Bowers, AndrewX
2019-02-28 23:25 ` [Intel-wired-lan] [PATCH S17 05/17] ice: Reduce scope of variable in ice_vsi_cfg_rxqs Anirudh Venkataramanan
2019-03-08  0:28   ` Bowers, AndrewX
2019-02-28 23:25 ` [Intel-wired-lan] [PATCH S17 06/17] ice: Validate ring existence and its q_vector per VSI Anirudh Venkataramanan
2019-03-08  0:28   ` Bowers, AndrewX
2019-02-28 23:25 ` [Intel-wired-lan] [PATCH S17 07/17] ice: Use ice_for_each_q_vector macro where possible Anirudh Venkataramanan
2019-03-08  0:29   ` Bowers, AndrewX
2019-02-28 23:25 ` [Intel-wired-lan] [PATCH S17 08/17] ice: Add 52 byte RSS hash key support Anirudh Venkataramanan
2019-03-08  0:29   ` Bowers, AndrewX
2019-02-28 23:25 ` [Intel-wired-lan] [PATCH S17 09/17] ice: Add ability to update rx-usecs-high Anirudh Venkataramanan
2019-03-08  0:33   ` Bowers, AndrewX
2019-02-28 23:25 ` [Intel-wired-lan] [PATCH S17 10/17] ice: Remove unnecessary wait when disabling/enabling Rx queues Anirudh Venkataramanan
2019-03-08  0:34   ` Bowers, AndrewX
2019-02-28 23:25 ` [Intel-wired-lan] [PATCH S17 11/17] ice: Fix issue when adding more than allowed VLANs Anirudh Venkataramanan
2019-03-08  0:34   ` Bowers, AndrewX
2019-02-28 23:25 ` [Intel-wired-lan] [PATCH S17 12/17] ice: Remove runtime change of PFINT_OICR_ENA register Anirudh Venkataramanan
2019-03-08  0:35   ` Bowers, AndrewX
2019-02-28 23:25 ` [Intel-wired-lan] [PATCH S17 13/17] ice: Add reg_idx variable in ice_q_vector structure Anirudh Venkataramanan
2019-03-08  0:35   ` Bowers, AndrewX
2019-02-28 23:26 ` [Intel-wired-lan] [PATCH S17 14/17] ice: Add missing PHY type to link settings Anirudh Venkataramanan
2019-03-08  0:36   ` Bowers, AndrewX
2019-02-28 23:26 ` [Intel-wired-lan] [PATCH S17 15/17] ice: Refactor link event flow Anirudh Venkataramanan
2019-03-08  0:36   ` Bowers, AndrewX
2019-02-28 23:26 ` [Intel-wired-lan] [PATCH S17 16/17] ice: Use dev_err when ice_cfg_vsi_lan fails Anirudh Venkataramanan
2019-03-08  0:36   ` Bowers, AndrewX
2019-02-28 23:26 ` [Intel-wired-lan] [PATCH S17 17/17] ice: Use pf instead of vsi-back Anirudh Venkataramanan
2019-03-08  0:37   ` Bowers, AndrewX
