* [Intel-wired-lan] [PATCH S14 00/15] Implementation updates for ice
@ 2019-02-13 18:51 Anirudh Venkataramanan
  2019-02-13 18:51 ` [Intel-wired-lan] [PATCH S14 01/15] ice: Retrieve rx_buf in separate function Anirudh Venkataramanan
                   ` (14 more replies)
  0 siblings, 15 replies; 31+ messages in thread
From: Anirudh Venkataramanan @ 2019-02-13 18:51 UTC (permalink / raw)
  To: intel-wired-lan

Brett Creeley (2):
  ice: Put __ICE_PREPARED_FOR_RESET check in ice_prepare_for_reset
  ice: Implement pci_error_handler ops

Bruce Allan (1):
  ice: add and use new ice_for_each_traffic_class() macro

Chinh T Cao (1):
  ice: Create a generic name for the ice_rx_flg64_bits structure

Dave Ertman (1):
  ice: Prevent unintended multiple chain resets

Jeremiah Kyle (1):
  ice: Remove unnecessary newlines from log messages

Maciej Fijalkowski (7):
  ice: Retrieve rx_buf in separate function
  ice: Pull out page reuse checks onto separate function
  ice: Get rid of ice_pull_tail
  ice: Introduce bulk update for page count
  ice: Gather the rx buf clean-up logic for better reuse
  ice: Limit the ice_add_rx_frag to frag addition
  ice: map rx buffer pages with DMA attributes

Mitch Williams (1):
  ice: use virt channel status codes

Preethi Banala (1):
  ice: change VF VSI tc info along with num_queues

 drivers/net/ethernet/intel/ice/ice_common.c      |  28 +-
 drivers/net/ethernet/intel/ice/ice_lan_tx_rx.h   |  34 +--
 drivers/net/ethernet/intel/ice/ice_lib.c         |   4 +-
 drivers/net/ethernet/intel/ice/ice_main.c        | 168 ++++++++++-
 drivers/net/ethernet/intel/ice/ice_sched.c       |   2 +-
 drivers/net/ethernet/intel/ice/ice_txrx.c        | 351 +++++++++++++----------
 drivers/net/ethernet/intel/ice/ice_txrx.h        |   4 +
 drivers/net/ethernet/intel/ice/ice_type.h        |   3 +
 drivers/net/ethernet/intel/ice/ice_virtchnl_pf.c | 280 ++++++++++--------
 9 files changed, 563 insertions(+), 311 deletions(-)

-- 
2.14.5



* [Intel-wired-lan] [PATCH S14 01/15] ice: Retrieve rx_buf in separate function
  2019-02-13 18:51 [Intel-wired-lan] [PATCH S14 00/15] Implementation updates for ice Anirudh Venkataramanan
@ 2019-02-13 18:51 ` Anirudh Venkataramanan
  2019-02-28 22:24   ` Bowers, AndrewX
  2019-02-13 18:51 ` [Intel-wired-lan] [PATCH S14 02/15] ice: Pull out page reuse checks onto " Anirudh Venkataramanan
                   ` (13 subsequent siblings)
  14 siblings, 1 reply; 31+ messages in thread
From: Anirudh Venkataramanan @ 2019-02-13 18:51 UTC (permalink / raw)
  To: intel-wired-lan

From: Maciej Fijalkowski <maciej.fijalkowski@intel.com>

Introduce ice_get_rx_buf, which will fetch the Rx buffer and do the DMA
synchronization. The packet length that the hardware writes to the Rx
descriptor is now read in ice_clean_rx_irq, so we can feed
ice_get_rx_buf with it and drop the rx_desc argument from
ice_fetch_rx_buf and ice_add_rx_frag.
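
A condensed sketch of the resulting per-descriptor flow in
ice_clean_rx_irq (locals and error handling elided; see the hunks
below):

	size = le16_to_cpu(rx_desc->wb.pkt_len) &
		ICE_RX_FLX_DESC_PKT_LEN_M;

	/* fetch the buffer and DMA-sync 'size' bytes for the CPU */
	rx_buf = ice_get_rx_buf(rx_ring, size);

	/* allocate (if needed) and populate the skb from rx_buf */
	skb = ice_fetch_rx_buf(rx_ring, rx_buf, size);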

Signed-off-by: Maciej Fijalkowski <maciej.fijalkowski@intel.com>
Signed-off-by: Anirudh Venkataramanan <anirudh.venkataramanan@intel.com>
---
[Anirudh Venkataramanan <anirudh.venkataramanan@intel.com> cleaned up commit message]
---
 drivers/net/ethernet/intel/ice/ice_txrx.c | 75 +++++++++++++++++--------------
 1 file changed, 42 insertions(+), 33 deletions(-)

diff --git a/drivers/net/ethernet/intel/ice/ice_txrx.c b/drivers/net/ethernet/intel/ice/ice_txrx.c
index 64baedee6336..8c0a8b63670b 100644
--- a/drivers/net/ethernet/intel/ice/ice_txrx.c
+++ b/drivers/net/ethernet/intel/ice/ice_txrx.c
@@ -499,8 +499,8 @@ static bool ice_page_is_reserved(struct page *page)
 /**
  * ice_add_rx_frag - Add contents of Rx buffer to sk_buff
  * @rx_buf: buffer containing page to add
- * @rx_desc: descriptor containing length of buffer written by hardware
  * @skb: sk_buf to place the data into
+ * @size: the length of the packet
  *
  * This function will add the data contained in rx_buf->page to the skb.
  * This is done either through a direct copy if the data in the buffer is
@@ -511,8 +511,8 @@ static bool ice_page_is_reserved(struct page *page)
  * true if the buffer can be reused by the adapter.
  */
 static bool
-ice_add_rx_frag(struct ice_rx_buf *rx_buf, union ice_32b_rx_flex_desc *rx_desc,
-		struct sk_buff *skb)
+ice_add_rx_frag(struct ice_rx_buf *rx_buf, struct sk_buff *skb,
+		unsigned int size)
 {
 #if (PAGE_SIZE < 8192)
 	unsigned int truesize = ICE_RXBUF_2048;
@@ -522,10 +522,6 @@ ice_add_rx_frag(struct ice_rx_buf *rx_buf, union ice_32b_rx_flex_desc *rx_desc,
 #endif /* PAGE_SIZE < 8192) */
 
 	struct page *page;
-	unsigned int size;
-
-	size = le16_to_cpu(rx_desc->wb.pkt_len) &
-		ICE_RX_FLX_DESC_PKT_LEN_M;
 
 	page = rx_buf->page;
 
@@ -603,10 +599,35 @@ ice_reuse_rx_page(struct ice_ring *rx_ring, struct ice_rx_buf *old_buf)
 	*new_buf = *old_buf;
 }
 
+/**
+ * ice_get_rx_buf - Fetch Rx buffer and synchronize data for use
+ * @rx_ring: Rx descriptor ring to transact packets on
+ * @size: size of buffer to add to skb
+ *
+ * This function will pull an Rx buffer from the ring and synchronize it
+ * for use by the CPU.
+ */
+static struct ice_rx_buf *
+ice_get_rx_buf(struct ice_ring *rx_ring, const unsigned int size)
+{
+	struct ice_rx_buf *rx_buf;
+
+	rx_buf = &rx_ring->rx_buf[rx_ring->next_to_clean];
+	prefetchw(rx_buf->page);
+
+	/* we are reusing so sync this buffer for CPU use */
+	dma_sync_single_range_for_cpu(rx_ring->dev, rx_buf->dma,
+				      rx_buf->page_offset, size,
+				      DMA_FROM_DEVICE);
+
+	return rx_buf;
+}
+
 /**
  * ice_fetch_rx_buf - Allocate skb and populate it
  * @rx_ring: Rx descriptor ring to transact packets on
- * @rx_desc: descriptor containing info written by hardware
+ * @rx_buf: Rx buffer to pull data from
+ * @size: the length of the packet
  *
  * This function allocates an skb on the fly, and populates it with the page
  * data from the current receive descriptor, taking care to set up the skb
@@ -614,20 +635,14 @@ ice_reuse_rx_page(struct ice_ring *rx_ring, struct ice_rx_buf *old_buf)
  * necessary.
  */
 static struct sk_buff *
-ice_fetch_rx_buf(struct ice_ring *rx_ring, union ice_32b_rx_flex_desc *rx_desc)
+ice_fetch_rx_buf(struct ice_ring *rx_ring, struct ice_rx_buf *rx_buf,
+		 unsigned int size)
 {
-	struct ice_rx_buf *rx_buf;
-	struct sk_buff *skb;
-	struct page *page;
-
-	rx_buf = &rx_ring->rx_buf[rx_ring->next_to_clean];
-	page = rx_buf->page;
-	prefetchw(page);
-
-	skb = rx_buf->skb;
+	struct sk_buff *skb = rx_buf->skb;
 
 	if (likely(!skb)) {
-		u8 *page_addr = page_address(page) + rx_buf->page_offset;
+		u8 *page_addr = page_address(rx_buf->page) +
+				rx_buf->page_offset;
 
 		/* prefetch first cache line of first page */
 		prefetch(page_addr);
@@ -644,25 +659,13 @@ ice_fetch_rx_buf(struct ice_ring *rx_ring, union ice_32b_rx_flex_desc *rx_desc)
 			return NULL;
 		}
 
-		/* we will be copying header into skb->data in
-		 * pskb_may_pull so it is in our interest to prefetch
-		 * it now to avoid a possible cache miss
-		 */
-		prefetchw(skb->data);
-
 		skb_record_rx_queue(skb, rx_ring->q_index);
 	} else {
-		/* we are reusing so sync this buffer for CPU use */
-		dma_sync_single_range_for_cpu(rx_ring->dev, rx_buf->dma,
-					      rx_buf->page_offset,
-					      ICE_RXBUF_2048,
-					      DMA_FROM_DEVICE);
-
 		rx_buf->skb = NULL;
 	}
 
 	/* pull page into skb */
-	if (ice_add_rx_frag(rx_buf, rx_desc, skb)) {
+	if (ice_add_rx_frag(rx_buf, skb, size)) {
 		/* hand second half of page back to the ring */
 		ice_reuse_rx_page(rx_ring, rx_buf);
 		rx_ring->rx_stats.page_reuse_count++;
@@ -963,7 +966,9 @@ static int ice_clean_rx_irq(struct ice_ring *rx_ring, int budget)
 	/* start the loop to process RX packets bounded by 'budget' */
 	while (likely(total_rx_pkts < (unsigned int)budget)) {
 		union ice_32b_rx_flex_desc *rx_desc;
+		struct ice_rx_buf *rx_buf;
 		struct sk_buff *skb;
+		unsigned int size;
 		u16 stat_err_bits;
 		u16 vlan_tag = 0;
 		u8 rx_ptype;
@@ -993,8 +998,12 @@ static int ice_clean_rx_irq(struct ice_ring *rx_ring, int budget)
 		 */
 		dma_rmb();
 
+		size = le16_to_cpu(rx_desc->wb.pkt_len) &
+			ICE_RX_FLX_DESC_PKT_LEN_M;
+
+		rx_buf = ice_get_rx_buf(rx_ring, size);
 		/* allocate (if needed) and populate skb */
-		skb = ice_fetch_rx_buf(rx_ring, rx_desc);
+		skb = ice_fetch_rx_buf(rx_ring, rx_buf, size);
 		if (!skb)
 			break;
 
-- 
2.14.5



* [Intel-wired-lan] [PATCH S14 02/15] ice: Pull out page reuse checks onto separate function
  2019-02-13 18:51 [Intel-wired-lan] [PATCH S14 00/15] Implementation updates for ice Anirudh Venkataramanan
  2019-02-13 18:51 ` [Intel-wired-lan] [PATCH S14 01/15] ice: Retrieve rx_buf in separate function Anirudh Venkataramanan
@ 2019-02-13 18:51 ` Anirudh Venkataramanan
  2019-02-28 22:57   ` Bowers, AndrewX
  2019-02-13 18:51 ` [Intel-wired-lan] [PATCH S14 03/15] ice: Get rid of ice_pull_tail Anirudh Venkataramanan
                   ` (12 subsequent siblings)
  14 siblings, 1 reply; 31+ messages in thread
From: Anirudh Venkataramanan @ 2019-02-13 18:51 UTC (permalink / raw)
  To: intel-wired-lan

From: Maciej Fijalkowski <maciej.fijalkowski@intel.com>

Introduce ice_can_reuse_rx_page, which verifies whether the page can
be reused and returns the boolean result to the caller.
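
With the checks pulled out, the tail of ice_add_rx_frag reduces to a
single call (a sketch of the hunk below):

	skb_add_rx_frag(skb, skb_shinfo(skb)->nr_frags, page,
			rx_buf->page_offset, size, truesize);

	/* ownership, NUMA locality and offset checks now live in
	 * one place
	 */
	return ice_can_reuse_rx_page(rx_buf, truesize);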

Signed-off-by: Maciej Fijalkowski <maciej.fijalkowski@intel.com>
Signed-off-by: Anirudh Venkataramanan <anirudh.venkataramanan@intel.com>
---
[Anirudh Venkataramanan <anirudh.venkataramanan@intel.com> cleaned up commit message]
---
 drivers/net/ethernet/intel/ice/ice_txrx.c | 80 +++++++++++++++++--------------
 1 file changed, 45 insertions(+), 35 deletions(-)

diff --git a/drivers/net/ethernet/intel/ice/ice_txrx.c b/drivers/net/ethernet/intel/ice/ice_txrx.c
index 8c0a8b63670b..d1f4aef9bcc2 100644
--- a/drivers/net/ethernet/intel/ice/ice_txrx.c
+++ b/drivers/net/ethernet/intel/ice/ice_txrx.c
@@ -496,6 +496,48 @@ static bool ice_page_is_reserved(struct page *page)
 	return (page_to_nid(page) != numa_mem_id()) || page_is_pfmemalloc(page);
 }
 
+/**
+ * ice_can_reuse_rx_page - Determine if page can be reused for another rx
+ * @rx_buf: buffer containing the page
+ * @truesize: the offset that needs to be applied to page
+ *
+ * If page is reusable, we have a green light for calling ice_reuse_rx_page,
+ * which will assign the current buffer to the buffer that next_to_alloc is
+ * pointing to; otherwise, the dma mapping needs to be destroyed and
+ * page freed
+ */
+static bool ice_can_reuse_rx_page(struct ice_rx_buf *rx_buf,
+				  unsigned int truesize)
+{
+	struct page *page = rx_buf->page;
+
+	/* avoid re-using remote pages */
+	if (unlikely(ice_page_is_reserved(page)))
+		return false;
+
+#if (PAGE_SIZE < 8192)
+	/* if we are only owner of page we can reuse it */
+	if (unlikely(page_count(page) != 1))
+		return false;
+
+	/* flip page offset to other buffer */
+	rx_buf->page_offset ^= truesize;
+#else
+	/* move offset up to the next cache line */
+	rx_buf->page_offset += truesize;
+
+	if (rx_buf->page_offset > PAGE_SIZE - ICE_RXBUF_2048)
+		return false;
+#endif /* PAGE_SIZE < 8192) */
+
+	/* Even if we own the page, we are not allowed to use atomic_set()
+	 * This would break get_page_unless_zero() users.
+	 */
+	get_page(page);
+
+	return true;
+}
+
 /**
  * ice_add_rx_frag - Add contents of Rx buffer to sk_buff
  * @rx_buf: buffer containing page to add
@@ -517,17 +559,9 @@ ice_add_rx_frag(struct ice_rx_buf *rx_buf, struct sk_buff *skb,
 #if (PAGE_SIZE < 8192)
 	unsigned int truesize = ICE_RXBUF_2048;
 #else
-	unsigned int last_offset = PAGE_SIZE - ICE_RXBUF_2048;
-	unsigned int truesize;
+	unsigned int truesize = ALIGN(size, L1_CACHE_BYTES);
 #endif /* PAGE_SIZE < 8192) */
-
-	struct page *page;
-
-	page = rx_buf->page;
-
-#if (PAGE_SIZE >= 8192)
-	truesize = ALIGN(size, L1_CACHE_BYTES);
-#endif /* PAGE_SIZE >= 8192) */
+	struct page *page = rx_buf->page;
 
 	/* will the data fit in the skb we allocated? if so, just
 	 * copy it as it is pretty small anyway
@@ -549,31 +583,7 @@ ice_add_rx_frag(struct ice_rx_buf *rx_buf, struct sk_buff *skb,
 	skb_add_rx_frag(skb, skb_shinfo(skb)->nr_frags, page,
 			rx_buf->page_offset, size, truesize);
 
-	/* avoid re-using remote pages */
-	if (unlikely(ice_page_is_reserved(page)))
-		return false;
-
-#if (PAGE_SIZE < 8192)
-	/* if we are only owner of page we can reuse it */
-	if (unlikely(page_count(page) != 1))
-		return false;
-
-	/* flip page offset to other buffer */
-	rx_buf->page_offset ^= truesize;
-#else
-	/* move offset up to the next cache line */
-	rx_buf->page_offset += truesize;
-
-	if (rx_buf->page_offset > last_offset)
-		return false;
-#endif /* PAGE_SIZE < 8192) */
-
-	/* Even if we own the page, we are not allowed to use atomic_set()
-	 * This would break get_page_unless_zero() users.
-	 */
-	get_page(rx_buf->page);
-
-	return true;
+	return ice_can_reuse_rx_page(rx_buf, truesize);
 }
 
 /**
-- 
2.14.5



* [Intel-wired-lan] [PATCH S14 03/15] ice: Get rid of ice_pull_tail
  2019-02-13 18:51 [Intel-wired-lan] [PATCH S14 00/15] Implementation updates for ice Anirudh Venkataramanan
  2019-02-13 18:51 ` [Intel-wired-lan] [PATCH S14 01/15] ice: Retrieve rx_buf in separate function Anirudh Venkataramanan
  2019-02-13 18:51 ` [Intel-wired-lan] [PATCH S14 02/15] ice: Pull out page reuse checks onto " Anirudh Venkataramanan
@ 2019-02-13 18:51 ` Anirudh Venkataramanan
  2019-02-28 23:00   ` Bowers, AndrewX
  2019-02-13 18:51 ` [Intel-wired-lan] [PATCH S14 04/15] ice: Introduce bulk update for page count Anirudh Venkataramanan
                   ` (11 subsequent siblings)
  14 siblings, 1 reply; 31+ messages in thread
From: Anirudh Venkataramanan @ 2019-02-13 18:51 UTC (permalink / raw)
  To: intel-wired-lan

From: Maciej Fijalkowski <maciej.fijalkowski@intel.com>

Instead of adding a frag and then, when dealing with the EOP frame,
accessing that frag in order to copy the headers onto the linear part
of the skb, we can do this in ice_add_rx_frag in the case where
data_len is still 0 and the frame won't fit onto the linear part as a
whole.

The function comment of ice_pull_tail was a bit misleading: it
mentioned optimizations that can be performed (dropping a frag,
maintaining an accurate truesize of the skb), but it seems that this
part of the logic was dropped and the comment was never updated to
reflect that.
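
The header pull now happens inline while the skb is still linear,
roughly (condensed from the hunk below; va points at the buffer data):

	/* copy at most ICE_RX_HDR_SIZE of headers to the linear part */
	pull_len = eth_get_headlen(va, ICE_RX_HDR_SIZE);
	memcpy(__skb_put(skb, pull_len), va, ALIGN(pull_len, sizeof(long)));

	/* attach whatever is left as a page frag */
	va += pull_len;
	size -= pull_len;
	skb_add_rx_frag(skb, skb_shinfo(skb)->nr_frags, page,
			(unsigned long)va & ~PAGE_MASK, size, truesize);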

Signed-off-by: Maciej Fijalkowski <maciej.fijalkowski@intel.com>
Signed-off-by: Anirudh Venkataramanan <anirudh.venkataramanan@intel.com>
---
 drivers/net/ethernet/intel/ice/ice_txrx.c | 70 +++++++++++--------------------
 1 file changed, 24 insertions(+), 46 deletions(-)

diff --git a/drivers/net/ethernet/intel/ice/ice_txrx.c b/drivers/net/ethernet/intel/ice/ice_txrx.c
index d1f4aef9bcc2..03dddbd8c108 100644
--- a/drivers/net/ethernet/intel/ice/ice_txrx.c
+++ b/drivers/net/ethernet/intel/ice/ice_txrx.c
@@ -562,13 +562,17 @@ ice_add_rx_frag(struct ice_rx_buf *rx_buf, struct sk_buff *skb,
 	unsigned int truesize = ALIGN(size, L1_CACHE_BYTES);
 #endif /* PAGE_SIZE < 8192) */
 	struct page *page = rx_buf->page;
+	unsigned int pull_len;
+	unsigned char *va;
+
+	va = page_address(page) + rx_buf->page_offset;
+	if (unlikely(skb_is_nonlinear(skb)))
+		goto add_tail_frag;
 
 	/* will the data fit in the skb we allocated? if so, just
 	 * copy it as it is pretty small anyway
 	 */
-	if (size <= ICE_RX_HDR_SIZE && !skb_is_nonlinear(skb)) {
-		unsigned char *va = page_address(page) + rx_buf->page_offset;
-
+	if (size <= ICE_RX_HDR_SIZE) {
 		memcpy(__skb_put(skb, size), va, ALIGN(size, sizeof(long)));
 
 		/* page is not reserved, we can reuse buffer as-is */
@@ -580,8 +584,24 @@ ice_add_rx_frag(struct ice_rx_buf *rx_buf, struct sk_buff *skb,
 		return false;
 	}
 
+	/* we need the header to contain the greater of either ETH_HLEN or
+	 * 60 bytes if the skb->len is less than 60 for skb_pad.
+	 */
+	pull_len = eth_get_headlen(va, ICE_RX_HDR_SIZE);
+
+	/* align pull length to size of long to optimize memcpy performance */
+	memcpy(__skb_put(skb, pull_len), va, ALIGN(pull_len, sizeof(long)));
+
+	/* the header from the frame that we're adding as a frag was added to
+	 * linear part of skb so move the pointer past that header and
+	 * reduce the size of data
+	 */
+	va += pull_len;
+	size -= pull_len;
+
+add_tail_frag:
 	skb_add_rx_frag(skb, skb_shinfo(skb)->nr_frags, page,
-			rx_buf->page_offset, size, truesize);
+			(unsigned long)va & ~PAGE_MASK, size, truesize);
 
 	return ice_can_reuse_rx_page(rx_buf, truesize);
 }
@@ -691,44 +711,6 @@ ice_fetch_rx_buf(struct ice_ring *rx_ring, struct ice_rx_buf *rx_buf,
 	return skb;
 }
 
-/**
- * ice_pull_tail - ice specific version of skb_pull_tail
- * @skb: pointer to current skb being adjusted
- *
- * This function is an ice specific version of __pskb_pull_tail. The
- * main difference between this version and the original function is that
- * this function can make several assumptions about the state of things
- * that allow for significant optimizations versus the standard function.
- * As a result we can do things like drop a frag and maintain an accurate
- * truesize for the skb.
- */
-static void ice_pull_tail(struct sk_buff *skb)
-{
-	struct skb_frag_struct *frag = &skb_shinfo(skb)->frags[0];
-	unsigned int pull_len;
-	unsigned char *va;
-
-	/* it is valid to use page_address instead of kmap since we are
-	 * working with pages allocated out of the lomem pool per
-	 * alloc_page(GFP_ATOMIC)
-	 */
-	va = skb_frag_address(frag);
-
-	/* we need the header to contain the greater of either ETH_HLEN or
-	 * 60 bytes if the skb->len is less than 60 for skb_pad.
-	 */
-	pull_len = eth_get_headlen(va, ICE_RX_HDR_SIZE);
-
-	/* align pull length to size of long to optimize memcpy performance */
-	skb_copy_to_linear_data(skb, va, ALIGN(pull_len, sizeof(long)));
-
-	/* update all of the pointers */
-	skb_frag_size_sub(frag, pull_len);
-	frag->page_offset += pull_len;
-	skb->data_len -= pull_len;
-	skb->tail += pull_len;
-}
-
 /**
  * ice_cleanup_headers - Correct empty headers
  * @skb: pointer to current skb being fixed
@@ -743,10 +725,6 @@ static void ice_pull_tail(struct sk_buff *skb)
  */
 static bool ice_cleanup_headers(struct sk_buff *skb)
 {
-	/* place header in linear portion of buffer */
-	if (skb_is_nonlinear(skb))
-		ice_pull_tail(skb);
-
 	/* if eth_skb_pad returns an error the skb was freed */
 	if (eth_skb_pad(skb))
 		return true;
-- 
2.14.5



* [Intel-wired-lan] [PATCH S14 04/15] ice: Introduce bulk update for page count
  2019-02-13 18:51 [Intel-wired-lan] [PATCH S14 00/15] Implementation updates for ice Anirudh Venkataramanan
                   ` (2 preceding siblings ...)
  2019-02-13 18:51 ` [Intel-wired-lan] [PATCH S14 03/15] ice: Get rid of ice_pull_tail Anirudh Venkataramanan
@ 2019-02-13 18:51 ` Anirudh Venkataramanan
  2019-02-28 23:02   ` Bowers, AndrewX
  2019-02-13 18:51 ` [Intel-wired-lan] [PATCH S14 05/15] ice: Gather the rx buf clean-up logic for better reuse Anirudh Venkataramanan
                   ` (10 subsequent siblings)
  14 siblings, 1 reply; 31+ messages in thread
From: Anirudh Venkataramanan @ 2019-02-13 18:51 UTC (permalink / raw)
  To: intel-wired-lan

From: Maciej Fijalkowski <maciej.fijalkowski@intel.com>

{get,put}_page are atomic operations which we use for page count
handling. The current refcount logic increments the count when passing
an skb with the data from the first half of the page up to the network
stack and recycles the second half of the page. This protects us from
losing the page, since the network stack can decrement its refcount
via the skb.

Performance can be slightly improved by doing bulk refcount updates
instead of updating one by one. During buffer initialization, maximize
the page's refcount and don't allow it to drop below two.
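
The bookkeeping can be summarized by one invariant (an illustrative
sketch, not a literal helper in this patch):

	/* references held outside the driver; the driver's own stash
	 * of references is accounted for by pagecnt_bias
	 */
	in_flight = page_count(page) - rx_buf->pagecnt_bias;

	/* reuse is allowed only while the driver is the sole owner */
	reusable = (in_flight <= 1);

	/* recharge both counters in bulk once the bias nears zero */
	if (unlikely(rx_buf->pagecnt_bias == 1)) {
		page_ref_add(page, USHRT_MAX - 1);
		rx_buf->pagecnt_bias = USHRT_MAX;
	}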

Signed-off-by: Maciej Fijalkowski <maciej.fijalkowski@intel.com>
Signed-off-by: Anirudh Venkataramanan <anirudh.venkataramanan@intel.com>
---
[Anirudh Venkataramanan <anirudh.venkataramanan@intel.com> cleaned up commit message]
---
 drivers/net/ethernet/intel/ice/ice_txrx.c | 26 +++++++++++++++++++-------
 drivers/net/ethernet/intel/ice/ice_txrx.h |  1 +
 2 files changed, 20 insertions(+), 7 deletions(-)

diff --git a/drivers/net/ethernet/intel/ice/ice_txrx.c b/drivers/net/ethernet/intel/ice/ice_txrx.c
index 03dddbd8c108..1c49d245d889 100644
--- a/drivers/net/ethernet/intel/ice/ice_txrx.c
+++ b/drivers/net/ethernet/intel/ice/ice_txrx.c
@@ -283,7 +283,7 @@ void ice_clean_rx_ring(struct ice_ring *rx_ring)
 			continue;
 
 		dma_unmap_page(dev, rx_buf->dma, PAGE_SIZE, DMA_FROM_DEVICE);
-		__free_pages(rx_buf->page, 0);
+		__page_frag_cache_drain(rx_buf->page, rx_buf->pagecnt_bias);
 
 		rx_buf->page = NULL;
 		rx_buf->page_offset = 0;
@@ -423,6 +423,8 @@ ice_alloc_mapped_page(struct ice_ring *rx_ring, struct ice_rx_buf *bi)
 	bi->dma = dma;
 	bi->page = page;
 	bi->page_offset = 0;
+	page_ref_add(page, USHRT_MAX - 1);
+	bi->pagecnt_bias = USHRT_MAX;
 
 	return true;
 }
@@ -509,6 +511,7 @@ static bool ice_page_is_reserved(struct page *page)
 static bool ice_can_reuse_rx_page(struct ice_rx_buf *rx_buf,
 				  unsigned int truesize)
 {
+	unsigned int pagecnt_bias = rx_buf->pagecnt_bias;
 	struct page *page = rx_buf->page;
 
 	/* avoid re-using remote pages */
@@ -517,7 +520,7 @@ static bool ice_can_reuse_rx_page(struct ice_rx_buf *rx_buf,
 
 #if (PAGE_SIZE < 8192)
 	/* if we are only owner of page we can reuse it */
-	if (unlikely(page_count(page) != 1))
+	if (unlikely((page_count(page) - pagecnt_bias) > 1))
 		return false;
 
 	/* flip page offset to other buffer */
@@ -530,10 +533,14 @@ static bool ice_can_reuse_rx_page(struct ice_rx_buf *rx_buf,
 		return false;
 #endif /* PAGE_SIZE < 8192) */
 
-	/* Even if we own the page, we are not allowed to use atomic_set()
-	 * This would break get_page_unless_zero() users.
+	/* If we have drained the page fragment pool we need to update
+	 * the pagecnt_bias and page count so that we fully restock the
+	 * number of references the driver holds.
 	 */
-	get_page(page);
+	if (unlikely(pagecnt_bias == 1)) {
+		page_ref_add(page, USHRT_MAX - 1);
+		rx_buf->pagecnt_bias = USHRT_MAX;
+	}
 
 	return true;
 }
@@ -576,11 +583,12 @@ ice_add_rx_frag(struct ice_rx_buf *rx_buf, struct sk_buff *skb,
 		memcpy(__skb_put(skb, size), va, ALIGN(size, sizeof(long)));
 
 		/* page is not reserved, we can reuse buffer as-is */
-		if (likely(!ice_page_is_reserved(page)))
+		if (likely(!ice_page_is_reserved(page))) {
+			rx_buf->pagecnt_bias++;
 			return true;
+		}
 
 		/* this page cannot be reused so discard it */
-		__free_pages(page, 0);
 		return false;
 	}
 
@@ -650,6 +658,9 @@ ice_get_rx_buf(struct ice_ring *rx_ring, const unsigned int size)
 				      rx_buf->page_offset, size,
 				      DMA_FROM_DEVICE);
 
+	/* We have pulled a buffer for use, so decrement pagecnt_bias */
+	rx_buf->pagecnt_bias--;
+
 	return rx_buf;
 }
 
@@ -703,6 +714,7 @@ ice_fetch_rx_buf(struct ice_ring *rx_ring, struct ice_rx_buf *rx_buf,
 		/* we are not reusing the buffer so unmap it */
 		dma_unmap_page(rx_ring->dev, rx_buf->dma, PAGE_SIZE,
 			       DMA_FROM_DEVICE);
+		__page_frag_cache_drain(rx_buf->page, rx_buf->pagecnt_bias);
 	}
 
 	/* clear contents of buffer_info */
diff --git a/drivers/net/ethernet/intel/ice/ice_txrx.h b/drivers/net/ethernet/intel/ice/ice_txrx.h
index b7ff0ff82517..43b39e7ce470 100644
--- a/drivers/net/ethernet/intel/ice/ice_txrx.h
+++ b/drivers/net/ethernet/intel/ice/ice_txrx.h
@@ -73,6 +73,7 @@ struct ice_rx_buf {
 	dma_addr_t dma;
 	struct page *page;
 	unsigned int page_offset;
+	u16 pagecnt_bias;
 };
 
 struct ice_q_stats {
-- 
2.14.5



* [Intel-wired-lan] [PATCH S14 05/15] ice: Gather the rx buf clean-up logic for better reuse
  2019-02-13 18:51 [Intel-wired-lan] [PATCH S14 00/15] Implementation updates for ice Anirudh Venkataramanan
                   ` (3 preceding siblings ...)
  2019-02-13 18:51 ` [Intel-wired-lan] [PATCH S14 04/15] ice: Introduce bulk update for page count Anirudh Venkataramanan
@ 2019-02-13 18:51 ` Anirudh Venkataramanan
  2019-02-28 23:04   ` Bowers, AndrewX
  2019-02-13 18:51 ` [Intel-wired-lan] [PATCH S14 06/15] ice: Limit the ice_add_rx_frag to frag addition Anirudh Venkataramanan
                   ` (9 subsequent siblings)
  14 siblings, 1 reply; 31+ messages in thread
From: Anirudh Venkataramanan @ 2019-02-13 18:51 UTC (permalink / raw)
  To: intel-wired-lan

From: Maciej Fijalkowski <maciej.fijalkowski@intel.com>

Pull out the code responsible for page counting and buffer recycling
so that it will be possible to clean up the Rx buffers in cases where
we won't allocate an skb (e.g. XDP).
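
A sketch of the intended shape of the clean-up path (the XDP mention
is hypothetical here, shown only to illustrate the reuse):

	rx_buf = ice_get_rx_buf(rx_ring, size);

	/* consume the buffer: an skb today, XDP verdicts later */
	skb = ice_fetch_rx_buf(rx_ring, rx_buf, size);

	/* the recycle-or-free decision no longer depends on how the
	 * buffer was consumed
	 */
	ice_put_rx_buf(rx_ring, rx_buf);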

Signed-off-by: Maciej Fijalkowski <maciej.fijalkowski@intel.com>
Signed-off-by: Anirudh Venkataramanan <anirudh.venkataramanan@intel.com>
---
[Anirudh Venkataramanan <anirudh.venkataramanan@intel.com> cleaned up commit message]
---
 drivers/net/ethernet/intel/ice/ice_txrx.c | 76 ++++++++++++++++++++-----------
 1 file changed, 50 insertions(+), 26 deletions(-)

diff --git a/drivers/net/ethernet/intel/ice/ice_txrx.c b/drivers/net/ethernet/intel/ice/ice_txrx.c
index 1c49d245d889..0eb594abe6ef 100644
--- a/drivers/net/ethernet/intel/ice/ice_txrx.c
+++ b/drivers/net/ethernet/intel/ice/ice_txrx.c
@@ -498,19 +498,42 @@ static bool ice_page_is_reserved(struct page *page)
 	return (page_to_nid(page) != numa_mem_id()) || page_is_pfmemalloc(page);
 }
 
+/**
+ * ice_rx_buf_adjust_pg_offset - Prepare rx buffer for reuse
+ * @rx_buf: Rx buffer to adjust
+ * @size: Size of adjustment
+ *
+ * Update the offset within page so that rx buf will be ready to be reused.
+ * For systems with PAGE_SIZE < 8192 this function will flip the page offset
+ * so the second half of page assigned to rx buffer will be used, otherwise
+ * the offset is moved by the @size bytes
+ */
+static void
+ice_rx_buf_adjust_pg_offset(struct ice_rx_buf *rx_buf, unsigned int size)
+{
+#if (PAGE_SIZE < 8192)
+	/* flip page offset to other buffer */
+	rx_buf->page_offset ^= size;
+#else
+	/* move offset up to the next cache line */
+	rx_buf->page_offset += size;
+#endif
+}
+
 /**
  * ice_can_reuse_rx_page - Determine if page can be reused for another rx
  * @rx_buf: buffer containing the page
- * @truesize: the offset that needs to be applied to page
  *
  * If page is reusable, we have a green light for calling ice_reuse_rx_page,
  * which will assign the current buffer to the buffer that next_to_alloc is
  * pointing to; otherwise, the dma mapping needs to be destroyed and
  * page freed
  */
-static bool ice_can_reuse_rx_page(struct ice_rx_buf *rx_buf,
-				  unsigned int truesize)
+static bool ice_can_reuse_rx_page(struct ice_rx_buf *rx_buf)
 {
+#if (PAGE_SIZE >= 8192)
+	unsigned int last_offset = PAGE_SIZE - ICE_RXBUF_2048;
+#endif
 	unsigned int pagecnt_bias = rx_buf->pagecnt_bias;
 	struct page *page = rx_buf->page;
 
@@ -522,14 +545,8 @@ static bool ice_can_reuse_rx_page(struct ice_rx_buf *rx_buf,
 	/* if we are only owner of page we can reuse it */
 	if (unlikely((page_count(page) - pagecnt_bias) > 1))
 		return false;
-
-	/* flip page offset to other buffer */
-	rx_buf->page_offset ^= truesize;
 #else
-	/* move offset up to the next cache line */
-	rx_buf->page_offset += truesize;
-
-	if (rx_buf->page_offset > PAGE_SIZE - ICE_RXBUF_2048)
+	if (rx_buf->page_offset > last_offset)
 		return false;
 #endif /* PAGE_SIZE < 8192) */
 
@@ -556,10 +573,9 @@ static bool ice_can_reuse_rx_page(struct ice_rx_buf *rx_buf,
  * less than the skb header size, otherwise it will just attach the page as
  * a frag to the skb.
  *
- * The function will then update the page offset if necessary and return
- * true if the buffer can be reused by the adapter.
+ * The function will then update the page offset
  */
-static bool
+static void
 ice_add_rx_frag(struct ice_rx_buf *rx_buf, struct sk_buff *skb,
 		unsigned int size)
 {
@@ -582,14 +598,8 @@ ice_add_rx_frag(struct ice_rx_buf *rx_buf, struct sk_buff *skb,
 	if (size <= ICE_RX_HDR_SIZE) {
 		memcpy(__skb_put(skb, size), va, ALIGN(size, sizeof(long)));
 
-		/* page is not reserved, we can reuse buffer as-is */
-		if (likely(!ice_page_is_reserved(page))) {
-			rx_buf->pagecnt_bias++;
-			return true;
-		}
-
-		/* this page cannot be reused so discard it */
-		return false;
+		rx_buf->pagecnt_bias++;
+		return;
 	}
 
 	/* we need the header to contain the greater of either ETH_HLEN or
@@ -610,8 +620,7 @@ ice_add_rx_frag(struct ice_rx_buf *rx_buf, struct sk_buff *skb,
 add_tail_frag:
 	skb_add_rx_frag(skb, skb_shinfo(skb)->nr_frags, page,
 			(unsigned long)va & ~PAGE_MASK, size, truesize);
-
-	return ice_can_reuse_rx_page(rx_buf, truesize);
+	ice_rx_buf_adjust_pg_offset(rx_buf, truesize);
 }
 
 /**
@@ -697,6 +706,7 @@ ice_fetch_rx_buf(struct ice_ring *rx_ring, struct ice_rx_buf *rx_buf,
 				       GFP_ATOMIC | __GFP_NOWARN);
 		if (unlikely(!skb)) {
 			rx_ring->rx_stats.alloc_buf_failed++;
+			rx_buf->pagecnt_bias++;
 			return NULL;
 		}
 
@@ -706,8 +716,23 @@ ice_fetch_rx_buf(struct ice_ring *rx_ring, struct ice_rx_buf *rx_buf,
 	}
 
 	/* pull page into skb */
-	if (ice_add_rx_frag(rx_buf, skb, size)) {
+	ice_add_rx_frag(rx_buf, skb, size);
+
+	return skb;
+}
+
+/**
+ * ice_put_rx_buf - Clean up used buffer and either recycle or free
+ * @rx_ring: Rx descriptor ring to transact packets on
+ * @rx_buf: Rx buffer to pull data from
+ *
+ * This function will  clean up the contents of the rx_buf. It will
+ * either recycle the buffer or unmap it and free the associated resources.
+ */
+static void ice_put_rx_buf(struct ice_ring *rx_ring, struct ice_rx_buf *rx_buf)
+{
 		/* hand second half of page back to the ring */
+	if (ice_can_reuse_rx_page(rx_buf)) {
 		ice_reuse_rx_page(rx_ring, rx_buf);
 		rx_ring->rx_stats.page_reuse_count++;
 	} else {
@@ -719,8 +744,6 @@ ice_fetch_rx_buf(struct ice_ring *rx_ring, struct ice_rx_buf *rx_buf,
 
 	/* clear contents of buffer_info */
 	rx_buf->page = NULL;
-
-	return skb;
 }
 
 /**
@@ -1007,6 +1030,7 @@ static int ice_clean_rx_irq(struct ice_ring *rx_ring, int budget)
 		if (!skb)
 			break;
 
+		ice_put_rx_buf(rx_ring, rx_buf);
 		cleaned_count++;
 
 		/* skip if it is NOP desc */
-- 
2.14.5



* [Intel-wired-lan] [PATCH S14 06/15] ice: Limit the ice_add_rx_frag to frag addition
  2019-02-13 18:51 [Intel-wired-lan] [PATCH S14 00/15] Implementation updates for ice Anirudh Venkataramanan
                   ` (4 preceding siblings ...)
  2019-02-13 18:51 ` [Intel-wired-lan] [PATCH S14 05/15] ice: Gather the rx buf clean-up logic for better reuse Anirudh Venkataramanan
@ 2019-02-13 18:51 ` Anirudh Venkataramanan
  2019-02-28 23:16   ` Bowers, AndrewX
  2019-02-13 18:51 ` [Intel-wired-lan] [PATCH S14 07/15] ice: map rx buffer pages with DMA attributes Anirudh Venkataramanan
                   ` (8 subsequent siblings)
  14 siblings, 1 reply; 31+ messages in thread
From: Anirudh Venkataramanan @ 2019-02-13 18:51 UTC (permalink / raw)
  To: intel-wired-lan

From: Maciej Fijalkowski <maciej.fijalkowski@intel.com>

Refactor ice_fetch_rx_buf and ice_add_rx_frag in a way that we have
standalone functions that do either the skb construction or the frag
addition to a previously constructed skb.

The skb handling between rx_bufs is spread among various functions:
ice_get_rx_buf retrieves the skb pointer from the rx_buf; if it is
NULL we construct the skb via ice_construct_skb, otherwise we add a
frag to the current skb via ice_add_rx_frag. Then, in ice_put_rx_buf,
the skb pointer that belongs to the rx_buf is cleared. Moving further,
if the current frame is not an EOP frame, we assign the current skb to
the rx_buf pointed to by the updated next_to_clean indicator.

What is more, during buffer reuse assign each member of ice_rx_buf
individually so we avoid an unnecessary copy of the skb pointer.

Last but not least, this logic split will allow for better code reuse
when adding support for build_skb.
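
The per-descriptor dispatch after this patch, condensed (error path
elided; see the ice_clean_rx_irq hunk below):

	rx_buf = ice_get_rx_buf(rx_ring, &skb, size);

	if (skb)	/* continuation of a multi-buffer frame */
		ice_add_rx_frag(rx_buf, skb, size);
	else		/* first buffer of a frame: build the skb */
		skb = ice_construct_skb(rx_ring, rx_buf, size);

	/* recycle or free; also clears rx_buf->skb */
	ice_put_rx_buf(rx_ring, rx_buf);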

Signed-off-by: Maciej Fijalkowski <maciej.fijalkowski@intel.com>
Signed-off-by: Anirudh Venkataramanan <anirudh.venkataramanan@intel.com>
---
[Anirudh Venkataramanan <anirudh.venkataramanan@intel.com> cleaned up commit message]
---
 drivers/net/ethernet/intel/ice/ice_txrx.c | 160 +++++++++++++++---------------
 1 file changed, 79 insertions(+), 81 deletions(-)

diff --git a/drivers/net/ethernet/intel/ice/ice_txrx.c b/drivers/net/ethernet/intel/ice/ice_txrx.c
index 0eb594abe6ef..aaa29ac18cdb 100644
--- a/drivers/net/ethernet/intel/ice/ice_txrx.c
+++ b/drivers/net/ethernet/intel/ice/ice_txrx.c
@@ -563,63 +563,29 @@ static bool ice_can_reuse_rx_page(struct ice_rx_buf *rx_buf)
 }
 
 /**
- * ice_add_rx_frag - Add contents of Rx buffer to sk_buff
+ * ice_add_rx_frag - Add contents of Rx buffer to sk_buff as a frag
  * @rx_buf: buffer containing page to add
- * @skb: sk_buf to place the data into
- * @size: the length of the packet
+ * @skb: sk_buff to place the data into
+ * @size: packet length from rx_desc
  *
  * This function will add the data contained in rx_buf->page to the skb.
- * This is done either through a direct copy if the data in the buffer is
- * less than the skb header size, otherwise it will just attach the page as
- * a frag to the skb.
- *
- * The function will then update the page offset
+ * It will just attach the page as a frag to the skb.
+ * The function will then update the page offset.
  */
 static void
 ice_add_rx_frag(struct ice_rx_buf *rx_buf, struct sk_buff *skb,
 		unsigned int size)
 {
-#if (PAGE_SIZE < 8192)
-	unsigned int truesize = ICE_RXBUF_2048;
+#if (PAGE_SIZE >= 8192)
+	unsigned int truesize = SKB_DATA_ALIGN(size);
 #else
-	unsigned int truesize = ALIGN(size, L1_CACHE_BYTES);
-#endif /* PAGE_SIZE < 8192) */
-	struct page *page = rx_buf->page;
-	unsigned int pull_len;
-	unsigned char *va;
-
-	va = page_address(page) + rx_buf->page_offset;
-	if (unlikely(skb_is_nonlinear(skb)))
-		goto add_tail_frag;
-
-	/* will the data fit in the skb we allocated? if so, just
-	 * copy it as it is pretty small anyway
-	 */
-	if (size <= ICE_RX_HDR_SIZE) {
-		memcpy(__skb_put(skb, size), va, ALIGN(size, sizeof(long)));
-
-		rx_buf->pagecnt_bias++;
-		return;
-	}
-
-	/* we need the header to contain the greater of either ETH_HLEN or
-	 * 60 bytes if the skb->len is less than 60 for skb_pad.
-	 */
-	pull_len = eth_get_headlen(va, ICE_RX_HDR_SIZE);
-
-	/* align pull length to size of long to optimize memcpy performance */
-	memcpy(__skb_put(skb, pull_len), va, ALIGN(pull_len, sizeof(long)));
+	unsigned int truesize = ICE_RXBUF_2048;
+#endif
 
-	/* the header from the frame that we're adding as a frag was added to
-	 * linear part of skb so move the pointer past that header and
-	 * reduce the size of data
-	 */
-	va += pull_len;
-	size -= pull_len;
+	skb_add_rx_frag(skb, skb_shinfo(skb)->nr_frags, rx_buf->page,
+			rx_buf->page_offset, size, truesize);
 
-add_tail_frag:
-	skb_add_rx_frag(skb, skb_shinfo(skb)->nr_frags, page,
-			(unsigned long)va & ~PAGE_MASK, size, truesize);
+	/* page is being used so we must update the page offset */
 	ice_rx_buf_adjust_pg_offset(rx_buf, truesize);
 }
 
@@ -642,25 +608,34 @@ ice_reuse_rx_page(struct ice_ring *rx_ring, struct ice_rx_buf *old_buf)
 	nta++;
 	rx_ring->next_to_alloc = (nta < rx_ring->count) ? nta : 0;
 
-	/* transfer page from old buffer to new buffer */
-	*new_buf = *old_buf;
+	/* Transfer page from old buffer to new buffer.
+	 * Move each member individually to avoid possible store
+	 * forwarding stalls and unnecessary copy of skb.
+	 */
+	new_buf->dma = old_buf->dma;
+	new_buf->page = old_buf->page;
+	new_buf->page_offset = old_buf->page_offset;
+	new_buf->pagecnt_bias = old_buf->pagecnt_bias;
 }
 
 /**
  * ice_get_rx_buf - Fetch Rx buffer and synchronize data for use
  * @rx_ring: Rx descriptor ring to transact packets on
+ * @skb: skb to be used
  * @size: size of buffer to add to skb
  *
  * This function will pull an Rx buffer from the ring and synchronize it
  * for use by the CPU.
  */
 static struct ice_rx_buf *
-ice_get_rx_buf(struct ice_ring *rx_ring, const unsigned int size)
+ice_get_rx_buf(struct ice_ring *rx_ring, struct sk_buff **skb,
+	       const unsigned int size)
 {
 	struct ice_rx_buf *rx_buf;
 
 	rx_buf = &rx_ring->rx_buf[rx_ring->next_to_clean];
 	prefetchw(rx_buf->page);
+	*skb = rx_buf->skb;
 
 	/* we are reusing so sync this buffer for CPU use */
 	dma_sync_single_range_for_cpu(rx_ring->dev, rx_buf->dma,
@@ -674,50 +649,64 @@ ice_get_rx_buf(struct ice_ring *rx_ring, const unsigned int size)
 }
 
 /**
- * ice_fetch_rx_buf - Allocate skb and populate it
+ * ice_construct_skb - Allocate skb and populate it
  * @rx_ring: Rx descriptor ring to transact packets on
  * @rx_buf: Rx buffer to pull data from
  * @size: the length of the packet
  *
- * This function allocates an skb on the fly, and populates it with the page
- * data from the current receive descriptor, taking care to set up the skb
- * correctly, as well as handling calling the page recycle function if
- * necessary.
+ * This function allocates an skb. It then populates it with the page
+ * data from the current receive descriptor, taking care to set up the
+ * skb correctly.
  */
 static struct sk_buff *
-ice_fetch_rx_buf(struct ice_ring *rx_ring, struct ice_rx_buf *rx_buf,
-		 unsigned int size)
+ice_construct_skb(struct ice_ring *rx_ring, struct ice_rx_buf *rx_buf,
+		  unsigned int size)
 {
-	struct sk_buff *skb = rx_buf->skb;
-
-	if (likely(!skb)) {
-		u8 *page_addr = page_address(rx_buf->page) +
-				rx_buf->page_offset;
+	void *va = page_address(rx_buf->page) + rx_buf->page_offset;
+	unsigned int headlen;
+	struct sk_buff *skb;
 
-		/* prefetch first cache line of first page */
-		prefetch(page_addr);
+	/* prefetch first cache line of first page */
+	prefetch(va);
 #if L1_CACHE_BYTES < 128
-		prefetch((void *)(page_addr + L1_CACHE_BYTES));
+	prefetch((u8 *)va + L1_CACHE_BYTES);
 #endif /* L1_CACHE_BYTES */
 
-		/* allocate a skb to store the frags */
-		skb = __napi_alloc_skb(&rx_ring->q_vector->napi,
-				       ICE_RX_HDR_SIZE,
-				       GFP_ATOMIC | __GFP_NOWARN);
-		if (unlikely(!skb)) {
-			rx_ring->rx_stats.alloc_buf_failed++;
-			rx_buf->pagecnt_bias++;
-			return NULL;
-		}
+	/* allocate a skb to store the frags */
+	skb = __napi_alloc_skb(&rx_ring->q_vector->napi, ICE_RX_HDR_SIZE,
+			       GFP_ATOMIC | __GFP_NOWARN);
+	if (unlikely(!skb))
+		return NULL;
+
+	skb_record_rx_queue(skb, rx_ring->q_index);
+	/* Determine available headroom for copy */
+	headlen = size;
+	if (headlen > ICE_RX_HDR_SIZE)
+		headlen = eth_get_headlen(va, ICE_RX_HDR_SIZE);
 
-		skb_record_rx_queue(skb, rx_ring->q_index);
+	/* align pull length to size of long to optimize memcpy performance */
+	memcpy(__skb_put(skb, headlen), va, ALIGN(headlen, sizeof(long)));
+
+	/* if we exhaust the linear part then add what is left as a frag */
+	size -= headlen;
+	if (size) {
+#if (PAGE_SIZE >= 8192)
+		unsigned int truesize = SKB_DATA_ALIGN(size);
+#else
+		unsigned int truesize = ICE_RXBUF_2048;
+#endif
+		skb_add_rx_frag(skb, 0, rx_buf->page,
+				rx_buf->page_offset + headlen, size, truesize);
+		/* buffer is used by skb, update page_offset */
+		ice_rx_buf_adjust_pg_offset(rx_buf, truesize);
 	} else {
-		rx_buf->skb = NULL;
+		/* buffer is unused, reset bias back to rx_buf; data was copied
+		 * onto skb's linear part so there's no need for adjusting
+		 * page offset and we can reuse this buffer as-is
+		 */
+		rx_buf->pagecnt_bias++;
 	}
 
-	/* pull page into skb */
-	ice_add_rx_frag(rx_buf, skb, size);
-
 	return skb;
 }
 
@@ -744,6 +733,7 @@ static void ice_put_rx_buf(struct ice_ring *rx_ring, struct ice_rx_buf *rx_buf)
 
 	/* clear contents of buffer_info */
 	rx_buf->page = NULL;
+	rx_buf->skb = NULL;
 }
 
 /**
@@ -1024,11 +1014,19 @@ static int ice_clean_rx_irq(struct ice_ring *rx_ring, int budget)
 		size = le16_to_cpu(rx_desc->wb.pkt_len) &
 			ICE_RX_FLX_DESC_PKT_LEN_M;
 
-		rx_buf = ice_get_rx_buf(rx_ring, size);
+		rx_buf = ice_get_rx_buf(rx_ring, &skb, size);
 		/* allocate (if needed) and populate skb */
-		skb = ice_fetch_rx_buf(rx_ring, rx_buf, size);
-		if (!skb)
+		if (skb)
+			ice_add_rx_frag(rx_buf, skb, size);
+		else
+			skb = ice_construct_skb(rx_ring, rx_buf, size);
+
+		/* exit if we failed to retrieve a buffer */
+		if (!skb) {
+			rx_ring->rx_stats.alloc_buf_failed++;
+			rx_buf->pagecnt_bias++;
 			break;
+		}
 
 		ice_put_rx_buf(rx_ring, rx_buf);
 		cleaned_count++;
-- 
2.14.5



* [Intel-wired-lan] [PATCH S14 07/15] ice: map rx buffer pages with DMA attributes
  2019-02-13 18:51 [Intel-wired-lan] [PATCH S14 00/15] Implementation updates for ice Anirudh Venkataramanan
                   ` (5 preceding siblings ...)
  2019-02-13 18:51 ` [Intel-wired-lan] [PATCH S14 06/15] ice: Limit the ice_add_rx_frag to frag addition Anirudh Venkataramanan
@ 2019-02-13 18:51 ` Anirudh Venkataramanan
  2019-02-28 23:16   ` Bowers, AndrewX
  2019-02-13 18:51 ` [Intel-wired-lan] [PATCH S14 08/15] ice: Prevent unintended multiple chain resets Anirudh Venkataramanan
                   ` (7 subsequent siblings)
  14 siblings, 1 reply; 31+ messages in thread
From: Anirudh Venkataramanan @ 2019-02-13 18:51 UTC (permalink / raw)
  To: intel-wired-lan

From: Maciej Fijalkowski <maciej.fijalkowski@intel.com>

Provide the DMA_ATTR_WEAK_ORDERING and DMA_ATTR_SKIP_CPU_SYNC
attributes to the DMA API during mapping operations on the Rx side.
With this change, non-x86 platforms will be able to sync only what is
actually being used (the 2k buffer) instead of the entire page. This
should yield a slight performance improvement.

Furthermore, on non-x86 platforms a DMA unmap may destroy changes that
the CPU made to the buffer. Using the DMA_ATTR_SKIP_CPU_SYNC attribute
fixes this issue.

Also add a dma_sync_single_range_for_device call during Rx buffer
assignment to make sure the cache lines are cleared before the device
attempts to write to the buffer.
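
The mapping/sync pattern this establishes, in outline (condensed from
the hunks below):

	/* map the whole page, but leave CPU syncs to the driver */
	dma = dma_map_page_attrs(dev, page, 0, PAGE_SIZE,
				 DMA_FROM_DEVICE, ICE_RX_DMA_ATTR);

	/* before handing the 2k buffer to the hardware */
	dma_sync_single_range_for_device(dev, dma, page_offset,
					 ICE_RXBUF_2048, DMA_FROM_DEVICE);

	/* after write-back, sync only the bytes actually used */
	dma_sync_single_range_for_cpu(dev, dma, page_offset, size,
				      DMA_FROM_DEVICE);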

Signed-off-by: Maciej Fijalkowski <maciej.fijalkowski@intel.com>
Signed-off-by: Anirudh Venkataramanan <anirudh.venkataramanan@intel.com>
---
[Anirudh Venkataramanan <anirudh.venkataramanan@intel.com> cleaned up commit message]
---
 drivers/net/ethernet/intel/ice/ice_txrx.c | 24 ++++++++++++++++++++----
 drivers/net/ethernet/intel/ice/ice_txrx.h |  3 +++
 2 files changed, 23 insertions(+), 4 deletions(-)

diff --git a/drivers/net/ethernet/intel/ice/ice_txrx.c b/drivers/net/ethernet/intel/ice/ice_txrx.c
index aaa29ac18cdb..2f5981dbdff9 100644
--- a/drivers/net/ethernet/intel/ice/ice_txrx.c
+++ b/drivers/net/ethernet/intel/ice/ice_txrx.c
@@ -282,7 +282,16 @@ void ice_clean_rx_ring(struct ice_ring *rx_ring)
 		if (!rx_buf->page)
 			continue;
 
-		dma_unmap_page(dev, rx_buf->dma, PAGE_SIZE, DMA_FROM_DEVICE);
+		/* Invalidate cache lines that may have been written to by
+		 * device so that we avoid corrupting memory.
+		 */
+		dma_sync_single_range_for_cpu(dev, rx_buf->dma,
+					      rx_buf->page_offset,
+					      ICE_RXBUF_2048, DMA_FROM_DEVICE);
+
+		/* free resources associated with mapping */
+		dma_unmap_page_attrs(dev, rx_buf->dma, PAGE_SIZE,
+				     DMA_FROM_DEVICE, ICE_RX_DMA_ATTR);
 		__page_frag_cache_drain(rx_buf->page, rx_buf->pagecnt_bias);
 
 		rx_buf->page = NULL;
@@ -409,7 +418,8 @@ ice_alloc_mapped_page(struct ice_ring *rx_ring, struct ice_rx_buf *bi)
 	}
 
 	/* map page for use */
-	dma = dma_map_page(rx_ring->dev, page, 0, PAGE_SIZE, DMA_FROM_DEVICE);
+	dma = dma_map_page_attrs(rx_ring->dev, page, 0, PAGE_SIZE,
+				 DMA_FROM_DEVICE, ICE_RX_DMA_ATTR);
 
 	/* if mapping failed free memory back to system since
 	 * there isn't much point in holding memory we can't use
@@ -454,6 +464,12 @@ bool ice_alloc_rx_bufs(struct ice_ring *rx_ring, u16 cleaned_count)
 		if (!ice_alloc_mapped_page(rx_ring, bi))
 			goto no_bufs;
 
+		/* sync the buffer for use by the device */
+		dma_sync_single_range_for_device(rx_ring->dev, bi->dma,
+						 bi->page_offset,
+						 ICE_RXBUF_2048,
+						 DMA_FROM_DEVICE);
+
 		/* Refresh the desc even if buffer_addrs didn't change
 		 * because each write-back erases this info.
 		 */
@@ -726,8 +742,8 @@ static void ice_put_rx_buf(struct ice_ring *rx_ring, struct ice_rx_buf *rx_buf)
 		rx_ring->rx_stats.page_reuse_count++;
 	} else {
 		/* we are not reusing the buffer so unmap it */
-		dma_unmap_page(rx_ring->dev, rx_buf->dma, PAGE_SIZE,
-			       DMA_FROM_DEVICE);
+		dma_unmap_page_attrs(rx_ring->dev, rx_buf->dma, PAGE_SIZE,
+				     DMA_FROM_DEVICE, ICE_RX_DMA_ATTR);
 		__page_frag_cache_drain(rx_buf->page, rx_buf->pagecnt_bias);
 	}
 
diff --git a/drivers/net/ethernet/intel/ice/ice_txrx.h b/drivers/net/ethernet/intel/ice/ice_txrx.h
index 43b39e7ce470..bd446ed423e5 100644
--- a/drivers/net/ethernet/intel/ice/ice_txrx.h
+++ b/drivers/net/ethernet/intel/ice/ice_txrx.h
@@ -47,6 +47,9 @@
 #define ICE_TX_FLAGS_VLAN_M	0xffff0000
 #define ICE_TX_FLAGS_VLAN_S	16
 
+#define ICE_RX_DMA_ATTR \
+	(DMA_ATTR_SKIP_CPU_SYNC | DMA_ATTR_WEAK_ORDERING)
+
 struct ice_tx_buf {
 	struct ice_tx_desc *next_to_watch;
 	struct sk_buff *skb;
-- 
2.14.5



* [Intel-wired-lan] [PATCH S14 08/15] ice: Prevent unintended multiple chain resets
  2019-02-13 18:51 [Intel-wired-lan] [PATCH S14 00/15] Implementation updates for ice Anirudh Venkataramanan
                   ` (6 preceding siblings ...)
  2019-02-13 18:51 ` [Intel-wired-lan] [PATCH S14 07/15] ice: map rx buffer pages with DMA attributes Anirudh Venkataramanan
@ 2019-02-13 18:51 ` Anirudh Venkataramanan
  2019-02-28 23:18   ` Bowers, AndrewX
  2019-02-13 18:51 ` [Intel-wired-lan] [PATCH S14 09/15] ice: change VF VSI tc info along with num_queues Anirudh Venkataramanan
                   ` (6 subsequent siblings)
  14 siblings, 1 reply; 31+ messages in thread
From: Anirudh Venkataramanan @ 2019-02-13 18:51 UTC (permalink / raw)
  To: intel-wired-lan

From: Dave Ertman <david.m.ertman@intel.com>

In the current implementation of ice_reset_subtask, if multiple reset
types are set in pf->state, only the most intrusive one is meant to be
performed, but the bits requesting the other types are not being
cleared. This would lead to another reset being performed the next
time the service task is scheduled.

Change the flow of ice_reset_subtask so that all reset request bits in
pf->state are cleared, while still performing the most intrusive of
the requested resets.
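
In sketch form, the request bits are consumed in increasing order of
severity, so the strongest request wins and all bits end up cleared
(mirroring the hunk below):

	if (test_and_clear_bit(__ICE_CORER_RECV, pf->state))
		reset_type = ICE_RESET_CORER;
	if (test_and_clear_bit(__ICE_GLOBR_RECV, pf->state))
		reset_type = ICE_RESET_GLOBR;
	if (reset_type == ICE_RESET_INVAL)
		return;	/* nothing valid was requested */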

Signed-off-by: Dave Ertman <david.m.ertman@intel.com>
Signed-off-by: Anirudh Venkataramanan <anirudh.venkataramanan@intel.com>
---
 drivers/net/ethernet/intel/ice/ice_main.c | 10 ++++++++--
 1 file changed, 8 insertions(+), 2 deletions(-)

diff --git a/drivers/net/ethernet/intel/ice/ice_main.c b/drivers/net/ethernet/intel/ice/ice_main.c
index c0b7e695cc43..ced774dd879b 100644
--- a/drivers/net/ethernet/intel/ice/ice_main.c
+++ b/drivers/net/ethernet/intel/ice/ice_main.c
@@ -478,8 +478,14 @@ static void ice_reset_subtask(struct ice_pf *pf)
 	 * for the reset now), poll for reset done, rebuild and return.
 	 */
 	if (test_bit(__ICE_RESET_OICR_RECV, pf->state)) {
-		clear_bit(__ICE_GLOBR_RECV, pf->state);
-		clear_bit(__ICE_CORER_RECV, pf->state);
+		/* Perform the largest reset requested */
+		if (test_and_clear_bit(__ICE_CORER_RECV, pf->state))
+			reset_type = ICE_RESET_CORER;
+		if (test_and_clear_bit(__ICE_GLOBR_RECV, pf->state))
+			reset_type = ICE_RESET_GLOBR;
+		/* return if no valid reset type requested */
+		if (reset_type == ICE_RESET_INVAL)
+			return;
 		if (!test_bit(__ICE_PREPARED_FOR_RESET, pf->state))
 			ice_prepare_for_reset(pf);
 
-- 
2.14.5



* [Intel-wired-lan] [PATCH S14 09/15] ice: change VF VSI tc info along with num_queues
  2019-02-13 18:51 [Intel-wired-lan] [PATCH S14 00/15] Implementation updates for ice Anirudh Venkataramanan
                   ` (7 preceding siblings ...)
  2019-02-13 18:51 ` [Intel-wired-lan] [PATCH S14 08/15] ice: Prevent unintended multiple chain resets Anirudh Venkataramanan
@ 2019-02-13 18:51 ` Anirudh Venkataramanan
  2019-02-28 23:45   ` Bowers, AndrewX
  2019-02-13 18:51 ` [Intel-wired-lan] [PATCH S14 10/15] ice: add and use new ice_for_each_traffic_class() macro Anirudh Venkataramanan
                   ` (5 subsequent siblings)
  14 siblings, 1 reply; 31+ messages in thread
From: Anirudh Venkataramanan @ 2019-02-13 18:51 UTC (permalink / raw)
  To: intel-wired-lan

From: Preethi Banala <preethi.banala@intel.com>

Update the VF VSI TC info along with vsi->num_txq/num_rxq when the VF
requests to configure queues.

Signed-off-by: Preethi Banala <preethi.banala@intel.com>
Signed-off-by: Anirudh Venkataramanan <anirudh.venkataramanan@intel.com>
---
 drivers/net/ethernet/intel/ice/ice_virtchnl_pf.c | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/drivers/net/ethernet/intel/ice/ice_virtchnl_pf.c b/drivers/net/ethernet/intel/ice/ice_virtchnl_pf.c
index ca8780891540..cc15b3c90656 100644
--- a/drivers/net/ethernet/intel/ice/ice_virtchnl_pf.c
+++ b/drivers/net/ethernet/intel/ice/ice_virtchnl_pf.c
@@ -1927,6 +1927,9 @@ static int ice_vc_cfg_qs_msg(struct ice_vf *vf, u8 *msg)
 	 */
 	vsi->num_txq = qci->num_queue_pairs;
 	vsi->num_rxq = qci->num_queue_pairs;
+	/* All queues of VF VSI are in TC 0 */
+	vsi->tc_cfg.tc_info[0].qcount_tx = qci->num_queue_pairs;
+	vsi->tc_cfg.tc_info[0].qcount_rx = qci->num_queue_pairs;
 
 	if (!ice_vsi_cfg_lan_txqs(vsi) && !ice_vsi_cfg_rxqs(vsi))
 		aq_ret = 0;
-- 
2.14.5



* [Intel-wired-lan] [PATCH S14 10/15] ice: add and use new ice_for_each_traffic_class() macro
  2019-02-13 18:51 [Intel-wired-lan] [PATCH S14 00/15] Implementation updates for ice Anirudh Venkataramanan
                   ` (8 preceding siblings ...)
  2019-02-13 18:51 ` [Intel-wired-lan] [PATCH S14 09/15] ice: change VF VSI tc info along with num_queues Anirudh Venkataramanan
@ 2019-02-13 18:51 ` Anirudh Venkataramanan
  2019-02-28 23:46   ` Bowers, AndrewX
  2019-02-13 18:51 ` [Intel-wired-lan] [PATCH S14 11/15] ice: Create a generic name for the ice_rx_flg64_bits structure Anirudh Venkataramanan
                   ` (4 subsequent siblings)
  14 siblings, 1 reply; 31+ messages in thread
From: Anirudh Venkataramanan @ 2019-02-13 18:51 UTC (permalink / raw)
  To: intel-wired-lan

From: Bruce Allan <bruce.w.allan@intel.com>

There are numerous for() loops iterating over each of the max traffic
classes.  Use a simple iterator macro instead to make the code cleaner.
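
Typical usage, mirroring the hunks below:

	u8 tc;

	ice_for_each_traffic_class(tc) {
		/* configuration is possible only if TC node exists */
		if (!ice_sched_get_tc_node(pi, tc))
			continue;
		/* per-TC configuration goes here */
	}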

Signed-off-by: Bruce Allan <bruce.w.allan@intel.com>
Signed-off-by: Anirudh Venkataramanan <anirudh.venkataramanan@intel.com>
---
 drivers/net/ethernet/intel/ice/ice_common.c | 2 +-
 drivers/net/ethernet/intel/ice/ice_lib.c    | 4 ++--
 drivers/net/ethernet/intel/ice/ice_sched.c  | 2 +-
 drivers/net/ethernet/intel/ice/ice_type.h   | 3 +++
 4 files changed, 7 insertions(+), 4 deletions(-)

diff --git a/drivers/net/ethernet/intel/ice/ice_common.c b/drivers/net/ethernet/intel/ice/ice_common.c
index be67d07b75cb..ca9a8c52a8d6 100644
--- a/drivers/net/ethernet/intel/ice/ice_common.c
+++ b/drivers/net/ethernet/intel/ice/ice_common.c
@@ -2949,7 +2949,7 @@ ice_cfg_vsi_qs(struct ice_port_info *pi, u16 vsi_handle, u8 tc_bitmap,
 
 	mutex_lock(&pi->sched_lock);
 
-	for (i = 0; i < ICE_MAX_TRAFFIC_CLASS; i++) {
+	ice_for_each_traffic_class(i) {
 		/* configuration is possible only if TC node is present */
 		if (!ice_sched_get_tc_node(pi, i))
 			continue;
diff --git a/drivers/net/ethernet/intel/ice/ice_lib.c b/drivers/net/ethernet/intel/ice/ice_lib.c
index 914e59272a91..f0c87c40fd9d 100644
--- a/drivers/net/ethernet/intel/ice/ice_lib.c
+++ b/drivers/net/ethernet/intel/ice/ice_lib.c
@@ -868,7 +868,7 @@ static void ice_vsi_setup_q_map(struct ice_vsi *vsi, struct ice_vsi_ctx *ctxt)
 	/* find the (rounded up) power-of-2 of qcount */
 	pow = order_base_2(qcount_rx);
 
-	for (i = 0; i < ICE_MAX_TRAFFIC_CLASS; i++) {
+	ice_for_each_traffic_class(i) {
 		if (!(vsi->tc_cfg.ena_tc & BIT(i))) {
 			/* TC is not enabled */
 			vsi->tc_cfg.tc_info[i].qoffset = 0;
@@ -1715,7 +1715,7 @@ ice_vsi_cfg_txqs(struct ice_vsi *vsi, struct ice_ring **rings, int offset)
 	num_q_grps = 1;
 
 	/* set up and configure the Tx queues for each enabled TC */
-	for (tc = 0; tc < ICE_MAX_TRAFFIC_CLASS; tc++) {
+	ice_for_each_traffic_class(tc) {
 		if (!(vsi->tc_cfg.ena_tc & BIT(tc)))
 			break;
 
diff --git a/drivers/net/ethernet/intel/ice/ice_sched.c b/drivers/net/ethernet/intel/ice/ice_sched.c
index efbebf7a050f..e0218f4c8f0b 100644
--- a/drivers/net/ethernet/intel/ice/ice_sched.c
+++ b/drivers/net/ethernet/intel/ice/ice_sched.c
@@ -1606,7 +1606,7 @@ ice_sched_rm_vsi_cfg(struct ice_port_info *pi, u16 vsi_handle, u8 owner)
 	if (!vsi_ctx)
 		goto exit_sched_rm_vsi_cfg;
 
-	for (i = 0; i < ICE_MAX_TRAFFIC_CLASS; i++) {
+	ice_for_each_traffic_class(i) {
 		struct ice_sched_node *vsi_node, *tc_node;
 		u8 j = 0;
 
diff --git a/drivers/net/ethernet/intel/ice/ice_type.h b/drivers/net/ethernet/intel/ice/ice_type.h
index 584f260f2e4f..3a4e67484487 100644
--- a/drivers/net/ethernet/intel/ice/ice_type.h
+++ b/drivers/net/ethernet/intel/ice/ice_type.h
@@ -210,6 +210,9 @@ struct ice_nvm_info {
 #define ICE_MAX_TRAFFIC_CLASS 8
 #define ICE_TXSCHED_MAX_BRANCHES ICE_MAX_TRAFFIC_CLASS
 
+#define ice_for_each_traffic_class(_i)	\
+	for ((_i) = 0; (_i) < ICE_MAX_TRAFFIC_CLASS; (_i)++)
+
 struct ice_sched_node {
 	struct ice_sched_node *parent;
 	struct ice_sched_node *sibling; /* next sibling in the same layer */
-- 
2.14.5



* [Intel-wired-lan] [PATCH S14 11/15] ice: Create a generic name for the ice_rx_flg64_bits structure
  2019-02-13 18:51 [Intel-wired-lan] [PATCH S14 00/15] Implementation updates for ice Anirudh Venkataramanan
                   ` (9 preceding siblings ...)
  2019-02-13 18:51 ` [Intel-wired-lan] [PATCH S14 10/15] ice: add and use new ice_for_each_traffic_class() macro Anirudh Venkataramanan
@ 2019-02-13 18:51 ` Anirudh Venkataramanan
  2019-02-28 23:46   ` Bowers, AndrewX
  2019-02-13 18:51 ` [Intel-wired-lan] [PATCH S14 12/15] ice: Remove unnecessary newlines from log messages Anirudh Venkataramanan
                   ` (3 subsequent siblings)
  14 siblings, 1 reply; 31+ messages in thread
From: Anirudh Venkataramanan @ 2019-02-13 18:51 UTC (permalink / raw)
  To: intel-wired-lan

From: Chinh T Cao <chinh.t.cao@intel.com>

This structure is used to define the packet flags. These flags are
applicable to both Tx and Rx packets. Thus, this patch changes its
name from ice_rx_flg64_bits to ice_flg64_bits, and renames its members
accordingly.

Signed-off-by: Chinh T Cao <chinh.t.cao@intel.com>
Reviewed-by: Bruce Allan <bruce.w.allan@intel.com>
Signed-off-by: Anirudh Venkataramanan <anirudh.venkataramanan@intel.com>
---
[Anirudh Venkataramanan <anirudh.venkataramanan@intel.com> cleaned up commit message]
---
 drivers/net/ethernet/intel/ice/ice_common.c    | 26 ++++++++++----------
 drivers/net/ethernet/intel/ice/ice_lan_tx_rx.h | 34 +++++++++++++-------------
 2 files changed, 30 insertions(+), 30 deletions(-)

diff --git a/drivers/net/ethernet/intel/ice/ice_common.c b/drivers/net/ethernet/intel/ice/ice_common.c
index ca9a8c52a8d6..5e7a31421c0d 100644
--- a/drivers/net/ethernet/intel/ice/ice_common.c
+++ b/drivers/net/ethernet/intel/ice/ice_common.c
@@ -358,22 +358,22 @@ static void ice_init_flex_flags(struct ice_hw *hw, enum ice_rxdid prof_id)
 	 */
 	case ICE_RXDID_FLEX_NIC:
 	case ICE_RXDID_FLEX_NIC_2:
-		ICE_PROG_FLG_ENTRY(hw, prof_id, ICE_RXFLG_PKT_FRG,
-				   ICE_RXFLG_UDP_GRE, ICE_RXFLG_PKT_DSI,
-				   ICE_RXFLG_FIN, idx++);
+		ICE_PROG_FLG_ENTRY(hw, prof_id, ICE_FLG_PKT_FRG,
+				   ICE_FLG_UDP_GRE, ICE_FLG_PKT_DSI,
+				   ICE_FLG_FIN, idx++);
 		/* flex flag 1 is not used for flexi-flag programming, skipping
 		 * these four FLG64 bits.
 		 */
-		ICE_PROG_FLG_ENTRY(hw, prof_id, ICE_RXFLG_SYN, ICE_RXFLG_RST,
-				   ICE_RXFLG_PKT_DSI, ICE_RXFLG_PKT_DSI, idx++);
-		ICE_PROG_FLG_ENTRY(hw, prof_id, ICE_RXFLG_PKT_DSI,
-				   ICE_RXFLG_PKT_DSI, ICE_RXFLG_EVLAN_x8100,
-				   ICE_RXFLG_EVLAN_x9100, idx++);
-		ICE_PROG_FLG_ENTRY(hw, prof_id, ICE_RXFLG_VLAN_x8100,
-				   ICE_RXFLG_TNL_VLAN, ICE_RXFLG_TNL_MAC,
-				   ICE_RXFLG_TNL0, idx++);
-		ICE_PROG_FLG_ENTRY(hw, prof_id, ICE_RXFLG_TNL1, ICE_RXFLG_TNL2,
-				   ICE_RXFLG_PKT_DSI, ICE_RXFLG_PKT_DSI, idx);
+		ICE_PROG_FLG_ENTRY(hw, prof_id, ICE_FLG_SYN, ICE_FLG_RST,
+				   ICE_FLG_PKT_DSI, ICE_FLG_PKT_DSI, idx++);
+		ICE_PROG_FLG_ENTRY(hw, prof_id, ICE_FLG_PKT_DSI,
+				   ICE_FLG_PKT_DSI, ICE_FLG_EVLAN_x8100,
+				   ICE_FLG_EVLAN_x9100, idx++);
+		ICE_PROG_FLG_ENTRY(hw, prof_id, ICE_FLG_VLAN_x8100,
+				   ICE_FLG_TNL_VLAN, ICE_FLG_TNL_MAC,
+				   ICE_FLG_TNL0, idx++);
+		ICE_PROG_FLG_ENTRY(hw, prof_id, ICE_FLG_TNL1, ICE_FLG_TNL2,
+				   ICE_FLG_PKT_DSI, ICE_FLG_PKT_DSI, idx);
 		break;
 
 	default:
diff --git a/drivers/net/ethernet/intel/ice/ice_lan_tx_rx.h b/drivers/net/ethernet/intel/ice/ice_lan_tx_rx.h
index ef4c79b5aa32..2e87b69aff4f 100644
--- a/drivers/net/ethernet/intel/ice/ice_lan_tx_rx.h
+++ b/drivers/net/ethernet/intel/ice/ice_lan_tx_rx.h
@@ -208,23 +208,23 @@ enum ice_flex_rx_mdid {
 	ICE_RX_MDID_HASH_HIGH,
 };
 
-/* Rx Flag64 packet flag bits */
-enum ice_rx_flg64_bits {
-	ICE_RXFLG_PKT_DSI	= 0,
-	ICE_RXFLG_EVLAN_x8100	= 15,
-	ICE_RXFLG_EVLAN_x9100,
-	ICE_RXFLG_VLAN_x8100,
-	ICE_RXFLG_TNL_MAC	= 22,
-	ICE_RXFLG_TNL_VLAN,
-	ICE_RXFLG_PKT_FRG,
-	ICE_RXFLG_FIN		= 32,
-	ICE_RXFLG_SYN,
-	ICE_RXFLG_RST,
-	ICE_RXFLG_TNL0		= 38,
-	ICE_RXFLG_TNL1,
-	ICE_RXFLG_TNL2,
-	ICE_RXFLG_UDP_GRE,
-	ICE_RXFLG_RSVD		= 63
+/* RX/TX Flag64 packet flag bits */
+enum ice_flg64_bits {
+	ICE_FLG_PKT_DSI		= 0,
+	ICE_FLG_EVLAN_x8100	= 15,
+	ICE_FLG_EVLAN_x9100,
+	ICE_FLG_VLAN_x8100,
+	ICE_FLG_TNL_MAC		= 22,
+	ICE_FLG_TNL_VLAN,
+	ICE_FLG_PKT_FRG,
+	ICE_FLG_FIN		= 32,
+	ICE_FLG_SYN,
+	ICE_FLG_RST,
+	ICE_FLG_TNL0		= 38,
+	ICE_FLG_TNL1,
+	ICE_FLG_TNL2,
+	ICE_FLG_UDP_GRE,
+	ICE_FLG_RSVD		= 63
 };
 
 /* for ice_32byte_rx_flex_desc.ptype_flexi_flags0 member */
-- 
2.14.5


^ permalink raw reply related	[flat|nested] 31+ messages in thread

* [Intel-wired-lan] [PATCH S14 12/15] ice: Remove unnecessary newlines from log messages
  2019-02-13 18:51 [Intel-wired-lan] [PATCH S14 00/15] Implementation updates for ice Anirudh Venkataramanan
                   ` (10 preceding siblings ...)
  2019-02-13 18:51 ` [Intel-wired-lan] [PATCH S14 11/15] ice: Create a generic name for the ice_rx_flg64_bits structure Anirudh Venkataramanan
@ 2019-02-13 18:51 ` Anirudh Venkataramanan
  2019-02-28 23:47   ` Bowers, AndrewX
  2019-02-13 18:51 ` [Intel-wired-lan] [PATCH S14 13/15] ice: use virt channel status codes Anirudh Venkataramanan
                   ` (2 subsequent siblings)
  14 siblings, 1 reply; 31+ messages in thread
From: Anirudh Venkataramanan @ 2019-02-13 18:51 UTC (permalink / raw)
  To: intel-wired-lan

From: Jeremiah Kyle <jeremiah.kyle@intel.com>

Two log messages contained newlines in the middle of the message. This
resulted in unexpected driver log output.

This patch removes the newlines to restore consistency with the rest of
the driver log messages.

Signed-off-by: Jeremiah Kyle <jeremiah.kyle@intel.com>
Signed-off-by: Anirudh Venkataramanan <anirudh.venkataramanan@intel.com>
---
 drivers/net/ethernet/intel/ice/ice_virtchnl_pf.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/drivers/net/ethernet/intel/ice/ice_virtchnl_pf.c b/drivers/net/ethernet/intel/ice/ice_virtchnl_pf.c
index cc15b3c90656..5a6d6c8e3060 100644
--- a/drivers/net/ethernet/intel/ice/ice_virtchnl_pf.c
+++ b/drivers/net/ethernet/intel/ice/ice_virtchnl_pf.c
@@ -2611,7 +2611,7 @@ void ice_vc_process_vf_msg(struct ice_pf *pf, struct ice_rq_event_info *event)
 		 * as it is busy with pending work.
 		 */
 		dev_info(&pf->pdev->dev,
-			 "PF failed to honor VF %d, opcode %d\n, error %d\n",
+			 "PF failed to honor VF %d, opcode %d, error %d\n",
 			 vf_id, v_opcode, err);
 	}
 }
@@ -2771,7 +2771,7 @@ int ice_set_vf_mac(struct net_device *netdev, int vf_id, u8 *mac)
 	ether_addr_copy(vf->dflt_lan_addr.addr, mac);
 	vf->pf_set_mac = true;
 	netdev_info(netdev,
-		    "mac on VF %d set to %pM\n. VF driver will be reinitialized\n",
+		    "mac on VF %d set to %pM. VF driver will be reinitialized\n",
 		    vf_id, mac);
 
 	ice_vc_dis_vf(vf);
-- 
2.14.5


^ permalink raw reply related	[flat|nested] 31+ messages in thread
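
To illustrate the bug being fixed: a newline embedded mid-format-string
splits one logical message across two log lines, and the continuation line
loses the usual device prefix. A sketch of the effect (device name and
values made up):

	dev_info(&pf->pdev->dev,
		 "PF failed to honor VF %d, opcode %d\n, error %d\n",
		 vf_id, v_opcode, err);

	/* logs roughly as:
	 *   ice 0000:3b:00.0: PF failed to honor VF 0, opcode 6
	 *   , error -5
	 * with the fix, a single well-formed line is emitted instead
	 */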

* [Intel-wired-lan] [PATCH S14 13/15] ice: use virt channel status codes
  2019-02-13 18:51 [Intel-wired-lan] [PATCH S14 00/15] Implementation updates for ice Anirudh Venkataramanan
                   ` (11 preceding siblings ...)
  2019-02-13 18:51 ` [Intel-wired-lan] [PATCH S14 12/15] ice: Remove unnecessary newlines from log messages Anirudh Venkataramanan
@ 2019-02-13 18:51 ` Anirudh Venkataramanan
  2019-02-28 23:47   ` Bowers, AndrewX
  2019-02-13 18:51 ` [Intel-wired-lan] [PATCH S14 14/15] ice: Put __ICE_PREPARED_FOR_RESET check in ice_prepare_for_reset Anirudh Venkataramanan
  2019-02-13 18:51 ` [Intel-wired-lan] [PATCH S14 15/15] ice: Implement pci_error_handler ops Anirudh Venkataramanan
  14 siblings, 1 reply; 31+ messages in thread
From: Anirudh Venkataramanan @ 2019-02-13 18:51 UTC (permalink / raw)
  To: intel-wired-lan

From: Mitch Williams <mitch.a.williams@intel.com>

When communicating with the AVF driver, we need to use the status codes
from virtchnl.h, not our own ice-specific codes. Without this, when an
error occurs, the VF will report nonsensical results.

NOTE: this depends on changes made to include/linux/avf/virtchnl.h by
commit bb58fd7eeffc ("i40e: Update status codes")

Signed-off-by: Mitch Williams <mitch.a.williams@intel.com>
Signed-off-by: Anirudh Venkataramanan <anirudh.venkataramanan@intel.com>
---
[Anirudh Venkataramanan <anirudh.venkataramanan@intel.com> cleaned up commit message]
---
 drivers/net/ethernet/intel/ice/ice_virtchnl_pf.c | 273 +++++++++++++----------
 1 file changed, 154 insertions(+), 119 deletions(-)

diff --git a/drivers/net/ethernet/intel/ice/ice_virtchnl_pf.c b/drivers/net/ethernet/intel/ice/ice_virtchnl_pf.c
index 5a6d6c8e3060..e2517ab696e9 100644
--- a/drivers/net/ethernet/intel/ice/ice_virtchnl_pf.c
+++ b/drivers/net/ethernet/intel/ice/ice_virtchnl_pf.c
@@ -4,6 +4,37 @@
 #include "ice.h"
 #include "ice_lib.h"
 
+/**
+ * ice_err_to_virt_err - translate errors for VF return code
+ * @ice_err: error return code
+ */
+static enum virtchnl_status_code ice_err_to_virt_err(enum ice_status ice_err)
+{
+	switch (ice_err) {
+	case ICE_SUCCESS:
+		return VIRTCHNL_STATUS_SUCCESS;
+	case ICE_ERR_BAD_PTR:
+	case ICE_ERR_INVAL_SIZE:
+	case ICE_ERR_DEVICE_NOT_SUPPORTED:
+	case ICE_ERR_PARAM:
+	case ICE_ERR_CFG:
+		return VIRTCHNL_STATUS_ERR_PARAM;
+	case ICE_ERR_NO_MEMORY:
+		return VIRTCHNL_STATUS_ERR_NO_MEMORY;
+	case ICE_ERR_NOT_READY:
+	case ICE_ERR_RESET_FAILED:
+	case ICE_ERR_FW_API_VER:
+	case ICE_ERR_AQ_ERROR:
+	case ICE_ERR_AQ_TIMEOUT:
+	case ICE_ERR_AQ_FULL:
+	case ICE_ERR_AQ_NO_WORK:
+	case ICE_ERR_AQ_EMPTY:
+		return VIRTCHNL_STATUS_ERR_ADMIN_QUEUE_ERROR;
+	default:
+		return VIRTCHNL_STATUS_ERR_NOT_SUPPORTED;
+	}
+}
+
 /**
  * ice_vc_vf_broadcast - Broadcast a message to all VFs on PF
  * @pf: pointer to the PF structure
@@ -14,7 +45,7 @@
  */
 static void
 ice_vc_vf_broadcast(struct ice_pf *pf, enum virtchnl_ops v_opcode,
-		    enum ice_status v_retval, u8 *msg, u16 msglen)
+		    enum virtchnl_status_code v_retval, u8 *msg, u16 msglen)
 {
 	struct ice_hw *hw = &pf->hw;
 	struct ice_vf *vf = pf->vf;
@@ -104,7 +135,8 @@ static void ice_vc_notify_vf_link_state(struct ice_vf *vf)
 		ice_set_pfe_link(vf, &pfe, ls->link_speed, ls->link_info &
 				 ICE_AQ_LINK_UP);
 
-	ice_aq_send_msg_to_vf(hw, vf->vf_id, VIRTCHNL_OP_EVENT, 0, (u8 *)&pfe,
+	ice_aq_send_msg_to_vf(hw, vf->vf_id, VIRTCHNL_OP_EVENT,
+			      VIRTCHNL_STATUS_SUCCESS, (u8 *)&pfe,
 			      sizeof(pfe), NULL);
 }
 
@@ -1043,7 +1075,7 @@ void ice_vc_notify_reset(struct ice_pf *pf)
 
 	pfe.event = VIRTCHNL_EVENT_RESET_IMPENDING;
 	pfe.severity = PF_EVENT_SEVERITY_CERTAIN_DOOM;
-	ice_vc_vf_broadcast(pf, VIRTCHNL_OP_EVENT, ICE_SUCCESS,
+	ice_vc_vf_broadcast(pf, VIRTCHNL_OP_EVENT, VIRTCHNL_STATUS_SUCCESS,
 			    (u8 *)&pfe, sizeof(struct virtchnl_pf_event));
 }
 
@@ -1066,8 +1098,9 @@ static void ice_vc_notify_vf_reset(struct ice_vf *vf)
 
 	pfe.event = VIRTCHNL_EVENT_RESET_IMPENDING;
 	pfe.severity = PF_EVENT_SEVERITY_CERTAIN_DOOM;
-	ice_aq_send_msg_to_vf(&vf->pf->hw, vf->vf_id, VIRTCHNL_OP_EVENT, 0,
-			      (u8 *)&pfe, sizeof(pfe), NULL);
+	ice_aq_send_msg_to_vf(&vf->pf->hw, vf->vf_id, VIRTCHNL_OP_EVENT,
+			      VIRTCHNL_STATUS_SUCCESS, (u8 *)&pfe, sizeof(pfe),
+			      NULL);
 }
 
 /**
@@ -1288,8 +1321,8 @@ static void ice_vc_dis_vf(struct ice_vf *vf)
  * send msg to VF
  */
 static int
-ice_vc_send_msg_to_vf(struct ice_vf *vf, u32 v_opcode, enum ice_status v_retval,
-		      u8 *msg, u16 msglen)
+ice_vc_send_msg_to_vf(struct ice_vf *vf, u32 v_opcode,
+		      enum virtchnl_status_code v_retval, u8 *msg, u16 msglen)
 {
 	enum ice_status aq_ret;
 	struct ice_pf *pf;
@@ -1349,8 +1382,8 @@ static int ice_vc_get_ver_msg(struct ice_vf *vf, u8 *msg)
 	if (VF_IS_V10(&vf->vf_ver))
 		info.minor = VIRTCHNL_VERSION_MINOR_NO_VF_CAPS;
 
-	return ice_vc_send_msg_to_vf(vf, VIRTCHNL_OP_VERSION, ICE_SUCCESS,
-				     (u8 *)&info,
+	return ice_vc_send_msg_to_vf(vf, VIRTCHNL_OP_VERSION,
+				     VIRTCHNL_STATUS_SUCCESS, (u8 *)&info,
 				     sizeof(struct virtchnl_version_info));
 }
 
@@ -1363,15 +1396,15 @@ static int ice_vc_get_ver_msg(struct ice_vf *vf, u8 *msg)
  */
 static int ice_vc_get_vf_res_msg(struct ice_vf *vf, u8 *msg)
 {
+	enum virtchnl_status_code v_ret = VIRTCHNL_STATUS_SUCCESS;
 	struct virtchnl_vf_resource *vfres = NULL;
-	enum ice_status aq_ret = 0;
 	struct ice_pf *pf = vf->pf;
 	struct ice_vsi *vsi;
 	int len = 0;
 	int ret;
 
 	if (!test_bit(ICE_VF_STATE_INIT, vf->vf_states)) {
-		aq_ret = ICE_ERR_PARAM;
+		v_ret = VIRTCHNL_STATUS_ERR_PARAM;
 		goto err;
 	}
 
@@ -1379,7 +1412,7 @@ static int ice_vc_get_vf_res_msg(struct ice_vf *vf, u8 *msg)
 
 	vfres = devm_kzalloc(&pf->pdev->dev, len, GFP_KERNEL);
 	if (!vfres) {
-		aq_ret = ICE_ERR_NO_MEMORY;
+		v_ret = VIRTCHNL_STATUS_ERR_NO_MEMORY;
 		len = 0;
 		goto err;
 	}
@@ -1393,7 +1426,7 @@ static int ice_vc_get_vf_res_msg(struct ice_vf *vf, u8 *msg)
 	vfres->vf_cap_flags = VIRTCHNL_VF_OFFLOAD_L2;
 	vsi = pf->vsi[vf->lan_vsi_idx];
 	if (!vsi) {
-		aq_ret = ICE_ERR_PARAM;
+		v_ret = VIRTCHNL_STATUS_ERR_PARAM;
 		goto err;
 	}
 
@@ -1447,7 +1480,7 @@ static int ice_vc_get_vf_res_msg(struct ice_vf *vf, u8 *msg)
 
 err:
 	/* send the response back to the VF */
-	ret = ice_vc_send_msg_to_vf(vf, VIRTCHNL_OP_GET_VF_RESOURCES, aq_ret,
+	ret = ice_vc_send_msg_to_vf(vf, VIRTCHNL_OP_GET_VF_RESOURCES, v_ret,
 				    (u8 *)vfres, len);
 
 	devm_kfree(&pf->pdev->dev, vfres);
@@ -1527,43 +1560,42 @@ static bool ice_vc_isvalid_q_id(struct ice_vf *vf, u16 vsi_id, u8 qid)
  */
 static int ice_vc_config_rss_key(struct ice_vf *vf, u8 *msg)
 {
+	enum virtchnl_status_code v_ret = VIRTCHNL_STATUS_SUCCESS;
 	struct virtchnl_rss_key *vrk =
 		(struct virtchnl_rss_key *)msg;
-	struct ice_vsi *vsi = NULL;
 	struct ice_pf *pf = vf->pf;
-	enum ice_status aq_ret;
-	int ret;
+	struct ice_vsi *vsi = NULL;
 
 	if (!test_bit(ICE_VF_STATE_ACTIVE, vf->vf_states)) {
-		aq_ret = ICE_ERR_PARAM;
+		v_ret = VIRTCHNL_STATUS_ERR_PARAM;
 		goto error_param;
 	}
 
 	if (!ice_vc_isvalid_vsi_id(vf, vrk->vsi_id)) {
-		aq_ret = ICE_ERR_PARAM;
+		v_ret = VIRTCHNL_STATUS_ERR_PARAM;
 		goto error_param;
 	}
 
 	vsi = pf->vsi[vf->lan_vsi_idx];
 	if (!vsi) {
-		aq_ret = ICE_ERR_PARAM;
+		v_ret = VIRTCHNL_STATUS_ERR_PARAM;
 		goto error_param;
 	}
 
 	if (vrk->key_len != ICE_VSIQF_HKEY_ARRAY_SIZE) {
-		aq_ret = ICE_ERR_PARAM;
+		v_ret = VIRTCHNL_STATUS_ERR_PARAM;
 		goto error_param;
 	}
 
 	if (!test_bit(ICE_FLAG_RSS_ENA, vf->pf->flags)) {
-		aq_ret = ICE_ERR_PARAM;
+		v_ret = VIRTCHNL_STATUS_ERR_PARAM;
 		goto error_param;
 	}
 
-	ret = ice_set_rss(vsi, vrk->key, NULL, 0);
-	aq_ret = ret ? ICE_ERR_PARAM : ICE_SUCCESS;
+	if (ice_set_rss(vsi, vrk->key, NULL, 0))
+		v_ret = VIRTCHNL_STATUS_ERR_ADMIN_QUEUE_ERROR;
 error_param:
-	return ice_vc_send_msg_to_vf(vf, VIRTCHNL_OP_CONFIG_RSS_KEY, aq_ret,
+	return ice_vc_send_msg_to_vf(vf, VIRTCHNL_OP_CONFIG_RSS_KEY, v_ret,
 				     NULL, 0);
 }
 
@@ -1577,41 +1609,40 @@ static int ice_vc_config_rss_key(struct ice_vf *vf, u8 *msg)
 static int ice_vc_config_rss_lut(struct ice_vf *vf, u8 *msg)
 {
 	struct virtchnl_rss_lut *vrl = (struct virtchnl_rss_lut *)msg;
-	struct ice_vsi *vsi = NULL;
+	enum virtchnl_status_code v_ret = VIRTCHNL_STATUS_SUCCESS;
 	struct ice_pf *pf = vf->pf;
-	enum ice_status aq_ret;
-	int ret;
+	struct ice_vsi *vsi = NULL;
 
 	if (!test_bit(ICE_VF_STATE_ACTIVE, vf->vf_states)) {
-		aq_ret = ICE_ERR_PARAM;
+		v_ret = VIRTCHNL_STATUS_ERR_PARAM;
 		goto error_param;
 	}
 
 	if (!ice_vc_isvalid_vsi_id(vf, vrl->vsi_id)) {
-		aq_ret = ICE_ERR_PARAM;
+		v_ret = VIRTCHNL_STATUS_ERR_PARAM;
 		goto error_param;
 	}
 
 	vsi = pf->vsi[vf->lan_vsi_idx];
 	if (!vsi) {
-		aq_ret = ICE_ERR_PARAM;
+		v_ret = VIRTCHNL_STATUS_ERR_PARAM;
 		goto error_param;
 	}
 
 	if (vrl->lut_entries != ICE_VSIQF_HLUT_ARRAY_SIZE) {
-		aq_ret = ICE_ERR_PARAM;
+		v_ret = VIRTCHNL_STATUS_ERR_PARAM;
 		goto error_param;
 	}
 
 	if (!test_bit(ICE_FLAG_RSS_ENA, vf->pf->flags)) {
-		aq_ret = ICE_ERR_PARAM;
+		v_ret = VIRTCHNL_STATUS_ERR_PARAM;
 		goto error_param;
 	}
 
-	ret = ice_set_rss(vsi, NULL, vrl->lut, ICE_VSIQF_HLUT_ARRAY_SIZE);
-	aq_ret = ret ? ICE_ERR_PARAM : ICE_SUCCESS;
+	if (ice_set_rss(vsi, NULL, vrl->lut, ICE_VSIQF_HLUT_ARRAY_SIZE))
+		v_ret = VIRTCHNL_STATUS_ERR_ADMIN_QUEUE_ERROR;
 error_param:
-	return ice_vc_send_msg_to_vf(vf, VIRTCHNL_OP_CONFIG_RSS_LUT, aq_ret,
+	return ice_vc_send_msg_to_vf(vf, VIRTCHNL_OP_CONFIG_RSS_LUT, v_ret,
 				     NULL, 0);
 }
 
@@ -1624,26 +1655,26 @@ static int ice_vc_config_rss_lut(struct ice_vf *vf, u8 *msg)
  */
 static int ice_vc_get_stats_msg(struct ice_vf *vf, u8 *msg)
 {
+	enum virtchnl_status_code v_ret = VIRTCHNL_STATUS_SUCCESS;
 	struct virtchnl_queue_select *vqs =
 		(struct virtchnl_queue_select *)msg;
-	enum ice_status aq_ret = 0;
 	struct ice_pf *pf = vf->pf;
 	struct ice_eth_stats stats;
 	struct ice_vsi *vsi;
 
 	if (!test_bit(ICE_VF_STATE_ACTIVE, vf->vf_states)) {
-		aq_ret = ICE_ERR_PARAM;
+		v_ret = VIRTCHNL_STATUS_ERR_PARAM;
 		goto error_param;
 	}
 
 	if (!ice_vc_isvalid_vsi_id(vf, vqs->vsi_id)) {
-		aq_ret = ICE_ERR_PARAM;
+		v_ret = VIRTCHNL_STATUS_ERR_PARAM;
 		goto error_param;
 	}
 
 	vsi = pf->vsi[vf->lan_vsi_idx];
 	if (!vsi) {
-		aq_ret = ICE_ERR_PARAM;
+		v_ret = VIRTCHNL_STATUS_ERR_PARAM;
 		goto error_param;
 	}
 
@@ -1654,7 +1685,7 @@ static int ice_vc_get_stats_msg(struct ice_vf *vf, u8 *msg)
 
 error_param:
 	/* send the response to the VF */
-	return ice_vc_send_msg_to_vf(vf, VIRTCHNL_OP_GET_STATS, aq_ret,
+	return ice_vc_send_msg_to_vf(vf, VIRTCHNL_OP_GET_STATS, v_ret,
 				     (u8 *)&stats, sizeof(stats));
 }
 
@@ -1667,30 +1698,30 @@ static int ice_vc_get_stats_msg(struct ice_vf *vf, u8 *msg)
  */
 static int ice_vc_ena_qs_msg(struct ice_vf *vf, u8 *msg)
 {
+	enum virtchnl_status_code v_ret = VIRTCHNL_STATUS_SUCCESS;
 	struct virtchnl_queue_select *vqs =
 	    (struct virtchnl_queue_select *)msg;
-	enum ice_status aq_ret = 0;
 	struct ice_pf *pf = vf->pf;
 	struct ice_vsi *vsi;
 
 	if (!test_bit(ICE_VF_STATE_ACTIVE, vf->vf_states)) {
-		aq_ret = ICE_ERR_PARAM;
+		v_ret = VIRTCHNL_STATUS_ERR_PARAM;
 		goto error_param;
 	}
 
 	if (!ice_vc_isvalid_vsi_id(vf, vqs->vsi_id)) {
-		aq_ret = ICE_ERR_PARAM;
+		v_ret = VIRTCHNL_STATUS_ERR_PARAM;
 		goto error_param;
 	}
 
 	if (!vqs->rx_queues && !vqs->tx_queues) {
-		aq_ret = ICE_ERR_PARAM;
+		v_ret = VIRTCHNL_STATUS_ERR_PARAM;
 		goto error_param;
 	}
 
 	vsi = pf->vsi[vf->lan_vsi_idx];
 	if (!vsi) {
-		aq_ret = ICE_ERR_PARAM;
+		v_ret = VIRTCHNL_STATUS_ERR_PARAM;
 		goto error_param;
 	}
 
@@ -1699,15 +1730,15 @@ static int ice_vc_ena_qs_msg(struct ice_vf *vf, u8 *msg)
 	 * programmed using ice_vsi_cfg_txqs
 	 */
 	if (ice_vsi_start_rx_rings(vsi))
-		aq_ret = ICE_ERR_PARAM;
+		v_ret = VIRTCHNL_STATUS_ERR_PARAM;
 
 	/* Set flag to indicate that queues are enabled */
-	if (!aq_ret)
+	if (v_ret == VIRTCHNL_STATUS_SUCCESS)
 		set_bit(ICE_VF_STATE_ENA, vf->vf_states);
 
 error_param:
 	/* send the response to the VF */
-	return ice_vc_send_msg_to_vf(vf, VIRTCHNL_OP_ENABLE_QUEUES, aq_ret,
+	return ice_vc_send_msg_to_vf(vf, VIRTCHNL_OP_ENABLE_QUEUES, v_ret,
 				     NULL, 0);
 }
 
@@ -1721,31 +1752,31 @@ static int ice_vc_ena_qs_msg(struct ice_vf *vf, u8 *msg)
  */
 static int ice_vc_dis_qs_msg(struct ice_vf *vf, u8 *msg)
 {
+	enum virtchnl_status_code v_ret = VIRTCHNL_STATUS_SUCCESS;
 	struct virtchnl_queue_select *vqs =
 	    (struct virtchnl_queue_select *)msg;
-	enum ice_status aq_ret = 0;
 	struct ice_pf *pf = vf->pf;
 	struct ice_vsi *vsi;
 
 	if (!test_bit(ICE_VF_STATE_ACTIVE, vf->vf_states) &&
 	    !test_bit(ICE_VF_STATE_ENA, vf->vf_states)) {
-		aq_ret = ICE_ERR_PARAM;
+		v_ret = VIRTCHNL_STATUS_ERR_PARAM;
 		goto error_param;
 	}
 
 	if (!ice_vc_isvalid_vsi_id(vf, vqs->vsi_id)) {
-		aq_ret = ICE_ERR_PARAM;
+		v_ret = VIRTCHNL_STATUS_ERR_PARAM;
 		goto error_param;
 	}
 
 	if (!vqs->rx_queues && !vqs->tx_queues) {
-		aq_ret = ICE_ERR_PARAM;
+		v_ret = VIRTCHNL_STATUS_ERR_PARAM;
 		goto error_param;
 	}
 
 	vsi = pf->vsi[vf->lan_vsi_idx];
 	if (!vsi) {
-		aq_ret = ICE_ERR_PARAM;
+		v_ret = VIRTCHNL_STATUS_ERR_PARAM;
 		goto error_param;
 	}
 
@@ -1753,23 +1784,23 @@ static int ice_vc_dis_qs_msg(struct ice_vf *vf, u8 *msg)
 		dev_err(&vsi->back->pdev->dev,
 			"Failed to stop tx rings on VSI %d\n",
 			vsi->vsi_num);
-		aq_ret = ICE_ERR_PARAM;
+		v_ret = VIRTCHNL_STATUS_ERR_PARAM;
 	}
 
 	if (ice_vsi_stop_rx_rings(vsi)) {
 		dev_err(&vsi->back->pdev->dev,
 			"Failed to stop rx rings on VSI %d\n",
 			vsi->vsi_num);
-		aq_ret = ICE_ERR_PARAM;
+		v_ret = VIRTCHNL_STATUS_ERR_PARAM;
 	}
 
 	/* Clear enabled queues flag */
-	if (!aq_ret)
+	if (v_ret == VIRTCHNL_STATUS_SUCCESS)
 		clear_bit(ICE_VF_STATE_ENA, vf->vf_states);
 
 error_param:
 	/* send the response to the VF */
-	return ice_vc_send_msg_to_vf(vf, VIRTCHNL_OP_DISABLE_QUEUES, aq_ret,
+	return ice_vc_send_msg_to_vf(vf, VIRTCHNL_OP_DISABLE_QUEUES, v_ret,
 				     NULL, 0);
 }
 
@@ -1782,18 +1813,18 @@ static int ice_vc_dis_qs_msg(struct ice_vf *vf, u8 *msg)
  */
 static int ice_vc_cfg_irq_map_msg(struct ice_vf *vf, u8 *msg)
 {
+	enum virtchnl_status_code v_ret = VIRTCHNL_STATUS_SUCCESS;
 	struct virtchnl_irq_map_info *irqmap_info =
 	    (struct virtchnl_irq_map_info *)msg;
 	u16 vsi_id, vsi_q_id, vector_id;
 	struct virtchnl_vector_map *map;
 	struct ice_vsi *vsi = NULL;
 	struct ice_pf *pf = vf->pf;
-	enum ice_status aq_ret = 0;
 	unsigned long qmap;
 	int i;
 
 	if (!test_bit(ICE_VF_STATE_ACTIVE, vf->vf_states)) {
-		aq_ret = ICE_ERR_PARAM;
+		v_ret = VIRTCHNL_STATUS_ERR_PARAM;
 		goto error_param;
 	}
 
@@ -1805,13 +1836,13 @@ static int ice_vc_cfg_irq_map_msg(struct ice_vf *vf, u8 *msg)
 		/* validate msg params */
 		if (!(vector_id < pf->hw.func_caps.common_cap
 		    .num_msix_vectors) || !ice_vc_isvalid_vsi_id(vf, vsi_id)) {
-			aq_ret = ICE_ERR_PARAM;
+			v_ret = VIRTCHNL_STATUS_ERR_PARAM;
 			goto error_param;
 		}
 
 		vsi = pf->vsi[vf->lan_vsi_idx];
 		if (!vsi) {
-			aq_ret = ICE_ERR_PARAM;
+			v_ret = VIRTCHNL_STATUS_ERR_PARAM;
 			goto error_param;
 		}
 
@@ -1821,7 +1852,7 @@ static int ice_vc_cfg_irq_map_msg(struct ice_vf *vf, u8 *msg)
 			struct ice_q_vector *q_vector;
 
 			if (!ice_vc_isvalid_q_id(vf, vsi_id, vsi_q_id)) {
-				aq_ret = ICE_ERR_PARAM;
+				v_ret = VIRTCHNL_STATUS_ERR_PARAM;
 				goto error_param;
 			}
 			q_vector = vsi->q_vectors[i];
@@ -1835,7 +1866,7 @@ static int ice_vc_cfg_irq_map_msg(struct ice_vf *vf, u8 *msg)
 			struct ice_q_vector *q_vector;
 
 			if (!ice_vc_isvalid_q_id(vf, vsi_id, vsi_q_id)) {
-				aq_ret = ICE_ERR_PARAM;
+				v_ret = VIRTCHNL_STATUS_ERR_PARAM;
 				goto error_param;
 			}
 			q_vector = vsi->q_vectors[i];
@@ -1849,7 +1880,7 @@ static int ice_vc_cfg_irq_map_msg(struct ice_vf *vf, u8 *msg)
 		ice_vsi_cfg_msix(vsi);
 error_param:
 	/* send the response to the VF */
-	return ice_vc_send_msg_to_vf(vf, VIRTCHNL_OP_CONFIG_IRQ_MAP, aq_ret,
+	return ice_vc_send_msg_to_vf(vf, VIRTCHNL_OP_CONFIG_IRQ_MAP, v_ret,
 				     NULL, 0);
 }
 
@@ -1862,27 +1893,27 @@ static int ice_vc_cfg_irq_map_msg(struct ice_vf *vf, u8 *msg)
  */
 static int ice_vc_cfg_qs_msg(struct ice_vf *vf, u8 *msg)
 {
+	enum virtchnl_status_code v_ret = VIRTCHNL_STATUS_SUCCESS;
 	struct virtchnl_vsi_queue_config_info *qci =
 	    (struct virtchnl_vsi_queue_config_info *)msg;
 	struct virtchnl_queue_pair_info *qpi;
 	struct ice_pf *pf = vf->pf;
-	enum ice_status aq_ret = 0;
 	struct ice_vsi *vsi;
 	int i;
 
 	if (!test_bit(ICE_VF_STATE_ACTIVE, vf->vf_states)) {
-		aq_ret = ICE_ERR_PARAM;
+		v_ret = VIRTCHNL_STATUS_ERR_PARAM;
 		goto error_param;
 	}
 
 	if (!ice_vc_isvalid_vsi_id(vf, qci->vsi_id)) {
-		aq_ret = ICE_ERR_PARAM;
+		v_ret = VIRTCHNL_STATUS_ERR_PARAM;
 		goto error_param;
 	}
 
 	vsi = pf->vsi[vf->lan_vsi_idx];
 	if (!vsi) {
-		aq_ret = ICE_ERR_PARAM;
+		v_ret = VIRTCHNL_STATUS_ERR_PARAM;
 		goto error_param;
 	}
 
@@ -1890,7 +1920,7 @@ static int ice_vc_cfg_qs_msg(struct ice_vf *vf, u8 *msg)
 		dev_err(&pf->pdev->dev,
 			"VF-%d requesting more than supported number of queues: %d\n",
 			vf->vf_id, qci->num_queue_pairs);
-		aq_ret = ICE_ERR_PARAM;
+		v_ret = VIRTCHNL_STATUS_ERR_PARAM;
 		goto error_param;
 	}
 
@@ -1900,7 +1930,7 @@ static int ice_vc_cfg_qs_msg(struct ice_vf *vf, u8 *msg)
 		    qpi->rxq.vsi_id != qci->vsi_id ||
 		    qpi->rxq.queue_id != qpi->txq.queue_id ||
 		    !ice_vc_isvalid_q_id(vf, qci->vsi_id, qpi->txq.queue_id)) {
-			aq_ret = ICE_ERR_PARAM;
+			v_ret = VIRTCHNL_STATUS_ERR_PARAM;
 			goto error_param;
 		}
 		/* copy Tx queue info from VF into VSI */
@@ -1910,13 +1940,13 @@ static int ice_vc_cfg_qs_msg(struct ice_vf *vf, u8 *msg)
 		vsi->rx_rings[i]->dma = qpi->rxq.dma_ring_addr;
 		vsi->rx_rings[i]->count = qpi->rxq.ring_len;
 		if (qpi->rxq.databuffer_size > ((16 * 1024) - 128)) {
-			aq_ret = ICE_ERR_PARAM;
+			v_ret = VIRTCHNL_STATUS_ERR_PARAM;
 			goto error_param;
 		}
 		vsi->rx_buf_len = qpi->rxq.databuffer_size;
 		if (qpi->rxq.max_pkt_size >= (16 * 1024) ||
 		    qpi->rxq.max_pkt_size < 64) {
-			aq_ret = ICE_ERR_PARAM;
+			v_ret = VIRTCHNL_STATUS_ERR_PARAM;
 			goto error_param;
 		}
 		vsi->max_frame = qpi->rxq.max_pkt_size;
@@ -1931,14 +1961,12 @@ static int ice_vc_cfg_qs_msg(struct ice_vf *vf, u8 *msg)
 	vsi->tc_cfg.tc_info[0].qcount_tx = qci->num_queue_pairs;
 	vsi->tc_cfg.tc_info[0].qcount_rx = qci->num_queue_pairs;
 
-	if (!ice_vsi_cfg_lan_txqs(vsi) && !ice_vsi_cfg_rxqs(vsi))
-		aq_ret = 0;
-	else
-		aq_ret = ICE_ERR_PARAM;
+	if (ice_vsi_cfg_lan_txqs(vsi) || ice_vsi_cfg_rxqs(vsi))
+		v_ret = VIRTCHNL_STATUS_ERR_ADMIN_QUEUE_ERROR;
 
 error_param:
 	/* send the response to the VF */
-	return ice_vc_send_msg_to_vf(vf, VIRTCHNL_OP_CONFIG_VSI_QUEUES, aq_ret,
+	return ice_vc_send_msg_to_vf(vf, VIRTCHNL_OP_CONFIG_VSI_QUEUES, v_ret,
 				     NULL, 0);
 }
 
@@ -1980,11 +2008,11 @@ static bool ice_can_vf_change_mac(struct ice_vf *vf)
 static int
 ice_vc_handle_mac_addr_msg(struct ice_vf *vf, u8 *msg, bool set)
 {
+	enum virtchnl_status_code v_ret = VIRTCHNL_STATUS_SUCCESS;
 	struct virtchnl_ether_addr_list *al =
 	    (struct virtchnl_ether_addr_list *)msg;
 	struct ice_pf *pf = vf->pf;
 	enum virtchnl_ops vc_op;
-	enum ice_status ret = 0;
 	LIST_HEAD(mac_list);
 	struct ice_vsi *vsi;
 	int mac_count = 0;
@@ -1997,7 +2025,7 @@ ice_vc_handle_mac_addr_msg(struct ice_vf *vf, u8 *msg, bool set)
 
 	if (!test_bit(ICE_VF_STATE_ACTIVE, vf->vf_states) ||
 	    !ice_vc_isvalid_vsi_id(vf, al->vsi_id)) {
-		ret = ICE_ERR_PARAM;
+		v_ret = VIRTCHNL_STATUS_ERR_PARAM;
 		goto handle_mac_exit;
 	}
 
@@ -2009,12 +2037,13 @@ ice_vc_handle_mac_addr_msg(struct ice_vf *vf, u8 *msg, bool set)
 		/* There is no need to let VF know about not being trusted
 		 * to add more MAC addr, so we can just return success message.
 		 */
+		v_ret = VIRTCHNL_STATUS_ERR_PARAM;
 		goto handle_mac_exit;
 	}
 
 	vsi = pf->vsi[vf->lan_vsi_idx];
 	if (!vsi) {
-		ret = ICE_ERR_PARAM;
+		v_ret = VIRTCHNL_STATUS_ERR_PARAM;
 		goto handle_mac_exit;
 	}
 
@@ -2036,7 +2065,7 @@ ice_vc_handle_mac_addr_msg(struct ice_vf *vf, u8 *msg, bool set)
 				dev_err(&pf->pdev->dev,
 					"can't remove mac %pM for VF %d\n",
 					maddr, vf->vf_id);
-				ret = ICE_ERR_PARAM;
+				v_ret = VIRTCHNL_STATUS_ERR_PARAM;
 				goto handle_mac_exit;
 			}
 		}
@@ -2046,7 +2075,7 @@ ice_vc_handle_mac_addr_msg(struct ice_vf *vf, u8 *msg, bool set)
 			dev_err(&pf->pdev->dev,
 				"invalid mac %pM provided for VF %d\n",
 				maddr, vf->vf_id);
-			ret = ICE_ERR_PARAM;
+			v_ret = VIRTCHNL_STATUS_ERR_PARAM;
 			goto handle_mac_exit;
 		}
 
@@ -2055,13 +2084,13 @@ ice_vc_handle_mac_addr_msg(struct ice_vf *vf, u8 *msg, bool set)
 			dev_err(&pf->pdev->dev,
 				"can't change unicast mac for untrusted VF %d\n",
 				vf->vf_id);
-			ret = ICE_ERR_PARAM;
+			v_ret = VIRTCHNL_STATUS_ERR_PARAM;
 			goto handle_mac_exit;
 		}
 
 		/* get here if maddr is multicast or if VF can change mac */
 		if (ice_add_mac_to_list(vsi, &mac_list, al->list[i].addr)) {
-			ret = ICE_ERR_NO_MEMORY;
+			v_ret = VIRTCHNL_STATUS_ERR_NO_MEMORY;
 			goto handle_mac_exit;
 		}
 		mac_count++;
@@ -2069,14 +2098,14 @@ ice_vc_handle_mac_addr_msg(struct ice_vf *vf, u8 *msg, bool set)
 
 	/* program the updated filter list */
 	if (set)
-		ret = ice_add_mac(&pf->hw, &mac_list);
+		v_ret = ice_err_to_virt_err(ice_add_mac(&pf->hw, &mac_list));
 	else
-		ret = ice_remove_mac(&pf->hw, &mac_list);
+		v_ret = ice_err_to_virt_err(ice_remove_mac(&pf->hw, &mac_list));
 
-	if (ret) {
+	if (v_ret) {
 		dev_err(&pf->pdev->dev,
 			"can't update mac filters for VF %d, error %d\n",
-			vf->vf_id, ret);
+			vf->vf_id, v_ret);
 	} else {
 		if (set)
 			vf->num_mac += mac_count;
@@ -2087,7 +2116,7 @@ ice_vc_handle_mac_addr_msg(struct ice_vf *vf, u8 *msg, bool set)
 handle_mac_exit:
 	ice_free_fltr_list(&pf->pdev->dev, &mac_list);
 	/* send the response to the VF */
-	return ice_vc_send_msg_to_vf(vf, vc_op, ret, NULL, 0);
+	return ice_vc_send_msg_to_vf(vf, vc_op, v_ret, NULL, 0);
 }
 
 /**
@@ -2126,17 +2155,17 @@ static int ice_vc_del_mac_addr_msg(struct ice_vf *vf, u8 *msg)
  */
 static int ice_vc_request_qs_msg(struct ice_vf *vf, u8 *msg)
 {
+	enum virtchnl_status_code v_ret = VIRTCHNL_STATUS_SUCCESS;
 	struct virtchnl_vf_res_request *vfres =
 		(struct virtchnl_vf_res_request *)msg;
 	int req_queues = vfres->num_queue_pairs;
-	enum ice_status aq_ret = 0;
 	struct ice_pf *pf = vf->pf;
 	int max_allowed_vf_queues;
 	int tx_rx_queue_left;
 	int cur_queues;
 
 	if (!test_bit(ICE_VF_STATE_ACTIVE, vf->vf_states)) {
-		aq_ret = ICE_ERR_PARAM;
+		v_ret = VIRTCHNL_STATUS_ERR_PARAM;
 		goto error_param;
 	}
 
@@ -2171,7 +2200,7 @@ static int ice_vc_request_qs_msg(struct ice_vf *vf, u8 *msg)
 error_param:
 	/* send the response to the VF */
 	return ice_vc_send_msg_to_vf(vf, VIRTCHNL_OP_REQUEST_QUEUES,
-				     aq_ret, (u8 *)vfres, sizeof(*vfres));
+				     v_ret, (u8 *)vfres, sizeof(*vfres));
 }
 
 /**
@@ -2268,9 +2297,9 @@ ice_set_vf_port_vlan(struct net_device *netdev, int vf_id, u16 vlan_id, u8 qos,
  */
 static int ice_vc_process_vlan_msg(struct ice_vf *vf, u8 *msg, bool add_v)
 {
+	enum virtchnl_status_code v_ret = VIRTCHNL_STATUS_SUCCESS;
 	struct virtchnl_vlan_filter_list *vfl =
 	    (struct virtchnl_vlan_filter_list *)msg;
-	enum ice_status aq_ret = 0;
 	struct ice_pf *pf = vf->pf;
 	bool vlan_promisc = false;
 	struct ice_vsi *vsi;
@@ -2280,12 +2309,12 @@ static int ice_vc_process_vlan_msg(struct ice_vf *vf, u8 *msg, bool add_v)
 	int i;
 
 	if (!test_bit(ICE_VF_STATE_ACTIVE, vf->vf_states)) {
-		aq_ret = ICE_ERR_PARAM;
+		v_ret = VIRTCHNL_STATUS_ERR_PARAM;
 		goto error_param;
 	}
 
 	if (!ice_vc_isvalid_vsi_id(vf, vfl->vsi_id)) {
-		aq_ret = ICE_ERR_PARAM;
+		v_ret = VIRTCHNL_STATUS_ERR_PARAM;
 		goto error_param;
 	}
 
@@ -2297,12 +2326,13 @@ static int ice_vc_process_vlan_msg(struct ice_vf *vf, u8 *msg, bool add_v)
 		/* There is no need to let VF know about being not trusted,
 		 * so we can just return success message here
 		 */
+		v_ret = VIRTCHNL_STATUS_ERR_PARAM;
 		goto error_param;
 	}
 
 	for (i = 0; i < vfl->num_elements; i++) {
 		if (vfl->vlan_id[i] > ICE_MAX_VLANID) {
-			aq_ret = ICE_ERR_PARAM;
+			v_ret = VIRTCHNL_STATUS_ERR_PARAM;
 			dev_err(&pf->pdev->dev,
 				"invalid VF VLAN id %d\n", vfl->vlan_id[i]);
 			goto error_param;
@@ -2312,12 +2342,12 @@ static int ice_vc_process_vlan_msg(struct ice_vf *vf, u8 *msg, bool add_v)
 	hw = &pf->hw;
 	vsi = pf->vsi[vf->lan_vsi_idx];
 	if (!vsi) {
-		aq_ret = ICE_ERR_PARAM;
+		v_ret = VIRTCHNL_STATUS_ERR_PARAM;
 		goto error_param;
 	}
 
 	if (vsi->info.pvid) {
-		aq_ret = ICE_ERR_PARAM;
+		v_ret = VIRTCHNL_STATUS_ERR_PARAM;
 		goto error_param;
 	}
 
@@ -2325,7 +2355,7 @@ static int ice_vc_process_vlan_msg(struct ice_vf *vf, u8 *msg, bool add_v)
 		dev_err(&pf->pdev->dev,
 			"%sable VLAN stripping failed for VSI %i\n",
 			 add_v ? "en" : "dis", vsi->vsi_num);
-		aq_ret = ICE_ERR_PARAM;
+		v_ret = VIRTCHNL_STATUS_ERR_PARAM;
 		goto error_param;
 	}
 
@@ -2338,7 +2368,7 @@ static int ice_vc_process_vlan_msg(struct ice_vf *vf, u8 *msg, bool add_v)
 			u16 vid = vfl->vlan_id[i];
 
 			if (ice_vsi_add_vlan(vsi, vid)) {
-				aq_ret = ICE_ERR_PARAM;
+				v_ret = VIRTCHNL_STATUS_ERR_PARAM;
 				goto error_param;
 			}
 
@@ -2347,7 +2377,7 @@ static int ice_vc_process_vlan_msg(struct ice_vf *vf, u8 *msg, bool add_v)
 			if (!vlan_promisc) {
 				status = ice_cfg_vlan_pruning(vsi, true, false);
 				if (status) {
-					aq_ret = ICE_ERR_PARAM;
+					v_ret = VIRTCHNL_STATUS_ERR_PARAM;
 					dev_err(&pf->pdev->dev,
 						"Enable VLAN pruning on VLAN ID: %d failed error-%d\n",
 						vid, status);
@@ -2360,10 +2390,12 @@ static int ice_vc_process_vlan_msg(struct ice_vf *vf, u8 *msg, bool add_v)
 
 				status = ice_set_vsi_promisc(hw, vsi->idx,
 							     promisc_m, vid);
-				if (status)
+				if (status) {
+					v_ret = VIRTCHNL_STATUS_ERR_PARAM;
 					dev_err(&pf->pdev->dev,
 						"Enable Unicast/multicast promiscuous mode on VLAN ID:%d failed error-%d\n",
 						vid, status);
+				}
 			}
 		}
 	} else {
@@ -2374,7 +2406,7 @@ static int ice_vc_process_vlan_msg(struct ice_vf *vf, u8 *msg, bool add_v)
 			 * updating VLAN information
 			 */
 			if (ice_vsi_kill_vlan(vsi, vid)) {
-				aq_ret = ICE_ERR_PARAM;
+				v_ret = VIRTCHNL_STATUS_ERR_PARAM;
 				goto error_param;
 			}
 
@@ -2396,10 +2428,10 @@ static int ice_vc_process_vlan_msg(struct ice_vf *vf, u8 *msg, bool add_v)
 error_param:
 	/* send the response to the VF */
 	if (add_v)
-		return ice_vc_send_msg_to_vf(vf, VIRTCHNL_OP_ADD_VLAN, aq_ret,
+		return ice_vc_send_msg_to_vf(vf, VIRTCHNL_OP_ADD_VLAN, v_ret,
 					     NULL, 0);
 	else
-		return ice_vc_send_msg_to_vf(vf, VIRTCHNL_OP_DEL_VLAN, aq_ret,
+		return ice_vc_send_msg_to_vf(vf, VIRTCHNL_OP_DEL_VLAN, v_ret,
 					     NULL, 0);
 }
 
@@ -2435,22 +2467,22 @@ static int ice_vc_remove_vlan_msg(struct ice_vf *vf, u8 *msg)
  */
 static int ice_vc_ena_vlan_stripping(struct ice_vf *vf)
 {
-	enum ice_status aq_ret = 0;
+	enum virtchnl_status_code v_ret = VIRTCHNL_STATUS_SUCCESS;
 	struct ice_pf *pf = vf->pf;
 	struct ice_vsi *vsi;
 
 	if (!test_bit(ICE_VF_STATE_ACTIVE, vf->vf_states)) {
-		aq_ret = ICE_ERR_PARAM;
+		v_ret = VIRTCHNL_STATUS_ERR_PARAM;
 		goto error_param;
 	}
 
 	vsi = pf->vsi[vf->lan_vsi_idx];
 	if (ice_vsi_manage_vlan_stripping(vsi, true))
-		aq_ret = ICE_ERR_AQ_ERROR;
+		v_ret = VIRTCHNL_STATUS_ERR_PARAM;
 
 error_param:
 	return ice_vc_send_msg_to_vf(vf, VIRTCHNL_OP_ENABLE_VLAN_STRIPPING,
-				     aq_ret, NULL, 0);
+				     v_ret, NULL, 0);
 }
 
 /**
@@ -2461,27 +2493,27 @@ static int ice_vc_ena_vlan_stripping(struct ice_vf *vf)
  */
 static int ice_vc_dis_vlan_stripping(struct ice_vf *vf)
 {
-	enum ice_status aq_ret = 0;
+	enum virtchnl_status_code v_ret = VIRTCHNL_STATUS_SUCCESS;
 	struct ice_pf *pf = vf->pf;
 	struct ice_vsi *vsi;
 
 	if (!test_bit(ICE_VF_STATE_ACTIVE, vf->vf_states)) {
-		aq_ret = ICE_ERR_PARAM;
+		v_ret = VIRTCHNL_STATUS_ERR_PARAM;
 		goto error_param;
 	}
 
 	vsi = pf->vsi[vf->lan_vsi_idx];
 	if (!vsi) {
-		aq_ret = ICE_ERR_PARAM;
+		v_ret = VIRTCHNL_STATUS_ERR_PARAM;
 		goto error_param;
 	}
 
 	if (ice_vsi_manage_vlan_stripping(vsi, false))
-		aq_ret = ICE_ERR_AQ_ERROR;
+		v_ret = VIRTCHNL_STATUS_ERR_PARAM;
 
 error_param:
 	return ice_vc_send_msg_to_vf(vf, VIRTCHNL_OP_DISABLE_VLAN_STRIPPING,
-				     aq_ret, NULL, 0);
+				     v_ret, NULL, 0);
 }
 
 /**
@@ -2517,7 +2549,7 @@ void ice_vc_process_vf_msg(struct ice_pf *pf, struct ice_rq_event_info *event)
 	/* Perform basic checks on the msg */
 	err = virtchnl_vc_validate_vf_msg(&vf->vf_ver, v_opcode, msg, msglen);
 	if (err) {
-		if (err == VIRTCHNL_ERR_PARAM)
+		if (err == VIRTCHNL_STATUS_ERR_PARAM)
 			err = -EPERM;
 		else
 			err = -EINVAL;
@@ -2539,7 +2571,8 @@ void ice_vc_process_vf_msg(struct ice_pf *pf, struct ice_rq_event_info *event)
 
 error_handler:
 	if (err) {
-		ice_vc_send_msg_to_vf(vf, v_opcode, ICE_ERR_PARAM, NULL, 0);
+		ice_vc_send_msg_to_vf(vf, v_opcode, VIRTCHNL_STATUS_ERR_PARAM,
+				      NULL, 0);
 		dev_err(&pf->pdev->dev, "Invalid message from VF %d, opcode %d, len %d, error %d\n",
 			vf_id, v_opcode, msglen, err);
 		return;
@@ -2602,7 +2635,8 @@ void ice_vc_process_vf_msg(struct ice_pf *pf, struct ice_rq_event_info *event)
 	default:
 		dev_err(&pf->pdev->dev, "Unsupported opcode %d from VF %d\n",
 			v_opcode, vf_id);
-		err = ice_vc_send_msg_to_vf(vf, v_opcode, ICE_ERR_NOT_IMPL,
+		err = ice_vc_send_msg_to_vf(vf, v_opcode,
+					    VIRTCHNL_STATUS_ERR_NOT_SUPPORTED,
 					    NULL, 0);
 		break;
 	}
@@ -2874,7 +2908,8 @@ int ice_set_vf_link_state(struct net_device *netdev, int vf_id, int link_state)
 		ice_set_pfe_link(vf, &pfe, ls->link_speed, vf->link_up);
 
 	/* Notify the VF of its new link state */
-	ice_aq_send_msg_to_vf(hw, vf->vf_id, VIRTCHNL_OP_EVENT, 0, (u8 *)&pfe,
+	ice_aq_send_msg_to_vf(hw, vf->vf_id, VIRTCHNL_OP_EVENT,
+			      VIRTCHNL_STATUS_SUCCESS, (u8 *)&pfe,
 			      sizeof(pfe), NULL);
 
 	return 0;
-- 
2.14.5


^ permalink raw reply related	[flat|nested] 31+ messages in thread
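
The conversion above follows one pattern throughout: start each handler with
a virtchnl status of success, downgrade it at every failed check, translate
hardware status codes through ice_err_to_virt_err(), and reply with whatever
accumulated. A condensed sketch of that shape; the opcode and the hardware
helper below are hypothetical placeholders:

	static int ice_vc_example_msg(struct ice_vf *vf, u8 *msg)
	{
		enum virtchnl_status_code v_ret = VIRTCHNL_STATUS_SUCCESS;

		if (!test_bit(ICE_VF_STATE_ACTIVE, vf->vf_states)) {
			v_ret = VIRTCHNL_STATUS_ERR_PARAM;
			goto out;
		}

		/* hardware-facing helpers still return enum ice_status,
		 * so translate before replying; ice_some_hw_op() is a
		 * hypothetical placeholder
		 */
		v_ret = ice_err_to_virt_err(ice_some_hw_op(vf));
	out:
		return ice_vc_send_msg_to_vf(vf, VIRTCHNL_OP_UNKNOWN, v_ret,
					     NULL, 0);
	}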

* [Intel-wired-lan] [PATCH S14 14/15] ice: Put __ICE_PREPARED_FOR_RESET check in ice_prepare_for_reset
  2019-02-13 18:51 [Intel-wired-lan] [PATCH S14 00/15] Implementation updates for ice Anirudh Venkataramanan
                   ` (12 preceding siblings ...)
  2019-02-13 18:51 ` [Intel-wired-lan] [PATCH S14 13/15] ice: use virt channel status codes Anirudh Venkataramanan
@ 2019-02-13 18:51 ` Anirudh Venkataramanan
  2019-02-28 23:48   ` Bowers, AndrewX
  2019-02-13 18:51 ` [Intel-wired-lan] [PATCH S14 15/15] ice: Implement pci_error_handler ops Anirudh Venkataramanan
  14 siblings, 1 reply; 31+ messages in thread
From: Anirudh Venkataramanan @ 2019-02-13 18:51 UTC (permalink / raw)
  To: intel-wired-lan

From: Brett Creeley <brett.creeley@intel.com>

Currently we check if the __ICE_PREPARED_FOR_RESET bit is set prior to
calling ice_prepare_for_reset in ice_reset_subtask(), but we aren't
checking that bit in ice_do_reset() before calling
ice_prepare_for_reset(). This is inconsistent and can cause issues if
ice_prepare_for_reset() is called prior to ice_do_reset(). Fix this by
checking whether the __ICE_PREPARED_FOR_RESET bit is set inside
ice_prepare_for_reset() itself.

Signed-off-by: Brett Creeley <brett.creeley@intel.com>
Signed-off-by: Anirudh Venkataramanan <anirudh.venkataramanan@intel.com>
---
 drivers/net/ethernet/intel/ice/ice_main.c | 7 +++++--
 1 file changed, 5 insertions(+), 2 deletions(-)

diff --git a/drivers/net/ethernet/intel/ice/ice_main.c b/drivers/net/ethernet/intel/ice/ice_main.c
index ced774dd879b..6e069e84d486 100644
--- a/drivers/net/ethernet/intel/ice/ice_main.c
+++ b/drivers/net/ethernet/intel/ice/ice_main.c
@@ -403,6 +403,10 @@ ice_prepare_for_reset(struct ice_pf *pf)
 {
 	struct ice_hw *hw = &pf->hw;
 
+	/* already prepared for reset */
+	if (test_bit(__ICE_PREPARED_FOR_RESET, pf->state))
+		return;
+
 	/* Notify VFs of impending reset */
 	if (ice_check_sq_alive(hw, &hw->mailboxq))
 		ice_vc_notify_reset(pf);
@@ -486,8 +490,7 @@ static void ice_reset_subtask(struct ice_pf *pf)
 		/* return if no valid reset type requested */
 		if (reset_type == ICE_RESET_INVAL)
 			return;
-		if (!test_bit(__ICE_PREPARED_FOR_RESET, pf->state))
-			ice_prepare_for_reset(pf);
+		ice_prepare_for_reset(pf);
 
 		/* make sure we are ready to rebuild */
 		if (ice_check_reset(&pf->hw)) {
-- 
2.14.5


^ permalink raw reply related	[flat|nested] 31+ messages in thread
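
A related design note: when two contexts could genuinely race into the
prepare path, the test/return pair can be collapsed into a single atomic
test_and_set_bit(). A sketch of that variant, which is not what the patch
does, shown only for contrast:

	static void prepare_for_reset_once(struct ice_pf *pf)
	{
		/* atomically claim the "prepared" state; losers return */
		if (test_and_set_bit(__ICE_PREPARED_FOR_RESET, pf->state))
			return;

		/* ... quiesce VFs and traffic exactly once ... */
	}

Whether the atomic form is needed depends on whether the callers can
actually run concurrently; here the reset paths are expected to be
serialized, so the plain test_bit() check suffices.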

* [Intel-wired-lan] [PATCH S14 15/15] ice: Implement pci_error_handler ops
  2019-02-13 18:51 [Intel-wired-lan] [PATCH S14 00/15] Implementation updates for ice Anirudh Venkataramanan
                   ` (13 preceding siblings ...)
  2019-02-13 18:51 ` [Intel-wired-lan] [PATCH S14 14/15] ice: Put __ICE_PREPARED_FOR_RESET check in ice_prepare_for_reset Anirudh Venkataramanan
@ 2019-02-13 18:51 ` Anirudh Venkataramanan
  2019-02-28 23:48   ` Bowers, AndrewX
  14 siblings, 1 reply; 31+ messages in thread
From: Anirudh Venkataramanan @ 2019-02-13 18:51 UTC (permalink / raw)
  To: intel-wired-lan

From: Brett Creeley <brett.creeley@intel.com>

This patch implements the following pci_error_handler ops:
	.error_detected = ice_pci_err_detected
	.slot_reset = ice_pci_err_slot_reset
	.reset_prepare = ice_pci_err_reset_prepare
	.reset_done = ice_pci_err_reset_done
	.resume = ice_pci_err_resume

Signed-off-by: Brett Creeley <brett.creeley@intel.com>
Signed-off-by: Anirudh Venkataramanan <anirudh.venkataramanan@intel.com>
---
[Anirudh Venkataramanan <anirudh.venkataramanan@intel.com> cleaned up commit message]
[Anirudh Venkataramanan <anirudh.venkataramanan@intel.com> fixed build failures]
---
 drivers/net/ethernet/intel/ice/ice_main.c | 151 ++++++++++++++++++++++++++++++
 1 file changed, 151 insertions(+)

diff --git a/drivers/net/ethernet/intel/ice/ice_main.c b/drivers/net/ethernet/intel/ice/ice_main.c
index 6e069e84d486..dad17d8c8b61 100644
--- a/drivers/net/ethernet/intel/ice/ice_main.c
+++ b/drivers/net/ethernet/intel/ice/ice_main.c
@@ -1004,6 +1004,18 @@ static void ice_service_task_stop(struct ice_pf *pf)
 	clear_bit(__ICE_SERVICE_SCHED, pf->state);
 }
 
+/**
+ * ice_service_task_restart - restart service task and schedule works
+ * @pf: board private structure
+ *
+ * This function is needed for suspend and resume operations (e.g. a WoL scenario)
+ */
+static void ice_service_task_restart(struct ice_pf *pf)
+{
+	clear_bit(__ICE_SERVICE_DIS, pf->state);
+	ice_service_task_schedule(pf);
+}
+
 /**
  * ice_service_timer - timer callback to schedule service task
  * @t: pointer to timer_list
@@ -2395,6 +2407,136 @@ static void ice_remove(struct pci_dev *pdev)
 	pci_disable_pcie_error_reporting(pdev);
 }
 
+/**
+ * ice_pci_err_detected - warning that PCI error has been detected
+ * @pdev: PCI device information struct
+ * @err: the type of PCI error
+ *
+ * Called to warn that something happened on the PCI bus and the error handling
+ * is in progress.  Allows the driver to gracefully prepare/handle PCI errors.
+ */
+static pci_ers_result_t
+ice_pci_err_detected(struct pci_dev *pdev, enum pci_channel_state err)
+{
+	struct ice_pf *pf = pci_get_drvdata(pdev);
+
+	if (!pf) {
+		dev_err(&pdev->dev, "%s: unrecoverable device error %d\n",
+			__func__, err);
+		return PCI_ERS_RESULT_DISCONNECT;
+	}
+
+	if (!test_bit(__ICE_SUSPENDED, pf->state)) {
+		ice_service_task_stop(pf);
+
+		if (!test_bit(__ICE_PREPARED_FOR_RESET, pf->state)) {
+			set_bit(__ICE_PFR_REQ, pf->state);
+			ice_prepare_for_reset(pf);
+		}
+	}
+
+	return PCI_ERS_RESULT_NEED_RESET;
+}
+
+/**
+ * ice_pci_err_slot_reset - a PCI slot reset has just happened
+ * @pdev: PCI device information struct
+ *
+ * Called to determine if the driver can recover from the PCI slot reset by
+ * using a register read to determine if the device is recoverable.
+ */
+static pci_ers_result_t ice_pci_err_slot_reset(struct pci_dev *pdev)
+{
+	struct ice_pf *pf = pci_get_drvdata(pdev);
+	pci_ers_result_t result;
+	int err;
+	u32 reg;
+
+	err = pci_enable_device_mem(pdev);
+	if (err) {
+		dev_err(&pdev->dev,
+			"Cannot re-enable PCI device after reset, error %d\n",
+			err);
+		result = PCI_ERS_RESULT_DISCONNECT;
+	} else {
+		pci_set_master(pdev);
+		pci_restore_state(pdev);
+		pci_save_state(pdev);
+		pci_wake_from_d3(pdev, false);
+
+		/* Check for life */
+		reg = rd32(&pf->hw, GLGEN_RTRIG);
+		if (!reg)
+			result = PCI_ERS_RESULT_RECOVERED;
+		else
+			result = PCI_ERS_RESULT_DISCONNECT;
+	}
+
+	err = pci_cleanup_aer_uncorrect_error_status(pdev);
+	if (err)
+		dev_dbg(&pdev->dev,
+			"pci_cleanup_aer_uncorrect_error_status failed, error %d\n",
+			err);
+		/* non-fatal, continue */
+
+	return result;
+}
+
+/**
+ * ice_pci_err_resume - restart operations after PCI error recovery
+ * @pdev: PCI device information struct
+ *
+ * Called to allow the driver to bring things back up after PCI error and/or
+ * reset recovery have finished
+ */
+static void ice_pci_err_resume(struct pci_dev *pdev)
+{
+	struct ice_pf *pf = pci_get_drvdata(pdev);
+
+	if (!pf) {
+		dev_err(&pdev->dev,
+			"%s failed, device is unrecoverable\n", __func__);
+		return;
+	}
+
+	if (test_bit(__ICE_SUSPENDED, pf->state)) {
+		dev_dbg(&pdev->dev, "%s failed to resume normal operations!\n",
+			__func__);
+		return;
+	}
+
+	ice_do_reset(pf, ICE_RESET_PFR);
+	ice_service_task_restart(pf);
+	mod_timer(&pf->serv_tmr, round_jiffies(jiffies + pf->serv_tmr_period));
+}
+
+/**
+ * ice_pci_err_reset_prepare - prepare device driver for pci reset
+ * @pdev: PCI device information struct
+ */
+static void ice_pci_err_reset_prepare(struct pci_dev *pdev)
+{
+	struct ice_pf *pf = pci_get_drvdata(pdev);
+
+	if (!test_bit(__ICE_SUSPENDED, pf->state)) {
+		ice_service_task_stop(pf);
+
+		if (!test_bit(__ICE_PREPARED_FOR_RESET, pf->state)) {
+			set_bit(__ICE_PFR_REQ, pf->state);
+			ice_prepare_for_reset(pf);
+		}
+	}
+}
+
+/**
+ * ice_pci_err_reset_done - PCI reset done, device driver reset can begin
+ * @pdev: PCI device information struct
+ */
+static void ice_pci_err_reset_done(struct pci_dev *pdev)
+{
+	ice_pci_err_resume(pdev);
+}
+
 /* ice_pci_tbl - PCI Device ID Table
  *
  * Wildcard entries (PCI_ANY_ID) should come last
@@ -2412,12 +2554,21 @@ static const struct pci_device_id ice_pci_tbl[] = {
 };
 MODULE_DEVICE_TABLE(pci, ice_pci_tbl);
 
+static const struct pci_error_handlers ice_pci_err_handler = {
+	.error_detected = ice_pci_err_detected,
+	.slot_reset = ice_pci_err_slot_reset,
+	.reset_prepare = ice_pci_err_reset_prepare,
+	.reset_done = ice_pci_err_reset_done,
+	.resume = ice_pci_err_resume
+};
+
 static struct pci_driver ice_driver = {
 	.name = KBUILD_MODNAME,
 	.id_table = ice_pci_tbl,
 	.probe = ice_probe,
 	.remove = ice_remove,
 	.sriov_configure = ice_sriov_configure,
+	.err_handler = &ice_pci_err_handler
 };
 
 /**
-- 
2.14.5


^ permalink raw reply related	[flat|nested] 31+ messages in thread
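
For orientation, the PCI core drives these callbacks in a fixed order; a
simplified view of the two recovery flows as implemented above:

	/* AER recovery:
	 *   error_detected() -> returns PCI_ERS_RESULT_NEED_RESET
	 *   slot_reset()     -> PCI_ERS_RESULT_RECOVERED if GLGEN_RTRIG
	 *                       reads back zero, else DISCONNECT
	 *   resume()         -> PF reset, service task restarted
	 *
	 * driver-visible function/bus reset:
	 *   reset_prepare()  -> stop service task, ice_prepare_for_reset()
	 *   ... the reset itself ...
	 *   reset_done()     -> same recovery path as resume()
	 */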

* [Intel-wired-lan] [PATCH S14 01/15] ice: Retrieve rx_buf in separate function
  2019-02-13 18:51 ` [Intel-wired-lan] [PATCH S14 01/15] ice: Retrieve rx_buf in separate function Anirudh Venkataramanan
@ 2019-02-28 22:24   ` Bowers, AndrewX
  0 siblings, 0 replies; 31+ messages in thread
From: Bowers, AndrewX @ 2019-02-28 22:24 UTC (permalink / raw)
  To: intel-wired-lan

> -----Original Message-----
> From: Intel-wired-lan [mailto:intel-wired-lan-bounces at osuosl.org] On
> Behalf Of Anirudh Venkataramanan
> Sent: Wednesday, February 13, 2019 10:51 AM
> To: intel-wired-lan at lists.osuosl.org
> Subject: [Intel-wired-lan] [PATCH S14 01/15] ice: Retrieve rx_buf in separate
> function
> 
> From: Maciej Fijalkowski <maciej.fijalkowski@intel.com>
> 
> Introduce ice_get_rx_buf, which will fetch the rx buffer and do the DMA
> synchronization. Length of the packet that hardware rx descriptor contains is
> now read in ice_clean_rx_irq, so we can feed ice_get_rx_buf with it and
> resign from rx_desc passed as argument in ice_fetch_rx_buf and
> ice_add_rx_frag.
> 
> Signed-off-by: Maciej Fijalkowski <maciej.fijalkowski@intel.com>
> Signed-off-by: Anirudh Venkataramanan
> <anirudh.venkataramanan@intel.com>
> ---
> [Anirudh Venkataramanan <anirudh.venkataramanan@intel.com> cleaned
> up commit message]
> ---
>  drivers/net/ethernet/intel/ice/ice_txrx.c | 75 +++++++++++++++++----------
> ----
>  1 file changed, 42 insertions(+), 33 deletions(-)

Tested-by: Andrew Bowers <andrewx.bowers@intel.com>



^ permalink raw reply	[flat|nested] 31+ messages in thread

* [Intel-wired-lan] [PATCH S14 02/15] ice: Pull out page reuse checks onto separate function
  2019-02-13 18:51 ` [Intel-wired-lan] [PATCH S14 02/15] ice: Pull out page reuse checks onto " Anirudh Venkataramanan
@ 2019-02-28 22:57   ` Bowers, AndrewX
  0 siblings, 0 replies; 31+ messages in thread
From: Bowers, AndrewX @ 2019-02-28 22:57 UTC (permalink / raw)
  To: intel-wired-lan

> -----Original Message-----
> From: Intel-wired-lan [mailto:intel-wired-lan-bounces at osuosl.org] On
> Behalf Of Anirudh Venkataramanan
> Sent: Wednesday, February 13, 2019 10:51 AM
> To: intel-wired-lan at lists.osuosl.org
> Subject: [Intel-wired-lan] [PATCH S14 02/15] ice: Pull out page reuse checks
> onto separate function
> 
> From: Maciej Fijalkowski <maciej.fijalkowski@intel.com>
> 
> Introduce ice_can_reuse_rx_page which will verify whether the page can be
> reused and return the boolean result to the caller.
> 
> Signed-off-by: Maciej Fijalkowski <maciej.fijalkowski@intel.com>
> Signed-off-by: Anirudh Venkataramanan
> <anirudh.venkataramanan@intel.com>
> ---
> [Anirudh Venkataramanan <anirudh.venkataramanan@intel.com> cleaned
> up commit message]
> ---
>  drivers/net/ethernet/intel/ice/ice_txrx.c | 80 +++++++++++++++++----------
> ----
>  1 file changed, 45 insertions(+), 35 deletions(-)

Tested-by: Andrew Bowers <andrewx.bowers@intel.com>



^ permalink raw reply	[flat|nested] 31+ messages in thread
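
A sketch of what such a reuse predicate typically checks, following the
i40e pattern this refactor mirrors; the exact conditions in the patch may
differ slightly:

	static bool example_can_reuse_rx_page(struct ice_rx_buf *rx_buf)
	{
		struct page *page = rx_buf->page;

		/* avoid recycling pages that live on a remote NUMA node */
		if (page_to_nid(page) != numa_mem_id())
			return false;

		/* if the network stack still holds a reference, the
		 * page is not ours to reuse yet
		 */
		if (page_count(page) != 1)
			return false;

		return true;
	}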

* [Intel-wired-lan] [PATCH S14 03/15] ice: Get rid of ice_pull_tail
  2019-02-13 18:51 ` [Intel-wired-lan] [PATCH S14 03/15] ice: Get rid of ice_pull_tail Anirudh Venkataramanan
@ 2019-02-28 23:00   ` Bowers, AndrewX
  0 siblings, 0 replies; 31+ messages in thread
From: Bowers, AndrewX @ 2019-02-28 23:00 UTC (permalink / raw)
  To: intel-wired-lan

> -----Original Message-----
> From: Intel-wired-lan [mailto:intel-wired-lan-bounces at osuosl.org] On
> Behalf Of Anirudh Venkataramanan
> Sent: Wednesday, February 13, 2019 10:51 AM
> To: intel-wired-lan at lists.osuosl.org
> Subject: [Intel-wired-lan] [PATCH S14 03/15] ice: Get rid of ice_pull_tail
> 
> From: Maciej Fijalkowski <maciej.fijalkowski@intel.com>
> 
> Instead of adding a frag and then, when dealing with the EOP frame,
> accessing that frag to copy the headers onto the linear part of the skb,
> we can do this in ice_add_rx_frag in the case where data_len is still 0
> and the frame won't fit onto the linear part as a whole.
> 
> The function comment of ice_pull_tail was a bit misleading because it
> mentioned optimizations that can be performed (dropping a frag,
> maintaining an accurate truesize of the skb) - it seems that this part of
> the logic was dropped and the comment was not updated to reflect that.
> 
> Signed-off-by: Maciej Fijalkowski <maciej.fijalkowski@intel.com>
> Signed-off-by: Anirudh Venkataramanan
> <anirudh.venkataramanan@intel.com>
> ---
>  drivers/net/ethernet/intel/ice/ice_txrx.c | 70 +++++++++++-------------------
> -
>  1 file changed, 24 insertions(+), 46 deletions(-)

Tested-by: Andrew Bowers <andrewx.bowers@intel.com>



^ permalink raw reply	[flat|nested] 31+ messages in thread

* [Intel-wired-lan] [PATCH S14 04/15] ice: Introduce bulk update for page count
  2019-02-13 18:51 ` [Intel-wired-lan] [PATCH S14 04/15] ice: Introduce bulk update for page count Anirudh Venkataramanan
@ 2019-02-28 23:02   ` Bowers, AndrewX
  0 siblings, 0 replies; 31+ messages in thread
From: Bowers, AndrewX @ 2019-02-28 23:02 UTC (permalink / raw)
  To: intel-wired-lan

> -----Original Message-----
> From: Intel-wired-lan [mailto:intel-wired-lan-bounces at osuosl.org] On
> Behalf Of Anirudh Venkataramanan
> Sent: Wednesday, February 13, 2019 10:51 AM
> To: intel-wired-lan at lists.osuosl.org
> Subject: [Intel-wired-lan] [PATCH S14 04/15] ice: Introduce bulk update for
> page count
> 
> From: Maciej Fijalkowski <maciej.fijalkowski@intel.com>
> 
> {get,put}_page are atomic operations which we use for page count handling.
> The current logic for refcount handling is that we increment it when passing
> an skb with the data from the first half of the page up to the netstack and
> recycle the second half of the page. This operation protects us from losing
> a page, since the network stack can decrement the refcount of the page from
> the skb.
> 
> Performance can be slightly improved by doing bulk updates of the
> refcount instead of doing it one by one. During buffer initialization,
> maximize the page's refcount and don't allow the refcount to become less
> than two.
> 
> Signed-off-by: Maciej Fijalkowski <maciej.fijalkowski@intel.com>
> Signed-off-by: Anirudh Venkataramanan
> <anirudh.venkataramanan@intel.com>
> ---
> [Anirudh Venkataramanan <anirudh.venkataramanan@intel.com> cleaned
> up commit message]
> ---
>  drivers/net/ethernet/intel/ice/ice_txrx.c | 26 +++++++++++++++++++------
> -  drivers/net/ethernet/intel/ice/ice_txrx.h |  1 +
>  2 files changed, 20 insertions(+), 7 deletions(-)

Tested-by: Andrew Bowers <andrewx.bowers@intel.com>



^ permalink raw reply	[flat|nested] 31+ messages in thread
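
A sketch of the bulk-update idea, mirroring the i40e scheme this message
describes; the pagecnt_bias field and the USHRT_MAX sizing are assumptions
about the implementation:

	/* take many references with one atomic operation up front ... */
	if (unlikely(rx_buf->pagecnt_bias == 1)) {
		page_ref_add(rx_buf->page, USHRT_MAX - 1);
		rx_buf->pagecnt_bias = USHRT_MAX;
	}

	/* ... then hand single references to the stack by decrementing
	 * the local bias instead of a get_page() per frame; the reuse
	 * check compares page_count() against the bias
	 */
	rx_buf->pagecnt_bias--;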

* [Intel-wired-lan] [PATCH S14 05/15] ice: Gather the rx buf clean-up logic for better reuse
  2019-02-13 18:51 ` [Intel-wired-lan] [PATCH S14 05/15] ice: Gather the rx buf clean-up logic for better reuse Anirudh Venkataramanan
@ 2019-02-28 23:04   ` Bowers, AndrewX
  0 siblings, 0 replies; 31+ messages in thread
From: Bowers, AndrewX @ 2019-02-28 23:04 UTC (permalink / raw)
  To: intel-wired-lan

> -----Original Message-----
> From: Intel-wired-lan [mailto:intel-wired-lan-bounces at osuosl.org] On
> Behalf Of Anirudh Venkataramanan
> Sent: Wednesday, February 13, 2019 10:51 AM
> To: intel-wired-lan at lists.osuosl.org
> Subject: [Intel-wired-lan] [PATCH S14 05/15] ice: Gather the rx buf clean-up
> logic for better reuse
> 
> From: Maciej Fijalkowski <maciej.fijalkowski@intel.com>
> 
> Pull out the code responsible for page counting and buffer recycling so that it
> will be possible to clean up the rx buffers in cases where we won't
> allocate an skb (e.g. XDP).
> 
> Signed-off-by: Maciej Fijalkowski <maciej.fijalkowski@intel.com>
> Signed-off-by: Anirudh Venkataramanan
> <anirudh.venkataramanan@intel.com>
> ---
> [Anirudh Venkataramanan <anirudh.venkataramanan@intel.com> cleaned
> up commit message]
> ---
>  drivers/net/ethernet/intel/ice/ice_txrx.c | 76 ++++++++++++++++++++-----
> ------
>  1 file changed, 50 insertions(+), 26 deletions(-)

Tested-by: Andrew Bowers <andrewx.bowers@intel.com>



^ permalink raw reply	[flat|nested] 31+ messages in thread

* [Intel-wired-lan] [PATCH S14 06/15] ice: Limit the ice_add_rx_frag to frag addition
  2019-02-13 18:51 ` [Intel-wired-lan] [PATCH S14 06/15] ice: Limit the ice_add_rx_frag to frag addition Anirudh Venkataramanan
@ 2019-02-28 23:16   ` Bowers, AndrewX
  0 siblings, 0 replies; 31+ messages in thread
From: Bowers, AndrewX @ 2019-02-28 23:16 UTC (permalink / raw)
  To: intel-wired-lan

> -----Original Message-----
> From: Intel-wired-lan [mailto:intel-wired-lan-bounces at osuosl.org] On
> Behalf Of Anirudh Venkataramanan
> Sent: Wednesday, February 13, 2019 10:51 AM
> To: intel-wired-lan at lists.osuosl.org
> Subject: [Intel-wired-lan] [PATCH S14 06/15] ice: Limit the ice_add_rx_frag to
> frag addition
> 
> From: Maciej Fijalkowski <maciej.fijalkowski@intel.com>
> 
> Refactor ice_fetch_rx_buf and ice_add_rx_frag so that we have standalone
> functions that do either the skb construction or the frag addition to a
> previously constructed skb.
> 
> The skb handling between rx_bufs is spread among various functions.
> ice_get_rx_buf retrieves the skb pointer from the rx_buf; if it is a NULL
> pointer, we call ice_construct_skb, otherwise we add a frag to the current
> skb via ice_add_rx_frag. Then, in ice_put_rx_buf, the skb pointer that
> belongs to the rx_buf is cleared. Furthermore, if the current frame is not
> an EOP frame, we assign the current skb to the rx_buf pointed to by the
> updated next_to_clean indicator.
> 
> What is more, during buffer reuse, assign each member of ice_rx_buf
> individually to avoid an unnecessary copy of the skb.
> 
> Last but not least, this logic split will allow for better code reuse when
> adding support for build_skb.
> 
> Signed-off-by: Maciej Fijalkowski <maciej.fijalkowski@intel.com>
> Signed-off-by: Anirudh Venkataramanan
> <anirudh.venkataramanan@intel.com>
> ---
> [Anirudh Venkataramanan <anirudh.venkataramanan@intel.com> cleaned
> up commit message]
> ---
>  drivers/net/ethernet/intel/ice/ice_txrx.c | 160 +++++++++++++++-----------
> ----
>  1 file changed, 79 insertions(+), 81 deletions(-)

Tested-by: Andrew Bowers <andrewx.bowers@intel.com>



^ permalink raw reply	[flat|nested] 31+ messages in thread
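
Condensed, the per-descriptor flow described above looks roughly like this
inside the clean-up loop (signatures simplified from the series):

	rx_buf = ice_get_rx_buf(rx_ring, &skb, size);

	if (skb)
		ice_add_rx_frag(rx_buf, skb, size);
	else
		skb = ice_construct_skb(rx_ring, rx_buf, size);

	/* recycle or free the page and clear rx_buf->skb */
	ice_put_rx_buf(rx_ring, rx_buf);

	/* a non-EOP frame leaves the skb attached to the rx_buf at the
	 * updated next_to_clean index for the next iteration
	 */
	if (ice_is_non_eop(rx_ring, rx_desc, skb))
		continue;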

* [Intel-wired-lan] [PATCH S14 07/15] ice: map rx buffer pages with DMA attributes
  2019-02-13 18:51 ` [Intel-wired-lan] [PATCH S14 07/15] ice: map rx buffer pages with DMA attributes Anirudh Venkataramanan
@ 2019-02-28 23:16   ` Bowers, AndrewX
  0 siblings, 0 replies; 31+ messages in thread
From: Bowers, AndrewX @ 2019-02-28 23:16 UTC (permalink / raw)
  To: intel-wired-lan

> -----Original Message-----
> From: Intel-wired-lan [mailto:intel-wired-lan-bounces at osuosl.org] On
> Behalf Of Anirudh Venkataramanan
> Sent: Wednesday, February 13, 2019 10:51 AM
> To: intel-wired-lan at lists.osuosl.org
> Subject: [Intel-wired-lan] [PATCH S14 07/15] ice: map rx buffer pages with
> DMA attributes
> 
> From: Maciej Fijalkowski <maciej.fijalkowski@intel.com>
> 
> Provide DMA_ATTR_WEAK_ORDERING and DMA_ATTR_SKIP_CPU_SYNC
> attributes to the DMA API during the mapping operations on rx side. With
> this change the non-x86 platforms will be able to sync only with what is being
> used (2k buffer) instead of entire page. This should yield a slight
> performance improvement.
> 
> Furthermore, DMA unmap may destroy the changes that were made to the
> buffer by CPU when platform is not a x86 one. DMA_ATTR_SKIP_CPU_SYNC
> attribute usage fixes this issue.
> 
> Also add a sync_single_for_device call during the rx buffer assignment, to
> make sure that the cache lines are cleared before device attempting to write
> to the buffer.
> 
> Signed-off-by: Maciej Fijalkowski <maciej.fijalkowski@intel.com>
> Signed-off-by: Anirudh Venkataramanan
> <anirudh.venkataramanan@intel.com>
> ---
> [Anirudh Venkataramanan <anirudh.venkataramanan@intel.com> cleaned
> up commit message]
> ---
>  drivers/net/ethernet/intel/ice/ice_txrx.c | 24 ++++++++++++++++++++----
> drivers/net/ethernet/intel/ice/ice_txrx.h |  3 +++
>  2 files changed, 23 insertions(+), 4 deletions(-)

Tested-by: Andrew Bowers <andrewx.bowers@intel.com>



^ permalink raw reply	[flat|nested] 31+ messages in thread
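
A sketch of the mapping and sync calls involved; ICE_RX_DMA_ATTR is assumed
to be defined by the patch as the OR of the two attributes:

	dma = dma_map_page_attrs(rx_ring->dev, page, 0, PAGE_SIZE,
				 DMA_FROM_DEVICE,
				 DMA_ATTR_WEAK_ORDERING |
				 DMA_ATTR_SKIP_CPU_SYNC);
	if (dma_mapping_error(rx_ring->dev, dma))
		return false;

	/* the CPU sync was skipped at map time, so explicitly sync just
	 * the region the hardware will write before handing it over
	 */
	dma_sync_single_range_for_device(rx_ring->dev, dma, bi->page_offset,
					 ICE_RXBUF_2048, DMA_FROM_DEVICE);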

* [Intel-wired-lan] [PATCH S14 08/15] ice: Prevent unintended multiple chain resets
  2019-02-13 18:51 ` [Intel-wired-lan] [PATCH S14 08/15] ice: Prevent unintended multiple chain resets Anirudh Venkataramanan
@ 2019-02-28 23:18   ` Bowers, AndrewX
  0 siblings, 0 replies; 31+ messages in thread
From: Bowers, AndrewX @ 2019-02-28 23:18 UTC (permalink / raw)
  To: intel-wired-lan

> -----Original Message-----
> From: Intel-wired-lan [mailto:intel-wired-lan-bounces at osuosl.org] On
> Behalf Of Anirudh Venkataramanan
> Sent: Wednesday, February 13, 2019 10:51 AM
> To: intel-wired-lan at lists.osuosl.org
> Subject: [Intel-wired-lan] [PATCH S14 08/15] ice: Prevent unintended
> multiple chain resets
> 
> From: Dave Ertman <david.m.ertman@intel.com>
> 
> In the current implementation of ice_reset_subtask, if multiple reset types
> are set in pf->state, only the most intrusive one is meant to be performed,
> but the bits requesting the other types are not being cleared. This
> would lead to another reset being performed the next time the service task
> is scheduled.
> 
> Change the flow of ice_reset_subtask so that all reset request bits in
> pf->state are cleared, and we still perform the most intrusive of the
> resets requested.
> 
> Signed-off-by: Dave Ertman <david.m.ertman@intel.com>
> Signed-off-by: Anirudh Venkataramanan
> <anirudh.venkataramanan@intel.com>
> ---
>  drivers/net/ethernet/intel/ice/ice_main.c | 10 ++++++++--
>  1 file changed, 8 insertions(+), 2 deletions(-)

Tested-by: Andrew Bowers <andrewx.bowers@intel.com>



^ permalink raw reply	[flat|nested] 31+ messages in thread
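
The fix amounts to latching and clearing every request bit, ordered from
least to most intrusive so the strongest request wins. A sketch consistent
with the description; the CORER/GLOBR bit names follow the PFR naming and
are assumptions here:

	enum ice_reset_req reset_type = ICE_RESET_INVAL;

	if (test_and_clear_bit(__ICE_PFR_REQ, pf->state))
		reset_type = ICE_RESET_PFR;
	if (test_and_clear_bit(__ICE_CORER_REQ, pf->state))
		reset_type = ICE_RESET_CORER;
	if (test_and_clear_bit(__ICE_GLOBR_REQ, pf->state))
		reset_type = ICE_RESET_GLOBR;

	if (reset_type == ICE_RESET_INVAL)
		return;	/* nothing requested */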

* [Intel-wired-lan] [PATCH S14 09/15] ice: change VF VSI tc info along with num_queues
  2019-02-13 18:51 ` [Intel-wired-lan] [PATCH S14 09/15] ice: change VF VSI tc info along with num_queues Anirudh Venkataramanan
@ 2019-02-28 23:45   ` Bowers, AndrewX
  0 siblings, 0 replies; 31+ messages in thread
From: Bowers, AndrewX @ 2019-02-28 23:45 UTC (permalink / raw)
  To: intel-wired-lan

> -----Original Message-----
> From: Intel-wired-lan [mailto:intel-wired-lan-bounces at osuosl.org] On
> Behalf Of Anirudh Venkataramanan
> Sent: Wednesday, February 13, 2019 10:51 AM
> To: intel-wired-lan at lists.osuosl.org
> Subject: [Intel-wired-lan] [PATCH S14 09/15] ice: change VF VSI tc info along
> with num_queues
> 
> From: Preethi Banala <preethi.banala@intel.com>
> 
> Update the VF VSI tc info along with vsi->num_txq/num_rxq when the VF
> requests to configure queues.
> 
> Signed-off-by: Preethi Banala <preethi.banala@intel.com>
> Signed-off-by: Anirudh Venkataramanan
> <anirudh.venkataramanan@intel.com>
> ---
>  drivers/net/ethernet/intel/ice/ice_virtchnl_pf.c | 3 +++
>  1 file changed, 3 insertions(+)
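
Given the three-line diffstat, the change presumably amounts to something
like this; the tc_info field names are assumptions based on ice_type.h of
this vintage, not quoted from the patch:

	/* keep TC 0's per-TC queue counts in step with the VSI totals */
	vsi->num_txq = qci->num_queue_pairs;
	vsi->num_rxq = qci->num_queue_pairs;
	vsi->tc_cfg.tc_info[0].qcount_tx = qci->num_queue_pairs;
	vsi->tc_cfg.tc_info[0].qcount_rx = qci->num_queue_pairs;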

Tested-by: Andrew Bowers <andrewx.bowers@intel.com>



^ permalink raw reply	[flat|nested] 31+ messages in thread

* [Intel-wired-lan] [PATCH S14 10/15] ice: add and use new ice_for_each_traffic_class() macro
  2019-02-13 18:51 ` [Intel-wired-lan] [PATCH S14 10/15] ice: add and use new ice_for_each_traffic_class() macro Anirudh Venkataramanan
@ 2019-02-28 23:46   ` Bowers, AndrewX
  0 siblings, 0 replies; 31+ messages in thread
From: Bowers, AndrewX @ 2019-02-28 23:46 UTC (permalink / raw)
  To: intel-wired-lan

> -----Original Message-----
> From: Intel-wired-lan [mailto:intel-wired-lan-bounces at osuosl.org] On
> Behalf Of Anirudh Venkataramanan
> Sent: Wednesday, February 13, 2019 10:51 AM
> To: intel-wired-lan at lists.osuosl.org
> Subject: [Intel-wired-lan] [PATCH S14 10/15] ice: add and use new
> ice_for_each_traffic_class() macro
> 
> From: Bruce Allan <bruce.w.allan@intel.com>
> 
> There are numerous for() loops iterating over all possible traffic
> classes. Use a simple iterator macro instead to make the code cleaner.
> 
> Signed-off-by: Bruce Allan <bruce.w.allan@intel.com>
> Signed-off-by: Anirudh Venkataramanan
> <anirudh.venkataramanan@intel.com>
> ---
>  drivers/net/ethernet/intel/ice/ice_common.c | 2 +-
>  drivers/net/ethernet/intel/ice/ice_lib.c    | 4 ++--
>  drivers/net/ethernet/intel/ice/ice_sched.c  | 2 +-
>  drivers/net/ethernet/intel/ice/ice_type.h   | 3 +++
>  4 files changed, 7 insertions(+), 4 deletions(-)
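
The macro itself is a one-liner; per the diffstat it lands in ice_type.h
and should look essentially like:

	#define ice_for_each_traffic_class(_i) \
		for ((_i) = 0; (_i) < ICE_MAX_TRAFFIC_CLASS; (_i)++)

so an open-coded "for (i = 0; i < ICE_MAX_TRAFFIC_CLASS; i++)" becomes
"ice_for_each_traffic_class(i)" at each call site.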

Tested-by: Andrew Bowers <andrewx.bowers@intel.com>



^ permalink raw reply	[flat|nested] 31+ messages in thread

* [Intel-wired-lan] [PATCH S14 11/15] ice: Create a generic name for the ice_rx_flg64_bits structure
  2019-02-13 18:51 ` [Intel-wired-lan] [PATCH S14 11/15] ice: Create a generic name for the ice_rx_flg64_bits structure Anirudh Venkataramanan
@ 2019-02-28 23:46   ` Bowers, AndrewX
  0 siblings, 0 replies; 31+ messages in thread
From: Bowers, AndrewX @ 2019-02-28 23:46 UTC (permalink / raw)
  To: intel-wired-lan

> -----Original Message-----
> From: Intel-wired-lan [mailto:intel-wired-lan-bounces at osuosl.org] On
> Behalf Of Anirudh Venkataramanan
> Sent: Wednesday, February 13, 2019 10:51 AM
> To: intel-wired-lan at lists.osuosl.org
> Subject: [Intel-wired-lan] [PATCH S14 11/15] ice: Create a generic name for
> the ice_rx_flg64_bits structure
> 
> From: Chinh T Cao <chinh.t.cao@intel.com>
> 
> This structure is used to define the packet flags. These flags apply to
> both Tx and Rx packets, so rename the structure from ice_rx_flag64_bits
> to ice_flg64_bits and update its member definitions accordingly.
> 
> Signed-off-by: Chinh T Cao <chinh.t.cao@intel.com>
> Reviewed-by: Bruce Allan <bruce.w.allan@intel.com>
> Signed-off-by: Anirudh Venkataramanan
> <anirudh.venkataramanan@intel.com>
> ---
> [Anirudh Venkataramanan <anirudh.venkataramanan@intel.com> cleaned
> up commit message]
> ---
>  drivers/net/ethernet/intel/ice/ice_common.c    | 26 ++++++++++----------
>  drivers/net/ethernet/intel/ice/ice_lan_tx_rx.h | 34 +++++++++++++-------------
>  2 files changed, 30 insertions(+), 30 deletions(-)

Tested-by: Andrew Bowers <andrewx.bowers@intel.com>



^ permalink raw reply	[flat|nested] 31+ messages in thread

* [Intel-wired-lan] [PATCH S14 12/15] ice: Remove unnecessary newlines from log messages
  2019-02-13 18:51 ` [Intel-wired-lan] [PATCH S14 12/15] ice: Remove unnecessary newlines from log messages Anirudh Venkataramanan
@ 2019-02-28 23:47   ` Bowers, AndrewX
  0 siblings, 0 replies; 31+ messages in thread
From: Bowers, AndrewX @ 2019-02-28 23:47 UTC (permalink / raw)
  To: intel-wired-lan

> -----Original Message-----
> From: Intel-wired-lan [mailto:intel-wired-lan-bounces at osuosl.org] On
> Behalf Of Anirudh Venkataramanan
> Sent: Wednesday, February 13, 2019 10:51 AM
> To: intel-wired-lan at lists.osuosl.org
> Subject: [Intel-wired-lan] [PATCH S14 12/15] ice: Remove unnecessary
> newlines from log messages
> 
> From: Jeremiah Kyle <jeremiah.kyle@intel.com>
> 
> Two log messages contained newlines in the middle of the format string.
> This resulted in unexpected driver log output.
> 
> This patch removes the newlines to restore consistency with the rest of the
> driver log messages.
> 
> Signed-off-by: Jeremiah Kyle <jeremiah.kyle@intel.com>
> Signed-off-by: Anirudh Venkataramanan
> <anirudh.venkataramanan@intel.com>
> ---
>  drivers/net/ethernet/intel/ice/ice_virtchnl_pf.c | 4 ++--
>  1 file changed, 2 insertions(+), 2 deletions(-)
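
To illustrate the failure mode, with a made-up message rather than one of
the two the patch actually touches:

	/* a newline mid-string splits one event across two log lines */
	dev_err(dev, "VF %d failed opcode %d,\n error %d\n", id, op, err);
	/* fixed: one message, a single trailing newline */
	dev_err(dev, "VF %d failed opcode %d, error %d\n", id, op, err);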

Tested-by: Andrew Bowers <andrewx.bowers@intel.com>



^ permalink raw reply	[flat|nested] 31+ messages in thread

* [Intel-wired-lan] [PATCH S14 13/15] ice: use virt channel status codes
  2019-02-13 18:51 ` [Intel-wired-lan] [PATCH S14 13/15] ice: use virt channel status codes Anirudh Venkataramanan
@ 2019-02-28 23:47   ` Bowers, AndrewX
  0 siblings, 0 replies; 31+ messages in thread
From: Bowers, AndrewX @ 2019-02-28 23:47 UTC (permalink / raw)
  To: intel-wired-lan

> -----Original Message-----
> From: Intel-wired-lan [mailto:intel-wired-lan-bounces at osuosl.org] On
> Behalf Of Anirudh Venkataramanan
> Sent: Wednesday, February 13, 2019 10:51 AM
> To: intel-wired-lan at lists.osuosl.org
> Subject: [Intel-wired-lan] [PATCH S14 13/15] ice: use virt channel status
> codes
> 
> From: Mitch Williams <mitch.a.williams@intel.com>
> 
> When communicating with the AVF driver, we need to use the status codes
> from virtchnl.h, not our own ice-specific codes. Without this, when an error
> occurs, the VF will report nonsensical results.
> 
> NOTE: this depends on changes made to include/linux/avf/virtchnl.h by
> commit bb58fd7eeffc ("i40e: Update status codes")
> 
> Signed-off-by: Mitch Williams <mitch.a.williams@intel.com>
> Signed-off-by: Anirudh Venkataramanan
> <anirudh.venkataramanan@intel.com>
> ---
> [Anirudh Venkataramanan <anirudh.venkataramanan@intel.com> cleaned
> up commit message]
> ---
>  drivers/net/ethernet/intel/ice/ice_virtchnl_pf.c | 273 +++++++++++++----------
>  1 file changed, 154 insertions(+), 119 deletions(-)
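
The conversion pattern looks roughly like this. The virtchnl codes are
real ones from include/linux/avf/virtchnl.h; the surrounding names are
abbreviated from the driver, so treat this as a sketch:

	enum virtchnl_status_code v_ret = VIRTCHNL_STATUS_SUCCESS;

	if (!test_bit(ICE_VF_STATE_ACTIVE, vf->vf_states))
		v_ret = VIRTCHNL_STATUS_ERR_PARAM; /* was ICE_ERR_PARAM */

	/* the VF now receives a code it knows how to interpret */
	return ice_vc_send_msg_to_vf(vf, v_opcode, v_ret, NULL, 0);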

Tested-by: Andrew Bowers <andrewx.bowers@intel.com>



^ permalink raw reply	[flat|nested] 31+ messages in thread

* [Intel-wired-lan] [PATCH S14 14/15] ice: Put __ICE_PREPARED_FOR_RESET check in ice_prepare_for_reset
  2019-02-13 18:51 ` [Intel-wired-lan] [PATCH S14 14/15] ice: Put __ICE_PREPARED_FOR_RESET check in ice_prepare_for_reset Anirudh Venkataramanan
@ 2019-02-28 23:48   ` Bowers, AndrewX
  0 siblings, 0 replies; 31+ messages in thread
From: Bowers, AndrewX @ 2019-02-28 23:48 UTC (permalink / raw)
  To: intel-wired-lan

> -----Original Message-----
> From: Intel-wired-lan [mailto:intel-wired-lan-bounces at osuosl.org] On
> Behalf Of Anirudh Venkataramanan
> Sent: Wednesday, February 13, 2019 10:51 AM
> To: intel-wired-lan at lists.osuosl.org
> Subject: [Intel-wired-lan] [PATCH S14 14/15] ice: Put
> __ICE_PREPARED_FOR_RESET check in ice_prepare_for_reset
> 
> From: Brett Creeley <brett.creeley@intel.com>
> 
> Currently we check whether the __ICE_PREPARED_FOR_RESET bit is set
> before calling ice_prepare_for_reset() in ice_reset_subtask(), but we
> don't check that bit in ice_do_reset() before calling
> ice_prepare_for_reset(). This is inconsistent and can cause issues if
> ice_prepare_for_reset() is called prior to ice_do_reset(). Fix this by
> checking the __ICE_PREPARED_FOR_RESET bit inside
> ice_prepare_for_reset() itself.
> 
> Signed-off-by: Brett Creeley <brett.creeley@intel.com>
> Signed-off-by: Anirudh Venkataramanan
> <anirudh.venkataramanan@intel.com>
> ---
>  drivers/net/ethernet/intel/ice/ice_main.c | 7 +++++--
>  1 file changed, 5 insertions(+), 2 deletions(-)
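
That is, the guard moves into the callee so every caller is covered; a
minimal sketch of the resulting flow:

	static void ice_prepare_for_reset(struct ice_pf *pf)
	{
		/* a second call becomes a no-op, whoever calls first */
		if (test_bit(__ICE_PREPARED_FOR_RESET, pf->state))
			return;

		/* ... quiesce VFs, stop traffic, shut down control queues ... */

		set_bit(__ICE_PREPARED_FOR_RESET, pf->state);
	}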

Tested-by: Andrew Bowers <andrewx.bowers@intel.com>

^ permalink raw reply	[flat|nested] 31+ messages in thread

* [Intel-wired-lan] [PATCH S14 15/15] ice: Implement pci_error_handler ops
  2019-02-13 18:51 ` [Intel-wired-lan] [PATCH S14 15/15] ice: Implement pci_error_handler ops Anirudh Venkataramanan
@ 2019-02-28 23:48   ` Bowers, AndrewX
  0 siblings, 0 replies; 31+ messages in thread
From: Bowers, AndrewX @ 2019-02-28 23:48 UTC (permalink / raw)
  To: intel-wired-lan

> -----Original Message-----
> From: Intel-wired-lan [mailto:intel-wired-lan-bounces at osuosl.org] On
> Behalf Of Anirudh Venkataramanan
> Sent: Wednesday, February 13, 2019 10:51 AM
> To: intel-wired-lan at lists.osuosl.org
> Subject: [Intel-wired-lan] [PATCH S14 15/15] ice: Implement
> pci_error_handler ops
> 
> From: Brett Creeley <brett.creeley@intel.com>
> 
> This patch implements the following pci_error_handler ops:
> 	.error_detected = ice_pci_err_detected
> 	.slot_reset = ice_pci_err_slot_reset
> 	.reset_notify = ice_pci_err_reset_notify
> 	.reset_prepare = ice_pci_err_reset_prepare
> 	.reset_done = ice_pci_err_reset_done
> 	.resume = ice_pci_err_resume
> 
> Signed-off-by: Brett Creeley <brett.creeley@intel.com>
> Signed-off-by: Anirudh Venkataramanan
> <anirudh.venkataramanan@intel.com>
> ---
> [Anirudh Venkataramanan <anirudh.venkataramanan@intel.com> cleaned
> up commit message] [Anirudh Venkataramanan
> <anirudh.venkataramanan@intel.com> fixed build failures]
> ---
>  drivers/net/ethernet/intel/ice/ice_main.c | 151
> ++++++++++++++++++++++++++++++
>  1 file changed, 151 insertions(+)
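
Wiring these up follows the usual pci_driver pattern, roughly as below.
Note that .reset_notify only exists in older kernels, where it predates
the .reset_prepare/.reset_done pair; that version skew is a plausible
source of the build failures mentioned above, so it is omitted here:

	static const struct pci_error_handlers ice_pci_err_handler = {
		.error_detected = ice_pci_err_detected,
		.slot_reset = ice_pci_err_slot_reset,
		.reset_prepare = ice_pci_err_reset_prepare,
		.reset_done = ice_pci_err_reset_done,
		.resume = ice_pci_err_resume,
	};

	static struct pci_driver ice_driver = {
		.name = KBUILD_MODNAME,
		/* probe/remove/etc. elided */
		.err_handler = &ice_pci_err_handler,
	};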

Tested-by: Andrew Bowers <andrewx.bowers@intel.com>



^ permalink raw reply	[flat|nested] 31+ messages in thread

end of thread, other threads:[~2019-02-28 23:48 UTC | newest]

Thread overview: 31+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2019-02-13 18:51 [Intel-wired-lan] [PATCH S14 00/15] Implementation updates for ice Anirudh Venkataramanan
2019-02-13 18:51 ` [Intel-wired-lan] [PATCH S14 01/15] ice: Retrieve rx_buf in separate function Anirudh Venkataramanan
2019-02-28 22:24   ` Bowers, AndrewX
2019-02-13 18:51 ` [Intel-wired-lan] [PATCH S14 02/15] ice: Pull out page reuse checks onto " Anirudh Venkataramanan
2019-02-28 22:57   ` Bowers, AndrewX
2019-02-13 18:51 ` [Intel-wired-lan] [PATCH S14 03/15] ice: Get rid of ice_pull_tail Anirudh Venkataramanan
2019-02-28 23:00   ` Bowers, AndrewX
2019-02-13 18:51 ` [Intel-wired-lan] [PATCH S14 04/15] ice: Introduce bulk update for page count Anirudh Venkataramanan
2019-02-28 23:02   ` Bowers, AndrewX
2019-02-13 18:51 ` [Intel-wired-lan] [PATCH S14 05/15] ice: Gather the rx buf clean-up logic for better reuse Anirudh Venkataramanan
2019-02-28 23:04   ` Bowers, AndrewX
2019-02-13 18:51 ` [Intel-wired-lan] [PATCH S14 06/15] ice: Limit the ice_add_rx_frag to frag addition Anirudh Venkataramanan
2019-02-28 23:16   ` Bowers, AndrewX
2019-02-13 18:51 ` [Intel-wired-lan] [PATCH S14 07/15] ice: map rx buffer pages with DMA attributes Anirudh Venkataramanan
2019-02-28 23:16   ` Bowers, AndrewX
2019-02-13 18:51 ` [Intel-wired-lan] [PATCH S14 08/15] ice: Prevent unintended multiple chain resets Anirudh Venkataramanan
2019-02-28 23:18   ` Bowers, AndrewX
2019-02-13 18:51 ` [Intel-wired-lan] [PATCH S14 09/15] ice: change VF VSI tc info along with num_queues Anirudh Venkataramanan
2019-02-28 23:45   ` Bowers, AndrewX
2019-02-13 18:51 ` [Intel-wired-lan] [PATCH S14 10/15] ice: add and use new ice_for_each_traffic_class() macro Anirudh Venkataramanan
2019-02-28 23:46   ` Bowers, AndrewX
2019-02-13 18:51 ` [Intel-wired-lan] [PATCH S14 11/15] ice: Create a generic name for the ice_rx_flg64_bits structure Anirudh Venkataramanan
2019-02-28 23:46   ` Bowers, AndrewX
2019-02-13 18:51 ` [Intel-wired-lan] [PATCH S14 12/15] ice: Remove unnecessary newlines from log messages Anirudh Venkataramanan
2019-02-28 23:47   ` Bowers, AndrewX
2019-02-13 18:51 ` [Intel-wired-lan] [PATCH S14 13/15] ice: use virt channel status codes Anirudh Venkataramanan
2019-02-28 23:47   ` Bowers, AndrewX
2019-02-13 18:51 ` [Intel-wired-lan] [PATCH S14 14/15] ice: Put __ICE_PREPARED_FOR_RESET check in ice_prepare_for_reset Anirudh Venkataramanan
2019-02-28 23:48   ` Bowers, AndrewX
2019-02-13 18:51 ` [Intel-wired-lan] [PATCH S14 15/15] ice: Implement pci_error_handler ops Anirudh Venkataramanan
2019-02-28 23:48   ` Bowers, AndrewX
