From: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
To: davem@davemloft.net
Cc: Mitch Williams <mitch.a.williams@intel.com>,
	netdev@vger.kernel.org, nhorman@redhat.com, sassmann@redhat.com,
	Andrew Bowers <andrewx.bowers@intel.com>,
	Jeff Kirsher <jeffrey.t.kirsher@intel.com>
Subject: [net-next 08/15] ice: allow empty Rx descriptors
Date: Fri,  9 Aug 2019 11:31:32 -0700
Message-ID: <20190809183139.30871-9-jeffrey.t.kirsher@intel.com>
In-Reply-To: <20190809183139.30871-1-jeffrey.t.kirsher@intel.com>

From: Mitch Williams <mitch.a.williams@intel.com>

In some circumstances, the hardware will hand us a receive descriptor
which has no data attached but is otherwise valid. The receive code was
improperly ignoring these descriptors, which results in an infinite loop.

To fix this, change the receive code to process all descriptors,
regardless of the size of the associated data. Add checks to the
memory-handling functions to allow for zero size.
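
The key pattern, shown in the hunks below, is an early return from each
memory-handling helper when the descriptor carried no data, so the
descriptor is still consumed and the ring keeps advancing. A rough,
self-contained sketch of that guard (names simplified for illustration;
this is not the driver's actual function):

	/* Illustrative sketch only: simplified names, not the real ice
	 * driver API.  Kernel context, requires <linux/skbuff.h>.
	 */
	static void example_add_rx_frag(struct sk_buff *skb, struct page *page,
					unsigned int offset, unsigned int size,
					unsigned int truesize)
	{
		/* An empty but otherwise valid descriptor has nothing to
		 * attach; bail out before touching the page so the caller
		 * can still recycle the buffer and move past the descriptor.
		 */
		if (!size)
			return;

		skb_add_rx_frag(skb, skb_shinfo(skb)->nr_frags, page,
				offset, size, truesize);
	}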

Signed-off-by: Mitch Williams <mitch.a.williams@intel.com>
Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
---
 drivers/net/ethernet/intel/ice/ice_txrx.c | 15 ++++++++++++---
 1 file changed, 12 insertions(+), 3 deletions(-)

diff --git a/drivers/net/ethernet/intel/ice/ice_txrx.c b/drivers/net/ethernet/intel/ice/ice_txrx.c
index c88e0701e1d7..e5c4c9139e54 100644
--- a/drivers/net/ethernet/intel/ice/ice_txrx.c
+++ b/drivers/net/ethernet/intel/ice/ice_txrx.c
@@ -607,6 +607,8 @@ ice_add_rx_frag(struct ice_rx_buf *rx_buf, struct sk_buff *skb,
 	unsigned int truesize = ICE_RXBUF_2048;
 #endif
 
+	if (!size)
+		return;
 	skb_add_rx_frag(skb, skb_shinfo(skb)->nr_frags, rx_buf->page,
 			rx_buf->page_offset, size, truesize);
 
@@ -662,6 +664,8 @@ ice_get_rx_buf(struct ice_ring *rx_ring, struct sk_buff **skb,
 	prefetchw(rx_buf->page);
 	*skb = rx_buf->skb;
 
+	if (!size)
+		return rx_buf;
 	/* we are reusing so sync this buffer for CPU use */
 	dma_sync_single_range_for_cpu(rx_ring->dev, rx_buf->dma,
 				      rx_buf->page_offset, size,
@@ -745,8 +749,11 @@ ice_construct_skb(struct ice_ring *rx_ring, struct ice_rx_buf *rx_buf,
  */
 static void ice_put_rx_buf(struct ice_ring *rx_ring, struct ice_rx_buf *rx_buf)
 {
-		/* hand second half of page back to the ring */
+	if (!rx_buf)
+		return;
+
 	if (ice_can_reuse_rx_page(rx_buf)) {
+		/* hand second half of page back to the ring */
 		ice_reuse_rx_page(rx_ring, rx_buf);
 		rx_ring->rx_stats.page_reuse_count++;
 	} else {
@@ -1031,8 +1038,9 @@ static int ice_clean_rx_irq(struct ice_ring *rx_ring, int budget)
 		size = le16_to_cpu(rx_desc->wb.pkt_len) &
 			ICE_RX_FLX_DESC_PKT_LEN_M;
 
+		/* retrieve a buffer from the ring */
 		rx_buf = ice_get_rx_buf(rx_ring, &skb, size);
-		/* allocate (if needed) and populate skb */
+
 		if (skb)
 			ice_add_rx_frag(rx_buf, skb, size);
 		else
@@ -1041,7 +1049,8 @@ static int ice_clean_rx_irq(struct ice_ring *rx_ring, int budget)
 		/* exit if we failed to retrieve a buffer */
 		if (!skb) {
 			rx_ring->rx_stats.alloc_buf_failed++;
-			rx_buf->pagecnt_bias++;
+			if (rx_buf)
+				rx_buf->pagecnt_bias++;
 			break;
 		}
 
-- 
2.21.0


