* [PATCH 0/3] xdp: don't mix XDP_TX and XDP_REDIRECT flush ops
@ 2018-06-26 15:39 Jesper Dangaard Brouer
  2018-06-26 15:39 ` [PATCH 1/3] ixgbe: split XDP_TX tail and XDP_REDIRECT map flushing Jesper Dangaard Brouer
                   ` (3 more replies)
  0 siblings, 4 replies; 8+ messages in thread
From: Jesper Dangaard Brouer @ 2018-06-26 15:39 UTC (permalink / raw)
  To: netdev, Jesper Dangaard Brouer
  Cc: John Fastabend, Jason Wang, Daniel Borkmann,
	Björn Töpel, Alexei Starovoitov

Fix driver logic that combines the XDP_TX flush and XDP_REDIRECT map
flushing.  These are two different XDP xmit modes, and it is clearly
wrong to invoke both types of flush operations when only one of the
XDP xmit modes is used.

---
Unsure which git tree to send this against, so I'll leave it up to
the patchwork assigner ;-)


Jesper Dangaard Brouer (3):
      ixgbe: split XDP_TX tail and XDP_REDIRECT map flushing
      i40e: split XDP_TX tail and XDP_REDIRECT map flushing
      virtio_net: split XDP_TX kick and XDP_REDIRECT map flushing


 drivers/net/ethernet/intel/i40e/i40e_txrx.c   |   24 +++++++++++++-------
 drivers/net/ethernet/intel/ixgbe/ixgbe_main.c |   24 ++++++++++++--------
 drivers/net/virtio_net.c                      |   30 ++++++++++++++++---------
 3 files changed, 48 insertions(+), 30 deletions(-)

^ permalink raw reply	[flat|nested] 8+ messages in thread

* [PATCH 1/3] ixgbe: split XDP_TX tail and XDP_REDIRECT map flushing
  2018-06-26 15:39 [PATCH 0/3] xdp: don't mix XDP_TX and XDP_REDIRECT flush ops Jesper Dangaard Brouer
@ 2018-06-26 15:39 ` Jesper Dangaard Brouer
  2018-06-26 15:39 ` [PATCH 2/3] i40e: " Jesper Dangaard Brouer
                   ` (2 subsequent siblings)
  3 siblings, 0 replies; 8+ messages in thread
From: Jesper Dangaard Brouer @ 2018-06-26 15:39 UTC (permalink / raw)
  To: netdev, Jesper Dangaard Brouer
  Cc: John Fastabend, Jason Wang, Daniel Borkmann,
	Björn Töpel, Alexei Starovoitov

The driver was combining the XDP_TX tail flush and XDP_REDIRECT
map flushing (xdp_do_flush_map).  This is suboptimal; these two
flush operations should be kept separate.

Fixes: 11393cc9b9be ("xdp: Add batching support to redirect map")
Signed-off-by: Jesper Dangaard Brouer <brouer@redhat.com>
---
 drivers/net/ethernet/intel/ixgbe/ixgbe_main.c |   24 ++++++++++++++----------
 1 file changed, 14 insertions(+), 10 deletions(-)

diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c b/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c
index 4929f7265598..5f8a969638b2 100644
--- a/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c
+++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c
@@ -2186,9 +2186,10 @@ static struct sk_buff *ixgbe_build_skb(struct ixgbe_ring *rx_ring,
 	return skb;
 }
 
-#define IXGBE_XDP_PASS 0
-#define IXGBE_XDP_CONSUMED 1
-#define IXGBE_XDP_TX 2
+#define IXGBE_XDP_PASS		0
+#define IXGBE_XDP_CONSUMED	BIT(0)
+#define IXGBE_XDP_TX		BIT(1)
+#define IXGBE_XDP_REDIR		BIT(2)
 
 static int ixgbe_xmit_xdp_ring(struct ixgbe_adapter *adapter,
 			       struct xdp_frame *xdpf);
@@ -2225,7 +2226,7 @@ static struct sk_buff *ixgbe_run_xdp(struct ixgbe_adapter *adapter,
 	case XDP_REDIRECT:
 		err = xdp_do_redirect(adapter->netdev, xdp, xdp_prog);
 		if (!err)
-			result = IXGBE_XDP_TX;
+			result = IXGBE_XDP_REDIR;
 		else
 			result = IXGBE_XDP_CONSUMED;
 		break;
@@ -2285,7 +2286,7 @@ static int ixgbe_clean_rx_irq(struct ixgbe_q_vector *q_vector,
 	unsigned int mss = 0;
 #endif /* IXGBE_FCOE */
 	u16 cleaned_count = ixgbe_desc_unused(rx_ring);
-	bool xdp_xmit = false;
+	unsigned int xdp_xmit = 0;
 	struct xdp_buff xdp;
 
 	xdp.rxq = &rx_ring->xdp_rxq;
@@ -2328,8 +2329,10 @@ static int ixgbe_clean_rx_irq(struct ixgbe_q_vector *q_vector,
 		}
 
 		if (IS_ERR(skb)) {
-			if (PTR_ERR(skb) == -IXGBE_XDP_TX) {
-				xdp_xmit = true;
+			unsigned int xdp_res = -PTR_ERR(skb);
+
+			if (xdp_res & (IXGBE_XDP_TX | IXGBE_XDP_REDIR)) {
+				xdp_xmit |= xdp_res;
 				ixgbe_rx_buffer_flip(rx_ring, rx_buffer, size);
 			} else {
 				rx_buffer->pagecnt_bias++;
@@ -2401,7 +2404,10 @@ static int ixgbe_clean_rx_irq(struct ixgbe_q_vector *q_vector,
 		total_rx_packets++;
 	}
 
-	if (xdp_xmit) {
+	if (xdp_xmit & IXGBE_XDP_REDIR)
+		xdp_do_flush_map();
+
+	if (xdp_xmit & IXGBE_XDP_TX) {
 		struct ixgbe_ring *ring = adapter->xdp_ring[smp_processor_id()];
 
 		/* Force memory writes to complete before letting h/w
@@ -2409,8 +2415,6 @@ static int ixgbe_clean_rx_irq(struct ixgbe_q_vector *q_vector,
 		 */
 		wmb();
 		writel(ring->next_to_use, ring->tail);
-
-		xdp_do_flush_map();
 	}
 
 	u64_stats_update_begin(&rx_ring->syncp);

^ permalink raw reply related	[flat|nested] 8+ messages in thread

* [PATCH 2/3] i40e: split XDP_TX tail and XDP_REDIRECT map flushing
  2018-06-26 15:39 [PATCH 0/3] xdp: don't mix XDP_TX and XDP_REDIRECT flush ops Jesper Dangaard Brouer
  2018-06-26 15:39 ` [PATCH 1/3] ixgbe: split XDP_TX tail and XDP_REDIRECT map flushing Jesper Dangaard Brouer
@ 2018-06-26 15:39 ` Jesper Dangaard Brouer
  2018-06-26 16:37     ` [Intel-wired-lan] " Björn Töpel
  2018-06-26 15:39 ` [PATCH 3/3] virtio_net: split XDP_TX kick " Jesper Dangaard Brouer
  2018-06-28  5:29 ` [PATCH 0/3] xdp: don't mix XDP_TX and XDP_REDIRECT flush ops David Miller
  3 siblings, 1 reply; 8+ messages in thread
From: Jesper Dangaard Brouer @ 2018-06-26 15:39 UTC (permalink / raw)
  To: netdev, Jesper Dangaard Brouer
  Cc: John Fastabend, Jason Wang, Daniel Borkmann,
	Björn Töpel, Alexei Starovoitov

The driver was combining the XDP_TX tail flush and XDP_REDIRECT
map flushing (xdp_do_flush_map).  This is suboptimal; these two
flush operations should be kept separate.

It looks like the mistake was copy-pasted from ixgbe.

Fixes: d9314c474d4f ("i40e: add support for XDP_REDIRECT")
Signed-off-by: Jesper Dangaard Brouer <brouer@redhat.com>
---
 drivers/net/ethernet/intel/i40e/i40e_txrx.c |   24 +++++++++++++++---------
 1 file changed, 15 insertions(+), 9 deletions(-)

diff --git a/drivers/net/ethernet/intel/i40e/i40e_txrx.c b/drivers/net/ethernet/intel/i40e/i40e_txrx.c
index 8ffb7454e67c..c1c027743159 100644
--- a/drivers/net/ethernet/intel/i40e/i40e_txrx.c
+++ b/drivers/net/ethernet/intel/i40e/i40e_txrx.c
@@ -2200,9 +2200,10 @@ static bool i40e_is_non_eop(struct i40e_ring *rx_ring,
 	return true;
 }
 
-#define I40E_XDP_PASS 0
-#define I40E_XDP_CONSUMED 1
-#define I40E_XDP_TX 2
+#define I40E_XDP_PASS		0
+#define I40E_XDP_CONSUMED	BIT(0)
+#define I40E_XDP_TX		BIT(1)
+#define I40E_XDP_REDIR		BIT(2)
 
 static int i40e_xmit_xdp_ring(struct xdp_frame *xdpf,
 			      struct i40e_ring *xdp_ring);
@@ -2249,7 +2250,7 @@ static struct sk_buff *i40e_run_xdp(struct i40e_ring *rx_ring,
 		break;
 	case XDP_REDIRECT:
 		err = xdp_do_redirect(rx_ring->netdev, xdp, xdp_prog);
-		result = !err ? I40E_XDP_TX : I40E_XDP_CONSUMED;
+		result = !err ? I40E_XDP_REDIR : I40E_XDP_CONSUMED;
 		break;
 	default:
 		bpf_warn_invalid_xdp_action(act);
@@ -2312,7 +2313,8 @@ static int i40e_clean_rx_irq(struct i40e_ring *rx_ring, int budget)
 	unsigned int total_rx_bytes = 0, total_rx_packets = 0;
 	struct sk_buff *skb = rx_ring->skb;
 	u16 cleaned_count = I40E_DESC_UNUSED(rx_ring);
-	bool failure = false, xdp_xmit = false;
+	unsigned int xdp_xmit = 0;
+	bool failure = false;
 	struct xdp_buff xdp;
 
 	xdp.rxq = &rx_ring->xdp_rxq;
@@ -2373,8 +2375,10 @@ static int i40e_clean_rx_irq(struct i40e_ring *rx_ring, int budget)
 		}
 
 		if (IS_ERR(skb)) {
-			if (PTR_ERR(skb) == -I40E_XDP_TX) {
-				xdp_xmit = true;
+			unsigned int xdp_res = -PTR_ERR(skb);
+
+			if (xdp_res & (I40E_XDP_TX | I40E_XDP_REDIR)) {
+				xdp_xmit |= xdp_res;
 				i40e_rx_buffer_flip(rx_ring, rx_buffer, size);
 			} else {
 				rx_buffer->pagecnt_bias++;
@@ -2428,12 +2432,14 @@ static int i40e_clean_rx_irq(struct i40e_ring *rx_ring, int budget)
 		total_rx_packets++;
 	}
 
-	if (xdp_xmit) {
+	if (xdp_xmit & I40E_XDP_REDIR)
+		xdp_do_flush_map();
+
+	if (xdp_xmit & I40E_XDP_TX) {
 		struct i40e_ring *xdp_ring =
 			rx_ring->vsi->xdp_rings[rx_ring->queue_index];
 
 		i40e_xdp_ring_update_tail(xdp_ring);
-		xdp_do_flush_map();
 	}
 
 	rx_ring->skb = skb;

^ permalink raw reply related	[flat|nested] 8+ messages in thread

* [PATCH 3/3] virtio_net: split XDP_TX kick and XDP_REDIRECT map flushing
  2018-06-26 15:39 [PATCH 0/3] xdp: don't mix XDP_TX and XDP_REDIRECT flush ops Jesper Dangaard Brouer
  2018-06-26 15:39 ` [PATCH 1/3] ixgbe: split XDP_TX tail and XDP_REDIRECT map flushing Jesper Dangaard Brouer
  2018-06-26 15:39 ` [PATCH 2/3] i40e: " Jesper Dangaard Brouer
@ 2018-06-26 15:39 ` Jesper Dangaard Brouer
  2018-06-27 14:56   ` Jason Wang
  2018-06-28  5:29 ` [PATCH 0/3] xdp: don't mix XDP_TX and XDP_REDIRECT flush ops David Miller
  3 siblings, 1 reply; 8+ messages in thread
From: Jesper Dangaard Brouer @ 2018-06-26 15:39 UTC (permalink / raw)
  To: netdev, Jesper Dangaard Brouer
  Cc: John Fastabend, Jason Wang, Daniel Borkmann,
	Björn Töpel, Alexei Starovoitov

The driver was combining XDP_TX virtqueue_kick and XDP_REDIRECT
map flushing (xdp_do_flush_map).  This is suboptimal; these two
flush operations should be kept separate.

The suboptimal behavior was introduced in commit 9267c430c6b6
("virtio-net: add missing virtqueue kick when flushing packets").

Fixes: 9267c430c6b6 ("virtio-net: add missing virtqueue kick when flushing packets")
Signed-off-by: Jesper Dangaard Brouer <brouer@redhat.com>
---
 drivers/net/virtio_net.c |   30 +++++++++++++++++++-----------
 1 file changed, 19 insertions(+), 11 deletions(-)

diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
index 1619ee3070b6..ae47ecf80c2d 100644
--- a/drivers/net/virtio_net.c
+++ b/drivers/net/virtio_net.c
@@ -53,6 +53,10 @@ module_param(napi_tx, bool, 0644);
 /* Amount of XDP headroom to prepend to packets for use by xdp_adjust_head */
 #define VIRTIO_XDP_HEADROOM 256
 
+/* Separating two types of XDP xmit */
+#define VIRTIO_XDP_TX		BIT(0)
+#define VIRTIO_XDP_REDIR	BIT(1)
+
 /* RX packet size EWMA. The average packet size is used to determine the packet
  * buffer size when refilling RX rings. As the entire RX ring may be refilled
  * at once, the weight is chosen so that the EWMA will be insensitive to short-
@@ -582,7 +586,7 @@ static struct sk_buff *receive_small(struct net_device *dev,
 				     struct receive_queue *rq,
 				     void *buf, void *ctx,
 				     unsigned int len,
-				     bool *xdp_xmit)
+				     unsigned int *xdp_xmit)
 {
 	struct sk_buff *skb;
 	struct bpf_prog *xdp_prog;
@@ -654,14 +658,14 @@ static struct sk_buff *receive_small(struct net_device *dev,
 				trace_xdp_exception(vi->dev, xdp_prog, act);
 				goto err_xdp;
 			}
-			*xdp_xmit = true;
+			*xdp_xmit |= VIRTIO_XDP_TX;
 			rcu_read_unlock();
 			goto xdp_xmit;
 		case XDP_REDIRECT:
 			err = xdp_do_redirect(dev, &xdp, xdp_prog);
 			if (err)
 				goto err_xdp;
-			*xdp_xmit = true;
+			*xdp_xmit |= VIRTIO_XDP_REDIR;
 			rcu_read_unlock();
 			goto xdp_xmit;
 		default:
@@ -723,7 +727,7 @@ static struct sk_buff *receive_mergeable(struct net_device *dev,
 					 void *buf,
 					 void *ctx,
 					 unsigned int len,
-					 bool *xdp_xmit)
+					 unsigned int *xdp_xmit)
 {
 	struct virtio_net_hdr_mrg_rxbuf *hdr = buf;
 	u16 num_buf = virtio16_to_cpu(vi->vdev, hdr->num_buffers);
@@ -818,7 +822,7 @@ static struct sk_buff *receive_mergeable(struct net_device *dev,
 					put_page(xdp_page);
 				goto err_xdp;
 			}
-			*xdp_xmit = true;
+			*xdp_xmit |= VIRTIO_XDP_TX;
 			if (unlikely(xdp_page != page))
 				put_page(page);
 			rcu_read_unlock();
@@ -830,7 +834,7 @@ static struct sk_buff *receive_mergeable(struct net_device *dev,
 					put_page(xdp_page);
 				goto err_xdp;
 			}
-			*xdp_xmit = true;
+			*xdp_xmit |= VIRTIO_XDP_REDIR;
 			if (unlikely(xdp_page != page))
 				put_page(page);
 			rcu_read_unlock();
@@ -939,7 +943,8 @@ static struct sk_buff *receive_mergeable(struct net_device *dev,
 }
 
 static int receive_buf(struct virtnet_info *vi, struct receive_queue *rq,
-		       void *buf, unsigned int len, void **ctx, bool *xdp_xmit)
+		       void *buf, unsigned int len, void **ctx,
+		       unsigned int *xdp_xmit)
 {
 	struct net_device *dev = vi->dev;
 	struct sk_buff *skb;
@@ -1232,7 +1237,8 @@ static void refill_work(struct work_struct *work)
 	}
 }
 
-static int virtnet_receive(struct receive_queue *rq, int budget, bool *xdp_xmit)
+static int virtnet_receive(struct receive_queue *rq, int budget,
+			   unsigned int *xdp_xmit)
 {
 	struct virtnet_info *vi = rq->vq->vdev->priv;
 	unsigned int len, received = 0, bytes = 0;
@@ -1321,7 +1327,7 @@ static int virtnet_poll(struct napi_struct *napi, int budget)
 	struct virtnet_info *vi = rq->vq->vdev->priv;
 	struct send_queue *sq;
 	unsigned int received, qp;
-	bool xdp_xmit = false;
+	unsigned int xdp_xmit = 0;
 
 	virtnet_poll_cleantx(rq);
 
@@ -1331,12 +1337,14 @@ static int virtnet_poll(struct napi_struct *napi, int budget)
 	if (received < budget)
 		virtqueue_napi_complete(napi, rq->vq, received);
 
-	if (xdp_xmit) {
+	if (xdp_xmit & VIRTIO_XDP_REDIR)
+		xdp_do_flush_map();
+
+	if (xdp_xmit & VIRTIO_XDP_TX) {
 		qp = vi->curr_queue_pairs - vi->xdp_queue_pairs +
 		     smp_processor_id();
 		sq = &vi->sq[qp];
 		virtqueue_kick(sq->vq);
-		xdp_do_flush_map();
 	}
 
 	return received;

^ permalink raw reply related	[flat|nested] 8+ messages in thread

* Re: [PATCH 2/3] i40e: split XDP_TX tail and XDP_REDIRECT map flushing
  2018-06-26 15:39 ` [PATCH 2/3] i40e: " Jesper Dangaard Brouer
@ 2018-06-26 16:37     ` Björn Töpel
  0 siblings, 0 replies; 8+ messages in thread
From: Björn Töpel @ 2018-06-26 16:37 UTC (permalink / raw)
  To: Jesper Dangaard Brouer
  Cc: Netdev, John Fastabend, jasowang, Daniel Borkmann,
	Björn Töpel, Alexei Starovoitov, intel-wired-lan

On Tue, 26 Jun 2018 at 18:08, Jesper Dangaard Brouer <brouer@redhat.com> wrote:
>
> The driver was combining the XDP_TX tail flush and XDP_REDIRECT
> map flushing (xdp_do_flush_map).  This is suboptimal; these two
> flush operations should be kept separate.
>
> It looks like the mistake was copy-pasted from ixgbe.
>
> Fixes: d9314c474d4f ("i40e: add support for XDP_REDIRECT")
> Signed-off-by: Jesper Dangaard Brouer <brouer@redhat.com>
> ---
>  drivers/net/ethernet/intel/i40e/i40e_txrx.c |   24 +++++++++++++++---------
>  1 file changed, 15 insertions(+), 9 deletions(-)
>
> diff --git a/drivers/net/ethernet/intel/i40e/i40e_txrx.c b/drivers/net/ethernet/intel/i40e/i40e_txrx.c
> index 8ffb7454e67c..c1c027743159 100644
> --- a/drivers/net/ethernet/intel/i40e/i40e_txrx.c
> +++ b/drivers/net/ethernet/intel/i40e/i40e_txrx.c
> @@ -2200,9 +2200,10 @@ static bool i40e_is_non_eop(struct i40e_ring *rx_ring,
>         return true;
>  }
>
> -#define I40E_XDP_PASS 0
> -#define I40E_XDP_CONSUMED 1
> -#define I40E_XDP_TX 2
> +#define I40E_XDP_PASS          0
> +#define I40E_XDP_CONSUMED      BIT(0)
> +#define I40E_XDP_TX            BIT(1)
> +#define I40E_XDP_REDIR         BIT(2)
>
>  static int i40e_xmit_xdp_ring(struct xdp_frame *xdpf,
>                               struct i40e_ring *xdp_ring);
> @@ -2249,7 +2250,7 @@ static struct sk_buff *i40e_run_xdp(struct i40e_ring *rx_ring,
>                 break;
>         case XDP_REDIRECT:
>                 err = xdp_do_redirect(rx_ring->netdev, xdp, xdp_prog);
> -               result = !err ? I40E_XDP_TX : I40E_XDP_CONSUMED;
> +               result = !err ? I40E_XDP_REDIR : I40E_XDP_CONSUMED;
>                 break;
>         default:
>                 bpf_warn_invalid_xdp_action(act);
> @@ -2312,7 +2313,8 @@ static int i40e_clean_rx_irq(struct i40e_ring *rx_ring, int budget)
>         unsigned int total_rx_bytes = 0, total_rx_packets = 0;
>         struct sk_buff *skb = rx_ring->skb;
>         u16 cleaned_count = I40E_DESC_UNUSED(rx_ring);
> -       bool failure = false, xdp_xmit = false;
> +       unsigned int xdp_xmit = 0;
> +       bool failure = false;
>         struct xdp_buff xdp;
>
>         xdp.rxq = &rx_ring->xdp_rxq;
> @@ -2373,8 +2375,10 @@ static int i40e_clean_rx_irq(struct i40e_ring *rx_ring, int budget)
>                 }
>
>                 if (IS_ERR(skb)) {
> -                       if (PTR_ERR(skb) == -I40E_XDP_TX) {
> -                               xdp_xmit = true;
> +                       unsigned int xdp_res = -PTR_ERR(skb);
> +
> +                       if (xdp_res & (I40E_XDP_TX | I40E_XDP_REDIR)) {
> +                               xdp_xmit |= xdp_res;
>                                 i40e_rx_buffer_flip(rx_ring, rx_buffer, size);
>                         } else {
>                                 rx_buffer->pagecnt_bias++;
> @@ -2428,12 +2432,14 @@ static int i40e_clean_rx_irq(struct i40e_ring *rx_ring, int budget)
>                 total_rx_packets++;
>         }
>
> -       if (xdp_xmit) {
> +       if (xdp_xmit & I40E_XDP_REDIR)
> +               xdp_do_flush_map();
> +
> +       if (xdp_xmit & I40E_XDP_TX) {
>                 struct i40e_ring *xdp_ring =
>                         rx_ring->vsi->xdp_rings[rx_ring->queue_index];
>
>                 i40e_xdp_ring_update_tail(xdp_ring);
> -               xdp_do_flush_map();
>         }
>
>         rx_ring->skb = skb;
>

Nice! Added intel-wired-lan to Cc.

Acked-by: Björn Töpel <bjorn.topel@intel.com>

^ permalink raw reply	[flat|nested] 8+ messages in thread

* Re: [PATCH 3/3] virtio_net: split XDP_TX kick and XDP_REDIRECT map flushing
  2018-06-26 15:39 ` [PATCH 3/3] virtio_net: split XDP_TX kick " Jesper Dangaard Brouer
@ 2018-06-27 14:56   ` Jason Wang
  0 siblings, 0 replies; 8+ messages in thread
From: Jason Wang @ 2018-06-27 14:56 UTC (permalink / raw)
  To: Jesper Dangaard Brouer, netdev
  Cc: John Fastabend, Daniel Borkmann, Björn Töpel,
	Alexei Starovoitov



On 2018-06-26 23:39, Jesper Dangaard Brouer wrote:
> The driver was combining XDP_TX virtqueue_kick and XDP_REDIRECT
> map flushing (xdp_do_flush_map).  This is suboptimal; these two
> flush operations should be kept separate.
>
> The suboptimal behavior was introduced in commit 9267c430c6b6
> ("virtio-net: add missing virtqueue kick when flushing packets").
>
> Fixes: 9267c430c6b6 ("virtio-net: add missing virtqueue kick when flushing packets")
> Signed-off-by: Jesper Dangaard Brouer <brouer@redhat.com>
> ---
>   drivers/net/virtio_net.c |   30 +++++++++++++++++++-----------
>   1 file changed, 19 insertions(+), 11 deletions(-)
>
> diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
> index 1619ee3070b6..ae47ecf80c2d 100644
> --- a/drivers/net/virtio_net.c
> +++ b/drivers/net/virtio_net.c
> @@ -53,6 +53,10 @@ module_param(napi_tx, bool, 0644);
>   /* Amount of XDP headroom to prepend to packets for use by xdp_adjust_head */
>   #define VIRTIO_XDP_HEADROOM 256
>   
> +/* Separating two types of XDP xmit */
> +#define VIRTIO_XDP_TX		BIT(0)
> +#define VIRTIO_XDP_REDIR	BIT(1)
> +
>   /* RX packet size EWMA. The average packet size is used to determine the packet
>    * buffer size when refilling RX rings. As the entire RX ring may be refilled
>    * at once, the weight is chosen so that the EWMA will be insensitive to short-
> @@ -582,7 +586,7 @@ static struct sk_buff *receive_small(struct net_device *dev,
>   				     struct receive_queue *rq,
>   				     void *buf, void *ctx,
>   				     unsigned int len,
> -				     bool *xdp_xmit)
> +				     unsigned int *xdp_xmit)
>   {
>   	struct sk_buff *skb;
>   	struct bpf_prog *xdp_prog;
> @@ -654,14 +658,14 @@ static struct sk_buff *receive_small(struct net_device *dev,
>   				trace_xdp_exception(vi->dev, xdp_prog, act);
>   				goto err_xdp;
>   			}
> -			*xdp_xmit = true;
> +			*xdp_xmit |= VIRTIO_XDP_TX;
>   			rcu_read_unlock();
>   			goto xdp_xmit;
>   		case XDP_REDIRECT:
>   			err = xdp_do_redirect(dev, &xdp, xdp_prog);
>   			if (err)
>   				goto err_xdp;
> -			*xdp_xmit = true;
> +			*xdp_xmit |= VIRTIO_XDP_REDIR;
>   			rcu_read_unlock();
>   			goto xdp_xmit;
>   		default:
> @@ -723,7 +727,7 @@ static struct sk_buff *receive_mergeable(struct net_device *dev,
>   					 void *buf,
>   					 void *ctx,
>   					 unsigned int len,
> -					 bool *xdp_xmit)
> +					 unsigned int *xdp_xmit)
>   {
>   	struct virtio_net_hdr_mrg_rxbuf *hdr = buf;
>   	u16 num_buf = virtio16_to_cpu(vi->vdev, hdr->num_buffers);
> @@ -818,7 +822,7 @@ static struct sk_buff *receive_mergeable(struct net_device *dev,
>   					put_page(xdp_page);
>   				goto err_xdp;
>   			}
> -			*xdp_xmit = true;
> +			*xdp_xmit |= VIRTIO_XDP_TX;
>   			if (unlikely(xdp_page != page))
>   				put_page(page);
>   			rcu_read_unlock();
> @@ -830,7 +834,7 @@ static struct sk_buff *receive_mergeable(struct net_device *dev,
>   					put_page(xdp_page);
>   				goto err_xdp;
>   			}
> -			*xdp_xmit = true;
> +			*xdp_xmit |= VIRTIO_XDP_REDIR;
>   			if (unlikely(xdp_page != page))
>   				put_page(page);
>   			rcu_read_unlock();
> @@ -939,7 +943,8 @@ static struct sk_buff *receive_mergeable(struct net_device *dev,
>   }
>   
>   static int receive_buf(struct virtnet_info *vi, struct receive_queue *rq,
> -		       void *buf, unsigned int len, void **ctx, bool *xdp_xmit)
> +		       void *buf, unsigned int len, void **ctx,
> +		       unsigned int *xdp_xmit)
>   {
>   	struct net_device *dev = vi->dev;
>   	struct sk_buff *skb;
> @@ -1232,7 +1237,8 @@ static void refill_work(struct work_struct *work)
>   	}
>   }
>   
> -static int virtnet_receive(struct receive_queue *rq, int budget, bool *xdp_xmit)
> +static int virtnet_receive(struct receive_queue *rq, int budget,
> +			   unsigned int *xdp_xmit)
>   {
>   	struct virtnet_info *vi = rq->vq->vdev->priv;
>   	unsigned int len, received = 0, bytes = 0;
> @@ -1321,7 +1327,7 @@ static int virtnet_poll(struct napi_struct *napi, int budget)
>   	struct virtnet_info *vi = rq->vq->vdev->priv;
>   	struct send_queue *sq;
>   	unsigned int received, qp;
> -	bool xdp_xmit = false;
> +	unsigned int xdp_xmit = 0;
>   
>   	virtnet_poll_cleantx(rq);
>   
> @@ -1331,12 +1337,14 @@ static int virtnet_poll(struct napi_struct *napi, int budget)
>   	if (received < budget)
>   		virtqueue_napi_complete(napi, rq->vq, received);
>   
> -	if (xdp_xmit) {
> +	if (xdp_xmit & VIRTIO_XDP_REDIR)
> +		xdp_do_flush_map();
> +
> +	if (xdp_xmit & VIRTIO_XDP_TX) {
>   		qp = vi->curr_queue_pairs - vi->xdp_queue_pairs +
>   		     smp_processor_id();
>   		sq = &vi->sq[qp];
>   		virtqueue_kick(sq->vq);
> -		xdp_do_flush_map();
>   	}
>   
>   	return received;
>

Acked-by: Jason Wang <jasowang@redhat.com>

Thanks

^ permalink raw reply	[flat|nested] 8+ messages in thread

* Re: [PATCH 0/3] xdp: don't mix XDP_TX and XDP_REDIRECT flush ops
  2018-06-26 15:39 [PATCH 0/3] xdp: don't mix XDP_TX and XDP_REDIRECT flush ops Jesper Dangaard Brouer
                   ` (2 preceding siblings ...)
  2018-06-26 15:39 ` [PATCH 3/3] virtio_net: split XDP_TX kick " Jesper Dangaard Brouer
@ 2018-06-28  5:29 ` David Miller
  3 siblings, 0 replies; 8+ messages in thread
From: David Miller @ 2018-06-28  5:29 UTC (permalink / raw)
  To: brouer
  Cc: netdev, john.fastabend, jasowang, borkmann, bjorn.topel,
	alexei.starovoitov

From: Jesper Dangaard Brouer <brouer@redhat.com>
Date: Tue, 26 Jun 2018 17:39:43 +0200

> Fix driver logic that are combining XDP_TX flush and XDP_REDIRECT map
> flushing.  These are two different XDP xmit modes, and it is clearly
> wrong to invoke both types of flush operations when only one of the
> XDP xmit modes is used.

Series applied and queued up for -stable, thanks Jesper.

^ permalink raw reply	[flat|nested] 8+ messages in thread
