bpf.vger.kernel.org archive mirror
* [PATCH net-next v1 00/19] virtio-net: support AF_XDP zero copy
@ 2023-10-16 12:00 Xuan Zhuo
  2023-10-16 12:00 ` [PATCH net-next v1 01/19] virtio_net: rename free_old_xmit_skbs to free_old_xmit Xuan Zhuo
                   ` (19 more replies)
  0 siblings, 20 replies; 66+ messages in thread
From: Xuan Zhuo @ 2023-10-16 12:00 UTC (permalink / raw)
  To: netdev
  Cc: David S. Miller, Eric Dumazet, Jakub Kicinski, Paolo Abeni,
	Michael S. Tsirkin, Jason Wang, Xuan Zhuo, Alexei Starovoitov,
	Daniel Borkmann, Jesper Dangaard Brouer, John Fastabend,
	virtualization, bpf

## AF_XDP

AF_XDP (the XDP socket) is an excellent kernel-bypass networking framework. The
zero-copy feature of xsk (XDP socket) must be supported by the driver, and its
performance is very good. mlx5 and Intel ixgbe already support this feature.
This patch set allows virtio-net to support xsk's zero-copy xmit feature.

At present, we have completed some preparation:

1. vq-reset (virtio spec and kernel code)
2. virtio-core premapped dma
3. virtio-net xdp refactor

So it is time for virtio-net to complete its support for XDP socket zero-copy.

Virtio-net cannot increase the number of queues at will, so xsk shares queues
with the kernel.

On the other hand, virtio-net does not support generating an interrupt from the
driver manually, so we use a few tricks to wake up TX processing: if TX NAPI
last ran on a different CPU, we use an IPI to wake up NAPI on that remote CPU;
if it last ran on the local CPU, we wake up NAPI directly.

This patch set includes some refactoring of virtio-net to support AF_XDP.

## performance

ENV: QEMU with vhost-user (polling mode).

Sockperf: https://github.com/Mellanox/sockperf
I use this tool to send UDP packets via kernel syscalls.

xmit command: sockperf tp -i 10.0.3.1 -t 1000

I wrote a tool that sends or receives UDP packets via AF_XDP.

                  | Guest APP CPU | Guest Softirq CPU | UDP PPS
------------------|---------------|-------------------|-----------
xmit by syscall   |   100%        |                   |   676,915
xmit by xsk       |   59.1%       |   100%            | 5,447,168
recv by syscall   |   60%         |   100%            |   932,288
recv by xsk       |   35.7%       |   100%            | 3,343,168

## maintain

I am currently a reviewer for virtio-net, and I commit to maintaining the
AF_XDP support in virtio-net.

Please review.

Thanks.

v1:
    1. remove two virtio commits. Push this patchset to net-next
    2. squash "virtio_net: virtnet_poll_tx support rescheduled" to xsk: support tx
    3. fix some warnings

Xuan Zhuo (19):
  virtio_net: rename free_old_xmit_skbs to free_old_xmit
  virtio_net: unify the code for recycling the xmit ptr
  virtio_net: independent directory
  virtio_net: move to virtio_net.h
  virtio_net: add prefix virtnet to all struct/api inside virtio_net.h
  virtio_net: separate virtnet_rx_resize()
  virtio_net: separate virtnet_tx_resize()
  virtio_net: sq support premapped mode
  virtio_net: xsk: bind/unbind xsk
  virtio_net: xsk: prevent disable tx napi
  virtio_net: xsk: tx: support tx
  virtio_net: xsk: tx: support wakeup
  virtio_net: xsk: tx: virtnet_free_old_xmit() distinguishes xsk buffer
  virtio_net: xsk: tx: virtnet_sq_free_unused_buf() check xsk buffer
  virtio_net: xsk: rx: introduce add_recvbuf_xsk()
  virtio_net: xsk: rx: introduce receive_xsk() to recv xsk buffer
  virtio_net: xsk: rx: virtnet_rq_free_unused_buf() check xsk buffer
  virtio_net: update tx timeout record
  virtio_net: xdp_features add NETDEV_XDP_ACT_XSK_ZEROCOPY

 MAINTAINERS                                 |   2 +-
 drivers/net/Kconfig                         |   8 +-
 drivers/net/Makefile                        |   2 +-
 drivers/net/virtio/Kconfig                  |  13 +
 drivers/net/virtio/Makefile                 |   8 +
 drivers/net/{virtio_net.c => virtio/main.c} | 652 +++++++++-----------
 drivers/net/virtio/virtio_net.h             | 359 +++++++++++
 drivers/net/virtio/xsk.c                    | 545 ++++++++++++++++
 drivers/net/virtio/xsk.h                    |  32 +
 9 files changed, 1247 insertions(+), 374 deletions(-)
 create mode 100644 drivers/net/virtio/Kconfig
 create mode 100644 drivers/net/virtio/Makefile
 rename drivers/net/{virtio_net.c => virtio/main.c} (91%)
 create mode 100644 drivers/net/virtio/virtio_net.h
 create mode 100644 drivers/net/virtio/xsk.c
 create mode 100644 drivers/net/virtio/xsk.h

--
2.32.0.3.g01195cf9f



* [PATCH net-next v1 01/19] virtio_net: rename free_old_xmit_skbs to free_old_xmit
  2023-10-16 12:00 [PATCH net-next v1 00/19] virtio-net: support AF_XDP zero copy Xuan Zhuo
@ 2023-10-16 12:00 ` Xuan Zhuo
  2023-10-19  4:17   ` Jason Wang
  2023-10-16 12:00 ` [PATCH net-next v1 02/19] virtio_net: unify the code for recycling the xmit ptr Xuan Zhuo
                   ` (18 subsequent siblings)
  19 siblings, 1 reply; 66+ messages in thread
From: Xuan Zhuo @ 2023-10-16 12:00 UTC (permalink / raw)
  To: netdev
  Cc: David S. Miller, Eric Dumazet, Jakub Kicinski, Paolo Abeni,
	Michael S. Tsirkin, Jason Wang, Xuan Zhuo, Alexei Starovoitov,
	Daniel Borkmann, Jesper Dangaard Brouer, John Fastabend,
	virtualization, bpf

free_old_xmit_skbs() deals not only with skbs but also with XDP frames and,
in subsequent patches, xsk buffers, so rename the function to free_old_xmit().

Signed-off-by: Xuan Zhuo <xuanzhuo@linux.alibaba.com>
---
 drivers/net/virtio_net.c | 10 +++++-----
 1 file changed, 5 insertions(+), 5 deletions(-)

diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
index fe7f314d65c9..3d87386d8220 100644
--- a/drivers/net/virtio_net.c
+++ b/drivers/net/virtio_net.c
@@ -744,7 +744,7 @@ static void virtnet_rq_set_premapped(struct virtnet_info *vi)
 	}
 }
 
-static void free_old_xmit_skbs(struct send_queue *sq, bool in_napi)
+static void free_old_xmit(struct send_queue *sq, bool in_napi)
 {
 	unsigned int len;
 	unsigned int packets = 0;
@@ -816,7 +816,7 @@ static void check_sq_full_and_disable(struct virtnet_info *vi,
 				virtqueue_napi_schedule(&sq->napi, sq->vq);
 		} else if (unlikely(!virtqueue_enable_cb_delayed(sq->vq))) {
 			/* More just got used, free them then recheck. */
-			free_old_xmit_skbs(sq, false);
+			free_old_xmit(sq, false);
 			if (sq->vq->num_free >= 2+MAX_SKB_FRAGS) {
 				netif_start_subqueue(dev, qnum);
 				virtqueue_disable_cb(sq->vq);
@@ -2124,7 +2124,7 @@ static void virtnet_poll_cleantx(struct receive_queue *rq)
 
 		do {
 			virtqueue_disable_cb(sq->vq);
-			free_old_xmit_skbs(sq, true);
+			free_old_xmit(sq, true);
 		} while (unlikely(!virtqueue_enable_cb_delayed(sq->vq)));
 
 		if (sq->vq->num_free >= 2 + MAX_SKB_FRAGS)
@@ -2246,7 +2246,7 @@ static int virtnet_poll_tx(struct napi_struct *napi, int budget)
 	txq = netdev_get_tx_queue(vi->dev, index);
 	__netif_tx_lock(txq, raw_smp_processor_id());
 	virtqueue_disable_cb(sq->vq);
-	free_old_xmit_skbs(sq, true);
+	free_old_xmit(sq, true);
 
 	if (sq->vq->num_free >= 2 + MAX_SKB_FRAGS)
 		netif_tx_wake_queue(txq);
@@ -2336,7 +2336,7 @@ static netdev_tx_t start_xmit(struct sk_buff *skb, struct net_device *dev)
 		if (use_napi)
 			virtqueue_disable_cb(sq->vq);
 
-		free_old_xmit_skbs(sq, false);
+		free_old_xmit(sq, false);
 
 	} while (use_napi && kick &&
 	       unlikely(!virtqueue_enable_cb_delayed(sq->vq)));
-- 
2.32.0.3.g01195cf9f



* [PATCH net-next v1 02/19] virtio_net: unify the code for recycling the xmit ptr
  2023-10-16 12:00 [PATCH net-next v1 00/19] virtio-net: support AF_XDP zero copy Xuan Zhuo
  2023-10-16 12:00 ` [PATCH net-next v1 01/19] virtio_net: rename free_old_xmit_skbs to free_old_xmit Xuan Zhuo
@ 2023-10-16 12:00 ` Xuan Zhuo
  2023-10-19  4:23   ` Jason Wang
  2023-10-16 12:00 ` [PATCH net-next v1 03/19] virtio_net: independent directory Xuan Zhuo
                   ` (17 subsequent siblings)
  19 siblings, 1 reply; 66+ messages in thread
From: Xuan Zhuo @ 2023-10-16 12:00 UTC (permalink / raw)
  To: netdev
  Cc: David S. Miller, Eric Dumazet, Jakub Kicinski, Paolo Abeni,
	Michael S. Tsirkin, Jason Wang, Xuan Zhuo, Alexei Starovoitov,
	Daniel Borkmann, Jesper Dangaard Brouer, John Fastabend,
	virtualization, bpf

There are two nearly identical, independent implementations of this logic.
That is inconvenient for the subsequent addition of new buffer types, so
extract the code into a single function and call it uniformly to recycle
old xmit pointers.

Signed-off-by: Xuan Zhuo <xuanzhuo@linux.alibaba.com>
---
 drivers/net/virtio_net.c | 76 +++++++++++++++++-----------------------
 1 file changed, 33 insertions(+), 43 deletions(-)

diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
index 3d87386d8220..6cf77b6acdab 100644
--- a/drivers/net/virtio_net.c
+++ b/drivers/net/virtio_net.c
@@ -352,6 +352,30 @@ static struct xdp_frame *ptr_to_xdp(void *ptr)
 	return (struct xdp_frame *)((unsigned long)ptr & ~VIRTIO_XDP_FLAG);
 }
 
+static void __free_old_xmit(struct send_queue *sq, bool in_napi,
+			    struct virtnet_sq_stats *stats)
+{
+	unsigned int len;
+	void *ptr;
+
+	while ((ptr = virtqueue_get_buf(sq->vq, &len)) != NULL) {
+		if (!is_xdp_frame(ptr)) {
+			struct sk_buff *skb = ptr;
+
+			pr_debug("Sent skb %p\n", skb);
+
+			stats->bytes += skb->len;
+			napi_consume_skb(skb, in_napi);
+		} else {
+			struct xdp_frame *frame = ptr_to_xdp(ptr);
+
+			stats->bytes += xdp_get_frame_len(frame);
+			xdp_return_frame(frame);
+		}
+		stats->packets++;
+	}
+}
+
 /* Converting between virtqueue no. and kernel tx/rx queue no.
  * 0:rx0 1:tx0 2:rx1 3:tx1 ... 2N:rxN 2N+1:txN 2N+2:cvq
  */
@@ -746,37 +770,19 @@ static void virtnet_rq_set_premapped(struct virtnet_info *vi)
 
 static void free_old_xmit(struct send_queue *sq, bool in_napi)
 {
-	unsigned int len;
-	unsigned int packets = 0;
-	unsigned int bytes = 0;
-	void *ptr;
+	struct virtnet_sq_stats stats = {};
 
-	while ((ptr = virtqueue_get_buf(sq->vq, &len)) != NULL) {
-		if (likely(!is_xdp_frame(ptr))) {
-			struct sk_buff *skb = ptr;
-
-			pr_debug("Sent skb %p\n", skb);
-
-			bytes += skb->len;
-			napi_consume_skb(skb, in_napi);
-		} else {
-			struct xdp_frame *frame = ptr_to_xdp(ptr);
-
-			bytes += xdp_get_frame_len(frame);
-			xdp_return_frame(frame);
-		}
-		packets++;
-	}
+	__free_old_xmit(sq, in_napi, &stats);
 
 	/* Avoid overhead when no packets have been processed
 	 * happens when called speculatively from start_xmit.
 	 */
-	if (!packets)
+	if (!stats.packets)
 		return;
 
 	u64_stats_update_begin(&sq->stats.syncp);
-	sq->stats.bytes += bytes;
-	sq->stats.packets += packets;
+	sq->stats.bytes += stats.bytes;
+	sq->stats.packets += stats.packets;
 	u64_stats_update_end(&sq->stats.syncp);
 }
 
@@ -915,15 +921,12 @@ static int virtnet_xdp_xmit(struct net_device *dev,
 			    int n, struct xdp_frame **frames, u32 flags)
 {
 	struct virtnet_info *vi = netdev_priv(dev);
+	struct virtnet_sq_stats stats = {};
 	struct receive_queue *rq = vi->rq;
 	struct bpf_prog *xdp_prog;
 	struct send_queue *sq;
-	unsigned int len;
-	int packets = 0;
-	int bytes = 0;
 	int nxmit = 0;
 	int kicks = 0;
-	void *ptr;
 	int ret;
 	int i;
 
@@ -942,20 +945,7 @@ static int virtnet_xdp_xmit(struct net_device *dev,
 	}
 
 	/* Free up any pending old buffers before queueing new ones. */
-	while ((ptr = virtqueue_get_buf(sq->vq, &len)) != NULL) {
-		if (likely(is_xdp_frame(ptr))) {
-			struct xdp_frame *frame = ptr_to_xdp(ptr);
-
-			bytes += xdp_get_frame_len(frame);
-			xdp_return_frame(frame);
-		} else {
-			struct sk_buff *skb = ptr;
-
-			bytes += skb->len;
-			napi_consume_skb(skb, false);
-		}
-		packets++;
-	}
+	__free_old_xmit(sq, false, &stats);
 
 	for (i = 0; i < n; i++) {
 		struct xdp_frame *xdpf = frames[i];
@@ -975,8 +965,8 @@ static int virtnet_xdp_xmit(struct net_device *dev,
 	}
 out:
 	u64_stats_update_begin(&sq->stats.syncp);
-	sq->stats.bytes += bytes;
-	sq->stats.packets += packets;
+	sq->stats.bytes += stats.bytes;
+	sq->stats.packets += stats.packets;
 	sq->stats.xdp_tx += n;
 	sq->stats.xdp_tx_drops += n - nxmit;
 	sq->stats.kicks += kicks;
-- 
2.32.0.3.g01195cf9f



* [PATCH net-next v1 03/19] virtio_net: independent directory
  2023-10-16 12:00 [PATCH net-next v1 00/19] virtio-net: support AF_XDP zero copy Xuan Zhuo
  2023-10-16 12:00 ` [PATCH net-next v1 01/19] virtio_net: rename free_old_xmit_skbs to free_old_xmit Xuan Zhuo
  2023-10-16 12:00 ` [PATCH net-next v1 02/19] virtio_net: unify the code for recycling the xmit ptr Xuan Zhuo
@ 2023-10-16 12:00 ` Xuan Zhuo
  2023-10-19  6:10   ` Jason Wang
  2023-10-16 12:00 ` [PATCH net-next v1 04/19] virtio_net: move to virtio_net.h Xuan Zhuo
                   ` (16 subsequent siblings)
  19 siblings, 1 reply; 66+ messages in thread
From: Xuan Zhuo @ 2023-10-16 12:00 UTC (permalink / raw)
  To: netdev
  Cc: David S. Miller, Eric Dumazet, Jakub Kicinski, Paolo Abeni,
	Michael S. Tsirkin, Jason Wang, Xuan Zhuo, Alexei Starovoitov,
	Daniel Borkmann, Jesper Dangaard Brouer, John Fastabend,
	virtualization, bpf

Create a separate directory for virtio-net. AF_XDP support will be added later
in a separate xsk.c file, so virtio-net should have its own directory.

Signed-off-by: Xuan Zhuo <xuanzhuo@linux.alibaba.com>
---
 MAINTAINERS                                 |  2 +-
 drivers/net/Kconfig                         |  8 +-------
 drivers/net/Makefile                        |  2 +-
 drivers/net/virtio/Kconfig                  | 13 +++++++++++++
 drivers/net/virtio/Makefile                 |  8 ++++++++
 drivers/net/{virtio_net.c => virtio/main.c} |  0
 6 files changed, 24 insertions(+), 9 deletions(-)
 create mode 100644 drivers/net/virtio/Kconfig
 create mode 100644 drivers/net/virtio/Makefile
 rename drivers/net/{virtio_net.c => virtio/main.c} (100%)

diff --git a/MAINTAINERS b/MAINTAINERS
index 9c186c214c54..e4fbcbc100e3 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -22768,7 +22768,7 @@ F:	Documentation/devicetree/bindings/virtio/
 F:	Documentation/driver-api/virtio/
 F:	drivers/block/virtio_blk.c
 F:	drivers/crypto/virtio/
-F:	drivers/net/virtio_net.c
+F:	drivers/net/virtio/
 F:	drivers/vdpa/
 F:	drivers/virtio/
 F:	include/linux/vdpa.h
diff --git a/drivers/net/Kconfig b/drivers/net/Kconfig
index 44eeb5d61ba9..54ee6fa4f4a6 100644
--- a/drivers/net/Kconfig
+++ b/drivers/net/Kconfig
@@ -430,13 +430,7 @@ config VETH
 	  When one end receives the packet it appears on its pair and vice
 	  versa.
 
-config VIRTIO_NET
-	tristate "Virtio network driver"
-	depends on VIRTIO
-	select NET_FAILOVER
-	help
-	  This is the virtual network driver for virtio.  It can be used with
-	  QEMU based VMMs (like KVM or Xen).  Say Y or M.
+source "drivers/net/virtio/Kconfig"
 
 config NLMON
 	tristate "Virtual netlink monitoring device"
diff --git a/drivers/net/Makefile b/drivers/net/Makefile
index e26f98f897c5..47537dd0f120 100644
--- a/drivers/net/Makefile
+++ b/drivers/net/Makefile
@@ -31,7 +31,7 @@ obj-$(CONFIG_NET_TEAM) += team/
 obj-$(CONFIG_TUN) += tun.o
 obj-$(CONFIG_TAP) += tap.o
 obj-$(CONFIG_VETH) += veth.o
-obj-$(CONFIG_VIRTIO_NET) += virtio_net.o
+obj-$(CONFIG_VIRTIO_NET) += virtio/
 obj-$(CONFIG_VXLAN) += vxlan/
 obj-$(CONFIG_GENEVE) += geneve.o
 obj-$(CONFIG_BAREUDP) += bareudp.o
diff --git a/drivers/net/virtio/Kconfig b/drivers/net/virtio/Kconfig
new file mode 100644
index 000000000000..d8ccb3ac49df
--- /dev/null
+++ b/drivers/net/virtio/Kconfig
@@ -0,0 +1,13 @@
+# SPDX-License-Identifier: GPL-2.0-only
+#
+# virtio-net device configuration
+#
+config VIRTIO_NET
+	tristate "Virtio network driver"
+	depends on VIRTIO
+	select NET_FAILOVER
+	help
+	  This is the virtual network driver for virtio.  It can be used with
+	  QEMU based VMMs (like KVM or Xen).
+
+	  Say Y or M.
diff --git a/drivers/net/virtio/Makefile b/drivers/net/virtio/Makefile
new file mode 100644
index 000000000000..15ed7c97fd4f
--- /dev/null
+++ b/drivers/net/virtio/Makefile
@@ -0,0 +1,8 @@
+# SPDX-License-Identifier: GPL-2.0
+#
+# Makefile for the virtio network device drivers.
+#
+
+obj-$(CONFIG_VIRTIO_NET) += virtio_net.o
+
+virtio_net-y := main.o
diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio/main.c
similarity index 100%
rename from drivers/net/virtio_net.c
rename to drivers/net/virtio/main.c
-- 
2.32.0.3.g01195cf9f



* [PATCH net-next v1 04/19] virtio_net: move to virtio_net.h
  2023-10-16 12:00 [PATCH net-next v1 00/19] virtio-net: support AF_XDP zero copy Xuan Zhuo
                   ` (2 preceding siblings ...)
  2023-10-16 12:00 ` [PATCH net-next v1 03/19] virtio_net: independent directory Xuan Zhuo
@ 2023-10-16 12:00 ` Xuan Zhuo
  2023-10-19  6:12   ` Jason Wang
  2023-10-16 12:00 ` [PATCH net-next v1 05/19] virtio_net: add prefix virtnet to all struct/api inside virtio_net.h Xuan Zhuo
                   ` (15 subsequent siblings)
  19 siblings, 1 reply; 66+ messages in thread
From: Xuan Zhuo @ 2023-10-16 12:00 UTC (permalink / raw)
  To: netdev
  Cc: David S. Miller, Eric Dumazet, Jakub Kicinski, Paolo Abeni,
	Michael S. Tsirkin, Jason Wang, Xuan Zhuo, Alexei Starovoitov,
	Daniel Borkmann, Jesper Dangaard Brouer, John Fastabend,
	virtualization, bpf

Move some structure definitions and inline functions into the
virtio_net.h file.

Signed-off-by: Xuan Zhuo <xuanzhuo@linux.alibaba.com>
---
 drivers/net/virtio/main.c       | 252 +------------------------------
 drivers/net/virtio/virtio_net.h | 256 ++++++++++++++++++++++++++++++++
 2 files changed, 258 insertions(+), 250 deletions(-)
 create mode 100644 drivers/net/virtio/virtio_net.h

diff --git a/drivers/net/virtio/main.c b/drivers/net/virtio/main.c
index 6cf77b6acdab..d8b6c0d86f29 100644
--- a/drivers/net/virtio/main.c
+++ b/drivers/net/virtio/main.c
@@ -6,7 +6,6 @@
 //#define DEBUG
 #include <linux/netdevice.h>
 #include <linux/etherdevice.h>
-#include <linux/ethtool.h>
 #include <linux/module.h>
 #include <linux/virtio.h>
 #include <linux/virtio_net.h>
@@ -16,7 +15,6 @@
 #include <linux/if_vlan.h>
 #include <linux/slab.h>
 #include <linux/cpu.h>
-#include <linux/average.h>
 #include <linux/filter.h>
 #include <linux/kernel.h>
 #include <net/route.h>
@@ -24,6 +22,8 @@
 #include <net/net_failover.h>
 #include <net/netdev_rx_queue.h>
 
+#include "virtio_net.h"
+
 static int napi_weight = NAPI_POLL_WEIGHT;
 module_param(napi_weight, int, 0444);
 
@@ -45,15 +45,6 @@ module_param(napi_tx, bool, 0644);
 #define VIRTIO_XDP_TX		BIT(0)
 #define VIRTIO_XDP_REDIR	BIT(1)
 
-#define VIRTIO_XDP_FLAG	BIT(0)
-
-/* RX packet size EWMA. The average packet size is used to determine the packet
- * buffer size when refilling RX rings. As the entire RX ring may be refilled
- * at once, the weight is chosen so that the EWMA will be insensitive to short-
- * term, transient changes in packet size.
- */
-DECLARE_EWMA(pkt_len, 0, 64)
-
 #define VIRTNET_DRIVER_VERSION "1.0.0"
 
 static const unsigned long guest_offloads[] = {
@@ -74,36 +65,6 @@ static const unsigned long guest_offloads[] = {
 				(1ULL << VIRTIO_NET_F_GUEST_USO4) | \
 				(1ULL << VIRTIO_NET_F_GUEST_USO6))
 
-struct virtnet_stat_desc {
-	char desc[ETH_GSTRING_LEN];
-	size_t offset;
-};
-
-struct virtnet_sq_stats {
-	struct u64_stats_sync syncp;
-	u64 packets;
-	u64 bytes;
-	u64 xdp_tx;
-	u64 xdp_tx_drops;
-	u64 kicks;
-	u64 tx_timeouts;
-};
-
-struct virtnet_rq_stats {
-	struct u64_stats_sync syncp;
-	u64 packets;
-	u64 bytes;
-	u64 drops;
-	u64 xdp_packets;
-	u64 xdp_tx;
-	u64 xdp_redirects;
-	u64 xdp_drops;
-	u64 kicks;
-};
-
-#define VIRTNET_SQ_STAT(m)	offsetof(struct virtnet_sq_stats, m)
-#define VIRTNET_RQ_STAT(m)	offsetof(struct virtnet_rq_stats, m)
-
 static const struct virtnet_stat_desc virtnet_sq_stats_desc[] = {
 	{ "packets",		VIRTNET_SQ_STAT(packets) },
 	{ "bytes",		VIRTNET_SQ_STAT(bytes) },
@@ -127,80 +88,6 @@ static const struct virtnet_stat_desc virtnet_rq_stats_desc[] = {
 #define VIRTNET_SQ_STATS_LEN	ARRAY_SIZE(virtnet_sq_stats_desc)
 #define VIRTNET_RQ_STATS_LEN	ARRAY_SIZE(virtnet_rq_stats_desc)
 
-struct virtnet_interrupt_coalesce {
-	u32 max_packets;
-	u32 max_usecs;
-};
-
-/* The dma information of pages allocated at a time. */
-struct virtnet_rq_dma {
-	dma_addr_t addr;
-	u32 ref;
-	u16 len;
-	u16 need_sync;
-};
-
-/* Internal representation of a send virtqueue */
-struct send_queue {
-	/* Virtqueue associated with this send _queue */
-	struct virtqueue *vq;
-
-	/* TX: fragments + linear part + virtio header */
-	struct scatterlist sg[MAX_SKB_FRAGS + 2];
-
-	/* Name of the send queue: output.$index */
-	char name[16];
-
-	struct virtnet_sq_stats stats;
-
-	struct virtnet_interrupt_coalesce intr_coal;
-
-	struct napi_struct napi;
-
-	/* Record whether sq is in reset state. */
-	bool reset;
-};
-
-/* Internal representation of a receive virtqueue */
-struct receive_queue {
-	/* Virtqueue associated with this receive_queue */
-	struct virtqueue *vq;
-
-	struct napi_struct napi;
-
-	struct bpf_prog __rcu *xdp_prog;
-
-	struct virtnet_rq_stats stats;
-
-	struct virtnet_interrupt_coalesce intr_coal;
-
-	/* Chain pages by the private ptr. */
-	struct page *pages;
-
-	/* Average packet length for mergeable receive buffers. */
-	struct ewma_pkt_len mrg_avg_pkt_len;
-
-	/* Page frag for packet buffer allocation. */
-	struct page_frag alloc_frag;
-
-	/* RX: fragments + linear part + virtio header */
-	struct scatterlist sg[MAX_SKB_FRAGS + 2];
-
-	/* Min single buffer size for mergeable buffers case. */
-	unsigned int min_buf_len;
-
-	/* Name of this receive queue: input.$index */
-	char name[16];
-
-	struct xdp_rxq_info xdp_rxq;
-
-	/* Record the last dma info to free after new pages is allocated. */
-	struct virtnet_rq_dma *last_dma;
-
-	/* Do dma by self */
-	bool do_dma;
-};
-
 /* This structure can contain rss message with maximum settings for indirection table and keysize
  * Note, that default structure that describes RSS configuration virtio_net_rss_config
  * contains same info but can't handle table values.
@@ -234,88 +121,6 @@ struct control_buf {
 	struct virtio_net_ctrl_coal_vq coal_vq;
 };
 
-struct virtnet_info {
-	struct virtio_device *vdev;
-	struct virtqueue *cvq;
-	struct net_device *dev;
-	struct send_queue *sq;
-	struct receive_queue *rq;
-	unsigned int status;
-
-	/* Max # of queue pairs supported by the device */
-	u16 max_queue_pairs;
-
-	/* # of queue pairs currently used by the driver */
-	u16 curr_queue_pairs;
-
-	/* # of XDP queue pairs currently used by the driver */
-	u16 xdp_queue_pairs;
-
-	/* xdp_queue_pairs may be 0, when xdp is already loaded. So add this. */
-	bool xdp_enabled;
-
-	/* I like... big packets and I cannot lie! */
-	bool big_packets;
-
-	/* number of sg entries allocated for big packets */
-	unsigned int big_packets_num_skbfrags;
-
-	/* Host will merge rx buffers for big packets (shake it! shake it!) */
-	bool mergeable_rx_bufs;
-
-	/* Host supports rss and/or hash report */
-	bool has_rss;
-	bool has_rss_hash_report;
-	u8 rss_key_size;
-	u16 rss_indir_table_size;
-	u32 rss_hash_types_supported;
-	u32 rss_hash_types_saved;
-
-	/* Has control virtqueue */
-	bool has_cvq;
-
-	/* Host can handle any s/g split between our header and packet data */
-	bool any_header_sg;
-
-	/* Packet virtio header size */
-	u8 hdr_len;
-
-	/* Work struct for delayed refilling if we run low on memory. */
-	struct delayed_work refill;
-
-	/* Is delayed refill enabled? */
-	bool refill_enabled;
-
-	/* The lock to synchronize the access to refill_enabled */
-	spinlock_t refill_lock;
-
-	/* Work struct for config space updates */
-	struct work_struct config_work;
-
-	/* Does the affinity hint is set for virtqueues? */
-	bool affinity_hint_set;
-
-	/* CPU hotplug instances for online & dead */
-	struct hlist_node node;
-	struct hlist_node node_dead;
-
-	struct control_buf *ctrl;
-
-	/* Ethtool settings */
-	u8 duplex;
-	u32 speed;
-
-	/* Interrupt coalescing settings */
-	struct virtnet_interrupt_coalesce intr_coal_tx;
-	struct virtnet_interrupt_coalesce intr_coal_rx;
-
-	unsigned long guest_offloads;
-	unsigned long guest_offloads_capable;
-
-	/* failover when STANDBY feature enabled */
-	struct failover *failover;
-};
-
 struct padded_vnet_hdr {
 	struct virtio_net_hdr_v1_hash hdr;
 	/*
@@ -337,45 +142,11 @@ struct virtio_net_common_hdr {
 static void virtnet_rq_free_unused_buf(struct virtqueue *vq, void *buf);
 static void virtnet_sq_free_unused_buf(struct virtqueue *vq, void *buf);
 
-static bool is_xdp_frame(void *ptr)
-{
-	return (unsigned long)ptr & VIRTIO_XDP_FLAG;
-}
-
 static void *xdp_to_ptr(struct xdp_frame *ptr)
 {
 	return (void *)((unsigned long)ptr | VIRTIO_XDP_FLAG);
 }
 
-static struct xdp_frame *ptr_to_xdp(void *ptr)
-{
-	return (struct xdp_frame *)((unsigned long)ptr & ~VIRTIO_XDP_FLAG);
-}
-
-static void __free_old_xmit(struct send_queue *sq, bool in_napi,
-			    struct virtnet_sq_stats *stats)
-{
-	unsigned int len;
-	void *ptr;
-
-	while ((ptr = virtqueue_get_buf(sq->vq, &len)) != NULL) {
-		if (!is_xdp_frame(ptr)) {
-			struct sk_buff *skb = ptr;
-
-			pr_debug("Sent skb %p\n", skb);
-
-			stats->bytes += skb->len;
-			napi_consume_skb(skb, in_napi);
-		} else {
-			struct xdp_frame *frame = ptr_to_xdp(ptr);
-
-			stats->bytes += xdp_get_frame_len(frame);
-			xdp_return_frame(frame);
-		}
-		stats->packets++;
-	}
-}
-
 /* Converting between virtqueue no. and kernel tx/rx queue no.
  * 0:rx0 1:tx0 2:rx1 3:tx1 ... 2N:rxN 2N+1:txN 2N+2:cvq
  */
@@ -446,15 +217,6 @@ static void disable_delayed_refill(struct virtnet_info *vi)
 	spin_unlock_bh(&vi->refill_lock);
 }
 
-static void virtqueue_napi_schedule(struct napi_struct *napi,
-				    struct virtqueue *vq)
-{
-	if (napi_schedule_prep(napi)) {
-		virtqueue_disable_cb(vq);
-		__napi_schedule(napi);
-	}
-}
-
 static void virtqueue_napi_complete(struct napi_struct *napi,
 				    struct virtqueue *vq, int processed)
 {
@@ -786,16 +548,6 @@ static void free_old_xmit(struct send_queue *sq, bool in_napi)
 	u64_stats_update_end(&sq->stats.syncp);
 }
 
-static bool is_xdp_raw_buffer_queue(struct virtnet_info *vi, int q)
-{
-	if (q < (vi->curr_queue_pairs - vi->xdp_queue_pairs))
-		return false;
-	else if (q < vi->curr_queue_pairs)
-		return true;
-	else
-		return false;
-}
-
 static void check_sq_full_and_disable(struct virtnet_info *vi,
 				      struct net_device *dev,
 				      struct send_queue *sq)
diff --git a/drivers/net/virtio/virtio_net.h b/drivers/net/virtio/virtio_net.h
new file mode 100644
index 000000000000..ddaf0ecf4d9d
--- /dev/null
+++ b/drivers/net/virtio/virtio_net.h
@@ -0,0 +1,256 @@
+/* SPDX-License-Identifier: GPL-2.0-or-later */
+
+#ifndef __VIRTIO_NET_H__
+#define __VIRTIO_NET_H__
+
+#include <linux/ethtool.h>
+#include <linux/average.h>
+
+#define VIRTIO_XDP_FLAG	BIT(0)
+
+/* RX packet size EWMA. The average packet size is used to determine the packet
+ * buffer size when refilling RX rings. As the entire RX ring may be refilled
+ * at once, the weight is chosen so that the EWMA will be insensitive to short-
+ * term, transient changes in packet size.
+ */
+DECLARE_EWMA(pkt_len, 0, 64)
+
+struct virtnet_stat_desc {
+	char desc[ETH_GSTRING_LEN];
+	size_t offset;
+};
+
+struct virtnet_sq_stats {
+	struct u64_stats_sync syncp;
+	u64 packets;
+	u64 bytes;
+	u64 xdp_tx;
+	u64 xdp_tx_drops;
+	u64 kicks;
+	u64 tx_timeouts;
+};
+
+struct virtnet_rq_stats {
+	struct u64_stats_sync syncp;
+	u64 packets;
+	u64 bytes;
+	u64 drops;
+	u64 xdp_packets;
+	u64 xdp_tx;
+	u64 xdp_redirects;
+	u64 xdp_drops;
+	u64 kicks;
+};
+
+#define VIRTNET_SQ_STAT(m)	offsetof(struct virtnet_sq_stats, m)
+#define VIRTNET_RQ_STAT(m)	offsetof(struct virtnet_rq_stats, m)
+
+struct virtnet_interrupt_coalesce {
+	u32 max_packets;
+	u32 max_usecs;
+};
+
+/* The dma information of pages allocated at a time. */
+struct virtnet_rq_dma {
+	dma_addr_t addr;
+	u32 ref;
+	u16 len;
+	u16 need_sync;
+};
+
+/* Internal representation of a send virtqueue */
+struct send_queue {
+	/* Virtqueue associated with this send _queue */
+	struct virtqueue *vq;
+
+	/* TX: fragments + linear part + virtio header */
+	struct scatterlist sg[MAX_SKB_FRAGS + 2];
+
+	/* Name of the send queue: output.$index */
+	char name[16];
+
+	struct virtnet_sq_stats stats;
+
+	struct virtnet_interrupt_coalesce intr_coal;
+
+	struct napi_struct napi;
+
+	/* Record whether sq is in reset state. */
+	bool reset;
+};
+
+/* Internal representation of a receive virtqueue */
+struct receive_queue {
+	/* Virtqueue associated with this receive_queue */
+	struct virtqueue *vq;
+
+	struct napi_struct napi;
+
+	struct bpf_prog __rcu *xdp_prog;
+
+	struct virtnet_rq_stats stats;
+
+	struct virtnet_interrupt_coalesce intr_coal;
+
+	/* Chain pages by the private ptr. */
+	struct page *pages;
+
+	/* Average packet length for mergeable receive buffers. */
+	struct ewma_pkt_len mrg_avg_pkt_len;
+
+	/* Page frag for packet buffer allocation. */
+	struct page_frag alloc_frag;
+
+	/* RX: fragments + linear part + virtio header */
+	struct scatterlist sg[MAX_SKB_FRAGS + 2];
+
+	/* Min single buffer size for mergeable buffers case. */
+	unsigned int min_buf_len;
+
+	/* Name of this receive queue: input.$index */
+	char name[16];
+
+	struct xdp_rxq_info xdp_rxq;
+
+	/* Record the last dma info to free after new pages is allocated. */
+	struct virtnet_rq_dma *last_dma;
+
+	/* Do dma by self */
+	bool do_dma;
+};
+
+struct virtnet_info {
+	struct virtio_device *vdev;
+	struct virtqueue *cvq;
+	struct net_device *dev;
+	struct send_queue *sq;
+	struct receive_queue *rq;
+	unsigned int status;
+
+	/* Max # of queue pairs supported by the device */
+	u16 max_queue_pairs;
+
+	/* # of queue pairs currently used by the driver */
+	u16 curr_queue_pairs;
+
+	/* # of XDP queue pairs currently used by the driver */
+	u16 xdp_queue_pairs;
+
+	/* xdp_queue_pairs may be 0, when xdp is already loaded. So add this. */
+	bool xdp_enabled;
+
+	/* I like... big packets and I cannot lie! */
+	bool big_packets;
+
+	/* number of sg entries allocated for big packets */
+	unsigned int big_packets_num_skbfrags;
+
+	/* Host will merge rx buffers for big packets (shake it! shake it!) */
+	bool mergeable_rx_bufs;
+
+	/* Host supports rss and/or hash report */
+	bool has_rss;
+	bool has_rss_hash_report;
+	u8 rss_key_size;
+	u16 rss_indir_table_size;
+	u32 rss_hash_types_supported;
+	u32 rss_hash_types_saved;
+
+	/* Has control virtqueue */
+	bool has_cvq;
+
+	/* Host can handle any s/g split between our header and packet data */
+	bool any_header_sg;
+
+	/* Packet virtio header size */
+	u8 hdr_len;
+
+	/* Work struct for delayed refilling if we run low on memory. */
+	struct delayed_work refill;
+
+	/* Is delayed refill enabled? */
+	bool refill_enabled;
+
+	/* The lock to synchronize the access to refill_enabled */
+	spinlock_t refill_lock;
+
+	/* Work struct for config space updates */
+	struct work_struct config_work;
+
+	/* Does the affinity hint is set for virtqueues? */
+	bool affinity_hint_set;
+
+	/* CPU hotplug instances for online & dead */
+	struct hlist_node node;
+	struct hlist_node node_dead;
+
+	struct control_buf *ctrl;
+
+	/* Ethtool settings */
+	u8 duplex;
+	u32 speed;
+
+	/* Interrupt coalescing settings */
+	struct virtnet_interrupt_coalesce intr_coal_tx;
+	struct virtnet_interrupt_coalesce intr_coal_rx;
+
+	unsigned long guest_offloads;
+	unsigned long guest_offloads_capable;
+
+	/* failover when STANDBY feature enabled */
+	struct failover *failover;
+};
+
+static inline bool is_xdp_frame(void *ptr)
+{
+	return (unsigned long)ptr & VIRTIO_XDP_FLAG;
+}
+
+static inline struct xdp_frame *ptr_to_xdp(void *ptr)
+{
+	return (struct xdp_frame *)((unsigned long)ptr & ~VIRTIO_XDP_FLAG);
+}
+
+static inline void __free_old_xmit(struct send_queue *sq, bool in_napi,
+				   struct virtnet_sq_stats *stats)
+{
+	unsigned int len;
+	void *ptr;
+
+	while ((ptr = virtqueue_get_buf(sq->vq, &len)) != NULL) {
+		if (!is_xdp_frame(ptr)) {
+			struct sk_buff *skb = ptr;
+
+			pr_debug("Sent skb %p\n", skb);
+
+			stats->bytes += skb->len;
+			napi_consume_skb(skb, in_napi);
+		} else {
+			struct xdp_frame *frame = ptr_to_xdp(ptr);
+
+			stats->bytes += xdp_get_frame_len(frame);
+			xdp_return_frame(frame);
+		}
+		stats->packets++;
+	}
+}
+
+static inline void virtqueue_napi_schedule(struct napi_struct *napi,
+					   struct virtqueue *vq)
+{
+	if (napi_schedule_prep(napi)) {
+		virtqueue_disable_cb(vq);
+		__napi_schedule(napi);
+	}
+}
+
+static inline bool is_xdp_raw_buffer_queue(struct virtnet_info *vi, int q)
+{
+	if (q < (vi->curr_queue_pairs - vi->xdp_queue_pairs))
+		return false;
+	else if (q < vi->curr_queue_pairs)
+		return true;
+	else
+		return false;
+}
+#endif
-- 
2.32.0.3.g01195cf9f



* [PATCH net-next v1 05/19] virtio_net: add prefix virtnet to all struct/api inside virtio_net.h
  2023-10-16 12:00 [PATCH net-next v1 00/19] virtio-net: support AF_XDP zero copy Xuan Zhuo
                   ` (3 preceding siblings ...)
  2023-10-16 12:00 ` [PATCH net-next v1 04/19] virtio_net: move to virtio_net.h Xuan Zhuo
@ 2023-10-16 12:00 ` Xuan Zhuo
  2023-10-19  6:14   ` Jason Wang
  2023-10-16 12:00 ` [PATCH net-next v1 06/19] virtio_net: separate virtnet_rx_resize() Xuan Zhuo
                   ` (14 subsequent siblings)
  19 siblings, 1 reply; 66+ messages in thread
From: Xuan Zhuo @ 2023-10-16 12:00 UTC (permalink / raw)
  To: netdev
  Cc: David S. Miller, Eric Dumazet, Jakub Kicinski, Paolo Abeni,
	Michael S. Tsirkin, Jason Wang, Xuan Zhuo, Alexei Starovoitov,
	Daniel Borkmann, Jesper Dangaard Brouer, John Fastabend,
	virtualization, bpf

We moved some structures and APIs to the header file, but these
structures and APIs are not prefixed with virtnet. This patch adds
the virtnet prefix to them.

Signed-off-by: Xuan Zhuo <xuanzhuo@linux.alibaba.com>
---
 drivers/net/virtio/main.c       | 122 ++++++++++++++++----------------
 drivers/net/virtio/virtio_net.h |  30 ++++----
 2 files changed, 76 insertions(+), 76 deletions(-)

diff --git a/drivers/net/virtio/main.c b/drivers/net/virtio/main.c
index d8b6c0d86f29..ba38b6078e1d 100644
--- a/drivers/net/virtio/main.c
+++ b/drivers/net/virtio/main.c
@@ -180,7 +180,7 @@ skb_vnet_common_hdr(struct sk_buff *skb)
  * private is used to chain pages for big packets, put the whole
  * most recent used list in the beginning for reuse
  */
-static void give_pages(struct receive_queue *rq, struct page *page)
+static void give_pages(struct virtnet_rq *rq, struct page *page)
 {
 	struct page *end;
 
@@ -190,7 +190,7 @@ static void give_pages(struct receive_queue *rq, struct page *page)
 	rq->pages = page;
 }
 
-static struct page *get_a_page(struct receive_queue *rq, gfp_t gfp_mask)
+static struct page *get_a_page(struct virtnet_rq *rq, gfp_t gfp_mask)
 {
 	struct page *p = rq->pages;
 
@@ -225,7 +225,7 @@ static void virtqueue_napi_complete(struct napi_struct *napi,
 	opaque = virtqueue_enable_cb_prepare(vq);
 	if (napi_complete_done(napi, processed)) {
 		if (unlikely(virtqueue_poll(vq, opaque)))
-			virtqueue_napi_schedule(napi, vq);
+			virtnet_vq_napi_schedule(napi, vq);
 	} else {
 		virtqueue_disable_cb(vq);
 	}
@@ -240,7 +240,7 @@ static void skb_xmit_done(struct virtqueue *vq)
 	virtqueue_disable_cb(vq);
 
 	if (napi->weight)
-		virtqueue_napi_schedule(napi, vq);
+		virtnet_vq_napi_schedule(napi, vq);
 	else
 		/* We were probably waiting for more output buffers. */
 		netif_wake_subqueue(vi->dev, vq2txq(vq));
@@ -281,7 +281,7 @@ static struct sk_buff *virtnet_build_skb(void *buf, unsigned int buflen,
 
 /* Called from bottom half context */
 static struct sk_buff *page_to_skb(struct virtnet_info *vi,
-				   struct receive_queue *rq,
+				   struct virtnet_rq *rq,
 				   struct page *page, unsigned int offset,
 				   unsigned int len, unsigned int truesize,
 				   unsigned int headroom)
@@ -380,7 +380,7 @@ static struct sk_buff *page_to_skb(struct virtnet_info *vi,
 	return skb;
 }
 
-static void virtnet_rq_unmap(struct receive_queue *rq, void *buf, u32 len)
+static void virtnet_rq_unmap(struct virtnet_rq *rq, void *buf, u32 len)
 {
 	struct page *page = virt_to_head_page(buf);
 	struct virtnet_rq_dma *dma;
@@ -409,7 +409,7 @@ static void virtnet_rq_unmap(struct receive_queue *rq, void *buf, u32 len)
 	put_page(page);
 }
 
-static void *virtnet_rq_get_buf(struct receive_queue *rq, u32 *len, void **ctx)
+static void *virtnet_rq_get_buf(struct virtnet_rq *rq, u32 *len, void **ctx)
 {
 	void *buf;
 
@@ -420,7 +420,7 @@ static void *virtnet_rq_get_buf(struct receive_queue *rq, u32 *len, void **ctx)
 	return buf;
 }
 
-static void *virtnet_rq_detach_unused_buf(struct receive_queue *rq)
+static void *virtnet_rq_detach_unused_buf(struct virtnet_rq *rq)
 {
 	void *buf;
 
@@ -431,7 +431,7 @@ static void *virtnet_rq_detach_unused_buf(struct receive_queue *rq)
 	return buf;
 }
 
-static void virtnet_rq_init_one_sg(struct receive_queue *rq, void *buf, u32 len)
+static void virtnet_rq_init_one_sg(struct virtnet_rq *rq, void *buf, u32 len)
 {
 	struct virtnet_rq_dma *dma;
 	dma_addr_t addr;
@@ -456,7 +456,7 @@ static void virtnet_rq_init_one_sg(struct receive_queue *rq, void *buf, u32 len)
 	rq->sg[0].length = len;
 }
 
-static void *virtnet_rq_alloc(struct receive_queue *rq, u32 size, gfp_t gfp)
+static void *virtnet_rq_alloc(struct virtnet_rq *rq, u32 size, gfp_t gfp)
 {
 	struct page_frag *alloc_frag = &rq->alloc_frag;
 	struct virtnet_rq_dma *dma;
@@ -530,11 +530,11 @@ static void virtnet_rq_set_premapped(struct virtnet_info *vi)
 	}
 }
 
-static void free_old_xmit(struct send_queue *sq, bool in_napi)
+static void free_old_xmit(struct virtnet_sq *sq, bool in_napi)
 {
 	struct virtnet_sq_stats stats = {};
 
-	__free_old_xmit(sq, in_napi, &stats);
+	virtnet_free_old_xmit(sq, in_napi, &stats);
 
 	/* Avoid overhead when no packets have been processed
 	 * happens when called speculatively from start_xmit.
@@ -550,7 +550,7 @@ static void free_old_xmit(struct send_queue *sq, bool in_napi)
 
 static void check_sq_full_and_disable(struct virtnet_info *vi,
 				      struct net_device *dev,
-				      struct send_queue *sq)
+				      struct virtnet_sq *sq)
 {
 	bool use_napi = sq->napi.weight;
 	int qnum;
@@ -571,7 +571,7 @@ static void check_sq_full_and_disable(struct virtnet_info *vi,
 		netif_stop_subqueue(dev, qnum);
 		if (use_napi) {
 			if (unlikely(!virtqueue_enable_cb_delayed(sq->vq)))
-				virtqueue_napi_schedule(&sq->napi, sq->vq);
+				virtnet_vq_napi_schedule(&sq->napi, sq->vq);
 		} else if (unlikely(!virtqueue_enable_cb_delayed(sq->vq))) {
 			/* More just got used, free them then recheck. */
 			free_old_xmit(sq, false);
@@ -584,7 +584,7 @@ static void check_sq_full_and_disable(struct virtnet_info *vi,
 }
 
 static int __virtnet_xdp_xmit_one(struct virtnet_info *vi,
-				   struct send_queue *sq,
+				   struct virtnet_sq *sq,
 				   struct xdp_frame *xdpf)
 {
 	struct virtio_net_hdr_mrg_rxbuf *hdr;
@@ -674,9 +674,9 @@ static int virtnet_xdp_xmit(struct net_device *dev,
 {
 	struct virtnet_info *vi = netdev_priv(dev);
 	struct virtnet_sq_stats stats = {};
-	struct receive_queue *rq = vi->rq;
+	struct virtnet_rq *rq = vi->rq;
 	struct bpf_prog *xdp_prog;
-	struct send_queue *sq;
+	struct virtnet_sq *sq;
 	int nxmit = 0;
 	int kicks = 0;
 	int ret;
@@ -697,7 +697,7 @@ static int virtnet_xdp_xmit(struct net_device *dev,
 	}
 
 	/* Free up any pending old buffers before queueing new ones. */
-	__free_old_xmit(sq, false, &stats);
+	virtnet_free_old_xmit(sq, false, &stats);
 
 	for (i = 0; i < n; i++) {
 		struct xdp_frame *xdpf = frames[i];
@@ -708,7 +708,7 @@ static int virtnet_xdp_xmit(struct net_device *dev,
 	}
 	ret = nxmit;
 
-	if (!is_xdp_raw_buffer_queue(vi, sq - vi->sq))
+	if (!virtnet_is_xdp_raw_buffer_queue(vi, sq - vi->sq))
 		check_sq_full_and_disable(vi, dev, sq);
 
 	if (flags & XDP_XMIT_FLUSH) {
@@ -816,7 +816,7 @@ static unsigned int virtnet_get_headroom(struct virtnet_info *vi)
  * across multiple buffers (num_buf > 1), and we make sure buffers
  * have enough headroom.
  */
-static struct page *xdp_linearize_page(struct receive_queue *rq,
+static struct page *xdp_linearize_page(struct virtnet_rq *rq,
 				       int *num_buf,
 				       struct page *p,
 				       int offset,
@@ -897,7 +897,7 @@ static struct sk_buff *receive_small_build_skb(struct virtnet_info *vi,
 
 static struct sk_buff *receive_small_xdp(struct net_device *dev,
 					 struct virtnet_info *vi,
-					 struct receive_queue *rq,
+					 struct virtnet_rq *rq,
 					 struct bpf_prog *xdp_prog,
 					 void *buf,
 					 unsigned int xdp_headroom,
@@ -984,7 +984,7 @@ static struct sk_buff *receive_small_xdp(struct net_device *dev,
 
 static struct sk_buff *receive_small(struct net_device *dev,
 				     struct virtnet_info *vi,
-				     struct receive_queue *rq,
+				     struct virtnet_rq *rq,
 				     void *buf, void *ctx,
 				     unsigned int len,
 				     unsigned int *xdp_xmit,
@@ -1031,7 +1031,7 @@ static struct sk_buff *receive_small(struct net_device *dev,
 
 static struct sk_buff *receive_big(struct net_device *dev,
 				   struct virtnet_info *vi,
-				   struct receive_queue *rq,
+				   struct virtnet_rq *rq,
 				   void *buf,
 				   unsigned int len,
 				   struct virtnet_rq_stats *stats)
@@ -1052,7 +1052,7 @@ static struct sk_buff *receive_big(struct net_device *dev,
 	return NULL;
 }
 
-static void mergeable_buf_free(struct receive_queue *rq, int num_buf,
+static void mergeable_buf_free(struct virtnet_rq *rq, int num_buf,
 			       struct net_device *dev,
 			       struct virtnet_rq_stats *stats)
 {
@@ -1126,7 +1126,7 @@ static struct sk_buff *build_skb_from_xdp_buff(struct net_device *dev,
 /* TODO: build xdp in big mode */
 static int virtnet_build_xdp_buff_mrg(struct net_device *dev,
 				      struct virtnet_info *vi,
-				      struct receive_queue *rq,
+				      struct virtnet_rq *rq,
 				      struct xdp_buff *xdp,
 				      void *buf,
 				      unsigned int len,
@@ -1214,7 +1214,7 @@ static int virtnet_build_xdp_buff_mrg(struct net_device *dev,
 }
 
 static void *mergeable_xdp_get_buf(struct virtnet_info *vi,
-				   struct receive_queue *rq,
+				   struct virtnet_rq *rq,
 				   struct bpf_prog *xdp_prog,
 				   void *ctx,
 				   unsigned int *frame_sz,
@@ -1289,7 +1289,7 @@ static void *mergeable_xdp_get_buf(struct virtnet_info *vi,
 
 static struct sk_buff *receive_mergeable_xdp(struct net_device *dev,
 					     struct virtnet_info *vi,
-					     struct receive_queue *rq,
+					     struct virtnet_rq *rq,
 					     struct bpf_prog *xdp_prog,
 					     void *buf,
 					     void *ctx,
@@ -1349,7 +1349,7 @@ static struct sk_buff *receive_mergeable_xdp(struct net_device *dev,
 
 static struct sk_buff *receive_mergeable(struct net_device *dev,
 					 struct virtnet_info *vi,
-					 struct receive_queue *rq,
+					 struct virtnet_rq *rq,
 					 void *buf,
 					 void *ctx,
 					 unsigned int len,
@@ -1494,7 +1494,7 @@ static void virtio_skb_set_hash(const struct virtio_net_hdr_v1_hash *hdr_hash,
 	skb_set_hash(skb, __le32_to_cpu(hdr_hash->hash_value), rss_hash_type);
 }
 
-static void receive_buf(struct virtnet_info *vi, struct receive_queue *rq,
+static void receive_buf(struct virtnet_info *vi, struct virtnet_rq *rq,
 			void *buf, unsigned int len, void **ctx,
 			unsigned int *xdp_xmit,
 			struct virtnet_rq_stats *stats)
@@ -1554,7 +1554,7 @@ static void receive_buf(struct virtnet_info *vi, struct receive_queue *rq,
  * not need to use  mergeable_len_to_ctx here - it is enough
  * to store the headroom as the context ignoring the truesize.
  */
-static int add_recvbuf_small(struct virtnet_info *vi, struct receive_queue *rq,
+static int add_recvbuf_small(struct virtnet_info *vi, struct virtnet_rq *rq,
 			     gfp_t gfp)
 {
 	char *buf;
@@ -1583,7 +1583,7 @@ static int add_recvbuf_small(struct virtnet_info *vi, struct receive_queue *rq,
 	return err;
 }
 
-static int add_recvbuf_big(struct virtnet_info *vi, struct receive_queue *rq,
+static int add_recvbuf_big(struct virtnet_info *vi, struct virtnet_rq *rq,
 			   gfp_t gfp)
 {
 	struct page *first, *list = NULL;
@@ -1632,7 +1632,7 @@ static int add_recvbuf_big(struct virtnet_info *vi, struct receive_queue *rq,
 	return err;
 }
 
-static unsigned int get_mergeable_buf_len(struct receive_queue *rq,
+static unsigned int get_mergeable_buf_len(struct virtnet_rq *rq,
 					  struct ewma_pkt_len *avg_pkt_len,
 					  unsigned int room)
 {
@@ -1650,7 +1650,7 @@ static unsigned int get_mergeable_buf_len(struct receive_queue *rq,
 }
 
 static int add_recvbuf_mergeable(struct virtnet_info *vi,
-				 struct receive_queue *rq, gfp_t gfp)
+				 struct virtnet_rq *rq, gfp_t gfp)
 {
 	struct page_frag *alloc_frag = &rq->alloc_frag;
 	unsigned int headroom = virtnet_get_headroom(vi);
@@ -1705,7 +1705,7 @@ static int add_recvbuf_mergeable(struct virtnet_info *vi,
  * before we're receiving packets, or from refill_work which is
  * careful to disable receiving (using napi_disable).
  */
-static bool try_fill_recv(struct virtnet_info *vi, struct receive_queue *rq,
+static bool try_fill_recv(struct virtnet_info *vi, struct virtnet_rq *rq,
 			  gfp_t gfp)
 {
 	int err;
@@ -1737,9 +1737,9 @@ static bool try_fill_recv(struct virtnet_info *vi, struct receive_queue *rq,
 static void skb_recv_done(struct virtqueue *rvq)
 {
 	struct virtnet_info *vi = rvq->vdev->priv;
-	struct receive_queue *rq = &vi->rq[vq2rxq(rvq)];
+	struct virtnet_rq *rq = &vi->rq[vq2rxq(rvq)];
 
-	virtqueue_napi_schedule(&rq->napi, rvq);
+	virtnet_vq_napi_schedule(&rq->napi, rvq);
 }
 
 static void virtnet_napi_enable(struct virtqueue *vq, struct napi_struct *napi)
@@ -1751,7 +1751,7 @@ static void virtnet_napi_enable(struct virtqueue *vq, struct napi_struct *napi)
 	 * Call local_bh_enable after to trigger softIRQ processing.
 	 */
 	local_bh_disable();
-	virtqueue_napi_schedule(napi, vq);
+	virtnet_vq_napi_schedule(napi, vq);
 	local_bh_enable();
 }
 
@@ -1787,7 +1787,7 @@ static void refill_work(struct work_struct *work)
 	int i;
 
 	for (i = 0; i < vi->curr_queue_pairs; i++) {
-		struct receive_queue *rq = &vi->rq[i];
+		struct virtnet_rq *rq = &vi->rq[i];
 
 		napi_disable(&rq->napi);
 		still_empty = !try_fill_recv(vi, rq, GFP_KERNEL);
@@ -1801,7 +1801,7 @@ static void refill_work(struct work_struct *work)
 	}
 }
 
-static int virtnet_receive(struct receive_queue *rq, int budget,
+static int virtnet_receive(struct virtnet_rq *rq, int budget,
 			   unsigned int *xdp_xmit)
 {
 	struct virtnet_info *vi = rq->vq->vdev->priv;
@@ -1848,14 +1848,14 @@ static int virtnet_receive(struct receive_queue *rq, int budget,
 	return stats.packets;
 }
 
-static void virtnet_poll_cleantx(struct receive_queue *rq)
+static void virtnet_poll_cleantx(struct virtnet_rq *rq)
 {
 	struct virtnet_info *vi = rq->vq->vdev->priv;
 	unsigned int index = vq2rxq(rq->vq);
-	struct send_queue *sq = &vi->sq[index];
+	struct virtnet_sq *sq = &vi->sq[index];
 	struct netdev_queue *txq = netdev_get_tx_queue(vi->dev, index);
 
-	if (!sq->napi.weight || is_xdp_raw_buffer_queue(vi, index))
+	if (!sq->napi.weight || virtnet_is_xdp_raw_buffer_queue(vi, index))
 		return;
 
 	if (__netif_tx_trylock(txq)) {
@@ -1878,10 +1878,10 @@ static void virtnet_poll_cleantx(struct receive_queue *rq)
 
 static int virtnet_poll(struct napi_struct *napi, int budget)
 {
-	struct receive_queue *rq =
-		container_of(napi, struct receive_queue, napi);
+	struct virtnet_rq *rq =
+		container_of(napi, struct virtnet_rq, napi);
 	struct virtnet_info *vi = rq->vq->vdev->priv;
-	struct send_queue *sq;
+	struct virtnet_sq *sq;
 	unsigned int received;
 	unsigned int xdp_xmit = 0;
 
@@ -1972,14 +1972,14 @@ static int virtnet_open(struct net_device *dev)
 
 static int virtnet_poll_tx(struct napi_struct *napi, int budget)
 {
-	struct send_queue *sq = container_of(napi, struct send_queue, napi);
+	struct virtnet_sq *sq = container_of(napi, struct virtnet_sq, napi);
 	struct virtnet_info *vi = sq->vq->vdev->priv;
 	unsigned int index = vq2txq(sq->vq);
 	struct netdev_queue *txq;
 	int opaque;
 	bool done;
 
-	if (unlikely(is_xdp_raw_buffer_queue(vi, index))) {
+	if (unlikely(virtnet_is_xdp_raw_buffer_queue(vi, index))) {
 		/* We don't need to enable cb for XDP */
 		napi_complete_done(napi, 0);
 		return 0;
@@ -2016,7 +2016,7 @@ static int virtnet_poll_tx(struct napi_struct *napi, int budget)
 	return 0;
 }
 
-static int xmit_skb(struct send_queue *sq, struct sk_buff *skb)
+static int xmit_skb(struct virtnet_sq *sq, struct sk_buff *skb)
 {
 	struct virtio_net_hdr_mrg_rxbuf *hdr;
 	const unsigned char *dest = ((struct ethhdr *)skb->data)->h_dest;
@@ -2067,7 +2067,7 @@ static netdev_tx_t start_xmit(struct sk_buff *skb, struct net_device *dev)
 {
 	struct virtnet_info *vi = netdev_priv(dev);
 	int qnum = skb_get_queue_mapping(skb);
-	struct send_queue *sq = &vi->sq[qnum];
+	struct virtnet_sq *sq = &vi->sq[qnum];
 	int err;
 	struct netdev_queue *txq = netdev_get_tx_queue(dev, qnum);
 	bool kick = !netdev_xmit_more();
@@ -2121,7 +2121,7 @@ static netdev_tx_t start_xmit(struct sk_buff *skb, struct net_device *dev)
 }
 
 static int virtnet_rx_resize(struct virtnet_info *vi,
-			     struct receive_queue *rq, u32 ring_num)
+			     struct virtnet_rq *rq, u32 ring_num)
 {
 	bool running = netif_running(vi->dev);
 	int err, qindex;
@@ -2144,7 +2144,7 @@ static int virtnet_rx_resize(struct virtnet_info *vi,
 }
 
 static int virtnet_tx_resize(struct virtnet_info *vi,
-			     struct send_queue *sq, u32 ring_num)
+			     struct virtnet_sq *sq, u32 ring_num)
 {
 	bool running = netif_running(vi->dev);
 	struct netdev_queue *txq;
@@ -2290,8 +2290,8 @@ static void virtnet_stats(struct net_device *dev,
 
 	for (i = 0; i < vi->max_queue_pairs; i++) {
 		u64 tpackets, tbytes, terrors, rpackets, rbytes, rdrops;
-		struct receive_queue *rq = &vi->rq[i];
-		struct send_queue *sq = &vi->sq[i];
+		struct virtnet_rq *rq = &vi->rq[i];
+		struct virtnet_sq *sq = &vi->sq[i];
 
 		do {
 			start = u64_stats_fetch_begin(&sq->stats.syncp);
@@ -2604,8 +2604,8 @@ static int virtnet_set_ringparam(struct net_device *dev,
 {
 	struct virtnet_info *vi = netdev_priv(dev);
 	u32 rx_pending, tx_pending;
-	struct receive_queue *rq;
-	struct send_queue *sq;
+	struct virtnet_rq *rq;
+	struct virtnet_sq *sq;
 	int i, err;
 
 	if (ring->rx_mini_pending || ring->rx_jumbo_pending)
@@ -2909,7 +2909,7 @@ static void virtnet_get_ethtool_stats(struct net_device *dev,
 	size_t offset;
 
 	for (i = 0; i < vi->curr_queue_pairs; i++) {
-		struct receive_queue *rq = &vi->rq[i];
+		struct virtnet_rq *rq = &vi->rq[i];
 
 		stats_base = (u8 *)&rq->stats;
 		do {
@@ -2923,7 +2923,7 @@ static void virtnet_get_ethtool_stats(struct net_device *dev,
 	}
 
 	for (i = 0; i < vi->curr_queue_pairs; i++) {
-		struct send_queue *sq = &vi->sq[i];
+		struct virtnet_sq *sq = &vi->sq[i];
 
 		stats_base = (u8 *)&sq->stats;
 		do {
@@ -3604,7 +3604,7 @@ static int virtnet_set_features(struct net_device *dev,
 static void virtnet_tx_timeout(struct net_device *dev, unsigned int txqueue)
 {
 	struct virtnet_info *priv = netdev_priv(dev);
-	struct send_queue *sq = &priv->sq[txqueue];
+	struct virtnet_sq *sq = &priv->sq[txqueue];
 	struct netdev_queue *txq = netdev_get_tx_queue(dev, txqueue);
 
 	u64_stats_update_begin(&sq->stats.syncp);
@@ -3729,10 +3729,10 @@ static void free_receive_page_frags(struct virtnet_info *vi)
 
 static void virtnet_sq_free_unused_buf(struct virtqueue *vq, void *buf)
 {
-	if (!is_xdp_frame(buf))
+	if (!virtnet_is_xdp_frame(buf))
 		dev_kfree_skb(buf);
 	else
-		xdp_return_frame(ptr_to_xdp(buf));
+		xdp_return_frame(virtnet_ptr_to_xdp(buf));
 }
 
 static void virtnet_rq_free_unused_buf(struct virtqueue *vq, void *buf)
@@ -3761,7 +3761,7 @@ static void free_unused_bufs(struct virtnet_info *vi)
 	}
 
 	for (i = 0; i < vi->max_queue_pairs; i++) {
-		struct receive_queue *rq = &vi->rq[i];
+		struct virtnet_rq *rq = &vi->rq[i];
 
 		while ((buf = virtnet_rq_detach_unused_buf(rq)) != NULL)
 			virtnet_rq_free_unused_buf(rq->vq, buf);
diff --git a/drivers/net/virtio/virtio_net.h b/drivers/net/virtio/virtio_net.h
index ddaf0ecf4d9d..282504d6639a 100644
--- a/drivers/net/virtio/virtio_net.h
+++ b/drivers/net/virtio/virtio_net.h
@@ -59,8 +59,8 @@ struct virtnet_rq_dma {
 };
 
 /* Internal representation of a send virtqueue */
-struct send_queue {
-	/* Virtqueue associated with this send _queue */
+struct virtnet_sq {
+	/* Virtqueue associated with this virtnet_sq */
 	struct virtqueue *vq;
 
 	/* TX: fragments + linear part + virtio header */
@@ -80,8 +80,8 @@ struct send_queue {
 };
 
 /* Internal representation of a receive virtqueue */
-struct receive_queue {
-	/* Virtqueue associated with this receive_queue */
+struct virtnet_rq {
+	/* Virtqueue associated with this virtnet_rq */
 	struct virtqueue *vq;
 
 	struct napi_struct napi;
@@ -123,8 +123,8 @@ struct virtnet_info {
 	struct virtio_device *vdev;
 	struct virtqueue *cvq;
 	struct net_device *dev;
-	struct send_queue *sq;
-	struct receive_queue *rq;
+	struct virtnet_sq *sq;
+	struct virtnet_rq *rq;
 	unsigned int status;
 
 	/* Max # of queue pairs supported by the device */
@@ -201,24 +201,24 @@ struct virtnet_info {
 	struct failover *failover;
 };
 
-static inline bool is_xdp_frame(void *ptr)
+static inline bool virtnet_is_xdp_frame(void *ptr)
 {
 	return (unsigned long)ptr & VIRTIO_XDP_FLAG;
 }
 
-static inline struct xdp_frame *ptr_to_xdp(void *ptr)
+static inline struct xdp_frame *virtnet_ptr_to_xdp(void *ptr)
 {
 	return (struct xdp_frame *)((unsigned long)ptr & ~VIRTIO_XDP_FLAG);
 }
 
-static inline void __free_old_xmit(struct send_queue *sq, bool in_napi,
-				   struct virtnet_sq_stats *stats)
+static inline void virtnet_free_old_xmit(struct virtnet_sq *sq, bool in_napi,
+					 struct virtnet_sq_stats *stats)
 {
 	unsigned int len;
 	void *ptr;
 
 	while ((ptr = virtqueue_get_buf(sq->vq, &len)) != NULL) {
-		if (!is_xdp_frame(ptr)) {
+		if (!virtnet_is_xdp_frame(ptr)) {
 			struct sk_buff *skb = ptr;
 
 			pr_debug("Sent skb %p\n", skb);
@@ -226,7 +226,7 @@ static inline void __free_old_xmit(struct send_queue *sq, bool in_napi,
 			stats->bytes += skb->len;
 			napi_consume_skb(skb, in_napi);
 		} else {
-			struct xdp_frame *frame = ptr_to_xdp(ptr);
+			struct xdp_frame *frame = virtnet_ptr_to_xdp(ptr);
 
 			stats->bytes += xdp_get_frame_len(frame);
 			xdp_return_frame(frame);
@@ -235,8 +235,8 @@ static inline void __free_old_xmit(struct send_queue *sq, bool in_napi,
 	}
 }
 
-static inline void virtqueue_napi_schedule(struct napi_struct *napi,
-					   struct virtqueue *vq)
+static inline void virtnet_vq_napi_schedule(struct napi_struct *napi,
+					    struct virtqueue *vq)
 {
 	if (napi_schedule_prep(napi)) {
 		virtqueue_disable_cb(vq);
@@ -244,7 +244,7 @@ static inline void virtqueue_napi_schedule(struct napi_struct *napi,
 	}
 }
 
-static inline bool is_xdp_raw_buffer_queue(struct virtnet_info *vi, int q)
+static inline bool virtnet_is_xdp_raw_buffer_queue(struct virtnet_info *vi, int q)
 {
 	if (q < (vi->curr_queue_pairs - vi->xdp_queue_pairs))
 		return false;
-- 
2.32.0.3.g01195cf9f



* [PATCH net-next v1 06/19] virtio_net: separate virtnet_rx_resize()
  2023-10-16 12:00 [PATCH net-next v1 00/19] virtio-net: support AF_XDP zero copy Xuan Zhuo
                   ` (4 preceding siblings ...)
  2023-10-16 12:00 ` [PATCH net-next v1 05/19] virtio_net: add prefix virtnet to all struct/api inside virtio_net.h Xuan Zhuo
@ 2023-10-16 12:00 ` Xuan Zhuo
  2023-10-19  6:17   ` Jason Wang
  2023-10-16 12:00 ` [PATCH net-next v1 07/19] virtio_net: separate virtnet_tx_resize() Xuan Zhuo
                   ` (13 subsequent siblings)
  19 siblings, 1 reply; 66+ messages in thread
From: Xuan Zhuo @ 2023-10-16 12:00 UTC (permalink / raw)
  To: netdev
  Cc: David S. Miller, Eric Dumazet, Jakub Kicinski, Paolo Abeni,
	Michael S. Tsirkin, Jason Wang, Xuan Zhuo, Alexei Starovoitov,
	Daniel Borkmann, Jesper Dangaard Brouer, John Fastabend,
	virtualization, bpf

This patch separates two sub-functions from virtnet_rx_resize():

* virtnet_rx_pause
* virtnet_rx_resume

The subsequent rx reset for xsk can then share these two functions.

Signed-off-by: Xuan Zhuo <xuanzhuo@linux.alibaba.com>
---
 drivers/net/virtio/main.c       | 29 +++++++++++++++++++++--------
 drivers/net/virtio/virtio_net.h |  3 +++
 2 files changed, 24 insertions(+), 8 deletions(-)

diff --git a/drivers/net/virtio/main.c b/drivers/net/virtio/main.c
index ba38b6078e1d..e6b262341619 100644
--- a/drivers/net/virtio/main.c
+++ b/drivers/net/virtio/main.c
@@ -2120,26 +2120,39 @@ static netdev_tx_t start_xmit(struct sk_buff *skb, struct net_device *dev)
 	return NETDEV_TX_OK;
 }
 
-static int virtnet_rx_resize(struct virtnet_info *vi,
-			     struct virtnet_rq *rq, u32 ring_num)
+void virtnet_rx_pause(struct virtnet_info *vi, struct virtnet_rq *rq)
 {
 	bool running = netif_running(vi->dev);
-	int err, qindex;
-
-	qindex = rq - vi->rq;
 
 	if (running)
 		napi_disable(&rq->napi);
+}
 
-	err = virtqueue_resize(rq->vq, ring_num, virtnet_rq_free_unused_buf);
-	if (err)
-		netdev_err(vi->dev, "resize rx fail: rx queue index: %d err: %d\n", qindex, err);
+void virtnet_rx_resume(struct virtnet_info *vi, struct virtnet_rq *rq)
+{
+	bool running = netif_running(vi->dev);
 
 	if (!try_fill_recv(vi, rq, GFP_KERNEL))
 		schedule_delayed_work(&vi->refill, 0);
 
 	if (running)
 		virtnet_napi_enable(rq->vq, &rq->napi);
+}
+
+static int virtnet_rx_resize(struct virtnet_info *vi,
+			     struct virtnet_rq *rq, u32 ring_num)
+{
+	int err, qindex;
+
+	qindex = rq - vi->rq;
+
+	virtnet_rx_pause(vi, rq);
+
+	err = virtqueue_resize(rq->vq, ring_num, virtnet_rq_free_unused_buf);
+	if (err)
+		netdev_err(vi->dev, "resize rx fail: rx queue index: %d err: %d\n", qindex, err);
+
+	virtnet_rx_resume(vi, rq);
 	return err;
 }
 
diff --git a/drivers/net/virtio/virtio_net.h b/drivers/net/virtio/virtio_net.h
index 282504d6639a..70eea23adba6 100644
--- a/drivers/net/virtio/virtio_net.h
+++ b/drivers/net/virtio/virtio_net.h
@@ -253,4 +253,7 @@ static inline bool virtnet_is_xdp_raw_buffer_queue(struct virtnet_info *vi, int
 	else
 		return false;
 }
+
+void virtnet_rx_pause(struct virtnet_info *vi, struct virtnet_rq *rq);
+void virtnet_rx_resume(struct virtnet_info *vi, struct virtnet_rq *rq);
 #endif
-- 
2.32.0.3.g01195cf9f



* [PATCH net-next v1 07/19] virtio_net: separate virtnet_tx_resize()
  2023-10-16 12:00 [PATCH net-next v1 00/19] virtio-net: support AF_XDP zero copy Xuan Zhuo
                   ` (5 preceding siblings ...)
  2023-10-16 12:00 ` [PATCH net-next v1 06/19] virtio_net: separate virtnet_rx_resize() Xuan Zhuo
@ 2023-10-16 12:00 ` Xuan Zhuo
  2023-10-19  6:18   ` Jason Wang
  2023-10-16 12:00 ` [PATCH net-next v1 08/19] virtio_net: sq support premapped mode Xuan Zhuo
                   ` (12 subsequent siblings)
  19 siblings, 1 reply; 66+ messages in thread
From: Xuan Zhuo @ 2023-10-16 12:00 UTC (permalink / raw)
  To: netdev
  Cc: David S. Miller, Eric Dumazet, Jakub Kicinski, Paolo Abeni,
	Michael S. Tsirkin, Jason Wang, Xuan Zhuo, Alexei Starovoitov,
	Daniel Borkmann, Jesper Dangaard Brouer, John Fastabend,
	virtualization, bpf

This patch separates two sub-functions from virtnet_tx_resize():

* virtnet_tx_pause
* virtnet_tx_resume

Then the subsequent virtnet_tx_reset() can share these two functions.

Signed-off-by: Xuan Zhuo <xuanzhuo@linux.alibaba.com>
---
 drivers/net/virtio/main.c       | 35 +++++++++++++++++++++++++++------
 drivers/net/virtio/virtio_net.h |  2 ++
 2 files changed, 31 insertions(+), 6 deletions(-)

diff --git a/drivers/net/virtio/main.c b/drivers/net/virtio/main.c
index e6b262341619..8da84ea9bcbe 100644
--- a/drivers/net/virtio/main.c
+++ b/drivers/net/virtio/main.c
@@ -2156,12 +2156,11 @@ static int virtnet_rx_resize(struct virtnet_info *vi,
 	return err;
 }
 
-static int virtnet_tx_resize(struct virtnet_info *vi,
-			     struct virtnet_sq *sq, u32 ring_num)
+void virtnet_tx_pause(struct virtnet_info *vi, struct virtnet_sq *sq)
 {
 	bool running = netif_running(vi->dev);
 	struct netdev_queue *txq;
-	int err, qindex;
+	int qindex;
 
 	qindex = sq - vi->sq;
 
@@ -2182,10 +2181,17 @@ static int virtnet_tx_resize(struct virtnet_info *vi,
 	netif_stop_subqueue(vi->dev, qindex);
 
 	__netif_tx_unlock_bh(txq);
+}
 
-	err = virtqueue_resize(sq->vq, ring_num, virtnet_sq_free_unused_buf);
-	if (err)
-		netdev_err(vi->dev, "resize tx fail: tx queue index: %d err: %d\n", qindex, err);
+void virtnet_tx_resume(struct virtnet_info *vi, struct virtnet_sq *sq)
+{
+	bool running = netif_running(vi->dev);
+	struct netdev_queue *txq;
+	int qindex;
+
+	qindex = sq - vi->sq;
+
+	txq = netdev_get_tx_queue(vi->dev, qindex);
 
 	__netif_tx_lock_bh(txq);
 	sq->reset = false;
@@ -2194,6 +2200,23 @@ static int virtnet_tx_resize(struct virtnet_info *vi,
 
 	if (running)
 		virtnet_napi_tx_enable(vi, sq->vq, &sq->napi);
+}
+
+static int virtnet_tx_resize(struct virtnet_info *vi, struct virtnet_sq *sq,
+			     u32 ring_num)
+{
+	int qindex, err;
+
+	qindex = sq - vi->sq;
+
+	virtnet_tx_pause(vi, sq);
+
+	err = virtqueue_resize(sq->vq, ring_num, virtnet_sq_free_unused_buf);
+	if (err)
+		netdev_err(vi->dev, "resize tx fail: tx queue index: %d err: %d\n", qindex, err);
+
+	virtnet_tx_resume(vi, sq);
+
 	return err;
 }
 
diff --git a/drivers/net/virtio/virtio_net.h b/drivers/net/virtio/virtio_net.h
index 70eea23adba6..2f930af35364 100644
--- a/drivers/net/virtio/virtio_net.h
+++ b/drivers/net/virtio/virtio_net.h
@@ -256,4 +256,6 @@ static inline bool virtnet_is_xdp_raw_buffer_queue(struct virtnet_info *vi, int
 
 void virtnet_rx_pause(struct virtnet_info *vi, struct virtnet_rq *rq);
 void virtnet_rx_resume(struct virtnet_info *vi, struct virtnet_rq *rq);
+void virtnet_tx_pause(struct virtnet_info *vi, struct virtnet_sq *sq);
+void virtnet_tx_resume(struct virtnet_info *vi, struct virtnet_sq *sq);
 #endif
-- 
2.32.0.3.g01195cf9f



* [PATCH net-next v1 08/19] virtio_net: sq support premapped mode
  2023-10-16 12:00 [PATCH net-next v1 00/19] virtio-net: support AF_XDP zero copy Xuan Zhuo
                   ` (6 preceding siblings ...)
  2023-10-16 12:00 ` [PATCH net-next v1 07/19] virtio_net: separate virtnet_tx_resize() Xuan Zhuo
@ 2023-10-16 12:00 ` Xuan Zhuo
  2023-10-20  6:50   ` Jason Wang
  2023-10-16 12:00 ` [PATCH net-next v1 09/19] virtio_net: xsk: bind/unbind xsk Xuan Zhuo
                   ` (11 subsequent siblings)
  19 siblings, 1 reply; 66+ messages in thread
From: Xuan Zhuo @ 2023-10-16 12:00 UTC (permalink / raw)
  To: netdev
  Cc: David S. Miller, Eric Dumazet, Jakub Kicinski, Paolo Abeni,
	Michael S. Tsirkin, Jason Wang, Xuan Zhuo, Alexei Starovoitov,
	Daniel Borkmann, Jesper Dangaard Brouer, John Fastabend,
	virtualization, bpf

When xsk is enabled, xsk tx shares the send queue with the kernel.
Since xsk requires that the send queue use premapped mode, the send
queue must support premapped mode.

Signed-off-by: Xuan Zhuo <xuanzhuo@linux.alibaba.com>
---
 drivers/net/virtio/main.c       | 108 ++++++++++++++++++++++++++++----
 drivers/net/virtio/virtio_net.h |  54 +++++++++++++++-
 2 files changed, 149 insertions(+), 13 deletions(-)

diff --git a/drivers/net/virtio/main.c b/drivers/net/virtio/main.c
index 8da84ea9bcbe..02d27101fef1 100644
--- a/drivers/net/virtio/main.c
+++ b/drivers/net/virtio/main.c
@@ -514,20 +514,104 @@ static void *virtnet_rq_alloc(struct virtnet_rq *rq, u32 size, gfp_t gfp)
 	return buf;
 }
 
-static void virtnet_rq_set_premapped(struct virtnet_info *vi)
+static int virtnet_sq_set_premapped(struct virtnet_sq *sq)
 {
-	int i;
+	struct virtnet_sq_dma *d;
+	int err, size, i;
 
-	/* disable for big mode */
-	if (!vi->mergeable_rx_bufs && vi->big_packets)
-		return;
+	size = virtqueue_get_vring_size(sq->vq);
+
+	size += MAX_SKB_FRAGS + 2;
+
+	sq->dmainfo.head = kcalloc(size, sizeof(*sq->dmainfo.head), GFP_KERNEL);
+	if (!sq->dmainfo.head)
+		return -ENOMEM;
+
+	err = virtqueue_set_dma_premapped(sq->vq);
+	if (err) {
+		kfree(sq->dmainfo.head);
+		return err;
+	}
+
+	sq->dmainfo.free = NULL;
+
+	sq->do_dma = true;
+
+	for (i = 0; i < size; ++i) {
+		d = &sq->dmainfo.head[i];
+
+		d->next = sq->dmainfo.free;
+		sq->dmainfo.free = d;
+	}
+
+	return 0;
+}
+
+static void virtnet_set_premapped(struct virtnet_info *vi)
+{
+	int i;
 
 	for (i = 0; i < vi->max_queue_pairs; i++) {
-		if (virtqueue_set_dma_premapped(vi->rq[i].vq))
+		if (!virtnet_sq_set_premapped(&vi->sq[i]))
+			vi->sq[i].do_dma = true;
+
+		/* disable for big mode */
+		if (!vi->mergeable_rx_bufs && vi->big_packets)
 			continue;
 
-		vi->rq[i].do_dma = true;
+		if (!virtqueue_set_dma_premapped(vi->rq[i].vq))
+			vi->rq[i].do_dma = true;
+	}
+}
+
+static struct virtnet_sq_dma *virtnet_sq_map_sg(struct virtnet_sq *sq, int nents, void *data)
+{
+	struct virtnet_sq_dma *d, *head;
+	struct scatterlist *sg;
+	int i;
+
+	head = NULL;
+
+	for_each_sg(sq->sg, sg, nents, i) {
+		sg->dma_address = virtqueue_dma_map_single_attrs(sq->vq, sg_virt(sg),
+								 sg->length,
+								 DMA_TO_DEVICE, 0);
+		if (virtqueue_dma_mapping_error(sq->vq, sg->dma_address))
+			goto err;
+
+		d = sq->dmainfo.free;
+		sq->dmainfo.free = d->next;
+
+		d->addr = sg->dma_address;
+		d->len = sg->length;
+
+		d->next = head;
+		head = d;
+	}
+
+	head->data = data;
+
+	return (void *)((unsigned long)head | ((unsigned long)data & VIRTIO_XMIT_DATA_MASK));
+err:
+	virtnet_sq_unmap(sq, head);
+	return NULL;
+}
+
+static int virtnet_add_outbuf(struct virtnet_sq *sq, u32 num, void *data)
+{
+	int ret;
+
+	if (sq->do_dma) {
+		data = virtnet_sq_map_sg(sq, num, data);
+		if (!data)
+			return -ENOMEM;
 	}
+
+	ret = virtqueue_add_outbuf(sq->vq, sq->sg, num, data, GFP_ATOMIC);
+	if (ret && sq->do_dma)
+		virtnet_sq_unmap(sq, data);
+
+	return ret;
 }
 
 static void free_old_xmit(struct virtnet_sq *sq, bool in_napi)
@@ -623,8 +707,7 @@ static int __virtnet_xdp_xmit_one(struct virtnet_info *vi,
 			    skb_frag_size(frag), skb_frag_off(frag));
 	}
 
-	err = virtqueue_add_outbuf(sq->vq, sq->sg, nr_frags + 1,
-				   xdp_to_ptr(xdpf), GFP_ATOMIC);
+	err = virtnet_add_outbuf(sq, nr_frags + 1, xdp_to_ptr(xdpf));
 	if (unlikely(err))
 		return -ENOSPC; /* Caller handle free/refcnt */
 
@@ -2060,7 +2143,8 @@ static int xmit_skb(struct virtnet_sq *sq, struct sk_buff *skb)
 			return num_sg;
 		num_sg++;
 	}
-	return virtqueue_add_outbuf(sq->vq, sq->sg, num_sg, skb, GFP_ATOMIC);
+
+	return virtnet_add_outbuf(sq, num_sg, skb);
 }
 
 static netdev_tx_t start_xmit(struct sk_buff *skb, struct net_device *dev)
@@ -3717,6 +3801,8 @@ static void virtnet_free_queues(struct virtnet_info *vi)
 	for (i = 0; i < vi->max_queue_pairs; i++) {
 		__netif_napi_del(&vi->rq[i].napi);
 		__netif_napi_del(&vi->sq[i].napi);
+
+		kfree(vi->sq[i].dmainfo.head);
 	}
 
 	/* We called __netif_napi_del(),
@@ -3974,7 +4060,7 @@ static int init_vqs(struct virtnet_info *vi)
 	if (ret)
 		goto err_free;
 
-	virtnet_rq_set_premapped(vi);
+	virtnet_set_premapped(vi);
 
 	cpus_read_lock();
 	virtnet_set_affinity(vi);
diff --git a/drivers/net/virtio/virtio_net.h b/drivers/net/virtio/virtio_net.h
index 2f930af35364..cc742756e19a 100644
--- a/drivers/net/virtio/virtio_net.h
+++ b/drivers/net/virtio/virtio_net.h
@@ -7,6 +7,7 @@
 #include <linux/average.h>
 
 #define VIRTIO_XDP_FLAG	BIT(0)
+#define VIRTIO_XMIT_DATA_MASK (VIRTIO_XDP_FLAG)
 
 /* RX packet size EWMA. The average packet size is used to determine the packet
  * buffer size when refilling RX rings. As the entire RX ring may be refilled
@@ -58,6 +59,18 @@ struct virtnet_rq_dma {
 	u16 need_sync;
 };
 
+struct virtnet_sq_dma {
+	struct virtnet_sq_dma *next;
+	dma_addr_t addr;
+	u32 len;
+	void *data;
+};
+
+struct virtnet_sq_dma_head {
+	struct virtnet_sq_dma *free;
+	struct virtnet_sq_dma *head;
+};
+
 /* Internal representation of a send virtqueue */
 struct virtnet_sq {
 	/* Virtqueue associated with this virtnet_sq */
@@ -77,6 +90,10 @@ struct virtnet_sq {
 
 	/* Record whether sq is in reset state. */
 	bool reset;
+
+	bool do_dma;
+
+	struct virtnet_sq_dma_head dmainfo;
 };
 
 /* Internal representation of a receive virtqueue */
@@ -211,6 +228,29 @@ static inline struct xdp_frame *virtnet_ptr_to_xdp(void *ptr)
 	return (struct xdp_frame *)((unsigned long)ptr & ~VIRTIO_XDP_FLAG);
 }
 
+static inline void *virtnet_sq_unmap(struct virtnet_sq *sq, void *data)
+{
+	struct virtnet_sq_dma *next, *head;
+
+	head = (void *)((unsigned long)data & ~VIRTIO_XMIT_DATA_MASK);
+
+	data = head->data;
+
+	while (head) {
+		virtqueue_dma_unmap_single_attrs(sq->vq, head->addr, head->len,
+						 DMA_TO_DEVICE, 0);
+
+		next = head->next;
+
+		head->next = sq->dmainfo.free;
+		sq->dmainfo.free = head;
+
+		head = next;
+	}
+
+	return data;
+}
+
 static inline void virtnet_free_old_xmit(struct virtnet_sq *sq, bool in_napi,
 					 struct virtnet_sq_stats *stats)
 {
@@ -219,14 +259,24 @@ static inline void virtnet_free_old_xmit(struct virtnet_sq *sq, bool in_napi,
 
 	while ((ptr = virtqueue_get_buf(sq->vq, &len)) != NULL) {
 		if (!virtnet_is_xdp_frame(ptr)) {
-			struct sk_buff *skb = ptr;
+			struct sk_buff *skb;
+
+			if (sq->do_dma)
+				ptr = virtnet_sq_unmap(sq, ptr);
+
+			skb = ptr;
 
 			pr_debug("Sent skb %p\n", skb);
 
 			stats->bytes += skb->len;
 			napi_consume_skb(skb, in_napi);
 		} else {
-			struct xdp_frame *frame = virtnet_ptr_to_xdp(ptr);
+			struct xdp_frame *frame;
+
+			if (sq->do_dma)
+				ptr = virtnet_sq_unmap(sq, ptr);
+
+			frame = virtnet_ptr_to_xdp(ptr);
 
 			stats->bytes += xdp_get_frame_len(frame);
 			xdp_return_frame(frame);
-- 
2.32.0.3.g01195cf9f



* [PATCH net-next v1 09/19] virtio_net: xsk: bind/unbind xsk
  2023-10-16 12:00 [PATCH net-next v1 00/19] virtio-net: support AF_XDP zero copy Xuan Zhuo
                   ` (7 preceding siblings ...)
  2023-10-16 12:00 ` [PATCH net-next v1 08/19] virtio_net: sq support premapped mode Xuan Zhuo
@ 2023-10-16 12:00 ` Xuan Zhuo
  2023-10-20  6:51   ` Jason Wang
  2023-10-16 12:00 ` [PATCH net-next v1 10/19] virtio_net: xsk: prevent disable tx napi Xuan Zhuo
                   ` (10 subsequent siblings)
  19 siblings, 1 reply; 66+ messages in thread
From: Xuan Zhuo @ 2023-10-16 12:00 UTC (permalink / raw)
  To: netdev
  Cc: David S. Miller, Eric Dumazet, Jakub Kicinski, Paolo Abeni,
	Michael S. Tsirkin, Jason Wang, Xuan Zhuo, Alexei Starovoitov,
	Daniel Borkmann, Jesper Dangaard Brouer, John Fastabend,
	virtualization, bpf

This patch implements the logic to bind/unbind an xsk pool to/from the sq and rq.

Signed-off-by: Xuan Zhuo <xuanzhuo@linux.alibaba.com>
---
 drivers/net/virtio/Makefile     |   2 +-
 drivers/net/virtio/main.c       |  10 +-
 drivers/net/virtio/virtio_net.h |  18 ++++
 drivers/net/virtio/xsk.c        | 186 ++++++++++++++++++++++++++++++++
 drivers/net/virtio/xsk.h        |   7 ++
 5 files changed, 216 insertions(+), 7 deletions(-)
 create mode 100644 drivers/net/virtio/xsk.c
 create mode 100644 drivers/net/virtio/xsk.h

diff --git a/drivers/net/virtio/Makefile b/drivers/net/virtio/Makefile
index 15ed7c97fd4f..8c2a884d2dba 100644
--- a/drivers/net/virtio/Makefile
+++ b/drivers/net/virtio/Makefile
@@ -5,4 +5,4 @@
 
 obj-$(CONFIG_VIRTIO_NET) += virtio_net.o
 
-virtio_net-y := main.o
+virtio_net-y := main.o xsk.o
diff --git a/drivers/net/virtio/main.c b/drivers/net/virtio/main.c
index 02d27101fef1..38733a782f12 100644
--- a/drivers/net/virtio/main.c
+++ b/drivers/net/virtio/main.c
@@ -8,7 +8,6 @@
 #include <linux/etherdevice.h>
 #include <linux/module.h>
 #include <linux/virtio.h>
-#include <linux/virtio_net.h>
 #include <linux/bpf.h>
 #include <linux/bpf_trace.h>
 #include <linux/scatterlist.h>
@@ -139,9 +138,6 @@ struct virtio_net_common_hdr {
 	};
 };
 
-static void virtnet_rq_free_unused_buf(struct virtqueue *vq, void *buf);
-static void virtnet_sq_free_unused_buf(struct virtqueue *vq, void *buf);
-
 static void *xdp_to_ptr(struct xdp_frame *ptr)
 {
 	return (void *)((unsigned long)ptr | VIRTIO_XDP_FLAG);
@@ -3664,6 +3660,8 @@ static int virtnet_xdp(struct net_device *dev, struct netdev_bpf *xdp)
 	switch (xdp->command) {
 	case XDP_SETUP_PROG:
 		return virtnet_xdp_set(dev, xdp->prog, xdp->extack);
+	case XDP_SETUP_XSK_POOL:
+		return virtnet_xsk_pool_setup(dev, xdp);
 	default:
 		return -EINVAL;
 	}
@@ -3849,7 +3847,7 @@ static void free_receive_page_frags(struct virtnet_info *vi)
 		}
 }
 
-static void virtnet_sq_free_unused_buf(struct virtqueue *vq, void *buf)
+void virtnet_sq_free_unused_buf(struct virtqueue *vq, void *buf)
 {
 	if (!virtnet_is_xdp_frame(buf))
 		dev_kfree_skb(buf);
@@ -3857,7 +3855,7 @@ static void virtnet_sq_free_unused_buf(struct virtqueue *vq, void *buf)
 		xdp_return_frame(virtnet_ptr_to_xdp(buf));
 }
 
-static void virtnet_rq_free_unused_buf(struct virtqueue *vq, void *buf)
+void virtnet_rq_free_unused_buf(struct virtqueue *vq, void *buf)
 {
 	struct virtnet_info *vi = vq->vdev->priv;
 	int i = vq2rxq(vq);
diff --git a/drivers/net/virtio/virtio_net.h b/drivers/net/virtio/virtio_net.h
index cc742756e19a..9e69b6c5921b 100644
--- a/drivers/net/virtio/virtio_net.h
+++ b/drivers/net/virtio/virtio_net.h
@@ -5,6 +5,8 @@
 
 #include <linux/ethtool.h>
 #include <linux/average.h>
+#include <linux/virtio_net.h>
+#include <net/xdp_sock_drv.h>
 
 #define VIRTIO_XDP_FLAG	BIT(0)
 #define VIRTIO_XMIT_DATA_MASK (VIRTIO_XDP_FLAG)
@@ -94,6 +96,11 @@ struct virtnet_sq {
 	bool do_dma;
 
 	struct virtnet_sq_dma_head dmainfo;
+	struct {
+		struct xsk_buff_pool __rcu *pool;
+
+		dma_addr_t hdr_dma_address;
+	} xsk;
 };
 
 /* Internal representation of a receive virtqueue */
@@ -134,6 +141,13 @@ struct virtnet_rq {
 
 	/* Do dma by self */
 	bool do_dma;
+
+	struct {
+		struct xsk_buff_pool __rcu *pool;
+
+		/* xdp rxq used by xsk */
+		struct xdp_rxq_info xdp_rxq;
+	} xsk;
 };
 
 struct virtnet_info {
@@ -218,6 +232,8 @@ struct virtnet_info {
 	struct failover *failover;
 };
 
+#include "xsk.h"
+
 static inline bool virtnet_is_xdp_frame(void *ptr)
 {
 	return (unsigned long)ptr & VIRTIO_XDP_FLAG;
@@ -308,4 +324,6 @@ void virtnet_rx_pause(struct virtnet_info *vi, struct virtnet_rq *rq);
 void virtnet_rx_resume(struct virtnet_info *vi, struct virtnet_rq *rq);
 void virtnet_tx_pause(struct virtnet_info *vi, struct virtnet_sq *sq);
 void virtnet_tx_resume(struct virtnet_info *vi, struct virtnet_sq *sq);
+void virtnet_sq_free_unused_buf(struct virtqueue *vq, void *buf);
+void virtnet_rq_free_unused_buf(struct virtqueue *vq, void *buf);
 #endif
diff --git a/drivers/net/virtio/xsk.c b/drivers/net/virtio/xsk.c
new file mode 100644
index 000000000000..dddd01962a3f
--- /dev/null
+++ b/drivers/net/virtio/xsk.c
@@ -0,0 +1,186 @@
+// SPDX-License-Identifier: GPL-2.0-or-later
+/*
+ * virtio-net xsk
+ */
+
+#include "virtio_net.h"
+
+static struct virtio_net_hdr_mrg_rxbuf xsk_hdr;
+
+static int virtnet_rq_bind_xsk_pool(struct virtnet_info *vi, struct virtnet_rq *rq,
+				    struct xsk_buff_pool *pool)
+{
+	int err, qindex;
+
+	qindex = rq - vi->rq;
+
+	if (pool) {
+		err = xdp_rxq_info_reg(&rq->xsk.xdp_rxq, vi->dev, qindex, rq->napi.napi_id);
+		if (err < 0)
+			return err;
+
+		err = xdp_rxq_info_reg_mem_model(&rq->xsk.xdp_rxq,
+						 MEM_TYPE_XSK_BUFF_POOL, NULL);
+		if (err < 0) {
+			xdp_rxq_info_unreg(&rq->xsk.xdp_rxq);
+			return err;
+		}
+
+		xsk_pool_set_rxq_info(pool, &rq->xsk.xdp_rxq);
+	} else {
+		xdp_rxq_info_unreg(&rq->xsk.xdp_rxq);
+	}
+
+	virtnet_rx_pause(vi, rq);
+
+	err = virtqueue_reset(rq->vq, virtnet_rq_free_unused_buf);
+	if (err)
+		netdev_err(vi->dev, "reset rx fail: rx queue index: %d err: %d\n", qindex, err);
+
+	if (pool && err)
+		xdp_rxq_info_unreg(&rq->xsk.xdp_rxq);
+	else
+		rcu_assign_pointer(rq->xsk.pool, pool);
+
+	virtnet_rx_resume(vi, rq);
+
+	return err;
+}
+
+static int virtnet_sq_bind_xsk_pool(struct virtnet_info *vi,
+				    struct virtnet_sq *sq,
+				    struct xsk_buff_pool *pool)
+{
+	int err, qindex;
+
+	qindex = sq - vi->sq;
+
+	virtnet_tx_pause(vi, sq);
+
+	err = virtqueue_reset(sq->vq, virtnet_sq_free_unused_buf);
+	if (err)
+		netdev_err(vi->dev, "reset tx fail: tx queue index: %d err: %d\n", qindex, err);
+
+	if (pool) {
+		if (!err)
+			rcu_assign_pointer(sq->xsk.pool, pool);
+	} else {
+		rcu_assign_pointer(sq->xsk.pool, NULL);
+	}
+
+	virtnet_tx_resume(vi, sq);
+
+	return err;
+}
+
+static int virtnet_xsk_pool_enable(struct net_device *dev,
+				   struct xsk_buff_pool *pool,
+				   u16 qid)
+{
+	struct virtnet_info *vi = netdev_priv(dev);
+	struct virtnet_rq *rq;
+	struct virtnet_sq *sq;
+	struct device *dma_dev;
+	dma_addr_t hdr_dma;
+	int err;
+
+	/* In big_packets mode, xdp cannot work, so there is no need to
+	 * initialize xsk of rq.
+	 */
+	if (vi->big_packets && !vi->mergeable_rx_bufs)
+		return -ENOENT;
+
+	if (qid >= vi->curr_queue_pairs)
+		return -EINVAL;
+
+	sq = &vi->sq[qid];
+	rq = &vi->rq[qid];
+
+	/* xsk tx zerocopy depend on the tx napi.
+	 *
+	 * All xsk packets are actually consumed and sent out from the xsk tx
+	 * queue under the tx napi mechanism.
+	 */
+	if (!sq->napi.weight)
+		return -EPERM;
+
+	if (!rq->do_dma || !sq->do_dma)
+		return -EPERM;
+
+	if (virtqueue_dma_dev(rq->vq) != virtqueue_dma_dev(sq->vq))
+		return -EPERM;
+
+	dma_dev = virtqueue_dma_dev(rq->vq);
+	if (!dma_dev)
+		return -EPERM;
+
+	hdr_dma = dma_map_single(dma_dev, &xsk_hdr, vi->hdr_len, DMA_TO_DEVICE);
+	if (dma_mapping_error(dma_dev, hdr_dma))
+		return -ENOMEM;
+
+	err = xsk_pool_dma_map(pool, dma_dev, 0);
+	if (err)
+		goto err_xsk_map;
+
+	err = virtnet_rq_bind_xsk_pool(vi, rq, pool);
+	if (err)
+		goto err_rq;
+
+	err = virtnet_sq_bind_xsk_pool(vi, sq, pool);
+	if (err)
+		goto err_sq;
+
+	sq->xsk.hdr_dma_address = hdr_dma;
+
+	return 0;
+
+err_sq:
+	virtnet_rq_bind_xsk_pool(vi, rq, NULL);
+err_rq:
+	xsk_pool_dma_unmap(pool, 0);
+err_xsk_map:
+	dma_unmap_single(dma_dev, hdr_dma, vi->hdr_len, DMA_TO_DEVICE);
+	return err;
+}
+
+static int virtnet_xsk_pool_disable(struct net_device *dev, u16 qid)
+{
+	struct virtnet_info *vi = netdev_priv(dev);
+	struct xsk_buff_pool *pool;
+	struct device *dma_dev;
+	struct virtnet_rq *rq;
+	struct virtnet_sq *sq;
+	int err1, err2;
+
+	if (qid >= vi->curr_queue_pairs)
+		return -EINVAL;
+
+	sq = &vi->sq[qid];
+	rq = &vi->rq[qid];
+
+	dma_dev = virtqueue_dma_dev(rq->vq);
+
+	/* Sync with the XSK wakeup and NAPI. */
+	synchronize_net();
+
+	dma_unmap_single(dma_dev, sq->xsk.hdr_dma_address, vi->hdr_len, DMA_TO_DEVICE);
+
+	rcu_read_lock();
+	pool = rcu_dereference(sq->xsk.pool);
+	xsk_pool_dma_unmap(pool, 0);
+	rcu_read_unlock();
+
+	err1 = virtnet_sq_bind_xsk_pool(vi, sq, NULL);
+	err2 = virtnet_rq_bind_xsk_pool(vi, rq, NULL);
+
+	return err1 | err2;
+}
+
+int virtnet_xsk_pool_setup(struct net_device *dev, struct netdev_bpf *xdp)
+{
+	if (xdp->xsk.pool)
+		return virtnet_xsk_pool_enable(dev, xdp->xsk.pool,
+					       xdp->xsk.queue_id);
+	else
+		return virtnet_xsk_pool_disable(dev, xdp->xsk.queue_id);
+}
diff --git a/drivers/net/virtio/xsk.h b/drivers/net/virtio/xsk.h
new file mode 100644
index 000000000000..1918285c310c
--- /dev/null
+++ b/drivers/net/virtio/xsk.h
@@ -0,0 +1,7 @@
+/* SPDX-License-Identifier: GPL-2.0-or-later */
+
+#ifndef __XSK_H__
+#define __XSK_H__
+
+int virtnet_xsk_pool_setup(struct net_device *dev, struct netdev_bpf *xdp);
+#endif
-- 
2.32.0.3.g01195cf9f



* [PATCH net-next v1 10/19] virtio_net: xsk: prevent disable tx napi
  2023-10-16 12:00 [PATCH net-next v1 00/19] virtio-net: support AF_XDP zero copy Xuan Zhuo
                   ` (8 preceding siblings ...)
  2023-10-16 12:00 ` [PATCH net-next v1 09/19] virtio_net: xsk: bind/unbind xsk Xuan Zhuo
@ 2023-10-16 12:00 ` Xuan Zhuo
  2023-10-20  6:51   ` Jason Wang
  2023-10-16 12:00 ` [PATCH net-next v1 11/19] virtio_net: xsk: tx: support tx Xuan Zhuo
                   ` (9 subsequent siblings)
  19 siblings, 1 reply; 66+ messages in thread
From: Xuan Zhuo @ 2023-10-16 12:00 UTC (permalink / raw)
  To: netdev
  Cc: David S. Miller, Eric Dumazet, Jakub Kicinski, Paolo Abeni,
	Michael S. Tsirkin, Jason Wang, Xuan Zhuo, Alexei Starovoitov,
	Daniel Borkmann, Jesper Dangaard Brouer, John Fastabend,
	virtualization, bpf

Since the xsk TX queue is consumed by TX NAPI, tx napi must not be
disabled while the sq is bound to an xsk pool.

Signed-off-by: Xuan Zhuo <xuanzhuo@linux.alibaba.com>
---
 drivers/net/virtio/main.c | 10 +++++++++-
 1 file changed, 9 insertions(+), 1 deletion(-)

diff --git a/drivers/net/virtio/main.c b/drivers/net/virtio/main.c
index 38733a782f12..b320770e5f4e 100644
--- a/drivers/net/virtio/main.c
+++ b/drivers/net/virtio/main.c
@@ -3203,7 +3203,7 @@ static int virtnet_set_coalesce(struct net_device *dev,
 				struct netlink_ext_ack *extack)
 {
 	struct virtnet_info *vi = netdev_priv(dev);
-	int ret, queue_number, napi_weight;
+	int ret, queue_number, napi_weight, i;
 	bool update_napi = false;
 
 	/* Can't change NAPI weight if the link is up */
@@ -3232,6 +3232,14 @@ static int virtnet_set_coalesce(struct net_device *dev,
 		return ret;
 
 	if (update_napi) {
+		/* xsk xmit depends on the tx napi. So if xsk is active,
+		 * prevent modifications to tx napi.
+		 */
+		for (i = queue_number; i < vi->max_queue_pairs; i++) {
+			if (rtnl_dereference(vi->sq[i].xsk.pool))
+				return -EBUSY;
+		}
+
 		for (; queue_number < vi->max_queue_pairs; queue_number++)
 			vi->sq[queue_number].napi.weight = napi_weight;
 	}
-- 
2.32.0.3.g01195cf9f



* [PATCH net-next v1 11/19] virtio_net: xsk: tx: support tx
  2023-10-16 12:00 [PATCH net-next v1 00/19] virtio-net: support AF_XDP zero copy Xuan Zhuo
                   ` (9 preceding siblings ...)
  2023-10-16 12:00 ` [PATCH net-next v1 10/19] virtio_net: xsk: prevent disable tx napi Xuan Zhuo
@ 2023-10-16 12:00 ` Xuan Zhuo
  2023-10-20  6:52   ` Jason Wang
  2023-10-16 12:00 ` [PATCH net-next v1 12/19] virtio_net: xsk: tx: support wakeup Xuan Zhuo
                   ` (8 subsequent siblings)
  19 siblings, 1 reply; 66+ messages in thread
From: Xuan Zhuo @ 2023-10-16 12:00 UTC (permalink / raw)
  To: netdev
  Cc: David S. Miller, Eric Dumazet, Jakub Kicinski, Paolo Abeni,
	Michael S. Tsirkin, Jason Wang, Xuan Zhuo, Alexei Starovoitov,
	Daniel Borkmann, Jesper Dangaard Brouer, John Fastabend,
	virtualization, bpf

The driver's tx napi is essential for XSK: it is responsible for
fetching descriptors from the XSK tx queue and sending them out.

To start transmission, we first need to trigger tx napi.

Signed-off-by: Xuan Zhuo <xuanzhuo@linux.alibaba.com>
---
 drivers/net/virtio/main.c       |  18 +++++-
 drivers/net/virtio/virtio_net.h |   3 +-
 drivers/net/virtio/xsk.c        | 108 ++++++++++++++++++++++++++++++++
 drivers/net/virtio/xsk.h        |  13 ++++
 4 files changed, 140 insertions(+), 2 deletions(-)

diff --git a/drivers/net/virtio/main.c b/drivers/net/virtio/main.c
index b320770e5f4e..a08429bef61f 100644
--- a/drivers/net/virtio/main.c
+++ b/drivers/net/virtio/main.c
@@ -2054,7 +2054,9 @@ static int virtnet_poll_tx(struct napi_struct *napi, int budget)
 	struct virtnet_sq *sq = container_of(napi, struct virtnet_sq, napi);
 	struct virtnet_info *vi = sq->vq->vdev->priv;
 	unsigned int index = vq2txq(sq->vq);
+	struct xsk_buff_pool *pool;
 	struct netdev_queue *txq;
+	int busy = 0;
 	int opaque;
 	bool done;
 
@@ -2067,11 +2069,25 @@ static int virtnet_poll_tx(struct napi_struct *napi, int budget)
 	txq = netdev_get_tx_queue(vi->dev, index);
 	__netif_tx_lock(txq, raw_smp_processor_id());
 	virtqueue_disable_cb(sq->vq);
-	free_old_xmit(sq, true);
+
+	rcu_read_lock();
+	pool = rcu_dereference(sq->xsk.pool);
+	if (pool) {
+		busy |= virtnet_xsk_xmit(sq, pool, budget);
+		rcu_read_unlock();
+	} else {
+		rcu_read_unlock();
+		free_old_xmit(sq, true);
+	}
 
 	if (sq->vq->num_free >= 2 + MAX_SKB_FRAGS)
 		netif_tx_wake_queue(txq);
 
+	if (busy) {
+		__netif_tx_unlock(txq);
+		return budget;
+	}
+
 	opaque = virtqueue_enable_cb_prepare(sq->vq);
 
 	done = napi_complete_done(napi, 0);
diff --git a/drivers/net/virtio/virtio_net.h b/drivers/net/virtio/virtio_net.h
index 9e69b6c5921b..3bbb1f5baad5 100644
--- a/drivers/net/virtio/virtio_net.h
+++ b/drivers/net/virtio/virtio_net.h
@@ -9,7 +9,8 @@
 #include <net/xdp_sock_drv.h>
 
 #define VIRTIO_XDP_FLAG	BIT(0)
-#define VIRTIO_XMIT_DATA_MASK (VIRTIO_XDP_FLAG)
+#define VIRTIO_XSK_FLAG	BIT(1)
+#define VIRTIO_XMIT_DATA_MASK (VIRTIO_XDP_FLAG | VIRTIO_XSK_FLAG)
 
 /* RX packet size EWMA. The average packet size is used to determine the packet
  * buffer size when refilling RX rings. As the entire RX ring may be refilled
diff --git a/drivers/net/virtio/xsk.c b/drivers/net/virtio/xsk.c
index dddd01962a3f..0e775a9d270f 100644
--- a/drivers/net/virtio/xsk.c
+++ b/drivers/net/virtio/xsk.c
@@ -7,6 +7,114 @@
 
 static struct virtio_net_hdr_mrg_rxbuf xsk_hdr;
 
+static void sg_fill_dma(struct scatterlist *sg, dma_addr_t addr, u32 len)
+{
+	sg->dma_address = addr;
+	sg->length = len;
+}
+
+static void virtnet_xsk_check_queue(struct virtnet_sq *sq)
+{
+	struct virtnet_info *vi = sq->vq->vdev->priv;
+	struct net_device *dev = vi->dev;
+	int qnum = sq - vi->sq;
+
+	/* If it is a raw buffer queue, it does not check whether the status
+	 * of the queue is stopped when sending. So there is no need to check
+	 * the situation of the raw buffer queue.
+	 */
+	if (virtnet_is_xdp_raw_buffer_queue(vi, qnum))
+		return;
+
+	/* If this sq is not the exclusive queue of the current cpu,
+	 * then it may be called by start_xmit, so check whether it is
+	 * running out of space.
+	 *
+	 * Stop the queue to avoid getting packets that we are
+	 * then unable to transmit. Then wait the tx interrupt.
+	 */
+	if (sq->vq->num_free < 2 + MAX_SKB_FRAGS)
+		netif_stop_subqueue(dev, qnum);
+}
+
+static int virtnet_xsk_xmit_one(struct virtnet_sq *sq,
+				struct xsk_buff_pool *pool,
+				struct xdp_desc *desc)
+{
+	struct virtnet_info *vi;
+	dma_addr_t addr;
+
+	vi = sq->vq->vdev->priv;
+
+	addr = xsk_buff_raw_get_dma(pool, desc->addr);
+	xsk_buff_raw_dma_sync_for_device(pool, addr, desc->len);
+
+	sg_init_table(sq->sg, 2);
+
+	sg_fill_dma(sq->sg, sq->xsk.hdr_dma_address, vi->hdr_len);
+	sg_fill_dma(sq->sg + 1, addr, desc->len);
+
+	return virtqueue_add_outbuf(sq->vq, sq->sg, 2,
+				    virtnet_xsk_to_ptr(desc->len), GFP_ATOMIC);
+}
+
+static int virtnet_xsk_xmit_batch(struct virtnet_sq *sq,
+				  struct xsk_buff_pool *pool,
+				  unsigned int budget,
+				  struct virtnet_sq_stats *stats)
+{
+	struct xdp_desc *descs = pool->tx_descs;
+	u32 nb_pkts, max_pkts, i;
+	bool kick = false;
+	int err;
+
+	max_pkts = min_t(u32, budget, sq->vq->num_free / 2);
+
+	nb_pkts = xsk_tx_peek_release_desc_batch(pool, max_pkts);
+	if (!nb_pkts)
+		return 0;
+
+	for (i = 0; i < nb_pkts; i++) {
+		err = virtnet_xsk_xmit_one(sq, pool, &descs[i]);
+		if (unlikely(err))
+			break;
+
+		kick = true;
+	}
+
+	if (kick && virtqueue_kick_prepare(sq->vq) && virtqueue_notify(sq->vq))
+		++stats->kicks;
+
+	stats->xdp_tx += i;
+
+	return i;
+}
+
+bool virtnet_xsk_xmit(struct virtnet_sq *sq, struct xsk_buff_pool *pool,
+		      int budget)
+{
+	struct virtnet_sq_stats stats = {};
+	int sent;
+
+	virtnet_free_old_xmit(sq, true, &stats);
+
+	sent = virtnet_xsk_xmit_batch(sq, pool, budget, &stats);
+
+	virtnet_xsk_check_queue(sq);
+
+	u64_stats_update_begin(&sq->stats.syncp);
+	sq->stats.packets += stats.packets;
+	sq->stats.bytes += stats.bytes;
+	sq->stats.kicks += stats.kicks;
+	sq->stats.xdp_tx += stats.xdp_tx;
+	u64_stats_update_end(&sq->stats.syncp);
+
+	if (xsk_uses_need_wakeup(pool))
+		xsk_set_tx_need_wakeup(pool);
+
+	return sent == budget;
+}
+
 static int virtnet_rq_bind_xsk_pool(struct virtnet_info *vi, struct virtnet_rq *rq,
 				    struct xsk_buff_pool *pool)
 {
diff --git a/drivers/net/virtio/xsk.h b/drivers/net/virtio/xsk.h
index 1918285c310c..73ca8cd5308b 100644
--- a/drivers/net/virtio/xsk.h
+++ b/drivers/net/virtio/xsk.h
@@ -3,5 +3,18 @@
 #ifndef __XSK_H__
 #define __XSK_H__
 
+#define VIRTIO_XSK_FLAG_OFFSET	4
+
+static inline void *virtnet_xsk_to_ptr(u32 len)
+{
+	unsigned long p;
+
+	p = len << VIRTIO_XSK_FLAG_OFFSET;
+
+	return (void *)(p | VIRTIO_XSK_FLAG);
+}
+
 int virtnet_xsk_pool_setup(struct net_device *dev, struct netdev_bpf *xdp);
+bool virtnet_xsk_xmit(struct virtnet_sq *sq, struct xsk_buff_pool *pool,
+		      int budget);
 #endif
-- 
2.32.0.3.g01195cf9f



* [PATCH net-next v1 12/19] virtio_net: xsk: tx: support wakeup
  2023-10-16 12:00 [PATCH net-next v1 00/19] virtio-net: support AF_XDP zero copy Xuan Zhuo
                   ` (10 preceding siblings ...)
  2023-10-16 12:00 ` [PATCH net-next v1 11/19] virtio_net: xsk: tx: support tx Xuan Zhuo
@ 2023-10-16 12:00 ` Xuan Zhuo
  2023-10-20  6:52   ` Jason Wang
  2023-10-16 12:00 ` [PATCH net-next v1 13/19] virtio_net: xsk: tx: virtnet_free_old_xmit() distinguishes xsk buffer Xuan Zhuo
                   ` (7 subsequent siblings)
  19 siblings, 1 reply; 66+ messages in thread
From: Xuan Zhuo @ 2023-10-16 12:00 UTC (permalink / raw)
  To: netdev
  Cc: David S. Miller, Eric Dumazet, Jakub Kicinski, Paolo Abeni,
	Michael S. Tsirkin, Jason Wang, Xuan Zhuo, Alexei Starovoitov,
	Daniel Borkmann, Jesper Dangaard Brouer, John Fastabend,
	virtualization, bpf

xsk wakeup is used by the xsk framework or the user to trigger the xsk
xmit logic.

Virtio-net cannot actively generate an interrupt from the driver, so
instead we try to trigger tx NAPI on the CPU where tx NAPI last ran.

This is also better for cache locality: since the tx interrupt is
generally pinned to one CPU, starting TX NAPI on that same CPU keeps
the queue state warm in its cache.

Signed-off-by: Xuan Zhuo <xuanzhuo@linux.alibaba.com>
---
 drivers/net/virtio/main.c       |  3 ++
 drivers/net/virtio/virtio_net.h |  8 +++++
 drivers/net/virtio/xsk.c        | 57 +++++++++++++++++++++++++++++++++
 drivers/net/virtio/xsk.h        |  1 +
 4 files changed, 69 insertions(+)

diff --git a/drivers/net/virtio/main.c b/drivers/net/virtio/main.c
index a08429bef61f..1a222221352e 100644
--- a/drivers/net/virtio/main.c
+++ b/drivers/net/virtio/main.c
@@ -2066,6 +2066,8 @@ static int virtnet_poll_tx(struct napi_struct *napi, int budget)
 		return 0;
 	}
 
+	sq->xsk.last_cpu = smp_processor_id();
+
 	txq = netdev_get_tx_queue(vi->dev, index);
 	__netif_tx_lock(txq, raw_smp_processor_id());
 	virtqueue_disable_cb(sq->vq);
@@ -3770,6 +3772,7 @@ static const struct net_device_ops virtnet_netdev = {
 	.ndo_vlan_rx_kill_vid = virtnet_vlan_rx_kill_vid,
 	.ndo_bpf		= virtnet_xdp,
 	.ndo_xdp_xmit		= virtnet_xdp_xmit,
+	.ndo_xsk_wakeup         = virtnet_xsk_wakeup,
 	.ndo_features_check	= passthru_features_check,
 	.ndo_get_phys_port_name	= virtnet_get_phys_port_name,
 	.ndo_set_features	= virtnet_set_features,
diff --git a/drivers/net/virtio/virtio_net.h b/drivers/net/virtio/virtio_net.h
index 3bbb1f5baad5..7c72a8bb1813 100644
--- a/drivers/net/virtio/virtio_net.h
+++ b/drivers/net/virtio/virtio_net.h
@@ -101,6 +101,14 @@ struct virtnet_sq {
 		struct xsk_buff_pool __rcu *pool;
 
 		dma_addr_t hdr_dma_address;
+
+		u32 last_cpu;
+		struct __call_single_data csd;
+
+	/* Lock to prevent concurrent calls to
+	 * smp_call_function_single_async().
+	 */
+		spinlock_t ipi_lock;
 	} xsk;
 };
 
diff --git a/drivers/net/virtio/xsk.c b/drivers/net/virtio/xsk.c
index 0e775a9d270f..973e783260c3 100644
--- a/drivers/net/virtio/xsk.c
+++ b/drivers/net/virtio/xsk.c
@@ -115,6 +115,60 @@ bool virtnet_xsk_xmit(struct virtnet_sq *sq, struct xsk_buff_pool *pool,
 	return sent == budget;
 }
 
+static void virtnet_remote_napi_schedule(void *info)
+{
+	struct virtnet_sq *sq = info;
+
+	virtnet_vq_napi_schedule(&sq->napi, sq->vq);
+}
+
+static void virtnet_remote_raise_napi(struct virtnet_sq *sq)
+{
+	u32 last_cpu, cur_cpu;
+
+	last_cpu = sq->xsk.last_cpu;
+	cur_cpu = get_cpu();
+
+	/* On a remote cpu, the softirq runs automatically when the IPI
+	 * exits. On the local cpu, smp_call_*() does not raise an IPI,
+	 * so the softirq is not triggered automatically; call
+	 * local_bh_enable() afterwards to trigger softirq processing.
+	 */
+	if (last_cpu == cur_cpu) {
+		local_bh_disable();
+		virtnet_vq_napi_schedule(&sq->napi, sq->vq);
+		local_bh_enable();
+	} else {
+		if (spin_trylock(&sq->xsk.ipi_lock)) {
+			smp_call_function_single_async(last_cpu, &sq->xsk.csd);
+			spin_unlock(&sq->xsk.ipi_lock);
+		}
+	}
+
+	put_cpu();
+}
+
+int virtnet_xsk_wakeup(struct net_device *dev, u32 qid, u32 flag)
+{
+	struct virtnet_info *vi = netdev_priv(dev);
+	struct virtnet_sq *sq;
+
+	if (!netif_running(dev))
+		return -ENETDOWN;
+
+	if (qid >= vi->curr_queue_pairs)
+		return -EINVAL;
+
+	sq = &vi->sq[qid];
+
+	if (napi_if_scheduled_mark_missed(&sq->napi))
+		return 0;
+
+	virtnet_remote_raise_napi(sq);
+
+	return 0;
+}
+
 static int virtnet_rq_bind_xsk_pool(struct virtnet_info *vi, struct virtnet_rq *rq,
 				    struct xsk_buff_pool *pool)
 {
@@ -240,6 +294,9 @@ static int virtnet_xsk_pool_enable(struct net_device *dev,
 
 	sq->xsk.hdr_dma_address = hdr_dma;
 
+	INIT_CSD(&sq->xsk.csd, virtnet_remote_napi_schedule, sq);
+	spin_lock_init(&sq->xsk.ipi_lock);
+
 	return 0;
 
 err_sq:
diff --git a/drivers/net/virtio/xsk.h b/drivers/net/virtio/xsk.h
index 73ca8cd5308b..1bd19dcda649 100644
--- a/drivers/net/virtio/xsk.h
+++ b/drivers/net/virtio/xsk.h
@@ -17,4 +17,5 @@ static inline void *virtnet_xsk_to_ptr(u32 len)
 int virtnet_xsk_pool_setup(struct net_device *dev, struct netdev_bpf *xdp);
 bool virtnet_xsk_xmit(struct virtnet_sq *sq, struct xsk_buff_pool *pool,
 		      int budget);
+int virtnet_xsk_wakeup(struct net_device *dev, u32 qid, u32 flag);
 #endif
-- 
2.32.0.3.g01195cf9f



* [PATCH net-next v1 13/19] virtio_net: xsk: tx: virtnet_free_old_xmit() distinguishes xsk buffer
  2023-10-16 12:00 [PATCH net-next v1 00/19] virtio-net: support AF_XDP zero copy Xuan Zhuo
                   ` (11 preceding siblings ...)
  2023-10-16 12:00 ` [PATCH net-next v1 12/19] virtio_net: xsk: tx: support wakeup Xuan Zhuo
@ 2023-10-16 12:00 ` Xuan Zhuo
  2023-10-16 23:44   ` Jakub Kicinski
  2023-10-16 12:00 ` [PATCH net-next v1 14/19] virtio_net: xsk: tx: virtnet_sq_free_unused_buf() check " Xuan Zhuo
                   ` (6 subsequent siblings)
  19 siblings, 1 reply; 66+ messages in thread
From: Xuan Zhuo @ 2023-10-16 12:00 UTC (permalink / raw)
  To: netdev
  Cc: David S. Miller, Eric Dumazet, Jakub Kicinski, Paolo Abeni,
	Michael S. Tsirkin, Jason Wang, Xuan Zhuo, Alexei Starovoitov,
	Daniel Borkmann, Jesper Dangaard Brouer, John Fastabend,
	virtualization, bpf

virtnet_free_old_xmit() distinguishes the three pointer types (skb, xdp
frame, xsk buffer) by the low two bits of the pointer.

Signed-off-by: Xuan Zhuo <xuanzhuo@linux.alibaba.com>
---
 drivers/net/virtio/virtio_net.h | 16 ++++++++++++++--
 drivers/net/virtio/xsk.h        |  5 +++++
 2 files changed, 19 insertions(+), 2 deletions(-)

diff --git a/drivers/net/virtio/virtio_net.h b/drivers/net/virtio/virtio_net.h
index 7c72a8bb1813..d4e620a084f4 100644
--- a/drivers/net/virtio/virtio_net.h
+++ b/drivers/net/virtio/virtio_net.h
@@ -243,6 +243,11 @@ struct virtnet_info {
 
 #include "xsk.h"
 
+static inline bool virtnet_is_skb_ptr(void *ptr)
+{
+	return !((unsigned long)ptr & VIRTIO_XMIT_DATA_MASK);
+}
+
 static inline bool virtnet_is_xdp_frame(void *ptr)
 {
 	return (unsigned long)ptr & VIRTIO_XDP_FLAG;
@@ -279,11 +284,12 @@ static inline void *virtnet_sq_unmap(struct virtnet_sq *sq, void *data)
 static inline void virtnet_free_old_xmit(struct virtnet_sq *sq, bool in_napi,
 					 struct virtnet_sq_stats *stats)
 {
+	unsigned int xsknum = 0;
 	unsigned int len;
 	void *ptr;
 
 	while ((ptr = virtqueue_get_buf(sq->vq, &len)) != NULL) {
-		if (!virtnet_is_xdp_frame(ptr)) {
+		if (virtnet_is_skb_ptr(ptr)) {
 			struct sk_buff *skb;
 
 			if (sq->do_dma)
@@ -295,7 +301,7 @@ static inline void virtnet_free_old_xmit(struct virtnet_sq *sq, bool in_napi,
 
 			stats->bytes += skb->len;
 			napi_consume_skb(skb, in_napi);
-		} else {
+		} else if (virtnet_is_xdp_frame(ptr)) {
 			struct xdp_frame *frame;
 
 			if (sq->do_dma)
@@ -305,9 +311,15 @@ static inline void virtnet_free_old_xmit(struct virtnet_sq *sq, bool in_napi,
 
 			stats->bytes += xdp_get_frame_len(frame);
 			xdp_return_frame(frame);
+		} else {
+			stats->bytes += virtnet_ptr_to_xsk(ptr);
+			++xsknum;
 		}
 		stats->packets++;
 	}
+
+	if (xsknum)
+		xsk_tx_completed(sq->xsk.pool, xsknum);
 }
 
 static inline void virtnet_vq_napi_schedule(struct napi_struct *napi,
diff --git a/drivers/net/virtio/xsk.h b/drivers/net/virtio/xsk.h
index 1bd19dcda649..7ebc9bda7aee 100644
--- a/drivers/net/virtio/xsk.h
+++ b/drivers/net/virtio/xsk.h
@@ -14,6 +14,11 @@ static inline void *virtnet_xsk_to_ptr(u32 len)
 	return (void *)(p | VIRTIO_XSK_FLAG);
 }
 
+static inline u32 virtnet_ptr_to_xsk(void *ptr)
+{
+	return ((unsigned long)ptr) >> VIRTIO_XSK_FLAG_OFFSET;
+}
+
 int virtnet_xsk_pool_setup(struct net_device *dev, struct netdev_bpf *xdp);
 bool virtnet_xsk_xmit(struct virtnet_sq *sq, struct xsk_buff_pool *pool,
 		      int budget);
-- 
2.32.0.3.g01195cf9f


^ permalink raw reply related	[flat|nested] 66+ messages in thread

* [PATCH net-next v1 14/19] virtio_net: xsk: tx: virtnet_sq_free_unused_buf() check xsk buffer
  2023-10-16 12:00 [PATCH net-next v1 00/19] virtio-net: support AF_XDP zero copy Xuan Zhuo
                   ` (12 preceding siblings ...)
  2023-10-16 12:00 ` [PATCH net-next v1 13/19] virtio_net: xsk: tx: virtnet_free_old_xmit() distinguishes xsk buffer Xuan Zhuo
@ 2023-10-16 12:00 ` Xuan Zhuo
  2023-10-20  6:53   ` Jason Wang
  2023-10-16 12:00 ` [PATCH net-next v1 15/19] virtio_net: xsk: rx: introduce add_recvbuf_xsk() Xuan Zhuo
                   ` (5 subsequent siblings)
  19 siblings, 1 reply; 66+ messages in thread
From: Xuan Zhuo @ 2023-10-16 12:00 UTC (permalink / raw)
  To: netdev
  Cc: David S. Miller, Eric Dumazet, Jakub Kicinski, Paolo Abeni,
	Michael S. Tsirkin, Jason Wang, Xuan Zhuo, Alexei Starovoitov,
	Daniel Borkmann, Jesper Dangaard Brouer, John Fastabend,
	virtualization, bpf

virtnet_sq_free_unused_buf() now checks for xsk buffers, which need no
handling here.

Signed-off-by: Xuan Zhuo <xuanzhuo@linux.alibaba.com>
---
 drivers/net/virtio/main.c | 6 ++++--
 1 file changed, 4 insertions(+), 2 deletions(-)

diff --git a/drivers/net/virtio/main.c b/drivers/net/virtio/main.c
index 1a222221352e..58bb38f9b453 100644
--- a/drivers/net/virtio/main.c
+++ b/drivers/net/virtio/main.c
@@ -3876,10 +3876,12 @@ static void free_receive_page_frags(struct virtnet_info *vi)
 
 void virtnet_sq_free_unused_buf(struct virtqueue *vq, void *buf)
 {
-	if (!virtnet_is_xdp_frame(buf))
+	if (virtnet_is_skb_ptr(buf))
 		dev_kfree_skb(buf);
-	else
+	else if (virtnet_is_xdp_frame(buf))
 		xdp_return_frame(virtnet_ptr_to_xdp(buf));
+
+	/* xsk buffers need no handling here. */
 }
 
 void virtnet_rq_free_unused_buf(struct virtqueue *vq, void *buf)
-- 
2.32.0.3.g01195cf9f


^ permalink raw reply related	[flat|nested] 66+ messages in thread

* [PATCH net-next v1 15/19] virtio_net: xsk: rx: introduce add_recvbuf_xsk()
  2023-10-16 12:00 [PATCH net-next v1 00/19] virtio-net: support AF_XDP zero copy Xuan Zhuo
                   ` (13 preceding siblings ...)
  2023-10-16 12:00 ` [PATCH net-next v1 14/19] virtio_net: xsk: tx: virtnet_sq_free_unused_buf() check " Xuan Zhuo
@ 2023-10-16 12:00 ` Xuan Zhuo
  2023-10-20  6:56   ` Jason Wang
  2023-10-16 12:00 ` [PATCH net-next v1 16/19] virtio_net: xsk: rx: introduce receive_xsk() to recv xsk buffer Xuan Zhuo
                   ` (4 subsequent siblings)
  19 siblings, 1 reply; 66+ messages in thread
From: Xuan Zhuo @ 2023-10-16 12:00 UTC (permalink / raw)
  To: netdev
  Cc: David S. Miller, Eric Dumazet, Jakub Kicinski, Paolo Abeni,
	Michael S. Tsirkin, Jason Wang, Xuan Zhuo, Alexei Starovoitov,
	Daniel Borkmann, Jesper Dangaard Brouer, John Fastabend,
	virtualization, bpf

Implement the logic for filling the rx virtqueue with xsk buffers.

Signed-off-by: Xuan Zhuo <xuanzhuo@linux.alibaba.com>
---
 drivers/net/virtio/main.c       | 13 +++++++
 drivers/net/virtio/virtio_net.h |  5 +++
 drivers/net/virtio/xsk.c        | 66 ++++++++++++++++++++++++++++++++-
 drivers/net/virtio/xsk.h        |  2 +
 4 files changed, 85 insertions(+), 1 deletion(-)

diff --git a/drivers/net/virtio/main.c b/drivers/net/virtio/main.c
index 58bb38f9b453..0e740447b142 100644
--- a/drivers/net/virtio/main.c
+++ b/drivers/net/virtio/main.c
@@ -1787,9 +1787,20 @@ static int add_recvbuf_mergeable(struct virtnet_info *vi,
 static bool try_fill_recv(struct virtnet_info *vi, struct virtnet_rq *rq,
 			  gfp_t gfp)
 {
+	struct xsk_buff_pool *pool;
 	int err;
 	bool oom;
 
+	rcu_read_lock();
+	pool = rcu_dereference(rq->xsk.pool);
+	if (pool) {
+		err = virtnet_add_recvbuf_xsk(vi, rq, pool, gfp);
+		oom = err == -ENOMEM;
+		rcu_read_unlock();
+		goto kick;
+	}
+	rcu_read_unlock();
+
 	do {
 		if (vi->mergeable_rx_bufs)
 			err = add_recvbuf_mergeable(vi, rq, gfp);
@@ -1802,6 +1813,8 @@ static bool try_fill_recv(struct virtnet_info *vi, struct virtnet_rq *rq,
 		if (err)
 			break;
 	} while (rq->vq->num_free);
+
+kick:
 	if (virtqueue_kick_prepare(rq->vq) && virtqueue_notify(rq->vq)) {
 		unsigned long flags;
 
diff --git a/drivers/net/virtio/virtio_net.h b/drivers/net/virtio/virtio_net.h
index d4e620a084f4..6e71622fca45 100644
--- a/drivers/net/virtio/virtio_net.h
+++ b/drivers/net/virtio/virtio_net.h
@@ -156,6 +156,11 @@ struct virtnet_rq {
 
 		/* xdp rxq used by xsk */
 		struct xdp_rxq_info xdp_rxq;
+
+		struct xdp_buff **xsk_buffs;
+		u32 nxt_idx;
+		u32 num;
+		u32 size;
 	} xsk;
 };
 
diff --git a/drivers/net/virtio/xsk.c b/drivers/net/virtio/xsk.c
index 973e783260c3..841fb078882a 100644
--- a/drivers/net/virtio/xsk.c
+++ b/drivers/net/virtio/xsk.c
@@ -37,6 +37,58 @@ static void virtnet_xsk_check_queue(struct virtnet_sq *sq)
 		netif_stop_subqueue(dev, qnum);
 }
 
+static int virtnet_add_recvbuf_batch(struct virtnet_info *vi, struct virtnet_rq *rq,
+				     struct xsk_buff_pool *pool, gfp_t gfp)
+{
+	struct xdp_buff **xsk_buffs;
+	dma_addr_t addr;
+	u32 len, i;
+	int err = 0;
+
+	xsk_buffs = rq->xsk.xsk_buffs;
+
+	if (rq->xsk.nxt_idx >= rq->xsk.num) {
+		rq->xsk.num = xsk_buff_alloc_batch(pool, xsk_buffs, rq->xsk.size);
+		if (!rq->xsk.num)
+			return -ENOMEM;
+		rq->xsk.nxt_idx = 0;
+	}
+
+	while (rq->xsk.nxt_idx < rq->xsk.num) {
+		i = rq->xsk.nxt_idx;
+
+		/* use part of XDP_PACKET_HEADROOM as the virtnet hdr space */
+		addr = xsk_buff_xdp_get_dma(xsk_buffs[i]) - vi->hdr_len;
+		len = xsk_pool_get_rx_frame_size(pool) + vi->hdr_len;
+
+		sg_init_table(rq->sg, 1);
+		sg_fill_dma(rq->sg, addr, len);
+
+		err = virtqueue_add_inbuf(rq->vq, rq->sg, 1, xsk_buffs[i], gfp);
+		if (err)
+			return err;
+
+		rq->xsk.nxt_idx++;
+	}
+
+	return 0;
+}
+
+int virtnet_add_recvbuf_xsk(struct virtnet_info *vi, struct virtnet_rq *rq,
+			    struct xsk_buff_pool *pool, gfp_t gfp)
+{
+	int err;
+
+	do {
+		err = virtnet_add_recvbuf_batch(vi, rq, pool, gfp);
+		if (err)
+			return err;
+
+	} while (rq->vq->num_free);
+
+	return 0;
+}
+
 static int virtnet_xsk_xmit_one(struct virtnet_sq *sq,
 				struct xsk_buff_pool *pool,
 				struct xdp_desc *desc)
@@ -244,7 +296,7 @@ static int virtnet_xsk_pool_enable(struct net_device *dev,
 	struct virtnet_sq *sq;
 	struct device *dma_dev;
 	dma_addr_t hdr_dma;
-	int err;
+	int err, size;
 
 	/* In big_packets mode, xdp cannot work, so there is no need to
 	 * initialize xsk of rq.
@@ -276,6 +328,16 @@ static int virtnet_xsk_pool_enable(struct net_device *dev,
 	if (!dma_dev)
 		return -EPERM;
 
+	size = virtqueue_get_vring_size(rq->vq);
+
+	rq->xsk.xsk_buffs = kcalloc(size, sizeof(*rq->xsk.xsk_buffs), GFP_KERNEL);
+	if (!rq->xsk.xsk_buffs)
+		return -ENOMEM;
+
+	rq->xsk.size = size;
+	rq->xsk.nxt_idx = 0;
+	rq->xsk.num = 0;
+
 	hdr_dma = dma_map_single(dma_dev, &xsk_hdr, vi->hdr_len, DMA_TO_DEVICE);
 	if (dma_mapping_error(dma_dev, hdr_dma))
 		return -ENOMEM;
@@ -338,6 +400,8 @@ static int virtnet_xsk_pool_disable(struct net_device *dev, u16 qid)
 	err1 = virtnet_sq_bind_xsk_pool(vi, sq, NULL);
 	err2 = virtnet_rq_bind_xsk_pool(vi, rq, NULL);
 
+	kfree(rq->xsk.xsk_buffs);
+
 	return err1 | err2;
 }
 
diff --git a/drivers/net/virtio/xsk.h b/drivers/net/virtio/xsk.h
index 7ebc9bda7aee..bef41a3f954e 100644
--- a/drivers/net/virtio/xsk.h
+++ b/drivers/net/virtio/xsk.h
@@ -23,4 +23,6 @@ int virtnet_xsk_pool_setup(struct net_device *dev, struct netdev_bpf *xdp);
 bool virtnet_xsk_xmit(struct virtnet_sq *sq, struct xsk_buff_pool *pool,
 		      int budget);
 int virtnet_xsk_wakeup(struct net_device *dev, u32 qid, u32 flag);
+int virtnet_add_recvbuf_xsk(struct virtnet_info *vi, struct virtnet_rq *rq,
+			    struct xsk_buff_pool *pool, gfp_t gfp);
 #endif
-- 
2.32.0.3.g01195cf9f


^ permalink raw reply related	[flat|nested] 66+ messages in thread

* [PATCH net-next v1 16/19] virtio_net: xsk: rx: introduce receive_xsk() to recv xsk buffer
  2023-10-16 12:00 [PATCH net-next v1 00/19] virtio-net: support AF_XDP zero copy Xuan Zhuo
                   ` (14 preceding siblings ...)
  2023-10-16 12:00 ` [PATCH net-next v1 15/19] virtio_net: xsk: rx: introduce add_recvbuf_xsk() Xuan Zhuo
@ 2023-10-16 12:00 ` Xuan Zhuo
  2023-10-20  6:57   ` Jason Wang
  2023-10-16 12:00 ` [PATCH net-next v1 17/19] virtio_net: xsk: rx: virtnet_rq_free_unused_buf() check " Xuan Zhuo
                   ` (3 subsequent siblings)
  19 siblings, 1 reply; 66+ messages in thread
From: Xuan Zhuo @ 2023-10-16 12:00 UTC (permalink / raw)
  To: netdev
  Cc: David S. Miller, Eric Dumazet, Jakub Kicinski, Paolo Abeni,
	Michael S. Tsirkin, Jason Wang, Xuan Zhuo, Alexei Starovoitov,
	Daniel Borkmann, Jesper Dangaard Brouer, John Fastabend,
	virtualization, bpf

Implement the xsk rx logic. If XDP determines that a packet is not
destined for XSK, we must perform one copy to build an skb. If it is for
XSK, the receive path is fully zero-copy.

Signed-off-by: Xuan Zhuo <xuanzhuo@linux.alibaba.com>
---
 drivers/net/virtio/main.c       |  14 ++--
 drivers/net/virtio/virtio_net.h |   4 ++
 drivers/net/virtio/xsk.c        | 120 ++++++++++++++++++++++++++++++++
 drivers/net/virtio/xsk.h        |   4 ++
 4 files changed, 137 insertions(+), 5 deletions(-)

diff --git a/drivers/net/virtio/main.c b/drivers/net/virtio/main.c
index 0e740447b142..003dd67ab707 100644
--- a/drivers/net/virtio/main.c
+++ b/drivers/net/virtio/main.c
@@ -822,10 +822,10 @@ static void put_xdp_frags(struct xdp_buff *xdp)
 	}
 }
 
-static int virtnet_xdp_handler(struct bpf_prog *xdp_prog, struct xdp_buff *xdp,
-			       struct net_device *dev,
-			       unsigned int *xdp_xmit,
-			       struct virtnet_rq_stats *stats)
+int virtnet_xdp_handler(struct bpf_prog *xdp_prog, struct xdp_buff *xdp,
+			struct net_device *dev,
+			unsigned int *xdp_xmit,
+			struct virtnet_rq_stats *stats)
 {
 	struct xdp_frame *xdpf;
 	int err;
@@ -1589,13 +1589,17 @@ static void receive_buf(struct virtnet_info *vi, struct virtnet_rq *rq,
 		return;
 	}
 
-	if (vi->mergeable_rx_bufs)
+	rcu_read_lock();
+	if (rcu_dereference(rq->xsk.pool))
+		skb = virtnet_receive_xsk(dev, vi, rq, buf, len, xdp_xmit, stats);
+	else if (vi->mergeable_rx_bufs)
 		skb = receive_mergeable(dev, vi, rq, buf, ctx, len, xdp_xmit,
 					stats);
 	else if (vi->big_packets)
 		skb = receive_big(dev, vi, rq, buf, len, stats);
 	else
 		skb = receive_small(dev, vi, rq, buf, ctx, len, xdp_xmit, stats);
+	rcu_read_unlock();
 
 	if (unlikely(!skb))
 		return;
diff --git a/drivers/net/virtio/virtio_net.h b/drivers/net/virtio/virtio_net.h
index 6e71622fca45..fd7f34703c9b 100644
--- a/drivers/net/virtio/virtio_net.h
+++ b/drivers/net/virtio/virtio_net.h
@@ -346,6 +346,10 @@ static inline bool virtnet_is_xdp_raw_buffer_queue(struct virtnet_info *vi, int
 		return false;
 }
 
+int virtnet_xdp_handler(struct bpf_prog *xdp_prog, struct xdp_buff *xdp,
+			struct net_device *dev,
+			unsigned int *xdp_xmit,
+			struct virtnet_rq_stats *stats);
 void virtnet_rx_pause(struct virtnet_info *vi, struct virtnet_rq *rq);
 void virtnet_rx_resume(struct virtnet_info *vi, struct virtnet_rq *rq);
 void virtnet_tx_pause(struct virtnet_info *vi, struct virtnet_sq *sq);
diff --git a/drivers/net/virtio/xsk.c b/drivers/net/virtio/xsk.c
index 841fb078882a..f1c64414fac9 100644
--- a/drivers/net/virtio/xsk.c
+++ b/drivers/net/virtio/xsk.c
@@ -13,6 +13,18 @@ static void sg_fill_dma(struct scatterlist *sg, dma_addr_t addr, u32 len)
 	sg->length = len;
 }
 
+static unsigned int virtnet_receive_buf_num(struct virtnet_info *vi, char *buf)
+{
+	struct virtio_net_hdr_mrg_rxbuf *hdr;
+
+	if (vi->mergeable_rx_bufs) {
+		hdr = (struct virtio_net_hdr_mrg_rxbuf *)buf;
+		return virtio16_to_cpu(vi->vdev, hdr->num_buffers);
+	}
+
+	return 1;
+}
+
 static void virtnet_xsk_check_queue(struct virtnet_sq *sq)
 {
 	struct virtnet_info *vi = sq->vq->vdev->priv;
@@ -37,6 +49,114 @@ static void virtnet_xsk_check_queue(struct virtnet_sq *sq)
 		netif_stop_subqueue(dev, qnum);
 }
 
+static void merge_drop_follow_xdp(struct net_device *dev,
+				  struct virtnet_rq *rq,
+				  u32 num_buf,
+				  struct virtnet_rq_stats *stats)
+{
+	struct xdp_buff *xdp;
+	u32 len;
+
+	while (num_buf-- > 1) {
+		xdp = virtqueue_get_buf(rq->vq, &len);
+		if (unlikely(!xdp)) {
+			pr_debug("%s: rx error: %d buffers missing\n",
+				 dev->name, num_buf);
+			dev->stats.rx_length_errors++;
+			break;
+		}
+		stats->bytes += len;
+		xsk_buff_free(xdp);
+	}
+}
+
+static struct sk_buff *construct_skb(struct virtnet_rq *rq,
+				     struct xdp_buff *xdp)
+{
+	unsigned int metasize = xdp->data - xdp->data_meta;
+	struct sk_buff *skb;
+	unsigned int size;
+
+	size = xdp->data_end - xdp->data_hard_start;
+	skb = napi_alloc_skb(&rq->napi, size);
+	if (unlikely(!skb))
+		return NULL;
+
+	skb_reserve(skb, xdp->data_meta - xdp->data_hard_start);
+
+	size = xdp->data_end - xdp->data_meta;
+	memcpy(__skb_put(skb, size), xdp->data_meta, size);
+
+	if (metasize) {
+		__skb_pull(skb, metasize);
+		skb_metadata_set(skb, metasize);
+	}
+
+	return skb;
+}
+
+struct sk_buff *virtnet_receive_xsk(struct net_device *dev, struct virtnet_info *vi,
+				    struct virtnet_rq *rq, void *buf,
+				    unsigned int len, unsigned int *xdp_xmit,
+				    struct virtnet_rq_stats *stats)
+{
+	struct virtio_net_hdr_mrg_rxbuf *hdr;
+	struct sk_buff *skb = NULL;
+	u32 ret, headroom, num_buf;
+	struct bpf_prog *prog;
+	struct xdp_buff *xdp;
+
+	len -= vi->hdr_len;
+
+	xdp = (struct xdp_buff *)buf;
+
+	xsk_buff_set_size(xdp, len);
+
+	hdr = xdp->data - vi->hdr_len;
+
+	num_buf = virtnet_receive_buf_num(vi, (char *)hdr);
+	if (num_buf > 1)
+		goto drop;
+
+	headroom = xdp->data - xdp->data_hard_start;
+
+	xdp_prepare_buff(xdp, xdp->data_hard_start, headroom, len, true);
+	xsk_buff_dma_sync_for_cpu(xdp, rq->xsk.pool);
+
+	ret = XDP_PASS;
+	rcu_read_lock();
+	prog = rcu_dereference(rq->xdp_prog);
+	if (prog)
+		ret = virtnet_xdp_handler(prog, xdp, dev, xdp_xmit, stats);
+	rcu_read_unlock();
+
+	switch (ret) {
+	case XDP_PASS:
+		skb = construct_skb(rq, xdp);
+		xsk_buff_free(xdp);
+		break;
+
+	case XDP_TX:
+	case XDP_REDIRECT:
+		goto consumed;
+
+	default:
+		goto drop;
+	}
+
+	return skb;
+
+drop:
+	stats->drops++;
+
+	xsk_buff_free(xdp);
+
+	if (num_buf > 1)
+		merge_drop_follow_xdp(dev, rq, num_buf, stats);
+consumed:
+	return NULL;
+}
+
 static int virtnet_add_recvbuf_batch(struct virtnet_info *vi, struct virtnet_rq *rq,
 				     struct xsk_buff_pool *pool, gfp_t gfp)
 {
diff --git a/drivers/net/virtio/xsk.h b/drivers/net/virtio/xsk.h
index bef41a3f954e..dbd2839a5f61 100644
--- a/drivers/net/virtio/xsk.h
+++ b/drivers/net/virtio/xsk.h
@@ -25,4 +25,8 @@ bool virtnet_xsk_xmit(struct virtnet_sq *sq, struct xsk_buff_pool *pool,
 int virtnet_xsk_wakeup(struct net_device *dev, u32 qid, u32 flag);
 int virtnet_add_recvbuf_xsk(struct virtnet_info *vi, struct virtnet_rq *rq,
 			    struct xsk_buff_pool *pool, gfp_t gfp);
+struct sk_buff *virtnet_receive_xsk(struct net_device *dev, struct virtnet_info *vi,
+				    struct virtnet_rq *rq, void *buf,
+				    unsigned int len, unsigned int *xdp_xmit,
+				    struct virtnet_rq_stats *stats);
 #endif
-- 
2.32.0.3.g01195cf9f


^ permalink raw reply related	[flat|nested] 66+ messages in thread

* [PATCH net-next v1 17/19] virtio_net: xsk: rx: virtnet_rq_free_unused_buf() check xsk buffer
  2023-10-16 12:00 [PATCH net-next v1 00/19] virtio-net: support AF_XDP zero copy Xuan Zhuo
                   ` (15 preceding siblings ...)
  2023-10-16 12:00 ` [PATCH net-next v1 16/19] virtio_net: xsk: rx: introduce receive_xsk() to recv xsk buffer Xuan Zhuo
@ 2023-10-16 12:00 ` Xuan Zhuo
  2023-10-16 12:00 ` [PATCH net-next v1 18/19] virtio_net: update tx timeout record Xuan Zhuo
                   ` (2 subsequent siblings)
  19 siblings, 0 replies; 66+ messages in thread
From: Xuan Zhuo @ 2023-10-16 12:00 UTC (permalink / raw)
  To: netdev
  Cc: David S. Miller, Eric Dumazet, Jakub Kicinski, Paolo Abeni,
	Michael S. Tsirkin, Jason Wang, Xuan Zhuo, Alexei Starovoitov,
	Daniel Borkmann, Jesper Dangaard Brouer, John Fastabend,
	virtualization, bpf

Since this is also called in other circumstances (e.g. freeze), we must
check inside this function whether the buffer belongs to xsk; the caller
cannot make that judgment.

Signed-off-by: Xuan Zhuo <xuanzhuo@linux.alibaba.com>
---
 drivers/net/virtio/main.c | 13 +++++++++++++
 1 file changed, 13 insertions(+)

diff --git a/drivers/net/virtio/main.c b/drivers/net/virtio/main.c
index 003dd67ab707..ac62d0955c13 100644
--- a/drivers/net/virtio/main.c
+++ b/drivers/net/virtio/main.c
@@ -3904,8 +3904,21 @@ void virtnet_sq_free_unused_buf(struct virtqueue *vq, void *buf)
 void virtnet_rq_free_unused_buf(struct virtqueue *vq, void *buf)
 {
 	struct virtnet_info *vi = vq->vdev->priv;
+	struct xsk_buff_pool *pool;
 	int i = vq2rxq(vq);
 
+	rcu_read_lock();
+	pool = rcu_dereference(vi->rq[i].xsk.pool);
+	if (pool) {
+		struct xdp_buff *xdp;
+
+		xdp = (struct xdp_buff *)buf;
+		xsk_buff_free(xdp);
+		rcu_read_unlock();
+		return;
+	}
+	rcu_read_unlock();
+
 	if (vi->mergeable_rx_bufs)
 		put_page(virt_to_head_page(buf));
 	else if (vi->big_packets)
-- 
2.32.0.3.g01195cf9f


^ permalink raw reply related	[flat|nested] 66+ messages in thread

* [PATCH net-next v1 18/19] virtio_net: update tx timeout record
  2023-10-16 12:00 [PATCH net-next v1 00/19] virtio-net: support AF_XDP zero copy Xuan Zhuo
                   ` (16 preceding siblings ...)
  2023-10-16 12:00 ` [PATCH net-next v1 17/19] virtio_net: xsk: rx: virtnet_rq_free_unused_buf() check " Xuan Zhuo
@ 2023-10-16 12:00 ` Xuan Zhuo
  2023-10-20  6:57   ` Jason Wang
  2023-10-16 12:00 ` [PATCH net-next v1 19/19] virtio_net: xdp_features add NETDEV_XDP_ACT_XSK_ZEROCOPY Xuan Zhuo
  2023-10-17  2:53 ` [PATCH net-next v1 00/19] virtio-net: support AF_XDP zero copy Jason Wang
  19 siblings, 1 reply; 66+ messages in thread
From: Xuan Zhuo @ 2023-10-16 12:00 UTC (permalink / raw)
  To: netdev
  Cc: David S. Miller, Eric Dumazet, Jakub Kicinski, Paolo Abeni,
	Michael S. Tsirkin, Jason Wang, Xuan Zhuo, Alexei Starovoitov,
	Daniel Borkmann, Jesper Dangaard Brouer, John Fastabend,
	virtualization, bpf

If the send queue transmitted some packets, update the tx timeout
record to prevent a spurious tx timeout.

Signed-off-by: Xuan Zhuo <xuanzhuo@linux.alibaba.com>
---
 drivers/net/virtio/xsk.c | 10 ++++++++++
 1 file changed, 10 insertions(+)

diff --git a/drivers/net/virtio/xsk.c b/drivers/net/virtio/xsk.c
index f1c64414fac9..5d3de505c56c 100644
--- a/drivers/net/virtio/xsk.c
+++ b/drivers/net/virtio/xsk.c
@@ -274,6 +274,16 @@ bool virtnet_xsk_xmit(struct virtnet_sq *sq, struct xsk_buff_pool *pool,
 
 	virtnet_xsk_check_queue(sq);
 
+	if (stats.packets) {
+		struct netdev_queue *txq;
+		struct virtnet_info *vi;
+
+		vi = sq->vq->vdev->priv;
+
+		txq = netdev_get_tx_queue(vi->dev, sq - vi->sq);
+		txq_trans_cond_update(txq);
+	}
+
 	u64_stats_update_begin(&sq->stats.syncp);
 	sq->stats.packets += stats.packets;
 	sq->stats.bytes += stats.bytes;
-- 
2.32.0.3.g01195cf9f


^ permalink raw reply related	[flat|nested] 66+ messages in thread

* [PATCH net-next v1 19/19] virtio_net: xdp_features add NETDEV_XDP_ACT_XSK_ZEROCOPY
  2023-10-16 12:00 [PATCH net-next v1 00/19] virtio-net: support AF_XDP zero copy Xuan Zhuo
                   ` (17 preceding siblings ...)
  2023-10-16 12:00 ` [PATCH net-next v1 18/19] virtio_net: update tx timeout record Xuan Zhuo
@ 2023-10-16 12:00 ` Xuan Zhuo
  2023-10-17  2:53 ` [PATCH net-next v1 00/19] virtio-net: support AF_XDP zero copy Jason Wang
  19 siblings, 0 replies; 66+ messages in thread
From: Xuan Zhuo @ 2023-10-16 12:00 UTC (permalink / raw)
  To: netdev
  Cc: David S. Miller, Eric Dumazet, Jakub Kicinski, Paolo Abeni,
	Michael S. Tsirkin, Jason Wang, Xuan Zhuo, Alexei Starovoitov,
	Daniel Borkmann, Jesper Dangaard Brouer, John Fastabend,
	virtualization, bpf

Now that AF_XDP (xsk) is supported, add NETDEV_XDP_ACT_XSK_ZEROCOPY to
xdp_features.

Signed-off-by: Xuan Zhuo <xuanzhuo@linux.alibaba.com>
---
 drivers/net/virtio/main.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/drivers/net/virtio/main.c b/drivers/net/virtio/main.c
index ac62d0955c13..c66d8509330e 100644
--- a/drivers/net/virtio/main.c
+++ b/drivers/net/virtio/main.c
@@ -4329,7 +4329,8 @@ static int virtnet_probe(struct virtio_device *vdev)
 		dev->hw_features |= NETIF_F_GRO_HW;
 
 	dev->vlan_features = dev->features;
-	dev->xdp_features = NETDEV_XDP_ACT_BASIC | NETDEV_XDP_ACT_REDIRECT;
+	dev->xdp_features = NETDEV_XDP_ACT_BASIC | NETDEV_XDP_ACT_REDIRECT |
+		NETDEV_XDP_ACT_XSK_ZEROCOPY;
 
 	/* MTU range: 68 - 65535 */
 	dev->min_mtu = MIN_MTU;
-- 
2.32.0.3.g01195cf9f


^ permalink raw reply related	[flat|nested] 66+ messages in thread

* Re: [PATCH net-next v1 13/19] virtio_net: xsk: tx: virtnet_free_old_xmit() distinguishes xsk buffer
  2023-10-16 12:00 ` [PATCH net-next v1 13/19] virtio_net: xsk: tx: virtnet_free_old_xmit() distinguishes xsk buffer Xuan Zhuo
@ 2023-10-16 23:44   ` Jakub Kicinski
  2023-10-17  2:02     ` Xuan Zhuo
  0 siblings, 1 reply; 66+ messages in thread
From: Jakub Kicinski @ 2023-10-16 23:44 UTC (permalink / raw)
  To: Xuan Zhuo
  Cc: netdev, David S. Miller, Eric Dumazet, Paolo Abeni,
	Michael S. Tsirkin, Jason Wang, Alexei Starovoitov,
	Daniel Borkmann, Jesper Dangaard Brouer, John Fastabend,
	virtualization, bpf

On Mon, 16 Oct 2023 20:00:27 +0800 Xuan Zhuo wrote:
> @@ -305,9 +311,15 @@ static inline void virtnet_free_old_xmit(struct virtnet_sq *sq, bool in_napi,
>  
>  			stats->bytes += xdp_get_frame_len(frame);
>  			xdp_return_frame(frame);
> +		} else {
> +			stats->bytes += virtnet_ptr_to_xsk(ptr);
> +			++xsknum;
>  		}
>  		stats->packets++;
>  	}
> +
> +	if (xsknum)
> +		xsk_tx_completed(sq->xsk.pool, xsknum);
>  }

sparse complains:

drivers/net/virtio/virtio_net.h:322:41: warning: incorrect type in argument 1 (different address spaces)
drivers/net/virtio/virtio_net.h:322:41:    expected struct xsk_buff_pool *pool
drivers/net/virtio/virtio_net.h:322:41:    got struct xsk_buff_pool
[noderef] __rcu *pool

please build test with W=1 C=1
-- 
pw-bot: cr

^ permalink raw reply	[flat|nested] 66+ messages in thread

* Re: [PATCH net-next v1 13/19] virtio_net: xsk: tx: virtnet_free_old_xmit() distinguishes xsk buffer
  2023-10-16 23:44   ` Jakub Kicinski
@ 2023-10-17  2:02     ` Xuan Zhuo
  2023-10-19  6:38       ` Michael S. Tsirkin
  0 siblings, 1 reply; 66+ messages in thread
From: Xuan Zhuo @ 2023-10-17  2:02 UTC (permalink / raw)
  To: Jakub Kicinski
  Cc: netdev, David S. Miller, Eric Dumazet, Paolo Abeni,
	Michael S.  Tsirkin, Jason Wang, Alexei Starovoitov,
	Daniel Borkmann, Jesper Dangaard Brouer, John Fastabend,
	virtualization, bpf

On Mon, 16 Oct 2023 16:44:34 -0700, Jakub Kicinski <kuba@kernel.org> wrote:
> On Mon, 16 Oct 2023 20:00:27 +0800 Xuan Zhuo wrote:
> > @@ -305,9 +311,15 @@ static inline void virtnet_free_old_xmit(struct virtnet_sq *sq, bool in_napi,
> >
> >  			stats->bytes += xdp_get_frame_len(frame);
> >  			xdp_return_frame(frame);
> > +		} else {
> > +			stats->bytes += virtnet_ptr_to_xsk(ptr);
> > +			++xsknum;
> >  		}
> >  		stats->packets++;
> >  	}
> > +
> > +	if (xsknum)
> > +		xsk_tx_completed(sq->xsk.pool, xsknum);
> >  }
>
> sparse complains:
>
> drivers/net/virtio/virtio_net.h:322:41: warning: incorrect type in argument 1 (different address spaces)
> drivers/net/virtio/virtio_net.h:322:41:    expected struct xsk_buff_pool *pool
> drivers/net/virtio/virtio_net.h:322:41:    got struct xsk_buff_pool
> [noderef] __rcu *pool
>
> please build test with W=1 C=1

OK. I will add C=1 to my script.

Thanks.


> --
> pw-bot: cr

^ permalink raw reply	[flat|nested] 66+ messages in thread

* Re: [PATCH net-next v1 00/19] virtio-net: support AF_XDP zero copy
  2023-10-16 12:00 [PATCH net-next v1 00/19] virtio-net: support AF_XDP zero copy Xuan Zhuo
                   ` (18 preceding siblings ...)
  2023-10-16 12:00 ` [PATCH net-next v1 19/19] virtio_net: xdp_features add NETDEV_XDP_ACT_XSK_ZEROCOPY Xuan Zhuo
@ 2023-10-17  2:53 ` Jason Wang
  2023-10-17  3:02   ` Xuan Zhuo
  19 siblings, 1 reply; 66+ messages in thread
From: Jason Wang @ 2023-10-17  2:53 UTC (permalink / raw)
  To: Xuan Zhuo
  Cc: netdev, David S. Miller, Eric Dumazet, Jakub Kicinski,
	Paolo Abeni, Michael S. Tsirkin, Alexei Starovoitov,
	Daniel Borkmann, Jesper Dangaard Brouer, John Fastabend,
	virtualization, bpf

On Mon, Oct 16, 2023 at 8:00 PM Xuan Zhuo <xuanzhuo@linux.alibaba.com> wrote:
>
> ## AF_XDP
>
> XDP socket(AF_XDP) is an excellent bypass kernel network framework. The zero
> copy feature of xsk (XDP socket) needs to be supported by the driver. The
> performance of zero copy is very good. mlx5 and intel ixgbe already support
> this feature, This patch set allows virtio-net to support xsk's zerocopy xmit
> feature.
>
> At present, we have completed some preparation:
>
> 1. vq-reset (virtio spec and kernel code)
> 2. virtio-core premapped dma
> 3. virtio-net xdp refactor
>
> So it is time for Virtio-Net to complete the support for the XDP Socket
> Zerocopy.
>
> Virtio-net can not increase the queue num at will, so xsk shares the queue with
> kernel.
>
> On the other hand, Virtio-Net does not support generate interrupt from driver
> manually, so when we wakeup tx xmit, we used some tips. If the CPU run by TX
> NAPI last time is other CPUs, use IPI to wake up NAPI on the remote CPU. If it
> is also the local CPU, then we wake up napi directly.
>
> This patch set includes some refactor to the virtio-net to let that to support
> AF_XDP.
>
> ## performance
>
> ENV: Qemu with vhost-user(polling mode).
>
> Sockperf: https://github.com/Mellanox/sockperf
> I use this tool to send udp packet by kernel syscall.
>
> xmit command: sockperf tp -i 10.0.3.1 -t 1000
>
> I write a tool that sends udp packets or recvs udp packets by AF_XDP.
>
>                   | Guest APP CPU |Guest Softirq CPU | UDP PPS
> ------------------|---------------|------------------|------------
> xmit by syscall   |   100%        |                  |   676,915
> xmit by xsk       |   59.1%       |   100%           | 5,447,168
> recv by syscall   |   60%         |   100%           |   932,288
> recv by xsk       |   35.7%       |   100%           | 3,343,168

Any chance we can get a testpmd result (which I guess should be better
than PPS above)?

Thanks

>
> ## maintain
>
> I am currently a reviewer for virtio-net. I commit to maintain AF_XDP support in
> virtio-net.
>
> Please review.
>
> Thanks.
>
> v1:
>     1. remove two virtio commits. Push this patchset to net-next
>     2. squash "virtio_net: virtnet_poll_tx support rescheduled" to xsk: support tx
>     3. fix some warnings
>
> Xuan Zhuo (19):
>   virtio_net: rename free_old_xmit_skbs to free_old_xmit
>   virtio_net: unify the code for recycling the xmit ptr
>   virtio_net: independent directory
>   virtio_net: move to virtio_net.h
>   virtio_net: add prefix virtnet to all struct/api inside virtio_net.h
>   virtio_net: separate virtnet_rx_resize()
>   virtio_net: separate virtnet_tx_resize()
>   virtio_net: sq support premapped mode
>   virtio_net: xsk: bind/unbind xsk
>   virtio_net: xsk: prevent disable tx napi
>   virtio_net: xsk: tx: support tx
>   virtio_net: xsk: tx: support wakeup
>   virtio_net: xsk: tx: virtnet_free_old_xmit() distinguishes xsk buffer
>   virtio_net: xsk: tx: virtnet_sq_free_unused_buf() check xsk buffer
>   virtio_net: xsk: rx: introduce add_recvbuf_xsk()
>   virtio_net: xsk: rx: introduce receive_xsk() to recv xsk buffer
>   virtio_net: xsk: rx: virtnet_rq_free_unused_buf() check xsk buffer
>   virtio_net: update tx timeout record
>   virtio_net: xdp_features add NETDEV_XDP_ACT_XSK_ZEROCOPY
>
>  MAINTAINERS                                 |   2 +-
>  drivers/net/Kconfig                         |   8 +-
>  drivers/net/Makefile                        |   2 +-
>  drivers/net/virtio/Kconfig                  |  13 +
>  drivers/net/virtio/Makefile                 |   8 +
>  drivers/net/{virtio_net.c => virtio/main.c} | 652 +++++++++-----------
>  drivers/net/virtio/virtio_net.h             | 359 +++++++++++
>  drivers/net/virtio/xsk.c                    | 545 ++++++++++++++++
>  drivers/net/virtio/xsk.h                    |  32 +
>  9 files changed, 1247 insertions(+), 374 deletions(-)
>  create mode 100644 drivers/net/virtio/Kconfig
>  create mode 100644 drivers/net/virtio/Makefile
>  rename drivers/net/{virtio_net.c => virtio/main.c} (91%)
>  create mode 100644 drivers/net/virtio/virtio_net.h
>  create mode 100644 drivers/net/virtio/xsk.c
>  create mode 100644 drivers/net/virtio/xsk.h
>
> --
> 2.32.0.3.g01195cf9f
>


^ permalink raw reply	[flat|nested] 66+ messages in thread

* Re: [PATCH net-next v1 00/19] virtio-net: support AF_XDP zero copy
  2023-10-17  2:53 ` [PATCH net-next v1 00/19] virtio-net: support AF_XDP zero copy Jason Wang
@ 2023-10-17  3:02   ` Xuan Zhuo
  2023-10-17  3:20     ` Jason Wang
  0 siblings, 1 reply; 66+ messages in thread
From: Xuan Zhuo @ 2023-10-17  3:02 UTC (permalink / raw)
  To: Jason Wang
  Cc: netdev, David S. Miller, Eric Dumazet, Jakub Kicinski,
	Paolo Abeni, Michael S. Tsirkin, Alexei Starovoitov,
	Daniel Borkmann, Jesper Dangaard Brouer, John Fastabend,
	virtualization, bpf

On Tue, 17 Oct 2023 10:53:44 +0800, Jason Wang <jasowang@redhat.com> wrote:
> On Mon, Oct 16, 2023 at 8:00 PM Xuan Zhuo <xuanzhuo@linux.alibaba.com> wrote:
> >
> > ## AF_XDP
> >
> > XDP socket(AF_XDP) is an excellent bypass kernel network framework. The zero
> > copy feature of xsk (XDP socket) needs to be supported by the driver. The
> > performance of zero copy is very good. mlx5 and intel ixgbe already support
> > this feature, This patch set allows virtio-net to support xsk's zerocopy xmit
> > feature.
> >
> > At present, we have completed some preparation:
> >
> > 1. vq-reset (virtio spec and kernel code)
> > 2. virtio-core premapped dma
> > 3. virtio-net xdp refactor
> >
> > So it is time for Virtio-Net to complete the support for the XDP Socket
> > Zerocopy.
> >
> > Virtio-net can not increase the queue num at will, so xsk shares the queue with
> > kernel.
> >
> > On the other hand, Virtio-Net does not support generate interrupt from driver
> > manually, so when we wakeup tx xmit, we used some tips. If the CPU run by TX
> > NAPI last time is other CPUs, use IPI to wake up NAPI on the remote CPU. If it
> > is also the local CPU, then we wake up napi directly.
> >
> > This patch set includes some refactor to the virtio-net to let that to support
> > AF_XDP.
> >
> > ## performance
> >
> > ENV: Qemu with vhost-user(polling mode).
> >
> > Sockperf: https://github.com/Mellanox/sockperf
> > I use this tool to send udp packet by kernel syscall.
> >
> > xmit command: sockperf tp -i 10.0.3.1 -t 1000
> >
> > I write a tool that sends udp packets or recvs udp packets by AF_XDP.
> >
> >                   | Guest APP CPU |Guest Softirq CPU | UDP PPS
> > ------------------|---------------|------------------|------------
> > xmit by syscall   |   100%        |                  |   676,915
> > xmit by xsk       |   59.1%       |   100%           | 5,447,168
> > recv by syscall   |   60%         |   100%           |   932,288
> > recv by xsk       |   35.7%       |   100%           | 3,343,168
>
> Any chance we can get a testpmd result (which I guess should be better
> than PPS above)?

Do you mean testpmd + DPDK + AF_XDP?

Yes, testpmd would probably show better numbers, because my tool does more
work per packet. It is not a complete benchmarking tool, just something our
business uses internally.

What I noticed is that the hotspot is the driver writing the virtio
descriptors: because the device is busy polling, the driver and the device
contend on the ring. So I modified the virtio core to update the avail idx
lazily, and then the PPS can reach 10,000,000.

Thanks.

>
> Thanks
>
> >
> > ## maintain
> >
> > I am currently a reviewer for virtio-net. I commit to maintain AF_XDP support in
> > virtio-net.
> >
> > Please review.
> >
> > Thanks.
> >
> > v1:
> >     1. remove two virtio commits. Push this patchset to net-next
> >     2. squash "virtio_net: virtnet_poll_tx support rescheduled" to xsk: support tx
> >     3. fix some warnings
> >
> > Xuan Zhuo (19):
> >   virtio_net: rename free_old_xmit_skbs to free_old_xmit
> >   virtio_net: unify the code for recycling the xmit ptr
> >   virtio_net: independent directory
> >   virtio_net: move to virtio_net.h
> >   virtio_net: add prefix virtnet to all struct/api inside virtio_net.h
> >   virtio_net: separate virtnet_rx_resize()
> >   virtio_net: separate virtnet_tx_resize()
> >   virtio_net: sq support premapped mode
> >   virtio_net: xsk: bind/unbind xsk
> >   virtio_net: xsk: prevent disable tx napi
> >   virtio_net: xsk: tx: support tx
> >   virtio_net: xsk: tx: support wakeup
> >   virtio_net: xsk: tx: virtnet_free_old_xmit() distinguishes xsk buffer
> >   virtio_net: xsk: tx: virtnet_sq_free_unused_buf() check xsk buffer
> >   virtio_net: xsk: rx: introduce add_recvbuf_xsk()
> >   virtio_net: xsk: rx: introduce receive_xsk() to recv xsk buffer
> >   virtio_net: xsk: rx: virtnet_rq_free_unused_buf() check xsk buffer
> >   virtio_net: update tx timeout record
> >   virtio_net: xdp_features add NETDEV_XDP_ACT_XSK_ZEROCOPY
> >
> >  MAINTAINERS                                 |   2 +-
> >  drivers/net/Kconfig                         |   8 +-
> >  drivers/net/Makefile                        |   2 +-
> >  drivers/net/virtio/Kconfig                  |  13 +
> >  drivers/net/virtio/Makefile                 |   8 +
> >  drivers/net/{virtio_net.c => virtio/main.c} | 652 +++++++++-----------
> >  drivers/net/virtio/virtio_net.h             | 359 +++++++++++
> >  drivers/net/virtio/xsk.c                    | 545 ++++++++++++++++
> >  drivers/net/virtio/xsk.h                    |  32 +
> >  9 files changed, 1247 insertions(+), 374 deletions(-)
> >  create mode 100644 drivers/net/virtio/Kconfig
> >  create mode 100644 drivers/net/virtio/Makefile
> >  rename drivers/net/{virtio_net.c => virtio/main.c} (91%)
> >  create mode 100644 drivers/net/virtio/virtio_net.h
> >  create mode 100644 drivers/net/virtio/xsk.c
> >  create mode 100644 drivers/net/virtio/xsk.h
> >
> > --
> > 2.32.0.3.g01195cf9f
> >
>

^ permalink raw reply	[flat|nested] 66+ messages in thread

* Re: [PATCH net-next v1 00/19] virtio-net: support AF_XDP zero copy
  2023-10-17  3:02   ` Xuan Zhuo
@ 2023-10-17  3:20     ` Jason Wang
  2023-10-17  3:22       ` Xuan Zhuo
  0 siblings, 1 reply; 66+ messages in thread
From: Jason Wang @ 2023-10-17  3:20 UTC (permalink / raw)
  To: Xuan Zhuo
  Cc: netdev, David S. Miller, Eric Dumazet, Jakub Kicinski,
	Paolo Abeni, Michael S. Tsirkin, Alexei Starovoitov,
	Daniel Borkmann, Jesper Dangaard Brouer, John Fastabend,
	virtualization, bpf

On Tue, Oct 17, 2023 at 11:11 AM Xuan Zhuo <xuanzhuo@linux.alibaba.com> wrote:
>
> On Tue, 17 Oct 2023 10:53:44 +0800, Jason Wang <jasowang@redhat.com> wrote:
> > On Mon, Oct 16, 2023 at 8:00 PM Xuan Zhuo <xuanzhuo@linux.alibaba.com> wrote:
> > >
> > > ## AF_XDP
> > >
> > > XDP socket(AF_XDP) is an excellent bypass kernel network framework. The zero
> > > copy feature of xsk (XDP socket) needs to be supported by the driver. The
> > > performance of zero copy is very good. mlx5 and intel ixgbe already support
> > > this feature, This patch set allows virtio-net to support xsk's zerocopy xmit
> > > feature.
> > >
> > > At present, we have completed some preparation:
> > >
> > > 1. vq-reset (virtio spec and kernel code)
> > > 2. virtio-core premapped dma
> > > 3. virtio-net xdp refactor
> > >
> > > So it is time for Virtio-Net to complete the support for the XDP Socket
> > > Zerocopy.
> > >
> > > Virtio-net can not increase the queue num at will, so xsk shares the queue with
> > > kernel.
> > >
> > > On the other hand, Virtio-Net does not support generate interrupt from driver
> > > manually, so when we wakeup tx xmit, we used some tips. If the CPU run by TX
> > > NAPI last time is other CPUs, use IPI to wake up NAPI on the remote CPU. If it
> > > is also the local CPU, then we wake up napi directly.
> > >
> > > This patch set includes some refactor to the virtio-net to let that to support
> > > AF_XDP.
> > >
> > > ## performance
> > >
> > > ENV: Qemu with vhost-user(polling mode).
> > >
> > > Sockperf: https://github.com/Mellanox/sockperf
> > > I use this tool to send udp packet by kernel syscall.
> > >
> > > xmit command: sockperf tp -i 10.0.3.1 -t 1000
> > >
> > > I write a tool that sends udp packets or recvs udp packets by AF_XDP.
> > >
> > >                   | Guest APP CPU |Guest Softirq CPU | UDP PPS
> > > ------------------|---------------|------------------|------------
> > > xmit by syscall   |   100%        |                  |   676,915
> > > xmit by xsk       |   59.1%       |   100%           | 5,447,168
> > > recv by syscall   |   60%         |   100%           |   932,288
> > > recv by xsk       |   35.7%       |   100%           | 3,343,168
> >
> > Any chance we can get a testpmd result (which I guess should be better
> > than PPS above)?
>
> Do you mean testpmd + DPDK + AF_XDP?

Yes.

>
> Yes. This is probably better because my tool does more work. That is not a
> complete testing tool used by our business.

Probably, but it would be appealing to others, especially considering that
DPDK supports an AF_XDP PMD now.
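
For reference, a minimal testpmd run over the AF_XDP PMD might look like the
following; the interface name, core list, and stats period are illustrative,
not taken from this thread:

```shell
# Bind testpmd to the AF_XDP vdev on a guest interface (no PCI devices)
# and generate traffic in txonly mode, printing stats every second.
dpdk-testpmd -l 0-1 --no-pci \
    --vdev=net_af_xdp0,iface=eth1 \
    -- --forward-mode=txonly --stats-period 1
```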

>
> What I noticed is that the hotspot is the driver writing virtio desc. Because
> the device is in busy mode. So there is race between driver and device.
> So I modified the virtio core and lazily updated avail idx. Then pps can reach
> 10,000,000.

Care to post a draft for this?

Thanks

>
> Thanks.
>
> >
> > Thanks
> >
> > >
> > > ## maintain
> > >
> > > I am currently a reviewer for virtio-net. I commit to maintain AF_XDP support in
> > > virtio-net.
> > >
> > > Please review.
> > >
> > > Thanks.
> > >
> > > v1:
> > >     1. remove two virtio commits. Push this patchset to net-next
> > >     2. squash "virtio_net: virtnet_poll_tx support rescheduled" to xsk: support tx
> > >     3. fix some warnings
> > >
> > > Xuan Zhuo (19):
> > >   virtio_net: rename free_old_xmit_skbs to free_old_xmit
> > >   virtio_net: unify the code for recycling the xmit ptr
> > >   virtio_net: independent directory
> > >   virtio_net: move to virtio_net.h
> > >   virtio_net: add prefix virtnet to all struct/api inside virtio_net.h
> > >   virtio_net: separate virtnet_rx_resize()
> > >   virtio_net: separate virtnet_tx_resize()
> > >   virtio_net: sq support premapped mode
> > >   virtio_net: xsk: bind/unbind xsk
> > >   virtio_net: xsk: prevent disable tx napi
> > >   virtio_net: xsk: tx: support tx
> > >   virtio_net: xsk: tx: support wakeup
> > >   virtio_net: xsk: tx: virtnet_free_old_xmit() distinguishes xsk buffer
> > >   virtio_net: xsk: tx: virtnet_sq_free_unused_buf() check xsk buffer
> > >   virtio_net: xsk: rx: introduce add_recvbuf_xsk()
> > >   virtio_net: xsk: rx: introduce receive_xsk() to recv xsk buffer
> > >   virtio_net: xsk: rx: virtnet_rq_free_unused_buf() check xsk buffer
> > >   virtio_net: update tx timeout record
> > >   virtio_net: xdp_features add NETDEV_XDP_ACT_XSK_ZEROCOPY
> > >
> > >  MAINTAINERS                                 |   2 +-
> > >  drivers/net/Kconfig                         |   8 +-
> > >  drivers/net/Makefile                        |   2 +-
> > >  drivers/net/virtio/Kconfig                  |  13 +
> > >  drivers/net/virtio/Makefile                 |   8 +
> > >  drivers/net/{virtio_net.c => virtio/main.c} | 652 +++++++++-----------
> > >  drivers/net/virtio/virtio_net.h             | 359 +++++++++++
> > >  drivers/net/virtio/xsk.c                    | 545 ++++++++++++++++
> > >  drivers/net/virtio/xsk.h                    |  32 +
> > >  9 files changed, 1247 insertions(+), 374 deletions(-)
> > >  create mode 100644 drivers/net/virtio/Kconfig
> > >  create mode 100644 drivers/net/virtio/Makefile
> > >  rename drivers/net/{virtio_net.c => virtio/main.c} (91%)
> > >  create mode 100644 drivers/net/virtio/virtio_net.h
> > >  create mode 100644 drivers/net/virtio/xsk.c
> > >  create mode 100644 drivers/net/virtio/xsk.h
> > >
> > > --
> > > 2.32.0.3.g01195cf9f
> > >
> >
>



* Re: [PATCH net-next v1 00/19] virtio-net: support AF_XDP zero copy
  2023-10-17  3:20     ` Jason Wang
@ 2023-10-17  3:22       ` Xuan Zhuo
  2023-10-17  3:28         ` Jason Wang
  0 siblings, 1 reply; 66+ messages in thread
From: Xuan Zhuo @ 2023-10-17  3:22 UTC (permalink / raw)
  To: Jason Wang
  Cc: netdev, David S. Miller, Eric Dumazet, Jakub Kicinski,
	Paolo Abeni, Michael S. Tsirkin, Alexei Starovoitov,
	Daniel Borkmann, Jesper Dangaard Brouer, John Fastabend,
	virtualization, bpf

On Tue, 17 Oct 2023 11:20:41 +0800, Jason Wang <jasowang@redhat.com> wrote:
> On Tue, Oct 17, 2023 at 11:11 AM Xuan Zhuo <xuanzhuo@linux.alibaba.com> wrote:
> >
> > On Tue, 17 Oct 2023 10:53:44 +0800, Jason Wang <jasowang@redhat.com> wrote:
> > > On Mon, Oct 16, 2023 at 8:00 PM Xuan Zhuo <xuanzhuo@linux.alibaba.com> wrote:
> > > >
> > > > ## AF_XDP
> > > >
> > > > XDP socket(AF_XDP) is an excellent bypass kernel network framework. The zero
> > > > copy feature of xsk (XDP socket) needs to be supported by the driver. The
> > > > performance of zero copy is very good. mlx5 and intel ixgbe already support
> > > > this feature, This patch set allows virtio-net to support xsk's zerocopy xmit
> > > > feature.
> > > >
> > > > At present, we have completed some preparation:
> > > >
> > > > 1. vq-reset (virtio spec and kernel code)
> > > > 2. virtio-core premapped dma
> > > > 3. virtio-net xdp refactor
> > > >
> > > > So it is time for Virtio-Net to complete the support for the XDP Socket
> > > > Zerocopy.
> > > >
> > > > Virtio-net can not increase the queue num at will, so xsk shares the queue with
> > > > kernel.
> > > >
> > > > On the other hand, Virtio-Net does not support generate interrupt from driver
> > > > manually, so when we wakeup tx xmit, we used some tips. If the CPU run by TX
> > > > NAPI last time is other CPUs, use IPI to wake up NAPI on the remote CPU. If it
> > > > is also the local CPU, then we wake up napi directly.
> > > >
> > > > This patch set includes some refactor to the virtio-net to let that to support
> > > > AF_XDP.
> > > >
> > > > ## performance
> > > >
> > > > ENV: Qemu with vhost-user(polling mode).
> > > >
> > > > Sockperf: https://github.com/Mellanox/sockperf
> > > > I use this tool to send udp packet by kernel syscall.
> > > >
> > > > xmit command: sockperf tp -i 10.0.3.1 -t 1000
> > > >
> > > > I write a tool that sends udp packets or recvs udp packets by AF_XDP.
> > > >
> > > >                   | Guest APP CPU |Guest Softirq CPU | UDP PPS
> > > > ------------------|---------------|------------------|------------
> > > > xmit by syscall   |   100%        |                  |   676,915
> > > > xmit by xsk       |   59.1%       |   100%           | 5,447,168
> > > > recv by syscall   |   60%         |   100%           |   932,288
> > > > recv by xsk       |   35.7%       |   100%           | 3,343,168
> > >
> > > Any chance we can get a testpmd result (which I guess should be better
> > > than PPS above)?
> >
> > Do you mean testpmd + DPDK + AF_XDP?
>
> Yes.
>
> >
> > Yes. This is probably better because my tool does more work. That is not a
> > complete testing tool used by our business.
>
> Probably, but it would be appealing for others. Especially considering
> DPDK supports AF_XDP PMD now.

OK.

Let me try.

But could you start the review first?


>
> >
> > What I noticed is that the hotspot is the driver writing virtio desc. Because
> > the device is in busy mode. So there is race between driver and device.
> > So I modified the virtio core and lazily updated avail idx. Then pps can reach
> > 10,000,000.
>
> Care to post a draft for this?

Yes, I am thinking about this.
But maybe that only works for split rings; packed mode has some troubles.

Thanks.

>
> Thanks
>
> >
> > Thanks.
> >
> > >
> > > Thanks
> > >
> > > >
> > > > ## maintain
> > > >
> > > > I am currently a reviewer for virtio-net. I commit to maintain AF_XDP support in
> > > > virtio-net.
> > > >
> > > > Please review.
> > > >
> > > > Thanks.
> > > >
> > > > v1:
> > > >     1. remove two virtio commits. Push this patchset to net-next
> > > >     2. squash "virtio_net: virtnet_poll_tx support rescheduled" to xsk: support tx
> > > >     3. fix some warnings
> > > >
> > > > Xuan Zhuo (19):
> > > >   virtio_net: rename free_old_xmit_skbs to free_old_xmit
> > > >   virtio_net: unify the code for recycling the xmit ptr
> > > >   virtio_net: independent directory
> > > >   virtio_net: move to virtio_net.h
> > > >   virtio_net: add prefix virtnet to all struct/api inside virtio_net.h
> > > >   virtio_net: separate virtnet_rx_resize()
> > > >   virtio_net: separate virtnet_tx_resize()
> > > >   virtio_net: sq support premapped mode
> > > >   virtio_net: xsk: bind/unbind xsk
> > > >   virtio_net: xsk: prevent disable tx napi
> > > >   virtio_net: xsk: tx: support tx
> > > >   virtio_net: xsk: tx: support wakeup
> > > >   virtio_net: xsk: tx: virtnet_free_old_xmit() distinguishes xsk buffer
> > > >   virtio_net: xsk: tx: virtnet_sq_free_unused_buf() check xsk buffer
> > > >   virtio_net: xsk: rx: introduce add_recvbuf_xsk()
> > > >   virtio_net: xsk: rx: introduce receive_xsk() to recv xsk buffer
> > > >   virtio_net: xsk: rx: virtnet_rq_free_unused_buf() check xsk buffer
> > > >   virtio_net: update tx timeout record
> > > >   virtio_net: xdp_features add NETDEV_XDP_ACT_XSK_ZEROCOPY
> > > >
> > > >  MAINTAINERS                                 |   2 +-
> > > >  drivers/net/Kconfig                         |   8 +-
> > > >  drivers/net/Makefile                        |   2 +-
> > > >  drivers/net/virtio/Kconfig                  |  13 +
> > > >  drivers/net/virtio/Makefile                 |   8 +
> > > >  drivers/net/{virtio_net.c => virtio/main.c} | 652 +++++++++-----------
> > > >  drivers/net/virtio/virtio_net.h             | 359 +++++++++++
> > > >  drivers/net/virtio/xsk.c                    | 545 ++++++++++++++++
> > > >  drivers/net/virtio/xsk.h                    |  32 +
> > > >  9 files changed, 1247 insertions(+), 374 deletions(-)
> > > >  create mode 100644 drivers/net/virtio/Kconfig
> > > >  create mode 100644 drivers/net/virtio/Makefile
> > > >  rename drivers/net/{virtio_net.c => virtio/main.c} (91%)
> > > >  create mode 100644 drivers/net/virtio/virtio_net.h
> > > >  create mode 100644 drivers/net/virtio/xsk.c
> > > >  create mode 100644 drivers/net/virtio/xsk.h
> > > >
> > > > --
> > > > 2.32.0.3.g01195cf9f
> > > >
> > >
> >
>


* Re: [PATCH net-next v1 00/19] virtio-net: support AF_XDP zero copy
  2023-10-17  3:22       ` Xuan Zhuo
@ 2023-10-17  3:28         ` Jason Wang
  2023-10-17  5:27           ` Jason Wang
  0 siblings, 1 reply; 66+ messages in thread
From: Jason Wang @ 2023-10-17  3:28 UTC (permalink / raw)
  To: Xuan Zhuo
  Cc: netdev, David S. Miller, Eric Dumazet, Jakub Kicinski,
	Paolo Abeni, Michael S. Tsirkin, Alexei Starovoitov,
	Daniel Borkmann, Jesper Dangaard Brouer, John Fastabend,
	virtualization, bpf

On Tue, Oct 17, 2023 at 11:26 AM Xuan Zhuo <xuanzhuo@linux.alibaba.com> wrote:
>
> On Tue, 17 Oct 2023 11:20:41 +0800, Jason Wang <jasowang@redhat.com> wrote:
> > On Tue, Oct 17, 2023 at 11:11 AM Xuan Zhuo <xuanzhuo@linux.alibaba.com> wrote:
> > >
> > > On Tue, 17 Oct 2023 10:53:44 +0800, Jason Wang <jasowang@redhat.com> wrote:
> > > > On Mon, Oct 16, 2023 at 8:00 PM Xuan Zhuo <xuanzhuo@linux.alibaba.com> wrote:
> > > > >
> > > > > ## AF_XDP
> > > > >
> > > > > XDP socket(AF_XDP) is an excellent bypass kernel network framework. The zero
> > > > > copy feature of xsk (XDP socket) needs to be supported by the driver. The
> > > > > performance of zero copy is very good. mlx5 and intel ixgbe already support
> > > > > this feature, This patch set allows virtio-net to support xsk's zerocopy xmit
> > > > > feature.
> > > > >
> > > > > At present, we have completed some preparation:
> > > > >
> > > > > 1. vq-reset (virtio spec and kernel code)
> > > > > 2. virtio-core premapped dma
> > > > > 3. virtio-net xdp refactor
> > > > >
> > > > > So it is time for Virtio-Net to complete the support for the XDP Socket
> > > > > Zerocopy.
> > > > >
> > > > > Virtio-net can not increase the queue num at will, so xsk shares the queue with
> > > > > kernel.
> > > > >
> > > > > On the other hand, Virtio-Net does not support generate interrupt from driver
> > > > > manually, so when we wakeup tx xmit, we used some tips. If the CPU run by TX
> > > > > NAPI last time is other CPUs, use IPI to wake up NAPI on the remote CPU. If it
> > > > > is also the local CPU, then we wake up napi directly.
> > > > >
> > > > > This patch set includes some refactor to the virtio-net to let that to support
> > > > > AF_XDP.
> > > > >
> > > > > ## performance
> > > > >
> > > > > ENV: Qemu with vhost-user(polling mode).
> > > > >
> > > > > Sockperf: https://github.com/Mellanox/sockperf
> > > > > I use this tool to send udp packet by kernel syscall.
> > > > >
> > > > > xmit command: sockperf tp -i 10.0.3.1 -t 1000
> > > > >
> > > > > I write a tool that sends udp packets or recvs udp packets by AF_XDP.
> > > > >
> > > > >                   | Guest APP CPU |Guest Softirq CPU | UDP PPS
> > > > > ------------------|---------------|------------------|------------
> > > > > xmit by syscall   |   100%        |                  |   676,915
> > > > > xmit by xsk       |   59.1%       |   100%           | 5,447,168
> > > > > recv by syscall   |   60%         |   100%           |   932,288
> > > > > recv by xsk       |   35.7%       |   100%           | 3,343,168
> > > >
> > > > Any chance we can get a testpmd result (which I guess should be better
> > > > than PPS above)?
> > >
> > > Do you mean testpmd + DPDK + AF_XDP?
> >
> > Yes.
> >
> > >
> > > Yes. This is probably better because my tool does more work. That is not a
> > > complete testing tool used by our business.
> >
> > Probably, but it would be appealing for others. Especially considering
> > DPDK supports AF_XDP PMD now.
>
> OK.
>
> Let me try.
>
> But could you start to review firstly?

Yes, it's on my todo list.

>
>
> >
> > >
> > > What I noticed is that the hotspot is the driver writing virtio desc. Because
> > > the device is in busy mode. So there is race between driver and device.
> > > So I modified the virtio core and lazily updated avail idx. Then pps can reach
> > > 10,000,000.
> >
> > Care to post a draft for this?
>
> YES, I is thinking for this.
> But maybe that is just work for split. The packed mode has some troubles.

Ok.

Thanks

>
> Thanks.
>
> >
> > Thanks
> >
> > >
> > > Thanks.
> > >
> > > >
> > > > Thanks
> > > >
> > > > >
> > > > > ## maintain
> > > > >
> > > > > I am currently a reviewer for virtio-net. I commit to maintain AF_XDP support in
> > > > > virtio-net.
> > > > >
> > > > > Please review.
> > > > >
> > > > > Thanks.
> > > > >
> > > > > v1:
> > > > >     1. remove two virtio commits. Push this patchset to net-next
> > > > >     2. squash "virtio_net: virtnet_poll_tx support rescheduled" to xsk: support tx
> > > > >     3. fix some warnings
> > > > >
> > > > > Xuan Zhuo (19):
> > > > >   virtio_net: rename free_old_xmit_skbs to free_old_xmit
> > > > >   virtio_net: unify the code for recycling the xmit ptr
> > > > >   virtio_net: independent directory
> > > > >   virtio_net: move to virtio_net.h
> > > > >   virtio_net: add prefix virtnet to all struct/api inside virtio_net.h
> > > > >   virtio_net: separate virtnet_rx_resize()
> > > > >   virtio_net: separate virtnet_tx_resize()
> > > > >   virtio_net: sq support premapped mode
> > > > >   virtio_net: xsk: bind/unbind xsk
> > > > >   virtio_net: xsk: prevent disable tx napi
> > > > >   virtio_net: xsk: tx: support tx
> > > > >   virtio_net: xsk: tx: support wakeup
> > > > >   virtio_net: xsk: tx: virtnet_free_old_xmit() distinguishes xsk buffer
> > > > >   virtio_net: xsk: tx: virtnet_sq_free_unused_buf() check xsk buffer
> > > > >   virtio_net: xsk: rx: introduce add_recvbuf_xsk()
> > > > >   virtio_net: xsk: rx: introduce receive_xsk() to recv xsk buffer
> > > > >   virtio_net: xsk: rx: virtnet_rq_free_unused_buf() check xsk buffer
> > > > >   virtio_net: update tx timeout record
> > > > >   virtio_net: xdp_features add NETDEV_XDP_ACT_XSK_ZEROCOPY
> > > > >
> > > > >  MAINTAINERS                                 |   2 +-
> > > > >  drivers/net/Kconfig                         |   8 +-
> > > > >  drivers/net/Makefile                        |   2 +-
> > > > >  drivers/net/virtio/Kconfig                  |  13 +
> > > > >  drivers/net/virtio/Makefile                 |   8 +
> > > > >  drivers/net/{virtio_net.c => virtio/main.c} | 652 +++++++++-----------
> > > > >  drivers/net/virtio/virtio_net.h             | 359 +++++++++++
> > > > >  drivers/net/virtio/xsk.c                    | 545 ++++++++++++++++
> > > > >  drivers/net/virtio/xsk.h                    |  32 +
> > > > >  9 files changed, 1247 insertions(+), 374 deletions(-)
> > > > >  create mode 100644 drivers/net/virtio/Kconfig
> > > > >  create mode 100644 drivers/net/virtio/Makefile
> > > > >  rename drivers/net/{virtio_net.c => virtio/main.c} (91%)
> > > > >  create mode 100644 drivers/net/virtio/virtio_net.h
> > > > >  create mode 100644 drivers/net/virtio/xsk.c
> > > > >  create mode 100644 drivers/net/virtio/xsk.h
> > > > >
> > > > > --
> > > > > 2.32.0.3.g01195cf9f
> > > > >
> > > >
> > >
> >
>



* Re: [PATCH net-next v1 00/19] virtio-net: support AF_XDP zero copy
  2023-10-17  3:28         ` Jason Wang
@ 2023-10-17  5:27           ` Jason Wang
  2023-10-17  6:06             ` Xuan Zhuo
  0 siblings, 1 reply; 66+ messages in thread
From: Jason Wang @ 2023-10-17  5:27 UTC (permalink / raw)
  To: Xuan Zhuo
  Cc: netdev, David S. Miller, Eric Dumazet, Jakub Kicinski,
	Paolo Abeni, Michael S. Tsirkin, Alexei Starovoitov,
	Daniel Borkmann, Jesper Dangaard Brouer, John Fastabend,
	virtualization, bpf

On Tue, Oct 17, 2023 at 11:28 AM Jason Wang <jasowang@redhat.com> wrote:
>
> On Tue, Oct 17, 2023 at 11:26 AM Xuan Zhuo <xuanzhuo@linux.alibaba.com> wrote:
> >
> > On Tue, 17 Oct 2023 11:20:41 +0800, Jason Wang <jasowang@redhat.com> wrote:
> > > On Tue, Oct 17, 2023 at 11:11 AM Xuan Zhuo <xuanzhuo@linux.alibaba.com> wrote:
> > > >
> > > > On Tue, 17 Oct 2023 10:53:44 +0800, Jason Wang <jasowang@redhat.com> wrote:
> > > > > On Mon, Oct 16, 2023 at 8:00 PM Xuan Zhuo <xuanzhuo@linux.alibaba.com> wrote:
> > > > > >
> > > > > > ## AF_XDP
> > > > > >
> > > > > > XDP socket(AF_XDP) is an excellent bypass kernel network framework. The zero
> > > > > > copy feature of xsk (XDP socket) needs to be supported by the driver. The
> > > > > > performance of zero copy is very good. mlx5 and intel ixgbe already support
> > > > > > this feature, This patch set allows virtio-net to support xsk's zerocopy xmit
> > > > > > feature.
> > > > > >
> > > > > > At present, we have completed some preparation:
> > > > > >
> > > > > > 1. vq-reset (virtio spec and kernel code)
> > > > > > 2. virtio-core premapped dma
> > > > > > 3. virtio-net xdp refactor
> > > > > >
> > > > > > So it is time for Virtio-Net to complete the support for the XDP Socket
> > > > > > Zerocopy.
> > > > > >
> > > > > > Virtio-net can not increase the queue num at will, so xsk shares the queue with
> > > > > > kernel.
> > > > > >
> > > > > > On the other hand, Virtio-Net does not support generate interrupt from driver
> > > > > > manually, so when we wakeup tx xmit, we used some tips. If the CPU run by TX
> > > > > > NAPI last time is other CPUs, use IPI to wake up NAPI on the remote CPU. If it
> > > > > > is also the local CPU, then we wake up napi directly.
> > > > > >
> > > > > > This patch set includes some refactor to the virtio-net to let that to support
> > > > > > AF_XDP.
> > > > > >
> > > > > > ## performance
> > > > > >
> > > > > > ENV: Qemu with vhost-user(polling mode).
> > > > > >
> > > > > > Sockperf: https://github.com/Mellanox/sockperf
> > > > > > I use this tool to send udp packet by kernel syscall.
> > > > > >
> > > > > > xmit command: sockperf tp -i 10.0.3.1 -t 1000
> > > > > >
> > > > > > I write a tool that sends udp packets or recvs udp packets by AF_XDP.
> > > > > >
> > > > > >                   | Guest APP CPU |Guest Softirq CPU | UDP PPS
> > > > > > ------------------|---------------|------------------|------------
> > > > > > xmit by syscall   |   100%        |                  |   676,915
> > > > > > xmit by xsk       |   59.1%       |   100%           | 5,447,168
> > > > > > recv by syscall   |   60%         |   100%           |   932,288
> > > > > > recv by xsk       |   35.7%       |   100%           | 3,343,168
> > > > >
> > > > > Any chance we can get a testpmd result (which I guess should be better
> > > > > than PPS above)?
> > > >
> > > > Do you mean testpmd + DPDK + AF_XDP?
> > >
> > > Yes.
> > >
> > > >
> > > > Yes. This is probably better because my tool does more work. That is not a
> > > > complete testing tool used by our business.
> > >
> > > Probably, but it would be appealing for others. Especially considering
> > > DPDK supports AF_XDP PMD now.
> >
> > OK.
> >
> > Let me try.
> >
> > But could you start to review firstly?
>
> Yes, it's in my todo list.

I spoke too fast. If it doesn't take too long, I would wait for the testpmd
result first, as with the netdim series. One reason is that I remember AF_XDP
claims only a 10% to 20% loss compared to wire speed, so I'd expect it to be
much faster. I vaguely remember that even vhost can give us more than 3M PPS
if we disable SMAP, so the numbers here are not as impressive as expected.

Thanks

>
> >
> >
> > >
> > > >
> > > > What I noticed is that the hotspot is the driver writing virtio desc. Because
> > > > the device is in busy mode. So there is race between driver and device.
> > > > So I modified the virtio core and lazily updated avail idx. Then pps can reach
> > > > 10,000,000.
> > >
> > > Care to post a draft for this?
> >
> > YES, I is thinking for this.
> > But maybe that is just work for split. The packed mode has some troubles.
>
> Ok.
>
> Thanks
>
> >
> > Thanks.
> >
> > >
> > > Thanks
> > >
> > > >
> > > > Thanks.
> > > >
> > > > >
> > > > > Thanks
> > > > >
> > > > > >
> > > > > > ## maintain
> > > > > >
> > > > > > I am currently a reviewer for virtio-net. I commit to maintain AF_XDP support in
> > > > > > virtio-net.
> > > > > >
> > > > > > Please review.
> > > > > >
> > > > > > Thanks.
> > > > > >
> > > > > > v1:
> > > > > >     1. remove two virtio commits. Push this patchset to net-next
> > > > > >     2. squash "virtio_net: virtnet_poll_tx support rescheduled" to xsk: support tx
> > > > > >     3. fix some warnings
> > > > > >
> > > > > > Xuan Zhuo (19):
> > > > > >   virtio_net: rename free_old_xmit_skbs to free_old_xmit
> > > > > >   virtio_net: unify the code for recycling the xmit ptr
> > > > > >   virtio_net: independent directory
> > > > > >   virtio_net: move to virtio_net.h
> > > > > >   virtio_net: add prefix virtnet to all struct/api inside virtio_net.h
> > > > > >   virtio_net: separate virtnet_rx_resize()
> > > > > >   virtio_net: separate virtnet_tx_resize()
> > > > > >   virtio_net: sq support premapped mode
> > > > > >   virtio_net: xsk: bind/unbind xsk
> > > > > >   virtio_net: xsk: prevent disable tx napi
> > > > > >   virtio_net: xsk: tx: support tx
> > > > > >   virtio_net: xsk: tx: support wakeup
> > > > > >   virtio_net: xsk: tx: virtnet_free_old_xmit() distinguishes xsk buffer
> > > > > >   virtio_net: xsk: tx: virtnet_sq_free_unused_buf() check xsk buffer
> > > > > >   virtio_net: xsk: rx: introduce add_recvbuf_xsk()
> > > > > >   virtio_net: xsk: rx: introduce receive_xsk() to recv xsk buffer
> > > > > >   virtio_net: xsk: rx: virtnet_rq_free_unused_buf() check xsk buffer
> > > > > >   virtio_net: update tx timeout record
> > > > > >   virtio_net: xdp_features add NETDEV_XDP_ACT_XSK_ZEROCOPY
> > > > > >
> > > > > >  MAINTAINERS                                 |   2 +-
> > > > > >  drivers/net/Kconfig                         |   8 +-
> > > > > >  drivers/net/Makefile                        |   2 +-
> > > > > >  drivers/net/virtio/Kconfig                  |  13 +
> > > > > >  drivers/net/virtio/Makefile                 |   8 +
> > > > > >  drivers/net/{virtio_net.c => virtio/main.c} | 652 +++++++++-----------
> > > > > >  drivers/net/virtio/virtio_net.h             | 359 +++++++++++
> > > > > >  drivers/net/virtio/xsk.c                    | 545 ++++++++++++++++
> > > > > >  drivers/net/virtio/xsk.h                    |  32 +
> > > > > >  9 files changed, 1247 insertions(+), 374 deletions(-)
> > > > > >  create mode 100644 drivers/net/virtio/Kconfig
> > > > > >  create mode 100644 drivers/net/virtio/Makefile
> > > > > >  rename drivers/net/{virtio_net.c => virtio/main.c} (91%)
> > > > > >  create mode 100644 drivers/net/virtio/virtio_net.h
> > > > > >  create mode 100644 drivers/net/virtio/xsk.c
> > > > > >  create mode 100644 drivers/net/virtio/xsk.h
> > > > > >
> > > > > > --
> > > > > > 2.32.0.3.g01195cf9f
> > > > > >
> > > > >
> > > >
> > >
> >



* Re: [PATCH net-next v1 00/19] virtio-net: support AF_XDP zero copy
  2023-10-17  5:27           ` Jason Wang
@ 2023-10-17  6:06             ` Xuan Zhuo
  2023-10-17  6:26               ` Jason Wang
  0 siblings, 1 reply; 66+ messages in thread
From: Xuan Zhuo @ 2023-10-17  6:06 UTC (permalink / raw)
  To: Jason Wang
  Cc: netdev, David S. Miller, Eric Dumazet, Jakub Kicinski,
	Paolo Abeni, Michael S. Tsirkin, Alexei Starovoitov,
	Daniel Borkmann, Jesper Dangaard Brouer, John Fastabend,
	virtualization, bpf

On Tue, 17 Oct 2023 13:27:47 +0800, Jason Wang <jasowang@redhat.com> wrote:
> On Tue, Oct 17, 2023 at 11:28 AM Jason Wang <jasowang@redhat.com> wrote:
> >
> > On Tue, Oct 17, 2023 at 11:26 AM Xuan Zhuo <xuanzhuo@linux.alibaba.com> wrote:
> > >
> > > On Tue, 17 Oct 2023 11:20:41 +0800, Jason Wang <jasowang@redhat.com> wrote:
> > > > On Tue, Oct 17, 2023 at 11:11 AM Xuan Zhuo <xuanzhuo@linux.alibaba.com> wrote:
> > > > >
> > > > > On Tue, 17 Oct 2023 10:53:44 +0800, Jason Wang <jasowang@redhat.com> wrote:
> > > > > > On Mon, Oct 16, 2023 at 8:00 PM Xuan Zhuo <xuanzhuo@linux.alibaba.com> wrote:
> > > > > > >
> > > > > > > ## AF_XDP
> > > > > > >
> > > > > > > XDP socket(AF_XDP) is an excellent bypass kernel network framework. The zero
> > > > > > > copy feature of xsk (XDP socket) needs to be supported by the driver. The
> > > > > > > performance of zero copy is very good. mlx5 and intel ixgbe already support
> > > > > > > this feature, This patch set allows virtio-net to support xsk's zerocopy xmit
> > > > > > > feature.
> > > > > > >
> > > > > > > At present, we have completed some preparation:
> > > > > > >
> > > > > > > 1. vq-reset (virtio spec and kernel code)
> > > > > > > 2. virtio-core premapped dma
> > > > > > > 3. virtio-net xdp refactor
> > > > > > >
> > > > > > > So it is time for Virtio-Net to complete the support for the XDP Socket
> > > > > > > Zerocopy.
> > > > > > >
> > > > > > > Virtio-net can not increase the queue num at will, so xsk shares the queue with
> > > > > > > kernel.
> > > > > > >
> > > > > > > On the other hand, Virtio-Net does not support generate interrupt from driver
> > > > > > > manually, so when we wakeup tx xmit, we used some tips. If the CPU run by TX
> > > > > > > NAPI last time is other CPUs, use IPI to wake up NAPI on the remote CPU. If it
> > > > > > > is also the local CPU, then we wake up napi directly.
> > > > > > >
> > > > > > > This patch set includes some refactor to the virtio-net to let that to support
> > > > > > > AF_XDP.
> > > > > > >
> > > > > > > ## performance
> > > > > > >
> > > > > > > ENV: Qemu with vhost-user(polling mode).
> > > > > > >
> > > > > > > Sockperf: https://github.com/Mellanox/sockperf
> > > > > > > I use this tool to send udp packet by kernel syscall.
> > > > > > >
> > > > > > > xmit command: sockperf tp -i 10.0.3.1 -t 1000
> > > > > > >
> > > > > > > I write a tool that sends udp packets or recvs udp packets by AF_XDP.
> > > > > > >
> > > > > > >                   | Guest APP CPU |Guest Softirq CPU | UDP PPS
> > > > > > > ------------------|---------------|------------------|------------
> > > > > > > xmit by syscall   |   100%        |                  |   676,915
> > > > > > > xmit by xsk       |   59.1%       |   100%           | 5,447,168
> > > > > > > recv by syscall   |   60%         |   100%           |   932,288
> > > > > > > recv by xsk       |   35.7%       |   100%           | 3,343,168
> > > > > >
> > > > > > Any chance we can get a testpmd result (which I guess should be better
> > > > > > than PPS above)?
> > > > >
> > > > > Do you mean testpmd + DPDK + AF_XDP?
> > > >
> > > > Yes.
> > > >
> > > > >
> > > > > Yes. This is probably better because my tool does more work. That is not a
> > > > > complete testing tool used by our business.
> > > >
> > > > Probably, but it would be appealing for others. Especially considering
> > > > DPDK supports AF_XDP PMD now.
> > >
> > > OK.
> > >
> > > Let me try.
> > >
> > > But could you start reviewing first?
> >
> > Yes, it's in my todo list.
>
> I spoke too soon: I think if it doesn't take too long, I would
> wait for the result first, as with the netdim series. One reason is that I
> remember AF_XDP claims only a 10% to 20% loss compared to wire speed, so
> I'd expect it to be much faster. I vaguely remember that even a vhost
> can give us more than 3M PPS if we disable SMAP, so the numbers here
> are not as impressive as expected.


What is SMAP? Could you give me more information?

So if we take 3M as the wire speed, you expect the result
to reach 2.8M pps/core, right?
Now the recv result is 2.5M (2,463,646 = 3,343,168 / 1.357) pps/core.
Do you think there is a huge gap?
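For clarity, that per-core figure just normalizes the measured PPS by the total
CPU consumed (100% softirq plus 35.7% application CPU, i.e. about 1.357 cores);
a minimal sketch of the arithmetic:

```python
# Normalize measured PPS by the number of CPU cores consumed.
# Figures are taken from the "recv by xsk" row in the table above.
pps = 3_343_168      # measured UDP PPS
cores = 1.0 + 0.357  # 100% softirq CPU + 35.7% guest app CPU

pps_per_core = pps / cores
print(round(pps_per_core))  # about 2,463,646 pps/core, i.e. ~2.5M
```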

My tool builds udp packets and looks up routes, so it takes more CPU.

I am confused. Is there something I misunderstood?

Thanks.

>
> Thanks
>
> >
> > >
> > >
> > > >
> > > > >
> > > > > What I noticed is that the hotspot is the driver writing virtio desc. Because
> > > > > the device is in busy mode. So there is race between driver and device.
> > > > > So I modified the virtio core and lazily updated avail idx. Then pps can reach
> > > > > 10,000,000.
> > > >
> > > > Care to post a draft for this?
> > >
> > > YES, I am thinking about this.
> > > But maybe that only works for split. The packed mode has some troubles.
> >
> > Ok.
> >
> > Thanks
> >
> > >
> > > Thanks.
> > >
> > > >
> > > > Thanks
> > > >
> > > > >
> > > > > Thanks.
> > > > >
> > > > > >
> > > > > > Thanks
> > > > > >
> > > > > > >
> > > > > > > ## maintain
> > > > > > >
> > > > > > > I am currently a reviewer for virtio-net. I commit to maintain AF_XDP support in
> > > > > > > virtio-net.
> > > > > > >
> > > > > > > Please review.
> > > > > > >
> > > > > > > Thanks.
> > > > > > >
> > > > > > > v1:
> > > > > > >     1. remove two virtio commits. Push this patchset to net-next
> > > > > > >     2. squash "virtio_net: virtnet_poll_tx support rescheduled" to xsk: support tx
> > > > > > >     3. fix some warnings
> > > > > > >
> > > > > > > Xuan Zhuo (19):
> > > > > > >   virtio_net: rename free_old_xmit_skbs to free_old_xmit
> > > > > > >   virtio_net: unify the code for recycling the xmit ptr
> > > > > > >   virtio_net: independent directory
> > > > > > >   virtio_net: move to virtio_net.h
> > > > > > >   virtio_net: add prefix virtnet to all struct/api inside virtio_net.h
> > > > > > >   virtio_net: separate virtnet_rx_resize()
> > > > > > >   virtio_net: separate virtnet_tx_resize()
> > > > > > >   virtio_net: sq support premapped mode
> > > > > > >   virtio_net: xsk: bind/unbind xsk
> > > > > > >   virtio_net: xsk: prevent disable tx napi
> > > > > > >   virtio_net: xsk: tx: support tx
> > > > > > >   virtio_net: xsk: tx: support wakeup
> > > > > > >   virtio_net: xsk: tx: virtnet_free_old_xmit() distinguishes xsk buffer
> > > > > > >   virtio_net: xsk: tx: virtnet_sq_free_unused_buf() check xsk buffer
> > > > > > >   virtio_net: xsk: rx: introduce add_recvbuf_xsk()
> > > > > > >   virtio_net: xsk: rx: introduce receive_xsk() to recv xsk buffer
> > > > > > >   virtio_net: xsk: rx: virtnet_rq_free_unused_buf() check xsk buffer
> > > > > > >   virtio_net: update tx timeout record
> > > > > > >   virtio_net: xdp_features add NETDEV_XDP_ACT_XSK_ZEROCOPY
> > > > > > >
> > > > > > >  MAINTAINERS                                 |   2 +-
> > > > > > >  drivers/net/Kconfig                         |   8 +-
> > > > > > >  drivers/net/Makefile                        |   2 +-
> > > > > > >  drivers/net/virtio/Kconfig                  |  13 +
> > > > > > >  drivers/net/virtio/Makefile                 |   8 +
> > > > > > >  drivers/net/{virtio_net.c => virtio/main.c} | 652 +++++++++-----------
> > > > > > >  drivers/net/virtio/virtio_net.h             | 359 +++++++++++
> > > > > > >  drivers/net/virtio/xsk.c                    | 545 ++++++++++++++++
> > > > > > >  drivers/net/virtio/xsk.h                    |  32 +
> > > > > > >  9 files changed, 1247 insertions(+), 374 deletions(-)
> > > > > > >  create mode 100644 drivers/net/virtio/Kconfig
> > > > > > >  create mode 100644 drivers/net/virtio/Makefile
> > > > > > >  rename drivers/net/{virtio_net.c => virtio/main.c} (91%)
> > > > > > >  create mode 100644 drivers/net/virtio/virtio_net.h
> > > > > > >  create mode 100644 drivers/net/virtio/xsk.c
> > > > > > >  create mode 100644 drivers/net/virtio/xsk.h
> > > > > > >
> > > > > > > --
> > > > > > > 2.32.0.3.g01195cf9f
> > > > > > >
> > > > > >
> > > > >
> > > >
> > >
>

^ permalink raw reply	[flat|nested] 66+ messages in thread

* Re: [PATCH net-next v1 00/19] virtio-net: support AF_XDP zero copy
  2023-10-17  6:06             ` Xuan Zhuo
@ 2023-10-17  6:26               ` Jason Wang
  2023-10-17  6:43                 ` Xuan Zhuo
  0 siblings, 1 reply; 66+ messages in thread
From: Jason Wang @ 2023-10-17  6:26 UTC (permalink / raw)
  To: Xuan Zhuo
  Cc: netdev, David S. Miller, Eric Dumazet, Jakub Kicinski,
	Paolo Abeni, Michael S. Tsirkin, Alexei Starovoitov,
	Daniel Borkmann, Jesper Dangaard Brouer, John Fastabend,
	virtualization, bpf

On Tue, Oct 17, 2023 at 2:17 PM Xuan Zhuo <xuanzhuo@linux.alibaba.com> wrote:
>
> On Tue, 17 Oct 2023 13:27:47 +0800, Jason Wang <jasowang@redhat.com> wrote:
> > On Tue, Oct 17, 2023 at 11:28 AM Jason Wang <jasowang@redhat.com> wrote:
> > >
> > > On Tue, Oct 17, 2023 at 11:26 AM Xuan Zhuo <xuanzhuo@linux.alibaba.com> wrote:
> > > >
> > > > On Tue, 17 Oct 2023 11:20:41 +0800, Jason Wang <jasowang@redhat.com> wrote:
> > > > > On Tue, Oct 17, 2023 at 11:11 AM Xuan Zhuo <xuanzhuo@linux.alibaba.com> wrote:
> > > > > >
> > > > > > On Tue, 17 Oct 2023 10:53:44 +0800, Jason Wang <jasowang@redhat.com> wrote:
> > > > > > > On Mon, Oct 16, 2023 at 8:00 PM Xuan Zhuo <xuanzhuo@linux.alibaba.com> wrote:
> > > > > > > >
> > > > > > > > ## AF_XDP
> > > > > > > >
> > > > > > > > XDP socket(AF_XDP) is an excellent bypass kernel network framework. The zero
> > > > > > > > copy feature of xsk (XDP socket) needs to be supported by the driver. The
> > > > > > > > performance of zero copy is very good. mlx5 and intel ixgbe already support
> > > > > > > > this feature, This patch set allows virtio-net to support xsk's zerocopy xmit
> > > > > > > > feature.
> > > > > > > >
> > > > > > > > At present, we have completed some preparation:
> > > > > > > >
> > > > > > > > 1. vq-reset (virtio spec and kernel code)
> > > > > > > > 2. virtio-core premapped dma
> > > > > > > > 3. virtio-net xdp refactor
> > > > > > > >
> > > > > > > > So it is time for Virtio-Net to complete the support for the XDP Socket
> > > > > > > > Zerocopy.
> > > > > > > >
> > > > > > > > Virtio-net can not increase the queue num at will, so xsk shares the queue with
> > > > > > > > kernel.
> > > > > > > >
> > > > > > > > On the other hand, Virtio-Net does not support generate interrupt from driver
> > > > > > > > manually, so when we wakeup tx xmit, we used some tips. If the CPU run by TX
> > > > > > > > NAPI last time is other CPUs, use IPI to wake up NAPI on the remote CPU. If it
> > > > > > > > is also the local CPU, then we wake up napi directly.
> > > > > > > >
> > > > > > > > This patch set includes some refactor to the virtio-net to let that to support
> > > > > > > > AF_XDP.
> > > > > > > >
> > > > > > > > ## performance
> > > > > > > >
> > > > > > > > ENV: Qemu with vhost-user(polling mode).
> > > > > > > >
> > > > > > > > Sockperf: https://github.com/Mellanox/sockperf
> > > > > > > > I use this tool to send udp packet by kernel syscall.
> > > > > > > >
> > > > > > > > xmit command: sockperf tp -i 10.0.3.1 -t 1000
> > > > > > > >
> > > > > > > > I write a tool that sends udp packets or recvs udp packets by AF_XDP.
> > > > > > > >
> > > > > > > >                   | Guest APP CPU |Guest Softirq CPU | UDP PPS
> > > > > > > > ------------------|---------------|------------------|------------
> > > > > > > > xmit by syscall   |   100%        |                  |   676,915
> > > > > > > > xmit by xsk       |   59.1%       |   100%           | 5,447,168
> > > > > > > > recv by syscall   |   60%         |   100%           |   932,288
> > > > > > > > recv by xsk       |   35.7%       |   100%           | 3,343,168
> > > > > > >
> > > > > > > Any chance we can get a testpmd result (which I guess should be better
> > > > > > > than PPS above)?
> > > > > >
> > > > > > Do you mean testpmd + DPDK + AF_XDP?
> > > > >
> > > > > Yes.
> > > > >
> > > > > >
> > > > > > Yes. This is probably better because my tool does more work. That is not a
> > > > > > complete testing tool used by our business.
> > > > >
> > > > > Probably, but it would be appealing for others. Especially considering
> > > > > DPDK supports AF_XDP PMD now.
> > > >
> > > > OK.
> > > >
> > > > Let me try.
> > > >
> > > > But could you start reviewing first?
> > >
> > > Yes, it's in my todo list.
> >
> > I spoke too soon: I think if it doesn't take too long, I would
> > wait for the result first, as with the netdim series. One reason is that I
> > remember AF_XDP claims only a 10% to 20% loss compared to wire speed, so
> > I'd expect it to be much faster. I vaguely remember that even a vhost
> > can give us more than 3M PPS if we disable SMAP, so the numbers here
> > are not as impressive as expected.
>
>
> What is SMAP? Could you give me more information?

Supervisor Mode Access Prevention

Vhost suffers from this.

>
> So if we take 3M as the wire speed, you expect the result
> to reach 2.8M pps/core, right?

It's AF_XDP that claims to reach 80% of wire speed, if my memory is correct. So a
correct AF_XDP implementation should not fall too far behind that.

> Now the recv result is 2.5M(2463646) pps/core.
> Do you think there is a huge gap?

You never described your testing environment in detail. For example,
is this a virtual environment? What are the CPU model and frequency, etc.?

Because I have never seen a NIC whose wire speed is 3M.

>
> My tool builds udp packets and looks up routes, so it takes more CPU.

That's why I suggest you test raw PPS.

Thanks

>
> I am confused.
>
> Thanks.
>
> >
> > Thanks
> >
> > >
> > > >
> > > >
> > > > >
> > > > > >
> > > > > > What I noticed is that the hotspot is the driver writing virtio desc. Because
> > > > > > the device is in busy mode. So there is race between driver and device.
> > > > > > So I modified the virtio core and lazily updated avail idx. Then pps can reach
> > > > > > 10,000,000.
> > > > >
> > > > > Care to post a draft for this?
> > > >
> > > > YES, I am thinking about this.
> > > > But maybe that only works for split. The packed mode has some troubles.
> > >
> > > Ok.
> > >
> > > Thanks
> > >
> > > >
> > > > Thanks.
> > > >
> > > > >
> > > > > Thanks
> > > > >
> > > > > >
> > > > > > Thanks.
> > > > > >
> > > > > > >
> > > > > > > Thanks
> > > > > > >
> > > > > > > >
> > > > > > > > ## maintain
> > > > > > > >
> > > > > > > > I am currently a reviewer for virtio-net. I commit to maintain AF_XDP support in
> > > > > > > > virtio-net.
> > > > > > > >
> > > > > > > > Please review.
> > > > > > > >
> > > > > > > > Thanks.
> > > > > > > >
> > > > > > > > v1:
> > > > > > > >     1. remove two virtio commits. Push this patchset to net-next
> > > > > > > >     2. squash "virtio_net: virtnet_poll_tx support rescheduled" to xsk: support tx
> > > > > > > >     3. fix some warnings
> > > > > > > >
> > > > > > > > Xuan Zhuo (19):
> > > > > > > >   virtio_net: rename free_old_xmit_skbs to free_old_xmit
> > > > > > > >   virtio_net: unify the code for recycling the xmit ptr
> > > > > > > >   virtio_net: independent directory
> > > > > > > >   virtio_net: move to virtio_net.h
> > > > > > > >   virtio_net: add prefix virtnet to all struct/api inside virtio_net.h
> > > > > > > >   virtio_net: separate virtnet_rx_resize()
> > > > > > > >   virtio_net: separate virtnet_tx_resize()
> > > > > > > >   virtio_net: sq support premapped mode
> > > > > > > >   virtio_net: xsk: bind/unbind xsk
> > > > > > > >   virtio_net: xsk: prevent disable tx napi
> > > > > > > >   virtio_net: xsk: tx: support tx
> > > > > > > >   virtio_net: xsk: tx: support wakeup
> > > > > > > >   virtio_net: xsk: tx: virtnet_free_old_xmit() distinguishes xsk buffer
> > > > > > > >   virtio_net: xsk: tx: virtnet_sq_free_unused_buf() check xsk buffer
> > > > > > > >   virtio_net: xsk: rx: introduce add_recvbuf_xsk()
> > > > > > > >   virtio_net: xsk: rx: introduce receive_xsk() to recv xsk buffer
> > > > > > > >   virtio_net: xsk: rx: virtnet_rq_free_unused_buf() check xsk buffer
> > > > > > > >   virtio_net: update tx timeout record
> > > > > > > >   virtio_net: xdp_features add NETDEV_XDP_ACT_XSK_ZEROCOPY
> > > > > > > >
> > > > > > > >  MAINTAINERS                                 |   2 +-
> > > > > > > >  drivers/net/Kconfig                         |   8 +-
> > > > > > > >  drivers/net/Makefile                        |   2 +-
> > > > > > > >  drivers/net/virtio/Kconfig                  |  13 +
> > > > > > > >  drivers/net/virtio/Makefile                 |   8 +
> > > > > > > >  drivers/net/{virtio_net.c => virtio/main.c} | 652 +++++++++-----------
> > > > > > > >  drivers/net/virtio/virtio_net.h             | 359 +++++++++++
> > > > > > > >  drivers/net/virtio/xsk.c                    | 545 ++++++++++++++++
> > > > > > > >  drivers/net/virtio/xsk.h                    |  32 +
> > > > > > > >  9 files changed, 1247 insertions(+), 374 deletions(-)
> > > > > > > >  create mode 100644 drivers/net/virtio/Kconfig
> > > > > > > >  create mode 100644 drivers/net/virtio/Makefile
> > > > > > > >  rename drivers/net/{virtio_net.c => virtio/main.c} (91%)
> > > > > > > >  create mode 100644 drivers/net/virtio/virtio_net.h
> > > > > > > >  create mode 100644 drivers/net/virtio/xsk.c
> > > > > > > >  create mode 100644 drivers/net/virtio/xsk.h
> > > > > > > >
> > > > > > > > --
> > > > > > > > 2.32.0.3.g01195cf9f
> > > > > > > >
> > > > > > >
> > > > > >
> > > > >
> > > >
> >
>


^ permalink raw reply	[flat|nested] 66+ messages in thread

* Re: [PATCH net-next v1 00/19] virtio-net: support AF_XDP zero copy
  2023-10-17  6:26               ` Jason Wang
@ 2023-10-17  6:43                 ` Xuan Zhuo
  2023-10-17 11:19                   ` Xuan Zhuo
  2023-10-18  1:02                   ` Jason Wang
  0 siblings, 2 replies; 66+ messages in thread
From: Xuan Zhuo @ 2023-10-17  6:43 UTC (permalink / raw)
  To: Jason Wang
  Cc: netdev, David S. Miller, Eric Dumazet, Jakub Kicinski,
	Paolo Abeni, Michael S. Tsirkin, Alexei Starovoitov,
	Daniel Borkmann, Jesper Dangaard Brouer, John Fastabend,
	virtualization, bpf

On Tue, 17 Oct 2023 14:26:01 +0800, Jason Wang <jasowang@redhat.com> wrote:
> On Tue, Oct 17, 2023 at 2:17 PM Xuan Zhuo <xuanzhuo@linux.alibaba.com> wrote:
> >
> > On Tue, 17 Oct 2023 13:27:47 +0800, Jason Wang <jasowang@redhat.com> wrote:
> > > On Tue, Oct 17, 2023 at 11:28 AM Jason Wang <jasowang@redhat.com> wrote:
> > > >
> > > > On Tue, Oct 17, 2023 at 11:26 AM Xuan Zhuo <xuanzhuo@linux.alibaba.com> wrote:
> > > > >
> > > > > On Tue, 17 Oct 2023 11:20:41 +0800, Jason Wang <jasowang@redhat.com> wrote:
> > > > > > On Tue, Oct 17, 2023 at 11:11 AM Xuan Zhuo <xuanzhuo@linux.alibaba.com> wrote:
> > > > > > >
> > > > > > > On Tue, 17 Oct 2023 10:53:44 +0800, Jason Wang <jasowang@redhat.com> wrote:
> > > > > > > > On Mon, Oct 16, 2023 at 8:00 PM Xuan Zhuo <xuanzhuo@linux.alibaba.com> wrote:
> > > > > > > > >
> > > > > > > > > ## AF_XDP
> > > > > > > > >
> > > > > > > > > XDP socket(AF_XDP) is an excellent bypass kernel network framework. The zero
> > > > > > > > > copy feature of xsk (XDP socket) needs to be supported by the driver. The
> > > > > > > > > performance of zero copy is very good. mlx5 and intel ixgbe already support
> > > > > > > > > this feature, This patch set allows virtio-net to support xsk's zerocopy xmit
> > > > > > > > > feature.
> > > > > > > > >
> > > > > > > > > At present, we have completed some preparation:
> > > > > > > > >
> > > > > > > > > 1. vq-reset (virtio spec and kernel code)
> > > > > > > > > 2. virtio-core premapped dma
> > > > > > > > > 3. virtio-net xdp refactor
> > > > > > > > >
> > > > > > > > > So it is time for Virtio-Net to complete the support for the XDP Socket
> > > > > > > > > Zerocopy.
> > > > > > > > >
> > > > > > > > > Virtio-net can not increase the queue num at will, so xsk shares the queue with
> > > > > > > > > kernel.
> > > > > > > > >
> > > > > > > > > On the other hand, Virtio-Net does not support generate interrupt from driver
> > > > > > > > > manually, so when we wakeup tx xmit, we used some tips. If the CPU run by TX
> > > > > > > > > NAPI last time is other CPUs, use IPI to wake up NAPI on the remote CPU. If it
> > > > > > > > > is also the local CPU, then we wake up napi directly.
> > > > > > > > >
> > > > > > > > > This patch set includes some refactor to the virtio-net to let that to support
> > > > > > > > > AF_XDP.
> > > > > > > > >
> > > > > > > > > ## performance
> > > > > > > > >
> > > > > > > > > ENV: Qemu with vhost-user(polling mode).
> > > > > > > > >
> > > > > > > > > Sockperf: https://github.com/Mellanox/sockperf
> > > > > > > > > I use this tool to send udp packet by kernel syscall.
> > > > > > > > >
> > > > > > > > > xmit command: sockperf tp -i 10.0.3.1 -t 1000
> > > > > > > > >
> > > > > > > > > I write a tool that sends udp packets or recvs udp packets by AF_XDP.
> > > > > > > > >
> > > > > > > > >                   | Guest APP CPU |Guest Softirq CPU | UDP PPS
> > > > > > > > > ------------------|---------------|------------------|------------
> > > > > > > > > xmit by syscall   |   100%        |                  |   676,915
> > > > > > > > > xmit by xsk       |   59.1%       |   100%           | 5,447,168
> > > > > > > > > recv by syscall   |   60%         |   100%           |   932,288
> > > > > > > > > recv by xsk       |   35.7%       |   100%           | 3,343,168
> > > > > > > >
> > > > > > > > Any chance we can get a testpmd result (which I guess should be better
> > > > > > > > than PPS above)?
> > > > > > >
> > > > > > > Do you mean testpmd + DPDK + AF_XDP?
> > > > > >
> > > > > > Yes.
> > > > > >
> > > > > > >
> > > > > > > Yes. This is probably better because my tool does more work. That is not a
> > > > > > > complete testing tool used by our business.
> > > > > >
> > > > > > Probably, but it would be appealing for others. Especially considering
> > > > > > DPDK supports AF_XDP PMD now.
> > > > >
> > > > > OK.
> > > > >
> > > > > Let me try.
> > > > >
> > > > > But could you start reviewing first?
> > > >
> > > > Yes, it's in my todo list.
> > >
> > > I spoke too soon: I think if it doesn't take too long, I would
> > > wait for the result first, as with the netdim series. One reason is that I
> > > remember AF_XDP claims only a 10% to 20% loss compared to wire speed, so
> > > I'd expect it to be much faster. I vaguely remember that even a vhost
> > > can give us more than 3M PPS if we disable SMAP, so the numbers here
> > > are not as impressive as expected.
> >
> >
> > > What is SMAP? Could you give me more information?
>
> Supervisor Mode Access Prevention
>
> Vhost suffers from this.
>
> >
> > So if we take 3M as the wire speed, you expect the result
> > to reach 2.8M pps/core, right?
>
> It's AF_XDP that claims to reach 80% of wire speed, if my memory is correct. So a
> correct AF_XDP implementation should not fall too far behind that.
>
> > Now the recv result is 2.5M(2463646) pps/core.
> > Do you think there is a huge gap?
>
> You never described your testing environment in detail. For example,
> is this a virtual environment? What are the CPU model and frequency, etc.?
>
> Because I have never seen a NIC whose wire speed is 3M.
>
> >
> > My tool builds udp packets and looks up routes, so it takes more CPU.
>
> That's why I suggest you test raw PPS.

OK. Let's align on some info.

1. My test env is vhost-user: Qemu + vhost-user (polling mode).
   I do not use DPDK, because I ran into some trouble with it.
   I use VAPP (https://github.com/fengidri/vapp) as the vhost-user device.
   It has two busy-polling threads, one for tx and one for rx.
   The tx thread consumes the tx ring and drops the packets.
   The rx thread puts the packets into the rx ring.

2. My Host CPU: Intel(R) Xeon(R) Platinum 8163 CPU @ 2.50GHz

3. From http://fast.dpdk.org/doc/perf/DPDK_23_03_Intel_virtio_performance_report.pdf
   I think we can agree that the vhost max speed is 8.5 MPPS.
   Is that ok?
   And the expected AF_XDP pps is about 6 MPPS.

4. About the raw PPS, I agree. I will test with testpmd.
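As a sanity check on point 3, applying the roughly-80%-of-wire-speed claim for
AF_XDP (the 80% ratio is an approximation taken from the earlier discussion, not
a measured number) to the 8.5 MPPS vhost baseline gives:

```python
# Rough expected AF_XDP zerocopy throughput, assuming AF_XDP reaches
# about 80% of the underlying device speed (approximate figure from
# the discussion above, not a measured number).
vhost_max_mpps = 8.5  # vhost speed from the DPDK virtio perf report
af_xdp_ratio = 0.8    # assumed "80% of wire speed" claim

expected_mpps = vhost_max_mpps * af_xdp_ratio
print(f"~{expected_mpps:.1f} MPPS")  # ~6.8 MPPS
```

which is in the same ballpark as the "about 6 MPPS" figure above.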


Thanks.


>
> Thanks
>
> >
> > I am confused.
> >
> > Thanks.
> >
> > >
> > > Thanks
> > >
> > > >
> > > > >
> > > > >
> > > > > >
> > > > > > >
> > > > > > > What I noticed is that the hotspot is the driver writing virtio desc. Because
> > > > > > > the device is in busy mode. So there is race between driver and device.
> > > > > > > So I modified the virtio core and lazily updated avail idx. Then pps can reach
> > > > > > > 10,000,000.
> > > > > >
> > > > > > Care to post a draft for this?
> > > > >
> > > > > YES, I am thinking about this.
> > > > > But maybe that only works for split. The packed mode has some troubles.
> > > >
> > > > Ok.
> > > >
> > > > Thanks
> > > >
> > > > >
> > > > > Thanks.
> > > > >
> > > > > >
> > > > > > Thanks
> > > > > >
> > > > > > >
> > > > > > > Thanks.
> > > > > > >
> > > > > > > >
> > > > > > > > Thanks
> > > > > > > >
> > > > > > > > >
> > > > > > > > > ## maintain
> > > > > > > > >
> > > > > > > > > I am currently a reviewer for virtio-net. I commit to maintain AF_XDP support in
> > > > > > > > > virtio-net.
> > > > > > > > >
> > > > > > > > > Please review.
> > > > > > > > >
> > > > > > > > > Thanks.
> > > > > > > > >
> > > > > > > > > v1:
> > > > > > > > >     1. remove two virtio commits. Push this patchset to net-next
> > > > > > > > >     2. squash "virtio_net: virtnet_poll_tx support rescheduled" to xsk: support tx
> > > > > > > > >     3. fix some warnings
> > > > > > > > >
> > > > > > > > > Xuan Zhuo (19):
> > > > > > > > >   virtio_net: rename free_old_xmit_skbs to free_old_xmit
> > > > > > > > >   virtio_net: unify the code for recycling the xmit ptr
> > > > > > > > >   virtio_net: independent directory
> > > > > > > > >   virtio_net: move to virtio_net.h
> > > > > > > > >   virtio_net: add prefix virtnet to all struct/api inside virtio_net.h
> > > > > > > > >   virtio_net: separate virtnet_rx_resize()
> > > > > > > > >   virtio_net: separate virtnet_tx_resize()
> > > > > > > > >   virtio_net: sq support premapped mode
> > > > > > > > >   virtio_net: xsk: bind/unbind xsk
> > > > > > > > >   virtio_net: xsk: prevent disable tx napi
> > > > > > > > >   virtio_net: xsk: tx: support tx
> > > > > > > > >   virtio_net: xsk: tx: support wakeup
> > > > > > > > >   virtio_net: xsk: tx: virtnet_free_old_xmit() distinguishes xsk buffer
> > > > > > > > >   virtio_net: xsk: tx: virtnet_sq_free_unused_buf() check xsk buffer
> > > > > > > > >   virtio_net: xsk: rx: introduce add_recvbuf_xsk()
> > > > > > > > >   virtio_net: xsk: rx: introduce receive_xsk() to recv xsk buffer
> > > > > > > > >   virtio_net: xsk: rx: virtnet_rq_free_unused_buf() check xsk buffer
> > > > > > > > >   virtio_net: update tx timeout record
> > > > > > > > >   virtio_net: xdp_features add NETDEV_XDP_ACT_XSK_ZEROCOPY
> > > > > > > > >
> > > > > > > > >  MAINTAINERS                                 |   2 +-
> > > > > > > > >  drivers/net/Kconfig                         |   8 +-
> > > > > > > > >  drivers/net/Makefile                        |   2 +-
> > > > > > > > >  drivers/net/virtio/Kconfig                  |  13 +
> > > > > > > > >  drivers/net/virtio/Makefile                 |   8 +
> > > > > > > > >  drivers/net/{virtio_net.c => virtio/main.c} | 652 +++++++++-----------
> > > > > > > > >  drivers/net/virtio/virtio_net.h             | 359 +++++++++++
> > > > > > > > >  drivers/net/virtio/xsk.c                    | 545 ++++++++++++++++
> > > > > > > > >  drivers/net/virtio/xsk.h                    |  32 +
> > > > > > > > >  9 files changed, 1247 insertions(+), 374 deletions(-)
> > > > > > > > >  create mode 100644 drivers/net/virtio/Kconfig
> > > > > > > > >  create mode 100644 drivers/net/virtio/Makefile
> > > > > > > > >  rename drivers/net/{virtio_net.c => virtio/main.c} (91%)
> > > > > > > > >  create mode 100644 drivers/net/virtio/virtio_net.h
> > > > > > > > >  create mode 100644 drivers/net/virtio/xsk.c
> > > > > > > > >  create mode 100644 drivers/net/virtio/xsk.h
> > > > > > > > >
> > > > > > > > > --
> > > > > > > > > 2.32.0.3.g01195cf9f
> > > > > > > > >
> > > > > > > >
> > > > > > >
> > > > > >
> > > > >
> > >
> >
>

^ permalink raw reply	[flat|nested] 66+ messages in thread

* Re: [PATCH net-next v1 00/19] virtio-net: support AF_XDP zero copy
  2023-10-17  6:43                 ` Xuan Zhuo
@ 2023-10-17 11:19                   ` Xuan Zhuo
  2023-10-18  2:46                     ` Jason Wang
  2023-10-18  3:32                     ` Xuan Zhuo
  2023-10-18  1:02                   ` Jason Wang
  1 sibling, 2 replies; 66+ messages in thread
From: Xuan Zhuo @ 2023-10-17 11:19 UTC (permalink / raw)
  To: Xuan Zhuo
  Cc: netdev, David S. Miller, Eric Dumazet, Jakub Kicinski,
	Paolo Abeni, Michael S. Tsirkin, Alexei Starovoitov,
	Daniel Borkmann, Jesper Dangaard Brouer, John Fastabend,
	virtualization, bpf, Jason Wang

On Tue, 17 Oct 2023 14:43:33 +0800, Xuan Zhuo <xuanzhuo@linux.alibaba.com> wrote:
> On Tue, 17 Oct 2023 14:26:01 +0800, Jason Wang <jasowang@redhat.com> wrote:
> > On Tue, Oct 17, 2023 at 2:17 PM Xuan Zhuo <xuanzhuo@linux.alibaba.com> wrote:
> > >
> > > On Tue, 17 Oct 2023 13:27:47 +0800, Jason Wang <jasowang@redhat.com> wrote:
> > > > On Tue, Oct 17, 2023 at 11:28 AM Jason Wang <jasowang@redhat.com> wrote:
> > > > >
> > > > > On Tue, Oct 17, 2023 at 11:26 AM Xuan Zhuo <xuanzhuo@linux.alibaba.com> wrote:
> > > > > >
> > > > > > On Tue, 17 Oct 2023 11:20:41 +0800, Jason Wang <jasowang@redhat.com> wrote:
> > > > > > > On Tue, Oct 17, 2023 at 11:11 AM Xuan Zhuo <xuanzhuo@linux.alibaba.com> wrote:
> > > > > > > >
> > > > > > > > On Tue, 17 Oct 2023 10:53:44 +0800, Jason Wang <jasowang@redhat.com> wrote:
> > > > > > > > > On Mon, Oct 16, 2023 at 8:00 PM Xuan Zhuo <xuanzhuo@linux.alibaba.com> wrote:
> > > > > > > > > >
> > > > > > > > > > ## AF_XDP
> > > > > > > > > >
> > > > > > > > > > XDP socket(AF_XDP) is an excellent bypass kernel network framework. The zero
> > > > > > > > > > copy feature of xsk (XDP socket) needs to be supported by the driver. The
> > > > > > > > > > performance of zero copy is very good. mlx5 and intel ixgbe already support
> > > > > > > > > > this feature, This patch set allows virtio-net to support xsk's zerocopy xmit
> > > > > > > > > > feature.
> > > > > > > > > >
> > > > > > > > > > At present, we have completed some preparation:
> > > > > > > > > >
> > > > > > > > > > 1. vq-reset (virtio spec and kernel code)
> > > > > > > > > > 2. virtio-core premapped dma
> > > > > > > > > > 3. virtio-net xdp refactor
> > > > > > > > > >
> > > > > > > > > > So it is time for Virtio-Net to complete the support for the XDP Socket
> > > > > > > > > > Zerocopy.
> > > > > > > > > >
> > > > > > > > > > Virtio-net can not increase the queue num at will, so xsk shares the queue with
> > > > > > > > > > kernel.
> > > > > > > > > >
> > > > > > > > > > On the other hand, Virtio-Net does not support generate interrupt from driver
> > > > > > > > > > manually, so when we wakeup tx xmit, we used some tips. If the CPU run by TX
> > > > > > > > > > NAPI last time is other CPUs, use IPI to wake up NAPI on the remote CPU. If it
> > > > > > > > > > is also the local CPU, then we wake up napi directly.
> > > > > > > > > >
> > > > > > > > > > This patch set includes some refactor to the virtio-net to let that to support
> > > > > > > > > > AF_XDP.
> > > > > > > > > >
> > > > > > > > > > ## performance
> > > > > > > > > >
> > > > > > > > > > ENV: Qemu with vhost-user(polling mode).
> > > > > > > > > >
> > > > > > > > > > Sockperf: https://github.com/Mellanox/sockperf
> > > > > > > > > > I use this tool to send udp packet by kernel syscall.
> > > > > > > > > >
> > > > > > > > > > xmit command: sockperf tp -i 10.0.3.1 -t 1000
> > > > > > > > > >
> > > > > > > > > > I write a tool that sends udp packets or recvs udp packets by AF_XDP.
> > > > > > > > > >
> > > > > > > > > >                   | Guest APP CPU |Guest Softirq CPU | UDP PPS
> > > > > > > > > > ------------------|---------------|------------------|------------
> > > > > > > > > > xmit by syscall   |   100%        |                  |   676,915
> > > > > > > > > > xmit by xsk       |   59.1%       |   100%           | 5,447,168
> > > > > > > > > > recv by syscall   |   60%         |   100%           |   932,288
> > > > > > > > > > recv by xsk       |   35.7%       |   100%           | 3,343,168
> > > > > > > > >
> > > > > > > > > Any chance we can get a testpmd result (which I guess should be better
> > > > > > > > > than PPS above)?
> > > > > > > >
> > > > > > > > Do you mean testpmd + DPDK + AF_XDP?
> > > > > > >
> > > > > > > Yes.
> > > > > > >
> > > > > > > >
> > > > > > > > Yes. This is probably better because my tool does more work. That is not a
> > > > > > > > complete testing tool used by our business.
> > > > > > >
> > > > > > > Probably, but it would be appealing for others. Especially considering
> > > > > > > DPDK supports AF_XDP PMD now.
> > > > > >
> > > > > > OK.
> > > > > >
> > > > > > Let me try.
> > > > > >
> > > > > > But could you start to review firstly?
> > > > >
> > > > > Yes, it's in my todo list.
> > > >
> > > > Speaking too fast, I think if it doesn't take too long time, I would
> > > > wait for the result first as netdim series. One reason is that I
> > > > remember claims to be only 10% to 20% loss comparing to wire speed, so
> > > > I'd expect it should be much faster. I vaguely remember, even a vhost
> > > > can gives us more than 3M PPS if we disable SMAP, so the numbers here
> > > > are not as impressive as expected.
> > >
> > >
> > > What is SMAP? Cloud you give me more info?
> >
> > Supervisor Mode Access Prevention
> >
> > Vhost suffers from this.
> >
> > >
> > > So if we think the 3M as the wire speed, you expect the result
> > > can reach 2.8M pps/core, right?
> >
> > It's AF_XDP that claims to be 80% if my memory is correct. So a
> > correct AF_XDP implementation should not sit behind this too much.
> >
> > > Now the recv result is 2.5M(2463646) pps/core.
> > > Do you think there is a huge gap?
> >
> > You never describe your testing environment in details. For example,
> > is this a virtual environment? What's the CPU model and frequency etc.
> >
> > Because I never see a NIC whose wire speed is 3M.
> >
> > >
> > > My tool makes udp packet and lookup route, so it take more much cpu.
> >
> > That's why I suggest you to test raw PPS.
>
> OK. Let's align some info.
>
> 1. My test env is vhost-user. Qemu + vhost-user(polling mode).
>    I do not use the DPDK, because that there is some trouble for me.
>    I use the VAPP (https://github.com/fengidri/vapp) as the vhost-user device.
>    That has two threads all are busy mode for tx and rx.
>    tx thread consumes the tx ring and drop the packet.
>    rx thread put the packet to the rx ring.
>
> 2. My Host CPU: Intel(R) Xeon(R) Platinum 8163 CPU @ 2.50GHz
>
> 3. From this http://fast.dpdk.org/doc/perf/DPDK_23_03_Intel_virtio_performance_report.pdf
>    I think we can align that the vhost max speed is 8.5 MPPS.
>    Is that ok?
>    And the expected AF_XDP pps is about 6 MPPS.
>
> 4. About the raw PPS, I agree that. I will test with testpmd.
>

## testpmd command

./build/app/dpdk-testpmd -l 1-2 --no-pci --main-lcore=2 \
        --vdev net_af_xdp0,iface=ens5,queue_count=1,busy_budget=0 \
        --log-level=pmd.net.af_xdp:8 \
        -- -i -a --nb-cores=1 --rxq=1 --txq=1 --forward-mode=macswap

## work without the following patch[0]

testpmd> show port stats all

  ######################## NIC statistics for port 0  ########################
  RX-packets: 3615824336 RX-missed: 0          RX-bytes:  202486162816
  RX-errors: 0
  RX-nombuf:  0
  TX-packets: 3615795592 TX-errors: 20738      TX-bytes:  202484553152

  Throughput (since last show)
  Rx-pps:      3790446          Rx-bps:   1698120056
  Tx-pps:      3790446          Tx-bps:   1698120056
  ############################################################################


## work with the following patch[0]

testpmd> show port stats all

  ######################## NIC statistics for port 0  ########################
  RX-packets: 68152727   RX-missed: 0          RX-bytes:  3816552712
  RX-errors: 0
  RX-nombuf:  0
  TX-packets: 68114967   TX-errors: 33216      TX-bytes:  3814438152

  Throughput (since last show)
  Rx-pps:      6333196          Rx-bps:   2837272088
  Tx-pps:      6333227          Tx-bps:   2837285936
  ############################################################################

So the patch improves throughput from ~3.79 Mpps to ~6.33 Mpps (about 1.67x).

I searched the DPDK source and found that the DPDK virtio driver uses a similar
approach: it updates the avail idx once per burst rather than once per packet.

virtio_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
{
	[...]

	for (nb_tx = 0; nb_tx < nb_pkts; nb_tx++) {

		[...]

		/* Enqueue Packet buffers */
		virtqueue_enqueue_xmit(txvq, txm, slots, use_indirect,
			can_push, 0);
	}

	[...]

	if (likely(nb_tx)) {
-->		vq_update_avail_idx(vq);

		if (unlikely(virtqueue_kick_prepare(vq))) {
			virtqueue_notify(vq);
			PMD_TX_LOG(DEBUG, "Notified backend after xmit");
		}
	}
}

## patch[0]

diff --git a/drivers/virtio/virtio_ring.c b/drivers/virtio/virtio_ring.c
index 51d8f3299c10..cfe556b5d88f 100644
--- a/drivers/virtio/virtio_ring.c
+++ b/drivers/virtio/virtio_ring.c
@@ -687,12 +687,7 @@ static inline int virtqueue_add_split(struct virtqueue *_vq,
        avail = vq->split.avail_idx_shadow & (vq->split.vring.num - 1);
        vq->split.vring.avail->ring[avail] = cpu_to_virtio16(_vq->vdev, head);

-       /* Descriptors and available array need to be set before we expose the
-        * new available array entries. */
-       virtio_wmb(vq->weak_barriers);
        vq->split.avail_idx_shadow++;
-       vq->split.vring.avail->idx = cpu_to_virtio16(_vq->vdev,
-                                               vq->split.avail_idx_shadow);
        vq->num_added++;

        pr_debug("Added buffer head %i to %p\n", head, vq);
@@ -700,8 +695,12 @@ static inline int virtqueue_add_split(struct virtqueue *_vq,

        /* This is very unlikely, but theoretically possible.  Kick
         * just in case. */
-       if (unlikely(vq->num_added == (1 << 16) - 1))
+       if (unlikely(vq->num_added == (1 << 16) - 1)) {
+               virtio_wmb(vq->weak_barriers);
+               vq->split.vring.avail->idx = cpu_to_virtio16(_vq->vdev,
+                                                            vq->split.avail_idx_shadow);
                virtqueue_kick(_vq);
+       }

        return 0;

@@ -742,6 +741,9 @@ static bool virtqueue_kick_prepare_split(struct virtqueue *_vq)
         * event. */
        virtio_mb(vq->weak_barriers);

+       vq->split.vring.avail->idx = cpu_to_virtio16(_vq->vdev,
+                                               vq->split.avail_idx_shadow);
+
        old = vq->split.avail_idx_shadow - vq->num_added;
        new = vq->split.avail_idx_shadow;
        vq->num_added = 0;

---------------

Thanks.


>
> Thanks.
>
>
> >
> > Thanks
> >
> > >
> > > I am confused.
> > >
> > >
> > > What is SMAP? Could you give me more information?
> > >
> > > So if we use 3M as the wire speed, you would expect the result to be 2.8M
> > > pps/core, right?
> > >
> > > Now the recv result is 2.5M (2463646 = 3,343,168/1.357) pps/core. Do you think
> > > the difference is big?
> > >
> > > My tool makes udp packets and looks up routes, so it requires more CPU.
> > >
> > > I'm confused. Is there something I misunderstood?
> > >
> > > Thanks.
> > >
> > > >
> > > > Thanks
> > > >
> > > > >
> > > > > >
> > > > > >
> > > > > > >
> > > > > > > >
> > > > > > > > What I noticed is that the hotspot is the driver writing virtio desc. Because
> > > > > > > > the device is in busy mode. So there is race between driver and device.
> > > > > > > > So I modified the virtio core and lazily updated avail idx. Then pps can reach
> > > > > > > > 10,000,000.
> > > > > > >
> > > > > > > Care to post a draft for this?
> > > > > >
> > > > > > YES, I is thinking for this.
> > > > > > But maybe that is just work for split. The packed mode has some troubles.
> > > > >
> > > > > Ok.
> > > > >
> > > > > Thanks
> > > > >
> > > > > >
> > > > > > Thanks.
> > > > > >
> > > > > > >
> > > > > > > Thanks
> > > > > > >
> > > > > > > >
> > > > > > > > Thanks.
> > > > > > > >
> > > > > > > > >
> > > > > > > > > Thanks
> > > > > > > > >
> > > > > > > > > >
> > > > > > > > > > ## maintain
> > > > > > > > > >
> > > > > > > > > > I am currently a reviewer for virtio-net. I commit to maintain AF_XDP support in
> > > > > > > > > > virtio-net.
> > > > > > > > > >
> > > > > > > > > > Please review.
> > > > > > > > > >
> > > > > > > > > > Thanks.
> > > > > > > > > >
> > > > > > > > > > v1:
> > > > > > > > > >     1. remove two virtio commits. Push this patchset to net-next
> > > > > > > > > >     2. squash "virtio_net: virtnet_poll_tx support rescheduled" to xsk: support tx
> > > > > > > > > >     3. fix some warnings
> > > > > > > > > >
> > > > > > > > > > Xuan Zhuo (19):
> > > > > > > > > >   virtio_net: rename free_old_xmit_skbs to free_old_xmit
> > > > > > > > > >   virtio_net: unify the code for recycling the xmit ptr
> > > > > > > > > >   virtio_net: independent directory
> > > > > > > > > >   virtio_net: move to virtio_net.h
> > > > > > > > > >   virtio_net: add prefix virtnet to all struct/api inside virtio_net.h
> > > > > > > > > >   virtio_net: separate virtnet_rx_resize()
> > > > > > > > > >   virtio_net: separate virtnet_tx_resize()
> > > > > > > > > >   virtio_net: sq support premapped mode
> > > > > > > > > >   virtio_net: xsk: bind/unbind xsk
> > > > > > > > > >   virtio_net: xsk: prevent disable tx napi
> > > > > > > > > >   virtio_net: xsk: tx: support tx
> > > > > > > > > >   virtio_net: xsk: tx: support wakeup
> > > > > > > > > >   virtio_net: xsk: tx: virtnet_free_old_xmit() distinguishes xsk buffer
> > > > > > > > > >   virtio_net: xsk: tx: virtnet_sq_free_unused_buf() check xsk buffer
> > > > > > > > > >   virtio_net: xsk: rx: introduce add_recvbuf_xsk()
> > > > > > > > > >   virtio_net: xsk: rx: introduce receive_xsk() to recv xsk buffer
> > > > > > > > > >   virtio_net: xsk: rx: virtnet_rq_free_unused_buf() check xsk buffer
> > > > > > > > > >   virtio_net: update tx timeout record
> > > > > > > > > >   virtio_net: xdp_features add NETDEV_XDP_ACT_XSK_ZEROCOPY
> > > > > > > > > >
> > > > > > > > > >  MAINTAINERS                                 |   2 +-
> > > > > > > > > >  drivers/net/Kconfig                         |   8 +-
> > > > > > > > > >  drivers/net/Makefile                        |   2 +-
> > > > > > > > > >  drivers/net/virtio/Kconfig                  |  13 +
> > > > > > > > > >  drivers/net/virtio/Makefile                 |   8 +
> > > > > > > > > >  drivers/net/{virtio_net.c => virtio/main.c} | 652 +++++++++-----------
> > > > > > > > > >  drivers/net/virtio/virtio_net.h             | 359 +++++++++++
> > > > > > > > > >  drivers/net/virtio/xsk.c                    | 545 ++++++++++++++++
> > > > > > > > > >  drivers/net/virtio/xsk.h                    |  32 +
> > > > > > > > > >  9 files changed, 1247 insertions(+), 374 deletions(-)
> > > > > > > > > >  create mode 100644 drivers/net/virtio/Kconfig
> > > > > > > > > >  create mode 100644 drivers/net/virtio/Makefile
> > > > > > > > > >  rename drivers/net/{virtio_net.c => virtio/main.c} (91%)
> > > > > > > > > >  create mode 100644 drivers/net/virtio/virtio_net.h
> > > > > > > > > >  create mode 100644 drivers/net/virtio/xsk.c
> > > > > > > > > >  create mode 100644 drivers/net/virtio/xsk.h
> > > > > > > > > >
> > > > > > > > > > --
> > > > > > > > > > 2.32.0.3.g01195cf9f
> > > > > > > > > >
> > > > > > > > >
> > > > > > > >
> > > > > > >
> > > > > >
> > > >
> > >
> >
>


* Re: [PATCH net-next v1 00/19] virtio-net: support AF_XDP zero copy
  2023-10-17  6:43                 ` Xuan Zhuo
  2023-10-17 11:19                   ` Xuan Zhuo
@ 2023-10-18  1:02                   ` Jason Wang
  1 sibling, 0 replies; 66+ messages in thread
From: Jason Wang @ 2023-10-18  1:02 UTC (permalink / raw)
  To: Xuan Zhuo
  Cc: netdev, David S. Miller, Eric Dumazet, Jakub Kicinski,
	Paolo Abeni, Michael S. Tsirkin, Alexei Starovoitov,
	Daniel Borkmann, Jesper Dangaard Brouer, John Fastabend,
	virtualization, bpf

On Tue, Oct 17, 2023 at 3:00 PM Xuan Zhuo <xuanzhuo@linux.alibaba.com> wrote:
>
> On Tue, 17 Oct 2023 14:26:01 +0800, Jason Wang <jasowang@redhat.com> wrote:
> > On Tue, Oct 17, 2023 at 2:17 PM Xuan Zhuo <xuanzhuo@linux.alibaba.com> wrote:
> > >
> > > On Tue, 17 Oct 2023 13:27:47 +0800, Jason Wang <jasowang@redhat.com> wrote:
> > > > On Tue, Oct 17, 2023 at 11:28 AM Jason Wang <jasowang@redhat.com> wrote:
> > > > >
> > > > > On Tue, Oct 17, 2023 at 11:26 AM Xuan Zhuo <xuanzhuo@linux.alibaba.com> wrote:
> > > > > >
> > > > > > On Tue, 17 Oct 2023 11:20:41 +0800, Jason Wang <jasowang@redhat.com> wrote:
> > > > > > > On Tue, Oct 17, 2023 at 11:11 AM Xuan Zhuo <xuanzhuo@linux.alibaba.com> wrote:
> > > > > > > >
> > > > > > > > On Tue, 17 Oct 2023 10:53:44 +0800, Jason Wang <jasowang@redhat.com> wrote:
> > > > > > > > > On Mon, Oct 16, 2023 at 8:00 PM Xuan Zhuo <xuanzhuo@linux.alibaba.com> wrote:
> > > > > > > > > >
> > > > > > > > > > ## AF_XDP
> > > > > > > > > >
> > > > > > > > > > XDP socket(AF_XDP) is an excellent bypass kernel network framework. The zero
> > > > > > > > > > copy feature of xsk (XDP socket) needs to be supported by the driver. The
> > > > > > > > > > performance of zero copy is very good. mlx5 and intel ixgbe already support
> > > > > > > > > > this feature, This patch set allows virtio-net to support xsk's zerocopy xmit
> > > > > > > > > > feature.
> > > > > > > > > >
> > > > > > > > > > At present, we have completed some preparation:
> > > > > > > > > >
> > > > > > > > > > 1. vq-reset (virtio spec and kernel code)
> > > > > > > > > > 2. virtio-core premapped dma
> > > > > > > > > > 3. virtio-net xdp refactor
> > > > > > > > > >
> > > > > > > > > > So it is time for Virtio-Net to complete the support for the XDP Socket
> > > > > > > > > > Zerocopy.
> > > > > > > > > >
> > > > > > > > > > Virtio-net can not increase the queue num at will, so xsk shares the queue with
> > > > > > > > > > kernel.
> > > > > > > > > >
> > > > > > > > > > On the other hand, Virtio-Net does not support generate interrupt from driver
> > > > > > > > > > manually, so when we wakeup tx xmit, we used some tips. If the CPU run by TX
> > > > > > > > > > NAPI last time is other CPUs, use IPI to wake up NAPI on the remote CPU. If it
> > > > > > > > > > is also the local CPU, then we wake up napi directly.
> > > > > > > > > >
> > > > > > > > > > This patch set includes some refactor to the virtio-net to let that to support
> > > > > > > > > > AF_XDP.
> > > > > > > > > >
> > > > > > > > > > ## performance
> > > > > > > > > >
> > > > > > > > > > ENV: Qemu with vhost-user(polling mode).
> > > > > > > > > >
> > > > > > > > > > Sockperf: https://github.com/Mellanox/sockperf
> > > > > > > > > > I use this tool to send udp packet by kernel syscall.
> > > > > > > > > >
> > > > > > > > > > xmit command: sockperf tp -i 10.0.3.1 -t 1000
> > > > > > > > > >
> > > > > > > > > > I write a tool that sends udp packets or recvs udp packets by AF_XDP.
> > > > > > > > > >
> > > > > > > > > >                   | Guest APP CPU |Guest Softirq CPU | UDP PPS
> > > > > > > > > > ------------------|---------------|------------------|------------
> > > > > > > > > > xmit by syscall   |   100%        |                  |   676,915
> > > > > > > > > > xmit by xsk       |   59.1%       |   100%           | 5,447,168
> > > > > > > > > > recv by syscall   |   60%         |   100%           |   932,288
> > > > > > > > > > recv by xsk       |   35.7%       |   100%           | 3,343,168
> > > > > > > > >
> > > > > > > > > Any chance we can get a testpmd result (which I guess should be better
> > > > > > > > > than PPS above)?
> > > > > > > >
> > > > > > > > Do you mean testpmd + DPDK + AF_XDP?
> > > > > > >
> > > > > > > Yes.
> > > > > > >
> > > > > > > >
> > > > > > > > Yes. This is probably better because my tool does more work. That is not a
> > > > > > > > complete testing tool used by our business.
> > > > > > >
> > > > > > > Probably, but it would be appealing for others. Especially considering
> > > > > > > DPDK supports AF_XDP PMD now.
> > > > > >
> > > > > > OK.
> > > > > >
> > > > > > Let me try.
> > > > > >
> > > > > > But could you start to review firstly?
> > > > >
> > > > > Yes, it's in my todo list.
> > > >
> > > > Speaking too fast, I think if it doesn't take too long time, I would
> > > > wait for the result first as netdim series. One reason is that I
> > > > remember claims to be only 10% to 20% loss comparing to wire speed, so
> > > > I'd expect it should be much faster. I vaguely remember, even a vhost
> > > > can gives us more than 3M PPS if we disable SMAP, so the numbers here
> > > > are not as impressive as expected.
> > >
> > >
> > > What is SMAP? Cloud you give me more info?
> >
> > Supervisor Mode Access Prevention
> >
> > Vhost suffers from this.
> >
> > >
> > > So if we think the 3M as the wire speed, you expect the result
> > > can reach 2.8M pps/core, right?
> >
> > It's AF_XDP that claims to be 80% if my memory is correct. So a
> > correct AF_XDP implementation should not sit behind this too much.
> >
> > > Now the recv result is 2.5M(2463646) pps/core.
> > > Do you think there is a huge gap?
> >
> > You never describe your testing environment in details. For example,
> > is this a virtual environment? What's the CPU model and frequency etc.
> >
> > Because I never see a NIC whose wire speed is 3M.
> >
> > >
> > > My tool makes udp packet and lookup route, so it take more much cpu.
> >
> > That's why I suggest you to test raw PPS.
>
> OK. Let's align some info.
>
> 1. My test env is vhost-user. Qemu + vhost-user(polling mode).
>    I do not use the DPDK, because that there is some trouble for me.
>    I use the VAPP (https://github.com/fengidri/vapp) as the vhost-user device.
>    That has two threads all are busy mode for tx and rx.
>    tx thread consumes the tx ring and drop the packet.
>    rx thread put the packet to the rx ring.
>
> 2. My Host CPU: Intel(R) Xeon(R) Platinum 8163 CPU @ 2.50GHz
>
> 3. From this http://fast.dpdk.org/doc/perf/DPDK_23_03_Intel_virtio_performance_report.pdf
>    I think we can align that the vhost max speed is 8.5 MPPS.
>    Is that ok?

Let's do an apples-to-apples comparison.

First, I would test AF_XDP on real virtio-net hardware, which I guess you
have access to. Then we don't need any baseline test other than the wire
speed.

Second, if that can't be done, let's do something much simpler:

1) Boot Qemu with vhost-user and wire it to testpmd
2) Testing
2.1) virtio PMD in guest with testpmd
2.2) AF_XDP PMD in guest with testpmd

Then let's compare.

Thanks


>    And the expected AF_XDP pps is about 6 MPPS.
>
> 4. About the raw PPS, I agree that. I will test with testpmd.
>
>
> Thanks.
>
>
> >
> > Thanks
> >
> > >
> > > I am confused.
> > >
> > >
> > > What is SMAP? Could you give me more information?
> > >
> > > So if we use 3M as the wire speed, you would expect the result to be 2.8M
> > > pps/core, right?
> > >
> > > Now the recv result is 2.5M (2463646 = 3,343,168/1.357) pps/core. Do you think
> > > the difference is big?
> > >
> > > My tool makes udp packets and looks up routes, so it requires more CPU.
> > >
> > > I'm confused. Is there something I misunderstood?
> > >
> > > Thanks.
> > >
> > > >
> > > > Thanks
> > > >
> > > > >
> > > > > >
> > > > > >
> > > > > > >
> > > > > > > >
> > > > > > > > What I noticed is that the hotspot is the driver writing virtio desc. Because
> > > > > > > > the device is in busy mode. So there is race between driver and device.
> > > > > > > > So I modified the virtio core and lazily updated avail idx. Then pps can reach
> > > > > > > > 10,000,000.
> > > > > > >
> > > > > > > Care to post a draft for this?
> > > > > >
> > > > > > YES, I is thinking for this.
> > > > > > But maybe that is just work for split. The packed mode has some troubles.
> > > > >
> > > > > Ok.
> > > > >
> > > > > Thanks
> > > > >
> > > > > >
> > > > > > Thanks.
> > > > > >
> > > > > > >
> > > > > > > Thanks
> > > > > > >
> > > > > > > >
> > > > > > > > Thanks.
> > > > > > > >
> > > > > > > > >
> > > > > > > > > Thanks
> > > > > > > > >
> > > > > > > > > >
> > > > > > > > > > ## maintain
> > > > > > > > > >
> > > > > > > > > > I am currently a reviewer for virtio-net. I commit to maintain AF_XDP support in
> > > > > > > > > > virtio-net.
> > > > > > > > > >
> > > > > > > > > > Please review.
> > > > > > > > > >
> > > > > > > > > > Thanks.
> > > > > > > > > >
> > > > > > > > > > v1:
> > > > > > > > > >     1. remove two virtio commits. Push this patchset to net-next
> > > > > > > > > >     2. squash "virtio_net: virtnet_poll_tx support rescheduled" to xsk: support tx
> > > > > > > > > >     3. fix some warnings
> > > > > > > > > >
> > > > > > > > > > Xuan Zhuo (19):
> > > > > > > > > >   virtio_net: rename free_old_xmit_skbs to free_old_xmit
> > > > > > > > > >   virtio_net: unify the code for recycling the xmit ptr
> > > > > > > > > >   virtio_net: independent directory
> > > > > > > > > >   virtio_net: move to virtio_net.h
> > > > > > > > > >   virtio_net: add prefix virtnet to all struct/api inside virtio_net.h
> > > > > > > > > >   virtio_net: separate virtnet_rx_resize()
> > > > > > > > > >   virtio_net: separate virtnet_tx_resize()
> > > > > > > > > >   virtio_net: sq support premapped mode
> > > > > > > > > >   virtio_net: xsk: bind/unbind xsk
> > > > > > > > > >   virtio_net: xsk: prevent disable tx napi
> > > > > > > > > >   virtio_net: xsk: tx: support tx
> > > > > > > > > >   virtio_net: xsk: tx: support wakeup
> > > > > > > > > >   virtio_net: xsk: tx: virtnet_free_old_xmit() distinguishes xsk buffer
> > > > > > > > > >   virtio_net: xsk: tx: virtnet_sq_free_unused_buf() check xsk buffer
> > > > > > > > > >   virtio_net: xsk: rx: introduce add_recvbuf_xsk()
> > > > > > > > > >   virtio_net: xsk: rx: introduce receive_xsk() to recv xsk buffer
> > > > > > > > > >   virtio_net: xsk: rx: virtnet_rq_free_unused_buf() check xsk buffer
> > > > > > > > > >   virtio_net: update tx timeout record
> > > > > > > > > >   virtio_net: xdp_features add NETDEV_XDP_ACT_XSK_ZEROCOPY
> > > > > > > > > >
> > > > > > > > > >  MAINTAINERS                                 |   2 +-
> > > > > > > > > >  drivers/net/Kconfig                         |   8 +-
> > > > > > > > > >  drivers/net/Makefile                        |   2 +-
> > > > > > > > > >  drivers/net/virtio/Kconfig                  |  13 +
> > > > > > > > > >  drivers/net/virtio/Makefile                 |   8 +
> > > > > > > > > >  drivers/net/{virtio_net.c => virtio/main.c} | 652 +++++++++-----------
> > > > > > > > > >  drivers/net/virtio/virtio_net.h             | 359 +++++++++++
> > > > > > > > > >  drivers/net/virtio/xsk.c                    | 545 ++++++++++++++++
> > > > > > > > > >  drivers/net/virtio/xsk.h                    |  32 +
> > > > > > > > > >  9 files changed, 1247 insertions(+), 374 deletions(-)
> > > > > > > > > >  create mode 100644 drivers/net/virtio/Kconfig
> > > > > > > > > >  create mode 100644 drivers/net/virtio/Makefile
> > > > > > > > > >  rename drivers/net/{virtio_net.c => virtio/main.c} (91%)
> > > > > > > > > >  create mode 100644 drivers/net/virtio/virtio_net.h
> > > > > > > > > >  create mode 100644 drivers/net/virtio/xsk.c
> > > > > > > > > >  create mode 100644 drivers/net/virtio/xsk.h
> > > > > > > > > >
> > > > > > > > > > --
> > > > > > > > > > 2.32.0.3.g01195cf9f
> > > > > > > > > >
> > > > > > > > >
> > > > > > > >
> > > > > > >
> > > > > >
> > > >
> > >
> >
>



* Re: [PATCH net-next v1 00/19] virtio-net: support AF_XDP zero copy
  2023-10-17 11:19                   ` Xuan Zhuo
@ 2023-10-18  2:46                     ` Jason Wang
  2023-10-18  2:56                       ` Xuan Zhuo
  2023-10-18  3:32                     ` Xuan Zhuo
  1 sibling, 1 reply; 66+ messages in thread
From: Jason Wang @ 2023-10-18  2:46 UTC (permalink / raw)
  To: Xuan Zhuo
  Cc: netdev, David S. Miller, Eric Dumazet, Jakub Kicinski,
	Paolo Abeni, Michael S. Tsirkin, Alexei Starovoitov,
	Daniel Borkmann, Jesper Dangaard Brouer, John Fastabend,
	virtualization, bpf

On Tue, Oct 17, 2023 at 7:28 PM Xuan Zhuo <xuanzhuo@linux.alibaba.com> wrote:
>
> On Tue, 17 Oct 2023 14:43:33 +0800, Xuan Zhuo <xuanzhuo@linux.alibaba.com> wrote:
> > On Tue, 17 Oct 2023 14:26:01 +0800, Jason Wang <jasowang@redhat.com> wrote:
> > > On Tue, Oct 17, 2023 at 2:17 PM Xuan Zhuo <xuanzhuo@linux.alibaba.com> wrote:
> > > >
> > > > On Tue, 17 Oct 2023 13:27:47 +0800, Jason Wang <jasowang@redhat.com> wrote:
> > > > > On Tue, Oct 17, 2023 at 11:28 AM Jason Wang <jasowang@redhat.com> wrote:
> > > > > >
> > > > > > On Tue, Oct 17, 2023 at 11:26 AM Xuan Zhuo <xuanzhuo@linux.alibaba.com> wrote:
> > > > > > >
> > > > > > > On Tue, 17 Oct 2023 11:20:41 +0800, Jason Wang <jasowang@redhat.com> wrote:
> > > > > > > > On Tue, Oct 17, 2023 at 11:11 AM Xuan Zhuo <xuanzhuo@linux.alibaba.com> wrote:
> > > > > > > > >
> > > > > > > > > On Tue, 17 Oct 2023 10:53:44 +0800, Jason Wang <jasowang@redhat.com> wrote:
> > > > > > > > > > On Mon, Oct 16, 2023 at 8:00 PM Xuan Zhuo <xuanzhuo@linux.alibaba.com> wrote:
> > > > > > > > > > >
> > > > > > > > > > > ## AF_XDP
> > > > > > > > > > >
> > > > > > > > > > > XDP socket(AF_XDP) is an excellent bypass kernel network framework. The zero
> > > > > > > > > > > copy feature of xsk (XDP socket) needs to be supported by the driver. The
> > > > > > > > > > > performance of zero copy is very good. mlx5 and intel ixgbe already support
> > > > > > > > > > > this feature, This patch set allows virtio-net to support xsk's zerocopy xmit
> > > > > > > > > > > feature.
> > > > > > > > > > >
> > > > > > > > > > > At present, we have completed some preparation:
> > > > > > > > > > >
> > > > > > > > > > > 1. vq-reset (virtio spec and kernel code)
> > > > > > > > > > > 2. virtio-core premapped dma
> > > > > > > > > > > 3. virtio-net xdp refactor
> > > > > > > > > > >
> > > > > > > > > > > So it is time for Virtio-Net to complete the support for the XDP Socket
> > > > > > > > > > > Zerocopy.
> > > > > > > > > > >
> > > > > > > > > > > Virtio-net can not increase the queue num at will, so xsk shares the queue with
> > > > > > > > > > > kernel.
> > > > > > > > > > >
> > > > > > > > > > > On the other hand, Virtio-Net does not support generate interrupt from driver
> > > > > > > > > > > manually, so when we wakeup tx xmit, we used some tips. If the CPU run by TX
> > > > > > > > > > > NAPI last time is other CPUs, use IPI to wake up NAPI on the remote CPU. If it
> > > > > > > > > > > is also the local CPU, then we wake up napi directly.
> > > > > > > > > > >
> > > > > > > > > > > This patch set includes some refactor to the virtio-net to let that to support
> > > > > > > > > > > AF_XDP.
> > > > > > > > > > >
> > > > > > > > > > > ## performance
> > > > > > > > > > >
> > > > > > > > > > > ENV: Qemu with vhost-user(polling mode).
> > > > > > > > > > >
> > > > > > > > > > > Sockperf: https://github.com/Mellanox/sockperf
> > > > > > > > > > > I use this tool to send udp packet by kernel syscall.
> > > > > > > > > > >
> > > > > > > > > > > xmit command: sockperf tp -i 10.0.3.1 -t 1000
> > > > > > > > > > >
> > > > > > > > > > > I write a tool that sends udp packets or recvs udp packets by AF_XDP.
> > > > > > > > > > >
> > > > > > > > > > >                   | Guest APP CPU |Guest Softirq CPU | UDP PPS
> > > > > > > > > > > ------------------|---------------|------------------|------------
> > > > > > > > > > > xmit by syscall   |   100%        |                  |   676,915
> > > > > > > > > > > xmit by xsk       |   59.1%       |   100%           | 5,447,168
> > > > > > > > > > > recv by syscall   |   60%         |   100%           |   932,288
> > > > > > > > > > > recv by xsk       |   35.7%       |   100%           | 3,343,168
> > > > > > > > > >
> > > > > > > > > > Any chance we can get a testpmd result (which I guess should be better
> > > > > > > > > > than PPS above)?
> > > > > > > > >
> > > > > > > > > Do you mean testpmd + DPDK + AF_XDP?
> > > > > > > >
> > > > > > > > Yes.
> > > > > > > >
> > > > > > > > >
> > > > > > > > > Yes. This is probably better because my tool does more work. That is not a
> > > > > > > > > complete testing tool used by our business.
> > > > > > > >
> > > > > > > > Probably, but it would be appealing for others. Especially considering
> > > > > > > > DPDK supports AF_XDP PMD now.
> > > > > > >
> > > > > > > OK.
> > > > > > >
> > > > > > > Let me try.
> > > > > > >
> > > > > > > But could you start to review firstly?
> > > > > >
> > > > > > Yes, it's in my todo list.
> > > > >
> > > > > Speaking too fast, I think if it doesn't take too long time, I would
> > > > > wait for the result first as netdim series. One reason is that I
> > > > > remember claims to be only 10% to 20% loss comparing to wire speed, so
> > > > > I'd expect it should be much faster. I vaguely remember, even a vhost
> > > > > can gives us more than 3M PPS if we disable SMAP, so the numbers here
> > > > > are not as impressive as expected.
> > > >
> > > >
> > > > What is SMAP? Cloud you give me more info?
> > >
> > > Supervisor Mode Access Prevention
> > >
> > > Vhost suffers from this.
> > >
> > > >
> > > > So if we think the 3M as the wire speed, you expect the result
> > > > can reach 2.8M pps/core, right?
> > >
> > > It's AF_XDP that claims to be 80% if my memory is correct. So a
> > > correct AF_XDP implementation should not sit behind this too much.
> > >
> > > > Now the recv result is 2.5M(2463646) pps/core.
> > > > Do you think there is a huge gap?
> > >
> > > You never describe your testing environment in details. For example,
> > > is this a virtual environment? What's the CPU model and frequency etc.
> > >
> > > Because I never see a NIC whose wire speed is 3M.
> > >
> > > >
> > > > My tool makes udp packet and lookup route, so it take more much cpu.
> > >
> > > That's why I suggest you to test raw PPS.
> >
> > OK. Let's align some info.
> >
> > 1. My test env is vhost-user. Qemu + vhost-user(polling mode).
> >    I do not use the DPDK, because that there is some trouble for me.
> >    I use the VAPP (https://github.com/fengidri/vapp) as the vhost-user device.
> >    That has two threads all are busy mode for tx and rx.
> >    tx thread consumes the tx ring and drop the packet.
> >    rx thread put the packet to the rx ring.
> >
> > 2. My Host CPU: Intel(R) Xeon(R) Platinum 8163 CPU @ 2.50GHz
> >
> > 3. From this http://fast.dpdk.org/doc/perf/DPDK_23_03_Intel_virtio_performance_report.pdf
> >    I think we can align that the vhost max speed is 8.5 MPPS.
> >    Is that ok?
> >    And the expected AF_XDP pps is about 6 MPPS.
> >
> > 4. About the raw PPS, I agree that. I will test with testpmd.
> >
>
> ## testpmd command
>
> ./build/app/dpdk-testpmd -l 1-2 --no-pci --main-lcore=2 \
>         --vdev net_af_xdp0,iface=ens5,queue_count=1,busy_budget=0 \
>         --log-level=pmd.net.af_xdp:8 \
>         -- -i -a --nb-cores=1 --rxq=1 --txq=1 --forward-mode=macswap
>
> ## work without the follow patch[0]
>
> testpmd> show port stats all
>
>   ######################## NIC statistics for port 0  ########################
>   RX-packets: 3615824336 RX-missed: 0          RX-bytes:  202486162816
>   RX-errors: 0
>   RX-nombuf:  0
>   TX-packets: 3615795592 TX-errors: 20738      TX-bytes:  202484553152
>
>   Throughput (since last show)
>   Rx-pps:      3790446          Rx-bps:   1698120056
>   Tx-pps:      3790446          Tx-bps:   1698120056
>   ############################################################################
>
>
> ## work with the follow patch[0]
>
> testpmd> show port stats all
>
>   ######################## NIC statistics for port 0  ########################
>   RX-packets: 68152727   RX-missed: 0          RX-bytes:  3816552712
>   RX-errors: 0
>   RX-nombuf:  0
>   TX-packets: 68114967   TX-errors: 33216      TX-bytes:  3814438152
>
>   Throughput (since last show)
>   Rx-pps:      6333196          Rx-bps:   2837272088
>   Tx-pps:      6333227          Tx-bps:   2837285936
>   ############################################################################
>
> I searched the DPDK code; the DPDK virtio driver has similar code.
>
> virtio_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
> {
>         [...]
>
>         for (nb_tx = 0; nb_tx < nb_pkts; nb_tx++) {
>
>                 [...]
>
>                 /* Enqueue Packet buffers */
>                 virtqueue_enqueue_xmit(txvq, txm, slots, use_indirect,
>                         can_push, 0);
>         }
>
>         [...]
>
>         if (likely(nb_tx)) {
> -->             vq_update_avail_idx(vq);
>
>                 if (unlikely(virtqueue_kick_prepare(vq))) {
>                         virtqueue_notify(vq);
>                         PMD_TX_LOG(DEBUG, "Notified backend after xmit");
>                 }
>         }
> }
>
> ## patch[0]
>
> diff --git a/drivers/virtio/virtio_ring.c b/drivers/virtio/virtio_ring.c
> index 51d8f3299c10..cfe556b5d88f 100644
> --- a/drivers/virtio/virtio_ring.c
> +++ b/drivers/virtio/virtio_ring.c
> @@ -687,12 +687,7 @@ static inline int virtqueue_add_split(struct virtqueue *_vq,
>         avail = vq->split.avail_idx_shadow & (vq->split.vring.num - 1);
>         vq->split.vring.avail->ring[avail] = cpu_to_virtio16(_vq->vdev, head);
>
> -       /* Descriptors and available array need to be set before we expose the
> -        * new available array entries. */
> -       virtio_wmb(vq->weak_barriers);
>         vq->split.avail_idx_shadow++;
> -       vq->split.vring.avail->idx = cpu_to_virtio16(_vq->vdev,
> -                                               vq->split.avail_idx_shadow);
>         vq->num_added++;
>
>         pr_debug("Added buffer head %i to %p\n", head, vq);
> @@ -700,8 +695,12 @@ static inline int virtqueue_add_split(struct virtqueue *_vq,
>
>         /* This is very unlikely, but theoretically possible.  Kick
>          * just in case. */
> -       if (unlikely(vq->num_added == (1 << 16) - 1))
> +       if (unlikely(vq->num_added == (1 << 16) - 1)) {
> +               virtio_wmb(vq->weak_barriers);
> +               vq->split.vring.avail->idx = cpu_to_virtio16(_vq->vdev,
> +                                                            vq->split.avail_idx_shadow);
>                 virtqueue_kick(_vq);
> +       }
>
>         return 0;
>
> @@ -742,6 +741,9 @@ static bool virtqueue_kick_prepare_split(struct virtqueue *_vq)
>          * event. */
>         virtio_mb(vq->weak_barriers);
>
> +       vq->split.vring.avail->idx = cpu_to_virtio16(_vq->vdev,
> +                                               vq->split.avail_idx_shadow);
> +

Looks like an interesting optimization.

Would you mind posting this with numbers separately?

Btw, does the current API require virtqueue_kick_prepare() to be done
before a virtqueue_notify()? If not, do we need to do something similar in
virtqueue_notify()?

Thanks

>         old = vq->split.avail_idx_shadow - vq->num_added;
>         new = vq->split.avail_idx_shadow;
>         vq->num_added = 0;
>
> ---------------
>
> Thanks.
>
>
> >
> > Thanks.
> >
> >
> > >
> > > Thanks
> > >
> > > >
> > > > I am confused.
> > > >
> > > >
> > > > What is SMAP? Could you give me more information?
> > > >
> > > > So if we use 3M as the wire speed, you would expect the result to be 2.8M
> > > > pps/core, right?
> > > >
> > > > Now the recv result is 2.5M (2,463,646 = 3,343,168/1.357) pps/core. Do you think
> > > > the difference is big?
> > > >
> > > > My tool makes udp packets and looks up routes, so it requires more CPU.
> > > >
> > > > I'm confused. Is there something I misunderstood?
> > > >
> > > > Thanks.
> > > >
> > > > >
> > > > > Thanks
> > > > >
> > > > > >
> > > > > > >
> > > > > > >
> > > > > > > >
> > > > > > > > >
> > > > > > > > > > What I noticed is that the hotspot is the driver writing the virtio desc,
> > > > > > > > > > because the device is in busy mode, so there is a race between the driver
> > > > > > > > > > and the device. So I modified the virtio core to update the avail idx
> > > > > > > > > > lazily. Then pps can reach 10,000,000.
> > > > > > > >
> > > > > > > > Care to post a draft for this?
> > > > > > >
> > > > > > > Yes, I am thinking about this.
> > > > > > > But maybe that only works for split mode. The packed mode has some troubles.
> > > > > >
> > > > > > Ok.
> > > > > >
> > > > > > Thanks
> > > > > >
> > > > > > >
> > > > > > > Thanks.
> > > > > > >
> > > > > > > >
> > > > > > > > Thanks
> > > > > > > >
> > > > > > > > >
> > > > > > > > > Thanks.
> > > > > > > > >
> > > > > > > > > >
> > > > > > > > > > Thanks
> > > > > > > > > >
> > > > > > > > > > >
> > > > > > > > > > > ## maintain
> > > > > > > > > > >
> > > > > > > > > > > I am currently a reviewer for virtio-net. I commit to maintain AF_XDP support in
> > > > > > > > > > > virtio-net.
> > > > > > > > > > >
> > > > > > > > > > > Please review.
> > > > > > > > > > >
> > > > > > > > > > > Thanks.
> > > > > > > > > > >
> > > > > > > > > > > v1:
> > > > > > > > > > >     1. remove two virtio commits. Push this patchset to net-next
> > > > > > > > > > >     2. squash "virtio_net: virtnet_poll_tx support rescheduled" to xsk: support tx
> > > > > > > > > > >     3. fix some warnings
> > > > > > > > > > >
> > > > > > > > > > > Xuan Zhuo (19):
> > > > > > > > > > >   virtio_net: rename free_old_xmit_skbs to free_old_xmit
> > > > > > > > > > >   virtio_net: unify the code for recycling the xmit ptr
> > > > > > > > > > >   virtio_net: independent directory
> > > > > > > > > > >   virtio_net: move to virtio_net.h
> > > > > > > > > > >   virtio_net: add prefix virtnet to all struct/api inside virtio_net.h
> > > > > > > > > > >   virtio_net: separate virtnet_rx_resize()
> > > > > > > > > > >   virtio_net: separate virtnet_tx_resize()
> > > > > > > > > > >   virtio_net: sq support premapped mode
> > > > > > > > > > >   virtio_net: xsk: bind/unbind xsk
> > > > > > > > > > >   virtio_net: xsk: prevent disable tx napi
> > > > > > > > > > >   virtio_net: xsk: tx: support tx
> > > > > > > > > > >   virtio_net: xsk: tx: support wakeup
> > > > > > > > > > >   virtio_net: xsk: tx: virtnet_free_old_xmit() distinguishes xsk buffer
> > > > > > > > > > >   virtio_net: xsk: tx: virtnet_sq_free_unused_buf() check xsk buffer
> > > > > > > > > > >   virtio_net: xsk: rx: introduce add_recvbuf_xsk()
> > > > > > > > > > >   virtio_net: xsk: rx: introduce receive_xsk() to recv xsk buffer
> > > > > > > > > > >   virtio_net: xsk: rx: virtnet_rq_free_unused_buf() check xsk buffer
> > > > > > > > > > >   virtio_net: update tx timeout record
> > > > > > > > > > >   virtio_net: xdp_features add NETDEV_XDP_ACT_XSK_ZEROCOPY
> > > > > > > > > > >
> > > > > > > > > > >  MAINTAINERS                                 |   2 +-
> > > > > > > > > > >  drivers/net/Kconfig                         |   8 +-
> > > > > > > > > > >  drivers/net/Makefile                        |   2 +-
> > > > > > > > > > >  drivers/net/virtio/Kconfig                  |  13 +
> > > > > > > > > > >  drivers/net/virtio/Makefile                 |   8 +
> > > > > > > > > > >  drivers/net/{virtio_net.c => virtio/main.c} | 652 +++++++++-----------
> > > > > > > > > > >  drivers/net/virtio/virtio_net.h             | 359 +++++++++++
> > > > > > > > > > >  drivers/net/virtio/xsk.c                    | 545 ++++++++++++++++
> > > > > > > > > > >  drivers/net/virtio/xsk.h                    |  32 +
> > > > > > > > > > >  9 files changed, 1247 insertions(+), 374 deletions(-)
> > > > > > > > > > >  create mode 100644 drivers/net/virtio/Kconfig
> > > > > > > > > > >  create mode 100644 drivers/net/virtio/Makefile
> > > > > > > > > > >  rename drivers/net/{virtio_net.c => virtio/main.c} (91%)
> > > > > > > > > > >  create mode 100644 drivers/net/virtio/virtio_net.h
> > > > > > > > > > >  create mode 100644 drivers/net/virtio/xsk.c
> > > > > > > > > > >  create mode 100644 drivers/net/virtio/xsk.h
> > > > > > > > > > >
> > > > > > > > > > > --
> > > > > > > > > > > 2.32.0.3.g01195cf9f
> > > > > > > > > > >
> > > > > > > > > >
> > > > > > > > >
> > > > > > > >
> > > > > > >
> > > > >
> > > >
> > >
> >
>


^ permalink raw reply	[flat|nested] 66+ messages in thread

* Re: [PATCH net-next v1 00/19] virtio-net: support AF_XDP zero copy
  2023-10-18  2:46                     ` Jason Wang
@ 2023-10-18  2:56                       ` Xuan Zhuo
  0 siblings, 0 replies; 66+ messages in thread
From: Xuan Zhuo @ 2023-10-18  2:56 UTC (permalink / raw)
  To: Jason Wang
  Cc: netdev, David S. Miller, Eric Dumazet, Jakub Kicinski,
	Paolo Abeni, Michael S. Tsirkin, Alexei Starovoitov,
	Daniel Borkmann, Jesper Dangaard Brouer, John Fastabend,
	virtualization, bpf

On Wed, 18 Oct 2023 10:46:38 +0800, Jason Wang <jasowang@redhat.com> wrote:
> On Tue, Oct 17, 2023 at 7:28 PM Xuan Zhuo <xuanzhuo@linux.alibaba.com> wrote:
> >
> > On Tue, 17 Oct 2023 14:43:33 +0800, Xuan Zhuo <xuanzhuo@linux.alibaba.com> wrote:
> > > On Tue, 17 Oct 2023 14:26:01 +0800, Jason Wang <jasowang@redhat.com> wrote:
> > > > On Tue, Oct 17, 2023 at 2:17 PM Xuan Zhuo <xuanzhuo@linux.alibaba.com> wrote:
> > > > >
> > > > > On Tue, 17 Oct 2023 13:27:47 +0800, Jason Wang <jasowang@redhat.com> wrote:
> > > > > > On Tue, Oct 17, 2023 at 11:28 AM Jason Wang <jasowang@redhat.com> wrote:
> > > > > > >
> > > > > > > On Tue, Oct 17, 2023 at 11:26 AM Xuan Zhuo <xuanzhuo@linux.alibaba.com> wrote:
> > > > > > > >
> > > > > > > > On Tue, 17 Oct 2023 11:20:41 +0800, Jason Wang <jasowang@redhat.com> wrote:
> > > > > > > > > On Tue, Oct 17, 2023 at 11:11 AM Xuan Zhuo <xuanzhuo@linux.alibaba.com> wrote:
> > > > > > > > > >
> > > > > > > > > > On Tue, 17 Oct 2023 10:53:44 +0800, Jason Wang <jasowang@redhat.com> wrote:
> > > > > > > > > > > On Mon, Oct 16, 2023 at 8:00 PM Xuan Zhuo <xuanzhuo@linux.alibaba.com> wrote:
> > > > > > > > > > > >
> > > > > > > > > > > > ## AF_XDP
> > > > > > > > > > > >
> > > > > > > > > > > > AF_XDP (XDP socket) is an excellent kernel-bypass networking framework. The
> > > > > > > > > > > > zero copy feature of xsk (XDP socket) needs to be supported by the driver,
> > > > > > > > > > > > and its performance is very good. mlx5 and Intel ixgbe already support this
> > > > > > > > > > > > feature. This patch set allows virtio-net to support xsk's zero-copy xmit
> > > > > > > > > > > > feature.
> > > > > > > > > > > >
> > > > > > > > > > > > At present, we have completed some preparation:
> > > > > > > > > > > >
> > > > > > > > > > > > 1. vq-reset (virtio spec and kernel code)
> > > > > > > > > > > > 2. virtio-core premapped dma
> > > > > > > > > > > > 3. virtio-net xdp refactor
> > > > > > > > > > > >
> > > > > > > > > > > > So it is time for virtio-net to complete support for XDP socket zero copy.
> > > > > > > > > > > >
> > > > > > > > > > > > Virtio-net cannot increase the queue num at will, so xsk shares the queue
> > > > > > > > > > > > with the kernel.
> > > > > > > > > > > >
> > > > > > > > > > > > On the other hand, virtio-net does not support generating an interrupt from
> > > > > > > > > > > > the driver manually, so when we wake up tx xmit we use some tricks: if the
> > > > > > > > > > > > CPU that ran TX NAPI last time is another CPU, we use an IPI to wake up NAPI
> > > > > > > > > > > > on the remote CPU; if it is the local CPU, we wake up NAPI directly.
> > > > > > > > > > > >
> > > > > > > > > > > > This patch set includes some refactoring of virtio-net to let it support
> > > > > > > > > > > > AF_XDP.
> > > > > > > > > > > >
> > > > > > > > > > > > ## performance
> > > > > > > > > > > >
> > > > > > > > > > > > ENV: Qemu with vhost-user(polling mode).
> > > > > > > > > > > >
> > > > > > > > > > > > Sockperf: https://github.com/Mellanox/sockperf
> > > > > > > > > > > > I use this tool to send udp packets via kernel syscalls.
> > > > > > > > > > > >
> > > > > > > > > > > > xmit command: sockperf tp -i 10.0.3.1 -t 1000
> > > > > > > > > > > >
> > > > > > > > > > > > I write a tool that sends udp packets or recvs udp packets by AF_XDP.
> > > > > > > > > > > >
> > > > > > > > > > > >                   | Guest APP CPU |Guest Softirq CPU | UDP PPS
> > > > > > > > > > > > ------------------|---------------|------------------|------------
> > > > > > > > > > > > xmit by syscall   |   100%        |                  |   676,915
> > > > > > > > > > > > xmit by xsk       |   59.1%       |   100%           | 5,447,168
> > > > > > > > > > > > recv by syscall   |   60%         |   100%           |   932,288
> > > > > > > > > > > > recv by xsk       |   35.7%       |   100%           | 3,343,168
> > > > > > > > > > >
> > > > > > > > > > > Any chance we can get a testpmd result (which I guess should be better
> > > > > > > > > > > than PPS above)?
> > > > > > > > > >
> > > > > > > > > > Do you mean testpmd + DPDK + AF_XDP?
> > > > > > > > >
> > > > > > > > > Yes.
> > > > > > > > >
> > > > > > > > > >
> > > > > > > > > > Yes. This is probably better, because my tool does more work; it is not the
> > > > > > > > > > complete testing tool used by our business.
> > > > > > > > >
> > > > > > > > > Probably, but it would be appealing for others. Especially considering
> > > > > > > > > DPDK supports AF_XDP PMD now.
> > > > > > > >
> > > > > > > > OK.
> > > > > > > >
> > > > > > > > Let me try.
> > > > > > > >
> > > > > > > > But could you start to review firstly?
> > > > > > >
> > > > > > > Yes, it's in my todo list.
> > > > > >
> > > > > > Speaking too fast: if it doesn't take too long, I would wait for the
> > > > > > result first, as with the netdim series. One reason is that I remember
> > > > > > AF_XDP claims only a 10% to 20% loss compared to wire speed, so I'd
> > > > > > expect it to be much faster. I vaguely remember even a vhost can give
> > > > > > us more than 3M PPS if we disable SMAP, so the numbers here are not as
> > > > > > impressive as expected.
> > > > >
> > > > >
> > > > > What is SMAP? Could you give me more info?
> > > >
> > > > Supervisor Mode Access Prevention
> > > >
> > > > Vhost suffers from this.
> > > >
> > > > >
> > > > > So if we take 3M as the wire speed, you expect the result
> > > > > can reach 2.8M pps/core, right?
> > > >
> > > > It's AF_XDP that claims to reach 80% of wire speed, if my memory is
> > > > correct. So a correct AF_XDP implementation should not fall too far
> > > > behind that.
> > > >
> > > > > Now the recv result is 2.5M (2,463,646) pps/core.
> > > > > Do you think there is a huge gap?
> > > >
> > > > You never described your testing environment in detail. For example,
> > > > is this a virtual environment? What's the CPU model and frequency, etc.?
> > > >
> > > > Because I have never seen a NIC whose wire speed is 3M.
> > > >
> > > > >
> > > > > My tool makes udp packets and looks up routes, so it takes more CPU.
> > > >
> > > > That's why I suggest you test raw PPS.
> > >
> > > OK. Let's align some info.
> > >
> > > 1. My test env is vhost-user. Qemu + vhost-user(polling mode).
> > >    I do not use DPDK, because it gives me some trouble.
> > >    I use VAPP (https://github.com/fengidri/vapp) as the vhost-user device.
> > >    It has two busy-polling threads, one for tx and one for rx.
> > >    The tx thread consumes the tx ring and drops the packets.
> > >    The rx thread puts packets into the rx ring.
> > >
> > > 2. My Host CPU: Intel(R) Xeon(R) Platinum 8163 CPU @ 2.50GHz
> > >
> > > 3. From this http://fast.dpdk.org/doc/perf/DPDK_23_03_Intel_virtio_performance_report.pdf
> > >    I think we can agree that the vhost max speed is 8.5 MPPS.
> > >    Is that ok?
> > >    And the expected AF_XDP pps is about 6 MPPS.
> > >
> > > 4. About the raw PPS, I agree. I will test with testpmd.
> > >
> >
> > ## testpmd command
> >
> > ./build/app/dpdk-testpmd -l 1-2 --no-pci --main-lcore=2 \
> >         --vdev net_af_xdp0,iface=ens5,queue_count=1,busy_budget=0 \
> >         --log-level=pmd.net.af_xdp:8 \
> >         -- -i -a --nb-cores=1 --rxq=1 --txq=1 --forward-mode=macswap
> >
> > ## result without the following patch[0]
> >
> > testpmd> show port stats all
> >
> >   ######################## NIC statistics for port 0  ########################
> >   RX-packets: 3615824336 RX-missed: 0          RX-bytes:  202486162816
> >   RX-errors: 0
> >   RX-nombuf:  0
> >   TX-packets: 3615795592 TX-errors: 20738      TX-bytes:  202484553152
> >
> >   Throughput (since last show)
> >   Rx-pps:      3790446          Rx-bps:   1698120056
> >   Tx-pps:      3790446          Tx-bps:   1698120056
> >   ############################################################################
> >
> >
> > ## result with the following patch[0]
> >
> > testpmd> show port stats all
> >
> >   ######################## NIC statistics for port 0  ########################
> >   RX-packets: 68152727   RX-missed: 0          RX-bytes:  3816552712
> >   RX-errors: 0
> >   RX-nombuf:  0
> >   TX-packets: 68114967   TX-errors: 33216      TX-bytes:  3814438152
> >
> >   Throughput (since last show)
> >   Rx-pps:      6333196          Rx-bps:   2837272088
> >   Tx-pps:      6333227          Tx-bps:   2837285936
> >   ############################################################################
> >
> > I searched the DPDK code; the DPDK virtio driver has similar code.
> >
> > virtio_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
> > {
> >         [...]
> >
> >         for (nb_tx = 0; nb_tx < nb_pkts; nb_tx++) {
> >
> >                 [...]
> >
> >                 /* Enqueue Packet buffers */
> >                 virtqueue_enqueue_xmit(txvq, txm, slots, use_indirect,
> >                         can_push, 0);
> >         }
> >
> >         [...]
> >
> >         if (likely(nb_tx)) {
> > -->             vq_update_avail_idx(vq);
> >
> >                 if (unlikely(virtqueue_kick_prepare(vq))) {
> >                         virtqueue_notify(vq);
> >                         PMD_TX_LOG(DEBUG, "Notified backend after xmit");
> >                 }
> >         }
> > }
> >
> > ## patch[0]
> >
> > diff --git a/drivers/virtio/virtio_ring.c b/drivers/virtio/virtio_ring.c
> > index 51d8f3299c10..cfe556b5d88f 100644
> > --- a/drivers/virtio/virtio_ring.c
> > +++ b/drivers/virtio/virtio_ring.c
> > @@ -687,12 +687,7 @@ static inline int virtqueue_add_split(struct virtqueue *_vq,
> >         avail = vq->split.avail_idx_shadow & (vq->split.vring.num - 1);
> >         vq->split.vring.avail->ring[avail] = cpu_to_virtio16(_vq->vdev, head);
> >
> > -       /* Descriptors and available array need to be set before we expose the
> > -        * new available array entries. */
> > -       virtio_wmb(vq->weak_barriers);
> >         vq->split.avail_idx_shadow++;
> > -       vq->split.vring.avail->idx = cpu_to_virtio16(_vq->vdev,
> > -                                               vq->split.avail_idx_shadow);
> >         vq->num_added++;
> >
> >         pr_debug("Added buffer head %i to %p\n", head, vq);
> > @@ -700,8 +695,12 @@ static inline int virtqueue_add_split(struct virtqueue *_vq,
> >
> >         /* This is very unlikely, but theoretically possible.  Kick
> >          * just in case. */
> > -       if (unlikely(vq->num_added == (1 << 16) - 1))
> > +       if (unlikely(vq->num_added == (1 << 16) - 1)) {
> > +               virtio_wmb(vq->weak_barriers);
> > +               vq->split.vring.avail->idx = cpu_to_virtio16(_vq->vdev,
> > +                                                            vq->split.avail_idx_shadow);
> >                 virtqueue_kick(_vq);
> > +       }
> >
> >         return 0;
> >
> > @@ -742,6 +741,9 @@ static bool virtqueue_kick_prepare_split(struct virtqueue *_vq)
> >          * event. */
> >         virtio_mb(vq->weak_barriers);
> >
> > +       vq->split.vring.avail->idx = cpu_to_virtio16(_vq->vdev,
> > +                                               vq->split.avail_idx_shadow);
> > +
>
> Looks like an interesting optimization.
>
> Would you mind posting this with numbers separately?

I will post this later.


>
> Btw, does the current API require virtqueue_kick_prepare() to be done
> before a virtqueue_notify()? If not, do we need to do something similar in
> virtqueue_notify()?

As far as I know, prepare is done before notify.

I will double-check this.

Thanks.


>
> Thanks
>
> >         old = vq->split.avail_idx_shadow - vq->num_added;
> >         new = vq->split.avail_idx_shadow;
> >         vq->num_added = 0;
> >
> > ---------------
> >
> > Thanks.
> >
> >
> > >
> > > Thanks.
> > >
> > >
> > > >
> > > > Thanks
> > > >
> > > > >
> > > > > I am confused.
> > > > >
> > > > >
> > > > > What is SMAP? Could you give me more information?
> > > > >
> > > > > So if we use 3M as the wire speed, you would expect the result to be 2.8M
> > > > > pps/core, right?
> > > > >
> > > > > Now the recv result is 2.5M (2463646 = 3,343,168/1.357) pps/core. Do you think
> > > > > the difference is big?
> > > > >
> > > > > My tool makes udp packets and looks up routes, so it requires more CPU.
> > > > >
> > > > > I'm confused. Is there something I misunderstood?
> > > > >
> > > > > Thanks.
> > > > >
> > > > > >
> > > > > > Thanks
> > > > > >
> > > > > > >
> > > > > > > >
> > > > > > > >
> > > > > > > > >
> > > > > > > > > >
> > > > > > > > > > What I noticed is that the hotspot is the driver writing the virtio desc,
> > > > > > > > > > because the device is in busy mode, so there is a race between the driver
> > > > > > > > > > and the device. So I modified the virtio core to update the avail idx
> > > > > > > > > > lazily. Then pps can reach 10,000,000.
> > > > > > > > >
> > > > > > > > > Care to post a draft for this?
> > > > > > > >
> > > > > > > > Yes, I am thinking about this.
> > > > > > > > But maybe that only works for split mode. The packed mode has some troubles.
> > > > > > >
> > > > > > > Ok.
> > > > > > >
> > > > > > > Thanks
> > > > > > >
> > > > > > > >
> > > > > > > > Thanks.
> > > > > > > >
> > > > > > > > >
> > > > > > > > > Thanks
> > > > > > > > >
> > > > > > > > > >
> > > > > > > > > > Thanks.
> > > > > > > > > >
> > > > > > > > > > >
> > > > > > > > > > > Thanks
> > > > > > > > > > >
> > > > > > > > > > > >
> > > > > > > > > > > > ## maintain
> > > > > > > > > > > >
> > > > > > > > > > > > I am currently a reviewer for virtio-net. I commit to maintain AF_XDP support in
> > > > > > > > > > > > virtio-net.
> > > > > > > > > > > >
> > > > > > > > > > > > Please review.
> > > > > > > > > > > >
> > > > > > > > > > > > Thanks.
> > > > > > > > > > > >
> > > > > > > > > > > > v1:
> > > > > > > > > > > >     1. remove two virtio commits. Push this patchset to net-next
> > > > > > > > > > > >     2. squash "virtio_net: virtnet_poll_tx support rescheduled" to xsk: support tx
> > > > > > > > > > > >     3. fix some warnings
> > > > > > > > > > > >
> > > > > > > > > > > > Xuan Zhuo (19):
> > > > > > > > > > > >   virtio_net: rename free_old_xmit_skbs to free_old_xmit
> > > > > > > > > > > >   virtio_net: unify the code for recycling the xmit ptr
> > > > > > > > > > > >   virtio_net: independent directory
> > > > > > > > > > > >   virtio_net: move to virtio_net.h
> > > > > > > > > > > >   virtio_net: add prefix virtnet to all struct/api inside virtio_net.h
> > > > > > > > > > > >   virtio_net: separate virtnet_rx_resize()
> > > > > > > > > > > >   virtio_net: separate virtnet_tx_resize()
> > > > > > > > > > > >   virtio_net: sq support premapped mode
> > > > > > > > > > > >   virtio_net: xsk: bind/unbind xsk
> > > > > > > > > > > >   virtio_net: xsk: prevent disable tx napi
> > > > > > > > > > > >   virtio_net: xsk: tx: support tx
> > > > > > > > > > > >   virtio_net: xsk: tx: support wakeup
> > > > > > > > > > > >   virtio_net: xsk: tx: virtnet_free_old_xmit() distinguishes xsk buffer
> > > > > > > > > > > >   virtio_net: xsk: tx: virtnet_sq_free_unused_buf() check xsk buffer
> > > > > > > > > > > >   virtio_net: xsk: rx: introduce add_recvbuf_xsk()
> > > > > > > > > > > >   virtio_net: xsk: rx: introduce receive_xsk() to recv xsk buffer
> > > > > > > > > > > >   virtio_net: xsk: rx: virtnet_rq_free_unused_buf() check xsk buffer
> > > > > > > > > > > >   virtio_net: update tx timeout record
> > > > > > > > > > > >   virtio_net: xdp_features add NETDEV_XDP_ACT_XSK_ZEROCOPY
> > > > > > > > > > > >
> > > > > > > > > > > >  MAINTAINERS                                 |   2 +-
> > > > > > > > > > > >  drivers/net/Kconfig                         |   8 +-
> > > > > > > > > > > >  drivers/net/Makefile                        |   2 +-
> > > > > > > > > > > >  drivers/net/virtio/Kconfig                  |  13 +
> > > > > > > > > > > >  drivers/net/virtio/Makefile                 |   8 +
> > > > > > > > > > > >  drivers/net/{virtio_net.c => virtio/main.c} | 652 +++++++++-----------
> > > > > > > > > > > >  drivers/net/virtio/virtio_net.h             | 359 +++++++++++
> > > > > > > > > > > >  drivers/net/virtio/xsk.c                    | 545 ++++++++++++++++
> > > > > > > > > > > >  drivers/net/virtio/xsk.h                    |  32 +
> > > > > > > > > > > >  9 files changed, 1247 insertions(+), 374 deletions(-)
> > > > > > > > > > > >  create mode 100644 drivers/net/virtio/Kconfig
> > > > > > > > > > > >  create mode 100644 drivers/net/virtio/Makefile
> > > > > > > > > > > >  rename drivers/net/{virtio_net.c => virtio/main.c} (91%)
> > > > > > > > > > > >  create mode 100644 drivers/net/virtio/virtio_net.h
> > > > > > > > > > > >  create mode 100644 drivers/net/virtio/xsk.c
> > > > > > > > > > > >  create mode 100644 drivers/net/virtio/xsk.h
> > > > > > > > > > > >
> > > > > > > > > > > > --
> > > > > > > > > > > > 2.32.0.3.g01195cf9f
> > > > > > > > > > > >
> > > > > > > > > > >
> > > > > > > > > >
> > > > > > > > >
> > > > > > > >
> > > > > >
> > > > >
> > > >
> > >
> >
>

^ permalink raw reply	[flat|nested] 66+ messages in thread

* Re: [PATCH net-next v1 00/19] virtio-net: support AF_XDP zero copy
  2023-10-17 11:19                   ` Xuan Zhuo
  2023-10-18  2:46                     ` Jason Wang
@ 2023-10-18  3:32                     ` Xuan Zhuo
  2023-10-18  3:40                       ` Jason Wang
  1 sibling, 1 reply; 66+ messages in thread
From: Xuan Zhuo @ 2023-10-18  3:32 UTC (permalink / raw)
  To: Xuan Zhuo
  Cc: netdev, David S. Miller, Eric Dumazet, Jakub Kicinski,
	Paolo Abeni, Michael S. Tsirkin, Alexei Starovoitov,
	Daniel Borkmann, Jesper Dangaard Brouer, John Fastabend,
	virtualization, bpf, Jason Wang

On Tue, 17 Oct 2023 19:19:41 +0800, Xuan Zhuo <xuanzhuo@linux.alibaba.com> wrote:
> On Tue, 17 Oct 2023 14:43:33 +0800, Xuan Zhuo <xuanzhuo@linux.alibaba.com> wrote:
> > On Tue, 17 Oct 2023 14:26:01 +0800, Jason Wang <jasowang@redhat.com> wrote:
> > > On Tue, Oct 17, 2023 at 2:17 PM Xuan Zhuo <xuanzhuo@linux.alibaba.com> wrote:
> > > >
> > > > On Tue, 17 Oct 2023 13:27:47 +0800, Jason Wang <jasowang@redhat.com> wrote:
> > > > > On Tue, Oct 17, 2023 at 11:28 AM Jason Wang <jasowang@redhat.com> wrote:
> > > > > >
> > > > > > On Tue, Oct 17, 2023 at 11:26 AM Xuan Zhuo <xuanzhuo@linux.alibaba.com> wrote:
> > > > > > >
> > > > > > > On Tue, 17 Oct 2023 11:20:41 +0800, Jason Wang <jasowang@redhat.com> wrote:
> > > > > > > > On Tue, Oct 17, 2023 at 11:11 AM Xuan Zhuo <xuanzhuo@linux.alibaba.com> wrote:
> > > > > > > > >
> > > > > > > > > On Tue, 17 Oct 2023 10:53:44 +0800, Jason Wang <jasowang@redhat.com> wrote:
> > > > > > > > > > On Mon, Oct 16, 2023 at 8:00 PM Xuan Zhuo <xuanzhuo@linux.alibaba.com> wrote:
> > > > > > > > > > >
> > > > > > > > > > > ## AF_XDP
> > > > > > > > > > >
> > > > > > > > > > > AF_XDP (XDP socket) is an excellent kernel-bypass networking framework. The
> > > > > > > > > > > zero copy feature of xsk (XDP socket) needs to be supported by the driver,
> > > > > > > > > > > and its performance is very good. mlx5 and Intel ixgbe already support this
> > > > > > > > > > > feature. This patch set allows virtio-net to support xsk's zero-copy xmit
> > > > > > > > > > > feature.
> > > > > > > > > > >
> > > > > > > > > > > At present, we have completed some preparation:
> > > > > > > > > > >
> > > > > > > > > > > 1. vq-reset (virtio spec and kernel code)
> > > > > > > > > > > 2. virtio-core premapped dma
> > > > > > > > > > > 3. virtio-net xdp refactor
> > > > > > > > > > >
> > > > > > > > > > > So it is time for virtio-net to complete support for XDP socket zero copy.
> > > > > > > > > > >
> > > > > > > > > > > Virtio-net cannot increase the queue num at will, so xsk shares the queue
> > > > > > > > > > > with the kernel.
> > > > > > > > > > >
> > > > > > > > > > > On the other hand, virtio-net does not support generating an interrupt from
> > > > > > > > > > > the driver manually, so when we wake up tx xmit we use some tricks: if the
> > > > > > > > > > > CPU that ran TX NAPI last time is another CPU, we use an IPI to wake up NAPI
> > > > > > > > > > > on the remote CPU; if it is the local CPU, we wake up NAPI directly.
> > > > > > > > > > >
> > > > > > > > > > > This patch set includes some refactoring of virtio-net to let it support
> > > > > > > > > > > AF_XDP.
> > > > > > > > > > >
> > > > > > > > > > > ## performance
> > > > > > > > > > >
> > > > > > > > > > > ENV: Qemu with vhost-user(polling mode).
> > > > > > > > > > >
> > > > > > > > > > > Sockperf: https://github.com/Mellanox/sockperf
> > > > > > > > > > > I use this tool to send udp packets via kernel syscalls.
> > > > > > > > > > >
> > > > > > > > > > > xmit command: sockperf tp -i 10.0.3.1 -t 1000
> > > > > > > > > > >
> > > > > > > > > > > I write a tool that sends udp packets or recvs udp packets by AF_XDP.
> > > > > > > > > > >
> > > > > > > > > > >                   | Guest APP CPU |Guest Softirq CPU | UDP PPS
> > > > > > > > > > > ------------------|---------------|------------------|------------
> > > > > > > > > > > xmit by syscall   |   100%        |                  |   676,915
> > > > > > > > > > > xmit by xsk       |   59.1%       |   100%           | 5,447,168
> > > > > > > > > > > recv by syscall   |   60%         |   100%           |   932,288
> > > > > > > > > > > recv by xsk       |   35.7%       |   100%           | 3,343,168
> > > > > > > > > >
> > > > > > > > > > Any chance we can get a testpmd result (which I guess should be better
> > > > > > > > > > than PPS above)?
> > > > > > > > >
> > > > > > > > > Do you mean testpmd + DPDK + AF_XDP?
> > > > > > > >
> > > > > > > > Yes.
> > > > > > > >
> > > > > > > > >
> > > > > > > > > Yes. This is probably better, because my tool does more work; it is not the
> > > > > > > > > complete testing tool used by our business.
> > > > > > > >
> > > > > > > > Probably, but it would be appealing for others. Especially considering
> > > > > > > > DPDK supports AF_XDP PMD now.
> > > > > > >
> > > > > > > OK.
> > > > > > >
> > > > > > > Let me try.
> > > > > > >
> > > > > > > But could you start to review firstly?
> > > > > >
> > > > > > Yes, it's in my todo list.
> > > > >
> > > > > Speaking too fast: if it doesn't take too long, I would wait for the
> > > > > result first, as with the netdim series. One reason is that I remember
> > > > > AF_XDP claims only a 10% to 20% loss compared to wire speed, so I'd
> > > > > expect it to be much faster. I vaguely remember even a vhost can give
> > > > > us more than 3M PPS if we disable SMAP, so the numbers here are not as
> > > > > impressive as expected.
> > > >
> > > >
> > > > What is SMAP? Could you give me more info?
> > >
> > > Supervisor Mode Access Prevention
> > >
> > > Vhost suffers from this.
> > >
> > > >
> > > > So if we take 3M as the wire speed, you expect the result
> > > > can reach 2.8M pps/core, right?
> > >
> > > It's AF_XDP that claims to reach 80% of wire speed, if my memory is
> > > correct. So a correct AF_XDP implementation should not fall too far
> > > behind that.
> > >
> > > > Now the recv result is 2.5M (2,463,646) pps/core.
> > > > Do you think there is a huge gap?
> > >
> > > You never describe your testing environment in detail. For example,
> > > is this a virtual environment? What's the CPU model and frequency, etc.?
> > >
> > > Because I have never seen a NIC whose wire speed is 3M.
> > >
> > > >
> > > > My tool builds UDP packets and looks up routes, so it takes much more CPU.
> > >
> > > That's why I suggest you test raw PPS.
> >
> > OK. Let's align some info.
> >
> > 1. My test env is vhost-user: Qemu + vhost-user (polling mode).
> >    I do not use DPDK, because it gave me some trouble.
> >    I use VAPP (https://github.com/fengidri/vapp) as the vhost-user device.
> >    It has two busy-polling threads, one for tx and one for rx:
> >    the tx thread consumes the tx ring and drops the packets;
> >    the rx thread puts packets into the rx ring.
> >
> > 2. My Host CPU: Intel(R) Xeon(R) Platinum 8163 CPU @ 2.50GHz
> >
> > 3. From this http://fast.dpdk.org/doc/perf/DPDK_23_03_Intel_virtio_performance_report.pdf
> >    I think we can agree that the vhost max speed is 8.5 MPPS.
> >    Is that ok?
> >    And the expected AF_XDP pps is about 6 MPPS.
> >
> > 4. About the raw PPS, I agree. I will test with testpmd.
> >
>
> ## testpmd command
>
> ./build/app/dpdk-testpmd -l 1-2 --no-pci --main-lcore=2 \
>         --vdev net_af_xdp0,iface=ens5,queue_count=1,busy_budget=0 \
>         --log-level=pmd.net.af_xdp:8 \
>         -- -i -a --nb-cores=1 --rxq=1 --txq=1 --forward-mode=macswap
>
> ## work without the following patch[0]
>
> testpmd> show port stats all
>
>   ######################## NIC statistics for port 0  ########################
>   RX-packets: 3615824336 RX-missed: 0          RX-bytes:  202486162816
>   RX-errors: 0
>   RX-nombuf:  0
>   TX-packets: 3615795592 TX-errors: 20738      TX-bytes:  202484553152
>
>   Throughput (since last show)
>   Rx-pps:      3790446          Rx-bps:   1698120056
>   Tx-pps:      3790446          Tx-bps:   1698120056
>   ############################################################################
>
>
> ## work with the following patch[0]
>
> testpmd> show port stats all
>
>   ######################## NIC statistics for port 0  ########################
>   RX-packets: 68152727   RX-missed: 0          RX-bytes:  3816552712
>   RX-errors: 0
>   RX-nombuf:  0
>   TX-packets: 68114967   TX-errors: 33216      TX-bytes:  3814438152
>
>   Throughput (since last show)
>   Rx-pps:      6333196          Rx-bps:   2837272088
>   Tx-pps:      6333227          Tx-bps:   2837285936
>   ############################################################################


## virtio PMD in guest with testpmd

testpmd> show port stats all

 ######################## NIC statistics for port 0 ########################
 RX-packets: 19531092064 RX-missed: 0     RX-bytes: 1093741155584
 RX-errors: 0
 RX-nombuf: 0
 TX-packets: 5959955552 TX-errors: 0     TX-bytes: 371030645664


 Throughput (since last show)
 Rx-pps:   8861574     Rx-bps:  3969985208
 Tx-pps:   8861493     Tx-bps:  3969962736
 ############################################################################

## AF_XDP PMD in guest with testpmd

testpmd> show port stats all

  ######################## NIC statistics for port 0  ########################
  RX-packets: 68152727   RX-missed: 0          RX-bytes:  3816552712
  RX-errors: 0
  RX-nombuf:  0
  TX-packets: 68114967   TX-errors: 33216      TX-bytes:  3814438152

  Throughput (since last show)
  Rx-pps:      6333196          Rx-bps:   2837272088
  Tx-pps:      6333227          Tx-bps:   2837285936
  ############################################################################

But AF_XDP consumes more CPU for the tx and rx NAPI (100% and 86% respectively).
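
As an aside, the per-core softirq figures above can be sampled by diffing /proc/stat. The helper below is a minimal standalone sketch I am adding for illustration (it is not part of the patch set): it parses one `cpuN` line; sampling it twice around a run and computing `100.0 * d(softirq) / d(total)` gives that core's softirq share.

```c
#include <stdio.h>
#include <string.h>

/* Parse one "cpuN ..." line from /proc/stat into total and softirq jiffies.
 * Illustrative helper, not part of the patch set. */
static int read_cpu(const char *name, unsigned long long *total,
		    unsigned long long *softirq)
{
	char line[512], tag[16];
	unsigned long long v[8]; /* user nice system idle iowait irq softirq steal */
	FILE *f = fopen("/proc/stat", "r");
	int ret = -1;

	if (!f)
		return -1;
	while (fgets(line, sizeof(line), f)) {
		if (sscanf(line, "%15s %llu %llu %llu %llu %llu %llu %llu %llu",
			   tag, &v[0], &v[1], &v[2], &v[3], &v[4], &v[5],
			   &v[6], &v[7]) == 9 && !strcmp(tag, name)) {
			unsigned long long sum = 0;
			int i;

			for (i = 0; i < 8; i++)
				sum += v[i];
			*total = sum;
			*softirq = v[6];
			ret = 0;
			break;
		}
	}
	fclose(f);
	return ret;
}
```

For example, calling `read_cpu("cpu2", ...)` before and after a testpmd run on the core hosting the tx or rx NAPI yields the kind of 100%/86% softirq shares reported above (the CPU name is an assumption; pick the NAPI core on your system).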

Thanks.

>
> I searched the DPDK code; the DPDK virtio driver has similar code.
>
> virtio_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
> {
> 	[...]
>
> 	for (nb_tx = 0; nb_tx < nb_pkts; nb_tx++) {
>
> 		[...]
>
> 		/* Enqueue Packet buffers */
> 		virtqueue_enqueue_xmit(txvq, txm, slots, use_indirect,
> 			can_push, 0);
> 	}
>
> 	[...]
>
> 	if (likely(nb_tx)) {
> -->		vq_update_avail_idx(vq);
>
> 		if (unlikely(virtqueue_kick_prepare(vq))) {
> 			virtqueue_notify(vq);
> 			PMD_TX_LOG(DEBUG, "Notified backend after xmit");
> 		}
> 	}
> }
>
> ## patch[0]
>
> diff --git a/drivers/virtio/virtio_ring.c b/drivers/virtio/virtio_ring.c
> index 51d8f3299c10..cfe556b5d88f 100644
> --- a/drivers/virtio/virtio_ring.c
> +++ b/drivers/virtio/virtio_ring.c
> @@ -687,12 +687,7 @@ static inline int virtqueue_add_split(struct virtqueue *_vq,
>         avail = vq->split.avail_idx_shadow & (vq->split.vring.num - 1);
>         vq->split.vring.avail->ring[avail] = cpu_to_virtio16(_vq->vdev, head);
>
> -       /* Descriptors and available array need to be set before we expose the
> -        * new available array entries. */
> -       virtio_wmb(vq->weak_barriers);
>         vq->split.avail_idx_shadow++;
> -       vq->split.vring.avail->idx = cpu_to_virtio16(_vq->vdev,
> -                                               vq->split.avail_idx_shadow);
>         vq->num_added++;
>
>         pr_debug("Added buffer head %i to %p\n", head, vq);
> @@ -700,8 +695,12 @@ static inline int virtqueue_add_split(struct virtqueue *_vq,
>
>         /* This is very unlikely, but theoretically possible.  Kick
>          * just in case. */
> -       if (unlikely(vq->num_added == (1 << 16) - 1))
> +       if (unlikely(vq->num_added == (1 << 16) - 1)) {
> +               virtio_wmb(vq->weak_barriers);
> +               vq->split.vring.avail->idx = cpu_to_virtio16(_vq->vdev,
> +                                                            vq->split.avail_idx_shadow);
>                 virtqueue_kick(_vq);
> +       }
>
>         return 0;
>
> @@ -742,6 +741,9 @@ static bool virtqueue_kick_prepare_split(struct virtqueue *_vq)
>          * event. */
>         virtio_mb(vq->weak_barriers);
>
> +       vq->split.vring.avail->idx = cpu_to_virtio16(_vq->vdev,
> +                                               vq->split.avail_idx_shadow);
> +
>         old = vq->split.avail_idx_shadow - vq->num_added;
>         new = vq->split.avail_idx_shadow;
>         vq->num_added = 0;
>
> ---------------
>
> Thanks.
>
>
> >
> > Thanks.
> >
> >
> > >
> > > Thanks
> > >
> > > >
> > > > I am confused.
> > > >
> > > >
> > > > What is SMAP? Could you give me more information?
> > > >
> > > > So if we use 3M as the wire speed, you would expect the result to be 2.8M
> > > > pps/core, right?
> > > >
> > > > Now the recv result is 2.5M (2463646 = 3,343,168/1.357) pps/core. Do you think
> > > > the difference is big?
> > > >
> > > > My tool makes udp packets and looks up routes, so it requires more CPU.
> > > >
> > > > I'm confused. Is there something I misunderstood?
> > > >
> > > > Thanks.
> > > >
> > > > >
> > > > > Thanks
> > > > >
> > > > > >
> > > > > > >
> > > > > > >
> > > > > > > >
> > > > > > > > >
> > > > > > > > > What I noticed is that the hotspot is the driver writing the virtio
> > > > > > > > > descriptors, because the device is in busy-polling mode, so there is a
> > > > > > > > > race between the driver and the device. So I modified the virtio core to
> > > > > > > > > update the avail idx lazily. Then the PPS can reach 10,000,000.
> > > > > > > >
> > > > > > > > Care to post a draft for this?
> > > > > > >
> > > > > > > Yes, I am thinking about this.
> > > > > > > But maybe that only works for split mode. The packed mode has some troubles.
> > > > > >
> > > > > > Ok.
> > > > > >
> > > > > > Thanks
> > > > > >
> > > > > > >
> > > > > > > Thanks.
> > > > > > >
> > > > > > > >
> > > > > > > > Thanks
> > > > > > > >
> > > > > > > > >
> > > > > > > > > Thanks.
> > > > > > > > >
> > > > > > > > > >
> > > > > > > > > > Thanks
> > > > > > > > > >
> > > > > > > > > > >
> > > > > > > > > > > ## maintain
> > > > > > > > > > >
> > > > > > > > > > > I am currently a reviewer for virtio-net. I commit to maintain AF_XDP support in
> > > > > > > > > > > virtio-net.
> > > > > > > > > > >
> > > > > > > > > > > Please review.
> > > > > > > > > > >
> > > > > > > > > > > Thanks.
> > > > > > > > > > >
> > > > > > > > > > > v1:
> > > > > > > > > > >     1. remove two virtio commits. Push this patchset to net-next
> > > > > > > > > > >     2. squash "virtio_net: virtnet_poll_tx support rescheduled" to xsk: support tx
> > > > > > > > > > >     3. fix some warnings
> > > > > > > > > > >
> > > > > > > > > > > Xuan Zhuo (19):
> > > > > > > > > > >   virtio_net: rename free_old_xmit_skbs to free_old_xmit
> > > > > > > > > > >   virtio_net: unify the code for recycling the xmit ptr
> > > > > > > > > > >   virtio_net: independent directory
> > > > > > > > > > >   virtio_net: move to virtio_net.h
> > > > > > > > > > >   virtio_net: add prefix virtnet to all struct/api inside virtio_net.h
> > > > > > > > > > >   virtio_net: separate virtnet_rx_resize()
> > > > > > > > > > >   virtio_net: separate virtnet_tx_resize()
> > > > > > > > > > >   virtio_net: sq support premapped mode
> > > > > > > > > > >   virtio_net: xsk: bind/unbind xsk
> > > > > > > > > > >   virtio_net: xsk: prevent disable tx napi
> > > > > > > > > > >   virtio_net: xsk: tx: support tx
> > > > > > > > > > >   virtio_net: xsk: tx: support wakeup
> > > > > > > > > > >   virtio_net: xsk: tx: virtnet_free_old_xmit() distinguishes xsk buffer
> > > > > > > > > > >   virtio_net: xsk: tx: virtnet_sq_free_unused_buf() check xsk buffer
> > > > > > > > > > >   virtio_net: xsk: rx: introduce add_recvbuf_xsk()
> > > > > > > > > > >   virtio_net: xsk: rx: introduce receive_xsk() to recv xsk buffer
> > > > > > > > > > >   virtio_net: xsk: rx: virtnet_rq_free_unused_buf() check xsk buffer
> > > > > > > > > > >   virtio_net: update tx timeout record
> > > > > > > > > > >   virtio_net: xdp_features add NETDEV_XDP_ACT_XSK_ZEROCOPY
> > > > > > > > > > >
> > > > > > > > > > >  MAINTAINERS                                 |   2 +-
> > > > > > > > > > >  drivers/net/Kconfig                         |   8 +-
> > > > > > > > > > >  drivers/net/Makefile                        |   2 +-
> > > > > > > > > > >  drivers/net/virtio/Kconfig                  |  13 +
> > > > > > > > > > >  drivers/net/virtio/Makefile                 |   8 +
> > > > > > > > > > >  drivers/net/{virtio_net.c => virtio/main.c} | 652 +++++++++-----------
> > > > > > > > > > >  drivers/net/virtio/virtio_net.h             | 359 +++++++++++
> > > > > > > > > > >  drivers/net/virtio/xsk.c                    | 545 ++++++++++++++++
> > > > > > > > > > >  drivers/net/virtio/xsk.h                    |  32 +
> > > > > > > > > > >  9 files changed, 1247 insertions(+), 374 deletions(-)
> > > > > > > > > > >  create mode 100644 drivers/net/virtio/Kconfig
> > > > > > > > > > >  create mode 100644 drivers/net/virtio/Makefile
> > > > > > > > > > >  rename drivers/net/{virtio_net.c => virtio/main.c} (91%)
> > > > > > > > > > >  create mode 100644 drivers/net/virtio/virtio_net.h
> > > > > > > > > > >  create mode 100644 drivers/net/virtio/xsk.c
> > > > > > > > > > >  create mode 100644 drivers/net/virtio/xsk.h
> > > > > > > > > > >
> > > > > > > > > > > --
> > > > > > > > > > > 2.32.0.3.g01195cf9f
> > > > > > > > > > >
> > > > > > > > > >
> > > > > > > > >
> > > > > > > >
> > > > > > >
> > > > >
> > > >
> > >
> >

^ permalink raw reply	[flat|nested] 66+ messages in thread

* Re: [PATCH net-next v1 00/19] virtio-net: support AF_XDP zero copy
  2023-10-18  3:32                     ` Xuan Zhuo
@ 2023-10-18  3:40                       ` Jason Wang
  0 siblings, 0 replies; 66+ messages in thread
From: Jason Wang @ 2023-10-18  3:40 UTC (permalink / raw)
  To: Xuan Zhuo
  Cc: netdev, David S. Miller, Eric Dumazet, Jakub Kicinski,
	Paolo Abeni, Michael S. Tsirkin, Alexei Starovoitov,
	Daniel Borkmann, Jesper Dangaard Brouer, John Fastabend,
	virtualization, bpf

On Wed, Oct 18, 2023 at 11:38 AM Xuan Zhuo <xuanzhuo@linux.alibaba.com> wrote:
>
> On Tue, 17 Oct 2023 19:19:41 +0800, Xuan Zhuo <xuanzhuo@linux.alibaba.com> wrote:
> > On Tue, 17 Oct 2023 14:43:33 +0800, Xuan Zhuo <xuanzhuo@linux.alibaba.com> wrote:
> > > On Tue, 17 Oct 2023 14:26:01 +0800, Jason Wang <jasowang@redhat.com> wrote:
> > > > On Tue, Oct 17, 2023 at 2:17 PM Xuan Zhuo <xuanzhuo@linux.alibaba.com> wrote:
> > > > >
> > > > > On Tue, 17 Oct 2023 13:27:47 +0800, Jason Wang <jasowang@redhat.com> wrote:
> > > > > > On Tue, Oct 17, 2023 at 11:28 AM Jason Wang <jasowang@redhat.com> wrote:
> > > > > > >
> > > > > > > On Tue, Oct 17, 2023 at 11:26 AM Xuan Zhuo <xuanzhuo@linux.alibaba.com> wrote:
> > > > > > > >
> > > > > > > > On Tue, 17 Oct 2023 11:20:41 +0800, Jason Wang <jasowang@redhat.com> wrote:
> > > > > > > > > On Tue, Oct 17, 2023 at 11:11 AM Xuan Zhuo <xuanzhuo@linux.alibaba.com> wrote:
> > > > > > > > > >
> > > > > > > > > > On Tue, 17 Oct 2023 10:53:44 +0800, Jason Wang <jasowang@redhat.com> wrote:
> > > > > > > > > > > On Mon, Oct 16, 2023 at 8:00 PM Xuan Zhuo <xuanzhuo@linux.alibaba.com> wrote:
> > > > > > > > > > > >
> > > > > > > > > > > > ## AF_XDP
> > > > > > > > > > > >
> > > > > > > > > > > > XDP socket (AF_XDP) is an excellent kernel-bypass network framework. The
> > > > > > > > > > > > zero-copy feature of xsk (XDP socket) needs to be supported by the driver,
> > > > > > > > > > > > and the performance of zero copy is very good. mlx5 and Intel ixgbe already
> > > > > > > > > > > > support this feature. This patch set allows virtio-net to support xsk's
> > > > > > > > > > > > zero-copy xmit feature.
> > > > > > > > > > > >
> > > > > > > > > > > > At present, we have completed some preparation:
> > > > > > > > > > > >
> > > > > > > > > > > > 1. vq-reset (virtio spec and kernel code)
> > > > > > > > > > > > 2. virtio-core premapped dma
> > > > > > > > > > > > 3. virtio-net xdp refactor
> > > > > > > > > > > >
> > > > > > > > > > > > So it is time for Virtio-Net to complete the support for the XDP Socket
> > > > > > > > > > > > Zerocopy.
> > > > > > > > > > > >
> > > > > > > > > > > > Virtio-net cannot increase the queue num at will, so xsk shares the queue
> > > > > > > > > > > > with the kernel.
> > > > > > > > > > > >
> > > > > > > > > > > > On the other hand, virtio-net does not support generating interrupts from
> > > > > > > > > > > > the driver manually, so when we wake up tx xmit we use some tricks: if the
> > > > > > > > > > > > CPU that last ran the TX NAPI is a different CPU, use an IPI to wake up
> > > > > > > > > > > > NAPI on that remote CPU; if it is the local CPU, wake up the NAPI directly.
> > > > > > > > > > > >
> > > > > > > > > > > > This patch set includes some refactoring of virtio-net to support
> > > > > > > > > > > > AF_XDP.
> > > > > > > > > > > >
> > > > > > > > > > > > ## performance
> > > > > > > > > > > >
> > > > > > > > > > > > ENV: Qemu with vhost-user(polling mode).
> > > > > > > > > > > >
> > > > > > > > > > > > Sockperf: https://github.com/Mellanox/sockperf
> > > > > > > > > > > > I use this tool to send UDP packets via kernel syscalls.
> > > > > > > > > > > >
> > > > > > > > > > > > xmit command: sockperf tp -i 10.0.3.1 -t 1000
> > > > > > > > > > > >
> > > > > > > > > > > > I wrote a tool that sends or receives UDP packets via AF_XDP.
> > > > > > > > > > > >
> > > > > > > > > > > >                   | Guest APP CPU |Guest Softirq CPU | UDP PPS
> > > > > > > > > > > > ------------------|---------------|------------------|------------
> > > > > > > > > > > > xmit by syscall   |   100%        |                  |   676,915
> > > > > > > > > > > > xmit by xsk       |   59.1%       |   100%           | 5,447,168
> > > > > > > > > > > > recv by syscall   |   60%         |   100%           |   932,288
> > > > > > > > > > > > recv by xsk       |   35.7%       |   100%           | 3,343,168
> > > > > > > > > > >
> > > > > > > > > > > Any chance we can get a testpmd result (which I guess should be better
> > > > > > > > > > > than PPS above)?
> > > > > > > > > >
> > > > > > > > > > Do you mean testpmd + DPDK + AF_XDP?
> > > > > > > > >
> > > > > > > > > Yes.
> > > > > > > > >
> > > > > > > > > >
> > > > > > > > > > Yes. This is probably better because my tool does more work. That is not a
> > > > > > > > > > complete testing tool used by our business.
> > > > > > > > >
> > > > > > > > > Probably, but it would be appealing for others. Especially considering
> > > > > > > > > DPDK supports AF_XDP PMD now.
> > > > > > > >
> > > > > > > > OK.
> > > > > > > >
> > > > > > > > Let me try.
> > > > > > > >
> > > > > > > > But could you start reviewing first?
> > > > > > >
> > > > > > > Yes, it's in my todo list.
> > > > > >
> > > > > > Speaking too fast. I think if it doesn't take too long, I would
> > > > > > wait for the result first, as with the netdim series. One reason is
> > > > > > that I remember AF_XDP claims only a 10% to 20% loss compared to
> > > > > > wire speed, so I'd expect it to be much faster. I vaguely remember
> > > > > > that even vhost can give us more than 3M PPS if we disable SMAP, so
> > > > > > the numbers here are not as impressive as expected.
> > > > >
> > > > >
> > > > > What is SMAP? Could you give me more info?
> > > >
> > > > Supervisor Mode Access Prevention
> > > >
> > > > Vhost suffers from this.
> > > >
> > > > >
> > > > > So if we take 3M as the wire speed, you expect the result
> > > > > to reach 2.8M pps/core, right?
> > > >
> > > > It's AF_XDP that claims to be 80% if my memory is correct. So a
> > > > correct AF_XDP implementation should not fall behind this by too much.
> > > >
> > > > > Now the recv result is 2.5M(2463646) pps/core.
> > > > > Do you think there is a huge gap?
> > > >
> > > > You never describe your testing environment in detail. For example,
> > > > is this a virtual environment? What's the CPU model and frequency, etc.?
> > > >
> > > > Because I have never seen a NIC whose wire speed is 3M.
> > > >
> > > > >
> > > > > My tool builds UDP packets and looks up routes, so it takes much more CPU.
> > > >
> > > > That's why I suggest you test raw PPS.
> > >
> > > OK. Let's align some info.
> > >
> > > 1. My test env is vhost-user: Qemu + vhost-user (polling mode).
> > >    I do not use DPDK, because it gave me some trouble.
> > >    I use VAPP (https://github.com/fengidri/vapp) as the vhost-user device.
> > >    It has two busy-polling threads, one for tx and one for rx:
> > >    the tx thread consumes the tx ring and drops the packets;
> > >    the rx thread puts packets into the rx ring.
> > >
> > > 2. My Host CPU: Intel(R) Xeon(R) Platinum 8163 CPU @ 2.50GHz
> > >
> > > 3. From this http://fast.dpdk.org/doc/perf/DPDK_23_03_Intel_virtio_performance_report.pdf
> > >    I think we can agree that the vhost max speed is 8.5 MPPS.
> > >    Is that ok?
> > >    And the expected AF_XDP pps is about 6 MPPS.
> > >
> > > 4. About the raw PPS, I agree. I will test with testpmd.
> > >
> >
> > ## testpmd command
> >
> > ./build/app/dpdk-testpmd -l 1-2 --no-pci --main-lcore=2 \
> >         --vdev net_af_xdp0,iface=ens5,queue_count=1,busy_budget=0 \
> >         --log-level=pmd.net.af_xdp:8 \
> >         -- -i -a --nb-cores=1 --rxq=1 --txq=1 --forward-mode=macswap
> >
> > ## work without the following patch[0]
> >
> > testpmd> show port stats all
> >
> >   ######################## NIC statistics for port 0  ########################
> >   RX-packets: 3615824336 RX-missed: 0          RX-bytes:  202486162816
> >   RX-errors: 0
> >   RX-nombuf:  0
> >   TX-packets: 3615795592 TX-errors: 20738      TX-bytes:  202484553152
> >
> >   Throughput (since last show)
> >   Rx-pps:      3790446          Rx-bps:   1698120056
> >   Tx-pps:      3790446          Tx-bps:   1698120056
> >   ############################################################################
> >
> >
> > ## work with the following patch[0]
> >
> > testpmd> show port stats all
> >
> >   ######################## NIC statistics for port 0  ########################
> >   RX-packets: 68152727   RX-missed: 0          RX-bytes:  3816552712
> >   RX-errors: 0
> >   RX-nombuf:  0
> >   TX-packets: 68114967   TX-errors: 33216      TX-bytes:  3814438152
> >
> >   Throughput (since last show)
> >   Rx-pps:      6333196          Rx-bps:   2837272088
> >   Tx-pps:      6333227          Tx-bps:   2837285936
> >   ############################################################################
>
>
> ## virtio PMD in guest with testpmd
>
> testpmd> show port stats all
>
>  ######################## NIC statistics for port 0 ########################
>  RX-packets: 19531092064 RX-missed: 0     RX-bytes: 1093741155584
>  RX-errors: 0
>  RX-nombuf: 0
>  TX-packets: 5959955552 TX-errors: 0     TX-bytes: 371030645664
>
>
>  Throughput (since last show)
>  Rx-pps:   8861574     Rx-bps:  3969985208
>  Tx-pps:   8861493     Tx-bps:  3969962736
>  ############################################################################
>
> ## AF_XDP PMD in guest with testpmd
>
> testpmd> show port stats all
>
>   ######################## NIC statistics for port 0  ########################
>   RX-packets: 68152727   RX-missed: 0          RX-bytes:  3816552712
>   RX-errors: 0
>   RX-nombuf:  0
>   TX-packets: 68114967   TX-errors: 33216      TX-bytes:  3814438152
>
>   Throughput (since last show)
>   Rx-pps:      6333196          Rx-bps:   2837272088
>   Tx-pps:      6333227          Tx-bps:   2837285936
>   ############################################################################
>
> But AF_XDP consumes more CPU for the tx and rx NAPI (100% and 86% respectively).

Thanks for the testing. This is expected.

I will look at the series in detail.

Thanks

>
> Thanks.
>
> >
> > I searched the DPDK code; the DPDK virtio driver has similar code.
> >
> > virtio_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
> > {
> >       [...]
> >
> >       for (nb_tx = 0; nb_tx < nb_pkts; nb_tx++) {
> >
> >               [...]
> >
> >               /* Enqueue Packet buffers */
> >               virtqueue_enqueue_xmit(txvq, txm, slots, use_indirect,
> >                       can_push, 0);
> >       }
> >
> >       [...]
> >
> >       if (likely(nb_tx)) {
> > -->           vq_update_avail_idx(vq);
> >
> >               if (unlikely(virtqueue_kick_prepare(vq))) {
> >                       virtqueue_notify(vq);
> >                       PMD_TX_LOG(DEBUG, "Notified backend after xmit");
> >               }
> >       }
> > }
> >
> > ## patch[0]
> >
> > diff --git a/drivers/virtio/virtio_ring.c b/drivers/virtio/virtio_ring.c
> > index 51d8f3299c10..cfe556b5d88f 100644
> > --- a/drivers/virtio/virtio_ring.c
> > +++ b/drivers/virtio/virtio_ring.c
> > @@ -687,12 +687,7 @@ static inline int virtqueue_add_split(struct virtqueue *_vq,
> >         avail = vq->split.avail_idx_shadow & (vq->split.vring.num - 1);
> >         vq->split.vring.avail->ring[avail] = cpu_to_virtio16(_vq->vdev, head);
> >
> > -       /* Descriptors and available array need to be set before we expose the
> > -        * new available array entries. */
> > -       virtio_wmb(vq->weak_barriers);
> >         vq->split.avail_idx_shadow++;
> > -       vq->split.vring.avail->idx = cpu_to_virtio16(_vq->vdev,
> > -                                               vq->split.avail_idx_shadow);
> >         vq->num_added++;
> >
> >         pr_debug("Added buffer head %i to %p\n", head, vq);
> > @@ -700,8 +695,12 @@ static inline int virtqueue_add_split(struct virtqueue *_vq,
> >
> >         /* This is very unlikely, but theoretically possible.  Kick
> >          * just in case. */
> > -       if (unlikely(vq->num_added == (1 << 16) - 1))
> > +       if (unlikely(vq->num_added == (1 << 16) - 1)) {
> > +               virtio_wmb(vq->weak_barriers);
> > +               vq->split.vring.avail->idx = cpu_to_virtio16(_vq->vdev,
> > +                                                            vq->split.avail_idx_shadow);
> >                 virtqueue_kick(_vq);
> > +       }
> >
> >         return 0;
> >
> > @@ -742,6 +741,9 @@ static bool virtqueue_kick_prepare_split(struct virtqueue *_vq)
> >          * event. */
> >         virtio_mb(vq->weak_barriers);
> >
> > +       vq->split.vring.avail->idx = cpu_to_virtio16(_vq->vdev,
> > +                                               vq->split.avail_idx_shadow);
> > +
> >         old = vq->split.avail_idx_shadow - vq->num_added;
> >         new = vq->split.avail_idx_shadow;
> >         vq->num_added = 0;
> >
> > ---------------
> >
> > Thanks.
> >
> >
> > >
> > > Thanks.
> > >
> > >
> > > >
> > > > Thanks
> > > >
> > > > >
> > > > > I am confused.
> > > > >
> > > > >
> > > > > What is SMAP? Could you give me more information?
> > > > >
> > > > > So if we use 3M as the wire speed, you would expect the result to be 2.8M
> > > > > pps/core, right?
> > > > >
> > > > > Now the recv result is 2.5M (2463646 = 3,343,168/1.357) pps/core. Do you think
> > > > > the difference is big?
> > > > >
> > > > > My tool makes udp packets and looks up routes, so it requires more CPU.
> > > > >
> > > > > I'm confused. Is there something I misunderstood?
> > > > >
> > > > > Thanks.
> > > > >
> > > > > >
> > > > > > Thanks
> > > > > >
> > > > > > >
> > > > > > > >
> > > > > > > >
> > > > > > > > >
> > > > > > > > > >
> > > > > > > > > > What I noticed is that the hotspot is the driver writing the virtio
> > > > > > > > > > descriptors, because the device is in busy-polling mode, so there is a
> > > > > > > > > > race between the driver and the device. So I modified the virtio core to
> > > > > > > > > > update the avail idx lazily. Then the PPS can reach 10,000,000.
> > > > > > > > >
> > > > > > > > > Care to post a draft for this?
> > > > > > > >
> > > > > > > > Yes, I am thinking about this.
> > > > > > > > But maybe that only works for split mode. The packed mode has some troubles.
> > > > > > >
> > > > > > > Ok.
> > > > > > >
> > > > > > > Thanks
> > > > > > >
> > > > > > > >
> > > > > > > > Thanks.
> > > > > > > >
> > > > > > > > >
> > > > > > > > > Thanks
> > > > > > > > >
> > > > > > > > > >
> > > > > > > > > > Thanks.
> > > > > > > > > >
> > > > > > > > > > >
> > > > > > > > > > > Thanks
> > > > > > > > > > >
> > > > > > > > > > > >
> > > > > > > > > > > > ## maintain
> > > > > > > > > > > >
> > > > > > > > > > > > I am currently a reviewer for virtio-net. I commit to maintain AF_XDP support in
> > > > > > > > > > > > virtio-net.
> > > > > > > > > > > >
> > > > > > > > > > > > Please review.
> > > > > > > > > > > >
> > > > > > > > > > > > Thanks.
> > > > > > > > > > > >
> > > > > > > > > > > > v1:
> > > > > > > > > > > >     1. remove two virtio commits. Push this patchset to net-next
> > > > > > > > > > > >     2. squash "virtio_net: virtnet_poll_tx support rescheduled" to xsk: support tx
> > > > > > > > > > > >     3. fix some warnings
> > > > > > > > > > > >
> > > > > > > > > > > > Xuan Zhuo (19):
> > > > > > > > > > > >   virtio_net: rename free_old_xmit_skbs to free_old_xmit
> > > > > > > > > > > >   virtio_net: unify the code for recycling the xmit ptr
> > > > > > > > > > > >   virtio_net: independent directory
> > > > > > > > > > > >   virtio_net: move to virtio_net.h
> > > > > > > > > > > >   virtio_net: add prefix virtnet to all struct/api inside virtio_net.h
> > > > > > > > > > > >   virtio_net: separate virtnet_rx_resize()
> > > > > > > > > > > >   virtio_net: separate virtnet_tx_resize()
> > > > > > > > > > > >   virtio_net: sq support premapped mode
> > > > > > > > > > > >   virtio_net: xsk: bind/unbind xsk
> > > > > > > > > > > >   virtio_net: xsk: prevent disable tx napi
> > > > > > > > > > > >   virtio_net: xsk: tx: support tx
> > > > > > > > > > > >   virtio_net: xsk: tx: support wakeup
> > > > > > > > > > > >   virtio_net: xsk: tx: virtnet_free_old_xmit() distinguishes xsk buffer
> > > > > > > > > > > >   virtio_net: xsk: tx: virtnet_sq_free_unused_buf() check xsk buffer
> > > > > > > > > > > >   virtio_net: xsk: rx: introduce add_recvbuf_xsk()
> > > > > > > > > > > >   virtio_net: xsk: rx: introduce receive_xsk() to recv xsk buffer
> > > > > > > > > > > >   virtio_net: xsk: rx: virtnet_rq_free_unused_buf() check xsk buffer
> > > > > > > > > > > >   virtio_net: update tx timeout record
> > > > > > > > > > > >   virtio_net: xdp_features add NETDEV_XDP_ACT_XSK_ZEROCOPY
> > > > > > > > > > > >
> > > > > > > > > > > >  MAINTAINERS                                 |   2 +-
> > > > > > > > > > > >  drivers/net/Kconfig                         |   8 +-
> > > > > > > > > > > >  drivers/net/Makefile                        |   2 +-
> > > > > > > > > > > >  drivers/net/virtio/Kconfig                  |  13 +
> > > > > > > > > > > >  drivers/net/virtio/Makefile                 |   8 +
> > > > > > > > > > > >  drivers/net/{virtio_net.c => virtio/main.c} | 652 +++++++++-----------
> > > > > > > > > > > >  drivers/net/virtio/virtio_net.h             | 359 +++++++++++
> > > > > > > > > > > >  drivers/net/virtio/xsk.c                    | 545 ++++++++++++++++
> > > > > > > > > > > >  drivers/net/virtio/xsk.h                    |  32 +
> > > > > > > > > > > >  9 files changed, 1247 insertions(+), 374 deletions(-)
> > > > > > > > > > > >  create mode 100644 drivers/net/virtio/Kconfig
> > > > > > > > > > > >  create mode 100644 drivers/net/virtio/Makefile
> > > > > > > > > > > >  rename drivers/net/{virtio_net.c => virtio/main.c} (91%)
> > > > > > > > > > > >  create mode 100644 drivers/net/virtio/virtio_net.h
> > > > > > > > > > > >  create mode 100644 drivers/net/virtio/xsk.c
> > > > > > > > > > > >  create mode 100644 drivers/net/virtio/xsk.h
> > > > > > > > > > > >
> > > > > > > > > > > > --
> > > > > > > > > > > > 2.32.0.3.g01195cf9f
> > > > > > > > > > > >
> > > > > > > > > > >
> > > > > > > > > >
> > > > > > > > >
> > > > > > > >
> > > > > >
> > > > >
> > > >
> > >
>


^ permalink raw reply	[flat|nested] 66+ messages in thread

* Re: [PATCH net-next v1 01/19] virtio_net: rename free_old_xmit_skbs to free_old_xmit
  2023-10-16 12:00 ` [PATCH net-next v1 01/19] virtio_net: rename free_old_xmit_skbs to free_old_xmit Xuan Zhuo
@ 2023-10-19  4:17   ` Jason Wang
  0 siblings, 0 replies; 66+ messages in thread
From: Jason Wang @ 2023-10-19  4:17 UTC (permalink / raw)
  To: Xuan Zhuo
  Cc: netdev, David S. Miller, Eric Dumazet, Jakub Kicinski,
	Paolo Abeni, Michael S. Tsirkin, Alexei Starovoitov,
	Daniel Borkmann, Jesper Dangaard Brouer, John Fastabend,
	virtualization, bpf

On Mon, Oct 16, 2023 at 8:00 PM Xuan Zhuo <xuanzhuo@linux.alibaba.com> wrote:
>
> Since free_old_xmit_skbs deals not only with skbs but also with XDP frames
> and, subsequently, xsk buffers, rename the function to free_old_xmit.
>
> Signed-off-by: Xuan Zhuo <xuanzhuo@linux.alibaba.com>

Acked-by: Jason Wang <jasowang@redhat.com>

Thanks

> ---
>  drivers/net/virtio_net.c | 10 +++++-----
>  1 file changed, 5 insertions(+), 5 deletions(-)
>
> diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
> index fe7f314d65c9..3d87386d8220 100644
> --- a/drivers/net/virtio_net.c
> +++ b/drivers/net/virtio_net.c
> @@ -744,7 +744,7 @@ static void virtnet_rq_set_premapped(struct virtnet_info *vi)
>         }
>  }
>
> -static void free_old_xmit_skbs(struct send_queue *sq, bool in_napi)
> +static void free_old_xmit(struct send_queue *sq, bool in_napi)
>  {
>         unsigned int len;
>         unsigned int packets = 0;
> @@ -816,7 +816,7 @@ static void check_sq_full_and_disable(struct virtnet_info *vi,
>                                 virtqueue_napi_schedule(&sq->napi, sq->vq);
>                 } else if (unlikely(!virtqueue_enable_cb_delayed(sq->vq))) {
>                         /* More just got used, free them then recheck. */
> -                       free_old_xmit_skbs(sq, false);
> +                       free_old_xmit(sq, false);
>                         if (sq->vq->num_free >= 2+MAX_SKB_FRAGS) {
>                                 netif_start_subqueue(dev, qnum);
>                                 virtqueue_disable_cb(sq->vq);
> @@ -2124,7 +2124,7 @@ static void virtnet_poll_cleantx(struct receive_queue *rq)
>
>                 do {
>                         virtqueue_disable_cb(sq->vq);
> -                       free_old_xmit_skbs(sq, true);
> +                       free_old_xmit(sq, true);
>                 } while (unlikely(!virtqueue_enable_cb_delayed(sq->vq)));
>
>                 if (sq->vq->num_free >= 2 + MAX_SKB_FRAGS)
> @@ -2246,7 +2246,7 @@ static int virtnet_poll_tx(struct napi_struct *napi, int budget)
>         txq = netdev_get_tx_queue(vi->dev, index);
>         __netif_tx_lock(txq, raw_smp_processor_id());
>         virtqueue_disable_cb(sq->vq);
> -       free_old_xmit_skbs(sq, true);
> +       free_old_xmit(sq, true);
>
>         if (sq->vq->num_free >= 2 + MAX_SKB_FRAGS)
>                 netif_tx_wake_queue(txq);
> @@ -2336,7 +2336,7 @@ static netdev_tx_t start_xmit(struct sk_buff *skb, struct net_device *dev)
>                 if (use_napi)
>                         virtqueue_disable_cb(sq->vq);
>
> -               free_old_xmit_skbs(sq, false);
> +               free_old_xmit(sq, false);
>
>         } while (use_napi && kick &&
>                unlikely(!virtqueue_enable_cb_delayed(sq->vq)));
> --
> 2.32.0.3.g01195cf9f
>


^ permalink raw reply	[flat|nested] 66+ messages in thread

* Re: [PATCH net-next v1 02/19] virtio_net: unify the code for recycling the xmit ptr
  2023-10-16 12:00 ` [PATCH net-next v1 02/19] virtio_net: unify the code for recycling the xmit ptr Xuan Zhuo
@ 2023-10-19  4:23   ` Jason Wang
  0 siblings, 0 replies; 66+ messages in thread
From: Jason Wang @ 2023-10-19  4:23 UTC (permalink / raw)
  To: Xuan Zhuo
  Cc: netdev, David S. Miller, Eric Dumazet, Jakub Kicinski,
	Paolo Abeni, Michael S. Tsirkin, Alexei Starovoitov,
	Daniel Borkmann, Jesper Dangaard Brouer, John Fastabend,
	virtualization, bpf

On Mon, Oct 16, 2023 at 8:00 PM Xuan Zhuo <xuanzhuo@linux.alibaba.com> wrote:
>
> There are two nearly identical, independent implementations, which
> makes it inconvenient to add new buffer types later. Extract the
> common code into one function and call it everywhere old xmit
> pointers are recycled.
>
> Signed-off-by: Xuan Zhuo <xuanzhuo@linux.alibaba.com>

Acked-by: Jason Wang <jasowang@redhat.com>

Thanks

> ---
>  drivers/net/virtio_net.c | 76 +++++++++++++++++-----------------------
>  1 file changed, 33 insertions(+), 43 deletions(-)
>
> diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
> index 3d87386d8220..6cf77b6acdab 100644
> --- a/drivers/net/virtio_net.c
> +++ b/drivers/net/virtio_net.c
> @@ -352,6 +352,30 @@ static struct xdp_frame *ptr_to_xdp(void *ptr)
>         return (struct xdp_frame *)((unsigned long)ptr & ~VIRTIO_XDP_FLAG);
>  }
>
> +static void __free_old_xmit(struct send_queue *sq, bool in_napi,
> +                           struct virtnet_sq_stats *stats)
> +{
> +       unsigned int len;
> +       void *ptr;
> +
> +       while ((ptr = virtqueue_get_buf(sq->vq, &len)) != NULL) {
> +               if (!is_xdp_frame(ptr)) {
> +                       struct sk_buff *skb = ptr;
> +
> +                       pr_debug("Sent skb %p\n", skb);
> +
> +                       stats->bytes += skb->len;
> +                       napi_consume_skb(skb, in_napi);
> +               } else {
> +                       struct xdp_frame *frame = ptr_to_xdp(ptr);
> +
> +                       stats->bytes += xdp_get_frame_len(frame);
> +                       xdp_return_frame(frame);
> +               }
> +               stats->packets++;
> +       }
> +}
> +
>  /* Converting between virtqueue no. and kernel tx/rx queue no.
>   * 0:rx0 1:tx0 2:rx1 3:tx1 ... 2N:rxN 2N+1:txN 2N+2:cvq
>   */
> @@ -746,37 +770,19 @@ static void virtnet_rq_set_premapped(struct virtnet_info *vi)
>
>  static void free_old_xmit(struct send_queue *sq, bool in_napi)
>  {
> -       unsigned int len;
> -       unsigned int packets = 0;
> -       unsigned int bytes = 0;
> -       void *ptr;
> +       struct virtnet_sq_stats stats = {};
>
> -       while ((ptr = virtqueue_get_buf(sq->vq, &len)) != NULL) {
> -               if (likely(!is_xdp_frame(ptr))) {
> -                       struct sk_buff *skb = ptr;
> -
> -                       pr_debug("Sent skb %p\n", skb);
> -
> -                       bytes += skb->len;
> -                       napi_consume_skb(skb, in_napi);
> -               } else {
> -                       struct xdp_frame *frame = ptr_to_xdp(ptr);
> -
> -                       bytes += xdp_get_frame_len(frame);
> -                       xdp_return_frame(frame);
> -               }
> -               packets++;
> -       }
> +       __free_old_xmit(sq, in_napi, &stats);
>
>         /* Avoid overhead when no packets have been processed
>          * happens when called speculatively from start_xmit.
>          */
> -       if (!packets)
> +       if (!stats.packets)
>                 return;
>
>         u64_stats_update_begin(&sq->stats.syncp);
> -       sq->stats.bytes += bytes;
> -       sq->stats.packets += packets;
> +       sq->stats.bytes += stats.bytes;
> +       sq->stats.packets += stats.packets;
>         u64_stats_update_end(&sq->stats.syncp);
>  }
>
> @@ -915,15 +921,12 @@ static int virtnet_xdp_xmit(struct net_device *dev,
>                             int n, struct xdp_frame **frames, u32 flags)
>  {
>         struct virtnet_info *vi = netdev_priv(dev);
> +       struct virtnet_sq_stats stats = {};
>         struct receive_queue *rq = vi->rq;
>         struct bpf_prog *xdp_prog;
>         struct send_queue *sq;
> -       unsigned int len;
> -       int packets = 0;
> -       int bytes = 0;
>         int nxmit = 0;
>         int kicks = 0;
> -       void *ptr;
>         int ret;
>         int i;
>
> @@ -942,20 +945,7 @@ static int virtnet_xdp_xmit(struct net_device *dev,
>         }
>
>         /* Free up any pending old buffers before queueing new ones. */
> -       while ((ptr = virtqueue_get_buf(sq->vq, &len)) != NULL) {
> -               if (likely(is_xdp_frame(ptr))) {
> -                       struct xdp_frame *frame = ptr_to_xdp(ptr);
> -
> -                       bytes += xdp_get_frame_len(frame);
> -                       xdp_return_frame(frame);
> -               } else {
> -                       struct sk_buff *skb = ptr;
> -
> -                       bytes += skb->len;
> -                       napi_consume_skb(skb, false);
> -               }
> -               packets++;
> -       }
> +       __free_old_xmit(sq, false, &stats);
>
>         for (i = 0; i < n; i++) {
>                 struct xdp_frame *xdpf = frames[i];
> @@ -975,8 +965,8 @@ static int virtnet_xdp_xmit(struct net_device *dev,
>         }
>  out:
>         u64_stats_update_begin(&sq->stats.syncp);
> -       sq->stats.bytes += bytes;
> -       sq->stats.packets += packets;
> +       sq->stats.bytes += stats.bytes;
> +       sq->stats.packets += stats.packets;
>         sq->stats.xdp_tx += n;
>         sq->stats.xdp_tx_drops += n - nxmit;
>         sq->stats.kicks += kicks;
> --
> 2.32.0.3.g01195cf9f
>


^ permalink raw reply	[flat|nested] 66+ messages in thread

* Re: [PATCH net-next v1 03/19] virtio_net: independent directory
  2023-10-16 12:00 ` [PATCH net-next v1 03/19] virtio_net: independent directory Xuan Zhuo
@ 2023-10-19  6:10   ` Jason Wang
  0 siblings, 0 replies; 66+ messages in thread
From: Jason Wang @ 2023-10-19  6:10 UTC (permalink / raw)
  To: Xuan Zhuo
  Cc: netdev, David S. Miller, Eric Dumazet, Jakub Kicinski,
	Paolo Abeni, Michael S. Tsirkin, Alexei Starovoitov,
	Daniel Borkmann, Jesper Dangaard Brouer, John Fastabend,
	virtualization, bpf

On Mon, Oct 16, 2023 at 8:01 PM Xuan Zhuo <xuanzhuo@linux.alibaba.com> wrote:
>
> Create a separate directory for virtio-net. AF_XDP support will be
> added later, introducing a separate xsk.c file, so the driver needs a
> directory of its own.
>
> Signed-off-by: Xuan Zhuo <xuanzhuo@linux.alibaba.com>

Acked-by: Jason Wang <jasowang@redhat.com>

Thanks

> ---
>  MAINTAINERS                                 |  2 +-
>  drivers/net/Kconfig                         |  8 +-------
>  drivers/net/Makefile                        |  2 +-
>  drivers/net/virtio/Kconfig                  | 13 +++++++++++++
>  drivers/net/virtio/Makefile                 |  8 ++++++++
>  drivers/net/{virtio_net.c => virtio/main.c} |  0
>  6 files changed, 24 insertions(+), 9 deletions(-)
>  create mode 100644 drivers/net/virtio/Kconfig
>  create mode 100644 drivers/net/virtio/Makefile
>  rename drivers/net/{virtio_net.c => virtio/main.c} (100%)
>
> diff --git a/MAINTAINERS b/MAINTAINERS
> index 9c186c214c54..e4fbcbc100e3 100644
> --- a/MAINTAINERS
> +++ b/MAINTAINERS
> @@ -22768,7 +22768,7 @@ F:      Documentation/devicetree/bindings/virtio/
>  F:     Documentation/driver-api/virtio/
>  F:     drivers/block/virtio_blk.c
>  F:     drivers/crypto/virtio/
> -F:     drivers/net/virtio_net.c
> +F:     drivers/net/virtio/
>  F:     drivers/vdpa/
>  F:     drivers/virtio/
>  F:     include/linux/vdpa.h
> diff --git a/drivers/net/Kconfig b/drivers/net/Kconfig
> index 44eeb5d61ba9..54ee6fa4f4a6 100644
> --- a/drivers/net/Kconfig
> +++ b/drivers/net/Kconfig
> @@ -430,13 +430,7 @@ config VETH
>           When one end receives the packet it appears on its pair and vice
>           versa.
>
> -config VIRTIO_NET
> -       tristate "Virtio network driver"
> -       depends on VIRTIO
> -       select NET_FAILOVER
> -       help
> -         This is the virtual network driver for virtio.  It can be used with
> -         QEMU based VMMs (like KVM or Xen).  Say Y or M.
> +source "drivers/net/virtio/Kconfig"
>
>  config NLMON
>         tristate "Virtual netlink monitoring device"
> diff --git a/drivers/net/Makefile b/drivers/net/Makefile
> index e26f98f897c5..47537dd0f120 100644
> --- a/drivers/net/Makefile
> +++ b/drivers/net/Makefile
> @@ -31,7 +31,7 @@ obj-$(CONFIG_NET_TEAM) += team/
>  obj-$(CONFIG_TUN) += tun.o
>  obj-$(CONFIG_TAP) += tap.o
>  obj-$(CONFIG_VETH) += veth.o
> -obj-$(CONFIG_VIRTIO_NET) += virtio_net.o
> +obj-$(CONFIG_VIRTIO_NET) += virtio/
>  obj-$(CONFIG_VXLAN) += vxlan/
>  obj-$(CONFIG_GENEVE) += geneve.o
>  obj-$(CONFIG_BAREUDP) += bareudp.o
> diff --git a/drivers/net/virtio/Kconfig b/drivers/net/virtio/Kconfig
> new file mode 100644
> index 000000000000..d8ccb3ac49df
> --- /dev/null
> +++ b/drivers/net/virtio/Kconfig
> @@ -0,0 +1,13 @@
> +# SPDX-License-Identifier: GPL-2.0-only
> +#
> +# virtio-net device configuration
> +#
> +config VIRTIO_NET
> +       tristate "Virtio network driver"
> +       depends on VIRTIO
> +       select NET_FAILOVER
> +       help
> +         This is the virtual network driver for virtio.  It can be used with
> +         QEMU based VMMs (like KVM or Xen).
> +
> +         Say Y or M.
> diff --git a/drivers/net/virtio/Makefile b/drivers/net/virtio/Makefile
> new file mode 100644
> index 000000000000..15ed7c97fd4f
> --- /dev/null
> +++ b/drivers/net/virtio/Makefile
> @@ -0,0 +1,8 @@
> +# SPDX-License-Identifier: GPL-2.0
> +#
> +# Makefile for the virtio network device drivers.
> +#
> +
> +obj-$(CONFIG_VIRTIO_NET) += virtio_net.o
> +
> +virtio_net-y := main.o
> diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio/main.c
> similarity index 100%
> rename from drivers/net/virtio_net.c
> rename to drivers/net/virtio/main.c
> --
> 2.32.0.3.g01195cf9f
>


^ permalink raw reply	[flat|nested] 66+ messages in thread
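With the driver in its own directory, the composite-object Makefile makes later additions a one-line change. A hedged sketch of how the xsk.c file promised in the commit message would slot in; the xsk.o line is an assumption about a later patch, not part of this one:

```make
# drivers/net/virtio/Makefile (sketch of the anticipated follow-up)
obj-$(CONFIG_VIRTIO_NET) += virtio_net.o

# main.o exists after this patch; xsk.o is the planned AF_XDP source
# file and is NOT added by patch 03 itself.
virtio_net-y := main.o xsk.o
```

Because virtio_net-y lists the objects linked into virtio_net.ko, the module name seen by users stays virtio_net regardless of how many source files the directory grows.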

* Re: [PATCH net-next v1 04/19] virtio_net: move to virtio_net.h
  2023-10-16 12:00 ` [PATCH net-next v1 04/19] virtio_net: move to virtio_net.h Xuan Zhuo
@ 2023-10-19  6:12   ` Jason Wang
  2023-10-19  7:16     ` Xuan Zhuo
  0 siblings, 1 reply; 66+ messages in thread
From: Jason Wang @ 2023-10-19  6:12 UTC (permalink / raw)
  To: Xuan Zhuo
  Cc: netdev, David S. Miller, Eric Dumazet, Jakub Kicinski,
	Paolo Abeni, Michael S. Tsirkin, Alexei Starovoitov,
	Daniel Borkmann, Jesper Dangaard Brouer, John Fastabend,
	virtualization, bpf

On Mon, Oct 16, 2023 at 8:00 PM Xuan Zhuo <xuanzhuo@linux.alibaba.com> wrote:
>
> Move some structure definitions and inline functions into the
> virtio_net.h file.

Some of these functions were not inline before the move. I'm not sure
what the criteria are for choosing which functions to move.


>
> Signed-off-by: Xuan Zhuo <xuanzhuo@linux.alibaba.com>
> ---
>  drivers/net/virtio/main.c       | 252 +------------------------------
>  drivers/net/virtio/virtio_net.h | 256 ++++++++++++++++++++++++++++++++
>  2 files changed, 258 insertions(+), 250 deletions(-)
>  create mode 100644 drivers/net/virtio/virtio_net.h
>
> diff --git a/drivers/net/virtio/main.c b/drivers/net/virtio/main.c
> index 6cf77b6acdab..d8b6c0d86f29 100644
> --- a/drivers/net/virtio/main.c
> +++ b/drivers/net/virtio/main.c
> @@ -6,7 +6,6 @@
>  //#define DEBUG
>  #include <linux/netdevice.h>
>  #include <linux/etherdevice.h>
> -#include <linux/ethtool.h>
>  #include <linux/module.h>
>  #include <linux/virtio.h>
>  #include <linux/virtio_net.h>
> @@ -16,7 +15,6 @@
>  #include <linux/if_vlan.h>
>  #include <linux/slab.h>
>  #include <linux/cpu.h>
> -#include <linux/average.h>
>  #include <linux/filter.h>
>  #include <linux/kernel.h>
>  #include <net/route.h>
> @@ -24,6 +22,8 @@
>  #include <net/net_failover.h>
>  #include <net/netdev_rx_queue.h>
>
> +#include "virtio_net.h"
> +
>  static int napi_weight = NAPI_POLL_WEIGHT;
>  module_param(napi_weight, int, 0444);
>
> @@ -45,15 +45,6 @@ module_param(napi_tx, bool, 0644);
>  #define VIRTIO_XDP_TX          BIT(0)
>  #define VIRTIO_XDP_REDIR       BIT(1)
>
> -#define VIRTIO_XDP_FLAG        BIT(0)
> -
> -/* RX packet size EWMA. The average packet size is used to determine the packet
> - * buffer size when refilling RX rings. As the entire RX ring may be refilled
> - * at once, the weight is chosen so that the EWMA will be insensitive to short-
> - * term, transient changes in packet size.
> - */
> -DECLARE_EWMA(pkt_len, 0, 64)
> -
>  #define VIRTNET_DRIVER_VERSION "1.0.0"
>
>  static const unsigned long guest_offloads[] = {
> @@ -74,36 +65,6 @@ static const unsigned long guest_offloads[] = {
>                                 (1ULL << VIRTIO_NET_F_GUEST_USO4) | \
>                                 (1ULL << VIRTIO_NET_F_GUEST_USO6))
>
> -struct virtnet_stat_desc {
> -       char desc[ETH_GSTRING_LEN];
> -       size_t offset;
> -};
> -
> -struct virtnet_sq_stats {
> -       struct u64_stats_sync syncp;
> -       u64 packets;
> -       u64 bytes;
> -       u64 xdp_tx;
> -       u64 xdp_tx_drops;
> -       u64 kicks;
> -       u64 tx_timeouts;
> -};
> -
> -struct virtnet_rq_stats {
> -       struct u64_stats_sync syncp;
> -       u64 packets;
> -       u64 bytes;
> -       u64 drops;
> -       u64 xdp_packets;
> -       u64 xdp_tx;
> -       u64 xdp_redirects;
> -       u64 xdp_drops;
> -       u64 kicks;
> -};
> -
> -#define VIRTNET_SQ_STAT(m)     offsetof(struct virtnet_sq_stats, m)
> -#define VIRTNET_RQ_STAT(m)     offsetof(struct virtnet_rq_stats, m)
> -
>  static const struct virtnet_stat_desc virtnet_sq_stats_desc[] = {
>         { "packets",            VIRTNET_SQ_STAT(packets) },
>         { "bytes",              VIRTNET_SQ_STAT(bytes) },
> @@ -127,80 +88,6 @@ static const struct virtnet_stat_desc virtnet_rq_stats_desc[] = {
>  #define VIRTNET_SQ_STATS_LEN   ARRAY_SIZE(virtnet_sq_stats_desc)
>  #define VIRTNET_RQ_STATS_LEN   ARRAY_SIZE(virtnet_rq_stats_desc)
>
> -struct virtnet_interrupt_coalesce {
> -       u32 max_packets;
> -       u32 max_usecs;
> -};
> -
> -/* The dma information of pages allocated at a time. */
> -struct virtnet_rq_dma {
> -       dma_addr_t addr;
> -       u32 ref;
> -       u16 len;
> -       u16 need_sync;
> -};
> -
> -/* Internal representation of a send virtqueue */
> -struct send_queue {
> -       /* Virtqueue associated with this send _queue */
> -       struct virtqueue *vq;
> -
> -       /* TX: fragments + linear part + virtio header */
> -       struct scatterlist sg[MAX_SKB_FRAGS + 2];
> -
> -       /* Name of the send queue: output.$index */
> -       char name[16];
> -
> -       struct virtnet_sq_stats stats;
> -
> -       struct virtnet_interrupt_coalesce intr_coal;
> -
> -       struct napi_struct napi;
> -
> -       /* Record whether sq is in reset state. */
> -       bool reset;
> -};
> -
> -/* Internal representation of a receive virtqueue */
> -struct receive_queue {
> -       /* Virtqueue associated with this receive_queue */
> -       struct virtqueue *vq;
> -
> -       struct napi_struct napi;
> -
> -       struct bpf_prog __rcu *xdp_prog;
> -
> -       struct virtnet_rq_stats stats;
> -
> -       struct virtnet_interrupt_coalesce intr_coal;
> -
> -       /* Chain pages by the private ptr. */
> -       struct page *pages;
> -
> -       /* Average packet length for mergeable receive buffers. */
> -       struct ewma_pkt_len mrg_avg_pkt_len;
> -
> -       /* Page frag for packet buffer allocation. */
> -       struct page_frag alloc_frag;
> -
> -       /* RX: fragments + linear part + virtio header */
> -       struct scatterlist sg[MAX_SKB_FRAGS + 2];
> -
> -       /* Min single buffer size for mergeable buffers case. */
> -       unsigned int min_buf_len;
> -
> -       /* Name of this receive queue: input.$index */
> -       char name[16];
> -
> -       struct xdp_rxq_info xdp_rxq;
> -
> -       /* Record the last dma info to free after new pages is allocated. */
> -       struct virtnet_rq_dma *last_dma;
> -
> -       /* Do dma by self */
> -       bool do_dma;
> -};
> -
>  /* This structure can contain rss message with maximum settings for indirection table and keysize
>   * Note, that default structure that describes RSS configuration virtio_net_rss_config
>   * contains same info but can't handle table values.
> @@ -234,88 +121,6 @@ struct control_buf {
>         struct virtio_net_ctrl_coal_vq coal_vq;
>  };
>
> -struct virtnet_info {
> -       struct virtio_device *vdev;
> -       struct virtqueue *cvq;
> -       struct net_device *dev;
> -       struct send_queue *sq;
> -       struct receive_queue *rq;
> -       unsigned int status;
> -
> -       /* Max # of queue pairs supported by the device */
> -       u16 max_queue_pairs;
> -
> -       /* # of queue pairs currently used by the driver */
> -       u16 curr_queue_pairs;
> -
> -       /* # of XDP queue pairs currently used by the driver */
> -       u16 xdp_queue_pairs;
> -
> -       /* xdp_queue_pairs may be 0, when xdp is already loaded. So add this. */
> -       bool xdp_enabled;
> -
> -       /* I like... big packets and I cannot lie! */
> -       bool big_packets;
> -
> -       /* number of sg entries allocated for big packets */
> -       unsigned int big_packets_num_skbfrags;
> -
> -       /* Host will merge rx buffers for big packets (shake it! shake it!) */
> -       bool mergeable_rx_bufs;
> -
> -       /* Host supports rss and/or hash report */
> -       bool has_rss;
> -       bool has_rss_hash_report;
> -       u8 rss_key_size;
> -       u16 rss_indir_table_size;
> -       u32 rss_hash_types_supported;
> -       u32 rss_hash_types_saved;
> -
> -       /* Has control virtqueue */
> -       bool has_cvq;
> -
> -       /* Host can handle any s/g split between our header and packet data */
> -       bool any_header_sg;
> -
> -       /* Packet virtio header size */
> -       u8 hdr_len;
> -
> -       /* Work struct for delayed refilling if we run low on memory. */
> -       struct delayed_work refill;
> -
> -       /* Is delayed refill enabled? */
> -       bool refill_enabled;
> -
> -       /* The lock to synchronize the access to refill_enabled */
> -       spinlock_t refill_lock;
> -
> -       /* Work struct for config space updates */
> -       struct work_struct config_work;
> -
> -       /* Does the affinity hint is set for virtqueues? */
> -       bool affinity_hint_set;
> -
> -       /* CPU hotplug instances for online & dead */
> -       struct hlist_node node;
> -       struct hlist_node node_dead;
> -
> -       struct control_buf *ctrl;
> -
> -       /* Ethtool settings */
> -       u8 duplex;
> -       u32 speed;
> -
> -       /* Interrupt coalescing settings */
> -       struct virtnet_interrupt_coalesce intr_coal_tx;
> -       struct virtnet_interrupt_coalesce intr_coal_rx;
> -
> -       unsigned long guest_offloads;
> -       unsigned long guest_offloads_capable;
> -
> -       /* failover when STANDBY feature enabled */
> -       struct failover *failover;
> -};
> -
>  struct padded_vnet_hdr {
>         struct virtio_net_hdr_v1_hash hdr;
>         /*
> @@ -337,45 +142,11 @@ struct virtio_net_common_hdr {
>  static void virtnet_rq_free_unused_buf(struct virtqueue *vq, void *buf);
>  static void virtnet_sq_free_unused_buf(struct virtqueue *vq, void *buf);
>
> -static bool is_xdp_frame(void *ptr)
> -{
> -       return (unsigned long)ptr & VIRTIO_XDP_FLAG;
> -}
> -
>  static void *xdp_to_ptr(struct xdp_frame *ptr)
>  {
>         return (void *)((unsigned long)ptr | VIRTIO_XDP_FLAG);
>  }

Any reason for not moving this?

Thanks

>


^ permalink raw reply	[flat|nested] 66+ messages in thread

* Re: [PATCH net-next v1 05/19] virtio_net: add prefix virtnet to all struct/api inside virtio_net.h
  2023-10-16 12:00 ` [PATCH net-next v1 05/19] virtio_net: add prefix virtnet to all struct/api inside virtio_net.h Xuan Zhuo
@ 2023-10-19  6:14   ` Jason Wang
  2023-10-19  6:36     ` Michael S. Tsirkin
  0 siblings, 1 reply; 66+ messages in thread
From: Jason Wang @ 2023-10-19  6:14 UTC (permalink / raw)
  To: Xuan Zhuo
  Cc: netdev, David S. Miller, Eric Dumazet, Jakub Kicinski,
	Paolo Abeni, Michael S. Tsirkin, Alexei Starovoitov,
	Daniel Borkmann, Jesper Dangaard Brouer, John Fastabend,
	virtualization, bpf

On Mon, Oct 16, 2023 at 8:01 PM Xuan Zhuo <xuanzhuo@linux.alibaba.com> wrote:
>
> We moved some structures and APIs into the header file, but they are
> not prefixed with virtnet. This patch adds the virtnet prefix to
> them.

What's the benefit of doing this? AFAIK virtio-net is the only user
of virtio_net.h?

Thanks

>
> Signed-off-by: Xuan Zhuo <xuanzhuo@linux.alibaba.com>
> ---
>  drivers/net/virtio/main.c       | 122 ++++++++++++++++----------------
>  drivers/net/virtio/virtio_net.h |  30 ++++----
>  2 files changed, 76 insertions(+), 76 deletions(-)
>
> diff --git a/drivers/net/virtio/main.c b/drivers/net/virtio/main.c
> index d8b6c0d86f29..ba38b6078e1d 100644
> --- a/drivers/net/virtio/main.c
> +++ b/drivers/net/virtio/main.c
> @@ -180,7 +180,7 @@ skb_vnet_common_hdr(struct sk_buff *skb)
>   * private is used to chain pages for big packets, put the whole
>   * most recent used list in the beginning for reuse
>   */
> -static void give_pages(struct receive_queue *rq, struct page *page)
> +static void give_pages(struct virtnet_rq *rq, struct page *page)
>  {
>         struct page *end;
>
> @@ -190,7 +190,7 @@ static void give_pages(struct receive_queue *rq, struct page *page)
>         rq->pages = page;
>  }
>
> -static struct page *get_a_page(struct receive_queue *rq, gfp_t gfp_mask)
> +static struct page *get_a_page(struct virtnet_rq *rq, gfp_t gfp_mask)
>  {
>         struct page *p = rq->pages;
>
> @@ -225,7 +225,7 @@ static void virtqueue_napi_complete(struct napi_struct *napi,
>         opaque = virtqueue_enable_cb_prepare(vq);
>         if (napi_complete_done(napi, processed)) {
>                 if (unlikely(virtqueue_poll(vq, opaque)))
> -                       virtqueue_napi_schedule(napi, vq);
> +                       virtnet_vq_napi_schedule(napi, vq);
>         } else {
>                 virtqueue_disable_cb(vq);
>         }
> @@ -240,7 +240,7 @@ static void skb_xmit_done(struct virtqueue *vq)
>         virtqueue_disable_cb(vq);
>
>         if (napi->weight)
> -               virtqueue_napi_schedule(napi, vq);
> +               virtnet_vq_napi_schedule(napi, vq);
>         else
>                 /* We were probably waiting for more output buffers. */
>                 netif_wake_subqueue(vi->dev, vq2txq(vq));
> @@ -281,7 +281,7 @@ static struct sk_buff *virtnet_build_skb(void *buf, unsigned int buflen,
>
>  /* Called from bottom half context */
>  static struct sk_buff *page_to_skb(struct virtnet_info *vi,
> -                                  struct receive_queue *rq,
> +                                  struct virtnet_rq *rq,
>                                    struct page *page, unsigned int offset,
>                                    unsigned int len, unsigned int truesize,
>                                    unsigned int headroom)
> @@ -380,7 +380,7 @@ static struct sk_buff *page_to_skb(struct virtnet_info *vi,
>         return skb;
>  }
>
> -static void virtnet_rq_unmap(struct receive_queue *rq, void *buf, u32 len)
> +static void virtnet_rq_unmap(struct virtnet_rq *rq, void *buf, u32 len)
>  {
>         struct page *page = virt_to_head_page(buf);
>         struct virtnet_rq_dma *dma;
> @@ -409,7 +409,7 @@ static void virtnet_rq_unmap(struct receive_queue *rq, void *buf, u32 len)
>         put_page(page);
>  }
>
> -static void *virtnet_rq_get_buf(struct receive_queue *rq, u32 *len, void **ctx)
> +static void *virtnet_rq_get_buf(struct virtnet_rq *rq, u32 *len, void **ctx)
>  {
>         void *buf;
>
> @@ -420,7 +420,7 @@ static void *virtnet_rq_get_buf(struct receive_queue *rq, u32 *len, void **ctx)
>         return buf;
>  }
>
> -static void *virtnet_rq_detach_unused_buf(struct receive_queue *rq)
> +static void *virtnet_rq_detach_unused_buf(struct virtnet_rq *rq)
>  {
>         void *buf;
>
> @@ -431,7 +431,7 @@ static void *virtnet_rq_detach_unused_buf(struct receive_queue *rq)
>         return buf;
>  }
>
> -static void virtnet_rq_init_one_sg(struct receive_queue *rq, void *buf, u32 len)
> +static void virtnet_rq_init_one_sg(struct virtnet_rq *rq, void *buf, u32 len)
>  {
>         struct virtnet_rq_dma *dma;
>         dma_addr_t addr;
> @@ -456,7 +456,7 @@ static void virtnet_rq_init_one_sg(struct receive_queue *rq, void *buf, u32 len)
>         rq->sg[0].length = len;
>  }
>
> -static void *virtnet_rq_alloc(struct receive_queue *rq, u32 size, gfp_t gfp)
> +static void *virtnet_rq_alloc(struct virtnet_rq *rq, u32 size, gfp_t gfp)
>  {
>         struct page_frag *alloc_frag = &rq->alloc_frag;
>         struct virtnet_rq_dma *dma;
> @@ -530,11 +530,11 @@ static void virtnet_rq_set_premapped(struct virtnet_info *vi)
>         }
>  }
>
> -static void free_old_xmit(struct send_queue *sq, bool in_napi)
> +static void free_old_xmit(struct virtnet_sq *sq, bool in_napi)
>  {
>         struct virtnet_sq_stats stats = {};
>
> -       __free_old_xmit(sq, in_napi, &stats);
> +       virtnet_free_old_xmit(sq, in_napi, &stats);
>
>         /* Avoid overhead when no packets have been processed
>          * happens when called speculatively from start_xmit.
> @@ -550,7 +550,7 @@ static void free_old_xmit(struct send_queue *sq, bool in_napi)
>
>  static void check_sq_full_and_disable(struct virtnet_info *vi,
>                                       struct net_device *dev,
> -                                     struct send_queue *sq)
> +                                     struct virtnet_sq *sq)
>  {
>         bool use_napi = sq->napi.weight;
>         int qnum;
> @@ -571,7 +571,7 @@ static void check_sq_full_and_disable(struct virtnet_info *vi,
>                 netif_stop_subqueue(dev, qnum);
>                 if (use_napi) {
>                         if (unlikely(!virtqueue_enable_cb_delayed(sq->vq)))
> -                               virtqueue_napi_schedule(&sq->napi, sq->vq);
> +                               virtnet_vq_napi_schedule(&sq->napi, sq->vq);
>                 } else if (unlikely(!virtqueue_enable_cb_delayed(sq->vq))) {
>                         /* More just got used, free them then recheck. */
>                         free_old_xmit(sq, false);
> @@ -584,7 +584,7 @@ static void check_sq_full_and_disable(struct virtnet_info *vi,
>  }
>
>  static int __virtnet_xdp_xmit_one(struct virtnet_info *vi,
> -                                  struct send_queue *sq,
> +                                  struct virtnet_sq *sq,
>                                    struct xdp_frame *xdpf)
>  {
>         struct virtio_net_hdr_mrg_rxbuf *hdr;
> @@ -674,9 +674,9 @@ static int virtnet_xdp_xmit(struct net_device *dev,
>  {
>         struct virtnet_info *vi = netdev_priv(dev);
>         struct virtnet_sq_stats stats = {};
> -       struct receive_queue *rq = vi->rq;
> +       struct virtnet_rq *rq = vi->rq;
>         struct bpf_prog *xdp_prog;
> -       struct send_queue *sq;
> +       struct virtnet_sq *sq;
>         int nxmit = 0;
>         int kicks = 0;
>         int ret;
> @@ -697,7 +697,7 @@ static int virtnet_xdp_xmit(struct net_device *dev,
>         }
>
>         /* Free up any pending old buffers before queueing new ones. */
> -       __free_old_xmit(sq, false, &stats);
> +       virtnet_free_old_xmit(sq, false, &stats);
>
>         for (i = 0; i < n; i++) {
>                 struct xdp_frame *xdpf = frames[i];
> @@ -708,7 +708,7 @@ static int virtnet_xdp_xmit(struct net_device *dev,
>         }
>         ret = nxmit;
>
> -       if (!is_xdp_raw_buffer_queue(vi, sq - vi->sq))
> +       if (!virtnet_is_xdp_raw_buffer_queue(vi, sq - vi->sq))
>                 check_sq_full_and_disable(vi, dev, sq);
>
>         if (flags & XDP_XMIT_FLUSH) {
> @@ -816,7 +816,7 @@ static unsigned int virtnet_get_headroom(struct virtnet_info *vi)
>   * across multiple buffers (num_buf > 1), and we make sure buffers
>   * have enough headroom.
>   */
> -static struct page *xdp_linearize_page(struct receive_queue *rq,
> +static struct page *xdp_linearize_page(struct virtnet_rq *rq,
>                                        int *num_buf,
>                                        struct page *p,
>                                        int offset,
> @@ -897,7 +897,7 @@ static struct sk_buff *receive_small_build_skb(struct virtnet_info *vi,
>
>  static struct sk_buff *receive_small_xdp(struct net_device *dev,
>                                          struct virtnet_info *vi,
> -                                        struct receive_queue *rq,
> +                                        struct virtnet_rq *rq,
>                                          struct bpf_prog *xdp_prog,
>                                          void *buf,
>                                          unsigned int xdp_headroom,
> @@ -984,7 +984,7 @@ static struct sk_buff *receive_small_xdp(struct net_device *dev,
>
>  static struct sk_buff *receive_small(struct net_device *dev,
>                                      struct virtnet_info *vi,
> -                                    struct receive_queue *rq,
> +                                    struct virtnet_rq *rq,
>                                      void *buf, void *ctx,
>                                      unsigned int len,
>                                      unsigned int *xdp_xmit,
> @@ -1031,7 +1031,7 @@ static struct sk_buff *receive_small(struct net_device *dev,
>
>  static struct sk_buff *receive_big(struct net_device *dev,
>                                    struct virtnet_info *vi,
> -                                  struct receive_queue *rq,
> +                                  struct virtnet_rq *rq,
>                                    void *buf,
>                                    unsigned int len,
>                                    struct virtnet_rq_stats *stats)
> @@ -1052,7 +1052,7 @@ static struct sk_buff *receive_big(struct net_device *dev,
>         return NULL;
>  }
>
> -static void mergeable_buf_free(struct receive_queue *rq, int num_buf,
> +static void mergeable_buf_free(struct virtnet_rq *rq, int num_buf,
>                                struct net_device *dev,
>                                struct virtnet_rq_stats *stats)
>  {
> @@ -1126,7 +1126,7 @@ static struct sk_buff *build_skb_from_xdp_buff(struct net_device *dev,
>  /* TODO: build xdp in big mode */
>  static int virtnet_build_xdp_buff_mrg(struct net_device *dev,
>                                       struct virtnet_info *vi,
> -                                     struct receive_queue *rq,
> +                                     struct virtnet_rq *rq,
>                                       struct xdp_buff *xdp,
>                                       void *buf,
>                                       unsigned int len,
> @@ -1214,7 +1214,7 @@ static int virtnet_build_xdp_buff_mrg(struct net_device *dev,
>  }
>
>  static void *mergeable_xdp_get_buf(struct virtnet_info *vi,
> -                                  struct receive_queue *rq,
> +                                  struct virtnet_rq *rq,
>                                    struct bpf_prog *xdp_prog,
>                                    void *ctx,
>                                    unsigned int *frame_sz,
> @@ -1289,7 +1289,7 @@ static void *mergeable_xdp_get_buf(struct virtnet_info *vi,
>
>  static struct sk_buff *receive_mergeable_xdp(struct net_device *dev,
>                                              struct virtnet_info *vi,
> -                                            struct receive_queue *rq,
> +                                            struct virtnet_rq *rq,
>                                              struct bpf_prog *xdp_prog,
>                                              void *buf,
>                                              void *ctx,
> @@ -1349,7 +1349,7 @@ static struct sk_buff *receive_mergeable_xdp(struct net_device *dev,
>
>  static struct sk_buff *receive_mergeable(struct net_device *dev,
>                                          struct virtnet_info *vi,
> -                                        struct receive_queue *rq,
> +                                        struct virtnet_rq *rq,
>                                          void *buf,
>                                          void *ctx,
>                                          unsigned int len,
> @@ -1494,7 +1494,7 @@ static void virtio_skb_set_hash(const struct virtio_net_hdr_v1_hash *hdr_hash,
>         skb_set_hash(skb, __le32_to_cpu(hdr_hash->hash_value), rss_hash_type);
>  }
>
> -static void receive_buf(struct virtnet_info *vi, struct receive_queue *rq,
> +static void receive_buf(struct virtnet_info *vi, struct virtnet_rq *rq,
>                         void *buf, unsigned int len, void **ctx,
>                         unsigned int *xdp_xmit,
>                         struct virtnet_rq_stats *stats)
> @@ -1554,7 +1554,7 @@ static void receive_buf(struct virtnet_info *vi, struct receive_queue *rq,
>   * not need to use  mergeable_len_to_ctx here - it is enough
>   * to store the headroom as the context ignoring the truesize.
>   */
> -static int add_recvbuf_small(struct virtnet_info *vi, struct receive_queue *rq,
> +static int add_recvbuf_small(struct virtnet_info *vi, struct virtnet_rq *rq,
>                              gfp_t gfp)
>  {
>         char *buf;
> @@ -1583,7 +1583,7 @@ static int add_recvbuf_small(struct virtnet_info *vi, struct receive_queue *rq,
>         return err;
>  }
>
> -static int add_recvbuf_big(struct virtnet_info *vi, struct receive_queue *rq,
> +static int add_recvbuf_big(struct virtnet_info *vi, struct virtnet_rq *rq,
>                            gfp_t gfp)
>  {
>         struct page *first, *list = NULL;
> @@ -1632,7 +1632,7 @@ static int add_recvbuf_big(struct virtnet_info *vi, struct receive_queue *rq,
>         return err;
>  }
>
> -static unsigned int get_mergeable_buf_len(struct receive_queue *rq,
> +static unsigned int get_mergeable_buf_len(struct virtnet_rq *rq,
>                                           struct ewma_pkt_len *avg_pkt_len,
>                                           unsigned int room)
>  {
> @@ -1650,7 +1650,7 @@ static unsigned int get_mergeable_buf_len(struct receive_queue *rq,
>  }
>
>  static int add_recvbuf_mergeable(struct virtnet_info *vi,
> -                                struct receive_queue *rq, gfp_t gfp)
> +                                struct virtnet_rq *rq, gfp_t gfp)
>  {
>         struct page_frag *alloc_frag = &rq->alloc_frag;
>         unsigned int headroom = virtnet_get_headroom(vi);
> @@ -1705,7 +1705,7 @@ static int add_recvbuf_mergeable(struct virtnet_info *vi,
>   * before we're receiving packets, or from refill_work which is
>   * careful to disable receiving (using napi_disable).
>   */
> -static bool try_fill_recv(struct virtnet_info *vi, struct receive_queue *rq,
> +static bool try_fill_recv(struct virtnet_info *vi, struct virtnet_rq *rq,
>                           gfp_t gfp)
>  {
>         int err;
> @@ -1737,9 +1737,9 @@ static bool try_fill_recv(struct virtnet_info *vi, struct receive_queue *rq,
>  static void skb_recv_done(struct virtqueue *rvq)
>  {
>         struct virtnet_info *vi = rvq->vdev->priv;
> -       struct receive_queue *rq = &vi->rq[vq2rxq(rvq)];
> +       struct virtnet_rq *rq = &vi->rq[vq2rxq(rvq)];
>
> -       virtqueue_napi_schedule(&rq->napi, rvq);
> +       virtnet_vq_napi_schedule(&rq->napi, rvq);
>  }
>
>  static void virtnet_napi_enable(struct virtqueue *vq, struct napi_struct *napi)
> @@ -1751,7 +1751,7 @@ static void virtnet_napi_enable(struct virtqueue *vq, struct napi_struct *napi)
>          * Call local_bh_enable after to trigger softIRQ processing.
>          */
>         local_bh_disable();
> -       virtqueue_napi_schedule(napi, vq);
> +       virtnet_vq_napi_schedule(napi, vq);
>         local_bh_enable();
>  }
>
> @@ -1787,7 +1787,7 @@ static void refill_work(struct work_struct *work)
>         int i;
>
>         for (i = 0; i < vi->curr_queue_pairs; i++) {
> -               struct receive_queue *rq = &vi->rq[i];
> +               struct virtnet_rq *rq = &vi->rq[i];
>
>                 napi_disable(&rq->napi);
>                 still_empty = !try_fill_recv(vi, rq, GFP_KERNEL);
> @@ -1801,7 +1801,7 @@ static void refill_work(struct work_struct *work)
>         }
>  }
>
> -static int virtnet_receive(struct receive_queue *rq, int budget,
> +static int virtnet_receive(struct virtnet_rq *rq, int budget,
>                            unsigned int *xdp_xmit)
>  {
>         struct virtnet_info *vi = rq->vq->vdev->priv;
> @@ -1848,14 +1848,14 @@ static int virtnet_receive(struct receive_queue *rq, int budget,
>         return stats.packets;
>  }
>
> -static void virtnet_poll_cleantx(struct receive_queue *rq)
> +static void virtnet_poll_cleantx(struct virtnet_rq *rq)
>  {
>         struct virtnet_info *vi = rq->vq->vdev->priv;
>         unsigned int index = vq2rxq(rq->vq);
> -       struct send_queue *sq = &vi->sq[index];
> +       struct virtnet_sq *sq = &vi->sq[index];
>         struct netdev_queue *txq = netdev_get_tx_queue(vi->dev, index);
>
> -       if (!sq->napi.weight || is_xdp_raw_buffer_queue(vi, index))
> +       if (!sq->napi.weight || virtnet_is_xdp_raw_buffer_queue(vi, index))
>                 return;
>
>         if (__netif_tx_trylock(txq)) {
> @@ -1878,10 +1878,10 @@ static void virtnet_poll_cleantx(struct receive_queue *rq)
>
>  static int virtnet_poll(struct napi_struct *napi, int budget)
>  {
> -       struct receive_queue *rq =
> -               container_of(napi, struct receive_queue, napi);
> +       struct virtnet_rq *rq =
> +               container_of(napi, struct virtnet_rq, napi);
>         struct virtnet_info *vi = rq->vq->vdev->priv;
> -       struct send_queue *sq;
> +       struct virtnet_sq *sq;
>         unsigned int received;
>         unsigned int xdp_xmit = 0;
>
> @@ -1972,14 +1972,14 @@ static int virtnet_open(struct net_device *dev)
>
>  static int virtnet_poll_tx(struct napi_struct *napi, int budget)
>  {
> -       struct send_queue *sq = container_of(napi, struct send_queue, napi);
> +       struct virtnet_sq *sq = container_of(napi, struct virtnet_sq, napi);
>         struct virtnet_info *vi = sq->vq->vdev->priv;
>         unsigned int index = vq2txq(sq->vq);
>         struct netdev_queue *txq;
>         int opaque;
>         bool done;
>
> -       if (unlikely(is_xdp_raw_buffer_queue(vi, index))) {
> +       if (unlikely(virtnet_is_xdp_raw_buffer_queue(vi, index))) {
>                 /* We don't need to enable cb for XDP */
>                 napi_complete_done(napi, 0);
>                 return 0;
> @@ -2016,7 +2016,7 @@ static int virtnet_poll_tx(struct napi_struct *napi, int budget)
>         return 0;
>  }
>
> -static int xmit_skb(struct send_queue *sq, struct sk_buff *skb)
> +static int xmit_skb(struct virtnet_sq *sq, struct sk_buff *skb)
>  {
>         struct virtio_net_hdr_mrg_rxbuf *hdr;
>         const unsigned char *dest = ((struct ethhdr *)skb->data)->h_dest;
> @@ -2067,7 +2067,7 @@ static netdev_tx_t start_xmit(struct sk_buff *skb, struct net_device *dev)
>  {
>         struct virtnet_info *vi = netdev_priv(dev);
>         int qnum = skb_get_queue_mapping(skb);
> -       struct send_queue *sq = &vi->sq[qnum];
> +       struct virtnet_sq *sq = &vi->sq[qnum];
>         int err;
>         struct netdev_queue *txq = netdev_get_tx_queue(dev, qnum);
>         bool kick = !netdev_xmit_more();
> @@ -2121,7 +2121,7 @@ static netdev_tx_t start_xmit(struct sk_buff *skb, struct net_device *dev)
>  }
>
>  static int virtnet_rx_resize(struct virtnet_info *vi,
> -                            struct receive_queue *rq, u32 ring_num)
> +                            struct virtnet_rq *rq, u32 ring_num)
>  {
>         bool running = netif_running(vi->dev);
>         int err, qindex;
> @@ -2144,7 +2144,7 @@ static int virtnet_rx_resize(struct virtnet_info *vi,
>  }
>
>  static int virtnet_tx_resize(struct virtnet_info *vi,
> -                            struct send_queue *sq, u32 ring_num)
> +                            struct virtnet_sq *sq, u32 ring_num)
>  {
>         bool running = netif_running(vi->dev);
>         struct netdev_queue *txq;
> @@ -2290,8 +2290,8 @@ static void virtnet_stats(struct net_device *dev,
>
>         for (i = 0; i < vi->max_queue_pairs; i++) {
>                 u64 tpackets, tbytes, terrors, rpackets, rbytes, rdrops;
> -               struct receive_queue *rq = &vi->rq[i];
> -               struct send_queue *sq = &vi->sq[i];
> +               struct virtnet_rq *rq = &vi->rq[i];
> +               struct virtnet_sq *sq = &vi->sq[i];
>
>                 do {
>                         start = u64_stats_fetch_begin(&sq->stats.syncp);
> @@ -2604,8 +2604,8 @@ static int virtnet_set_ringparam(struct net_device *dev,
>  {
>         struct virtnet_info *vi = netdev_priv(dev);
>         u32 rx_pending, tx_pending;
> -       struct receive_queue *rq;
> -       struct send_queue *sq;
> +       struct virtnet_rq *rq;
> +       struct virtnet_sq *sq;
>         int i, err;
>
>         if (ring->rx_mini_pending || ring->rx_jumbo_pending)
> @@ -2909,7 +2909,7 @@ static void virtnet_get_ethtool_stats(struct net_device *dev,
>         size_t offset;
>
>         for (i = 0; i < vi->curr_queue_pairs; i++) {
> -               struct receive_queue *rq = &vi->rq[i];
> +               struct virtnet_rq *rq = &vi->rq[i];
>
>                 stats_base = (u8 *)&rq->stats;
>                 do {
> @@ -2923,7 +2923,7 @@ static void virtnet_get_ethtool_stats(struct net_device *dev,
>         }
>
>         for (i = 0; i < vi->curr_queue_pairs; i++) {
> -               struct send_queue *sq = &vi->sq[i];
> +               struct virtnet_sq *sq = &vi->sq[i];
>
>                 stats_base = (u8 *)&sq->stats;
>                 do {
> @@ -3604,7 +3604,7 @@ static int virtnet_set_features(struct net_device *dev,
>  static void virtnet_tx_timeout(struct net_device *dev, unsigned int txqueue)
>  {
>         struct virtnet_info *priv = netdev_priv(dev);
> -       struct send_queue *sq = &priv->sq[txqueue];
> +       struct virtnet_sq *sq = &priv->sq[txqueue];
>         struct netdev_queue *txq = netdev_get_tx_queue(dev, txqueue);
>
>         u64_stats_update_begin(&sq->stats.syncp);
> @@ -3729,10 +3729,10 @@ static void free_receive_page_frags(struct virtnet_info *vi)
>
>  static void virtnet_sq_free_unused_buf(struct virtqueue *vq, void *buf)
>  {
> -       if (!is_xdp_frame(buf))
> +       if (!virtnet_is_xdp_frame(buf))
>                 dev_kfree_skb(buf);
>         else
> -               xdp_return_frame(ptr_to_xdp(buf));
> +               xdp_return_frame(virtnet_ptr_to_xdp(buf));
>  }
>
>  static void virtnet_rq_free_unused_buf(struct virtqueue *vq, void *buf)
> @@ -3761,7 +3761,7 @@ static void free_unused_bufs(struct virtnet_info *vi)
>         }
>
>         for (i = 0; i < vi->max_queue_pairs; i++) {
> -               struct receive_queue *rq = &vi->rq[i];
> +               struct virtnet_rq *rq = &vi->rq[i];
>
>                 while ((buf = virtnet_rq_detach_unused_buf(rq)) != NULL)
>                         virtnet_rq_free_unused_buf(rq->vq, buf);
> diff --git a/drivers/net/virtio/virtio_net.h b/drivers/net/virtio/virtio_net.h
> index ddaf0ecf4d9d..282504d6639a 100644
> --- a/drivers/net/virtio/virtio_net.h
> +++ b/drivers/net/virtio/virtio_net.h
> @@ -59,8 +59,8 @@ struct virtnet_rq_dma {
>  };
>
>  /* Internal representation of a send virtqueue */
> -struct send_queue {
> -       /* Virtqueue associated with this send _queue */
> +struct virtnet_sq {
> +       /* Virtqueue associated with this virtnet_sq */
>         struct virtqueue *vq;
>
>         /* TX: fragments + linear part + virtio header */
> @@ -80,8 +80,8 @@ struct send_queue {
>  };
>
>  /* Internal representation of a receive virtqueue */
> -struct receive_queue {
> -       /* Virtqueue associated with this receive_queue */
> +struct virtnet_rq {
> +       /* Virtqueue associated with this virtnet_rq */
>         struct virtqueue *vq;
>
>         struct napi_struct napi;
> @@ -123,8 +123,8 @@ struct virtnet_info {
>         struct virtio_device *vdev;
>         struct virtqueue *cvq;
>         struct net_device *dev;
> -       struct send_queue *sq;
> -       struct receive_queue *rq;
> +       struct virtnet_sq *sq;
> +       struct virtnet_rq *rq;
>         unsigned int status;
>
>         /* Max # of queue pairs supported by the device */
> @@ -201,24 +201,24 @@ struct virtnet_info {
>         struct failover *failover;
>  };
>
> -static inline bool is_xdp_frame(void *ptr)
> +static inline bool virtnet_is_xdp_frame(void *ptr)
>  {
>         return (unsigned long)ptr & VIRTIO_XDP_FLAG;
>  }
>
> -static inline struct xdp_frame *ptr_to_xdp(void *ptr)
> +static inline struct xdp_frame *virtnet_ptr_to_xdp(void *ptr)
>  {
>         return (struct xdp_frame *)((unsigned long)ptr & ~VIRTIO_XDP_FLAG);
>  }
>
> -static inline void __free_old_xmit(struct send_queue *sq, bool in_napi,
> -                                  struct virtnet_sq_stats *stats)
> +static inline void virtnet_free_old_xmit(struct virtnet_sq *sq, bool in_napi,
> +                                        struct virtnet_sq_stats *stats)
>  {
>         unsigned int len;
>         void *ptr;
>
>         while ((ptr = virtqueue_get_buf(sq->vq, &len)) != NULL) {
> -               if (!is_xdp_frame(ptr)) {
> +               if (!virtnet_is_xdp_frame(ptr)) {
>                         struct sk_buff *skb = ptr;
>
>                         pr_debug("Sent skb %p\n", skb);
> @@ -226,7 +226,7 @@ static inline void __free_old_xmit(struct send_queue *sq, bool in_napi,
>                         stats->bytes += skb->len;
>                         napi_consume_skb(skb, in_napi);
>                 } else {
> -                       struct xdp_frame *frame = ptr_to_xdp(ptr);
> +                       struct xdp_frame *frame = virtnet_ptr_to_xdp(ptr);
>
>                         stats->bytes += xdp_get_frame_len(frame);
>                         xdp_return_frame(frame);
> @@ -235,8 +235,8 @@ static inline void __free_old_xmit(struct send_queue *sq, bool in_napi,
>         }
>  }
>
> -static inline void virtqueue_napi_schedule(struct napi_struct *napi,
> -                                          struct virtqueue *vq)
> +static inline void virtnet_vq_napi_schedule(struct napi_struct *napi,
> +                                           struct virtqueue *vq)
>  {
>         if (napi_schedule_prep(napi)) {
>                 virtqueue_disable_cb(vq);
> @@ -244,7 +244,7 @@ static inline void virtqueue_napi_schedule(struct napi_struct *napi,
>         }
>  }
>
> -static inline bool is_xdp_raw_buffer_queue(struct virtnet_info *vi, int q)
> +static inline bool virtnet_is_xdp_raw_buffer_queue(struct virtnet_info *vi, int q)
>  {
>         if (q < (vi->curr_queue_pairs - vi->xdp_queue_pairs))
>                 return false;
> --
> 2.32.0.3.g01195cf9f
>


^ permalink raw reply	[flat|nested] 66+ messages in thread

* Re: [PATCH net-next v1 06/19] virtio_net: separate virtnet_rx_resize()
  2023-10-16 12:00 ` [PATCH net-next v1 06/19] virtio_net: separate virtnet_rx_resize() Xuan Zhuo
@ 2023-10-19  6:17   ` Jason Wang
  0 siblings, 0 replies; 66+ messages in thread
From: Jason Wang @ 2023-10-19  6:17 UTC (permalink / raw)
  To: Xuan Zhuo
  Cc: netdev, David S. Miller, Eric Dumazet, Jakub Kicinski,
	Paolo Abeni, Michael S. Tsirkin, Alexei Starovoitov,
	Daniel Borkmann, Jesper Dangaard Brouer, John Fastabend,
	virtualization, bpf

On Mon, Oct 16, 2023 at 8:00 PM Xuan Zhuo <xuanzhuo@linux.alibaba.com> wrote:
>
> This patch separates two sub-functions from virtnet_rx_resize():
>
> * virtnet_rx_pause
> * virtnet_rx_resume
>
> Then the subsequent xsk RX reset can share these two functions.
>
> Signed-off-by: Xuan Zhuo <xuanzhuo@linux.alibaba.com>

Acked-by: Jason Wang <jasowang@redhat.com>

Thanks

> ---
>  drivers/net/virtio/main.c       | 29 +++++++++++++++++++++--------
>  drivers/net/virtio/virtio_net.h |  3 +++
>  2 files changed, 24 insertions(+), 8 deletions(-)
>
> diff --git a/drivers/net/virtio/main.c b/drivers/net/virtio/main.c
> index ba38b6078e1d..e6b262341619 100644
> --- a/drivers/net/virtio/main.c
> +++ b/drivers/net/virtio/main.c
> @@ -2120,26 +2120,39 @@ static netdev_tx_t start_xmit(struct sk_buff *skb, struct net_device *dev)
>         return NETDEV_TX_OK;
>  }
>
> -static int virtnet_rx_resize(struct virtnet_info *vi,
> -                            struct virtnet_rq *rq, u32 ring_num)
> +void virtnet_rx_pause(struct virtnet_info *vi, struct virtnet_rq *rq)
>  {
>         bool running = netif_running(vi->dev);
> -       int err, qindex;
> -
> -       qindex = rq - vi->rq;
>
>         if (running)
>                 napi_disable(&rq->napi);
> +}
>
> -       err = virtqueue_resize(rq->vq, ring_num, virtnet_rq_free_unused_buf);
> -       if (err)
> -               netdev_err(vi->dev, "resize rx fail: rx queue index: %d err: %d\n", qindex, err);
> +void virtnet_rx_resume(struct virtnet_info *vi, struct virtnet_rq *rq)
> +{
> +       bool running = netif_running(vi->dev);
>
>         if (!try_fill_recv(vi, rq, GFP_KERNEL))
>                 schedule_delayed_work(&vi->refill, 0);
>
>         if (running)
>                 virtnet_napi_enable(rq->vq, &rq->napi);
> +}
> +
> +static int virtnet_rx_resize(struct virtnet_info *vi,
> +                            struct virtnet_rq *rq, u32 ring_num)
> +{
> +       int err, qindex;
> +
> +       qindex = rq - vi->rq;
> +
> +       virtnet_rx_pause(vi, rq);
> +
> +       err = virtqueue_resize(rq->vq, ring_num, virtnet_rq_free_unused_buf);
> +       if (err)
> +               netdev_err(vi->dev, "resize rx fail: rx queue index: %d err: %d\n", qindex, err);
> +
> +       virtnet_rx_resume(vi, rq);
>         return err;
>  }
>
> diff --git a/drivers/net/virtio/virtio_net.h b/drivers/net/virtio/virtio_net.h
> index 282504d6639a..70eea23adba6 100644
> --- a/drivers/net/virtio/virtio_net.h
> +++ b/drivers/net/virtio/virtio_net.h
> @@ -253,4 +253,7 @@ static inline bool virtnet_is_xdp_raw_buffer_queue(struct virtnet_info *vi, int
>         else
>                 return false;
>  }
> +
> +void virtnet_rx_pause(struct virtnet_info *vi, struct virtnet_rq *rq);
> +void virtnet_rx_resume(struct virtnet_info *vi, struct virtnet_rq *rq);
>  #endif
> --
> 2.32.0.3.g01195cf9f
>


^ permalink raw reply	[flat|nested] 66+ messages in thread

* Re: [PATCH net-next v1 07/19] virtio_net: separate virtnet_tx_resize()
  2023-10-16 12:00 ` [PATCH net-next v1 07/19] virtio_net: separate virtnet_tx_resize() Xuan Zhuo
@ 2023-10-19  6:18   ` Jason Wang
  0 siblings, 0 replies; 66+ messages in thread
From: Jason Wang @ 2023-10-19  6:18 UTC (permalink / raw)
  To: Xuan Zhuo
  Cc: netdev, David S. Miller, Eric Dumazet, Jakub Kicinski,
	Paolo Abeni, Michael S. Tsirkin, Alexei Starovoitov,
	Daniel Borkmann, Jesper Dangaard Brouer, John Fastabend,
	virtualization, bpf

On Mon, Oct 16, 2023 at 8:00 PM Xuan Zhuo <xuanzhuo@linux.alibaba.com> wrote:
>
> This patch separates two sub-functions from virtnet_tx_resize():
>
> * virtnet_tx_pause
> * virtnet_tx_resume
>
> Then the subsequent virtnet_tx_reset() can share these two functions.
>
> Signed-off-by: Xuan Zhuo <xuanzhuo@linux.alibaba.com>

Acked-by: Jason Wang <jasowang@redhat.com>

Thanks

> ---
>  drivers/net/virtio/main.c       | 35 +++++++++++++++++++++++++++------
>  drivers/net/virtio/virtio_net.h |  2 ++
>  2 files changed, 31 insertions(+), 6 deletions(-)
>
> diff --git a/drivers/net/virtio/main.c b/drivers/net/virtio/main.c
> index e6b262341619..8da84ea9bcbe 100644
> --- a/drivers/net/virtio/main.c
> +++ b/drivers/net/virtio/main.c
> @@ -2156,12 +2156,11 @@ static int virtnet_rx_resize(struct virtnet_info *vi,
>         return err;
>  }
>
> -static int virtnet_tx_resize(struct virtnet_info *vi,
> -                            struct virtnet_sq *sq, u32 ring_num)
> +void virtnet_tx_pause(struct virtnet_info *vi, struct virtnet_sq *sq)
>  {
>         bool running = netif_running(vi->dev);
>         struct netdev_queue *txq;
> -       int err, qindex;
> +       int qindex;
>
>         qindex = sq - vi->sq;
>
> @@ -2182,10 +2181,17 @@ static int virtnet_tx_resize(struct virtnet_info *vi,
>         netif_stop_subqueue(vi->dev, qindex);
>
>         __netif_tx_unlock_bh(txq);
> +}
>
> -       err = virtqueue_resize(sq->vq, ring_num, virtnet_sq_free_unused_buf);
> -       if (err)
> -               netdev_err(vi->dev, "resize tx fail: tx queue index: %d err: %d\n", qindex, err);
> +void virtnet_tx_resume(struct virtnet_info *vi, struct virtnet_sq *sq)
> +{
> +       bool running = netif_running(vi->dev);
> +       struct netdev_queue *txq;
> +       int qindex;
> +
> +       qindex = sq - vi->sq;
> +
> +       txq = netdev_get_tx_queue(vi->dev, qindex);
>
>         __netif_tx_lock_bh(txq);
>         sq->reset = false;
> @@ -2194,6 +2200,23 @@ static int virtnet_tx_resize(struct virtnet_info *vi,
>
>         if (running)
>                 virtnet_napi_tx_enable(vi, sq->vq, &sq->napi);
> +}
> +
> +static int virtnet_tx_resize(struct virtnet_info *vi, struct virtnet_sq *sq,
> +                            u32 ring_num)
> +{
> +       int qindex, err;
> +
> +       qindex = sq - vi->sq;
> +
> +       virtnet_tx_pause(vi, sq);
> +
> +       err = virtqueue_resize(sq->vq, ring_num, virtnet_sq_free_unused_buf);
> +       if (err)
> +               netdev_err(vi->dev, "resize tx fail: tx queue index: %d err: %d\n", qindex, err);
> +
> +       virtnet_tx_resume(vi, sq);
> +
>         return err;
>  }
>
> diff --git a/drivers/net/virtio/virtio_net.h b/drivers/net/virtio/virtio_net.h
> index 70eea23adba6..2f930af35364 100644
> --- a/drivers/net/virtio/virtio_net.h
> +++ b/drivers/net/virtio/virtio_net.h
> @@ -256,4 +256,6 @@ static inline bool virtnet_is_xdp_raw_buffer_queue(struct virtnet_info *vi, int
>
>  void virtnet_rx_pause(struct virtnet_info *vi, struct virtnet_rq *rq);
>  void virtnet_rx_resume(struct virtnet_info *vi, struct virtnet_rq *rq);
> +void virtnet_tx_pause(struct virtnet_info *vi, struct virtnet_sq *sq);
> +void virtnet_tx_resume(struct virtnet_info *vi, struct virtnet_sq *sq);
>  #endif
> --
> 2.32.0.3.g01195cf9f
>


^ permalink raw reply	[flat|nested] 66+ messages in thread

* Re: [PATCH net-next v1 05/19] virtio_net: add prefix virtnet to all struct/api inside virtio_net.h
  2023-10-19  6:14   ` Jason Wang
@ 2023-10-19  6:36     ` Michael S. Tsirkin
  0 siblings, 0 replies; 66+ messages in thread
From: Michael S. Tsirkin @ 2023-10-19  6:36 UTC (permalink / raw)
  To: Jason Wang
  Cc: Xuan Zhuo, netdev, David S. Miller, Eric Dumazet, Jakub Kicinski,
	Paolo Abeni, Alexei Starovoitov, Daniel Borkmann,
	Jesper Dangaard Brouer, John Fastabend, virtualization, bpf

On Thu, Oct 19, 2023 at 02:14:27PM +0800, Jason Wang wrote:
> On Mon, Oct 16, 2023 at 8:01 PM Xuan Zhuo <xuanzhuo@linux.alibaba.com> wrote:
> >
> > We moved some structures and APIs to the header file, but these
> > structures and APIs are not prefixed with virtnet. This patch adds
> > the virtnet prefix to them.
> 
What's the benefit of doing this? AFAIK virtio-net is the only user
of virtio-net.h?
> 
Thanks

If the split takes place, I, for one, would be happy if there were some
way to tell where to look for a given structure/API just from the name.

-- 
MST


^ permalink raw reply	[flat|nested] 66+ messages in thread

* Re: [PATCH net-next v1 13/19] virtio_net: xsk: tx: virtnet_free_old_xmit() distinguishes xsk buffer
  2023-10-17  2:02     ` Xuan Zhuo
@ 2023-10-19  6:38       ` Michael S. Tsirkin
  2023-10-19  7:13         ` Xuan Zhuo
  0 siblings, 1 reply; 66+ messages in thread
From: Michael S. Tsirkin @ 2023-10-19  6:38 UTC (permalink / raw)
  To: Xuan Zhuo
  Cc: Jakub Kicinski, netdev, David S. Miller, Eric Dumazet,
	Paolo Abeni, Jason Wang, Alexei Starovoitov, Daniel Borkmann,
	Jesper Dangaard Brouer, John Fastabend, virtualization, bpf

On Tue, Oct 17, 2023 at 10:02:05AM +0800, Xuan Zhuo wrote:
> On Mon, 16 Oct 2023 16:44:34 -0700, Jakub Kicinski <kuba@kernel.org> wrote:
> > On Mon, 16 Oct 2023 20:00:27 +0800 Xuan Zhuo wrote:
> > > @@ -305,9 +311,15 @@ static inline void virtnet_free_old_xmit(struct virtnet_sq *sq, bool in_napi,
> > >
> > >  			stats->bytes += xdp_get_frame_len(frame);
> > >  			xdp_return_frame(frame);
> > > +		} else {
> > > +			stats->bytes += virtnet_ptr_to_xsk(ptr);
> > > +			++xsknum;
> > >  		}
> > >  		stats->packets++;
> > >  	}
> > > +
> > > +	if (xsknum)
> > > +		xsk_tx_completed(sq->xsk.pool, xsknum);
> > >  }
> >
> > sparse complains:
> >
> > drivers/net/virtio/virtio_net.h:322:41: warning: incorrect type in argument 1 (different address spaces)
> > drivers/net/virtio/virtio_net.h:322:41:    expected struct xsk_buff_pool *pool
> > drivers/net/virtio/virtio_net.h:322:41:    got struct xsk_buff_pool
> > [noderef] __rcu *pool
> >
> > please build test with W=1 C=1
> 
> > OK. I will add C=1 to my script.
> 
> Thanks.

And I hope we all understand that RCU has to be used properly; it's not
just about casting the warning away.

-- 
MST


^ permalink raw reply	[flat|nested] 66+ messages in thread

* Re: [PATCH net-next v1 13/19] virtio_net: xsk: tx: virtnet_free_old_xmit() distinguishes xsk buffer
  2023-10-19  6:38       ` Michael S. Tsirkin
@ 2023-10-19  7:13         ` Xuan Zhuo
  2023-10-19  8:42           ` Michael S. Tsirkin
  0 siblings, 1 reply; 66+ messages in thread
From: Xuan Zhuo @ 2023-10-19  7:13 UTC (permalink / raw)
  To: Michael S. Tsirkin
  Cc: Jakub Kicinski, netdev, David S. Miller, Eric Dumazet,
	Paolo Abeni, Jason Wang, Alexei Starovoitov, Daniel Borkmann,
	Jesper Dangaard Brouer, John Fastabend, virtualization, bpf

On Thu, 19 Oct 2023 02:38:16 -0400, "Michael S. Tsirkin" <mst@redhat.com> wrote:
> On Tue, Oct 17, 2023 at 10:02:05AM +0800, Xuan Zhuo wrote:
> > On Mon, 16 Oct 2023 16:44:34 -0700, Jakub Kicinski <kuba@kernel.org> wrote:
> > > On Mon, 16 Oct 2023 20:00:27 +0800 Xuan Zhuo wrote:
> > > > @@ -305,9 +311,15 @@ static inline void virtnet_free_old_xmit(struct virtnet_sq *sq, bool in_napi,
> > > >
> > > >  			stats->bytes += xdp_get_frame_len(frame);
> > > >  			xdp_return_frame(frame);
> > > > +		} else {
> > > > +			stats->bytes += virtnet_ptr_to_xsk(ptr);
> > > > +			++xsknum;
> > > >  		}
> > > >  		stats->packets++;
> > > >  	}
> > > > +
> > > > +	if (xsknum)
> > > > +		xsk_tx_completed(sq->xsk.pool, xsknum);
> > > >  }
> > >
> > > sparse complains:
> > >
> > > drivers/net/virtio/virtio_net.h:322:41: warning: incorrect type in argument 1 (different address spaces)
> > > drivers/net/virtio/virtio_net.h:322:41:    expected struct xsk_buff_pool *pool
> > > drivers/net/virtio/virtio_net.h:322:41:    got struct xsk_buff_pool
> > > [noderef] __rcu *pool
> > >
> > > please build test with W=1 C=1
> >
> > OK. I will add C=1 to my script.
> >
> > Thanks.
>
> And I hope we all understand that RCU has to be used properly; it's not
> just about casting the warning away.


Yes. I see. I will use rcu_dereference() and rcu_read_xxx().

Thanks.

>
> --
> MST
>

^ permalink raw reply	[flat|nested] 66+ messages in thread

* Re: [PATCH net-next v1 04/19] virtio_net: move to virtio_net.h
  2023-10-19  6:12   ` Jason Wang
@ 2023-10-19  7:16     ` Xuan Zhuo
  2023-10-20  6:59       ` Jason Wang
  0 siblings, 1 reply; 66+ messages in thread
From: Xuan Zhuo @ 2023-10-19  7:16 UTC (permalink / raw)
  To: Jason Wang
  Cc: netdev, David S. Miller, Eric Dumazet, Jakub Kicinski,
	Paolo Abeni, Michael S. Tsirkin, Alexei Starovoitov,
	Daniel Borkmann, Jesper Dangaard Brouer, John Fastabend,
	virtualization, bpf

On Thu, 19 Oct 2023 14:12:55 +0800, Jason Wang <jasowang@redhat.com> wrote:
> On Mon, Oct 16, 2023 at 8:00 PM Xuan Zhuo <xuanzhuo@linux.alibaba.com> wrote:
> >
> > Move some structure definitions and inline functions into the
> > virtio_net.h file.
>
> Some of the functions were not inline before the move. I'm not sure
> what the criteria are for choosing which functions to move.


Those will be used by xsk.c or by other functions in the header in
subsequent commits.

If that is confusing, I can instead move each function only when it is
needed. This commit just moves some important structures.

Thanks.

>
>
> >
> > Signed-off-by: Xuan Zhuo <xuanzhuo@linux.alibaba.com>
> > ---
> >  drivers/net/virtio/main.c       | 252 +------------------------------
> >  drivers/net/virtio/virtio_net.h | 256 ++++++++++++++++++++++++++++++++
> >  2 files changed, 258 insertions(+), 250 deletions(-)
> >  create mode 100644 drivers/net/virtio/virtio_net.h
> >
> > diff --git a/drivers/net/virtio/main.c b/drivers/net/virtio/main.c
> > index 6cf77b6acdab..d8b6c0d86f29 100644
> > --- a/drivers/net/virtio/main.c
> > +++ b/drivers/net/virtio/main.c
> > @@ -6,7 +6,6 @@
> >  //#define DEBUG
> >  #include <linux/netdevice.h>
> >  #include <linux/etherdevice.h>
> > -#include <linux/ethtool.h>
> >  #include <linux/module.h>
> >  #include <linux/virtio.h>
> >  #include <linux/virtio_net.h>
> > @@ -16,7 +15,6 @@
> >  #include <linux/if_vlan.h>
> >  #include <linux/slab.h>
> >  #include <linux/cpu.h>
> > -#include <linux/average.h>
> >  #include <linux/filter.h>
> >  #include <linux/kernel.h>
> >  #include <net/route.h>
> > @@ -24,6 +22,8 @@
> >  #include <net/net_failover.h>
> >  #include <net/netdev_rx_queue.h>
> >
> > +#include "virtio_net.h"
> > +
> >  static int napi_weight = NAPI_POLL_WEIGHT;
> >  module_param(napi_weight, int, 0444);
> >
> > @@ -45,15 +45,6 @@ module_param(napi_tx, bool, 0644);
> >  #define VIRTIO_XDP_TX          BIT(0)
> >  #define VIRTIO_XDP_REDIR       BIT(1)
> >
> > -#define VIRTIO_XDP_FLAG        BIT(0)
> > -
> > -/* RX packet size EWMA. The average packet size is used to determine the packet
> > - * buffer size when refilling RX rings. As the entire RX ring may be refilled
> > - * at once, the weight is chosen so that the EWMA will be insensitive to short-
> > - * term, transient changes in packet size.
> > - */
> > -DECLARE_EWMA(pkt_len, 0, 64)
> > -
> >  #define VIRTNET_DRIVER_VERSION "1.0.0"
> >
> >  static const unsigned long guest_offloads[] = {
> > @@ -74,36 +65,6 @@ static const unsigned long guest_offloads[] = {
> >                                 (1ULL << VIRTIO_NET_F_GUEST_USO4) | \
> >                                 (1ULL << VIRTIO_NET_F_GUEST_USO6))
> >
> > -struct virtnet_stat_desc {
> > -       char desc[ETH_GSTRING_LEN];
> > -       size_t offset;
> > -};
> > -
> > -struct virtnet_sq_stats {
> > -       struct u64_stats_sync syncp;
> > -       u64 packets;
> > -       u64 bytes;
> > -       u64 xdp_tx;
> > -       u64 xdp_tx_drops;
> > -       u64 kicks;
> > -       u64 tx_timeouts;
> > -};
> > -
> > -struct virtnet_rq_stats {
> > -       struct u64_stats_sync syncp;
> > -       u64 packets;
> > -       u64 bytes;
> > -       u64 drops;
> > -       u64 xdp_packets;
> > -       u64 xdp_tx;
> > -       u64 xdp_redirects;
> > -       u64 xdp_drops;
> > -       u64 kicks;
> > -};
> > -
> > -#define VIRTNET_SQ_STAT(m)     offsetof(struct virtnet_sq_stats, m)
> > -#define VIRTNET_RQ_STAT(m)     offsetof(struct virtnet_rq_stats, m)
> > -
> >  static const struct virtnet_stat_desc virtnet_sq_stats_desc[] = {
> >         { "packets",            VIRTNET_SQ_STAT(packets) },
> >         { "bytes",              VIRTNET_SQ_STAT(bytes) },
> > @@ -127,80 +88,6 @@ static const struct virtnet_stat_desc virtnet_rq_stats_desc[] = {
> >  #define VIRTNET_SQ_STATS_LEN   ARRAY_SIZE(virtnet_sq_stats_desc)
> >  #define VIRTNET_RQ_STATS_LEN   ARRAY_SIZE(virtnet_rq_stats_desc)
> >
> > -struct virtnet_interrupt_coalesce {
> > -       u32 max_packets;
> > -       u32 max_usecs;
> > -};
> > -
> > -/* The dma information of pages allocated at a time. */
> > -struct virtnet_rq_dma {
> > -       dma_addr_t addr;
> > -       u32 ref;
> > -       u16 len;
> > -       u16 need_sync;
> > -};
> > -
> > -/* Internal representation of a send virtqueue */
> > -struct send_queue {
> > -       /* Virtqueue associated with this send _queue */
> > -       struct virtqueue *vq;
> > -
> > -       /* TX: fragments + linear part + virtio header */
> > -       struct scatterlist sg[MAX_SKB_FRAGS + 2];
> > -
> > -       /* Name of the send queue: output.$index */
> > -       char name[16];
> > -
> > -       struct virtnet_sq_stats stats;
> > -
> > -       struct virtnet_interrupt_coalesce intr_coal;
> > -
> > -       struct napi_struct napi;
> > -
> > -       /* Record whether sq is in reset state. */
> > -       bool reset;
> > -};
> > -
> > -/* Internal representation of a receive virtqueue */
> > -struct receive_queue {
> > -       /* Virtqueue associated with this receive_queue */
> > -       struct virtqueue *vq;
> > -
> > -       struct napi_struct napi;
> > -
> > -       struct bpf_prog __rcu *xdp_prog;
> > -
> > -       struct virtnet_rq_stats stats;
> > -
> > -       struct virtnet_interrupt_coalesce intr_coal;
> > -
> > -       /* Chain pages by the private ptr. */
> > -       struct page *pages;
> > -
> > -       /* Average packet length for mergeable receive buffers. */
> > -       struct ewma_pkt_len mrg_avg_pkt_len;
> > -
> > -       /* Page frag for packet buffer allocation. */
> > -       struct page_frag alloc_frag;
> > -
> > -       /* RX: fragments + linear part + virtio header */
> > -       struct scatterlist sg[MAX_SKB_FRAGS + 2];
> > -
> > -       /* Min single buffer size for mergeable buffers case. */
> > -       unsigned int min_buf_len;
> > -
> > -       /* Name of this receive queue: input.$index */
> > -       char name[16];
> > -
> > -       struct xdp_rxq_info xdp_rxq;
> > -
> > -       /* Record the last dma info to free after new pages is allocated. */
> > -       struct virtnet_rq_dma *last_dma;
> > -
> > -       /* Do dma by self */
> > -       bool do_dma;
> > -};
> > -
> >  /* This structure can contain rss message with maximum settings for indirection table and keysize
> >   * Note, that default structure that describes RSS configuration virtio_net_rss_config
> >   * contains same info but can't handle table values.
> > @@ -234,88 +121,6 @@ struct control_buf {
> >         struct virtio_net_ctrl_coal_vq coal_vq;
> >  };
> >
> > -struct virtnet_info {
> > -       struct virtio_device *vdev;
> > -       struct virtqueue *cvq;
> > -       struct net_device *dev;
> > -       struct send_queue *sq;
> > -       struct receive_queue *rq;
> > -       unsigned int status;
> > -
> > -       /* Max # of queue pairs supported by the device */
> > -       u16 max_queue_pairs;
> > -
> > -       /* # of queue pairs currently used by the driver */
> > -       u16 curr_queue_pairs;
> > -
> > -       /* # of XDP queue pairs currently used by the driver */
> > -       u16 xdp_queue_pairs;
> > -
> > -       /* xdp_queue_pairs may be 0, when xdp is already loaded. So add this. */
> > -       bool xdp_enabled;
> > -
> > -       /* I like... big packets and I cannot lie! */
> > -       bool big_packets;
> > -
> > -       /* number of sg entries allocated for big packets */
> > -       unsigned int big_packets_num_skbfrags;
> > -
> > -       /* Host will merge rx buffers for big packets (shake it! shake it!) */
> > -       bool mergeable_rx_bufs;
> > -
> > -       /* Host supports rss and/or hash report */
> > -       bool has_rss;
> > -       bool has_rss_hash_report;
> > -       u8 rss_key_size;
> > -       u16 rss_indir_table_size;
> > -       u32 rss_hash_types_supported;
> > -       u32 rss_hash_types_saved;
> > -
> > -       /* Has control virtqueue */
> > -       bool has_cvq;
> > -
> > -       /* Host can handle any s/g split between our header and packet data */
> > -       bool any_header_sg;
> > -
> > -       /* Packet virtio header size */
> > -       u8 hdr_len;
> > -
> > -       /* Work struct for delayed refilling if we run low on memory. */
> > -       struct delayed_work refill;
> > -
> > -       /* Is delayed refill enabled? */
> > -       bool refill_enabled;
> > -
> > -       /* The lock to synchronize the access to refill_enabled */
> > -       spinlock_t refill_lock;
> > -
> > -       /* Work struct for config space updates */
> > -       struct work_struct config_work;
> > -
> > -       /* Does the affinity hint is set for virtqueues? */
> > -       bool affinity_hint_set;
> > -
> > -       /* CPU hotplug instances for online & dead */
> > -       struct hlist_node node;
> > -       struct hlist_node node_dead;
> > -
> > -       struct control_buf *ctrl;
> > -
> > -       /* Ethtool settings */
> > -       u8 duplex;
> > -       u32 speed;
> > -
> > -       /* Interrupt coalescing settings */
> > -       struct virtnet_interrupt_coalesce intr_coal_tx;
> > -       struct virtnet_interrupt_coalesce intr_coal_rx;
> > -
> > -       unsigned long guest_offloads;
> > -       unsigned long guest_offloads_capable;
> > -
> > -       /* failover when STANDBY feature enabled */
> > -       struct failover *failover;
> > -};
> > -
> >  struct padded_vnet_hdr {
> >         struct virtio_net_hdr_v1_hash hdr;
> >         /*
> > @@ -337,45 +142,11 @@ struct virtio_net_common_hdr {
> >  static void virtnet_rq_free_unused_buf(struct virtqueue *vq, void *buf);
> >  static void virtnet_sq_free_unused_buf(struct virtqueue *vq, void *buf);
> >
> > -static bool is_xdp_frame(void *ptr)
> > -{
> > -       return (unsigned long)ptr & VIRTIO_XDP_FLAG;
> > -}
> > -
> >  static void *xdp_to_ptr(struct xdp_frame *ptr)
> >  {
> >         return (void *)((unsigned long)ptr | VIRTIO_XDP_FLAG);
> >  }
>
> Any reason for not moving this?
>
> Thanks
>
> >
>

^ permalink raw reply	[flat|nested] 66+ messages in thread

* Re: [PATCH net-next v1 13/19] virtio_net: xsk: tx: virtnet_free_old_xmit() distinguishes xsk buffer
  2023-10-19  7:13         ` Xuan Zhuo
@ 2023-10-19  8:42           ` Michael S. Tsirkin
  0 siblings, 0 replies; 66+ messages in thread
From: Michael S. Tsirkin @ 2023-10-19  8:42 UTC (permalink / raw)
  To: Xuan Zhuo
  Cc: Jakub Kicinski, netdev, David S. Miller, Eric Dumazet,
	Paolo Abeni, Jason Wang, Alexei Starovoitov, Daniel Borkmann,
	Jesper Dangaard Brouer, John Fastabend, virtualization, bpf

On Thu, Oct 19, 2023 at 03:13:48PM +0800, Xuan Zhuo wrote:
> On Thu, 19 Oct 2023 02:38:16 -0400, "Michael S. Tsirkin" <mst@redhat.com> wrote:
> > On Tue, Oct 17, 2023 at 10:02:05AM +0800, Xuan Zhuo wrote:
> > > On Mon, 16 Oct 2023 16:44:34 -0700, Jakub Kicinski <kuba@kernel.org> wrote:
> > > > On Mon, 16 Oct 2023 20:00:27 +0800 Xuan Zhuo wrote:
> > > > > @@ -305,9 +311,15 @@ static inline void virtnet_free_old_xmit(struct virtnet_sq *sq, bool in_napi,
> > > > >
> > > > >  			stats->bytes += xdp_get_frame_len(frame);
> > > > >  			xdp_return_frame(frame);
> > > > > +		} else {
> > > > > +			stats->bytes += virtnet_ptr_to_xsk(ptr);
> > > > > +			++xsknum;
> > > > >  		}
> > > > >  		stats->packets++;
> > > > >  	}
> > > > > +
> > > > > +	if (xsknum)
> > > > > +		xsk_tx_completed(sq->xsk.pool, xsknum);
> > > > >  }
> > > >
> > > > sparse complains:
> > > >
> > > > drivers/net/virtio/virtio_net.h:322:41: warning: incorrect type in argument 1 (different address spaces)
> > > > drivers/net/virtio/virtio_net.h:322:41:    expected struct xsk_buff_pool *pool
> > > > drivers/net/virtio/virtio_net.h:322:41:    got struct xsk_buff_pool
> > > > [noderef] __rcu *pool
> > > >
> > > > please build test with W=1 C=1
> > >
> > > OK. I will add C=1 to my script.
> > >
> > > Thanks.
> >
> > And I hope we all understand: rcu has to be used properly, not just to
> > cast the warning away.
> 
> 
> Yes. I see. I will use rcu_dereference() and rcu_read_xxx().
> 
> Thanks.

When you do, please don't forget to add comments documenting what
rcu_read_lock() and synchronize_rcu() do.


-- 
MST


^ permalink raw reply	[flat|nested] 66+ messages in thread

* Re: [PATCH net-next v1 08/19] virtio_net: sq support premapped mode
  2023-10-16 12:00 ` [PATCH net-next v1 08/19] virtio_net: sq support premapped mode Xuan Zhuo
@ 2023-10-20  6:50   ` Jason Wang
  2023-10-20  7:16     ` Xuan Zhuo
  0 siblings, 1 reply; 66+ messages in thread
From: Jason Wang @ 2023-10-20  6:50 UTC (permalink / raw)
  To: Xuan Zhuo
  Cc: netdev, David S. Miller, Eric Dumazet, Jakub Kicinski,
	Paolo Abeni, Michael S. Tsirkin, Alexei Starovoitov,
	Daniel Borkmann, Jesper Dangaard Brouer, John Fastabend,
	virtualization, bpf

On Mon, Oct 16, 2023 at 8:01 PM Xuan Zhuo <xuanzhuo@linux.alibaba.com> wrote:
>
> If xsk is enabled, xsk tx will share the send queue.
> But xsk requires that the send queue use premapped mode.
> So the send queue must support premapped mode.
>
> Signed-off-by: Xuan Zhuo <xuanzhuo@linux.alibaba.com>
> ---
>  drivers/net/virtio/main.c       | 108 ++++++++++++++++++++++++++++----
>  drivers/net/virtio/virtio_net.h |  54 +++++++++++++++-
>  2 files changed, 149 insertions(+), 13 deletions(-)
>
> diff --git a/drivers/net/virtio/main.c b/drivers/net/virtio/main.c
> index 8da84ea9bcbe..02d27101fef1 100644
> --- a/drivers/net/virtio/main.c
> +++ b/drivers/net/virtio/main.c
> @@ -514,20 +514,104 @@ static void *virtnet_rq_alloc(struct virtnet_rq *rq, u32 size, gfp_t gfp)
>         return buf;
>  }
>
> -static void virtnet_rq_set_premapped(struct virtnet_info *vi)
> +static int virtnet_sq_set_premapped(struct virtnet_sq *sq)
>  {
> -       int i;
> +       struct virtnet_sq_dma *d;
> +       int err, size, i;
>
> -       /* disable for big mode */
> -       if (!vi->mergeable_rx_bufs && vi->big_packets)
> -               return;

Not specific to this patch but any plan to fix the big mode?


> +       size = virtqueue_get_vring_size(sq->vq);
> +
> +       size += MAX_SKB_FRAGS + 2;
> +
> +       sq->dmainfo.head = kcalloc(size, sizeof(*sq->dmainfo.head), GFP_KERNEL);
> +       if (!sq->dmainfo.head)
> +               return -ENOMEM;
> +
> +       err = virtqueue_set_dma_premapped(sq->vq);
> +       if (err) {
> +               kfree(sq->dmainfo.head);
> +               return err;
> +       }
> +
> +       sq->dmainfo.free = NULL;
> +
> +       sq->do_dma = true;
> +
> +       for (i = 0; i < size; ++i) {
> +               d = &sq->dmainfo.head[i];
> +
> +               d->next = sq->dmainfo.free;
> +               sq->dmainfo.free = d;
> +       }
> +
> +       return 0;
> +}
> +
> +static void virtnet_set_premapped(struct virtnet_info *vi)
> +{
> +       int i;
>
>         for (i = 0; i < vi->max_queue_pairs; i++) {
> -               if (virtqueue_set_dma_premapped(vi->rq[i].vq))
> +               if (!virtnet_sq_set_premapped(&vi->sq[i]))
> +                       vi->sq[i].do_dma = true;
> +
> +               /* disable for big mode */
> +               if (!vi->mergeable_rx_bufs && vi->big_packets)
>                         continue;
>
> -               vi->rq[i].do_dma = true;
> +               if (!virtqueue_set_dma_premapped(vi->rq[i].vq))
> +                       vi->rq[i].do_dma = true;
> +       }
> +}
> +
> +static struct virtnet_sq_dma *virtnet_sq_map_sg(struct virtnet_sq *sq, int nents, void *data)
> +{
> +       struct virtnet_sq_dma *d, *head;
> +       struct scatterlist *sg;
> +       int i;
> +
> +       head = NULL;
> +
> +       for_each_sg(sq->sg, sg, nents, i) {
> +               sg->dma_address = virtqueue_dma_map_single_attrs(sq->vq, sg_virt(sg),
> +                                                                sg->length,
> +                                                                DMA_TO_DEVICE, 0);
> +               if (virtqueue_dma_mapping_error(sq->vq, sg->dma_address))
> +                       goto err;
> +
> +               d = sq->dmainfo.free;
> +               sq->dmainfo.free = d->next;
> +
> +               d->addr = sg->dma_address;
> +               d->len = sg->length;
> +
> +               d->next = head;
> +               head = d;

It's really a pity that we need to duplicate the DMA metadata here.
Could we invent a new API to just fetch it from the virtio core?

> +       }
> +
> +       head->data = data;
> +
> +       return (void *)((unsigned long)head | ((unsigned long)data & VIRTIO_XMIT_DATA_MASK));

If we packed everything into dmainfo, we can leave the type (XDP vs
skb) there to avoid trick like packing it into the pointer here?

Thanks


^ permalink raw reply	[flat|nested] 66+ messages in thread

* Re: [PATCH net-next v1 09/19] virtio_net: xsk: bind/unbind xsk
  2023-10-16 12:00 ` [PATCH net-next v1 09/19] virtio_net: xsk: bind/unbind xsk Xuan Zhuo
@ 2023-10-20  6:51   ` Jason Wang
  2023-10-20  7:28     ` Xuan Zhuo
  0 siblings, 1 reply; 66+ messages in thread
From: Jason Wang @ 2023-10-20  6:51 UTC (permalink / raw)
  To: Xuan Zhuo
  Cc: netdev, David S. Miller, Eric Dumazet, Jakub Kicinski,
	Paolo Abeni, Michael S. Tsirkin, Alexei Starovoitov,
	Daniel Borkmann, Jesper Dangaard Brouer, John Fastabend,
	virtualization, bpf

On Mon, Oct 16, 2023 at 8:01 PM Xuan Zhuo <xuanzhuo@linux.alibaba.com> wrote:
>
> This patch implements the logic of binding/unbinding an xsk pool to the sq and rq.
>
> Signed-off-by: Xuan Zhuo <xuanzhuo@linux.alibaba.com>
> ---
>  drivers/net/virtio/Makefile     |   2 +-
>  drivers/net/virtio/main.c       |  10 +-
>  drivers/net/virtio/virtio_net.h |  18 ++++
>  drivers/net/virtio/xsk.c        | 186 ++++++++++++++++++++++++++++++++
>  drivers/net/virtio/xsk.h        |   7 ++
>  5 files changed, 216 insertions(+), 7 deletions(-)
>  create mode 100644 drivers/net/virtio/xsk.c
>  create mode 100644 drivers/net/virtio/xsk.h
>
> diff --git a/drivers/net/virtio/Makefile b/drivers/net/virtio/Makefile
> index 15ed7c97fd4f..8c2a884d2dba 100644
> --- a/drivers/net/virtio/Makefile
> +++ b/drivers/net/virtio/Makefile
> @@ -5,4 +5,4 @@
>
>  obj-$(CONFIG_VIRTIO_NET) += virtio_net.o
>
> -virtio_net-y := main.o
> +virtio_net-y := main.o xsk.o
> diff --git a/drivers/net/virtio/main.c b/drivers/net/virtio/main.c
> index 02d27101fef1..38733a782f12 100644
> --- a/drivers/net/virtio/main.c
> +++ b/drivers/net/virtio/main.c
> @@ -8,7 +8,6 @@
>  #include <linux/etherdevice.h>
>  #include <linux/module.h>
>  #include <linux/virtio.h>
> -#include <linux/virtio_net.h>
>  #include <linux/bpf.h>
>  #include <linux/bpf_trace.h>
>  #include <linux/scatterlist.h>
> @@ -139,9 +138,6 @@ struct virtio_net_common_hdr {
>         };
>  };
>
> -static void virtnet_rq_free_unused_buf(struct virtqueue *vq, void *buf);
> -static void virtnet_sq_free_unused_buf(struct virtqueue *vq, void *buf);
> -
>  static void *xdp_to_ptr(struct xdp_frame *ptr)
>  {
>         return (void *)((unsigned long)ptr | VIRTIO_XDP_FLAG);
> @@ -3664,6 +3660,8 @@ static int virtnet_xdp(struct net_device *dev, struct netdev_bpf *xdp)
>         switch (xdp->command) {
>         case XDP_SETUP_PROG:
>                 return virtnet_xdp_set(dev, xdp->prog, xdp->extack);
> +       case XDP_SETUP_XSK_POOL:
> +               return virtnet_xsk_pool_setup(dev, xdp);
>         default:
>                 return -EINVAL;
>         }
> @@ -3849,7 +3847,7 @@ static void free_receive_page_frags(struct virtnet_info *vi)
>                 }
>  }
>
> -static void virtnet_sq_free_unused_buf(struct virtqueue *vq, void *buf)
> +void virtnet_sq_free_unused_buf(struct virtqueue *vq, void *buf)
>  {
>         if (!virtnet_is_xdp_frame(buf))
>                 dev_kfree_skb(buf);
> @@ -3857,7 +3855,7 @@ static void virtnet_sq_free_unused_buf(struct virtqueue *vq, void *buf)
>                 xdp_return_frame(virtnet_ptr_to_xdp(buf));
>  }
>
> -static void virtnet_rq_free_unused_buf(struct virtqueue *vq, void *buf)
> +void virtnet_rq_free_unused_buf(struct virtqueue *vq, void *buf)
>  {
>         struct virtnet_info *vi = vq->vdev->priv;
>         int i = vq2rxq(vq);
> diff --git a/drivers/net/virtio/virtio_net.h b/drivers/net/virtio/virtio_net.h
> index cc742756e19a..9e69b6c5921b 100644
> --- a/drivers/net/virtio/virtio_net.h
> +++ b/drivers/net/virtio/virtio_net.h
> @@ -5,6 +5,8 @@
>
>  #include <linux/ethtool.h>
>  #include <linux/average.h>
> +#include <linux/virtio_net.h>
> +#include <net/xdp_sock_drv.h>
>
>  #define VIRTIO_XDP_FLAG        BIT(0)
>  #define VIRTIO_XMIT_DATA_MASK (VIRTIO_XDP_FLAG)
> @@ -94,6 +96,11 @@ struct virtnet_sq {
>         bool do_dma;
>
>         struct virtnet_sq_dma_head dmainfo;
> +       struct {
> +               struct xsk_buff_pool __rcu *pool;
> +
> +               dma_addr_t hdr_dma_address;
> +       } xsk;
>  };
>
>  /* Internal representation of a receive virtqueue */
> @@ -134,6 +141,13 @@ struct virtnet_rq {
>
>         /* Do dma by self */
>         bool do_dma;
> +
> +       struct {
> +               struct xsk_buff_pool __rcu *pool;
> +
> +               /* xdp rxq used by xsk */
> +               struct xdp_rxq_info xdp_rxq;
> +       } xsk;
>  };
>
>  struct virtnet_info {
> @@ -218,6 +232,8 @@ struct virtnet_info {
>         struct failover *failover;
>  };
>
> +#include "xsk.h"
> +

Any reason we don't do it with other headers?

>  static inline bool virtnet_is_xdp_frame(void *ptr)
>  {
>         return (unsigned long)ptr & VIRTIO_XDP_FLAG;
> @@ -308,4 +324,6 @@ void virtnet_rx_pause(struct virtnet_info *vi, struct virtnet_rq *rq);
>  void virtnet_rx_resume(struct virtnet_info *vi, struct virtnet_rq *rq);
>  void virtnet_tx_pause(struct virtnet_info *vi, struct virtnet_sq *sq);
>  void virtnet_tx_resume(struct virtnet_info *vi, struct virtnet_sq *sq);
> +void virtnet_sq_free_unused_buf(struct virtqueue *vq, void *buf);
> +void virtnet_rq_free_unused_buf(struct virtqueue *vq, void *buf);
>  #endif
> diff --git a/drivers/net/virtio/xsk.c b/drivers/net/virtio/xsk.c
> new file mode 100644
> index 000000000000..dddd01962a3f
> --- /dev/null
> +++ b/drivers/net/virtio/xsk.c
> @@ -0,0 +1,186 @@
> +// SPDX-License-Identifier: GPL-2.0-or-later
> +/*
> + * virtio-net xsk
> + */
> +
> +#include "virtio_net.h"
> +
> +static struct virtio_net_hdr_mrg_rxbuf xsk_hdr;
> +
> +static int virtnet_rq_bind_xsk_pool(struct virtnet_info *vi, struct virtnet_rq *rq,
> +                                   struct xsk_buff_pool *pool)
> +{
> +       int err, qindex;
> +
> +       qindex = rq - vi->rq;
> +
> +       if (pool) {
> +               err = xdp_rxq_info_reg(&rq->xsk.xdp_rxq, vi->dev, qindex, rq->napi.napi_id);
> +               if (err < 0)
> +                       return err;
> +
> +               err = xdp_rxq_info_reg_mem_model(&rq->xsk.xdp_rxq,
> +                                                MEM_TYPE_XSK_BUFF_POOL, NULL);
> +               if (err < 0) {
> +                       xdp_rxq_info_unreg(&rq->xsk.xdp_rxq);
> +                       return err;
> +               }
> +
> +               xsk_pool_set_rxq_info(pool, &rq->xsk.xdp_rxq);
> +       } else {
> +               xdp_rxq_info_unreg(&rq->xsk.xdp_rxq);
> +       }
> +
> +       virtnet_rx_pause(vi, rq);
> +
> +       err = virtqueue_reset(rq->vq, virtnet_rq_free_unused_buf);
> +       if (err)
> +               netdev_err(vi->dev, "reset rx fail: rx queue index: %d err: %d\n", qindex, err);
> +
> +       if (pool && err)
> +               xdp_rxq_info_unreg(&rq->xsk.xdp_rxq);
> +       else
> +               rcu_assign_pointer(rq->xsk.pool, pool);
> +
> +       virtnet_rx_resume(vi, rq);
> +
> +       return err;
> +}
> +
> +static int virtnet_sq_bind_xsk_pool(struct virtnet_info *vi,
> +                                   struct virtnet_sq *sq,
> +                                   struct xsk_buff_pool *pool)
> +{
> +       int err, qindex;
> +
> +       qindex = sq - vi->sq;
> +
> +       virtnet_tx_pause(vi, sq);
> +
> +       err = virtqueue_reset(sq->vq, virtnet_sq_free_unused_buf);
> +       if (err)
> +               netdev_err(vi->dev, "reset tx fail: tx queue index: %d err: %d\n", qindex, err);
> +
> +       if (pool) {
> +               if (!err)
> +                       rcu_assign_pointer(sq->xsk.pool, pool);
> +       } else {
> +               rcu_assign_pointer(sq->xsk.pool, NULL);
> +       }
> +
> +       virtnet_tx_resume(vi, sq);
> +
> +       return err;
> +}
> +
> +static int virtnet_xsk_pool_enable(struct net_device *dev,
> +                                  struct xsk_buff_pool *pool,
> +                                  u16 qid)
> +{
> +       struct virtnet_info *vi = netdev_priv(dev);
> +       struct virtnet_rq *rq;
> +       struct virtnet_sq *sq;
> +       struct device *dma_dev;
> +       dma_addr_t hdr_dma;
> +       int err;
> +
> +       /* In big_packets mode, xdp cannot work, so there is no need to
> +        * initialize xsk of rq.
> +        */
> +       if (vi->big_packets && !vi->mergeable_rx_bufs)
> +               return -ENOENT;
> +
> +       if (qid >= vi->curr_queue_pairs)
> +               return -EINVAL;
> +
> +       sq = &vi->sq[qid];
> +       rq = &vi->rq[qid];
> +
> +       /* xsk tx zerocopy depend on the tx napi.
> +        *
> +        * All xsk packets are actually consumed and sent out from the xsk tx
> +        * queue under the tx napi mechanism.
> +        */
> +       if (!sq->napi.weight)
> +               return -EPERM;
> +
> +       if (!rq->do_dma || !sq->do_dma)
> +               return -EPERM;
> +
> +       if (virtqueue_dma_dev(rq->vq) != virtqueue_dma_dev(sq->vq))
> +               return -EPERM;

Any reason we need this check? Is there any code that has this
assumption? E.g., the dma sync in XDP_TX?

> +
> +       dma_dev = virtqueue_dma_dev(rq->vq);
> +       if (!dma_dev)
> +               return -EPERM;
> +
> +       hdr_dma = dma_map_single(dma_dev, &xsk_hdr, vi->hdr_len, DMA_TO_DEVICE);
> +       if (dma_mapping_error(dma_dev, hdr_dma))
> +               return -ENOMEM;
> +
> +       err = xsk_pool_dma_map(pool, dma_dev, 0);
> +       if (err)
> +               goto err_xsk_map;
> +
> +       err = virtnet_rq_bind_xsk_pool(vi, rq, pool);
> +       if (err)
> +               goto err_rq;
> +
> +       err = virtnet_sq_bind_xsk_pool(vi, sq, pool);
> +       if (err)
> +               goto err_sq;
> +
> +       sq->xsk.hdr_dma_address = hdr_dma;

I think we probably need some comments to explain why a single hdr can
work. Does it mean we use the same hdr for all XSK packets?

> +
> +       return 0;
> +
> +err_sq:
> +       virtnet_rq_bind_xsk_pool(vi, rq, NULL);
> +err_rq:
> +       xsk_pool_dma_unmap(pool, 0);
> +err_xsk_map:
> +       dma_unmap_single(dma_dev, hdr_dma, vi->hdr_len, DMA_TO_DEVICE);
> +       return err;
> +}
> +
> +static int virtnet_xsk_pool_disable(struct net_device *dev, u16 qid)
> +{
> +       struct virtnet_info *vi = netdev_priv(dev);
> +       struct xsk_buff_pool *pool;
> +       struct device *dma_dev;
> +       struct virtnet_rq *rq;
> +       struct virtnet_sq *sq;
> +       int err1, err2;
> +
> +       if (qid >= vi->curr_queue_pairs)
> +               return -EINVAL;
> +
> +       sq = &vi->sq[qid];
> +       rq = &vi->rq[qid];
> +
> +       dma_dev = virtqueue_dma_dev(rq->vq);
> +
> +       /* Sync with the XSK wakeup and NAPI. */
> +       synchronize_net();

Any reason we couldn't do bind_xsk_pool(NULL) here? It seems easier.

Thanks


> +
> +       dma_unmap_single(dma_dev, sq->xsk.hdr_dma_address, vi->hdr_len, DMA_TO_DEVICE);
> +
> +       rcu_read_lock();
> +       pool = rcu_dereference(sq->xsk.pool);
> +       xsk_pool_dma_unmap(pool, 0);
> +       rcu_read_unlock();
> +
> +       err1 = virtnet_sq_bind_xsk_pool(vi, sq, NULL);
> +       err2 = virtnet_rq_bind_xsk_pool(vi, rq, NULL);
> +
> +       return err1 | err2;
> +}
> +
> +int virtnet_xsk_pool_setup(struct net_device *dev, struct netdev_bpf *xdp)
> +{
> +       if (xdp->xsk.pool)
> +               return virtnet_xsk_pool_enable(dev, xdp->xsk.pool,
> +                                              xdp->xsk.queue_id);
> +       else
> +               return virtnet_xsk_pool_disable(dev, xdp->xsk.queue_id);
> +}
> diff --git a/drivers/net/virtio/xsk.h b/drivers/net/virtio/xsk.h
> new file mode 100644
> index 000000000000..1918285c310c
> --- /dev/null
> +++ b/drivers/net/virtio/xsk.h
> @@ -0,0 +1,7 @@
> +/* SPDX-License-Identifier: GPL-2.0-or-later */
> +
> +#ifndef __XSK_H__
> +#define __XSK_H__
> +
> +int virtnet_xsk_pool_setup(struct net_device *dev, struct netdev_bpf *xdp);
> +#endif
> --
> 2.32.0.3.g01195cf9f
>


^ permalink raw reply	[flat|nested] 66+ messages in thread

* Re: [PATCH net-next v1 10/19] virtio_net: xsk: prevent disable tx napi
  2023-10-16 12:00 ` [PATCH net-next v1 10/19] virtio_net: xsk: prevent disable tx napi Xuan Zhuo
@ 2023-10-20  6:51   ` Jason Wang
  0 siblings, 0 replies; 66+ messages in thread
From: Jason Wang @ 2023-10-20  6:51 UTC (permalink / raw)
  To: Xuan Zhuo
  Cc: netdev, David S. Miller, Eric Dumazet, Jakub Kicinski,
	Paolo Abeni, Michael S. Tsirkin, Alexei Starovoitov,
	Daniel Borkmann, Jesper Dangaard Brouer, John Fastabend,
	virtualization, bpf

On Mon, Oct 16, 2023 at 8:01 PM Xuan Zhuo <xuanzhuo@linux.alibaba.com> wrote:
>
> Since xsk's TX queue is consumed by TX NAPI, if the sq is bound to xsk,
> then we must prevent tx napi from being disabled.
>
> Signed-off-by: Xuan Zhuo <xuanzhuo@linux.alibaba.com>

Acked-by: Jason Wang <jasowang@redhat.com>

Thanks


> ---
>  drivers/net/virtio/main.c | 10 +++++++++-
>  1 file changed, 9 insertions(+), 1 deletion(-)
>
> diff --git a/drivers/net/virtio/main.c b/drivers/net/virtio/main.c
> index 38733a782f12..b320770e5f4e 100644
> --- a/drivers/net/virtio/main.c
> +++ b/drivers/net/virtio/main.c
> @@ -3203,7 +3203,7 @@ static int virtnet_set_coalesce(struct net_device *dev,
>                                 struct netlink_ext_ack *extack)
>  {
>         struct virtnet_info *vi = netdev_priv(dev);
> -       int ret, queue_number, napi_weight;
> +       int ret, queue_number, napi_weight, i;
>         bool update_napi = false;
>
>         /* Can't change NAPI weight if the link is up */
> @@ -3232,6 +3232,14 @@ static int virtnet_set_coalesce(struct net_device *dev,
>                 return ret;
>
>         if (update_napi) {
> +               /* xsk xmit depends on the tx napi. So if xsk is active,
> +                * prevent modifications to tx napi.
> +                */
> +               for (i = queue_number; i < vi->max_queue_pairs; i++) {
> +                       if (rtnl_dereference(vi->sq[i].xsk.pool))
> +                               return -EBUSY;
> +               }
> +
>                 for (; queue_number < vi->max_queue_pairs; queue_number++)
>                         vi->sq[queue_number].napi.weight = napi_weight;
>         }
> --
> 2.32.0.3.g01195cf9f
>


^ permalink raw reply	[flat|nested] 66+ messages in thread

* Re: [PATCH net-next v1 11/19] virtio_net: xsk: tx: support tx
  2023-10-16 12:00 ` [PATCH net-next v1 11/19] virtio_net: xsk: tx: support tx Xuan Zhuo
@ 2023-10-20  6:52   ` Jason Wang
  2023-10-20  8:06     ` Xuan Zhuo
  0 siblings, 1 reply; 66+ messages in thread
From: Jason Wang @ 2023-10-20  6:52 UTC (permalink / raw)
  To: Xuan Zhuo
  Cc: netdev, David S. Miller, Eric Dumazet, Jakub Kicinski,
	Paolo Abeni, Michael S. Tsirkin, Alexei Starovoitov,
	Daniel Borkmann, Jesper Dangaard Brouer, John Fastabend,
	virtualization, bpf

On Mon, Oct 16, 2023 at 8:01 PM Xuan Zhuo <xuanzhuo@linux.alibaba.com> wrote:
>
> The driver's tx napi is very important for XSK. It is responsible for
> obtaining data from the XSK queue and sending it out.
>
> To kick off transmission, we need to trigger tx napi.
>
> Signed-off-by: Xuan Zhuo <xuanzhuo@linux.alibaba.com>
> ---
>  drivers/net/virtio/main.c       |  18 +++++-
>  drivers/net/virtio/virtio_net.h |   3 +-
>  drivers/net/virtio/xsk.c        | 108 ++++++++++++++++++++++++++++++++
>  drivers/net/virtio/xsk.h        |  13 ++++
>  4 files changed, 140 insertions(+), 2 deletions(-)
>
> diff --git a/drivers/net/virtio/main.c b/drivers/net/virtio/main.c
> index b320770e5f4e..a08429bef61f 100644
> --- a/drivers/net/virtio/main.c
> +++ b/drivers/net/virtio/main.c
> @@ -2054,7 +2054,9 @@ static int virtnet_poll_tx(struct napi_struct *napi, int budget)
>         struct virtnet_sq *sq = container_of(napi, struct virtnet_sq, napi);
>         struct virtnet_info *vi = sq->vq->vdev->priv;
>         unsigned int index = vq2txq(sq->vq);
> +       struct xsk_buff_pool *pool;
>         struct netdev_queue *txq;
> +       int busy = 0;
>         int opaque;
>         bool done;
>
> @@ -2067,11 +2069,25 @@ static int virtnet_poll_tx(struct napi_struct *napi, int budget)
>         txq = netdev_get_tx_queue(vi->dev, index);
>         __netif_tx_lock(txq, raw_smp_processor_id());
>         virtqueue_disable_cb(sq->vq);
> -       free_old_xmit(sq, true);
> +
> +       rcu_read_lock();
> +       pool = rcu_dereference(sq->xsk.pool);
> +       if (pool) {
> +               busy |= virtnet_xsk_xmit(sq, pool, budget);
> +               rcu_read_unlock();
> +       } else {
> +               rcu_read_unlock();
> +               free_old_xmit(sq, true);
> +       }
>
>         if (sq->vq->num_free >= 2 + MAX_SKB_FRAGS)
>                 netif_tx_wake_queue(txq);
>
> +       if (busy) {
> +               __netif_tx_unlock(txq);
> +               return budget;
> +       }
> +
>         opaque = virtqueue_enable_cb_prepare(sq->vq);
>
>         done = napi_complete_done(napi, 0);
> diff --git a/drivers/net/virtio/virtio_net.h b/drivers/net/virtio/virtio_net.h
> index 9e69b6c5921b..3bbb1f5baad5 100644
> --- a/drivers/net/virtio/virtio_net.h
> +++ b/drivers/net/virtio/virtio_net.h
> @@ -9,7 +9,8 @@
>  #include <net/xdp_sock_drv.h>
>
>  #define VIRTIO_XDP_FLAG        BIT(0)
> -#define VIRTIO_XMIT_DATA_MASK (VIRTIO_XDP_FLAG)
> +#define VIRTIO_XSK_FLAG        BIT(1)
> +#define VIRTIO_XMIT_DATA_MASK (VIRTIO_XDP_FLAG | VIRTIO_XSK_FLAG)
>
>  /* RX packet size EWMA. The average packet size is used to determine the packet
>   * buffer size when refilling RX rings. As the entire RX ring may be refilled
> diff --git a/drivers/net/virtio/xsk.c b/drivers/net/virtio/xsk.c
> index dddd01962a3f..0e775a9d270f 100644
> --- a/drivers/net/virtio/xsk.c
> +++ b/drivers/net/virtio/xsk.c
> @@ -7,6 +7,114 @@
>
>  static struct virtio_net_hdr_mrg_rxbuf xsk_hdr;
>
> +static void sg_fill_dma(struct scatterlist *sg, dma_addr_t addr, u32 len)
> +{
> +       sg->dma_address = addr;
> +       sg->length = len;
> +}
> +
> +static void virtnet_xsk_check_queue(struct virtnet_sq *sq)
> +{
> +       struct virtnet_info *vi = sq->vq->vdev->priv;
> +       struct net_device *dev = vi->dev;
> +       int qnum = sq - vi->sq;
> +
> +       /* If it is a raw buffer queue, it does not check whether the status
> +        * of the queue is stopped when sending. So there is no need to check
> +        * the situation of the raw buffer queue.
> +        */
> +       if (virtnet_is_xdp_raw_buffer_queue(vi, qnum))
> +               return;
> +
> +       /* If this sq is not exclusively owned by the current CPU, it may
> +        * also be used by start_xmit, so check whether it is running out
> +        * of space.
> +        *
> +        * Stop the queue to avoid getting packets that we are then unable
> +        * to transmit, and wait for the tx interrupt.
> +        */
> +       if (sq->vq->num_free < 2 + MAX_SKB_FRAGS)
> +               netif_stop_subqueue(dev, qnum);
> +}
> +
> +static int virtnet_xsk_xmit_one(struct virtnet_sq *sq,
> +                               struct xsk_buff_pool *pool,
> +                               struct xdp_desc *desc)
> +{
> +       struct virtnet_info *vi;
> +       dma_addr_t addr;
> +
> +       vi = sq->vq->vdev->priv;
> +
> +       addr = xsk_buff_raw_get_dma(pool, desc->addr);
> +       xsk_buff_raw_dma_sync_for_device(pool, addr, desc->len);
> +
> +       sg_init_table(sq->sg, 2);
> +
> +       sg_fill_dma(sq->sg, sq->xsk.hdr_dma_address, vi->hdr_len);
> +       sg_fill_dma(sq->sg + 1, addr, desc->len);
> +
> +       return virtqueue_add_outbuf(sq->vq, sq->sg, 2,
> +                                   virtnet_xsk_to_ptr(desc->len), GFP_ATOMIC);
> +}
> +
> +static int virtnet_xsk_xmit_batch(struct virtnet_sq *sq,
> +                                 struct xsk_buff_pool *pool,
> +                                 unsigned int budget,
> +                                 struct virtnet_sq_stats *stats)
> +{
> +       struct xdp_desc *descs = pool->tx_descs;
> +       u32 nb_pkts, max_pkts, i;
> +       bool kick = false;
> +       int err;
> +
> +       max_pkts = min_t(u32, budget, sq->vq->num_free / 2);

We need to document why num_free / 2 is chosen here.

Others look fine.

Thanks


> +
> +       nb_pkts = xsk_tx_peek_release_desc_batch(pool, max_pkts);
> +       if (!nb_pkts)
> +               return 0;
> +
> +       for (i = 0; i < nb_pkts; i++) {
> +               err = virtnet_xsk_xmit_one(sq, pool, &descs[i]);
> +               if (unlikely(err))
> +                       break;
> +
> +               kick = true;
> +       }
> +
> +       if (kick && virtqueue_kick_prepare(sq->vq) && virtqueue_notify(sq->vq))
> +               ++stats->kicks;
> +
> +       stats->xdp_tx += i;
> +
> +       return i;
> +}
> +
> +bool virtnet_xsk_xmit(struct virtnet_sq *sq, struct xsk_buff_pool *pool,
> +                     int budget)
> +{
> +       struct virtnet_sq_stats stats = {};
> +       int sent;
> +
> +       virtnet_free_old_xmit(sq, true, &stats);
> +
> +       sent = virtnet_xsk_xmit_batch(sq, pool, budget, &stats);
> +
> +       virtnet_xsk_check_queue(sq);
> +
> +       u64_stats_update_begin(&sq->stats.syncp);
> +       sq->stats.packets += stats.packets;
> +       sq->stats.bytes += stats.bytes;
> +       sq->stats.kicks += stats.kicks;
> +       sq->stats.xdp_tx += stats.xdp_tx;
> +       u64_stats_update_end(&sq->stats.syncp);
> +
> +       if (xsk_uses_need_wakeup(pool))
> +               xsk_set_tx_need_wakeup(pool);
> +
> +       return sent == budget;
> +}
> +
>  static int virtnet_rq_bind_xsk_pool(struct virtnet_info *vi, struct virtnet_rq *rq,
>                                     struct xsk_buff_pool *pool)
>  {
> diff --git a/drivers/net/virtio/xsk.h b/drivers/net/virtio/xsk.h
> index 1918285c310c..73ca8cd5308b 100644
> --- a/drivers/net/virtio/xsk.h
> +++ b/drivers/net/virtio/xsk.h
> @@ -3,5 +3,18 @@
>  #ifndef __XSK_H__
>  #define __XSK_H__
>
> +#define VIRTIO_XSK_FLAG_OFFSET 4
> +
> +static inline void *virtnet_xsk_to_ptr(u32 len)
> +{
> +       unsigned long p;
> +
> +       p = len << VIRTIO_XSK_FLAG_OFFSET;
> +
> +       return (void *)(p | VIRTIO_XSK_FLAG);
> +}
> +
>  int virtnet_xsk_pool_setup(struct net_device *dev, struct netdev_bpf *xdp);
> +bool virtnet_xsk_xmit(struct virtnet_sq *sq, struct xsk_buff_pool *pool,
> +                     int budget);
>  #endif
> --
> 2.32.0.3.g01195cf9f
>


^ permalink raw reply	[flat|nested] 66+ messages in thread
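
The token returned by virtnet_xsk_to_ptr() in the hunk above packs the transmitted length and a type flag into a single pointer-sized value. A minimal user-space sketch of that encoding (the macro values mirror the patch, but this is illustrative code, not the kernel source):

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative user-space sketch (not the kernel source) of the token
 * encoding used by virtnet_xsk_to_ptr(): the low 4 bits
 * (VIRTIO_XSK_FLAG_OFFSET) are reserved for type flags, and the
 * transmitted length lives in the remaining high bits. */
#define VIRTIO_XDP_FLAG        0x1UL
#define VIRTIO_XSK_FLAG        0x2UL
#define VIRTIO_XSK_FLAG_OFFSET 4

void *xsk_to_ptr(uint32_t len)
{
	unsigned long p = (unsigned long)len << VIRTIO_XSK_FLAG_OFFSET;

	return (void *)(p | VIRTIO_XSK_FLAG);	/* tag as an xsk buffer */
}

int ptr_is_xsk(void *ptr)
{
	return ((unsigned long)ptr & VIRTIO_XSK_FLAG) != 0;
}

uint32_t ptr_to_len(void *ptr)
{
	/* Shifting right drops both flag bits and recovers the length. */
	return (uint32_t)((unsigned long)ptr >> VIRTIO_XSK_FLAG_OFFSET);
}
```

Shifting by 4 leaves room for both VIRTIO_XDP_FLAG and VIRTIO_XSK_FLAG in the low bits, so the same scheme can tag all non-skb buffer types.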

* Re: [PATCH net-next v1 12/19] virtio_net: xsk: tx: support wakeup
  2023-10-16 12:00 ` [PATCH net-next v1 12/19] virtio_net: xsk: tx: support wakeup Xuan Zhuo
@ 2023-10-20  6:52   ` Jason Wang
  2023-10-20  8:09     ` Xuan Zhuo
  0 siblings, 1 reply; 66+ messages in thread
From: Jason Wang @ 2023-10-20  6:52 UTC (permalink / raw)
  To: Xuan Zhuo
  Cc: netdev, David S. Miller, Eric Dumazet, Jakub Kicinski,
	Paolo Abeni, Michael S. Tsirkin, Alexei Starovoitov,
	Daniel Borkmann, Jesper Dangaard Brouer, John Fastabend,
	virtualization, bpf

On Mon, Oct 16, 2023 at 8:01 PM Xuan Zhuo <xuanzhuo@linux.alibaba.com> wrote:
>
> The xsk wakeup hook is used by the xsk framework or by userspace to
> trigger the xsk xmit logic.
>
> Virtio-Net does not support actively generating an interrupt from the
> driver, so we instead try to trigger tx NAPI on the CPU that handled
> the last tx interrupt.
>
> This takes cache effects into account: the tx interrupt is generally
> pinned to one CPU, so it is better to start tx NAPI on that same CPU.
>
> Signed-off-by: Xuan Zhuo <xuanzhuo@linux.alibaba.com>
> ---
>  drivers/net/virtio/main.c       |  3 ++
>  drivers/net/virtio/virtio_net.h |  8 +++++
>  drivers/net/virtio/xsk.c        | 57 +++++++++++++++++++++++++++++++++
>  drivers/net/virtio/xsk.h        |  1 +
>  4 files changed, 69 insertions(+)
>
> diff --git a/drivers/net/virtio/main.c b/drivers/net/virtio/main.c
> index a08429bef61f..1a222221352e 100644
> --- a/drivers/net/virtio/main.c
> +++ b/drivers/net/virtio/main.c
> @@ -2066,6 +2066,8 @@ static int virtnet_poll_tx(struct napi_struct *napi, int budget)
>                 return 0;
>         }
>
> +       sq->xsk.last_cpu = smp_processor_id();
> +
>         txq = netdev_get_tx_queue(vi->dev, index);
>         __netif_tx_lock(txq, raw_smp_processor_id());
>         virtqueue_disable_cb(sq->vq);
> @@ -3770,6 +3772,7 @@ static const struct net_device_ops virtnet_netdev = {
>         .ndo_vlan_rx_kill_vid = virtnet_vlan_rx_kill_vid,
>         .ndo_bpf                = virtnet_xdp,
>         .ndo_xdp_xmit           = virtnet_xdp_xmit,
> +       .ndo_xsk_wakeup         = virtnet_xsk_wakeup,
>         .ndo_features_check     = passthru_features_check,
>         .ndo_get_phys_port_name = virtnet_get_phys_port_name,
>         .ndo_set_features       = virtnet_set_features,
> diff --git a/drivers/net/virtio/virtio_net.h b/drivers/net/virtio/virtio_net.h
> index 3bbb1f5baad5..7c72a8bb1813 100644
> --- a/drivers/net/virtio/virtio_net.h
> +++ b/drivers/net/virtio/virtio_net.h
> @@ -101,6 +101,14 @@ struct virtnet_sq {
>                 struct xsk_buff_pool __rcu *pool;
>
>                 dma_addr_t hdr_dma_address;
> +
> +               u32 last_cpu;
> +               struct __call_single_data csd;
> +
> +               /* The lock to prevent the repeat of calling
> +                * smp_call_function_single_async().
> +                */
> +               spinlock_t ipi_lock;
>         } xsk;
>  };
>
> diff --git a/drivers/net/virtio/xsk.c b/drivers/net/virtio/xsk.c
> index 0e775a9d270f..973e783260c3 100644
> --- a/drivers/net/virtio/xsk.c
> +++ b/drivers/net/virtio/xsk.c
> @@ -115,6 +115,60 @@ bool virtnet_xsk_xmit(struct virtnet_sq *sq, struct xsk_buff_pool *pool,
>         return sent == budget;
>  }
>
> +static void virtnet_remote_napi_schedule(void *info)
> +{
> +       struct virtnet_sq *sq = info;
> +
> +       virtnet_vq_napi_schedule(&sq->napi, sq->vq);
> +}
> +
> +static void virtnet_remote_raise_napi(struct virtnet_sq *sq)
> +{
> +       u32 last_cpu, cur_cpu;
> +
> +       last_cpu = sq->xsk.last_cpu;
> +       cur_cpu = get_cpu();
> +
> +       /* On a remote CPU, the softirq runs automatically when the IPI
> +        * interrupt exits. On the local CPU, smp_call_function_single_async()
> +        * does not raise an IPI, so the softirq cannot be triggered
> +        * automatically; call local_bh_enable() afterwards to trigger
> +        * softirq processing.
> +        */
> +       if (last_cpu == cur_cpu) {
> +               local_bh_disable();
> +               virtnet_vq_napi_schedule(&sq->napi, sq->vq);
> +               local_bh_enable();
> +       } else {
> +               if (spin_trylock(&sq->xsk.ipi_lock)) {
> +                       smp_call_function_single_async(last_cpu, &sq->xsk.csd);
> +                       spin_unlock(&sq->xsk.ipi_lock);
> +               }
> +       }

Is there any number to show whether it's worth it for an IPI here? For
example, GVE doesn't do this.

Thanks


> +
> +       put_cpu();
> +}
> +
> +int virtnet_xsk_wakeup(struct net_device *dev, u32 qid, u32 flag)
> +{
> +       struct virtnet_info *vi = netdev_priv(dev);
> +       struct virtnet_sq *sq;
> +
> +       if (!netif_running(dev))
> +               return -ENETDOWN;
> +
> +       if (qid >= vi->curr_queue_pairs)
> +               return -EINVAL;
> +
> +       sq = &vi->sq[qid];
> +
> +       if (napi_if_scheduled_mark_missed(&sq->napi))
> +               return 0;
> +
> +       virtnet_remote_raise_napi(sq);
> +
> +       return 0;
> +}
> +
>  static int virtnet_rq_bind_xsk_pool(struct virtnet_info *vi, struct virtnet_rq *rq,
>                                     struct xsk_buff_pool *pool)
>  {
> @@ -240,6 +294,9 @@ static int virtnet_xsk_pool_enable(struct net_device *dev,
>
>         sq->xsk.hdr_dma_address = hdr_dma;
>
> +       INIT_CSD(&sq->xsk.csd, virtnet_remote_napi_schedule, sq);
> +       spin_lock_init(&sq->xsk.ipi_lock);
> +
>         return 0;
>
>  err_sq:
> diff --git a/drivers/net/virtio/xsk.h b/drivers/net/virtio/xsk.h
> index 73ca8cd5308b..1bd19dcda649 100644
> --- a/drivers/net/virtio/xsk.h
> +++ b/drivers/net/virtio/xsk.h
> @@ -17,4 +17,5 @@ static inline void *virtnet_xsk_to_ptr(u32 len)
>  int virtnet_xsk_pool_setup(struct net_device *dev, struct netdev_bpf *xdp);
>  bool virtnet_xsk_xmit(struct virtnet_sq *sq, struct xsk_buff_pool *pool,
>                       int budget);
> +int virtnet_xsk_wakeup(struct net_device *dev, u32 qid, u32 flag);
>  #endif
> --
> 2.32.0.3.g01195cf9f
>


^ permalink raw reply	[flat|nested] 66+ messages in thread
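
The wakeup path above first calls napi_if_scheduled_mark_missed(): if tx NAPI is already running, it only marks that more work arrived, and the local-schedule/IPI machinery is reached only when NAPI is idle. A toy state machine sketching that decision (illustrative only, not the kernel API):

```c
#include <assert.h>

/* Toy state machine (not the kernel API) modeling the wakeup fast path:
 * an already-scheduled NAPI absorbs the new work by being marked missed,
 * so the local-schedule/IPI path is taken only from idle. */
enum napi_state { NAPI_IDLE, NAPI_SCHEDULED, NAPI_MISSED };

/* Returns 1 when a real wakeup (local schedule or IPI) must be issued. */
int xsk_wakeup(enum napi_state *st)
{
	if (*st != NAPI_IDLE) {
		*st = NAPI_MISSED;	/* poll loop will run again; no IPI */
		return 0;
	}
	*st = NAPI_SCHEDULED;		/* schedule locally or via IPI */
	return 1;
}
```

This is why the IPI cost is only paid on the idle-to-scheduled transition; back-to-back wakeups while NAPI is busy are absorbed for free.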

* Re: [PATCH net-next v1 14/19] virtio_net: xsk: tx: virtnet_sq_free_unused_buf() check xsk buffer
  2023-10-16 12:00 ` [PATCH net-next v1 14/19] virtio_net: xsk: tx: virtnet_sq_free_unused_buf() check " Xuan Zhuo
@ 2023-10-20  6:53   ` Jason Wang
  0 siblings, 0 replies; 66+ messages in thread
From: Jason Wang @ 2023-10-20  6:53 UTC (permalink / raw)
  To: Xuan Zhuo
  Cc: netdev, David S. Miller, Eric Dumazet, Jakub Kicinski,
	Paolo Abeni, Michael S. Tsirkin, Alexei Starovoitov,
	Daniel Borkmann, Jesper Dangaard Brouer, John Fastabend,
	virtualization, bpf

On Mon, Oct 16, 2023 at 8:01 PM Xuan Zhuo <xuanzhuo@linux.alibaba.com> wrote:
>
> Make virtnet_sq_free_unused_buf() check for xsk buffers.
>
> Signed-off-by: Xuan Zhuo <xuanzhuo@linux.alibaba.com>

Acked-by: Jason Wang <jasowang@redhat.com>

Thanks


> ---
>  drivers/net/virtio/main.c | 6 ++++--
>  1 file changed, 4 insertions(+), 2 deletions(-)
>
> diff --git a/drivers/net/virtio/main.c b/drivers/net/virtio/main.c
> index 1a222221352e..58bb38f9b453 100644
> --- a/drivers/net/virtio/main.c
> +++ b/drivers/net/virtio/main.c
> @@ -3876,10 +3876,12 @@ static void free_receive_page_frags(struct virtnet_info *vi)
>
>  void virtnet_sq_free_unused_buf(struct virtqueue *vq, void *buf)
>  {
> -       if (!virtnet_is_xdp_frame(buf))
> +       if (virtnet_is_skb_ptr(buf))
>                 dev_kfree_skb(buf);
> -       else
> +       else if (virtnet_is_xdp_frame(buf))
>                 xdp_return_frame(virtnet_ptr_to_xdp(buf));
> +
> +       /* xsk buffer do not need handle. */
>  }
>
>  void virtnet_rq_free_unused_buf(struct virtqueue *vq, void *buf)
> --
> 2.32.0.3.g01195cf9f
>


^ permalink raw reply	[flat|nested] 66+ messages in thread
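
The three-way dispatch in the hunk above relies on the flag bits defined earlier in the series: bit 0 tags an xdp_frame, bit 1 tags an xsk buffer, and an untagged pointer is a plain skb. A small user-space stand-in for the classification (illustrative, not the kernel code):

```c
#include <assert.h>

/* User-space stand-in (not the kernel code) for the classification done
 * by virtnet_sq_free_unused_buf(): bit 0 tags an xdp_frame, bit 1 tags
 * an xsk buffer, and an untagged pointer is a plain sk_buff. */
#define VIRTIO_XDP_FLAG 0x1UL
#define VIRTIO_XSK_FLAG 0x2UL

enum buf_kind { BUF_SKB, BUF_XDP, BUF_XSK };

enum buf_kind classify_buf(void *buf)
{
	unsigned long p = (unsigned long)buf;

	if (p & VIRTIO_XDP_FLAG)
		return BUF_XDP;		/* xdp_return_frame() path */
	if (p & VIRTIO_XSK_FLAG)
		return BUF_XSK;		/* freed with the pool, not here */
	return BUF_SKB;			/* dev_kfree_skb() path */
}
```

The XSK case needs no per-buffer free because the buffers belong to the xsk pool, which is torn down as a whole when the pool is unbound.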

* Re: [PATCH net-next v1 15/19] virtio_net: xsk: rx: introduce add_recvbuf_xsk()
  2023-10-16 12:00 ` [PATCH net-next v1 15/19] virtio_net: xsk: rx: introduce add_recvbuf_xsk() Xuan Zhuo
@ 2023-10-20  6:56   ` Jason Wang
  2023-10-23  6:56     ` Xuan Zhuo
  0 siblings, 1 reply; 66+ messages in thread
From: Jason Wang @ 2023-10-20  6:56 UTC (permalink / raw)
  To: Xuan Zhuo
  Cc: netdev, David S. Miller, Eric Dumazet, Jakub Kicinski,
	Paolo Abeni, Michael S. Tsirkin, Alexei Starovoitov,
	Daniel Borkmann, Jesper Dangaard Brouer, John Fastabend,
	virtualization, bpf

On Mon, Oct 16, 2023 at 8:01 PM Xuan Zhuo <xuanzhuo@linux.alibaba.com> wrote:
>
> Implement the logic of filling the vq with XSK buffers.
>
> Signed-off-by: Xuan Zhuo <xuanzhuo@linux.alibaba.com>
> ---
>  drivers/net/virtio/main.c       | 13 +++++++
>  drivers/net/virtio/virtio_net.h |  5 +++
>  drivers/net/virtio/xsk.c        | 66 ++++++++++++++++++++++++++++++++-
>  drivers/net/virtio/xsk.h        |  2 +
>  4 files changed, 85 insertions(+), 1 deletion(-)
>
> diff --git a/drivers/net/virtio/main.c b/drivers/net/virtio/main.c
> index 58bb38f9b453..0e740447b142 100644
> --- a/drivers/net/virtio/main.c
> +++ b/drivers/net/virtio/main.c
> @@ -1787,9 +1787,20 @@ static int add_recvbuf_mergeable(struct virtnet_info *vi,
>  static bool try_fill_recv(struct virtnet_info *vi, struct virtnet_rq *rq,
>                           gfp_t gfp)
>  {
> +       struct xsk_buff_pool *pool;
>         int err;
>         bool oom;
>
> +       rcu_read_lock();

A question here: should we sync with refill work during rx_pause?

> +       pool = rcu_dereference(rq->xsk.pool);
> +       if (pool) {
> +               err = virtnet_add_recvbuf_xsk(vi, rq, pool, gfp);
> +               oom = err == -ENOMEM;
> +               rcu_read_unlock();
> +               goto kick;
> +       }
> +       rcu_read_unlock();

And if we synchronize with that, there's probably no need for the RCU,
and we can merge the logic with the following ones?

Thanks


> +
>         do {
>                 if (vi->mergeable_rx_bufs)
>                         err = add_recvbuf_mergeable(vi, rq, gfp);
> @@ -1802,6 +1813,8 @@ static bool try_fill_recv(struct virtnet_info *vi, struct virtnet_rq *rq,
>                 if (err)
>                         break;
>         } while (rq->vq->num_free);
> +
> +kick:
>         if (virtqueue_kick_prepare(rq->vq) && virtqueue_notify(rq->vq)) {
>                 unsigned long flags;
>
> diff --git a/drivers/net/virtio/virtio_net.h b/drivers/net/virtio/virtio_net.h
> index d4e620a084f4..6e71622fca45 100644
> --- a/drivers/net/virtio/virtio_net.h
> +++ b/drivers/net/virtio/virtio_net.h
> @@ -156,6 +156,11 @@ struct virtnet_rq {
>
>                 /* xdp rxq used by xsk */
>                 struct xdp_rxq_info xdp_rxq;
> +
> +               struct xdp_buff **xsk_buffs;
> +               u32 nxt_idx;
> +               u32 num;
> +               u32 size;
>         } xsk;
>  };
>
> diff --git a/drivers/net/virtio/xsk.c b/drivers/net/virtio/xsk.c
> index 973e783260c3..841fb078882a 100644
> --- a/drivers/net/virtio/xsk.c
> +++ b/drivers/net/virtio/xsk.c
> @@ -37,6 +37,58 @@ static void virtnet_xsk_check_queue(struct virtnet_sq *sq)
>                 netif_stop_subqueue(dev, qnum);
>  }
>
> +static int virtnet_add_recvbuf_batch(struct virtnet_info *vi, struct virtnet_rq *rq,
> +                                    struct xsk_buff_pool *pool, gfp_t gfp)
> +{
> +       struct xdp_buff **xsk_buffs;
> +       dma_addr_t addr;
> +       u32 len, i;
> +       int err = 0;
> +
> +       xsk_buffs = rq->xsk.xsk_buffs;
> +
> +       if (rq->xsk.nxt_idx >= rq->xsk.num) {
> +               rq->xsk.num = xsk_buff_alloc_batch(pool, xsk_buffs, rq->xsk.size);
> +               if (!rq->xsk.num)
> +                       return -ENOMEM;
> +               rq->xsk.nxt_idx = 0;
> +       }
> +
> +       while (rq->xsk.nxt_idx < rq->xsk.num) {
> +               i = rq->xsk.nxt_idx;
> +
> +               /* use the part of XDP_PACKET_HEADROOM as the virtnet hdr space */
> +               addr = xsk_buff_xdp_get_dma(xsk_buffs[i]) - vi->hdr_len;
> +               len = xsk_pool_get_rx_frame_size(pool) + vi->hdr_len;
> +
> +               sg_init_table(rq->sg, 1);
> +               sg_fill_dma(rq->sg, addr, len);
> +
> +               err = virtqueue_add_inbuf(rq->vq, rq->sg, 1, xsk_buffs[i], gfp);
> +               if (err)
> +                       return err;
> +
> +               rq->xsk.nxt_idx++;
> +       }
> +
> +       return 0;
> +}
> +
> +int virtnet_add_recvbuf_xsk(struct virtnet_info *vi, struct virtnet_rq *rq,
> +                           struct xsk_buff_pool *pool, gfp_t gfp)
> +{
> +       int err;
> +
> +       do {
> +               err = virtnet_add_recvbuf_batch(vi, rq, pool, gfp);
> +               if (err)
> +                       return err;
> +
> +       } while (rq->vq->num_free);
> +
> +       return 0;
> +}
> +
>  static int virtnet_xsk_xmit_one(struct virtnet_sq *sq,
>                                 struct xsk_buff_pool *pool,
>                                 struct xdp_desc *desc)
> @@ -244,7 +296,7 @@ static int virtnet_xsk_pool_enable(struct net_device *dev,
>         struct virtnet_sq *sq;
>         struct device *dma_dev;
>         dma_addr_t hdr_dma;
> -       int err;
> +       int err, size;
>
>         /* In big_packets mode, xdp cannot work, so there is no need to
>          * initialize xsk of rq.
> @@ -276,6 +328,16 @@ static int virtnet_xsk_pool_enable(struct net_device *dev,
>         if (!dma_dev)
>                 return -EPERM;
>
> +       size = virtqueue_get_vring_size(rq->vq);
> +
> +       rq->xsk.xsk_buffs = kcalloc(size, sizeof(*rq->xsk.xsk_buffs), GFP_KERNEL);
> +       if (!rq->xsk.xsk_buffs)
> +               return -ENOMEM;
> +
> +       rq->xsk.size = size;
> +       rq->xsk.nxt_idx = 0;
> +       rq->xsk.num = 0;
> +
>         hdr_dma = dma_map_single(dma_dev, &xsk_hdr, vi->hdr_len, DMA_TO_DEVICE);
>         if (dma_mapping_error(dma_dev, hdr_dma))
>                 return -ENOMEM;
> @@ -338,6 +400,8 @@ static int virtnet_xsk_pool_disable(struct net_device *dev, u16 qid)
>         err1 = virtnet_sq_bind_xsk_pool(vi, sq, NULL);
>         err2 = virtnet_rq_bind_xsk_pool(vi, rq, NULL);
>
> +       kfree(rq->xsk.xsk_buffs);
> +
>         return err1 | err2;
>  }
>
> diff --git a/drivers/net/virtio/xsk.h b/drivers/net/virtio/xsk.h
> index 7ebc9bda7aee..bef41a3f954e 100644
> --- a/drivers/net/virtio/xsk.h
> +++ b/drivers/net/virtio/xsk.h
> @@ -23,4 +23,6 @@ int virtnet_xsk_pool_setup(struct net_device *dev, struct netdev_bpf *xdp);
>  bool virtnet_xsk_xmit(struct virtnet_sq *sq, struct xsk_buff_pool *pool,
>                       int budget);
>  int virtnet_xsk_wakeup(struct net_device *dev, u32 qid, u32 flag);
> +int virtnet_add_recvbuf_xsk(struct virtnet_info *vi, struct virtnet_rq *rq,
> +                           struct xsk_buff_pool *pool, gfp_t gfp);
>  #endif
> --
> 2.32.0.3.g01195cf9f
>


^ permalink raw reply	[flat|nested] 66+ messages in thread
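
The refill loop above drains a cached batch of buffers (filled by xsk_buff_alloc_batch()) through nxt_idx/num and requests a new batch only when the cache is empty. A user-space sketch of that pattern, with a stand-in allocator in place of the real pool API:

```c
#include <assert.h>
#include <stddef.h>

/* User-space sketch of the refill pattern in virtnet_add_recvbuf_batch():
 * buffers are allocated in batches and drained one at a time through
 * nxt_idx/num; a new batch is requested only when the cache is empty.
 * alloc_batch() is a stand-in for xsk_buff_alloc_batch(). */
struct batch_cache {
	int bufs[8];
	size_t nxt_idx;	/* next unconsumed entry */
	size_t num;	/* number of valid entries */
	size_t size;	/* capacity of bufs[] */
};

size_t alloc_batch(int *bufs, size_t n)
{
	for (size_t i = 0; i < n; i++)
		bufs[i] = (int)i + 1;	/* dummy buffer handles */
	return n;
}

/* Returns one buffer handle, or 0 on allocation failure (-ENOMEM path). */
int get_buf(struct batch_cache *c)
{
	if (c->nxt_idx >= c->num) {
		c->num = alloc_batch(c->bufs, c->size);
		if (!c->num)
			return 0;
		c->nxt_idx = 0;
	}
	return c->bufs[c->nxt_idx++];
}
```

Batching the allocation amortizes the cost of the pool call across many descriptors, while the nxt_idx cursor lets the ring-add loop stop and resume without losing buffers.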

* Re: [PATCH net-next v1 16/19] virtio_net: xsk: rx: introduce receive_xsk() to recv xsk buffer
  2023-10-16 12:00 ` [PATCH net-next v1 16/19] virtio_net: xsk: rx: introduce receive_xsk() to recv xsk buffer Xuan Zhuo
@ 2023-10-20  6:57   ` Jason Wang
  2023-10-23  2:39     ` Xuan Zhuo
  2023-11-15  2:35     ` Xuan Zhuo
  0 siblings, 2 replies; 66+ messages in thread
From: Jason Wang @ 2023-10-20  6:57 UTC (permalink / raw)
  To: Xuan Zhuo
  Cc: netdev, David S. Miller, Eric Dumazet, Jakub Kicinski,
	Paolo Abeni, Michael S. Tsirkin, Alexei Starovoitov,
	Daniel Borkmann, Jesper Dangaard Brouer, John Fastabend,
	virtualization, bpf

On Mon, Oct 16, 2023 at 8:01 PM Xuan Zhuo <xuanzhuo@linux.alibaba.com> wrote:
>
> Implement the xsk rx logic. If XDP determines that a packet is not
> destined for XSK, we need one copy to generate an skb. If it is for
> XSK, the packet is received with zero copy.
>
> Signed-off-by: Xuan Zhuo <xuanzhuo@linux.alibaba.com>
> ---
>  drivers/net/virtio/main.c       |  14 ++--
>  drivers/net/virtio/virtio_net.h |   4 ++
>  drivers/net/virtio/xsk.c        | 120 ++++++++++++++++++++++++++++++++
>  drivers/net/virtio/xsk.h        |   4 ++
>  4 files changed, 137 insertions(+), 5 deletions(-)
>
> diff --git a/drivers/net/virtio/main.c b/drivers/net/virtio/main.c
> index 0e740447b142..003dd67ab707 100644
> --- a/drivers/net/virtio/main.c
> +++ b/drivers/net/virtio/main.c
> @@ -822,10 +822,10 @@ static void put_xdp_frags(struct xdp_buff *xdp)
>         }
>  }
>
> -static int virtnet_xdp_handler(struct bpf_prog *xdp_prog, struct xdp_buff *xdp,
> -                              struct net_device *dev,
> -                              unsigned int *xdp_xmit,
> -                              struct virtnet_rq_stats *stats)
> +int virtnet_xdp_handler(struct bpf_prog *xdp_prog, struct xdp_buff *xdp,
> +                       struct net_device *dev,
> +                       unsigned int *xdp_xmit,
> +                       struct virtnet_rq_stats *stats)
>  {
>         struct xdp_frame *xdpf;
>         int err;
> @@ -1589,13 +1589,17 @@ static void receive_buf(struct virtnet_info *vi, struct virtnet_rq *rq,
>                 return;
>         }
>
> -       if (vi->mergeable_rx_bufs)
> +       rcu_read_lock();
> +       if (rcu_dereference(rq->xsk.pool))
> +               skb = virtnet_receive_xsk(dev, vi, rq, buf, len, xdp_xmit, stats);
> +       else if (vi->mergeable_rx_bufs)
>                 skb = receive_mergeable(dev, vi, rq, buf, ctx, len, xdp_xmit,
>                                         stats);
>         else if (vi->big_packets)
>                 skb = receive_big(dev, vi, rq, buf, len, stats);
>         else
>                 skb = receive_small(dev, vi, rq, buf, ctx, len, xdp_xmit, stats);
> +       rcu_read_unlock();
>
>         if (unlikely(!skb))
>                 return;
> diff --git a/drivers/net/virtio/virtio_net.h b/drivers/net/virtio/virtio_net.h
> index 6e71622fca45..fd7f34703c9b 100644
> --- a/drivers/net/virtio/virtio_net.h
> +++ b/drivers/net/virtio/virtio_net.h
> @@ -346,6 +346,10 @@ static inline bool virtnet_is_xdp_raw_buffer_queue(struct virtnet_info *vi, int
>                 return false;
>  }
>
> +int virtnet_xdp_handler(struct bpf_prog *xdp_prog, struct xdp_buff *xdp,
> +                       struct net_device *dev,
> +                       unsigned int *xdp_xmit,
> +                       struct virtnet_rq_stats *stats);
>  void virtnet_rx_pause(struct virtnet_info *vi, struct virtnet_rq *rq);
>  void virtnet_rx_resume(struct virtnet_info *vi, struct virtnet_rq *rq);
>  void virtnet_tx_pause(struct virtnet_info *vi, struct virtnet_sq *sq);
> diff --git a/drivers/net/virtio/xsk.c b/drivers/net/virtio/xsk.c
> index 841fb078882a..f1c64414fac9 100644
> --- a/drivers/net/virtio/xsk.c
> +++ b/drivers/net/virtio/xsk.c
> @@ -13,6 +13,18 @@ static void sg_fill_dma(struct scatterlist *sg, dma_addr_t addr, u32 len)
>         sg->length = len;
>  }
>
> +static unsigned int virtnet_receive_buf_num(struct virtnet_info *vi, char *buf)
> +{
> +       struct virtio_net_hdr_mrg_rxbuf *hdr;
> +
> +       if (vi->mergeable_rx_bufs) {
> +               hdr = (struct virtio_net_hdr_mrg_rxbuf *)buf;
> +               return virtio16_to_cpu(vi->vdev, hdr->num_buffers);
> +       }
> +
> +       return 1;
> +}
> +
>  static void virtnet_xsk_check_queue(struct virtnet_sq *sq)
>  {
>         struct virtnet_info *vi = sq->vq->vdev->priv;
> @@ -37,6 +49,114 @@ static void virtnet_xsk_check_queue(struct virtnet_sq *sq)
>                 netif_stop_subqueue(dev, qnum);
>  }
>
> +static void merge_drop_follow_xdp(struct net_device *dev,
> +                                 struct virtnet_rq *rq,
> +                                 u32 num_buf,
> +                                 struct virtnet_rq_stats *stats)
> +{
> +       struct xdp_buff *xdp;
> +       u32 len;
> +
> +       while (num_buf-- > 1) {
> +               xdp = virtqueue_get_buf(rq->vq, &len);
> +               if (unlikely(!xdp)) {
> +                       pr_debug("%s: rx error: %d buffers missing\n",
> +                                dev->name, num_buf);
> +                       dev->stats.rx_length_errors++;
> +                       break;
> +               }
> +               stats->bytes += len;
> +               xsk_buff_free(xdp);
> +       }
> +}
> +
> +static struct sk_buff *construct_skb(struct virtnet_rq *rq,
> +                                    struct xdp_buff *xdp)
> +{
> +       unsigned int metasize = xdp->data - xdp->data_meta;
> +       struct sk_buff *skb;
> +       unsigned int size;
> +
> +       size = xdp->data_end - xdp->data_hard_start;
> +       skb = napi_alloc_skb(&rq->napi, size);
> +       if (unlikely(!skb))
> +               return NULL;
> +
> +       skb_reserve(skb, xdp->data_meta - xdp->data_hard_start);
> +
> +       size = xdp->data_end - xdp->data_meta;
> +       memcpy(__skb_put(skb, size), xdp->data_meta, size);
> +
> +       if (metasize) {
> +               __skb_pull(skb, metasize);
> +               skb_metadata_set(skb, metasize);
> +       }
> +
> +       return skb;
> +}
> +
> +struct sk_buff *virtnet_receive_xsk(struct net_device *dev, struct virtnet_info *vi,
> +                                   struct virtnet_rq *rq, void *buf,
> +                                   unsigned int len, unsigned int *xdp_xmit,
> +                                   struct virtnet_rq_stats *stats)
> +{

I wonder if anything blocks us from reusing the existing XDP logic?
Are there some subtle differences?

Thanks


^ permalink raw reply	[flat|nested] 66+ messages in thread
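
When XDP decides a packet is not destined for the XSK socket, construct_skb() in the patch copies the bytes from data_meta to data_end into a fresh skb and records the metadata size so it sits in front of the payload. A plain-array sketch of just that copy arithmetic (illustrative, not the kernel code):

```c
#include <assert.h>
#include <string.h>

/* Plain-array sketch (not the kernel code) of the copy construct_skb()
 * performs when an XSK rx buffer must be turned into an skb: the bytes
 * from data_meta to data_end are copied in one go, and the metadata
 * length (data - data_meta) is recorded separately. */
struct fake_xdp {
	char *data_meta;	/* start of XDP metadata */
	char *data;		/* start of packet payload */
	char *data_end;		/* end of packet payload */
};

/* Copies meta + payload into dst; returns the payload length and stores
 * the metadata length in *metasize. */
size_t copy_for_skb(char *dst, size_t *metasize, const struct fake_xdp *xdp)
{
	size_t total = (size_t)(xdp->data_end - xdp->data_meta);

	*metasize = (size_t)(xdp->data - xdp->data_meta);
	memcpy(dst, xdp->data_meta, total);
	return total - *metasize;
}
```

In the kernel version, skb_metadata_set() then records the metadata length after __skb_pull() moves the data pointer past the copied metadata.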

* Re: [PATCH net-next v1 18/19] virtio_net: update tx timeout record
  2023-10-16 12:00 ` [PATCH net-next v1 18/19] virtio_net: update tx timeout record Xuan Zhuo
@ 2023-10-20  6:57   ` Jason Wang
  0 siblings, 0 replies; 66+ messages in thread
From: Jason Wang @ 2023-10-20  6:57 UTC (permalink / raw)
  To: Xuan Zhuo
  Cc: netdev, David S. Miller, Eric Dumazet, Jakub Kicinski,
	Paolo Abeni, Michael S. Tsirkin, Alexei Starovoitov,
	Daniel Borkmann, Jesper Dangaard Brouer, John Fastabend,
	virtualization, bpf

On Mon, Oct 16, 2023 at 8:01 PM Xuan Zhuo <xuanzhuo@linux.alibaba.com> wrote:
>
> If the send queue has sent some packets, update the tx timeout
> record to prevent a spurious tx timeout.
>
> Signed-off-by: Xuan Zhuo <xuanzhuo@linux.alibaba.com>

Acked-by: Jason Wang <jasowang@redhat.com>

Thanks


> ---
>  drivers/net/virtio/xsk.c | 10 ++++++++++
>  1 file changed, 10 insertions(+)
>
> diff --git a/drivers/net/virtio/xsk.c b/drivers/net/virtio/xsk.c
> index f1c64414fac9..5d3de505c56c 100644
> --- a/drivers/net/virtio/xsk.c
> +++ b/drivers/net/virtio/xsk.c
> @@ -274,6 +274,16 @@ bool virtnet_xsk_xmit(struct virtnet_sq *sq, struct xsk_buff_pool *pool,
>
>         virtnet_xsk_check_queue(sq);
>
> +       if (stats.packets) {
> +               struct netdev_queue *txq;
> +               struct virtnet_info *vi;
> +
> +               vi = sq->vq->vdev->priv;
> +
> +               txq = netdev_get_tx_queue(vi->dev, sq - vi->sq);
> +               txq_trans_cond_update(txq);
> +       }
> +
>         u64_stats_update_begin(&sq->stats.syncp);
>         sq->stats.packets += stats.packets;
>         sq->stats.bytes += stats.bytes;
> --
> 2.32.0.3.g01195cf9f
>


^ permalink raw reply	[flat|nested] 66+ messages in thread

* Re: [PATCH net-next v1 04/19] virtio_net: move to virtio_net.h
  2023-10-19  7:16     ` Xuan Zhuo
@ 2023-10-20  6:59       ` Jason Wang
  0 siblings, 0 replies; 66+ messages in thread
From: Jason Wang @ 2023-10-20  6:59 UTC (permalink / raw)
  To: Xuan Zhuo
  Cc: netdev, David S. Miller, Eric Dumazet, Jakub Kicinski,
	Paolo Abeni, Michael S. Tsirkin, Alexei Starovoitov,
	Daniel Borkmann, Jesper Dangaard Brouer, John Fastabend,
	virtualization, bpf

On Thu, Oct 19, 2023 at 3:20 PM Xuan Zhuo <xuanzhuo@linux.alibaba.com> wrote:
>
> On Thu, 19 Oct 2023 14:12:55 +0800, Jason Wang <jasowang@redhat.com> wrote:
> > On Mon, Oct 16, 2023 at 8:00 PM Xuan Zhuo <xuanzhuo@linux.alibaba.com> wrote:
> > >
> > > Move some structure definitions and inline functions into the
> > > virtio_net.h file.
> >
> > Some of the functions were not inline before the move. I'm not sure
> > what the criteria are for choosing which functions to move.
>
>
> They will be used by xsk.c or by other functions in the headers in the
> subsequent commits.
>
> If you find it confusing, I can try to move each function only when it
> is needed. This commit just moves some important structures.

That's fine.

Thanks

>
> Thanks.
>
> >
> >
> > >
> > > Signed-off-by: Xuan Zhuo <xuanzhuo@linux.alibaba.com>
> > > ---
> > >  drivers/net/virtio/main.c       | 252 +------------------------------
> > >  drivers/net/virtio/virtio_net.h | 256 ++++++++++++++++++++++++++++++++
> > >  2 files changed, 258 insertions(+), 250 deletions(-)
> > >  create mode 100644 drivers/net/virtio/virtio_net.h
> > >
> > > diff --git a/drivers/net/virtio/main.c b/drivers/net/virtio/main.c
> > > index 6cf77b6acdab..d8b6c0d86f29 100644
> > > --- a/drivers/net/virtio/main.c
> > > +++ b/drivers/net/virtio/main.c
> > > @@ -6,7 +6,6 @@
> > >  //#define DEBUG
> > >  #include <linux/netdevice.h>
> > >  #include <linux/etherdevice.h>
> > > -#include <linux/ethtool.h>
> > >  #include <linux/module.h>
> > >  #include <linux/virtio.h>
> > >  #include <linux/virtio_net.h>
> > > @@ -16,7 +15,6 @@
> > >  #include <linux/if_vlan.h>
> > >  #include <linux/slab.h>
> > >  #include <linux/cpu.h>
> > > -#include <linux/average.h>
> > >  #include <linux/filter.h>
> > >  #include <linux/kernel.h>
> > >  #include <net/route.h>
> > > @@ -24,6 +22,8 @@
> > >  #include <net/net_failover.h>
> > >  #include <net/netdev_rx_queue.h>
> > >
> > > +#include "virtio_net.h"
> > > +
> > >  static int napi_weight = NAPI_POLL_WEIGHT;
> > >  module_param(napi_weight, int, 0444);
> > >
> > > @@ -45,15 +45,6 @@ module_param(napi_tx, bool, 0644);
> > >  #define VIRTIO_XDP_TX          BIT(0)
> > >  #define VIRTIO_XDP_REDIR       BIT(1)
> > >
> > > -#define VIRTIO_XDP_FLAG        BIT(0)
> > > -
> > > -/* RX packet size EWMA. The average packet size is used to determine the packet
> > > - * buffer size when refilling RX rings. As the entire RX ring may be refilled
> > > - * at once, the weight is chosen so that the EWMA will be insensitive to short-
> > > - * term, transient changes in packet size.
> > > - */
> > > -DECLARE_EWMA(pkt_len, 0, 64)
> > > -
> > >  #define VIRTNET_DRIVER_VERSION "1.0.0"
> > >
> > >  static const unsigned long guest_offloads[] = {
> > > @@ -74,36 +65,6 @@ static const unsigned long guest_offloads[] = {
> > >                                 (1ULL << VIRTIO_NET_F_GUEST_USO4) | \
> > >                                 (1ULL << VIRTIO_NET_F_GUEST_USO6))
> > >
> > > -struct virtnet_stat_desc {
> > > -       char desc[ETH_GSTRING_LEN];
> > > -       size_t offset;
> > > -};
> > > -
> > > -struct virtnet_sq_stats {
> > > -       struct u64_stats_sync syncp;
> > > -       u64 packets;
> > > -       u64 bytes;
> > > -       u64 xdp_tx;
> > > -       u64 xdp_tx_drops;
> > > -       u64 kicks;
> > > -       u64 tx_timeouts;
> > > -};
> > > -
> > > -struct virtnet_rq_stats {
> > > -       struct u64_stats_sync syncp;
> > > -       u64 packets;
> > > -       u64 bytes;
> > > -       u64 drops;
> > > -       u64 xdp_packets;
> > > -       u64 xdp_tx;
> > > -       u64 xdp_redirects;
> > > -       u64 xdp_drops;
> > > -       u64 kicks;
> > > -};
> > > -
> > > -#define VIRTNET_SQ_STAT(m)     offsetof(struct virtnet_sq_stats, m)
> > > -#define VIRTNET_RQ_STAT(m)     offsetof(struct virtnet_rq_stats, m)
> > > -
> > >  static const struct virtnet_stat_desc virtnet_sq_stats_desc[] = {
> > >         { "packets",            VIRTNET_SQ_STAT(packets) },
> > >         { "bytes",              VIRTNET_SQ_STAT(bytes) },
> > > @@ -127,80 +88,6 @@ static const struct virtnet_stat_desc virtnet_rq_stats_desc[] = {
> > >  #define VIRTNET_SQ_STATS_LEN   ARRAY_SIZE(virtnet_sq_stats_desc)
> > >  #define VIRTNET_RQ_STATS_LEN   ARRAY_SIZE(virtnet_rq_stats_desc)
> > >
> > > -struct virtnet_interrupt_coalesce {
> > > -       u32 max_packets;
> > > -       u32 max_usecs;
> > > -};
> > > -
> > > -/* The dma information of pages allocated at a time. */
> > > -struct virtnet_rq_dma {
> > > -       dma_addr_t addr;
> > > -       u32 ref;
> > > -       u16 len;
> > > -       u16 need_sync;
> > > -};
> > > -
> > > -/* Internal representation of a send virtqueue */
> > > -struct send_queue {
> > > -       /* Virtqueue associated with this send _queue */
> > > -       struct virtqueue *vq;
> > > -
> > > -       /* TX: fragments + linear part + virtio header */
> > > -       struct scatterlist sg[MAX_SKB_FRAGS + 2];
> > > -
> > > -       /* Name of the send queue: output.$index */
> > > -       char name[16];
> > > -
> > > -       struct virtnet_sq_stats stats;
> > > -
> > > -       struct virtnet_interrupt_coalesce intr_coal;
> > > -
> > > -       struct napi_struct napi;
> > > -
> > > -       /* Record whether sq is in reset state. */
> > > -       bool reset;
> > > -};
> > > -
> > > -/* Internal representation of a receive virtqueue */
> > > -struct receive_queue {
> > > -       /* Virtqueue associated with this receive_queue */
> > > -       struct virtqueue *vq;
> > > -
> > > -       struct napi_struct napi;
> > > -
> > > -       struct bpf_prog __rcu *xdp_prog;
> > > -
> > > -       struct virtnet_rq_stats stats;
> > > -
> > > -       struct virtnet_interrupt_coalesce intr_coal;
> > > -
> > > -       /* Chain pages by the private ptr. */
> > > -       struct page *pages;
> > > -
> > > -       /* Average packet length for mergeable receive buffers. */
> > > -       struct ewma_pkt_len mrg_avg_pkt_len;
> > > -
> > > -       /* Page frag for packet buffer allocation. */
> > > -       struct page_frag alloc_frag;
> > > -
> > > -       /* RX: fragments + linear part + virtio header */
> > > -       struct scatterlist sg[MAX_SKB_FRAGS + 2];
> > > -
> > > -       /* Min single buffer size for mergeable buffers case. */
> > > -       unsigned int min_buf_len;
> > > -
> > > -       /* Name of this receive queue: input.$index */
> > > -       char name[16];
> > > -
> > > -       struct xdp_rxq_info xdp_rxq;
> > > -
> > > -       /* Record the last dma info to free after new pages is allocated. */
> > > -       struct virtnet_rq_dma *last_dma;
> > > -
> > > -       /* Do dma by self */
> > > -       bool do_dma;
> > > -};
> > > -
> > >  /* This structure can contain rss message with maximum settings for indirection table and keysize
> > >   * Note, that default structure that describes RSS configuration virtio_net_rss_config
> > >   * contains same info but can't handle table values.
> > > @@ -234,88 +121,6 @@ struct control_buf {
> > >         struct virtio_net_ctrl_coal_vq coal_vq;
> > >  };
> > >
> > > -struct virtnet_info {
> > > -       struct virtio_device *vdev;
> > > -       struct virtqueue *cvq;
> > > -       struct net_device *dev;
> > > -       struct send_queue *sq;
> > > -       struct receive_queue *rq;
> > > -       unsigned int status;
> > > -
> > > -       /* Max # of queue pairs supported by the device */
> > > -       u16 max_queue_pairs;
> > > -
> > > -       /* # of queue pairs currently used by the driver */
> > > -       u16 curr_queue_pairs;
> > > -
> > > -       /* # of XDP queue pairs currently used by the driver */
> > > -       u16 xdp_queue_pairs;
> > > -
> > > -       /* xdp_queue_pairs may be 0, when xdp is already loaded. So add this. */
> > > -       bool xdp_enabled;
> > > -
> > > -       /* I like... big packets and I cannot lie! */
> > > -       bool big_packets;
> > > -
> > > -       /* number of sg entries allocated for big packets */
> > > -       unsigned int big_packets_num_skbfrags;
> > > -
> > > -       /* Host will merge rx buffers for big packets (shake it! shake it!) */
> > > -       bool mergeable_rx_bufs;
> > > -
> > > -       /* Host supports rss and/or hash report */
> > > -       bool has_rss;
> > > -       bool has_rss_hash_report;
> > > -       u8 rss_key_size;
> > > -       u16 rss_indir_table_size;
> > > -       u32 rss_hash_types_supported;
> > > -       u32 rss_hash_types_saved;
> > > -
> > > -       /* Has control virtqueue */
> > > -       bool has_cvq;
> > > -
> > > -       /* Host can handle any s/g split between our header and packet data */
> > > -       bool any_header_sg;
> > > -
> > > -       /* Packet virtio header size */
> > > -       u8 hdr_len;
> > > -
> > > -       /* Work struct for delayed refilling if we run low on memory. */
> > > -       struct delayed_work refill;
> > > -
> > > -       /* Is delayed refill enabled? */
> > > -       bool refill_enabled;
> > > -
> > > -       /* The lock to synchronize the access to refill_enabled */
> > > -       spinlock_t refill_lock;
> > > -
> > > -       /* Work struct for config space updates */
> > > -       struct work_struct config_work;
> > > -
> > > -       /* Does the affinity hint is set for virtqueues? */
> > > -       bool affinity_hint_set;
> > > -
> > > -       /* CPU hotplug instances for online & dead */
> > > -       struct hlist_node node;
> > > -       struct hlist_node node_dead;
> > > -
> > > -       struct control_buf *ctrl;
> > > -
> > > -       /* Ethtool settings */
> > > -       u8 duplex;
> > > -       u32 speed;
> > > -
> > > -       /* Interrupt coalescing settings */
> > > -       struct virtnet_interrupt_coalesce intr_coal_tx;
> > > -       struct virtnet_interrupt_coalesce intr_coal_rx;
> > > -
> > > -       unsigned long guest_offloads;
> > > -       unsigned long guest_offloads_capable;
> > > -
> > > -       /* failover when STANDBY feature enabled */
> > > -       struct failover *failover;
> > > -};
> > > -
> > >  struct padded_vnet_hdr {
> > >         struct virtio_net_hdr_v1_hash hdr;
> > >         /*
> > > @@ -337,45 +142,11 @@ struct virtio_net_common_hdr {
> > >  static void virtnet_rq_free_unused_buf(struct virtqueue *vq, void *buf);
> > >  static void virtnet_sq_free_unused_buf(struct virtqueue *vq, void *buf);
> > >
> > > -static bool is_xdp_frame(void *ptr)
> > > -{
> > > -       return (unsigned long)ptr & VIRTIO_XDP_FLAG;
> > > -}
> > > -
> > >  static void *xdp_to_ptr(struct xdp_frame *ptr)
> > >  {
> > >         return (void *)((unsigned long)ptr | VIRTIO_XDP_FLAG);
> > >  }
> >
> > Any reason for not moving this?
> >
> > Thanks
> >
> > >
> >
>


^ permalink raw reply	[flat|nested] 66+ messages in thread

* Re: [PATCH net-next v1 08/19] virtio_net: sq support premapped mode
  2023-10-20  6:50   ` Jason Wang
@ 2023-10-20  7:16     ` Xuan Zhuo
  0 siblings, 0 replies; 66+ messages in thread
From: Xuan Zhuo @ 2023-10-20  7:16 UTC (permalink / raw)
  To: Jason Wang
  Cc: netdev, David S. Miller, Eric Dumazet, Jakub Kicinski,
	Paolo Abeni, Michael S. Tsirkin, Alexei Starovoitov,
	Daniel Borkmann, Jesper Dangaard Brouer, John Fastabend,
	virtualization, bpf

On Fri, 20 Oct 2023 14:50:52 +0800, Jason Wang <jasowang@redhat.com> wrote:
> On Mon, Oct 16, 2023 at 8:01 PM Xuan Zhuo <xuanzhuo@linux.alibaba.com> wrote:
> >
> > If the xsk is enabling, the xsk tx will share the send queue.
> > But the xsk requires that the send queue use the premapped mode.
> > So the send queue must support premapped mode.
> >
> > Signed-off-by: Xuan Zhuo <xuanzhuo@linux.alibaba.com>
> > ---
> >  drivers/net/virtio/main.c       | 108 ++++++++++++++++++++++++++++----
> >  drivers/net/virtio/virtio_net.h |  54 +++++++++++++++-
> >  2 files changed, 149 insertions(+), 13 deletions(-)
> >
> > diff --git a/drivers/net/virtio/main.c b/drivers/net/virtio/main.c
> > index 8da84ea9bcbe..02d27101fef1 100644
> > --- a/drivers/net/virtio/main.c
> > +++ b/drivers/net/virtio/main.c
> > @@ -514,20 +514,104 @@ static void *virtnet_rq_alloc(struct virtnet_rq *rq, u32 size, gfp_t gfp)
> >         return buf;
> >  }
> >
> > -static void virtnet_rq_set_premapped(struct virtnet_info *vi)
> > +static int virtnet_sq_set_premapped(struct virtnet_sq *sq)
> >  {
> > -       int i;
> > +       struct virtnet_sq_dma *d;
> > +       int err, size, i;
> >
> > -       /* disable for big mode */
> > -       if (!vi->mergeable_rx_bufs && vi->big_packets)
> > -               return;
>
> Not specific to this patch but any plan to fix the big mode?
>


For big mode, we should first make it support XDP and premapped DMA.


>
> > +       size = virtqueue_get_vring_size(sq->vq);
> > +
> > +       size += MAX_SKB_FRAGS + 2;
> > +
> > +       sq->dmainfo.head = kcalloc(size, sizeof(*sq->dmainfo.head), GFP_KERNEL);
> > +       if (!sq->dmainfo.head)
> > +               return -ENOMEM;
> > +
> > +       err = virtqueue_set_dma_premapped(sq->vq);
> > +       if (err) {
> > +               kfree(sq->dmainfo.head);
> > +               return err;
> > +       }
> > +
> > +       sq->dmainfo.free = NULL;
> > +

[...]

> > +
> > +               d->addr = sg->dma_address;
> > +               d->len = sg->length;
> > +
> > +               d->next = head;
> > +               head = d;
>
> It's really a pity that we need to duplicate those DMA metata twice.
> Could we invent a new API to just fetch it from the virtio core?

Actually, I have already posted such a patch.

Considering that this series targets net-next, we can do that on top.


>
> > +       }
> > +
> > +       head->data = data;
> > +
> > +       return (void *)((unsigned long)head | ((unsigned long)data & VIRTIO_XMIT_DATA_MASK));
>
> If we packed everything into dmainfo, we can leave the type (XDP vs
> skb) there to avoid trick like packing it into the pointer here?

Yes. But if virtio does not negotiate _ACCESS_PLATFORM, the driver
will not have the DMA metadata.
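(Editor's note: the pointer-tagging trick being discussed packs the buffer
type into the low bit of the pointer, relying on the pointed-to object's
alignment. A minimal userspace sketch with simplified types — the real driver
tags struct xdp_frame pointers with VIRTIO_XDP_FLAG:)

```c
#include <assert.h>
#include <stdint.h>

#define VIRTIO_XDP_FLAG 0x1UL   /* low pointer bit marks an xdp_frame */

/* Tag the type into the pointer's low bit: the pointed-to objects are
 * at least word aligned, so the low address bits are known to be zero. */
static inline void *xdp_to_ptr(void *frame)
{
	return (void *)((uintptr_t)frame | VIRTIO_XDP_FLAG);
}

static inline int is_xdp_frame(void *ptr)
{
	return ((uintptr_t)ptr & VIRTIO_XDP_FLAG) != 0;
}

static inline void *ptr_to_xdp(void *ptr)
{
	/* Clear the tag bit to recover the original pointer. */
	return (void *)((uintptr_t)ptr & ~VIRTIO_XDP_FLAG);
}
```

Packing the type into dmainfo, as suggested above, would make this tagging
unnecessary when the DMA metadata is always present.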

Thanks.


>
> Thanks
>


* Re: [PATCH net-next v1 09/19] virtio_net: xsk: bind/unbind xsk
  2023-10-20  6:51   ` Jason Wang
@ 2023-10-20  7:28     ` Xuan Zhuo
  0 siblings, 0 replies; 66+ messages in thread
From: Xuan Zhuo @ 2023-10-20  7:28 UTC (permalink / raw)
  To: Jason Wang
  Cc: netdev, David S. Miller, Eric Dumazet, Jakub Kicinski,
	Paolo Abeni, Michael S. Tsirkin, Alexei Starovoitov,
	Daniel Borkmann, Jesper Dangaard Brouer, John Fastabend,
	virtualization, bpf

On Fri, 20 Oct 2023 14:51:15 +0800, Jason Wang <jasowang@redhat.com> wrote:
> On Mon, Oct 16, 2023 at 8:01 PM Xuan Zhuo <xuanzhuo@linux.alibaba.com> wrote:
> >
> > This patch implement the logic of bind/unbind xsk pool to sq and rq.
> >
> > Signed-off-by: Xuan Zhuo <xuanzhuo@linux.alibaba.com>
> > ---
> >  drivers/net/virtio/Makefile     |   2 +-
> >  drivers/net/virtio/main.c       |  10 +-
> >  drivers/net/virtio/virtio_net.h |  18 ++++
> >  drivers/net/virtio/xsk.c        | 186 ++++++++++++++++++++++++++++++++
> >  drivers/net/virtio/xsk.h        |   7 ++
> >  5 files changed, 216 insertions(+), 7 deletions(-)
> >  create mode 100644 drivers/net/virtio/xsk.c
> >  create mode 100644 drivers/net/virtio/xsk.h
> >
> > diff --git a/drivers/net/virtio/Makefile b/drivers/net/virtio/Makefile
> > index 15ed7c97fd4f..8c2a884d2dba 100644
> > --- a/drivers/net/virtio/Makefile
> > +++ b/drivers/net/virtio/Makefile
> > @@ -5,4 +5,4 @@
> >
> >  obj-$(CONFIG_VIRTIO_NET) += virtio_net.o
> >
> > -virtio_net-y := main.o
> > +virtio_net-y := main.o xsk.o
> > diff --git a/drivers/net/virtio/main.c b/drivers/net/virtio/main.c
> > index 02d27101fef1..38733a782f12 100644
> > --- a/drivers/net/virtio/main.c
> > +++ b/drivers/net/virtio/main.c
> > @@ -8,7 +8,6 @@
> >  #include <linux/etherdevice.h>
> >  #include <linux/module.h>
> >  #include <linux/virtio.h>
> > -#include <linux/virtio_net.h>
> >  #include <linux/bpf.h>
> >  #include <linux/bpf_trace.h>
> >  #include <linux/scatterlist.h>
> > @@ -139,9 +138,6 @@ struct virtio_net_common_hdr {
> >         };
> >  };
> >
> > -static void virtnet_rq_free_unused_buf(struct virtqueue *vq, void *buf);
> > -static void virtnet_sq_free_unused_buf(struct virtqueue *vq, void *buf);
> > -
> >  static void *xdp_to_ptr(struct xdp_frame *ptr)
> >  {
> >         return (void *)((unsigned long)ptr | VIRTIO_XDP_FLAG);
> > @@ -3664,6 +3660,8 @@ static int virtnet_xdp(struct net_device *dev, struct netdev_bpf *xdp)
> >         switch (xdp->command) {
> >         case XDP_SETUP_PROG:
> >                 return virtnet_xdp_set(dev, xdp->prog, xdp->extack);
> > +       case XDP_SETUP_XSK_POOL:
> > +               return virtnet_xsk_pool_setup(dev, xdp);
> >         default:
> >                 return -EINVAL;
> >         }
> > @@ -3849,7 +3847,7 @@ static void free_receive_page_frags(struct virtnet_info *vi)
> >                 }
> >  }
> >
> > -static void virtnet_sq_free_unused_buf(struct virtqueue *vq, void *buf)
> > +void virtnet_sq_free_unused_buf(struct virtqueue *vq, void *buf)
> >  {
> >         if (!virtnet_is_xdp_frame(buf))
> >                 dev_kfree_skb(buf);
> > @@ -3857,7 +3855,7 @@ static void virtnet_sq_free_unused_buf(struct virtqueue *vq, void *buf)
> >                 xdp_return_frame(virtnet_ptr_to_xdp(buf));
> >  }
> >
> > -static void virtnet_rq_free_unused_buf(struct virtqueue *vq, void *buf)
> > +void virtnet_rq_free_unused_buf(struct virtqueue *vq, void *buf)
> >  {
> >         struct virtnet_info *vi = vq->vdev->priv;
> >         int i = vq2rxq(vq);
> > diff --git a/drivers/net/virtio/virtio_net.h b/drivers/net/virtio/virtio_net.h
> > index cc742756e19a..9e69b6c5921b 100644
> > --- a/drivers/net/virtio/virtio_net.h
> > +++ b/drivers/net/virtio/virtio_net.h
> > @@ -5,6 +5,8 @@
> >
> >  #include <linux/ethtool.h>
> >  #include <linux/average.h>
> > +#include <linux/virtio_net.h>
> > +#include <net/xdp_sock_drv.h>
> >
> >  #define VIRTIO_XDP_FLAG        BIT(0)
> >  #define VIRTIO_XMIT_DATA_MASK (VIRTIO_XDP_FLAG)
> > @@ -94,6 +96,11 @@ struct virtnet_sq {
> >         bool do_dma;
> >
> >         struct virtnet_sq_dma_head dmainfo;
> > +       struct {
> > +               struct xsk_buff_pool __rcu *pool;
> > +
> > +               dma_addr_t hdr_dma_address;
> > +       } xsk;
> >  };
> >
> >  /* Internal representation of a receive virtqueue */
> > @@ -134,6 +141,13 @@ struct virtnet_rq {
> >
> >         /* Do dma by self */
> >         bool do_dma;
> > +
> > +       struct {
> > +               struct xsk_buff_pool __rcu *pool;
> > +
> > +               /* xdp rxq used by xsk */
> > +               struct xdp_rxq_info xdp_rxq;
> > +       } xsk;
> >  };
> >
> >  struct virtnet_info {
> > @@ -218,6 +232,8 @@ struct virtnet_info {
> >         struct failover *failover;
> >  };
> >
> > +#include "xsk.h"
> > +
>
> Any reason we don't do it with other headers?

Because xsk.h references struct virtnet_sq and friends, gcc is not happy
if we include xsk.h at the top of virtio_net.h.
But virtio_net.h also references APIs from xsk.h (e.g. virtnet_ptr_to_xsk),
so the include has to come after the struct definitions.
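(Editor's note: a minimal sketch of the ordering constraint, with a
hypothetical stand-in struct and helper rather than the real headers —
xsk.h's inline helpers dereference struct virtnet_sq, so the complete type
must be visible before the include:)

```c
#include <assert.h>

/* --- virtio_net.h (sketch): struct definitions come first --- */
struct virtnet_sq {
	int num_free;           /* stand-in for the real members */
};

/* --- #include "xsk.h" placed after the structs (sketch of contents) --- */
/* An inline helper like this needs the complete struct definition;
 * a forward declaration alone would not compile. */
static inline int virtnet_xsk_sq_full(struct virtnet_sq *sq)
{
	return sq->num_free == 0;
}
```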


>
> >  static inline bool virtnet_is_xdp_frame(void *ptr)
> >  {
> >         return (unsigned long)ptr & VIRTIO_XDP_FLAG;
> > @@ -308,4 +324,6 @@ void virtnet_rx_pause(struct virtnet_info *vi, struct virtnet_rq *rq);
> >  void virtnet_rx_resume(struct virtnet_info *vi, struct virtnet_rq *rq);
> >  void virtnet_tx_pause(struct virtnet_info *vi, struct virtnet_sq *sq);
> >  void virtnet_tx_resume(struct virtnet_info *vi, struct virtnet_sq *sq);
> > +void virtnet_sq_free_unused_buf(struct virtqueue *vq, void *buf);
> > +void virtnet_rq_free_unused_buf(struct virtqueue *vq, void *buf);
> >  #endif
> > diff --git a/drivers/net/virtio/xsk.c b/drivers/net/virtio/xsk.c
> > new file mode 100644
> > index 000000000000..dddd01962a3f
> > --- /dev/null
> > +++ b/drivers/net/virtio/xsk.c
> > @@ -0,0 +1,186 @@

[...]

> > +
> > +       if (!rq->do_dma || !sq->do_dma)
> > +               return -EPERM;
> > +
> > +       if (virtqueue_dma_dev(rq->vq) != virtqueue_dma_dev(sq->vq))
> > +               return -EPERM;
>
> Any reason we need this check? Is there any code that has this
> assumption? E.g dma sync in XDP_TX?

For xsk, the tx and rx must use the same DMA device.
But vq->dma_dev allows each vq to have its own DMA device.

So I check that the DMA devices of the rq and the sq are the same.

>
> > +
> > +       dma_dev = virtqueue_dma_dev(rq->vq);
> > +       if (!dma_dev)
> > +               return -EPERM;
> > +
> > +       hdr_dma = dma_map_single(dma_dev, &xsk_hdr, vi->hdr_len, DMA_TO_DEVICE);
> > +       if (dma_mapping_error(dma_dev, hdr_dma))
> > +               return -ENOMEM;
> > +
> > +       err = xsk_pool_dma_map(pool, dma_dev, 0);
> > +       if (err)
> > +               goto err_xsk_map;
> > +
> > +       err = virtnet_rq_bind_xsk_pool(vi, rq, pool);
> > +       if (err)
> > +               goto err_rq;
> > +
> > +       err = virtnet_sq_bind_xsk_pool(vi, sq, pool);
> > +       if (err)
> > +               goto err_sq;
> > +
> > +       sq->xsk.hdr_dma_address = hdr_dma;
>
> I think we probably need some comments to explain why a single hdr can
> work. Like it means we use the same hdr for all XSK packets?

Will fix.

>
> > +
> > +       return 0;
> > +
> > +err_sq:
> > +       virtnet_rq_bind_xsk_pool(vi, rq, NULL);
> > +err_rq:
> > +       xsk_pool_dma_unmap(pool, 0);
> > +err_xsk_map:
> > +       dma_unmap_single(dma_dev, hdr_dma, vi->hdr_len, DMA_TO_DEVICE);
> > +       return err;
> > +}
> > +
> > +static int virtnet_xsk_pool_disable(struct net_device *dev, u16 qid)
> > +{
> > +       struct virtnet_info *vi = netdev_priv(dev);
> > +       struct xsk_buff_pool *pool;
> > +       struct device *dma_dev;
> > +       struct virtnet_rq *rq;
> > +       struct virtnet_sq *sq;
> > +       int err1, err2;
> > +
> > +       if (qid >= vi->curr_queue_pairs)
> > +               return -EINVAL;
> > +
> > +       sq = &vi->sq[qid];
> > +       rq = &vi->rq[qid];
> > +
> > +       dma_dev = virtqueue_dma_dev(rq->vq);
> > +
> > +       /* Sync with the XSK wakeup and NAPI. */
> > +       synchronize_net();
>
> Any reason we couldn't do bind_xsk_pool(NULL) here? It seems easier.

Do you mean rcu_assign_pointer() is enough?

Let me check it.

Thanks


>
> Thanks
>
>
> > +
> > +       dma_unmap_single(dma_dev, sq->xsk.hdr_dma_address, vi->hdr_len, DMA_TO_DEVICE);
> > +
> > +       rcu_read_lock();
> > +       pool = rcu_dereference(sq->xsk.pool);
> > +       xsk_pool_dma_unmap(pool, 0);
> > +       rcu_read_unlock();
> > +
> > +       err1 = virtnet_sq_bind_xsk_pool(vi, sq, NULL);
> > +       err2 = virtnet_rq_bind_xsk_pool(vi, rq, NULL);
> > +
> > +       return err1 | err2;
> > +}
> > +
> > +int virtnet_xsk_pool_setup(struct net_device *dev, struct netdev_bpf *xdp)
> > +{
> > +       if (xdp->xsk.pool)
> > +               return virtnet_xsk_pool_enable(dev, xdp->xsk.pool,
> > +                                              xdp->xsk.queue_id);
> > +       else
> > +               return virtnet_xsk_pool_disable(dev, xdp->xsk.queue_id);
> > +}
> > diff --git a/drivers/net/virtio/xsk.h b/drivers/net/virtio/xsk.h
> > new file mode 100644
> > index 000000000000..1918285c310c
> > --- /dev/null
> > +++ b/drivers/net/virtio/xsk.h
> > @@ -0,0 +1,7 @@
> > +/* SPDX-License-Identifier: GPL-2.0-or-later */
> > +
> > +#ifndef __XSK_H__
> > +#define __XSK_H__
> > +
> > +int virtnet_xsk_pool_setup(struct net_device *dev, struct netdev_bpf *xdp);
> > +#endif
> > --
> > 2.32.0.3.g01195cf9f
> >
>


* Re: [PATCH net-next v1 11/19] virtio_net: xsk: tx: support tx
  2023-10-20  6:52   ` Jason Wang
@ 2023-10-20  8:06     ` Xuan Zhuo
  0 siblings, 0 replies; 66+ messages in thread
From: Xuan Zhuo @ 2023-10-20  8:06 UTC (permalink / raw)
  To: Jason Wang
  Cc: netdev, David S. Miller, Eric Dumazet, Jakub Kicinski,
	Paolo Abeni, Michael S. Tsirkin, Alexei Starovoitov,
	Daniel Borkmann, Jesper Dangaard Brouer, John Fastabend,
	virtualization, bpf

On Fri, 20 Oct 2023 14:52:08 +0800, Jason Wang <jasowang@redhat.com> wrote:
> On Mon, Oct 16, 2023 at 8:01 PM Xuan Zhuo <xuanzhuo@linux.alibaba.com> wrote:
> >
> > The driver's tx napi is very important for XSK. It is responsible for
> > obtaining data from the XSK queue and sending it out.
> >
> > At the beginning, we need to trigger tx napi.
> >
> > Signed-off-by: Xuan Zhuo <xuanzhuo@linux.alibaba.com>
> > ---
> >  drivers/net/virtio/main.c       |  18 +++++-
> >  drivers/net/virtio/virtio_net.h |   3 +-
> >  drivers/net/virtio/xsk.c        | 108 ++++++++++++++++++++++++++++++++
> >  drivers/net/virtio/xsk.h        |  13 ++++
> >  4 files changed, 140 insertions(+), 2 deletions(-)
> >
> > diff --git a/drivers/net/virtio/main.c b/drivers/net/virtio/main.c
> > index b320770e5f4e..a08429bef61f 100644
> > --- a/drivers/net/virtio/main.c
> > +++ b/drivers/net/virtio/main.c
> > @@ -2054,7 +2054,9 @@ static int virtnet_poll_tx(struct napi_struct *napi, int budget)
> >         struct virtnet_sq *sq = container_of(napi, struct virtnet_sq, napi);
> >         struct virtnet_info *vi = sq->vq->vdev->priv;
> >         unsigned int index = vq2txq(sq->vq);
> > +       struct xsk_buff_pool *pool;
> >         struct netdev_queue *txq;
> > +       int busy = 0;
> >         int opaque;
> >         bool done;
> >
> > @@ -2067,11 +2069,25 @@ static int virtnet_poll_tx(struct napi_struct *napi, int budget)
> >         txq = netdev_get_tx_queue(vi->dev, index);
> >         __netif_tx_lock(txq, raw_smp_processor_id());
> >         virtqueue_disable_cb(sq->vq);
> > -       free_old_xmit(sq, true);
> > +
> > +       rcu_read_lock();
> > +       pool = rcu_dereference(sq->xsk.pool);
> > +       if (pool) {
> > +               busy |= virtnet_xsk_xmit(sq, pool, budget);
> > +               rcu_read_unlock();
> > +       } else {
> > +               rcu_read_unlock();
> > +               free_old_xmit(sq, true);
> > +       }
> >
> >         if (sq->vq->num_free >= 2 + MAX_SKB_FRAGS)
> >                 netif_tx_wake_queue(txq);
> >
> > +       if (busy) {
> > +               __netif_tx_unlock(txq);
> > +               return budget;
> > +       }
> > +
> >         opaque = virtqueue_enable_cb_prepare(sq->vq);
> >
> >         done = napi_complete_done(napi, 0);
> > diff --git a/drivers/net/virtio/virtio_net.h b/drivers/net/virtio/virtio_net.h
> > index 9e69b6c5921b..3bbb1f5baad5 100644
> > --- a/drivers/net/virtio/virtio_net.h
> > +++ b/drivers/net/virtio/virtio_net.h
> > @@ -9,7 +9,8 @@
> >  #include <net/xdp_sock_drv.h>
> >
> >  #define VIRTIO_XDP_FLAG        BIT(0)
> > -#define VIRTIO_XMIT_DATA_MASK (VIRTIO_XDP_FLAG)
> > +#define VIRTIO_XSK_FLAG        BIT(1)
> > +#define VIRTIO_XMIT_DATA_MASK (VIRTIO_XDP_FLAG | VIRTIO_XSK_FLAG)
> >
> >  /* RX packet size EWMA. The average packet size is used to determine the packet
> >   * buffer size when refilling RX rings. As the entire RX ring may be refilled
> > diff --git a/drivers/net/virtio/xsk.c b/drivers/net/virtio/xsk.c
> > index dddd01962a3f..0e775a9d270f 100644
> > --- a/drivers/net/virtio/xsk.c
> > +++ b/drivers/net/virtio/xsk.c
> > @@ -7,6 +7,114 @@
> >
> >  static struct virtio_net_hdr_mrg_rxbuf xsk_hdr;
> >
> > +static void sg_fill_dma(struct scatterlist *sg, dma_addr_t addr, u32 len)
> > +{
> > +       sg->dma_address = addr;
> > +       sg->length = len;
> > +}
> > +
> > +static void virtnet_xsk_check_queue(struct virtnet_sq *sq)
> > +{
> > +       struct virtnet_info *vi = sq->vq->vdev->priv;
> > +       struct net_device *dev = vi->dev;
> > +       int qnum = sq - vi->sq;
> > +
> > +       /* If it is a raw buffer queue, it does not check whether the status
> > +        * of the queue is stopped when sending. So there is no need to check
> > +        * the situation of the raw buffer queue.
> > +        */
> > +       if (virtnet_is_xdp_raw_buffer_queue(vi, qnum))
> > +               return;
> > +
> > +       /* If this sq is not the exclusive queue of the current cpu,
> > +        * then it may be called by start_xmit, so check it running out
> > +        * of space.
> > +        *
> > +        * Stop the queue to avoid getting packets that we are
> > +        * then unable to transmit. Then wait the tx interrupt.
> > +        */
> > +       if (sq->vq->num_free < 2 + MAX_SKB_FRAGS)
> > +               netif_stop_subqueue(dev, qnum);
> > +}
> > +
> > +static int virtnet_xsk_xmit_one(struct virtnet_sq *sq,
> > +                               struct xsk_buff_pool *pool,
> > +                               struct xdp_desc *desc)
> > +{
> > +       struct virtnet_info *vi;
> > +       dma_addr_t addr;
> > +
> > +       vi = sq->vq->vdev->priv;
> > +
> > +       addr = xsk_buff_raw_get_dma(pool, desc->addr);
> > +       xsk_buff_raw_dma_sync_for_device(pool, addr, desc->len);
> > +
> > +       sg_init_table(sq->sg, 2);
> > +
> > +       sg_fill_dma(sq->sg, sq->xsk.hdr_dma_address, vi->hdr_len);
> > +       sg_fill_dma(sq->sg + 1, addr, desc->len);
> > +
> > +       return virtqueue_add_outbuf(sq->vq, sq->sg, 2,
> > +                                   virtnet_xsk_to_ptr(desc->len), GFP_ATOMIC);
> > +}
> > +
> > +static int virtnet_xsk_xmit_batch(struct virtnet_sq *sq,
> > +                                 struct xsk_buff_pool *pool,
> > +                                 unsigned int budget,
> > +                                 struct virtnet_sq_stats *stats)
> > +{
> > +       struct xdp_desc *descs = pool->tx_descs;
> > +       u32 nb_pkts, max_pkts, i;
> > +       bool kick = false;
> > +       int err;
> > +
> > +       max_pkts = min_t(u32, budget, sq->vq->num_free / 2);
>
> Need document why num_free / 2 is chosen here.

Will fix.

Thanks.


>
> Others look fine.
>
> Thanks
>
>
> > +
> > +       nb_pkts = xsk_tx_peek_release_desc_batch(pool, max_pkts);
> > +       if (!nb_pkts)
> > +               return 0;
> > +
> > +       for (i = 0; i < nb_pkts; i++) {
> > +               err = virtnet_xsk_xmit_one(sq, pool, &descs[i]);
> > +               if (unlikely(err))
> > +                       break;
> > +
> > +               kick = true;
> > +       }
> > +
> > +       if (kick && virtqueue_kick_prepare(sq->vq) && virtqueue_notify(sq->vq))
> > +               ++stats->kicks;
> > +
> > +       stats->xdp_tx += i;
> > +
> > +       return i;
> > +}
> > +
> > +bool virtnet_xsk_xmit(struct virtnet_sq *sq, struct xsk_buff_pool *pool,
> > +                     int budget)
> > +{
> > +       struct virtnet_sq_stats stats = {};
> > +       int sent;
> > +
> > +       virtnet_free_old_xmit(sq, true, &stats);
> > +
> > +       sent = virtnet_xsk_xmit_batch(sq, pool, budget, &stats);
> > +
> > +       virtnet_xsk_check_queue(sq);
> > +
> > +       u64_stats_update_begin(&sq->stats.syncp);
> > +       sq->stats.packets += stats.packets;
> > +       sq->stats.bytes += stats.bytes;
> > +       sq->stats.kicks += stats.kicks;
> > +       sq->stats.xdp_tx += stats.xdp_tx;
> > +       u64_stats_update_end(&sq->stats.syncp);
> > +
> > +       if (xsk_uses_need_wakeup(pool))
> > +               xsk_set_tx_need_wakeup(pool);
> > +
> > +       return sent == budget;
> > +}
> > +
> >  static int virtnet_rq_bind_xsk_pool(struct virtnet_info *vi, struct virtnet_rq *rq,
> >                                     struct xsk_buff_pool *pool)
> >  {
> > diff --git a/drivers/net/virtio/xsk.h b/drivers/net/virtio/xsk.h
> > index 1918285c310c..73ca8cd5308b 100644
> > --- a/drivers/net/virtio/xsk.h
> > +++ b/drivers/net/virtio/xsk.h
> > @@ -3,5 +3,18 @@
> >  #ifndef __XSK_H__
> >  #define __XSK_H__
> >
> > +#define VIRTIO_XSK_FLAG_OFFSET 4
> > +
> > +static inline void *virtnet_xsk_to_ptr(u32 len)
> > +{
> > +       unsigned long p;
> > +
> > +       p = len << VIRTIO_XSK_FLAG_OFFSET;
> > +
> > +       return (void *)(p | VIRTIO_XSK_FLAG);
> > +}
> > +
> >  int virtnet_xsk_pool_setup(struct net_device *dev, struct netdev_bpf *xdp);
> > +bool virtnet_xsk_xmit(struct virtnet_sq *sq, struct xsk_buff_pool *pool,
> > +                     int budget);
> >  #endif
> > --
> > 2.32.0.3.g01195cf9f
> >
>


* Re: [PATCH net-next v1 12/19] virtio_net: xsk: tx: support wakeup
  2023-10-20  6:52   ` Jason Wang
@ 2023-10-20  8:09     ` Xuan Zhuo
  0 siblings, 0 replies; 66+ messages in thread
From: Xuan Zhuo @ 2023-10-20  8:09 UTC (permalink / raw)
  To: Jason Wang
  Cc: netdev, David S. Miller, Eric Dumazet, Jakub Kicinski,
	Paolo Abeni, Michael S. Tsirkin, Alexei Starovoitov,
	Daniel Borkmann, Jesper Dangaard Brouer, John Fastabend,
	virtualization, bpf

On Fri, 20 Oct 2023 14:52:18 +0800, Jason Wang <jasowang@redhat.com> wrote:
> On Mon, Oct 16, 2023 at 8:01 PM Xuan Zhuo <xuanzhuo@linux.alibaba.com> wrote:
> >
> > xsk wakeup is used to trigger the logic for xsk xmit by xsk framework or
> > user.
> >
> > Virtio-Net does not support to actively generate an interruption, so it
> > tries to trigger tx NAPI on the tx interrupt cpu.
> >
> > Consider the effect of cache. When interrupt triggers, it is
> > generally fixed on a CPU. It is better to start TX Napi on the same
> > CPU.
> >
> > Signed-off-by: Xuan Zhuo <xuanzhuo@linux.alibaba.com>
> > ---
> >  drivers/net/virtio/main.c       |  3 ++
> >  drivers/net/virtio/virtio_net.h |  8 +++++
> >  drivers/net/virtio/xsk.c        | 57 +++++++++++++++++++++++++++++++++
> >  drivers/net/virtio/xsk.h        |  1 +
> >  4 files changed, 69 insertions(+)
> >
> > diff --git a/drivers/net/virtio/main.c b/drivers/net/virtio/main.c
> > index a08429bef61f..1a222221352e 100644
> > --- a/drivers/net/virtio/main.c
> > +++ b/drivers/net/virtio/main.c
> > @@ -2066,6 +2066,8 @@ static int virtnet_poll_tx(struct napi_struct *napi, int budget)
> >                 return 0;
> >         }
> >
> > +       sq->xsk.last_cpu = smp_processor_id();
> > +
> >         txq = netdev_get_tx_queue(vi->dev, index);
> >         __netif_tx_lock(txq, raw_smp_processor_id());
> >         virtqueue_disable_cb(sq->vq);
> > @@ -3770,6 +3772,7 @@ static const struct net_device_ops virtnet_netdev = {
> >         .ndo_vlan_rx_kill_vid = virtnet_vlan_rx_kill_vid,
> >         .ndo_bpf                = virtnet_xdp,
> >         .ndo_xdp_xmit           = virtnet_xdp_xmit,
> > +       .ndo_xsk_wakeup         = virtnet_xsk_wakeup,
> >         .ndo_features_check     = passthru_features_check,
> >         .ndo_get_phys_port_name = virtnet_get_phys_port_name,
> >         .ndo_set_features       = virtnet_set_features,
> > diff --git a/drivers/net/virtio/virtio_net.h b/drivers/net/virtio/virtio_net.h
> > index 3bbb1f5baad5..7c72a8bb1813 100644
> > --- a/drivers/net/virtio/virtio_net.h
> > +++ b/drivers/net/virtio/virtio_net.h
> > @@ -101,6 +101,14 @@ struct virtnet_sq {
> >                 struct xsk_buff_pool __rcu *pool;
> >
> >                 dma_addr_t hdr_dma_address;
> > +
> > +               u32 last_cpu;
> > +               struct __call_single_data csd;
> > +
> > +               /* The lock to prevent the repeat of calling
> > +                * smp_call_function_single_async().
> > +                */
> > +               spinlock_t ipi_lock;
> >         } xsk;
> >  };
> >
> > diff --git a/drivers/net/virtio/xsk.c b/drivers/net/virtio/xsk.c
> > index 0e775a9d270f..973e783260c3 100644
> > --- a/drivers/net/virtio/xsk.c
> > +++ b/drivers/net/virtio/xsk.c
> > @@ -115,6 +115,60 @@ bool virtnet_xsk_xmit(struct virtnet_sq *sq, struct xsk_buff_pool *pool,
> >         return sent == budget;
> >  }
> >
> > +static void virtnet_remote_napi_schedule(void *info)
> > +{
> > +       struct virtnet_sq *sq = info;
> > +
> > +       virtnet_vq_napi_schedule(&sq->napi, sq->vq);
> > +}
> > +
> > +static void virtnet_remote_raise_napi(struct virtnet_sq *sq)
> > +{
> > +       u32 last_cpu, cur_cpu;
> > +
> > +       last_cpu = sq->xsk.last_cpu;
> > +       cur_cpu = get_cpu();
> > +
> > +       /* On remote cpu, softirq will run automatically when ipi irq exit. On
> > +        * local cpu, smp_call_xxx will not trigger ipi interrupt, then softirq
> > +        * cannot be triggered automatically. So Call local_bh_enable after to
> > +        * trigger softIRQ processing.
> > +        */
> > +       if (last_cpu == cur_cpu) {
> > +               local_bh_disable();
> > +               virtnet_vq_napi_schedule(&sq->napi, sq->vq);
> > +               local_bh_enable();
> > +       } else {
> > +               if (spin_trylock(&sq->xsk.ipi_lock)) {
> > +                       smp_call_function_single_async(last_cpu, &sq->xsk.csd);
> > +                       spin_unlock(&sq->xsk.ipi_lock);
> > +               }
> > +       }
>
> Is there any number to show whether it's worth it for an IPI here? For
> example, GVE doesn't do this.

I just do not like that the tx napi may run on different CPUs.

Let's start with the GVE approach.

Thanks


>
> Thanks
>
>
> > +
> > +       put_cpu();
> > +}
> > +
> > +int virtnet_xsk_wakeup(struct net_device *dev, u32 qid, u32 flag)
> > +{
> > +       struct virtnet_info *vi = netdev_priv(dev);
> > +       struct virtnet_sq *sq;
> > +
> > +       if (!netif_running(dev))
> > +               return -ENETDOWN;
> > +
> > +       if (qid >= vi->curr_queue_pairs)
> > +               return -EINVAL;
> > +
> > +       sq = &vi->sq[qid];
> > +
> > +       if (napi_if_scheduled_mark_missed(&sq->napi))
> > +               return 0;
> > +
> > +       virtnet_remote_raise_napi(sq);
> > +
> > +       return 0;
> > +}
> > +
> >  static int virtnet_rq_bind_xsk_pool(struct virtnet_info *vi, struct virtnet_rq *rq,
> >                                     struct xsk_buff_pool *pool)
> >  {
> > @@ -240,6 +294,9 @@ static int virtnet_xsk_pool_enable(struct net_device *dev,
> >
> >         sq->xsk.hdr_dma_address = hdr_dma;
> >
> > +       INIT_CSD(&sq->xsk.csd, virtnet_remote_napi_schedule, sq);
> > +       spin_lock_init(&sq->xsk.ipi_lock);
> > +
> >         return 0;
> >
> >  err_sq:
> > diff --git a/drivers/net/virtio/xsk.h b/drivers/net/virtio/xsk.h
> > index 73ca8cd5308b..1bd19dcda649 100644
> > --- a/drivers/net/virtio/xsk.h
> > +++ b/drivers/net/virtio/xsk.h
> > @@ -17,4 +17,5 @@ static inline void *virtnet_xsk_to_ptr(u32 len)
> >  int virtnet_xsk_pool_setup(struct net_device *dev, struct netdev_bpf *xdp);
> >  bool virtnet_xsk_xmit(struct virtnet_sq *sq, struct xsk_buff_pool *pool,
> >                       int budget);
> > +int virtnet_xsk_wakeup(struct net_device *dev, u32 qid, u32 flag);
> >  #endif
> > --
> > 2.32.0.3.g01195cf9f
> >
>


* Re: [PATCH net-next v1 16/19] virtio_net: xsk: rx: introduce receive_xsk() to recv xsk buffer
  2023-10-20  6:57   ` Jason Wang
@ 2023-10-23  2:39     ` Xuan Zhuo
  2023-11-15  2:35     ` Xuan Zhuo
  1 sibling, 0 replies; 66+ messages in thread
From: Xuan Zhuo @ 2023-10-23  2:39 UTC (permalink / raw)
  To: Jason Wang
  Cc: netdev, David S. Miller, Eric Dumazet, Jakub Kicinski,
	Paolo Abeni, Michael S. Tsirkin, Alexei Starovoitov,
	Daniel Borkmann, Jesper Dangaard Brouer, John Fastabend,
	virtualization, bpf

On Fri, 20 Oct 2023 14:57:06 +0800, Jason Wang <jasowang@redhat.com> wrote:
> On Mon, Oct 16, 2023 at 8:01 PM Xuan Zhuo <xuanzhuo@linux.alibaba.com> wrote:
> >
> > Implementing the logic of xsk rx. If this packet is not for XSK
> > determined in XDP, then we need to copy once to generate a SKB.
> > If it is for XSK, it is a zerocopy receive packet process.
> >
> > Signed-off-by: Xuan Zhuo <xuanzhuo@linux.alibaba.com>
> > ---
> >  drivers/net/virtio/main.c       |  14 ++--
> >  drivers/net/virtio/virtio_net.h |   4 ++
> >  drivers/net/virtio/xsk.c        | 120 ++++++++++++++++++++++++++++++++
> >  drivers/net/virtio/xsk.h        |   4 ++
> >  4 files changed, 137 insertions(+), 5 deletions(-)
> >
> > diff --git a/drivers/net/virtio/main.c b/drivers/net/virtio/main.c
> > index 0e740447b142..003dd67ab707 100644
> > --- a/drivers/net/virtio/main.c
> > +++ b/drivers/net/virtio/main.c
> > @@ -822,10 +822,10 @@ static void put_xdp_frags(struct xdp_buff *xdp)
> >         }
> >  }
> >
> > -static int virtnet_xdp_handler(struct bpf_prog *xdp_prog, struct xdp_buff *xdp,
> > -                              struct net_device *dev,
> > -                              unsigned int *xdp_xmit,
> > -                              struct virtnet_rq_stats *stats)
> > +int virtnet_xdp_handler(struct bpf_prog *xdp_prog, struct xdp_buff *xdp,
> > +                       struct net_device *dev,
> > +                       unsigned int *xdp_xmit,
> > +                       struct virtnet_rq_stats *stats)
> >  {
> >         struct xdp_frame *xdpf;
> >         int err;
> > @@ -1589,13 +1589,17 @@ static void receive_buf(struct virtnet_info *vi, struct virtnet_rq *rq,
> >                 return;
> >         }
> >
> > -       if (vi->mergeable_rx_bufs)
> > +       rcu_read_lock();
> > +       if (rcu_dereference(rq->xsk.pool))
> > +               skb = virtnet_receive_xsk(dev, vi, rq, buf, len, xdp_xmit, stats);
> > +       else if (vi->mergeable_rx_bufs)
> >                 skb = receive_mergeable(dev, vi, rq, buf, ctx, len, xdp_xmit,
> >                                         stats);
> >         else if (vi->big_packets)
> >                 skb = receive_big(dev, vi, rq, buf, len, stats);
> >         else
> >                 skb = receive_small(dev, vi, rq, buf, ctx, len, xdp_xmit, stats);
> > +       rcu_read_unlock();
> >
> >         if (unlikely(!skb))
> >                 return;
> > diff --git a/drivers/net/virtio/virtio_net.h b/drivers/net/virtio/virtio_net.h
> > index 6e71622fca45..fd7f34703c9b 100644
> > --- a/drivers/net/virtio/virtio_net.h
> > +++ b/drivers/net/virtio/virtio_net.h
> > @@ -346,6 +346,10 @@ static inline bool virtnet_is_xdp_raw_buffer_queue(struct virtnet_info *vi, int
> >                 return false;
> >  }
> >
> > +int virtnet_xdp_handler(struct bpf_prog *xdp_prog, struct xdp_buff *xdp,
> > +                       struct net_device *dev,
> > +                       unsigned int *xdp_xmit,
> > +                       struct virtnet_rq_stats *stats);
> >  void virtnet_rx_pause(struct virtnet_info *vi, struct virtnet_rq *rq);
> >  void virtnet_rx_resume(struct virtnet_info *vi, struct virtnet_rq *rq);
> >  void virtnet_tx_pause(struct virtnet_info *vi, struct virtnet_sq *sq);
> > diff --git a/drivers/net/virtio/xsk.c b/drivers/net/virtio/xsk.c
> > index 841fb078882a..f1c64414fac9 100644
> > --- a/drivers/net/virtio/xsk.c
> > +++ b/drivers/net/virtio/xsk.c
> > @@ -13,6 +13,18 @@ static void sg_fill_dma(struct scatterlist *sg, dma_addr_t addr, u32 len)
> >         sg->length = len;
> >  }
> >
> > +static unsigned int virtnet_receive_buf_num(struct virtnet_info *vi, char *buf)
> > +{
> > +       struct virtio_net_hdr_mrg_rxbuf *hdr;
> > +
> > +       if (vi->mergeable_rx_bufs) {
> > +               hdr = (struct virtio_net_hdr_mrg_rxbuf *)buf;
> > +               return virtio16_to_cpu(vi->vdev, hdr->num_buffers);
> > +       }
> > +
> > +       return 1;
> > +}
> > +
> >  static void virtnet_xsk_check_queue(struct virtnet_sq *sq)
> >  {
> >         struct virtnet_info *vi = sq->vq->vdev->priv;
> > @@ -37,6 +49,114 @@ static void virtnet_xsk_check_queue(struct virtnet_sq *sq)
> >                 netif_stop_subqueue(dev, qnum);
> >  }
> >
> > +static void merge_drop_follow_xdp(struct net_device *dev,
> > +                                 struct virtnet_rq *rq,
> > +                                 u32 num_buf,
> > +                                 struct virtnet_rq_stats *stats)
> > +{
> > +       struct xdp_buff *xdp;
> > +       u32 len;
> > +
> > +       while (num_buf-- > 1) {
> > +               xdp = virtqueue_get_buf(rq->vq, &len);
> > +               if (unlikely(!xdp)) {
> > +                       pr_debug("%s: rx error: %d buffers missing\n",
> > +                                dev->name, num_buf);
> > +                       dev->stats.rx_length_errors++;
> > +                       break;
> > +               }
> > +               stats->bytes += len;
> > +               xsk_buff_free(xdp);
> > +       }
> > +}
> > +
> > +static struct sk_buff *construct_skb(struct virtnet_rq *rq,
> > +                                    struct xdp_buff *xdp)
> > +{
> > +       unsigned int metasize = xdp->data - xdp->data_meta;
> > +       struct sk_buff *skb;
> > +       unsigned int size;
> > +
> > +       size = xdp->data_end - xdp->data_hard_start;
> > +       skb = napi_alloc_skb(&rq->napi, size);
> > +       if (unlikely(!skb))
> > +               return NULL;
> > +
> > +       skb_reserve(skb, xdp->data_meta - xdp->data_hard_start);
> > +
> > +       size = xdp->data_end - xdp->data_meta;
> > +       memcpy(__skb_put(skb, size), xdp->data_meta, size);
> > +
> > +       if (metasize) {
> > +               __skb_pull(skb, metasize);
> > +               skb_metadata_set(skb, metasize);
> > +       }
> > +
> > +       return skb;
> > +}
> > +
> > +struct sk_buff *virtnet_receive_xsk(struct net_device *dev, struct virtnet_info *vi,
> > +                                   struct virtnet_rq *rq, void *buf,
> > +                                   unsigned int len, unsigned int *xdp_xmit,
> > +                                   struct virtnet_rq_stats *stats)
> > +{
>
> I wonder if anything blocks us from reusing the existing XDP logic?
> Are there some subtle differences?

1. We need to copy the data to create an skb for XDP_PASS.
2. We need to call xsk_buff_free() to release the buffer.
3. The handling of the xdp_buff is different.

virtnet_xdp_handler() is reused, so the receive code stays simple.

If we pushed this logic into the existing code, we would have to maintain
code scattered across the mergeable and small (and big) paths. So I think
it is a good choice to keep the xsk code in its own function.


Thanks.
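To make point 1 concrete, the XDP_PASS copy can be sketched in plain C;
struct fake_xdp and the sizes are stand-ins for struct xdp_buff, modelled
on construct_skb() in the patch:

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Minimal stand-in for the XDP_PASS copy in construct_skb(): the frame
 * (metadata plus payload) is memcpy'd out of the xsk buffer so the umem
 * chunk can be returned to the pool. */
struct fake_xdp {
	char buf[256];
	char *data_hard_start, *data_meta, *data, *data_end;
};

static size_t copy_frame(struct fake_xdp *xdp, char *dst)
{
	size_t metasize = (size_t)(xdp->data - xdp->data_meta);
	size_t size = (size_t)(xdp->data_end - xdp->data_meta);

	/* One memcpy covering metadata + payload, as in construct_skb();
	 * the skb is then pulled past the metadata. */
	memcpy(dst, xdp->data_meta, size);
	return size - metasize;	/* payload bytes left in skb->data */
}
```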


>
> Thanks
>


* Re: [PATCH net-next v1 15/19] virtio_net: xsk: rx: introduce add_recvbuf_xsk()
  2023-10-20  6:56   ` Jason Wang
@ 2023-10-23  6:56     ` Xuan Zhuo
  0 siblings, 0 replies; 66+ messages in thread
From: Xuan Zhuo @ 2023-10-23  6:56 UTC (permalink / raw)
  To: Jason Wang
  Cc: netdev, David S. Miller, Eric Dumazet, Jakub Kicinski,
	Paolo Abeni, Michael S. Tsirkin, Alexei Starovoitov,
	Daniel Borkmann, Jesper Dangaard Brouer, John Fastabend,
	virtualization, bpf

On Fri, 20 Oct 2023 14:56:51 +0800, Jason Wang <jasowang@redhat.com> wrote:
> On Mon, Oct 16, 2023 at 8:01 PM Xuan Zhuo <xuanzhuo@linux.alibaba.com> wrote:
> >
> > Implement the logic of filling vq with XSK buffer.
> >
> > Signed-off-by: Xuan Zhuo <xuanzhuo@linux.alibaba.com>
> > ---
> >  drivers/net/virtio/main.c       | 13 +++++++
> >  drivers/net/virtio/virtio_net.h |  5 +++
> >  drivers/net/virtio/xsk.c        | 66 ++++++++++++++++++++++++++++++++-
> >  drivers/net/virtio/xsk.h        |  2 +
> >  4 files changed, 85 insertions(+), 1 deletion(-)
> >
> > diff --git a/drivers/net/virtio/main.c b/drivers/net/virtio/main.c
> > index 58bb38f9b453..0e740447b142 100644
> > --- a/drivers/net/virtio/main.c
> > +++ b/drivers/net/virtio/main.c
> > @@ -1787,9 +1787,20 @@ static int add_recvbuf_mergeable(struct virtnet_info *vi,
> >  static bool try_fill_recv(struct virtnet_info *vi, struct virtnet_rq *rq,
> >                           gfp_t gfp)
> >  {
> > +       struct xsk_buff_pool *pool;
> >         int err;
> >         bool oom;
> >
> > +       rcu_read_lock();
>
> A question here: should we sync with refill work during rx_pause?
>
> > +       pool = rcu_dereference(rq->xsk.pool);
> > +       if (pool) {
> > +               err = virtnet_add_recvbuf_xsk(vi, rq, pool, gfp);
> > +               oom = err == -ENOMEM;
> > +               rcu_read_unlock();
> > +               goto kick;
> > +       }
> > +       rcu_read_unlock();
>
> And if we synchronize with that there's probably no need for the rcu
> and we can merge the logic with the following ones?


Yes, we synchronize rx_pause with this.

But for an RCU-protected object, I think we should still use the rcu_xxx() APIs.

Thanks.
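The accessor discipline meant by "rcu_xxx() API" can be mimicked in
userspace with C11 atomics; acquire/release stand in for
rcu_dereference()/rcu_assign_pointer() here (real RCU additionally defers
freeing until readers are done):

```c
#include <stdatomic.h>
#include <stddef.h>

/* Stand-in for rq->xsk.pool: readers must go through an accessor
 * (rcu_dereference() in the kernel) rather than a plain load, so the
 * pointer publish/retract done by xsk bind/unbind is properly ordered
 * against readers on the data path. */
static _Atomic(void *) xsk_pool;

static void pool_publish(void *pool)	/* ~ rcu_assign_pointer() */
{
	atomic_store_explicit(&xsk_pool, pool, memory_order_release);
}

static void *pool_deref(void)		/* ~ rcu_dereference() */
{
	return atomic_load_explicit(&xsk_pool, memory_order_acquire);
}
```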


>
> Thanks
>
>
> > +
> >         do {
> >                 if (vi->mergeable_rx_bufs)
> >                         err = add_recvbuf_mergeable(vi, rq, gfp);
> > @@ -1802,6 +1813,8 @@ static bool try_fill_recv(struct virtnet_info *vi, struct virtnet_rq *rq,
> >                 if (err)
> >                         break;
> >         } while (rq->vq->num_free);
> > +
> > +kick:
> >         if (virtqueue_kick_prepare(rq->vq) && virtqueue_notify(rq->vq)) {
> >                 unsigned long flags;
> >
> > diff --git a/drivers/net/virtio/virtio_net.h b/drivers/net/virtio/virtio_net.h
> > index d4e620a084f4..6e71622fca45 100644
> > --- a/drivers/net/virtio/virtio_net.h
> > +++ b/drivers/net/virtio/virtio_net.h
> > @@ -156,6 +156,11 @@ struct virtnet_rq {
> >
> >                 /* xdp rxq used by xsk */
> >                 struct xdp_rxq_info xdp_rxq;
> > +
> > +               struct xdp_buff **xsk_buffs;
> > +               u32 nxt_idx;
> > +               u32 num;
> > +               u32 size;
> >         } xsk;
> >  };
> >
> > diff --git a/drivers/net/virtio/xsk.c b/drivers/net/virtio/xsk.c
> > index 973e783260c3..841fb078882a 100644
> > --- a/drivers/net/virtio/xsk.c
> > +++ b/drivers/net/virtio/xsk.c
> > @@ -37,6 +37,58 @@ static void virtnet_xsk_check_queue(struct virtnet_sq *sq)
> >                 netif_stop_subqueue(dev, qnum);
> >  }
> >
> > +static int virtnet_add_recvbuf_batch(struct virtnet_info *vi, struct virtnet_rq *rq,
> > +                                    struct xsk_buff_pool *pool, gfp_t gfp)
> > +{
> > +       struct xdp_buff **xsk_buffs;
> > +       dma_addr_t addr;
> > +       u32 len, i;
> > +       int err = 0;
> > +
> > +       xsk_buffs = rq->xsk.xsk_buffs;
> > +
> > +       if (rq->xsk.nxt_idx >= rq->xsk.num) {
> > +               rq->xsk.num = xsk_buff_alloc_batch(pool, xsk_buffs, rq->xsk.size);
> > +               if (!rq->xsk.num)
> > +                       return -ENOMEM;
> > +               rq->xsk.nxt_idx = 0;
> > +       }
> > +
> > +       while (rq->xsk.nxt_idx < rq->xsk.num) {
> > +               i = rq->xsk.nxt_idx;
> > +
> > +               /* use the part of XDP_PACKET_HEADROOM as the virtnet hdr space */
> > +               addr = xsk_buff_xdp_get_dma(xsk_buffs[i]) - vi->hdr_len;
> > +               len = xsk_pool_get_rx_frame_size(pool) + vi->hdr_len;
> > +
> > +               sg_init_table(rq->sg, 1);
> > +               sg_fill_dma(rq->sg, addr, len);
> > +
> > +               err = virtqueue_add_inbuf(rq->vq, rq->sg, 1, xsk_buffs[i], gfp);
> > +               if (err)
> > +                       return err;
> > +
> > +               rq->xsk.nxt_idx++;
> > +       }
> > +
> > +       return 0;
> > +}
> > +
> > +int virtnet_add_recvbuf_xsk(struct virtnet_info *vi, struct virtnet_rq *rq,
> > +                           struct xsk_buff_pool *pool, gfp_t gfp)
> > +{
> > +       int err;
> > +
> > +       do {
> > +               err = virtnet_add_recvbuf_batch(vi, rq, pool, gfp);
> > +               if (err)
> > +                       return err;
> > +
> > +       } while (rq->vq->num_free);
> > +
> > +       return 0;
> > +}
> > +
> >  static int virtnet_xsk_xmit_one(struct virtnet_sq *sq,
> >                                 struct xsk_buff_pool *pool,
> >                                 struct xdp_desc *desc)
> > @@ -244,7 +296,7 @@ static int virtnet_xsk_pool_enable(struct net_device *dev,
> >         struct virtnet_sq *sq;
> >         struct device *dma_dev;
> >         dma_addr_t hdr_dma;
> > -       int err;
> > +       int err, size;
> >
> >         /* In big_packets mode, xdp cannot work, so there is no need to
> >          * initialize xsk of rq.
> > @@ -276,6 +328,16 @@ static int virtnet_xsk_pool_enable(struct net_device *dev,
> >         if (!dma_dev)
> >                 return -EPERM;
> >
> > +       size = virtqueue_get_vring_size(rq->vq);
> > +
> > +       rq->xsk.xsk_buffs = kcalloc(size, sizeof(*rq->xsk.xsk_buffs), GFP_KERNEL);
> > +       if (!rq->xsk.xsk_buffs)
> > +               return -ENOMEM;
> > +
> > +       rq->xsk.size = size;
> > +       rq->xsk.nxt_idx = 0;
> > +       rq->xsk.num = 0;
> > +
> >         hdr_dma = dma_map_single(dma_dev, &xsk_hdr, vi->hdr_len, DMA_TO_DEVICE);
> >         if (dma_mapping_error(dma_dev, hdr_dma))
> >                 return -ENOMEM;
> > @@ -338,6 +400,8 @@ static int virtnet_xsk_pool_disable(struct net_device *dev, u16 qid)
> >         err1 = virtnet_sq_bind_xsk_pool(vi, sq, NULL);
> >         err2 = virtnet_rq_bind_xsk_pool(vi, rq, NULL);
> >
> > +       kfree(rq->xsk.xsk_buffs);
> > +
> >         return err1 | err2;
> >  }
> >
> > diff --git a/drivers/net/virtio/xsk.h b/drivers/net/virtio/xsk.h
> > index 7ebc9bda7aee..bef41a3f954e 100644
> > --- a/drivers/net/virtio/xsk.h
> > +++ b/drivers/net/virtio/xsk.h
> > @@ -23,4 +23,6 @@ int virtnet_xsk_pool_setup(struct net_device *dev, struct netdev_bpf *xdp);
> >  bool virtnet_xsk_xmit(struct virtnet_sq *sq, struct xsk_buff_pool *pool,
> >                       int budget);
> >  int virtnet_xsk_wakeup(struct net_device *dev, u32 qid, u32 flag);
> > +int virtnet_add_recvbuf_xsk(struct virtnet_info *vi, struct virtnet_rq *rq,
> > +                           struct xsk_buff_pool *pool, gfp_t gfp);
> >  #endif
> > --
> > 2.32.0.3.g01195cf9f
> >
>


* Re: [PATCH net-next v1 16/19] virtio_net: xsk: rx: introduce receive_xsk() to recv xsk buffer
  2023-10-20  6:57   ` Jason Wang
  2023-10-23  2:39     ` Xuan Zhuo
@ 2023-11-15  2:35     ` Xuan Zhuo
  1 sibling, 0 replies; 66+ messages in thread
From: Xuan Zhuo @ 2023-11-15  2:35 UTC (permalink / raw)
  To: Jason Wang
  Cc: netdev, David S. Miller, Eric Dumazet, Jakub Kicinski,
	Paolo Abeni, Michael S. Tsirkin, Alexei Starovoitov,
	Daniel Borkmann, Jesper Dangaard Brouer, John Fastabend,
	virtualization, bpf

On Fri, 20 Oct 2023 14:57:06 +0800, Jason Wang <jasowang@redhat.com> wrote:
> On Mon, Oct 16, 2023 at 8:01 PM Xuan Zhuo <xuanzhuo@linux.alibaba.com> wrote:
> >
> > Implementing the logic of xsk rx. If this packet is not for XSK
> > determined in XDP, then we need to copy once to generate a SKB.
> > If it is for XSK, it is a zerocopy receive packet process.
> >
> > Signed-off-by: Xuan Zhuo <xuanzhuo@linux.alibaba.com>
> > ---
> >  drivers/net/virtio/main.c       |  14 ++--
> >  drivers/net/virtio/virtio_net.h |   4 ++
> >  drivers/net/virtio/xsk.c        | 120 ++++++++++++++++++++++++++++++++
> >  drivers/net/virtio/xsk.h        |   4 ++
> >  4 files changed, 137 insertions(+), 5 deletions(-)
> >
> > diff --git a/drivers/net/virtio/main.c b/drivers/net/virtio/main.c
> > index 0e740447b142..003dd67ab707 100644
> > --- a/drivers/net/virtio/main.c
> > +++ b/drivers/net/virtio/main.c
> > @@ -822,10 +822,10 @@ static void put_xdp_frags(struct xdp_buff *xdp)
> >         }
> >  }
> >
> > -static int virtnet_xdp_handler(struct bpf_prog *xdp_prog, struct xdp_buff *xdp,
> > -                              struct net_device *dev,
> > -                              unsigned int *xdp_xmit,
> > -                              struct virtnet_rq_stats *stats)
> > +int virtnet_xdp_handler(struct bpf_prog *xdp_prog, struct xdp_buff *xdp,
> > +                       struct net_device *dev,
> > +                       unsigned int *xdp_xmit,
> > +                       struct virtnet_rq_stats *stats)
> >  {
> >         struct xdp_frame *xdpf;
> >         int err;
> > @@ -1589,13 +1589,17 @@ static void receive_buf(struct virtnet_info *vi, struct virtnet_rq *rq,
> >                 return;
> >         }
> >
> > -       if (vi->mergeable_rx_bufs)
> > +       rcu_read_lock();
> > +       if (rcu_dereference(rq->xsk.pool))
> > +               skb = virtnet_receive_xsk(dev, vi, rq, buf, len, xdp_xmit, stats);
> > +       else if (vi->mergeable_rx_bufs)
> >                 skb = receive_mergeable(dev, vi, rq, buf, ctx, len, xdp_xmit,
> >                                         stats);
> >         else if (vi->big_packets)
> >                 skb = receive_big(dev, vi, rq, buf, len, stats);
> >         else
> >                 skb = receive_small(dev, vi, rq, buf, ctx, len, xdp_xmit, stats);
> > +       rcu_read_unlock();
> >
> >         if (unlikely(!skb))
> >                 return;
> > diff --git a/drivers/net/virtio/virtio_net.h b/drivers/net/virtio/virtio_net.h
> > index 6e71622fca45..fd7f34703c9b 100644
> > --- a/drivers/net/virtio/virtio_net.h
> > +++ b/drivers/net/virtio/virtio_net.h
> > @@ -346,6 +346,10 @@ static inline bool virtnet_is_xdp_raw_buffer_queue(struct virtnet_info *vi, int
> >                 return false;
> >  }
> >
> > +int virtnet_xdp_handler(struct bpf_prog *xdp_prog, struct xdp_buff *xdp,
> > +                       struct net_device *dev,
> > +                       unsigned int *xdp_xmit,
> > +                       struct virtnet_rq_stats *stats);
> >  void virtnet_rx_pause(struct virtnet_info *vi, struct virtnet_rq *rq);
> >  void virtnet_rx_resume(struct virtnet_info *vi, struct virtnet_rq *rq);
> >  void virtnet_tx_pause(struct virtnet_info *vi, struct virtnet_sq *sq);
> > diff --git a/drivers/net/virtio/xsk.c b/drivers/net/virtio/xsk.c
> > index 841fb078882a..f1c64414fac9 100644
> > --- a/drivers/net/virtio/xsk.c
> > +++ b/drivers/net/virtio/xsk.c
> > @@ -13,6 +13,18 @@ static void sg_fill_dma(struct scatterlist *sg, dma_addr_t addr, u32 len)
> >         sg->length = len;
> >  }
> >
> > +static unsigned int virtnet_receive_buf_num(struct virtnet_info *vi, char *buf)
> > +{
> > +       struct virtio_net_hdr_mrg_rxbuf *hdr;
> > +
> > +       if (vi->mergeable_rx_bufs) {
> > +               hdr = (struct virtio_net_hdr_mrg_rxbuf *)buf;
> > +               return virtio16_to_cpu(vi->vdev, hdr->num_buffers);
> > +       }
> > +
> > +       return 1;
> > +}
> > +
> >  static void virtnet_xsk_check_queue(struct virtnet_sq *sq)
> >  {
> >         struct virtnet_info *vi = sq->vq->vdev->priv;
> > @@ -37,6 +49,114 @@ static void virtnet_xsk_check_queue(struct virtnet_sq *sq)
> >                 netif_stop_subqueue(dev, qnum);
> >  }
> >
> > +static void merge_drop_follow_xdp(struct net_device *dev,
> > +                                 struct virtnet_rq *rq,
> > +                                 u32 num_buf,
> > +                                 struct virtnet_rq_stats *stats)
> > +{
> > +       struct xdp_buff *xdp;
> > +       u32 len;
> > +
> > +       while (num_buf-- > 1) {
> > +               xdp = virtqueue_get_buf(rq->vq, &len);
> > +               if (unlikely(!xdp)) {
> > +                       pr_debug("%s: rx error: %d buffers missing\n",
> > +                                dev->name, num_buf);
> > +                       dev->stats.rx_length_errors++;
> > +                       break;
> > +               }
> > +               stats->bytes += len;
> > +               xsk_buff_free(xdp);
> > +       }
> > +}
> > +
> > +static struct sk_buff *construct_skb(struct virtnet_rq *rq,
> > +                                    struct xdp_buff *xdp)
> > +{
> > +       unsigned int metasize = xdp->data - xdp->data_meta;
> > +       struct sk_buff *skb;
> > +       unsigned int size;
> > +
> > +       size = xdp->data_end - xdp->data_hard_start;
> > +       skb = napi_alloc_skb(&rq->napi, size);
> > +       if (unlikely(!skb))
> > +               return NULL;
> > +
> > +       skb_reserve(skb, xdp->data_meta - xdp->data_hard_start);
> > +
> > +       size = xdp->data_end - xdp->data_meta;
> > +       memcpy(__skb_put(skb, size), xdp->data_meta, size);
> > +
> > +       if (metasize) {
> > +               __skb_pull(skb, metasize);
> > +               skb_metadata_set(skb, metasize);
> > +       }
> > +
> > +       return skb;
> > +}
> > +
> > +struct sk_buff *virtnet_receive_xsk(struct net_device *dev, struct virtnet_info *vi,
> > +                                   struct virtnet_rq *rq, void *buf,
> > +                                   unsigned int len, unsigned int *xdp_xmit,
> > +                                   struct virtnet_rq_stats *stats)
> > +{
>
> I wonder if anything blocks us from reusing the existing XDP logic?
> Are there some subtle differences?


About this: by "the existing XDP logic", do you mean virtnet_xdp_handler()?
This function reuses it.

virtnet_receive_xsk() is at the same level as receive_mergeable().

I handle the mergeable and small logic inside this function.
Is your question about why we introduce this function?

I tried to do it inside receive_mergeable()/receive_small(), but that was
difficult, because the buf and ctx differ from the original logic.
So I think introducing a new handler is the right way.

Thanks.


>
> Thanks
>


end of thread, other threads:[~2023-11-15  2:44 UTC | newest]

Thread overview: 66+ messages
2023-10-16 12:00 [PATCH net-next v1 00/19] virtio-net: support AF_XDP zero copy Xuan Zhuo
2023-10-16 12:00 ` [PATCH net-next v1 01/19] virtio_net: rename free_old_xmit_skbs to free_old_xmit Xuan Zhuo
2023-10-19  4:17   ` Jason Wang
2023-10-16 12:00 ` [PATCH net-next v1 02/19] virtio_net: unify the code for recycling the xmit ptr Xuan Zhuo
2023-10-19  4:23   ` Jason Wang
2023-10-16 12:00 ` [PATCH net-next v1 03/19] virtio_net: independent directory Xuan Zhuo
2023-10-19  6:10   ` Jason Wang
2023-10-16 12:00 ` [PATCH net-next v1 04/19] virtio_net: move to virtio_net.h Xuan Zhuo
2023-10-19  6:12   ` Jason Wang
2023-10-19  7:16     ` Xuan Zhuo
2023-10-20  6:59       ` Jason Wang
2023-10-16 12:00 ` [PATCH net-next v1 05/19] virtio_net: add prefix virtnet to all struct/api inside virtio_net.h Xuan Zhuo
2023-10-19  6:14   ` Jason Wang
2023-10-19  6:36     ` Michael S. Tsirkin
2023-10-16 12:00 ` [PATCH net-next v1 06/19] virtio_net: separate virtnet_rx_resize() Xuan Zhuo
2023-10-19  6:17   ` Jason Wang
2023-10-16 12:00 ` [PATCH net-next v1 07/19] virtio_net: separate virtnet_tx_resize() Xuan Zhuo
2023-10-19  6:18   ` Jason Wang
2023-10-16 12:00 ` [PATCH net-next v1 08/19] virtio_net: sq support premapped mode Xuan Zhuo
2023-10-20  6:50   ` Jason Wang
2023-10-20  7:16     ` Xuan Zhuo
2023-10-16 12:00 ` [PATCH net-next v1 09/19] virtio_net: xsk: bind/unbind xsk Xuan Zhuo
2023-10-20  6:51   ` Jason Wang
2023-10-20  7:28     ` Xuan Zhuo
2023-10-16 12:00 ` [PATCH net-next v1 10/19] virtio_net: xsk: prevent disable tx napi Xuan Zhuo
2023-10-20  6:51   ` Jason Wang
2023-10-16 12:00 ` [PATCH net-next v1 11/19] virtio_net: xsk: tx: support tx Xuan Zhuo
2023-10-20  6:52   ` Jason Wang
2023-10-20  8:06     ` Xuan Zhuo
2023-10-16 12:00 ` [PATCH net-next v1 12/19] virtio_net: xsk: tx: support wakeup Xuan Zhuo
2023-10-20  6:52   ` Jason Wang
2023-10-20  8:09     ` Xuan Zhuo
2023-10-16 12:00 ` [PATCH net-next v1 13/19] virtio_net: xsk: tx: virtnet_free_old_xmit() distinguishes xsk buffer Xuan Zhuo
2023-10-16 23:44   ` Jakub Kicinski
2023-10-17  2:02     ` Xuan Zhuo
2023-10-19  6:38       ` Michael S. Tsirkin
2023-10-19  7:13         ` Xuan Zhuo
2023-10-19  8:42           ` Michael S. Tsirkin
2023-10-16 12:00 ` [PATCH net-next v1 14/19] virtio_net: xsk: tx: virtnet_sq_free_unused_buf() check " Xuan Zhuo
2023-10-20  6:53   ` Jason Wang
2023-10-16 12:00 ` [PATCH net-next v1 15/19] virtio_net: xsk: rx: introduce add_recvbuf_xsk() Xuan Zhuo
2023-10-20  6:56   ` Jason Wang
2023-10-23  6:56     ` Xuan Zhuo
2023-10-16 12:00 ` [PATCH net-next v1 16/19] virtio_net: xsk: rx: introduce receive_xsk() to recv xsk buffer Xuan Zhuo
2023-10-20  6:57   ` Jason Wang
2023-10-23  2:39     ` Xuan Zhuo
2023-11-15  2:35     ` Xuan Zhuo
2023-10-16 12:00 ` [PATCH net-next v1 17/19] virtio_net: xsk: rx: virtnet_rq_free_unused_buf() check " Xuan Zhuo
2023-10-16 12:00 ` [PATCH net-next v1 18/19] virtio_net: update tx timeout record Xuan Zhuo
2023-10-20  6:57   ` Jason Wang
2023-10-16 12:00 ` [PATCH net-next v1 19/19] virtio_net: xdp_features add NETDEV_XDP_ACT_XSK_ZEROCOPY Xuan Zhuo
2023-10-17  2:53 ` [PATCH net-next v1 00/19] virtio-net: support AF_XDP zero copy Jason Wang
2023-10-17  3:02   ` Xuan Zhuo
2023-10-17  3:20     ` Jason Wang
2023-10-17  3:22       ` Xuan Zhuo
2023-10-17  3:28         ` Jason Wang
2023-10-17  5:27           ` Jason Wang
2023-10-17  6:06             ` Xuan Zhuo
2023-10-17  6:26               ` Jason Wang
2023-10-17  6:43                 ` Xuan Zhuo
2023-10-17 11:19                   ` Xuan Zhuo
2023-10-18  2:46                     ` Jason Wang
2023-10-18  2:56                       ` Xuan Zhuo
2023-10-18  3:32                     ` Xuan Zhuo
2023-10-18  3:40                       ` Jason Wang
2023-10-18  1:02                   ` Jason Wang
