bpf.vger.kernel.org archive mirror
* [PATCH v5 00/22] virtio pci support VIRTIO_F_RING_RESET
@ 2022-02-14  8:13 Xuan Zhuo
  2022-02-14  8:13 ` [PATCH v5 01/22] virtio_pci: struct virtio_pci_common_cfg add queue_notify_data Xuan Zhuo
                   ` (21 more replies)
  0 siblings, 22 replies; 42+ messages in thread
From: Xuan Zhuo @ 2022-02-14  8:13 UTC (permalink / raw)
  To: virtualization, netdev
  Cc: Michael S. Tsirkin, Jason Wang, David S. Miller, Jakub Kicinski,
	Alexei Starovoitov, Daniel Borkmann, Jesper Dangaard Brouer,
	John Fastabend, bpf

The virtio spec already supports the virtio queue reset function. This patch
set adds that function to the kernel. The relevant virtio spec discussion is
here:

    https://github.com/oasis-tcs/virtio-spec/issues/124

Regarding MMIO support for queue reset, I plan to add it after this patch
set is merged.

Performing reset on a queue is divided into four steps (a rough driver-side
sketch follows the list):
    1. reset_vq: reset one vq
    2. recycle the buffers from the vq with virtqueue_detach_unused_buf()
    3. release the ring of the vq with vring_release_virtqueue()
    4. enable_reset_vq: re-enable the reset queue
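
For illustration only, here is a minimal driver-side sketch of that sequence.
It uses virtqueue_detach_unused_buf() plus the helpers added by this series
(virtio_reset_vq()/virtio_enable_resetq() from #15 and
vring_release_virtqueue() from #8); reset_one_vq() and recycle_buf() are
hypothetical names and error handling is trimmed:

    static int reset_one_vq(struct virtqueue *vq,
                            void (*recycle_buf)(void *buf))
    {
            void *buf;
            int err;

            /* 1. ask the transport to reset this queue */
            err = virtio_reset_vq(vq);
            if (err)
                    return err;

            /* 2. recycle the buffers still held by the ring */
            while ((buf = virtqueue_detach_unused_buf(vq)) != NULL)
                    recycle_buf(buf);

            /* 3. release the vq's ring; the vq object itself is kept */
            vring_release_virtqueue(vq);

            /* 4. re-enable the queue */
            return virtio_enable_resetq(vq);
    }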

#2-#8  : virtio ring: support re-enabling a reset queue and releasing the vring
#9-#14 : virtio PCI: support resetting and re-enabling a queue
#15    : add queue reset helpers
#16-#17: virtio-net: support rx/tx queue reset
#18-#22: virtio-net: support set_ringparam

Please review. Thanks.

v5:
  1. add virtio-net set_ringparam support

v4:
  2. only the virtio core changes, without virtio-net
  2. Performing reset on a queue is divided into these steps:
    1. reset_vq: reset one vq
    2. recycle the buffers from the vq with virtqueue_detach_unused_buf()
    3. release the ring of the vq with vring_release_virtqueue()
    4. enable_reset_vq: re-enable the reset queue
  3. Simplify the parameters of enable_reset_vq()
  4. add container structures for virtio_pci_common_cfg

v3:
  1. keep vq, irq unreleased

Xuan Zhuo (22):
  virtio_pci: struct virtio_pci_common_cfg add queue_notify_data
  virtio: queue_reset: add VIRTIO_F_RING_RESET
  virtio_ring: queue_reset: add function vring_setup_virtqueue()
  virtio_ring: queue_reset: split: add __vring_init_virtqueue()
  virtio_ring: queue_reset: split: support enable reset queue
  virtio_ring: queue_reset: packed: support enable reset queue
  virtio_ring: queue_reset: extract the release function of the vq ring
  virtio_ring: queue_reset: add vring_release_virtqueue()
  virtio: queue_reset: struct virtio_config_ops add callbacks for
    queue_reset
  virtio_pci: queue_reset: update struct virtio_pci_common_cfg and
    option functions
  virtio_pci: queue_reset: release vq by vp_dev->vqs
  virtio_pci: queue_reset: setup_vq() support vring_setup_virtqueue()
  virtio_pci: queue_reset: reserve vq->priv for re-enable queue
  virtio_pci: queue_reset: support VIRTIO_F_RING_RESET
  virtio: queue_reset: add helper
  virtio_net: split free_unused_bufs()
  virtio_net: support rx/tx queue reset
  virtio: add helper virtqueue_get_vring_max_size()
  virtio: add helper virtio_set_max_ring_num()
  virtio_net: set the default max ring num
  virtio_net: get max ring size by virtqueue_get_vring_max_size()
  virtio_net: support set_ringparam

 drivers/net/virtio_net.c               | 238 ++++++++++++++++++++++---
 drivers/virtio/virtio_mmio.c           |   2 +
 drivers/virtio/virtio_pci_common.c     |  61 +++++--
 drivers/virtio/virtio_pci_common.h     |  10 +-
 drivers/virtio/virtio_pci_legacy.c     |   6 +-
 drivers/virtio/virtio_pci_modern.c     |  82 ++++++++-
 drivers/virtio/virtio_pci_modern_dev.c |  36 ++++
 drivers/virtio/virtio_ring.c           | 193 +++++++++++++++-----
 include/linux/virtio.h                 |  15 ++
 include/linux/virtio_config.h          |  82 +++++++++
 include/linux/virtio_pci_modern.h      |   2 +
 include/linux/virtio_ring.h            |  37 ++--
 include/uapi/linux/virtio_config.h     |   7 +-
 include/uapi/linux/virtio_pci.h        |  14 ++
 14 files changed, 668 insertions(+), 117 deletions(-)

--
2.31.0



* [PATCH v5 01/22] virtio_pci: struct virtio_pci_common_cfg add queue_notify_data
  2022-02-14  8:13 [PATCH v5 00/22] virtio pci support VIRTIO_F_RING_RESET Xuan Zhuo
@ 2022-02-14  8:13 ` Xuan Zhuo
  2022-02-14  8:13 ` [PATCH v5 02/22] virtio: queue_reset: add VIRTIO_F_RING_RESET Xuan Zhuo
                   ` (20 subsequent siblings)
  21 siblings, 0 replies; 42+ messages in thread
From: Xuan Zhuo @ 2022-02-14  8:13 UTC (permalink / raw)
  To: virtualization, netdev
  Cc: Michael S. Tsirkin, Jason Wang, David S. Miller, Jakub Kicinski,
	Alexei Starovoitov, Daniel Borkmann, Jesper Dangaard Brouer,
	John Fastabend, bpf

Add queue_notify_data to struct virtio_pci_common_cfg; it comes from
https://github.com/oasis-tcs/virtio-spec/issues/89

To avoid breaking the uABI, add a new struct virtio_pci_common_cfg_notify
instead of extending the existing structure.

Since I want to add queue_reset after queue_notify_data, this patch is
submitted first.

Signed-off-by: Xuan Zhuo <xuanzhuo@linux.alibaba.com>
---
 include/uapi/linux/virtio_pci.h | 7 +++++++
 1 file changed, 7 insertions(+)

diff --git a/include/uapi/linux/virtio_pci.h b/include/uapi/linux/virtio_pci.h
index 3a86f36d7e3d..22bec9bd0dfc 100644
--- a/include/uapi/linux/virtio_pci.h
+++ b/include/uapi/linux/virtio_pci.h
@@ -166,6 +166,13 @@ struct virtio_pci_common_cfg {
 	__le32 queue_used_hi;		/* read-write */
 };
 
+struct virtio_pci_common_cfg_notify {
+	struct virtio_pci_common_cfg cfg;
+
+	__le16 queue_notify_data;	/* read-write */
+	__le16 padding;
+};
+
 /* Fields in VIRTIO_PCI_CAP_PCI_CFG: */
 struct virtio_pci_cfg_cap {
 	struct virtio_pci_cap cap;
-- 
2.31.0



* [PATCH v5 02/22] virtio: queue_reset: add VIRTIO_F_RING_RESET
  2022-02-14  8:13 [PATCH v5 00/22] virtio pci support VIRTIO_F_RING_RESET Xuan Zhuo
  2022-02-14  8:13 ` [PATCH v5 01/22] virtio_pci: struct virtio_pci_common_cfg add queue_notify_data Xuan Zhuo
@ 2022-02-14  8:13 ` Xuan Zhuo
  2022-02-14  8:13 ` [PATCH v5 03/22] virtio_ring: queue_reset: add function vring_setup_virtqueue() Xuan Zhuo
                   ` (19 subsequent siblings)
  21 siblings, 0 replies; 42+ messages in thread
From: Xuan Zhuo @ 2022-02-14  8:13 UTC (permalink / raw)
  To: virtualization, netdev
  Cc: Michael S. Tsirkin, Jason Wang, David S. Miller, Jakub Kicinski,
	Alexei Starovoitov, Daniel Borkmann, Jesper Dangaard Brouer,
	John Fastabend, bpf

Add VIRTIO_F_RING_RESET; it comes from
https://github.com/oasis-tcs/virtio-spec/issues/124
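
As a small illustration (not part of this patch), a driver that wants to use
per-queue reset would gate it on the negotiated feature bit; the helper name
below is hypothetical:

    /* Sketch only: per-queue reset may be used only if the bit was negotiated. */
    static bool can_reset_queues(struct virtio_device *vdev)
    {
            return virtio_has_feature(vdev, VIRTIO_F_RING_RESET);
    }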

Signed-off-by: Xuan Zhuo <xuanzhuo@linux.alibaba.com>
---
 include/uapi/linux/virtio_config.h | 7 ++++++-
 1 file changed, 6 insertions(+), 1 deletion(-)

diff --git a/include/uapi/linux/virtio_config.h b/include/uapi/linux/virtio_config.h
index b5eda06f0d57..0862be802ff8 100644
--- a/include/uapi/linux/virtio_config.h
+++ b/include/uapi/linux/virtio_config.h
@@ -52,7 +52,7 @@
  * rest are per-device feature bits.
  */
 #define VIRTIO_TRANSPORT_F_START	28
-#define VIRTIO_TRANSPORT_F_END		38
+#define VIRTIO_TRANSPORT_F_END		41
 
 #ifndef VIRTIO_CONFIG_NO_LEGACY
 /* Do we get callbacks when the ring is completely used, even if we've
@@ -92,4 +92,9 @@
  * Does the device support Single Root I/O Virtualization?
  */
 #define VIRTIO_F_SR_IOV			37
+
+/*
+ * This feature indicates that the driver can reset a queue individually.
+ */
+#define VIRTIO_F_RING_RESET		40
 #endif /* _UAPI_LINUX_VIRTIO_CONFIG_H */
-- 
2.31.0



* [PATCH v5 03/22] virtio_ring: queue_reset: add function vring_setup_virtqueue()
  2022-02-14  8:13 [PATCH v5 00/22] virtio pci support VIRTIO_F_RING_RESET Xuan Zhuo
  2022-02-14  8:13 ` [PATCH v5 01/22] virtio_pci: struct virtio_pci_common_cfg add queue_notify_data Xuan Zhuo
  2022-02-14  8:13 ` [PATCH v5 02/22] virtio: queue_reset: add VIRTIO_F_RING_RESET Xuan Zhuo
@ 2022-02-14  8:13 ` Xuan Zhuo
  2022-02-14  8:13 ` [PATCH v5 04/22] virtio_ring: queue_reset: split: add __vring_init_virtqueue() Xuan Zhuo
                   ` (18 subsequent siblings)
  21 siblings, 0 replies; 42+ messages in thread
From: Xuan Zhuo @ 2022-02-14  8:13 UTC (permalink / raw)
  To: virtualization, netdev
  Cc: Michael S. Tsirkin, Jason Wang, David S. Miller, Jakub Kicinski,
	Alexei Starovoitov, Daniel Borkmann, Jesper Dangaard Brouer,
	John Fastabend, bpf

Add vring_setup_virtqueue(), which allows passing an existing vq so
that the vq is not reallocated.

The purpose of adding this function is to keep the signature of
vring_create_virtqueue() unchanged.
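
A rough sketch of the two intended call forms (the surrounding variables are
assumed to exist; the re-enable path is wired up by later patches in this
series):

    /* Normal creation: no existing vq, one is allocated internally.
     * This is what the vring_create_virtqueue() wrapper now does. */
    vq = vring_setup_virtqueue(index, num, SMP_CACHE_BYTES, vdev,
                               true, true, ctx, notify, callback, name, NULL);

    /* Re-enabling a reset queue: reuse the existing vq object. */
    vq = vring_setup_virtqueue(index, num, SMP_CACHE_BYTES, vdev,
                               true, true, ctx, notify, callback, name, vq);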

Signed-off-by: Xuan Zhuo <xuanzhuo@linux.alibaba.com>
---
 drivers/virtio/virtio_ring.c |  7 ++++---
 include/linux/virtio_ring.h  | 37 ++++++++++++++++++++++++++----------
 2 files changed, 31 insertions(+), 13 deletions(-)

diff --git a/drivers/virtio/virtio_ring.c b/drivers/virtio/virtio_ring.c
index 962f1477b1fa..4f95d650b066 100644
--- a/drivers/virtio/virtio_ring.c
+++ b/drivers/virtio/virtio_ring.c
@@ -2255,7 +2255,7 @@ struct virtqueue *__vring_new_virtqueue(unsigned int index,
 }
 EXPORT_SYMBOL_GPL(__vring_new_virtqueue);
 
-struct virtqueue *vring_create_virtqueue(
+struct virtqueue *vring_setup_virtqueue(
 	unsigned int index,
 	unsigned int num,
 	unsigned int vring_align,
@@ -2265,7 +2265,8 @@ struct virtqueue *vring_create_virtqueue(
 	bool context,
 	bool (*notify)(struct virtqueue *),
 	void (*callback)(struct virtqueue *),
-	const char *name)
+	const char *name,
+	struct virtqueue *vq)
 {
 
 	if (virtio_has_feature(vdev, VIRTIO_F_RING_PACKED))
@@ -2277,7 +2278,7 @@ struct virtqueue *vring_create_virtqueue(
 			vdev, weak_barriers, may_reduce_num,
 			context, notify, callback, name);
 }
-EXPORT_SYMBOL_GPL(vring_create_virtqueue);
+EXPORT_SYMBOL_GPL(vring_setup_virtqueue);
 
 /* Only available for split ring */
 struct virtqueue *vring_new_virtqueue(unsigned int index,
diff --git a/include/linux/virtio_ring.h b/include/linux/virtio_ring.h
index b485b13fa50b..e90323fce4bf 100644
--- a/include/linux/virtio_ring.h
+++ b/include/linux/virtio_ring.h
@@ -65,16 +65,33 @@ struct virtqueue;
  * expected.  The caller should query virtqueue_get_vring_size to learn
  * the actual size of the ring.
  */
-struct virtqueue *vring_create_virtqueue(unsigned int index,
-					 unsigned int num,
-					 unsigned int vring_align,
-					 struct virtio_device *vdev,
-					 bool weak_barriers,
-					 bool may_reduce_num,
-					 bool ctx,
-					 bool (*notify)(struct virtqueue *vq),
-					 void (*callback)(struct virtqueue *vq),
-					 const char *name);
+struct virtqueue *vring_setup_virtqueue(unsigned int index,
+					unsigned int num,
+					unsigned int vring_align,
+					struct virtio_device *vdev,
+					bool weak_barriers,
+					bool may_reduce_num,
+					bool ctx,
+					bool (*notify)(struct virtqueue *vq),
+					void (*callback)(struct virtqueue *vq),
+					const char *name,
+					struct virtqueue *vq);
+
+static inline struct virtqueue *vring_create_virtqueue(unsigned int index,
+						       unsigned int num,
+						       unsigned int vring_align,
+						       struct virtio_device *vdev,
+						       bool weak_barriers,
+						       bool may_reduce_num,
+						       bool ctx,
+						       bool (*notify)(struct virtqueue *vq),
+						       void (*callback)(struct virtqueue *vq),
+						       const char *name)
+{
+	return vring_setup_virtqueue(index, num, vring_align, vdev,
+				     weak_barriers, may_reduce_num, ctx,
+				     notify, callback, name, NULL);
+}
 
 /* Creates a virtqueue with a custom layout. */
 struct virtqueue *__vring_new_virtqueue(unsigned int index,
-- 
2.31.0



* [PATCH v5 04/22] virtio_ring: queue_reset: split: add __vring_init_virtqueue()
  2022-02-14  8:13 [PATCH v5 00/22] virtio pci support VIRTIO_F_RING_RESET Xuan Zhuo
                   ` (2 preceding siblings ...)
  2022-02-14  8:13 ` [PATCH v5 03/22] virtio_ring: queue_reset: add function vring_setup_virtqueue() Xuan Zhuo
@ 2022-02-14  8:13 ` Xuan Zhuo
  2022-02-14  8:13 ` [PATCH v5 05/22] virtio_ring: queue_reset: split: support enable reset queue Xuan Zhuo
                   ` (17 subsequent siblings)
  21 siblings, 0 replies; 42+ messages in thread
From: Xuan Zhuo @ 2022-02-14  8:13 UTC (permalink / raw)
  To: virtualization, netdev
  Cc: Michael S. Tsirkin, Jason Wang, David S. Miller, Jakub Kicinski,
	Alexei Starovoitov, Daniel Borkmann, Jesper Dangaard Brouer,
	John Fastabend, bpf

Extract the vq initialization function __vring_init_virtqueue() from
__vring_new_virtqueue().

Signed-off-by: Xuan Zhuo <xuanzhuo@linux.alibaba.com>
---
 drivers/virtio/virtio_ring.c | 61 +++++++++++++++++++++++++-----------
 1 file changed, 42 insertions(+), 19 deletions(-)

diff --git a/drivers/virtio/virtio_ring.c b/drivers/virtio/virtio_ring.c
index 4f95d650b066..9cfbe45ab286 100644
--- a/drivers/virtio/virtio_ring.c
+++ b/drivers/virtio/virtio_ring.c
@@ -2169,23 +2169,17 @@ irqreturn_t vring_interrupt(int irq, void *_vq)
 EXPORT_SYMBOL_GPL(vring_interrupt);
 
 /* Only available for split ring */
-struct virtqueue *__vring_new_virtqueue(unsigned int index,
-					struct vring vring,
-					struct virtio_device *vdev,
-					bool weak_barriers,
-					bool context,
-					bool (*notify)(struct virtqueue *),
-					void (*callback)(struct virtqueue *),
-					const char *name)
+static int __vring_init_virtqueue(struct virtqueue *_vq,
+				  unsigned int index,
+				  struct vring vring,
+				  struct virtio_device *vdev,
+				  bool weak_barriers,
+				  bool context,
+				  bool (*notify)(struct virtqueue *),
+				  void (*callback)(struct virtqueue *),
+				  const char *name)
 {
-	struct vring_virtqueue *vq;
-
-	if (virtio_has_feature(vdev, VIRTIO_F_RING_PACKED))
-		return NULL;
-
-	vq = kmalloc(sizeof(*vq), GFP_KERNEL);
-	if (!vq)
-		return NULL;
+	struct vring_virtqueue *vq = to_vvq(_vq);
 
 	vq->packed_ring = false;
 	vq->vq.callback = callback;
@@ -2245,13 +2239,42 @@ struct virtqueue *__vring_new_virtqueue(unsigned int index,
 	spin_lock(&vdev->vqs_list_lock);
 	list_add_tail(&vq->vq.list, &vdev->vqs);
 	spin_unlock(&vdev->vqs_list_lock);
-	return &vq->vq;
+	return 0;
 
 err_extra:
 	kfree(vq->split.desc_state);
 err_state:
-	kfree(vq);
-	return NULL;
+	return -ENOMEM;
+}
+
+struct virtqueue *__vring_new_virtqueue(unsigned int index,
+					struct vring vring,
+					struct virtio_device *vdev,
+					bool weak_barriers,
+					bool context,
+					bool (*notify)(struct virtqueue *),
+					void (*callback)(struct virtqueue *),
+					const char *name)
+{
+	struct vring_virtqueue *vq;
+	int err;
+
+	if (virtio_has_feature(vdev, VIRTIO_F_RING_PACKED))
+		return NULL;
+
+	vq = kmalloc(sizeof(*vq), GFP_KERNEL);
+	if (!vq)
+		return NULL;
+
+	err = __vring_init_virtqueue(&vq->vq, index, vring, vdev, weak_barriers,
+				     context, notify, callback, name);
+
+	if (err) {
+		kfree(vq);
+		return NULL;
+	}
+
+	return &vq->vq;
 }
 EXPORT_SYMBOL_GPL(__vring_new_virtqueue);
 
-- 
2.31.0



* [PATCH v5 05/22] virtio_ring: queue_reset: split: support enable reset queue
  2022-02-14  8:13 [PATCH v5 00/22] virtio pci support VIRTIO_F_RING_RESET Xuan Zhuo
                   ` (3 preceding siblings ...)
  2022-02-14  8:13 ` [PATCH v5 04/22] virtio_ring: queue_reset: split: add __vring_init_virtqueue() Xuan Zhuo
@ 2022-02-14  8:13 ` Xuan Zhuo
  2022-02-14  8:14 ` [PATCH v5 06/22] virtio_ring: queue_reset: packed: " Xuan Zhuo
                   ` (16 subsequent siblings)
  21 siblings, 0 replies; 42+ messages in thread
From: Xuan Zhuo @ 2022-02-14  8:13 UTC (permalink / raw)
  To: virtualization, netdev
  Cc: Michael S. Tsirkin, Jason Wang, David S. Miller, Jakub Kicinski,
	Alexei Starovoitov, Daniel Borkmann, Jesper Dangaard Brouer,
	John Fastabend, bpf

The purpose of this patch is to make the split vring support
re-enabling a reset vq.

Based on whether the vq passed to vring_setup_virtqueue() is NULL,
distinguish between a normal virtqueue creation and re-enabling a reset
queue.

When re-enabling a reset queue, reuse the original callback, name and
indirect setting.

Signed-off-by: Xuan Zhuo <xuanzhuo@linux.alibaba.com>
---
 drivers/virtio/virtio_ring.c | 52 +++++++++++++++++++++++++-----------
 1 file changed, 37 insertions(+), 15 deletions(-)

diff --git a/drivers/virtio/virtio_ring.c b/drivers/virtio/virtio_ring.c
index 9cfbe45ab286..4639e1643c78 100644
--- a/drivers/virtio/virtio_ring.c
+++ b/drivers/virtio/virtio_ring.c
@@ -198,6 +198,16 @@ struct vring_virtqueue {
 #endif
 };
 
+static int __vring_init_virtqueue(struct virtqueue *_vq,
+				  unsigned int index,
+				  struct vring vring,
+				  struct virtio_device *vdev,
+				  bool weak_barriers,
+				  bool context,
+				  bool (*notify)(struct virtqueue *),
+				  void (*callback)(struct virtqueue *),
+				  const char *name,
+				  bool reset);
 
 /*
  * Helpers.
@@ -925,9 +935,9 @@ static struct virtqueue *vring_create_virtqueue_split(
 	bool context,
 	bool (*notify)(struct virtqueue *),
 	void (*callback)(struct virtqueue *),
-	const char *name)
+	const char *name,
+	struct virtqueue *vq)
 {
-	struct virtqueue *vq;
 	void *queue = NULL;
 	dma_addr_t dma_addr;
 	size_t queue_size_in_bytes;
@@ -964,12 +974,17 @@ static struct virtqueue *vring_create_virtqueue_split(
 	queue_size_in_bytes = vring_size(num, vring_align);
 	vring_init(&vring, num, queue, vring_align);
 
-	vq = __vring_new_virtqueue(index, vring, vdev, weak_barriers, context,
-				   notify, callback, name);
 	if (!vq) {
-		vring_free_queue(vdev, queue_size_in_bytes, queue,
-				 dma_addr);
-		return NULL;
+		vq = __vring_new_virtqueue(index, vring, vdev, weak_barriers,
+					   context, notify, callback, name);
+		if (!vq)
+			goto err;
+
+	} else {
+		if (__vring_init_virtqueue(vq, index, vring, vdev,
+					   weak_barriers, context, notify,
+					   callback, name, true))
+			goto err;
 	}
 
 	to_vvq(vq)->split.queue_dma_addr = dma_addr;
@@ -977,6 +992,9 @@ static struct virtqueue *vring_create_virtqueue_split(
 	to_vvq(vq)->we_own_ring = true;
 
 	return vq;
+err:
+	vring_free_queue(vdev, queue_size_in_bytes, queue, dma_addr);
+	return NULL;
 }
 
 
@@ -2177,14 +2195,20 @@ static int __vring_init_virtqueue(struct virtqueue *_vq,
 				  bool context,
 				  bool (*notify)(struct virtqueue *),
 				  void (*callback)(struct virtqueue *),
-				  const char *name)
+				  const char *name,
+				  bool reset)
 {
 	struct vring_virtqueue *vq = to_vvq(_vq);
 
+	if (!reset) {
+		vq->vq.callback = callback;
+		vq->vq.name = name;
+		vq->indirect = virtio_has_feature(vdev, VIRTIO_RING_F_INDIRECT_DESC) &&
+			!context;
+	}
+
 	vq->packed_ring = false;
-	vq->vq.callback = callback;
 	vq->vq.vdev = vdev;
-	vq->vq.name = name;
 	vq->vq.num_free = vring.num;
 	vq->vq.index = index;
 	vq->we_own_ring = false;
@@ -2200,8 +2224,6 @@ static int __vring_init_virtqueue(struct virtqueue *_vq,
 	vq->last_add_time_valid = false;
 #endif
 
-	vq->indirect = virtio_has_feature(vdev, VIRTIO_RING_F_INDIRECT_DESC) &&
-		!context;
 	vq->event = virtio_has_feature(vdev, VIRTIO_RING_F_EVENT_IDX);
 
 	if (virtio_has_feature(vdev, VIRTIO_F_ORDER_PLATFORM))
@@ -2215,7 +2237,7 @@ static int __vring_init_virtqueue(struct virtqueue *_vq,
 	vq->split.avail_idx_shadow = 0;
 
 	/* No callback?  Tell other side not to bother us. */
-	if (!callback) {
+	if (!vq->vq.callback) {
 		vq->split.avail_flags_shadow |= VRING_AVAIL_F_NO_INTERRUPT;
 		if (!vq->event)
 			vq->split.vring.avail->flags = cpu_to_virtio16(vdev,
@@ -2267,7 +2289,7 @@ struct virtqueue *__vring_new_virtqueue(unsigned int index,
 		return NULL;
 
 	err = __vring_init_virtqueue(&vq->vq, index, vring, vdev, weak_barriers,
-				     context, notify, callback, name);
+				     context, notify, callback, name, false);
 
 	if (err) {
 		kfree(vq);
@@ -2299,7 +2321,7 @@ struct virtqueue *vring_setup_virtqueue(
 
 	return vring_create_virtqueue_split(index, num, vring_align,
 			vdev, weak_barriers, may_reduce_num,
-			context, notify, callback, name);
+			context, notify, callback, name, vq);
 }
 EXPORT_SYMBOL_GPL(vring_setup_virtqueue);
 
-- 
2.31.0



* [PATCH v5 06/22] virtio_ring: queue_reset: packed: support enable reset queue
  2022-02-14  8:13 [PATCH v5 00/22] virtio pci support VIRTIO_F_RING_RESET Xuan Zhuo
                   ` (4 preceding siblings ...)
  2022-02-14  8:13 ` [PATCH v5 05/22] virtio_ring: queue_reset: split: support enable reset queue Xuan Zhuo
@ 2022-02-14  8:14 ` Xuan Zhuo
  2022-02-16  4:14   ` Jason Wang
  2022-02-14  8:14 ` [PATCH v5 07/22] virtio_ring: queue_reset: extract the release function of the vq ring Xuan Zhuo
                   ` (15 subsequent siblings)
  21 siblings, 1 reply; 42+ messages in thread
From: Xuan Zhuo @ 2022-02-14  8:14 UTC (permalink / raw)
  To: virtualization, netdev
  Cc: Michael S. Tsirkin, Jason Wang, David S. Miller, Jakub Kicinski,
	Alexei Starovoitov, Daniel Borkmann, Jesper Dangaard Brouer,
	John Fastabend, bpf

The purpose of this patch is to make the packed vring support
re-enabling a reset vq.

Based on whether the vq passed to vring_setup_virtqueue() is NULL,
distinguish between a normal virtqueue creation and re-enabling a reset
queue.

When re-enabling a reset queue, reuse the original callback, name and
indirect setting.

Signed-off-by: Xuan Zhuo <xuanzhuo@linux.alibaba.com>
---
 drivers/virtio/virtio_ring.c | 29 ++++++++++++++++++-----------
 1 file changed, 18 insertions(+), 11 deletions(-)

diff --git a/drivers/virtio/virtio_ring.c b/drivers/virtio/virtio_ring.c
index 4639e1643c78..20659f7ca582 100644
--- a/drivers/virtio/virtio_ring.c
+++ b/drivers/virtio/virtio_ring.c
@@ -1683,7 +1683,8 @@ static struct virtqueue *vring_create_virtqueue_packed(
 	bool context,
 	bool (*notify)(struct virtqueue *),
 	void (*callback)(struct virtqueue *),
-	const char *name)
+	const char *name,
+	struct virtqueue *_vq)
 {
 	struct vring_virtqueue *vq;
 	struct vring_packed_desc *ring;
@@ -1713,13 +1714,20 @@ static struct virtqueue *vring_create_virtqueue_packed(
 	if (!device)
 		goto err_device;
 
-	vq = kmalloc(sizeof(*vq), GFP_KERNEL);
-	if (!vq)
-		goto err_vq;
+	if (_vq) {
+		vq = to_vvq(_vq);
+	} else {
+		vq = kmalloc(sizeof(*vq), GFP_KERNEL);
+		if (!vq)
+			goto err_vq;
+
+		vq->vq.callback = callback;
+		vq->vq.name = name;
+		vq->indirect = virtio_has_feature(vdev, VIRTIO_RING_F_INDIRECT_DESC) &&
+			!context;
+	}
 
-	vq->vq.callback = callback;
 	vq->vq.vdev = vdev;
-	vq->vq.name = name;
 	vq->vq.num_free = num;
 	vq->vq.index = index;
 	vq->we_own_ring = true;
@@ -1736,8 +1744,6 @@ static struct virtqueue *vring_create_virtqueue_packed(
 	vq->last_add_time_valid = false;
 #endif
 
-	vq->indirect = virtio_has_feature(vdev, VIRTIO_RING_F_INDIRECT_DESC) &&
-		!context;
 	vq->event = virtio_has_feature(vdev, VIRTIO_RING_F_EVENT_IDX);
 
 	if (virtio_has_feature(vdev, VIRTIO_F_ORDER_PLATFORM))
@@ -1778,7 +1784,7 @@ static struct virtqueue *vring_create_virtqueue_packed(
 		goto err_desc_extra;
 
 	/* No callback?  Tell other side not to bother us. */
-	if (!callback) {
+	if (!vq->vq.callback) {
 		vq->packed.event_flags_shadow = VRING_PACKED_EVENT_FLAG_DISABLE;
 		vq->packed.vring.driver->flags =
 			cpu_to_le16(vq->packed.event_flags_shadow);
@@ -1792,7 +1798,8 @@ static struct virtqueue *vring_create_virtqueue_packed(
 err_desc_extra:
 	kfree(vq->packed.desc_state);
 err_desc_state:
-	kfree(vq);
+	if (!_vq)
+		kfree(vq);
 err_vq:
 	vring_free_queue(vdev, event_size_in_bytes, device, device_event_dma_addr);
 err_device:
@@ -2317,7 +2324,7 @@ struct virtqueue *vring_setup_virtqueue(
 	if (virtio_has_feature(vdev, VIRTIO_F_RING_PACKED))
 		return vring_create_virtqueue_packed(index, num, vring_align,
 				vdev, weak_barriers, may_reduce_num,
-				context, notify, callback, name);
+				context, notify, callback, name, vq);
 
 	return vring_create_virtqueue_split(index, num, vring_align,
 			vdev, weak_barriers, may_reduce_num,
-- 
2.31.0



* [PATCH v5 07/22] virtio_ring: queue_reset: extract the release function of the vq ring
  2022-02-14  8:13 [PATCH v5 00/22] virtio pci support VIRTIO_F_RING_RESET Xuan Zhuo
                   ` (5 preceding siblings ...)
  2022-02-14  8:14 ` [PATCH v5 06/22] virtio_ring: queue_reset: packed: " Xuan Zhuo
@ 2022-02-14  8:14 ` Xuan Zhuo
  2022-02-14  8:14 ` [PATCH v5 08/22] virtio_ring: queue_reset: add vring_release_virtqueue() Xuan Zhuo
                   ` (14 subsequent siblings)
  21 siblings, 0 replies; 42+ messages in thread
From: Xuan Zhuo @ 2022-02-14  8:14 UTC (permalink / raw)
  To: virtualization, netdev
  Cc: Michael S. Tsirkin, Jason Wang, David S. Miller, Jakub Kicinski,
	Alexei Starovoitov, Daniel Borkmann, Jesper Dangaard Brouer,
	John Fastabend, bpf

Extract a function __vring_del_virtqueue() from vring_del_virtqueue()
to handle releasing the vq's ring.

Signed-off-by: Xuan Zhuo <xuanzhuo@linux.alibaba.com>
---
 drivers/virtio/virtio_ring.c | 13 +++++++++----
 1 file changed, 9 insertions(+), 4 deletions(-)

diff --git a/drivers/virtio/virtio_ring.c b/drivers/virtio/virtio_ring.c
index 20659f7ca582..c5dd17c7dd4a 100644
--- a/drivers/virtio/virtio_ring.c
+++ b/drivers/virtio/virtio_ring.c
@@ -2355,12 +2355,10 @@ struct virtqueue *vring_new_virtqueue(unsigned int index,
 }
 EXPORT_SYMBOL_GPL(vring_new_virtqueue);
 
-void vring_del_virtqueue(struct virtqueue *_vq)
+static void __vring_del_virtqueue(struct vring_virtqueue *vq)
 {
-	struct vring_virtqueue *vq = to_vvq(_vq);
-
 	spin_lock(&vq->vq.vdev->vqs_list_lock);
-	list_del(&_vq->list);
+	list_del(&vq->vq.list);
 	spin_unlock(&vq->vq.vdev->vqs_list_lock);
 
 	if (vq->we_own_ring) {
@@ -2393,6 +2391,13 @@ void vring_del_virtqueue(struct virtqueue *_vq)
 		kfree(vq->split.desc_state);
 		kfree(vq->split.desc_extra);
 	}
+}
+
+void vring_del_virtqueue(struct virtqueue *_vq)
+{
+	struct vring_virtqueue *vq = to_vvq(_vq);
+
+	__vring_del_virtqueue(vq);
 	kfree(vq);
 }
 EXPORT_SYMBOL_GPL(vring_del_virtqueue);
-- 
2.31.0



* [PATCH v5 08/22] virtio_ring: queue_reset: add vring_release_virtqueue()
  2022-02-14  8:13 [PATCH v5 00/22] virtio pci support VIRTIO_F_RING_RESET Xuan Zhuo
                   ` (6 preceding siblings ...)
  2022-02-14  8:14 ` [PATCH v5 07/22] virtio_ring: queue_reset: extract the release function of the vq ring Xuan Zhuo
@ 2022-02-14  8:14 ` Xuan Zhuo
  2022-02-16  4:14   ` Jason Wang
  2022-02-14  8:14 ` [PATCH v5 09/22] virtio: queue_reset: struct virtio_config_ops add callbacks for queue_reset Xuan Zhuo
                   ` (13 subsequent siblings)
  21 siblings, 1 reply; 42+ messages in thread
From: Xuan Zhuo @ 2022-02-14  8:14 UTC (permalink / raw)
  To: virtualization, netdev
  Cc: Michael S. Tsirkin, Jason Wang, David S. Miller, Jakub Kicinski,
	Alexei Starovoitov, Daniel Borkmann, Jesper Dangaard Brouer,
	John Fastabend, bpf

Add vring_release_virtqueue() to release the ring of a vq.

In this process, the vq is removed from the vdev->vqs list and the
memory of the ring is released.
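
For reference, a sketch of the stage transitions tracked by the new vq->reset
field, together with a hypothetical guard a caller could use (not part of this
patch):

    /*
     * VIRTQUEUE_RESET_STAGE_NONE       normal operation
     *   -> transport reset_vq():       VIRTQUEUE_RESET_STAGE_DEVICE
     *   -> vring_release_virtqueue():  VIRTQUEUE_RESET_STAGE_RELEASE
     *   -> transport enable_reset_vq(): ring rebuilt, stage back to NONE
     */
    static bool vq_ring_released(const struct virtqueue *vq)
    {
            return vq->reset == VIRTQUEUE_RESET_STAGE_RELEASE;
    }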

Signed-off-by: Xuan Zhuo <xuanzhuo@linux.alibaba.com>
---
 drivers/virtio/virtio_ring.c | 18 +++++++++++++++++-
 include/linux/virtio.h       | 12 ++++++++++++
 2 files changed, 29 insertions(+), 1 deletion(-)

diff --git a/drivers/virtio/virtio_ring.c b/drivers/virtio/virtio_ring.c
index c5dd17c7dd4a..b37753bdbbc4 100644
--- a/drivers/virtio/virtio_ring.c
+++ b/drivers/virtio/virtio_ring.c
@@ -1730,6 +1730,7 @@ static struct virtqueue *vring_create_virtqueue_packed(
 	vq->vq.vdev = vdev;
 	vq->vq.num_free = num;
 	vq->vq.index = index;
+	vq->vq.reset = VIRTQUEUE_RESET_STAGE_NONE;
 	vq->we_own_ring = true;
 	vq->notify = notify;
 	vq->weak_barriers = weak_barriers;
@@ -2218,6 +2219,7 @@ static int __vring_init_virtqueue(struct virtqueue *_vq,
 	vq->vq.vdev = vdev;
 	vq->vq.num_free = vring.num;
 	vq->vq.index = index;
+	vq->vq.reset = VIRTQUEUE_RESET_STAGE_NONE;
 	vq->we_own_ring = false;
 	vq->notify = notify;
 	vq->weak_barriers = weak_barriers;
@@ -2397,11 +2399,25 @@ void vring_del_virtqueue(struct virtqueue *_vq)
 {
 	struct vring_virtqueue *vq = to_vvq(_vq);
 
-	__vring_del_virtqueue(vq);
+	if (_vq->reset != VIRTQUEUE_RESET_STAGE_RELEASE)
+		__vring_del_virtqueue(vq);
 	kfree(vq);
 }
 EXPORT_SYMBOL_GPL(vring_del_virtqueue);
 
+void vring_release_virtqueue(struct virtqueue *_vq)
+{
+	struct vring_virtqueue *vq = to_vvq(_vq);
+
+	if (_vq->reset != VIRTQUEUE_RESET_STAGE_DEVICE)
+		return;
+
+	__vring_del_virtqueue(vq);
+
+	_vq->reset = VIRTQUEUE_RESET_STAGE_RELEASE;
+}
+EXPORT_SYMBOL_GPL(vring_release_virtqueue);
+
 /* Manipulates transport-specific feature bits. */
 void vring_transport_features(struct virtio_device *vdev)
 {
diff --git a/include/linux/virtio.h b/include/linux/virtio.h
index 72292a62cd90..cdb2a551257c 100644
--- a/include/linux/virtio.h
+++ b/include/linux/virtio.h
@@ -10,6 +10,12 @@
 #include <linux/mod_devicetable.h>
 #include <linux/gfp.h>
 
+enum virtqueue_reset_stage {
+	VIRTQUEUE_RESET_STAGE_NONE,
+	VIRTQUEUE_RESET_STAGE_DEVICE,
+	VIRTQUEUE_RESET_STAGE_RELEASE,
+};
+
 /**
  * virtqueue - a queue to register buffers for sending or receiving.
  * @list: the chain of virtqueues for this device
@@ -32,6 +38,7 @@ struct virtqueue {
 	unsigned int index;
 	unsigned int num_free;
 	void *priv;
+	enum virtqueue_reset_stage reset;
 };
 
 int virtqueue_add_outbuf(struct virtqueue *vq,
@@ -196,4 +203,9 @@ void unregister_virtio_driver(struct virtio_driver *drv);
 #define module_virtio_driver(__virtio_driver) \
 	module_driver(__virtio_driver, register_virtio_driver, \
 			unregister_virtio_driver)
+/*
+ * Releases the ring of a virtqueue; the vq itself is not freed.
+ * This function must be called after reset_vq().
+ */
+void vring_release_virtqueue(struct virtqueue *vq);
 #endif /* _LINUX_VIRTIO_H */
-- 
2.31.0



* [PATCH v5 09/22] virtio: queue_reset: struct virtio_config_ops add callbacks for queue_reset
  2022-02-14  8:13 [PATCH v5 00/22] virtio pci support VIRTIO_F_RING_RESET Xuan Zhuo
                   ` (7 preceding siblings ...)
  2022-02-14  8:14 ` [PATCH v5 08/22] virtio_ring: queue_reset: add vring_release_virtqueue() Xuan Zhuo
@ 2022-02-14  8:14 ` Xuan Zhuo
  2022-02-14  8:14 ` [PATCH v5 10/22] virtio_pci: queue_reset: update struct virtio_pci_common_cfg and option functions Xuan Zhuo
                   ` (12 subsequent siblings)
  21 siblings, 0 replies; 42+ messages in thread
From: Xuan Zhuo @ 2022-02-14  8:14 UTC (permalink / raw)
  To: virtualization, netdev
  Cc: Michael S. Tsirkin, Jason Wang, David S. Miller, Jakub Kicinski,
	Alexei Starovoitov, Daniel Borkmann, Jesper Dangaard Brouer,
	John Fastabend, bpf

Performing reset on a queue is divided into four steps:

1. reset_vq: reset one vq
2. recycle the buffers from the vq with virtqueue_detach_unused_buf()
3. release the ring of the vq with vring_release_virtqueue()
4. enable_reset_vq: re-enable the reset queue

So add two callbacks, reset_vq and enable_reset_vq, to struct
virtio_config_ops.
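
As a minimal sketch of how a transport opts in (the foo_* names are
hypothetical stubs; the real virtio-pci implementation comes later in this
series):

    static int foo_reset_vq(struct virtqueue *vq)
    {
            /* ask the device to reset this queue and detach it from irq handling */
            return 0;
    }

    static int foo_enable_reset_vq(struct virtqueue *vq)
    {
            /* rebuild the ring and re-enable the queue at the device */
            return 0;
    }

    static const struct virtio_config_ops foo_config_ops = {
            /* ... the existing callbacks ... */
            .reset_vq        = foo_reset_vq,
            .enable_reset_vq = foo_enable_reset_vq,
    };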

Signed-off-by: Xuan Zhuo <xuanzhuo@linux.alibaba.com>
---
 include/linux/virtio_config.h | 14 ++++++++++++++
 1 file changed, 14 insertions(+)

diff --git a/include/linux/virtio_config.h b/include/linux/virtio_config.h
index 4d107ad31149..8cde339d40b4 100644
--- a/include/linux/virtio_config.h
+++ b/include/linux/virtio_config.h
@@ -74,6 +74,18 @@ struct virtio_shm_region {
  * @set_vq_affinity: set the affinity for a virtqueue (optional).
  * @get_vq_affinity: get the affinity for a virtqueue (optional).
  * @get_shm_region: get a shared memory region based on the index.
+ * @reset_vq: reset a queue individually (optional).
+ *	vq: the virtqueue
+ *	Returns 0 on success or error status
+ *	After successfully calling this, be sure to call
+ *	virtqueue_detach_unused_buf() to recycle the buffer in the ring, and
+ *	then call vring_release_virtqueue() to release the vq ring.
+ *	Caller should guarantee that the vring is not accessed by any functions
+ *	of virtqueue.
+ * @enable_reset_vq: enable a reset queue
+ *	vq: the virtqueue
+ *	Returns 0 on success or error status
+ *	If reset_vq is set, then enable_reset_vq must also be set.
  */
 typedef void vq_callback_t(struct virtqueue *);
 struct virtio_config_ops {
@@ -100,6 +112,8 @@ struct virtio_config_ops {
 			int index);
 	bool (*get_shm_region)(struct virtio_device *vdev,
 			       struct virtio_shm_region *region, u8 id);
+	int (*reset_vq)(struct virtqueue *vq);
+	int (*enable_reset_vq)(struct virtqueue *vq);
 };
 
 /* If driver didn't advertise the feature, it will never appear. */
-- 
2.31.0



* [PATCH v5 10/22] virtio_pci: queue_reset: update struct virtio_pci_common_cfg and option functions
  2022-02-14  8:13 [PATCH v5 00/22] virtio pci support VIRTIO_F_RING_RESET Xuan Zhuo
                   ` (8 preceding siblings ...)
  2022-02-14  8:14 ` [PATCH v5 09/22] virtio: queue_reset: struct virtio_config_ops add callbacks for queue_reset Xuan Zhuo
@ 2022-02-14  8:14 ` Xuan Zhuo
  2022-02-14  8:14 ` [PATCH v5 11/22] virtio_pci: queue_reset: release vq by vp_dev->vqs Xuan Zhuo
                   ` (11 subsequent siblings)
  21 siblings, 0 replies; 42+ messages in thread
From: Xuan Zhuo @ 2022-02-14  8:14 UTC (permalink / raw)
  To: virtualization, netdev
  Cc: Michael S. Tsirkin, Jason Wang, David S. Miller, Jakub Kicinski,
	Alexei Starovoitov, Daniel Borkmann, Jesper Dangaard Brouer,
	John Fastabend, bpf

Add queue_reset to struct virtio_pci_common_cfg, and add the related
operation functions.

To avoid breaking the uABI, add a new struct virtio_pci_common_cfg_reset.

Signed-off-by: Xuan Zhuo <xuanzhuo@linux.alibaba.com>
---
 drivers/virtio/virtio_pci_modern_dev.c | 36 ++++++++++++++++++++++++++
 include/linux/virtio_pci_modern.h      |  2 ++
 include/uapi/linux/virtio_pci.h        |  7 +++++
 3 files changed, 45 insertions(+)

diff --git a/drivers/virtio/virtio_pci_modern_dev.c b/drivers/virtio/virtio_pci_modern_dev.c
index e8b3ff2b9fbc..f67e88982918 100644
--- a/drivers/virtio/virtio_pci_modern_dev.c
+++ b/drivers/virtio/virtio_pci_modern_dev.c
@@ -3,6 +3,7 @@
 #include <linux/virtio_pci_modern.h>
 #include <linux/module.h>
 #include <linux/pci.h>
+#include <linux/delay.h>
 
 /*
  * vp_modern_map_capability - map a part of virtio pci capability
@@ -463,6 +464,41 @@ void vp_modern_set_status(struct virtio_pci_modern_device *mdev,
 }
 EXPORT_SYMBOL_GPL(vp_modern_set_status);
 
+/*
+ * vp_modern_get_queue_reset - get the queue reset status
+ * @mdev: the modern virtio-pci device
+ * @index: queue index
+ */
+int vp_modern_get_queue_reset(struct virtio_pci_modern_device *mdev, u16 index)
+{
+	struct virtio_pci_common_cfg_reset __iomem *cfg;
+
+	cfg = (struct virtio_pci_common_cfg_reset *)mdev->common;
+
+	vp_iowrite16(index, &cfg->cfg.queue_select);
+	return vp_ioread16(&cfg->queue_reset);
+}
+EXPORT_SYMBOL_GPL(vp_modern_get_queue_reset);
+
+/*
+ * vp_modern_set_queue_reset - reset the queue
+ * @mdev: the modern virtio-pci device
+ * @index: queue index
+ */
+void vp_modern_set_queue_reset(struct virtio_pci_modern_device *mdev, u16 index)
+{
+	struct virtio_pci_common_cfg_reset __iomem *cfg;
+
+	cfg = (struct virtio_pci_common_cfg_reset *)mdev->common;
+
+	vp_iowrite16(index, &cfg->cfg.queue_select);
+	vp_iowrite16(1, &cfg->queue_reset);
+
+	while (vp_ioread16(&cfg->queue_reset) != 1)
+		msleep(1);
+}
+EXPORT_SYMBOL_GPL(vp_modern_set_queue_reset);
+
 /*
  * vp_modern_queue_vector - set the MSIX vector for a specific virtqueue
  * @mdev: the modern virtio-pci device
diff --git a/include/linux/virtio_pci_modern.h b/include/linux/virtio_pci_modern.h
index eb2bd9b4077d..cc4154dd7b28 100644
--- a/include/linux/virtio_pci_modern.h
+++ b/include/linux/virtio_pci_modern.h
@@ -106,4 +106,6 @@ void __iomem * vp_modern_map_vq_notify(struct virtio_pci_modern_device *mdev,
 				       u16 index, resource_size_t *pa);
 int vp_modern_probe(struct virtio_pci_modern_device *mdev);
 void vp_modern_remove(struct virtio_pci_modern_device *mdev);
+int vp_modern_get_queue_reset(struct virtio_pci_modern_device *mdev, u16 index);
+void vp_modern_set_queue_reset(struct virtio_pci_modern_device *mdev, u16 index);
 #endif
diff --git a/include/uapi/linux/virtio_pci.h b/include/uapi/linux/virtio_pci.h
index 22bec9bd0dfc..d9462efd6ce8 100644
--- a/include/uapi/linux/virtio_pci.h
+++ b/include/uapi/linux/virtio_pci.h
@@ -173,6 +173,13 @@ struct virtio_pci_common_cfg_notify {
 	__le16 padding;
 };
 
+struct virtio_pci_common_cfg_reset {
+	struct virtio_pci_common_cfg cfg;
+
+	__le16 queue_notify_data;	/* read-write */
+	__le16 queue_reset;		/* read-write */
+};
+
 /* Fields in VIRTIO_PCI_CAP_PCI_CFG: */
 struct virtio_pci_cfg_cap {
 	struct virtio_pci_cap cap;
-- 
2.31.0



* [PATCH v5 11/22] virtio_pci: queue_reset: release vq by vp_dev->vqs
  2022-02-14  8:13 [PATCH v5 00/22] virtio pci support VIRTIO_F_RING_RESET Xuan Zhuo
                   ` (9 preceding siblings ...)
  2022-02-14  8:14 ` [PATCH v5 10/22] virtio_pci: queue_reset: update struct virtio_pci_common_cfg and option functions Xuan Zhuo
@ 2022-02-14  8:14 ` Xuan Zhuo
  2022-02-14  8:14 ` [PATCH v5 12/22] virtio_pci: queue_reset: setup_vq() support vring_setup_virtqueue() Xuan Zhuo
                   ` (10 subsequent siblings)
  21 siblings, 0 replies; 42+ messages in thread
From: Xuan Zhuo @ 2022-02-14  8:14 UTC (permalink / raw)
  To: virtualization, netdev
  Cc: Michael S. Tsirkin, Jason Wang, David S. Miller, Jakub Kicinski,
	Alexei Starovoitov, Daniel Borkmann, Jesper Dangaard Brouer,
	John Fastabend, bpf

During queue reset, the vq is removed from vdev->vqs, so the original
logic of iterating vdev->vqs may miss some vqs. Instead, release the
vqs by iterating the vp_dev->vqs array.

Signed-off-by: Xuan Zhuo <xuanzhuo@linux.alibaba.com>
---
 drivers/virtio/virtio_pci_common.c | 22 ++++++++++++++++++----
 drivers/virtio/virtio_pci_common.h |  2 ++
 2 files changed, 20 insertions(+), 4 deletions(-)

diff --git a/drivers/virtio/virtio_pci_common.c b/drivers/virtio/virtio_pci_common.c
index fdbde1db5ec5..6b2573ec1ae8 100644
--- a/drivers/virtio/virtio_pci_common.c
+++ b/drivers/virtio/virtio_pci_common.c
@@ -260,12 +260,20 @@ static void vp_del_vq(struct virtqueue *vq)
 void vp_del_vqs(struct virtio_device *vdev)
 {
 	struct virtio_pci_device *vp_dev = to_vp_device(vdev);
-	struct virtqueue *vq, *n;
-	int i;
+	struct virtio_pci_vq_info *info;
+	struct virtqueue *vq;
+	int i, v;
+
+	for (i = 0; i < vp_dev->nvqs; ++i) {
+
+		info = vp_dev->vqs[i];
+		if (!info)
+			continue;
+
+		vq = info->vq;
 
-	list_for_each_entry_safe(vq, n, &vdev->vqs, list) {
 		if (vp_dev->per_vq_vectors) {
-			int v = vp_dev->vqs[vq->index]->msix_vector;
+			v = info->msix_vector;
 
 			if (v != VIRTIO_MSI_NO_VECTOR) {
 				int irq = pci_irq_vector(vp_dev->pci_dev, v);
@@ -275,6 +283,7 @@ void vp_del_vqs(struct virtio_device *vdev)
 			}
 		}
 		vp_del_vq(vq);
+		vp_dev->vqs[i] = NULL;
 	}
 	vp_dev->per_vq_vectors = false;
 
@@ -308,6 +317,7 @@ void vp_del_vqs(struct virtio_device *vdev)
 	vp_dev->msix_affinity_masks = NULL;
 	kfree(vp_dev->vqs);
 	vp_dev->vqs = NULL;
+	vp_dev->nvqs = 0;
 }
 
 static int vp_find_vqs_msix(struct virtio_device *vdev, unsigned nvqs,
@@ -324,6 +334,8 @@ static int vp_find_vqs_msix(struct virtio_device *vdev, unsigned nvqs,
 	if (!vp_dev->vqs)
 		return -ENOMEM;
 
+	vp_dev->nvqs = nvqs;
+
 	if (per_vq_vectors) {
 		/* Best option: one for change interrupt, one per vq. */
 		nvectors = 1;
@@ -395,6 +407,8 @@ static int vp_find_vqs_intx(struct virtio_device *vdev, unsigned nvqs,
 	if (!vp_dev->vqs)
 		return -ENOMEM;
 
+	vp_dev->nvqs = nvqs;
+
 	err = request_irq(vp_dev->pci_dev->irq, vp_interrupt, IRQF_SHARED,
 			dev_name(&vdev->dev), vp_dev);
 	if (err)
diff --git a/drivers/virtio/virtio_pci_common.h b/drivers/virtio/virtio_pci_common.h
index 23f6c5c678d5..392d990b7c73 100644
--- a/drivers/virtio/virtio_pci_common.h
+++ b/drivers/virtio/virtio_pci_common.h
@@ -60,6 +60,8 @@ struct virtio_pci_device {
 	/* array of all queues for house-keeping */
 	struct virtio_pci_vq_info **vqs;
 
+	u32 nvqs;
+
 	/* MSI-X support */
 	int msix_enabled;
 	int intx_enabled;
-- 
2.31.0



* [PATCH v5 12/22] virtio_pci: queue_reset: setup_vq() support vring_setup_virtqueue()
  2022-02-14  8:13 [PATCH v5 00/22] virtio pci support VIRTIO_F_RING_RESET Xuan Zhuo
                   ` (10 preceding siblings ...)
  2022-02-14  8:14 ` [PATCH v5 11/22] virtio_pci: queue_reset: release vq by vp_dev->vqs Xuan Zhuo
@ 2022-02-14  8:14 ` Xuan Zhuo
  2022-02-14  8:14 ` [PATCH v5 13/22] virtio_pci: queue_reset: reserve vq->priv for re-enable queue Xuan Zhuo
                   ` (9 subsequent siblings)
  21 siblings, 0 replies; 42+ messages in thread
From: Xuan Zhuo @ 2022-02-14  8:14 UTC (permalink / raw)
  To: virtualization, netdev
  Cc: Michael S. Tsirkin, Jason Wang, David S. Miller, Jakub Kicinski,
	Alexei Starovoitov, Daniel Borkmann, Jesper Dangaard Brouer,
	John Fastabend, bpf

The modern setup_vq() now calls vring_setup_virtqueue() instead of
vring_create_virtqueue().

vp_setup_vq() can pass the original vq (from info->vq) to re-enable the
vq.

Also allow direct calls to vp_setup_vq() from virtio_pci_modern.c.

Signed-off-by: Xuan Zhuo <xuanzhuo@linux.alibaba.com>
---
 drivers/virtio/virtio_pci_common.c | 31 ++++++++++++++++++------------
 drivers/virtio/virtio_pci_common.h |  8 +++++++-
 drivers/virtio/virtio_pci_legacy.c |  4 ++--
 drivers/virtio/virtio_pci_modern.c | 12 ++++++------
 4 files changed, 34 insertions(+), 21 deletions(-)

diff --git a/drivers/virtio/virtio_pci_common.c b/drivers/virtio/virtio_pci_common.c
index 6b2573ec1ae8..5a4f750a0b97 100644
--- a/drivers/virtio/virtio_pci_common.c
+++ b/drivers/virtio/virtio_pci_common.c
@@ -205,28 +205,33 @@ static int vp_request_msix_vectors(struct virtio_device *vdev, int nvectors,
 	return err;
 }
 
-static struct virtqueue *vp_setup_vq(struct virtio_device *vdev, unsigned index,
-				     void (*callback)(struct virtqueue *vq),
-				     const char *name,
-				     bool ctx,
-				     u16 msix_vec)
+struct virtqueue *vp_setup_vq(struct virtio_device *vdev, unsigned int index,
+			      void (*callback)(struct virtqueue *vq),
+			      const char *name,
+			      bool ctx,
+			      u16 msix_vec)
 {
 	struct virtio_pci_device *vp_dev = to_vp_device(vdev);
-	struct virtio_pci_vq_info *info = kmalloc(sizeof *info, GFP_KERNEL);
+	struct virtio_pci_vq_info *info;
 	struct virtqueue *vq;
 	unsigned long flags;
 
-	/* fill out our structure that represents an active queue */
-	if (!info)
-		return ERR_PTR(-ENOMEM);
+	info = vp_dev->vqs[index];
+	if (!info) {
+		info = kzalloc(sizeof(*info), GFP_KERNEL);
+
+		/* fill out our structure that represents an active queue */
+		if (!info)
+			return ERR_PTR(-ENOMEM);
+	}
 
 	vq = vp_dev->setup_vq(vp_dev, info, index, callback, name, ctx,
-			      msix_vec);
+			      msix_vec, info->vq);
 	if (IS_ERR(vq))
 		goto out_info;
 
 	info->vq = vq;
-	if (callback) {
+	if (vq->callback) {
 		spin_lock_irqsave(&vp_dev->lock, flags);
 		list_add(&info->node, &vp_dev->virtqueues);
 		spin_unlock_irqrestore(&vp_dev->lock, flags);
@@ -238,7 +243,9 @@ static struct virtqueue *vp_setup_vq(struct virtio_device *vdev, unsigned index,
 	return vq;
 
 out_info:
-	kfree(info);
+	if (!info->vq)
+		kfree(info);
+
 	return vq;
 }
 
diff --git a/drivers/virtio/virtio_pci_common.h b/drivers/virtio/virtio_pci_common.h
index 392d990b7c73..696e3f6a493b 100644
--- a/drivers/virtio/virtio_pci_common.h
+++ b/drivers/virtio/virtio_pci_common.h
@@ -84,7 +84,8 @@ struct virtio_pci_device {
 				      void (*callback)(struct virtqueue *vq),
 				      const char *name,
 				      bool ctx,
-				      u16 msix_vec);
+				      u16 msix_vec,
+				      struct virtqueue *vq);
 	void (*del_vq)(struct virtio_pci_vq_info *info);
 
 	u16 (*config_vector)(struct virtio_pci_device *vp_dev, u16 vector);
@@ -117,6 +118,11 @@ int vp_find_vqs(struct virtio_device *vdev, unsigned nvqs,
 		struct virtqueue *vqs[], vq_callback_t *callbacks[],
 		const char * const names[], const bool *ctx,
 		struct irq_affinity *desc);
+struct virtqueue *vp_setup_vq(struct virtio_device *vdev, unsigned int index,
+			      void (*callback)(struct virtqueue *vq),
+			      const char *name,
+			      bool ctx,
+			      u16 msix_vec);
 const char *vp_bus_name(struct virtio_device *vdev);
 
 /* Setup the affinity for a virtqueue:
diff --git a/drivers/virtio/virtio_pci_legacy.c b/drivers/virtio/virtio_pci_legacy.c
index 34141b9abe27..96ec2b04e97d 100644
--- a/drivers/virtio/virtio_pci_legacy.c
+++ b/drivers/virtio/virtio_pci_legacy.c
@@ -113,9 +113,9 @@ static struct virtqueue *setup_vq(struct virtio_pci_device *vp_dev,
 				  void (*callback)(struct virtqueue *vq),
 				  const char *name,
 				  bool ctx,
-				  u16 msix_vec)
+				  u16 msix_vec,
+				  struct virtqueue *vq)
 {
-	struct virtqueue *vq;
 	u16 num;
 	int err;
 	u64 q_pfn;
diff --git a/drivers/virtio/virtio_pci_modern.c b/drivers/virtio/virtio_pci_modern.c
index 5455bc041fb6..5af82948f0ae 100644
--- a/drivers/virtio/virtio_pci_modern.c
+++ b/drivers/virtio/virtio_pci_modern.c
@@ -187,11 +187,11 @@ static struct virtqueue *setup_vq(struct virtio_pci_device *vp_dev,
 				  void (*callback)(struct virtqueue *vq),
 				  const char *name,
 				  bool ctx,
-				  u16 msix_vec)
+				  u16 msix_vec,
+				  struct virtqueue *vq)
 {
 
 	struct virtio_pci_modern_device *mdev = &vp_dev->mdev;
-	struct virtqueue *vq;
 	u16 num;
 	int err;
 
@@ -211,10 +211,10 @@ static struct virtqueue *setup_vq(struct virtio_pci_device *vp_dev,
 	info->msix_vector = msix_vec;
 
 	/* create the vring */
-	vq = vring_create_virtqueue(index, num,
-				    SMP_CACHE_BYTES, &vp_dev->vdev,
-				    true, true, ctx,
-				    vp_notify, callback, name);
+	vq = vring_setup_virtqueue(index, num,
+				   SMP_CACHE_BYTES, &vp_dev->vdev,
+				   true, true, ctx,
+				   vp_notify, callback, name, vq);
 	if (!vq)
 		return ERR_PTR(-ENOMEM);
 
-- 
2.31.0



* [PATCH v5 13/22] virtio_pci: queue_reset: reserve vq->priv for re-enable queue
  2022-02-14  8:13 [PATCH v5 00/22] virtio pci support VIRTIO_F_RING_RESET Xuan Zhuo
                   ` (11 preceding siblings ...)
  2022-02-14  8:14 ` [PATCH v5 12/22] virtio_pci: queue_reset: setup_vq() support vring_setup_virtqueue() Xuan Zhuo
@ 2022-02-14  8:14 ` Xuan Zhuo
  2022-02-14  8:14 ` [PATCH v5 14/22] virtio_pci: queue_reset: support VIRTIO_F_RING_RESET Xuan Zhuo
                   ` (8 subsequent siblings)
  21 siblings, 0 replies; 42+ messages in thread
From: Xuan Zhuo @ 2022-02-14  8:14 UTC (permalink / raw)
  To: virtualization, netdev
  Cc: Michael S. Tsirkin, Jason Wang, David S. Miller, Jakub Kicinski,
	Alexei Starovoitov, Daniel Borkmann, Jesper Dangaard Brouer,
	John Fastabend, bpf

Preserve vq->priv during reset to prevent vp_modern_map_vq_notify()
from being called repeatedly.

Only set vq->priv = NULL during a normal virtqueue setup, and keep
vq->priv when re-enabling a reset queue.

Signed-off-by: Xuan Zhuo <xuanzhuo@linux.alibaba.com>
---
 drivers/virtio/virtio_pci_modern.c | 8 +++++---
 drivers/virtio/virtio_ring.c       | 2 ++
 2 files changed, 7 insertions(+), 3 deletions(-)

diff --git a/drivers/virtio/virtio_pci_modern.c b/drivers/virtio/virtio_pci_modern.c
index 5af82948f0ae..bed3e9b84272 100644
--- a/drivers/virtio/virtio_pci_modern.c
+++ b/drivers/virtio/virtio_pci_modern.c
@@ -224,10 +224,12 @@ static struct virtqueue *setup_vq(struct virtio_pci_device *vp_dev,
 				virtqueue_get_avail_addr(vq),
 				virtqueue_get_used_addr(vq));
 
-	vq->priv = (void __force *)vp_modern_map_vq_notify(mdev, index, NULL);
 	if (!vq->priv) {
-		err = -ENOMEM;
-		goto err_map_notify;
+		vq->priv = (void __force *)vp_modern_map_vq_notify(mdev, index, NULL);
+		if (!vq->priv) {
+			err = -ENOMEM;
+			goto err_map_notify;
+		}
 	}
 
 	if (msix_vec != VIRTIO_MSI_NO_VECTOR) {
diff --git a/drivers/virtio/virtio_ring.c b/drivers/virtio/virtio_ring.c
index b37753bdbbc4..6a892c8ea16e 100644
--- a/drivers/virtio/virtio_ring.c
+++ b/drivers/virtio/virtio_ring.c
@@ -1723,6 +1723,7 @@ static struct virtqueue *vring_create_virtqueue_packed(
 
 		vq->vq.callback = callback;
 		vq->vq.name = name;
+		vq->vq.priv = NULL;
 		vq->indirect = virtio_has_feature(vdev, VIRTIO_RING_F_INDIRECT_DESC) &&
 			!context;
 	}
@@ -2211,6 +2212,7 @@ static int __vring_init_virtqueue(struct virtqueue *_vq,
 	if (!reset) {
 		vq->vq.callback = callback;
 		vq->vq.name = name;
+		vq->vq.priv = NULL;
 		vq->indirect = virtio_has_feature(vdev, VIRTIO_RING_F_INDIRECT_DESC) &&
 			!context;
 	}
-- 
2.31.0



* [PATCH v5 14/22] virtio_pci: queue_reset: support VIRTIO_F_RING_RESET
  2022-02-14  8:13 [PATCH v5 00/22] virtio pci support VIRTIO_F_RING_RESET Xuan Zhuo
                   ` (12 preceding siblings ...)
  2022-02-14  8:14 ` [PATCH v5 13/22] virtio_pci: queue_reset: reserve vq->priv for re-enable queue Xuan Zhuo
@ 2022-02-14  8:14 ` Xuan Zhuo
  2022-02-16  4:14   ` Jason Wang
  2022-02-14  8:14 ` [PATCH v5 15/22] virtio: queue_reset: add helper Xuan Zhuo
                   ` (7 subsequent siblings)
  21 siblings, 1 reply; 42+ messages in thread
From: Xuan Zhuo @ 2022-02-14  8:14 UTC (permalink / raw)
  To: virtualization, netdev
  Cc: Michael S. Tsirkin, Jason Wang, David S. Miller, Jakub Kicinski,
	Alexei Starovoitov, Daniel Borkmann, Jesper Dangaard Brouer,
	John Fastabend, bpf

This patch implements virtio pci support for QUEUE RESET.

Performing reset on a queue is divided into these steps:

1. reset_vq: reset one vq
2. recycle the buffers from the vq with virtqueue_detach_unused_buf()
3. release the ring of the vq with vring_release_virtqueue()
4. enable_reset_vq: re-enable the reset queue

This patch implements reset_vq and enable_reset_vq for the PCI
transport.

Signed-off-by: Xuan Zhuo <xuanzhuo@linux.alibaba.com>
---
 drivers/virtio/virtio_pci_common.c |  8 ++--
 drivers/virtio/virtio_pci_modern.c | 60 ++++++++++++++++++++++++++++++
 2 files changed, 65 insertions(+), 3 deletions(-)

diff --git a/drivers/virtio/virtio_pci_common.c b/drivers/virtio/virtio_pci_common.c
index 5a4f750a0b97..9ea319b1d404 100644
--- a/drivers/virtio/virtio_pci_common.c
+++ b/drivers/virtio/virtio_pci_common.c
@@ -255,9 +255,11 @@ static void vp_del_vq(struct virtqueue *vq)
 	struct virtio_pci_vq_info *info = vp_dev->vqs[vq->index];
 	unsigned long flags;
 
-	spin_lock_irqsave(&vp_dev->lock, flags);
-	list_del(&info->node);
-	spin_unlock_irqrestore(&vp_dev->lock, flags);
+	if (!vq->reset) {
+		spin_lock_irqsave(&vp_dev->lock, flags);
+		list_del(&info->node);
+		spin_unlock_irqrestore(&vp_dev->lock, flags);
+	}
 
 	vp_dev->del_vq(info);
 	kfree(info);
diff --git a/drivers/virtio/virtio_pci_modern.c b/drivers/virtio/virtio_pci_modern.c
index bed3e9b84272..7d28f4c36fc2 100644
--- a/drivers/virtio/virtio_pci_modern.c
+++ b/drivers/virtio/virtio_pci_modern.c
@@ -34,6 +34,9 @@ static void vp_transport_features(struct virtio_device *vdev, u64 features)
 	if ((features & BIT_ULL(VIRTIO_F_SR_IOV)) &&
 			pci_find_ext_capability(pci_dev, PCI_EXT_CAP_ID_SRIOV))
 		__virtio_set_bit(vdev, VIRTIO_F_SR_IOV);
+
+	if (features & BIT_ULL(VIRTIO_F_RING_RESET))
+		__virtio_set_bit(vdev, VIRTIO_F_RING_RESET);
 }
 
 /* virtio config->finalize_features() implementation */
@@ -176,6 +179,59 @@ static void vp_reset(struct virtio_device *vdev)
 	vp_disable_cbs(vdev);
 }
 
+static int vp_modern_reset_vq(struct virtqueue *vq)
+{
+	struct virtio_pci_device *vp_dev = to_vp_device(vq->vdev);
+	struct virtio_pci_modern_device *mdev = &vp_dev->mdev;
+	struct virtio_pci_vq_info *info;
+	unsigned long flags;
+
+	if (!virtio_has_feature(vq->vdev, VIRTIO_F_RING_RESET))
+		return -ENOENT;
+
+	vp_modern_set_queue_reset(mdev, vq->index);
+
+	info = vp_dev->vqs[vq->index];
+
+	/* delete vq from irq handler */
+	spin_lock_irqsave(&vp_dev->lock, flags);
+	list_del(&info->node);
+	spin_unlock_irqrestore(&vp_dev->lock, flags);
+
+	INIT_LIST_HEAD(&info->node);
+
+	vq->reset = VIRTQUEUE_RESET_STAGE_DEVICE;
+
+	return 0;
+}
+
+static int vp_modern_enable_reset_vq(struct virtqueue *vq)
+{
+	struct virtio_pci_device *vp_dev = to_vp_device(vq->vdev);
+	struct virtio_pci_modern_device *mdev = &vp_dev->mdev;
+	struct virtio_pci_vq_info *info;
+	struct virtqueue *_vq;
+
+	if (vq->reset != VIRTQUEUE_RESET_STAGE_RELEASE)
+		return -EBUSY;
+
+	/* check queue reset status */
+	if (vp_modern_get_queue_reset(mdev, vq->index) != 1)
+		return -EBUSY;
+
+	info = vp_dev->vqs[vq->index];
+	_vq = vp_setup_vq(vq->vdev, vq->index, NULL, NULL, false,
+			 info->msix_vector);
+	if (IS_ERR(_vq)) {
+		vq->reset = VIRTQUEUE_RESET_STAGE_RELEASE;
+		return PTR_ERR(_vq);
+	}
+
+	vp_modern_set_queue_enable(&vp_dev->mdev, vq->index, true);
+
+	return 0;
+}
+
 static u16 vp_config_vector(struct virtio_pci_device *vp_dev, u16 vector)
 {
 	return vp_modern_config_vector(&vp_dev->mdev, vector);
@@ -397,6 +453,8 @@ static const struct virtio_config_ops virtio_pci_config_nodev_ops = {
 	.set_vq_affinity = vp_set_vq_affinity,
 	.get_vq_affinity = vp_get_vq_affinity,
 	.get_shm_region  = vp_get_shm_region,
+	.reset_vq	 = vp_modern_reset_vq,
+	.enable_reset_vq = vp_modern_enable_reset_vq,
 };
 
 static const struct virtio_config_ops virtio_pci_config_ops = {
@@ -415,6 +473,8 @@ static const struct virtio_config_ops virtio_pci_config_ops = {
 	.set_vq_affinity = vp_set_vq_affinity,
 	.get_vq_affinity = vp_get_vq_affinity,
 	.get_shm_region  = vp_get_shm_region,
+	.reset_vq	 = vp_modern_reset_vq,
+	.enable_reset_vq = vp_modern_enable_reset_vq,
 };
 
 /* the PCI probing function */
-- 
2.31.0



* [PATCH v5 15/22] virtio: queue_reset: add helper
  2022-02-14  8:13 [PATCH v5 00/22] virtio pci support VIRTIO_F_RING_RESET Xuan Zhuo
                   ` (13 preceding siblings ...)
  2022-02-14  8:14 ` [PATCH v5 14/22] virtio_pci: queue_reset: support VIRTIO_F_RING_RESET Xuan Zhuo
@ 2022-02-14  8:14 ` Xuan Zhuo
  2022-02-14  8:14 ` [PATCH v5 16/22] virtio_net: split free_unused_bufs() Xuan Zhuo
                   ` (6 subsequent siblings)
  21 siblings, 0 replies; 42+ messages in thread
From: Xuan Zhuo @ 2022-02-14  8:14 UTC (permalink / raw)
  To: virtualization, netdev
  Cc: Michael S. Tsirkin, Jason Wang, David S. Miller, Jakub Kicinski,
	Alexei Starovoitov, Daniel Borkmann, Jesper Dangaard Brouer,
	John Fastabend, bpf

Add helpers for virtio queue reset (a short usage sketch follows the
list):

* virtio_reset_vq: reset a queue individually
* virtio_enable_resetq: enable a reset queue
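
A small usage sketch (the caller and its fallback policy are hypothetical, not
part of this patch), showing the -ENOENT case when the transport does not
implement the callbacks:

    static int try_reset_vq(struct virtqueue *vq)
    {
            int err = virtio_reset_vq(vq);

            /* Transport without a reset_vq callback: report "not supported". */
            if (err == -ENOENT)
                    return -EOPNOTSUPP;

            return err;
    }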

Signed-off-by: Xuan Zhuo <xuanzhuo@linux.alibaba.com>
---
 include/linux/virtio_config.h | 38 +++++++++++++++++++++++++++++++++++
 1 file changed, 38 insertions(+)

diff --git a/include/linux/virtio_config.h b/include/linux/virtio_config.h
index 8cde339d40b4..cd7f7f44ce38 100644
--- a/include/linux/virtio_config.h
+++ b/include/linux/virtio_config.h
@@ -233,6 +233,44 @@ int virtio_find_vqs_ctx(struct virtio_device *vdev, unsigned nvqs,
 				      desc);
 }
 
+/**
+ * virtio_reset_vq - reset a queue individually
+ * @vq: the virtqueue
+ *
+ * returns 0 on success or error status
+ *
+ * After successfully calling this, be sure to call
+ * virtqueue_detach_unused_buf() to recycle the buffer in the ring, and
+ * then call vring_release_virtqueue() to release the vq ring.
+ *
+ * Caller should guarantee that the vring is not accessed by any functions
+ * of virtqueue.
+ */
+static inline
+int virtio_reset_vq(struct virtqueue *vq)
+{
+	if (!vq->vdev->config->reset_vq)
+		return -ENOENT;
+
+	return vq->vdev->config->reset_vq(vq);
+}
+
+/**
+ * virtio_enable_resetq - enable a reset queue
+ * @vq: the virtqueue
+ *
+ * returns 0 on success or error status
+ *
+ */
+static inline
+int virtio_enable_resetq(struct virtqueue *vq)
+{
+	if (!vq->vdev->config->enable_reset_vq)
+		return -ENOENT;
+
+	return vq->vdev->config->enable_reset_vq(vq);
+}
+
 /**
  * virtio_device_ready - enable vq use in probe function
  * @vdev: the device
-- 
2.31.0



* [PATCH v5 16/22] virtio_net: split free_unused_bufs()
  2022-02-14  8:13 [PATCH v5 00/22] virtio pci support VIRTIO_F_RING_RESET Xuan Zhuo
                   ` (14 preceding siblings ...)
  2022-02-14  8:14 ` [PATCH v5 15/22] virtio: queue_reset: add helper Xuan Zhuo
@ 2022-02-14  8:14 ` Xuan Zhuo
  2022-02-14  8:14 ` [PATCH v5 17/22] virtio_net: support rx/tx queue reset Xuan Zhuo
                   ` (5 subsequent siblings)
  21 siblings, 0 replies; 42+ messages in thread
From: Xuan Zhuo @ 2022-02-14  8:14 UTC (permalink / raw)
  To: virtualization, netdev
  Cc: Michael S. Tsirkin, Jason Wang, David S. Miller, Jakub Kicinski,
	Alexei Starovoitov, Daniel Borkmann, Jesper Dangaard Brouer,
	John Fastabend, bpf

This patch splits free_unused_bufs() into two helpers, one for freeing
sq buffers and one for rq buffers.

When enabling/disabling individual tx/rx queues is supported in the
future, the buffers of a single sq or rq will need to be recycled
separately.

Signed-off-by: Xuan Zhuo <xuanzhuo@linux.alibaba.com>
---
 drivers/net/virtio_net.c | 53 +++++++++++++++++++++++-----------------
 1 file changed, 31 insertions(+), 22 deletions(-)

diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
index a801ea40908f..9a1445236e23 100644
--- a/drivers/net/virtio_net.c
+++ b/drivers/net/virtio_net.c
@@ -2804,36 +2804,45 @@ static void free_receive_page_frags(struct virtnet_info *vi)
 			put_page(vi->rq[i].alloc_frag.page);
 }
 
-static void free_unused_bufs(struct virtnet_info *vi)
+static void virtnet_sq_free_unused_bufs(struct virtnet_info *vi,
+					struct send_queue *sq)
 {
 	void *buf;
-	int i;
 
-	for (i = 0; i < vi->max_queue_pairs; i++) {
-		struct virtqueue *vq = vi->sq[i].vq;
-		while ((buf = virtqueue_detach_unused_buf(vq)) != NULL) {
-			if (!is_xdp_frame(buf))
-				dev_kfree_skb(buf);
-			else
-				xdp_return_frame(ptr_to_xdp(buf));
-		}
+	while ((buf = virtqueue_detach_unused_buf(sq->vq)) != NULL) {
+		if (!is_xdp_frame(buf))
+			dev_kfree_skb(buf);
+		else
+			xdp_return_frame(ptr_to_xdp(buf));
 	}
+}
 
-	for (i = 0; i < vi->max_queue_pairs; i++) {
-		struct virtqueue *vq = vi->rq[i].vq;
-
-		while ((buf = virtqueue_detach_unused_buf(vq)) != NULL) {
-			if (vi->mergeable_rx_bufs) {
-				put_page(virt_to_head_page(buf));
-			} else if (vi->big_packets) {
-				give_pages(&vi->rq[i], buf);
-			} else {
-				put_page(virt_to_head_page(buf));
-			}
-		}
+static void virtnet_rq_free_unused_bufs(struct virtnet_info *vi,
+					struct receive_queue *rq)
+{
+	void *buf;
+
+	while ((buf = virtqueue_detach_unused_buf(rq->vq)) != NULL) {
+		if (vi->mergeable_rx_bufs)
+			put_page(virt_to_head_page(buf));
+		else if (vi->big_packets)
+			give_pages(rq, buf);
+		else
+			put_page(virt_to_head_page(buf));
 	}
 }
 
+static void free_unused_bufs(struct virtnet_info *vi)
+{
+	int i;
+
+	for (i = 0; i < vi->max_queue_pairs; i++)
+		virtnet_sq_free_unused_bufs(vi, vi->sq + i);
+
+	for (i = 0; i < vi->max_queue_pairs; i++)
+		virtnet_rq_free_unused_bufs(vi, vi->rq + i);
+}
+
 static void virtnet_del_vqs(struct virtnet_info *vi)
 {
 	struct virtio_device *vdev = vi->vdev;
-- 
2.31.0


^ permalink raw reply related	[flat|nested] 42+ messages in thread

* [PATCH v5 17/22] virtio_net: support rx/tx queue reset
  2022-02-14  8:13 [PATCH v5 00/22] virtio pci support VIRTIO_F_RING_RESET Xuan Zhuo
                   ` (15 preceding siblings ...)
  2022-02-14  8:14 ` [PATCH v5 16/22] virtio_net: split free_unused_bufs() Xuan Zhuo
@ 2022-02-14  8:14 ` Xuan Zhuo
  2022-02-16  4:14   ` Jason Wang
  2022-02-14  8:14 ` [PATCH v5 18/22] virtio: add helper virtqueue_get_vring_max_size() Xuan Zhuo
                   ` (4 subsequent siblings)
  21 siblings, 1 reply; 42+ messages in thread
From: Xuan Zhuo @ 2022-02-14  8:14 UTC (permalink / raw)
  To: virtualization, netdev
  Cc: Michael S. Tsirkin, Jason Wang, David S. Miller, Jakub Kicinski,
	Alexei Starovoitov, Daniel Borkmann, Jesper Dangaard Brouer,
	John Fastabend, bpf

This patch implements reset for the rx and tx queues.

Based on this, the ring num of a queue can be changed and the buffers
left in the queue can be recycled quickly.

Disabling a queue should not fail as long as the device supports queue
reset.

Re-enabling a queue, however, may fail because of memory allocation.
In that case the vq is unusable, but napi_enable() must still be
called: napi_disable() acts like a lock, so every napi_disable() has
to be paired with a subsequent napi_enable().
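
A condensed sketch of that pairing rule, as used on the rx disable
error path in this patch:

        napi_disable(&rq->napi);

        err = virtio_reset_vq(rq->vq);
        if (err) {
                /* vq may be unusable here, but napi_disable() must
                 * still be paired with napi_enable() before returning
                 */
                virtnet_napi_enable(rq->vq, &rq->napi);
                return err;
        }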

Signed-off-by: Xuan Zhuo <xuanzhuo@linux.alibaba.com>
---
 drivers/net/virtio_net.c | 123 +++++++++++++++++++++++++++++++++++++++
 1 file changed, 123 insertions(+)

diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
index 9a1445236e23..a4ffd7cdf623 100644
--- a/drivers/net/virtio_net.c
+++ b/drivers/net/virtio_net.c
@@ -251,6 +251,11 @@ struct padded_vnet_hdr {
 	char padding[4];
 };
 
+static void virtnet_sq_free_unused_bufs(struct virtnet_info *vi,
+					struct send_queue *sq);
+static void virtnet_rq_free_unused_bufs(struct virtnet_info *vi,
+					struct receive_queue *rq);
+
 static bool is_xdp_frame(void *ptr)
 {
 	return (unsigned long)ptr & VIRTIO_XDP_FLAG;
@@ -1369,6 +1374,9 @@ static void virtnet_napi_enable(struct virtqueue *vq, struct napi_struct *napi)
 {
 	napi_enable(napi);
 
+	if (vq->reset)
+		return;
+
 	/* If all buffers were filled by other side before we napi_enabled, we
 	 * won't get another interrupt, so process any outstanding packets now.
 	 * Call local_bh_enable after to trigger softIRQ processing.
@@ -1413,6 +1421,10 @@ static void refill_work(struct work_struct *work)
 		struct receive_queue *rq = &vi->rq[i];
 
 		napi_disable(&rq->napi);
+		if (rq->vq->reset) {
+			virtnet_napi_enable(rq->vq, &rq->napi);
+			continue;
+		}
 		still_empty = !try_fill_recv(vi, rq, GFP_KERNEL);
 		virtnet_napi_enable(rq->vq, &rq->napi);
 
@@ -1523,6 +1535,9 @@ static void virtnet_poll_cleantx(struct receive_queue *rq)
 	if (!sq->napi.weight || is_xdp_raw_buffer_queue(vi, index))
 		return;
 
+	if (sq->vq->reset)
+		return;
+
 	if (__netif_tx_trylock(txq)) {
 		do {
 			virtqueue_disable_cb(sq->vq);
@@ -1769,6 +1784,114 @@ static netdev_tx_t start_xmit(struct sk_buff *skb, struct net_device *dev)
 	return NETDEV_TX_OK;
 }
 
+static int virtnet_rx_vq_disable(struct virtnet_info *vi,
+				 struct receive_queue *rq)
+{
+	int err;
+
+	napi_disable(&rq->napi);
+
+	err = virtio_reset_vq(rq->vq);
+	if (err)
+		goto err;
+
+	virtnet_rq_free_unused_bufs(vi, rq);
+
+	vring_release_virtqueue(rq->vq);
+
+	return 0;
+
+err:
+	virtnet_napi_enable(rq->vq, &rq->napi);
+	return err;
+}
+
+static int virtnet_tx_vq_disable(struct virtnet_info *vi,
+				 struct send_queue *sq)
+{
+	struct netdev_queue *txq;
+	int err, qindex;
+
+	qindex = sq - vi->sq;
+
+	txq = netdev_get_tx_queue(vi->dev, qindex);
+	__netif_tx_lock_bh(txq);
+
+	netif_stop_subqueue(vi->dev, qindex);
+	virtnet_napi_tx_disable(&sq->napi);
+
+	err = virtio_reset_vq(sq->vq);
+	if (err) {
+		virtnet_napi_tx_enable(vi, sq->vq, &sq->napi);
+		netif_start_subqueue(vi->dev, qindex);
+
+		__netif_tx_unlock_bh(txq);
+		return err;
+	}
+	__netif_tx_unlock_bh(txq);
+
+	virtnet_sq_free_unused_bufs(vi, sq);
+
+	vring_release_virtqueue(sq->vq);
+
+	return 0;
+}
+
+static int virtnet_tx_vq_enable(struct virtnet_info *vi, struct send_queue *sq)
+{
+	int err;
+
+	err = virtio_enable_resetq(sq->vq);
+	if (!err)
+		netif_start_subqueue(vi->dev, sq - vi->sq);
+
+	virtnet_napi_tx_enable(vi, sq->vq, &sq->napi);
+
+	return err;
+}
+
+static int virtnet_rx_vq_enable(struct virtnet_info *vi,
+				struct receive_queue *rq)
+{
+	int err;
+
+	err = virtio_enable_resetq(rq->vq);
+
+	virtnet_napi_enable(rq->vq, &rq->napi);
+
+	return err;
+}
+
+static int virtnet_rx_vq_reset(struct virtnet_info *vi, int i)
+{
+	int err;
+
+	err = virtnet_rx_vq_disable(vi, vi->rq + i);
+	if (err)
+		return err;
+
+	err = virtnet_rx_vq_enable(vi, vi->rq + i);
+	if (err)
+		netdev_err(vi->dev,
+			   "enable rx reset vq fail: rx queue index: %d err: %d\n", i, err);
+	return err;
+}
+
+static int virtnet_tx_vq_reset(struct virtnet_info *vi, int i)
+{
+	int err;
+
+	err = virtnet_tx_vq_disable(vi, vi->sq + i);
+	if (err)
+		return err;
+
+	err = virtnet_tx_vq_enable(vi, vi->sq + i);
+	if (err)
+		netdev_err(vi->dev,
+			   "enable tx reset vq fail: tx queue index: %d err: %d\n", i, err);
+	return err;
+}
+
 /*
  * Send command via the control virtqueue and check status.  Commands
  * supported by the hypervisor, as indicated by feature bits, should
-- 
2.31.0


^ permalink raw reply related	[flat|nested] 42+ messages in thread

* [PATCH v5 18/22] virtio: add helper virtqueue_get_vring_max_size()
  2022-02-14  8:13 [PATCH v5 00/22] virtio pci support VIRTIO_F_RING_RESET Xuan Zhuo
                   ` (16 preceding siblings ...)
  2022-02-14  8:14 ` [PATCH v5 17/22] virtio_net: support rx/tx queue reset Xuan Zhuo
@ 2022-02-14  8:14 ` Xuan Zhuo
  2022-02-14  8:14 ` [PATCH v5 19/22] virtio: add helper virtio_set_max_ring_num() Xuan Zhuo
                   ` (3 subsequent siblings)
  21 siblings, 0 replies; 42+ messages in thread
From: Xuan Zhuo @ 2022-02-14  8:14 UTC (permalink / raw)
  To: virtualization, netdev
  Cc: Michael S. Tsirkin, Jason Wang, David S. Miller, Jakub Kicinski,
	Alexei Starovoitov, Daniel Borkmann, Jesper Dangaard Brouer,
	John Fastabend, bpf

Record the maximum ring size supported by the device for each
virtqueue.

virtio-net can then report the hardware-supported maximum ring size
via 'ethtool -g eth0'.

Signed-off-by: Xuan Zhuo <xuanzhuo@linux.alibaba.com>
---
 drivers/virtio/virtio_mmio.c       |  2 ++
 drivers/virtio/virtio_pci_legacy.c |  2 ++
 drivers/virtio/virtio_pci_modern.c |  2 ++
 drivers/virtio/virtio_ring.c       | 13 +++++++++++++
 include/linux/virtio.h             |  2 ++
 5 files changed, 21 insertions(+)

diff --git a/drivers/virtio/virtio_mmio.c b/drivers/virtio/virtio_mmio.c
index 56128b9c46eb..a41abc8051b9 100644
--- a/drivers/virtio/virtio_mmio.c
+++ b/drivers/virtio/virtio_mmio.c
@@ -390,6 +390,8 @@ static struct virtqueue *vm_setup_vq(struct virtio_device *vdev, unsigned index,
 		goto error_new_virtqueue;
 	}
 
+	vq->num_max = num;
+
 	/* Activate the queue */
 	writel(virtqueue_get_vring_size(vq), vm_dev->base + VIRTIO_MMIO_QUEUE_NUM);
 	if (vm_dev->version == 1) {
diff --git a/drivers/virtio/virtio_pci_legacy.c b/drivers/virtio/virtio_pci_legacy.c
index 96ec2b04e97d..340149f6196d 100644
--- a/drivers/virtio/virtio_pci_legacy.c
+++ b/drivers/virtio/virtio_pci_legacy.c
@@ -135,6 +135,8 @@ static struct virtqueue *setup_vq(struct virtio_pci_device *vp_dev,
 	if (!vq)
 		return ERR_PTR(-ENOMEM);
 
+	vq->num_max = num;
+
 	q_pfn = virtqueue_get_desc_addr(vq) >> VIRTIO_PCI_QUEUE_ADDR_SHIFT;
 	if (q_pfn >> 32) {
 		dev_err(&vp_dev->pci_dev->dev,
diff --git a/drivers/virtio/virtio_pci_modern.c b/drivers/virtio/virtio_pci_modern.c
index 7d28f4c36fc2..5811691a90ec 100644
--- a/drivers/virtio/virtio_pci_modern.c
+++ b/drivers/virtio/virtio_pci_modern.c
@@ -274,6 +274,8 @@ static struct virtqueue *setup_vq(struct virtio_pci_device *vp_dev,
 	if (!vq)
 		return ERR_PTR(-ENOMEM);
 
+	vq->num_max = num;
+
 	/* activate the queue */
 	vp_modern_set_queue_size(mdev, index, virtqueue_get_vring_size(vq));
 	vp_modern_queue_address(mdev, index, virtqueue_get_desc_addr(vq),
diff --git a/drivers/virtio/virtio_ring.c b/drivers/virtio/virtio_ring.c
index 6a892c8ea16e..1a123b5e5371 100644
--- a/drivers/virtio/virtio_ring.c
+++ b/drivers/virtio/virtio_ring.c
@@ -2447,6 +2447,19 @@ void vring_transport_features(struct virtio_device *vdev)
 }
 EXPORT_SYMBOL_GPL(vring_transport_features);
 
+/**
+ * virtqueue_get_vring_max_size - return the max size of the virtqueue's vring
+ * @_vq: the struct virtqueue containing the vring of interest.
+ *
+ * Returns the max size of the vring.  This is mainly used for boasting to
+ * userspace.  Unlike other operations, this need not be serialized.
+ */
+unsigned int virtqueue_get_vring_max_size(struct virtqueue *_vq)
+{
+	return _vq->num_max;
+}
+EXPORT_SYMBOL_GPL(virtqueue_get_vring_max_size);
+
 /**
  * virtqueue_get_vring_size - return the size of the virtqueue's vring
  * @_vq: the struct virtqueue containing the vring of interest.
diff --git a/include/linux/virtio.h b/include/linux/virtio.h
index cdb2a551257c..1153b093c53d 100644
--- a/include/linux/virtio.h
+++ b/include/linux/virtio.h
@@ -37,6 +37,7 @@ struct virtqueue {
 	struct virtio_device *vdev;
 	unsigned int index;
 	unsigned int num_free;
+	unsigned int num_max;
 	void *priv;
 	enum virtqueue_reset_stage reset;
 };
@@ -87,6 +88,7 @@ bool virtqueue_enable_cb_delayed(struct virtqueue *vq);
 
 void *virtqueue_detach_unused_buf(struct virtqueue *vq);
 
+unsigned int virtqueue_get_vring_max_size(struct virtqueue *vq);
 unsigned int virtqueue_get_vring_size(struct virtqueue *vq);
 
 bool virtqueue_is_broken(struct virtqueue *vq);
-- 
2.31.0


^ permalink raw reply related	[flat|nested] 42+ messages in thread

* [PATCH v5 19/22] virtio: add helper virtio_set_max_ring_num()
  2022-02-14  8:13 [PATCH v5 00/22] virtio pci support VIRTIO_F_RING_RESET Xuan Zhuo
                   ` (17 preceding siblings ...)
  2022-02-14  8:14 ` [PATCH v5 18/22] virtio: add helper virtqueue_get_vring_max_size() Xuan Zhuo
@ 2022-02-14  8:14 ` Xuan Zhuo
  2022-02-16  4:14   ` Jason Wang
  2022-02-14  8:14 ` [PATCH v5 20/22] virtio_net: set the default max ring num Xuan Zhuo
                   ` (2 subsequent siblings)
  21 siblings, 1 reply; 42+ messages in thread
From: Xuan Zhuo @ 2022-02-14  8:14 UTC (permalink / raw)
  To: virtualization, netdev
  Cc: Michael S. Tsirkin, Jason Wang, David S. Miller, Jakub Kicinski,
	Alexei Starovoitov, Daniel Borkmann, Jesper Dangaard Brouer,
	John Fastabend, bpf

Add the helper virtio_set_max_ring_num() to set an upper limit on the
ring num used when creating a virtqueue.

It can be used to limit the ring num before calling find_vqs(), or to
change the ring num when re-enabling a reset queue.
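
For example, with the implementation below, on a device without
VIRTIO_F_RING_PACKED a call like virtio_set_max_ring_num(vdev, 1000)
rounds the limit down to 512 (the next lower power of two), while
virtio_set_max_ring_num(vdev, 0) clears the limit.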

Signed-off-by: Xuan Zhuo <xuanzhuo@linux.alibaba.com>
---
 drivers/virtio/virtio_ring.c  |  6 ++++++
 include/linux/virtio.h        |  1 +
 include/linux/virtio_config.h | 30 ++++++++++++++++++++++++++++++
 3 files changed, 37 insertions(+)

diff --git a/drivers/virtio/virtio_ring.c b/drivers/virtio/virtio_ring.c
index 1a123b5e5371..a77a82883e44 100644
--- a/drivers/virtio/virtio_ring.c
+++ b/drivers/virtio/virtio_ring.c
@@ -943,6 +943,9 @@ static struct virtqueue *vring_create_virtqueue_split(
 	size_t queue_size_in_bytes;
 	struct vring vring;
 
+	if (vdev->max_ring_num && num > vdev->max_ring_num)
+		num = vdev->max_ring_num;
+
 	/* We assume num is a power of 2. */
 	if (num & (num - 1)) {
 		dev_warn(&vdev->dev, "Bad virtqueue length %u\n", num);
@@ -1692,6 +1695,9 @@ static struct virtqueue *vring_create_virtqueue_packed(
 	dma_addr_t ring_dma_addr, driver_event_dma_addr, device_event_dma_addr;
 	size_t ring_size_in_bytes, event_size_in_bytes;
 
+	if (vdev->max_ring_num && num > vdev->max_ring_num)
+		num = vdev->max_ring_num;
+
 	ring_size_in_bytes = num * sizeof(struct vring_packed_desc);
 
 	ring = vring_alloc_queue(vdev, ring_size_in_bytes,
diff --git a/include/linux/virtio.h b/include/linux/virtio.h
index 1153b093c53d..45525beb2ec4 100644
--- a/include/linux/virtio.h
+++ b/include/linux/virtio.h
@@ -127,6 +127,7 @@ struct virtio_device {
 	struct list_head vqs;
 	u64 features;
 	void *priv;
+	u16 max_ring_num;
 };
 
 static inline struct virtio_device *dev_to_virtio(struct device *_dev)
diff --git a/include/linux/virtio_config.h b/include/linux/virtio_config.h
index cd7f7f44ce38..d7cb2d0341ee 100644
--- a/include/linux/virtio_config.h
+++ b/include/linux/virtio_config.h
@@ -200,6 +200,36 @@ static inline bool virtio_has_dma_quirk(const struct virtio_device *vdev)
 	return !virtio_has_feature(vdev, VIRTIO_F_ACCESS_PLATFORM);
 }
 
+/**
+ * virtio_set_max_ring_num - set max ring num
+ * @vdev: the device
+ * @num: max ring num. Zero clear the limit.
+ *
+ * When creating a virtqueue, use this value as the upper limit of ring num.
+ *
+ * Returns 0 on success or error status
+ */
+static inline
+int virtio_set_max_ring_num(struct virtio_device *vdev, u16 num)
+{
+	if (!num) {
+		vdev->max_ring_num = num;
+		return 0;
+	}
+
+	if (!virtio_has_feature(vdev, VIRTIO_F_RING_PACKED)) {
+		if (!is_power_of_2(num)) {
+			num = __rounddown_pow_of_two(num);
+
+			if (!num)
+				return -EINVAL;
+		}
+	}
+
+	vdev->max_ring_num = num;
+	return 0;
+}
+
 static inline
 struct virtqueue *virtio_find_single_vq(struct virtio_device *vdev,
 					vq_callback_t *c, const char *n)
-- 
2.31.0


^ permalink raw reply related	[flat|nested] 42+ messages in thread

* [PATCH v5 20/22] virtio_net: set the default max ring num
  2022-02-14  8:13 [PATCH v5 00/22] virtio pci support VIRTIO_F_RING_RESET Xuan Zhuo
                   ` (18 preceding siblings ...)
  2022-02-14  8:14 ` [PATCH v5 19/22] virtio: add helper virtio_set_max_ring_num() Xuan Zhuo
@ 2022-02-14  8:14 ` Xuan Zhuo
  2022-02-16  4:14   ` Jason Wang
  2022-02-14  8:14 ` [PATCH v5 21/22] virtio_net: get max ring size by virtqueue_get_vring_max_size() Xuan Zhuo
  2022-02-14  8:14 ` [PATCH v5 22/22] virtio_net: support set_ringparam Xuan Zhuo
  21 siblings, 1 reply; 42+ messages in thread
From: Xuan Zhuo @ 2022-02-14  8:14 UTC (permalink / raw)
  To: virtualization, netdev
  Cc: Michael S. Tsirkin, Jason Wang, David S. Miller, Jakub Kicinski,
	Alexei Starovoitov, Daniel Borkmann, Jesper Dangaard Brouer,
	John Fastabend, bpf

Set the default maximum ring num via virtio_set_max_ring_num().

The default maximum ring num is 1024.

Signed-off-by: Xuan Zhuo <xuanzhuo@linux.alibaba.com>
---
 drivers/net/virtio_net.c | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
index a4ffd7cdf623..77e61fe0b2ce 100644
--- a/drivers/net/virtio_net.c
+++ b/drivers/net/virtio_net.c
@@ -35,6 +35,8 @@ module_param(napi_tx, bool, 0644);
 #define GOOD_PACKET_LEN (ETH_HLEN + VLAN_HLEN + ETH_DATA_LEN)
 #define GOOD_COPY_LEN	128
 
+#define VIRTNET_DEFAULT_MAX_RING_NUM 1024
+
 #define VIRTNET_RX_PAD (NET_IP_ALIGN + NET_SKB_PAD)
 
 /* Amount of XDP headroom to prepend to packets for use by xdp_adjust_head */
@@ -3045,6 +3047,8 @@ static int virtnet_find_vqs(struct virtnet_info *vi)
 			ctx[rxq2vq(i)] = true;
 	}
 
+	virtio_set_max_ring_num(vi->vdev, VIRTNET_DEFAULT_MAX_RING_NUM);
+
 	ret = virtio_find_vqs_ctx(vi->vdev, total_vqs, vqs, callbacks,
 				  names, ctx, NULL);
 	if (ret)
-- 
2.31.0


^ permalink raw reply related	[flat|nested] 42+ messages in thread

* [PATCH v5 21/22] virtio_net: get max ring size by virtqueue_get_vring_max_size()
  2022-02-14  8:13 [PATCH v5 00/22] virtio pci support VIRTIO_F_RING_RESET Xuan Zhuo
                   ` (19 preceding siblings ...)
  2022-02-14  8:14 ` [PATCH v5 20/22] virtio_net: set the default max ring num Xuan Zhuo
@ 2022-02-14  8:14 ` Xuan Zhuo
  2022-02-14  8:14 ` [PATCH v5 22/22] virtio_net: support set_ringparam Xuan Zhuo
  21 siblings, 0 replies; 42+ messages in thread
From: Xuan Zhuo @ 2022-02-14  8:14 UTC (permalink / raw)
  To: virtualization, netdev
  Cc: Michael S. Tsirkin, Jason Wang, David S. Miller, Jakub Kicinski,
	Alexei Starovoitov, Daniel Borkmann, Jesper Dangaard Brouer,
	John Fastabend, bpf

Use virtqueue_get_vring_max_size() in virtnet_get_ringparam() to set
rx_max_pending and tx_max_pending; the current rx_pending and
tx_pending now come from virtqueue_get_vring_size().

Signed-off-by: Xuan Zhuo <xuanzhuo@linux.alibaba.com>
---
 drivers/net/virtio_net.c | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
index 77e61fe0b2ce..f9bb760c6dbd 100644
--- a/drivers/net/virtio_net.c
+++ b/drivers/net/virtio_net.c
@@ -2302,10 +2302,10 @@ static void virtnet_get_ringparam(struct net_device *dev,
 {
 	struct virtnet_info *vi = netdev_priv(dev);
 
-	ring->rx_max_pending = virtqueue_get_vring_size(vi->rq[0].vq);
-	ring->tx_max_pending = virtqueue_get_vring_size(vi->sq[0].vq);
-	ring->rx_pending = ring->rx_max_pending;
-	ring->tx_pending = ring->tx_max_pending;
+	ring->rx_max_pending = virtqueue_get_vring_max_size(vi->rq[0].vq);
+	ring->tx_max_pending = virtqueue_get_vring_max_size(vi->sq[0].vq);
+	ring->rx_pending = virtqueue_get_vring_size(vi->rq[0].vq);
+	ring->tx_pending = virtqueue_get_vring_size(vi->sq[0].vq);
 }
 
 
-- 
2.31.0


^ permalink raw reply related	[flat|nested] 42+ messages in thread

* [PATCH v5 22/22] virtio_net: support set_ringparam
  2022-02-14  8:13 [PATCH v5 00/22] virtio pci support VIRTIO_F_RING_RESET Xuan Zhuo
                   ` (20 preceding siblings ...)
  2022-02-14  8:14 ` [PATCH v5 21/22] virtio_net: get max ring size by virtqueue_get_vring_max_size() Xuan Zhuo
@ 2022-02-14  8:14 ` Xuan Zhuo
  2022-02-16  4:14   ` Jason Wang
  21 siblings, 1 reply; 42+ messages in thread
From: Xuan Zhuo @ 2022-02-14  8:14 UTC (permalink / raw)
  To: virtualization, netdev
  Cc: Michael S. Tsirkin, Jason Wang, David S. Miller, Jakub Kicinski,
	Alexei Starovoitov, Daniel Borkmann, Jesper Dangaard Brouer,
	John Fastabend, bpf

Support set_ringparam based on virtio queue reset.

The rx_pending and tx_pending values passed in must be powers of 2.
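
For example, once this lands a user should be able to shrink both
rings with something like 'ethtool -G eth0 rx 512 tx 512'; each queue
is then reset and its ring re-created with the new size.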

Signed-off-by: Xuan Zhuo <xuanzhuo@linux.alibaba.com>
---
 drivers/net/virtio_net.c | 50 ++++++++++++++++++++++++++++++++++++++++
 1 file changed, 50 insertions(+)

diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
index f9bb760c6dbd..bf460ea87354 100644
--- a/drivers/net/virtio_net.c
+++ b/drivers/net/virtio_net.c
@@ -2308,6 +2308,55 @@ static void virtnet_get_ringparam(struct net_device *dev,
 	ring->tx_pending = virtqueue_get_vring_size(vi->sq[0].vq);
 }
 
+static int virtnet_set_ringparam(struct net_device *dev,
+				 struct ethtool_ringparam *ring,
+				 struct kernel_ethtool_ringparam *kernel_ring,
+				 struct netlink_ext_ack *extack)
+{
+	struct virtnet_info *vi = netdev_priv(dev);
+	u32 rx_pending, tx_pending;
+	int i, err;
+
+	if (ring->rx_mini_pending || ring->rx_jumbo_pending)
+		return -EINVAL;
+
+	rx_pending = virtqueue_get_vring_size(vi->rq[0].vq);
+	tx_pending = virtqueue_get_vring_size(vi->sq[0].vq);
+
+	if (ring->rx_pending == rx_pending &&
+	    ring->tx_pending == tx_pending)
+		return 0;
+
+	if (ring->rx_pending > virtqueue_get_vring_max_size(vi->rq[0].vq))
+		return -EINVAL;
+
+	if (ring->tx_pending > virtqueue_get_vring_max_size(vi->sq[0].vq))
+		return -EINVAL;
+
+	if (!is_power_of_2(ring->rx_pending))
+		return -EINVAL;
+
+	if (!is_power_of_2(ring->tx_pending))
+		return -EINVAL;
+
+	for (i = 0; i < vi->max_queue_pairs; i++) {
+		if (ring->tx_pending != tx_pending) {
+			virtio_set_max_ring_num(vi->vdev, ring->tx_pending);
+			err = virtnet_tx_vq_reset(vi, i);
+			if (err)
+				return err;
+		}
+
+		if (ring->rx_pending != rx_pending) {
+			virtio_set_max_ring_num(vi->vdev, ring->rx_pending);
+			err = virtnet_rx_vq_reset(vi, i);
+			if (err)
+				return err;
+		}
+	}
+
+	return 0;
+}
 
 static void virtnet_get_drvinfo(struct net_device *dev,
 				struct ethtool_drvinfo *info)
@@ -2541,6 +2590,7 @@ static const struct ethtool_ops virtnet_ethtool_ops = {
 	.get_drvinfo = virtnet_get_drvinfo,
 	.get_link = ethtool_op_get_link,
 	.get_ringparam = virtnet_get_ringparam,
+	.set_ringparam = virtnet_set_ringparam,
 	.get_strings = virtnet_get_strings,
 	.get_sset_count = virtnet_get_sset_count,
 	.get_ethtool_stats = virtnet_get_ethtool_stats,
-- 
2.31.0


^ permalink raw reply related	[flat|nested] 42+ messages in thread

* Re: [PATCH v5 19/22] virtio: add helper virtio_set_max_ring_num()
  2022-02-14  8:14 ` [PATCH v5 19/22] virtio: add helper virtio_set_max_ring_num() Xuan Zhuo
@ 2022-02-16  4:14   ` Jason Wang
  2022-02-16  7:54     ` Xuan Zhuo
  0 siblings, 1 reply; 42+ messages in thread
From: Jason Wang @ 2022-02-16  4:14 UTC (permalink / raw)
  To: Xuan Zhuo
  Cc: virtualization, netdev, Michael S. Tsirkin, David S. Miller,
	Jakub Kicinski, Alexei Starovoitov, Daniel Borkmann,
	Jesper Dangaard Brouer, John Fastabend, bpf

On Mon, Feb 14, 2022 at 4:15 PM Xuan Zhuo <xuanzhuo@linux.alibaba.com> wrote:
>
> Added helper virtio_set_max_ring_num() to set the upper limit of ring
> num when creating a virtqueue.
>
> Can be used to limit ring num before find_vqs() call. Or change ring num
> when re-enable reset queue.

Is there a chance that RX and TX may want different ring sizes? If
yes, it might be even better to have a per-vq limit via find_vqs()?

>
> Signed-off-by: Xuan Zhuo <xuanzhuo@linux.alibaba.com>
> ---
>  drivers/virtio/virtio_ring.c  |  6 ++++++
>  include/linux/virtio.h        |  1 +
>  include/linux/virtio_config.h | 30 ++++++++++++++++++++++++++++++
>  3 files changed, 37 insertions(+)
>
> diff --git a/drivers/virtio/virtio_ring.c b/drivers/virtio/virtio_ring.c
> index 1a123b5e5371..a77a82883e44 100644
> --- a/drivers/virtio/virtio_ring.c
> +++ b/drivers/virtio/virtio_ring.c
> @@ -943,6 +943,9 @@ static struct virtqueue *vring_create_virtqueue_split(
>         size_t queue_size_in_bytes;
>         struct vring vring;
>
> +       if (vdev->max_ring_num && num > vdev->max_ring_num)
> +               num = vdev->max_ring_num;
> +
>         /* We assume num is a power of 2. */
>         if (num & (num - 1)) {
>                 dev_warn(&vdev->dev, "Bad virtqueue length %u\n", num);
> @@ -1692,6 +1695,9 @@ static struct virtqueue *vring_create_virtqueue_packed(
>         dma_addr_t ring_dma_addr, driver_event_dma_addr, device_event_dma_addr;
>         size_t ring_size_in_bytes, event_size_in_bytes;
>
> +       if (vdev->max_ring_num && num > vdev->max_ring_num)
> +               num = vdev->max_ring_num;
> +
>         ring_size_in_bytes = num * sizeof(struct vring_packed_desc);
>
>         ring = vring_alloc_queue(vdev, ring_size_in_bytes,
> diff --git a/include/linux/virtio.h b/include/linux/virtio.h
> index 1153b093c53d..45525beb2ec4 100644
> --- a/include/linux/virtio.h
> +++ b/include/linux/virtio.h
> @@ -127,6 +127,7 @@ struct virtio_device {
>         struct list_head vqs;
>         u64 features;
>         void *priv;
> +       u16 max_ring_num;
>  };
>
>  static inline struct virtio_device *dev_to_virtio(struct device *_dev)
> diff --git a/include/linux/virtio_config.h b/include/linux/virtio_config.h
> index cd7f7f44ce38..d7cb2d0341ee 100644
> --- a/include/linux/virtio_config.h
> +++ b/include/linux/virtio_config.h
> @@ -200,6 +200,36 @@ static inline bool virtio_has_dma_quirk(const struct virtio_device *vdev)
>         return !virtio_has_feature(vdev, VIRTIO_F_ACCESS_PLATFORM);
>  }
>
> +/**
> + * virtio_set_max_ring_num - set max ring num
> + * @vdev: the device
> + * @num: max ring num. Zero clear the limit.
> + *
> + * When creating a virtqueue, use this value as the upper limit of ring num.
> + *
> + * Returns 0 on success or error status
> + */
> +static inline
> +int virtio_set_max_ring_num(struct virtio_device *vdev, u16 num)
> +{

Having a dedicated helper for a per-device parameter usually means the
use cases are quite limited. For example, it seems this can only be
used before DRIVER_OK is set?

And in patch 17 this function is called even if we only modify the RX
size. That is probably another argument for a more flexible API, such
as exporting vring allocation/deallocation helpers and extending
find_vqs(); one possible shape is sketched below.
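
One possible shape of such an API, purely as a sketch (the sizes[]
parameter is hypothetical and not part of the current config ops):

        /* per-vq ring size limit passed at find_vqs() time (sketch only) */
        int (*find_vqs)(struct virtio_device *vdev, unsigned nvqs,
                        struct virtqueue *vqs[], vq_callback_t *callbacks[],
                        const char * const names[], const u32 sizes[],
                        const bool *ctx, struct irq_affinity *desc);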

Thanks


> +       if (!num) {
> +               vdev->max_ring_num = num;
> +               return 0;
> +       }
> +
> +       if (!virtio_has_feature(vdev, VIRTIO_F_RING_PACKED)) {
> +               if (!is_power_of_2(num)) {
> +                       num = __rounddown_pow_of_two(num);
> +
> +                       if (!num)
> +                               return -EINVAL;
> +               }
> +       }
> +
> +       vdev->max_ring_num = num;
> +       return 0;
> +}
> +
>  static inline
>  struct virtqueue *virtio_find_single_vq(struct virtio_device *vdev,
>                                         vq_callback_t *c, const char *n)
> --
> 2.31.0
>


^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: [PATCH v5 17/22] virtio_net: support rx/tx queue reset
  2022-02-14  8:14 ` [PATCH v5 17/22] virtio_net: support rx/tx queue reset Xuan Zhuo
@ 2022-02-16  4:14   ` Jason Wang
  2022-02-16  7:56     ` Xuan Zhuo
  0 siblings, 1 reply; 42+ messages in thread
From: Jason Wang @ 2022-02-16  4:14 UTC (permalink / raw)
  To: Xuan Zhuo
  Cc: virtualization, netdev, Michael S. Tsirkin, David S. Miller,
	Jakub Kicinski, Alexei Starovoitov, Daniel Borkmann,
	Jesper Dangaard Brouer, John Fastabend, bpf

On Mon, Feb 14, 2022 at 4:14 PM Xuan Zhuo <xuanzhuo@linux.alibaba.com> wrote:
>
> This patch implements the reset function of the rx, tx queues.
>
> Based on this function, it is possible to modify the ring num of the
> queue. And quickly recycle the buffer in the queue.
>
> In the process of the queue disable, in theory, as long as virtio
> supports queue reset, there will be no exceptions.
>
> However, in the process of the queue enable, there may be exceptions due to
> memory allocation.  In this case, vq is not available, but we still have
> to execute napi_enable(). Because napi_disable is similar to a lock,
> napi_enable must be called after calling napi_disable.
>
> Signed-off-by: Xuan Zhuo <xuanzhuo@linux.alibaba.com>
> ---
>  drivers/net/virtio_net.c | 123 +++++++++++++++++++++++++++++++++++++++
>  1 file changed, 123 insertions(+)
>
> diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
> index 9a1445236e23..a4ffd7cdf623 100644
> --- a/drivers/net/virtio_net.c
> +++ b/drivers/net/virtio_net.c
> @@ -251,6 +251,11 @@ struct padded_vnet_hdr {
>         char padding[4];
>  };
>
> +static void virtnet_sq_free_unused_bufs(struct virtnet_info *vi,
> +                                       struct send_queue *sq);
> +static void virtnet_rq_free_unused_bufs(struct virtnet_info *vi,
> +                                       struct receive_queue *rq);
> +
>  static bool is_xdp_frame(void *ptr)
>  {
>         return (unsigned long)ptr & VIRTIO_XDP_FLAG;
> @@ -1369,6 +1374,9 @@ static void virtnet_napi_enable(struct virtqueue *vq, struct napi_struct *napi)
>  {
>         napi_enable(napi);
>
> +       if (vq->reset)
> +               return;
> +
>         /* If all buffers were filled by other side before we napi_enabled, we
>          * won't get another interrupt, so process any outstanding packets now.
>          * Call local_bh_enable after to trigger softIRQ processing.
> @@ -1413,6 +1421,10 @@ static void refill_work(struct work_struct *work)
>                 struct receive_queue *rq = &vi->rq[i];
>
>                 napi_disable(&rq->napi);
> +               if (rq->vq->reset) {
> +                       virtnet_napi_enable(rq->vq, &rq->napi);
> +                       continue;
> +               }
>                 still_empty = !try_fill_recv(vi, rq, GFP_KERNEL);
>                 virtnet_napi_enable(rq->vq, &rq->napi);
>
> @@ -1523,6 +1535,9 @@ static void virtnet_poll_cleantx(struct receive_queue *rq)
>         if (!sq->napi.weight || is_xdp_raw_buffer_queue(vi, index))
>                 return;
>
> +       if (sq->vq->reset)
> +               return;
> +
>         if (__netif_tx_trylock(txq)) {
>                 do {
>                         virtqueue_disable_cb(sq->vq);
> @@ -1769,6 +1784,114 @@ static netdev_tx_t start_xmit(struct sk_buff *skb, struct net_device *dev)
>         return NETDEV_TX_OK;
>  }
>
> +static int virtnet_rx_vq_disable(struct virtnet_info *vi,
> +                                struct receive_queue *rq)
> +{
> +       int err;
> +
> +       napi_disable(&rq->napi);
> +
> +       err = virtio_reset_vq(rq->vq);
> +       if (err)
> +               goto err;
> +
> +       virtnet_rq_free_unused_bufs(vi, rq);
> +
> +       vring_release_virtqueue(rq->vq);
> +
> +       return 0;
> +
> +err:
> +       virtnet_napi_enable(rq->vq, &rq->napi);
> +       return err;
> +}
> +
> +static int virtnet_tx_vq_disable(struct virtnet_info *vi,
> +                                struct send_queue *sq)
> +{
> +       struct netdev_queue *txq;
> +       int err, qindex;
> +
> +       qindex = sq - vi->sq;
> +
> +       txq = netdev_get_tx_queue(vi->dev, qindex);
> +       __netif_tx_lock_bh(txq);
> +
> +       netif_stop_subqueue(vi->dev, qindex);
> +       virtnet_napi_tx_disable(&sq->napi);
> +
> +       err = virtio_reset_vq(sq->vq);
> +       if (err) {
> +               virtnet_napi_tx_enable(vi, sq->vq, &sq->napi);
> +               netif_start_subqueue(vi->dev, qindex);
> +
> +               __netif_tx_unlock_bh(txq);
> +               return err;
> +       }
> +       __netif_tx_unlock_bh(txq);
> +
> +       virtnet_sq_free_unused_bufs(vi, sq);
> +
> +       vring_release_virtqueue(sq->vq);
> +
> +       return 0;
> +}
> +
> +static int virtnet_tx_vq_enable(struct virtnet_info *vi, struct send_queue *sq)
> +{
> +       int err;
> +
> +       err = virtio_enable_resetq(sq->vq);
> +       if (!err)
> +               netif_start_subqueue(vi->dev, sq - vi->sq);
> +
> +       virtnet_napi_tx_enable(vi, sq->vq, &sq->napi);
> +
> +       return err;
> +}
> +
> +static int virtnet_rx_vq_enable(struct virtnet_info *vi,
> +                               struct receive_queue *rq)
> +{
> +       int err;

So the API should be designed in a consistent way.

In rx_vq_disable() we do:

reset()
detach_unused_bufs()
vring_release_virtqueue()

Here it's better to do exactly the reverse:

vring_attach_virtqueue() // this is, I guess, the helper in patch 5,
                            reverse of vring_release_virtqueue()
try_refill_recv()        // reverse of detach_unused_bufs()
enable_reset()           // reverse of the reset

The same goes for tx (no refill is needed in that case).
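
Roughly, the rx enable path would then mirror the disable path,
following the order suggested above (a sketch only;
vring_attach_virtqueue() is the hypothetical counterpart of
vring_release_virtqueue()):

        vring_attach_virtqueue(rq->vq);         /* reverse of vring_release_virtqueue() */
        try_fill_recv(vi, rq, GFP_KERNEL);      /* reverse of detach_unused_bufs() */
        virtio_enable_resetq(rq->vq);           /* reverse of the reset */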

> +
> +       err = virtio_enable_resetq(rq->vq);
> +
> +       virtnet_napi_enable(rq->vq, &rq->napi);
> +
> +       return err;
> +}
> +
> +static int virtnet_rx_vq_reset(struct virtnet_info *vi, int i)
> +{
> +       int err;
> +
> +       err = virtnet_rx_vq_disable(vi, vi->rq + i);
> +       if (err)
> +               return err;
> +
> +       err = virtnet_rx_vq_enable(vi, vi->rq + i);
> +       if (err)
> +               netdev_err(vi->dev,
> +                          "enable rx reset vq fail: rx queue index: %d err: %d\n", i, err);
> +       return err;
> +}
> +
> +static int virtnet_tx_vq_reset(struct virtnet_info *vi, int i)
> +{
> +       int err;
> +
> +       err = virtnet_tx_vq_disable(vi, vi->sq + i);
> +       if (err)
> +               return err;
> +
> +       err = virtnet_tx_vq_enable(vi, vi->sq + i);
> +       if (err)
> +               netdev_err(vi->dev,
> +                          "enable tx reset vq fail: tx queue index: %d err: %d\n", i, err);
> +       return err;
> +}
> +
>  /*
>   * Send command via the control virtqueue and check status.  Commands
>   * supported by the hypervisor, as indicated by feature bits, should
> --
> 2.31.0
>


^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: [PATCH v5 06/22] virtio_ring: queue_reset: packed: support enable reset queue
  2022-02-14  8:14 ` [PATCH v5 06/22] virtio_ring: queue_reset: packed: " Xuan Zhuo
@ 2022-02-16  4:14   ` Jason Wang
  0 siblings, 0 replies; 42+ messages in thread
From: Jason Wang @ 2022-02-16  4:14 UTC (permalink / raw)
  To: Xuan Zhuo
  Cc: virtualization, netdev, Michael S. Tsirkin, David S. Miller,
	Jakub Kicinski, Alexei Starovoitov, Daniel Borkmann,
	Jesper Dangaard Brouer, John Fastabend, bpf

On Mon, Feb 14, 2022 at 4:14 PM Xuan Zhuo <xuanzhuo@linux.alibaba.com> wrote:
>
> The purpose of this patch is to make vring packed support re-enable reset
> vq.
>
> Based on whether the incoming vq passed by vring_setup_virtqueue() is
> NULL or not, distinguish whether it is a normal create virtqueue or
> re-enable a reset queue.
>
> When re-enable a reset queue, reuse the original callback, name, indirect.
>
> Signed-off-by: Xuan Zhuo <xuanzhuo@linux.alibaba.com>
> ---
>  drivers/virtio/virtio_ring.c | 29 ++++++++++++++++++-----------
>  1 file changed, 18 insertions(+), 11 deletions(-)
>
> diff --git a/drivers/virtio/virtio_ring.c b/drivers/virtio/virtio_ring.c
> index 4639e1643c78..20659f7ca582 100644
> --- a/drivers/virtio/virtio_ring.c
> +++ b/drivers/virtio/virtio_ring.c
> @@ -1683,7 +1683,8 @@ static struct virtqueue *vring_create_virtqueue_packed(
>         bool context,
>         bool (*notify)(struct virtqueue *),
>         void (*callback)(struct virtqueue *),
> -       const char *name)
> +       const char *name,
> +       struct virtqueue *_vq)
>  {
>         struct vring_virtqueue *vq;
>         struct vring_packed_desc *ring;
> @@ -1713,13 +1714,20 @@ static struct virtqueue *vring_create_virtqueue_packed(
>         if (!device)
>                 goto err_device;
>
> -       vq = kmalloc(sizeof(*vq), GFP_KERNEL);
> -       if (!vq)
> -               goto err_vq;
> +       if (_vq) {
> +               vq = to_vvq(_vq);
> +       } else {
> +               vq = kmalloc(sizeof(*vq), GFP_KERNEL);
> +               if (!vq)
> +                       goto err_vq;
> +
> +               vq->vq.callback = callback;
> +               vq->vq.name = name;
> +               vq->indirect = virtio_has_feature(vdev, VIRTIO_RING_F_INDIRECT_DESC) &&
> +                       !context;
> +       }

The code looks tricky. Except for the ring memory, we don't even need
to touch any of the other attributes.

I'd suggest splitting out the vring allocation into a dedicated helper
that could be called by both vring_create_virtqueue_XXX and the
enable() logic (and in the enable logic we don't even need to
reallocate if the size is not changed).

Thanks

>
> -       vq->vq.callback = callback;
>         vq->vq.vdev = vdev;
> -       vq->vq.name = name;
>         vq->vq.num_free = num;
>         vq->vq.index = index;
>         vq->we_own_ring = true;
> @@ -1736,8 +1744,6 @@ static struct virtqueue *vring_create_virtqueue_packed(
>         vq->last_add_time_valid = false;
>  #endif
>
> -       vq->indirect = virtio_has_feature(vdev, VIRTIO_RING_F_INDIRECT_DESC) &&
> -               !context;
>         vq->event = virtio_has_feature(vdev, VIRTIO_RING_F_EVENT_IDX);
>
>         if (virtio_has_feature(vdev, VIRTIO_F_ORDER_PLATFORM))
> @@ -1778,7 +1784,7 @@ static struct virtqueue *vring_create_virtqueue_packed(
>                 goto err_desc_extra;
>
>         /* No callback?  Tell other side not to bother us. */
> -       if (!callback) {
> +       if (!vq->vq.callback) {
>                 vq->packed.event_flags_shadow = VRING_PACKED_EVENT_FLAG_DISABLE;
>                 vq->packed.vring.driver->flags =
>                         cpu_to_le16(vq->packed.event_flags_shadow);
> @@ -1792,7 +1798,8 @@ static struct virtqueue *vring_create_virtqueue_packed(
>  err_desc_extra:
>         kfree(vq->packed.desc_state);
>  err_desc_state:
> -       kfree(vq);
> +       if (!_vq)
> +               kfree(vq);
>  err_vq:
>         vring_free_queue(vdev, event_size_in_bytes, device, device_event_dma_addr);
>  err_device:
> @@ -2317,7 +2324,7 @@ struct virtqueue *vring_setup_virtqueue(
>         if (virtio_has_feature(vdev, VIRTIO_F_RING_PACKED))
>                 return vring_create_virtqueue_packed(index, num, vring_align,
>                                 vdev, weak_barriers, may_reduce_num,
> -                               context, notify, callback, name);
> +                               context, notify, callback, name, vq);
>
>         return vring_create_virtqueue_split(index, num, vring_align,
>                         vdev, weak_barriers, may_reduce_num,
> --
> 2.31.0
>


^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: [PATCH v5 14/22] virtio_pci: queue_reset: support VIRTIO_F_RING_RESET
  2022-02-14  8:14 ` [PATCH v5 14/22] virtio_pci: queue_reset: support VIRTIO_F_RING_RESET Xuan Zhuo
@ 2022-02-16  4:14   ` Jason Wang
  2022-02-16  8:03     ` Xuan Zhuo
  0 siblings, 1 reply; 42+ messages in thread
From: Jason Wang @ 2022-02-16  4:14 UTC (permalink / raw)
  To: Xuan Zhuo
  Cc: virtualization, netdev, Michael S. Tsirkin, David S. Miller,
	Jakub Kicinski, Alexei Starovoitov, Daniel Borkmann,
	Jesper Dangaard Brouer, John Fastabend, bpf

On Mon, Feb 14, 2022 at 4:14 PM Xuan Zhuo <xuanzhuo@linux.alibaba.com> wrote:
>
> This patch implements virtio pci support for QUEUE RESET.
>
> Performing reset on a queue is divided into these steps:
>
> 1. reset_vq: reset one vq
> 2. recycle the buffer from vq by virtqueue_detach_unused_buf()
> 3. release the ring of the vq by vring_release_virtqueue()
> 4. enable_reset_vq: re-enable the reset queue
>
> This patch implements reset_vq, enable_reset_vq in the pci scenario.
>
> Signed-off-by: Xuan Zhuo <xuanzhuo@linux.alibaba.com>
> ---
>  drivers/virtio/virtio_pci_common.c |  8 ++--
>  drivers/virtio/virtio_pci_modern.c | 60 ++++++++++++++++++++++++++++++
>  2 files changed, 65 insertions(+), 3 deletions(-)
>
> diff --git a/drivers/virtio/virtio_pci_common.c b/drivers/virtio/virtio_pci_common.c
> index 5a4f750a0b97..9ea319b1d404 100644
> --- a/drivers/virtio/virtio_pci_common.c
> +++ b/drivers/virtio/virtio_pci_common.c
> @@ -255,9 +255,11 @@ static void vp_del_vq(struct virtqueue *vq)
>         struct virtio_pci_vq_info *info = vp_dev->vqs[vq->index];
>         unsigned long flags;
>
> -       spin_lock_irqsave(&vp_dev->lock, flags);
> -       list_del(&info->node);
> -       spin_unlock_irqrestore(&vp_dev->lock, flags);
> +       if (!vq->reset) {
> +               spin_lock_irqsave(&vp_dev->lock, flags);
> +               list_del(&info->node);
> +               spin_unlock_irqrestore(&vp_dev->lock, flags);
> +       }
>
>         vp_dev->del_vq(info);
>         kfree(info);
> diff --git a/drivers/virtio/virtio_pci_modern.c b/drivers/virtio/virtio_pci_modern.c
> index bed3e9b84272..7d28f4c36fc2 100644
> --- a/drivers/virtio/virtio_pci_modern.c
> +++ b/drivers/virtio/virtio_pci_modern.c
> @@ -34,6 +34,9 @@ static void vp_transport_features(struct virtio_device *vdev, u64 features)
>         if ((features & BIT_ULL(VIRTIO_F_SR_IOV)) &&
>                         pci_find_ext_capability(pci_dev, PCI_EXT_CAP_ID_SRIOV))
>                 __virtio_set_bit(vdev, VIRTIO_F_SR_IOV);
> +
> +       if (features & BIT_ULL(VIRTIO_F_RING_RESET))
> +               __virtio_set_bit(vdev, VIRTIO_F_RING_RESET);
>  }
>
>  /* virtio config->finalize_features() implementation */
> @@ -176,6 +179,59 @@ static void vp_reset(struct virtio_device *vdev)
>         vp_disable_cbs(vdev);
>  }
>
> +static int vp_modern_reset_vq(struct virtqueue *vq)
> +{
> +       struct virtio_pci_device *vp_dev = to_vp_device(vq->vdev);
> +       struct virtio_pci_modern_device *mdev = &vp_dev->mdev;
> +       struct virtio_pci_vq_info *info;
> +       unsigned long flags;
> +
> +       if (!virtio_has_feature(vq->vdev, VIRTIO_F_RING_RESET))
> +               return -ENOENT;
> +
> +       vp_modern_set_queue_reset(mdev, vq->index);
> +
> +       info = vp_dev->vqs[vq->index];
> +

Is there any reason we don't need to disable the irq here, as the previous versions did?


> +       /* delete vq from irq handler */
> +       spin_lock_irqsave(&vp_dev->lock, flags);
> +       list_del(&info->node);
> +       spin_unlock_irqrestore(&vp_dev->lock, flags);
> +
> +       INIT_LIST_HEAD(&info->node);
> +
> +       vq->reset = VIRTQUEUE_RESET_STAGE_DEVICE;
> +
> +       return 0;
> +}
> +
> +static int vp_modern_enable_reset_vq(struct virtqueue *vq)
> +{
> +       struct virtio_pci_device *vp_dev = to_vp_device(vq->vdev);
> +       struct virtio_pci_modern_device *mdev = &vp_dev->mdev;
> +       struct virtio_pci_vq_info *info;
> +       struct virtqueue *_vq;
> +
> +       if (vq->reset != VIRTQUEUE_RESET_STAGE_RELEASE)
> +               return -EBUSY;
> +
> +       /* check queue reset status */
> +       if (vp_modern_get_queue_reset(mdev, vq->index) != 1)
> +               return -EBUSY;
> +
> +       info = vp_dev->vqs[vq->index];
> +       _vq = vp_setup_vq(vq->vdev, vq->index, NULL, NULL, NULL,
> +                        info->msix_vector);

Since we only care about modern devices here, using vp_setup_vq()
with NULL arguments seems tricky.

As replied in another thread, I would simply ask the caller to call
the vring reallocation helper. See the reply to patch 17.

Thanks


> +       if (IS_ERR(_vq)) {
> +               vq->reset = VIRTQUEUE_RESET_STAGE_RELEASE;
> +               return PTR_ERR(_vq);
> +       }
> +
> +       vp_modern_set_queue_enable(&vp_dev->mdev, vq->index, true);
> +
> +       return 0;
> +}
> +
>  static u16 vp_config_vector(struct virtio_pci_device *vp_dev, u16 vector)
>  {
>         return vp_modern_config_vector(&vp_dev->mdev, vector);
> @@ -397,6 +453,8 @@ static const struct virtio_config_ops virtio_pci_config_nodev_ops = {
>         .set_vq_affinity = vp_set_vq_affinity,
>         .get_vq_affinity = vp_get_vq_affinity,
>         .get_shm_region  = vp_get_shm_region,
> +       .reset_vq        = vp_modern_reset_vq,
> +       .enable_reset_vq = vp_modern_enable_reset_vq,
>  };
>
>  static const struct virtio_config_ops virtio_pci_config_ops = {
> @@ -415,6 +473,8 @@ static const struct virtio_config_ops virtio_pci_config_ops = {
>         .set_vq_affinity = vp_set_vq_affinity,
>         .get_vq_affinity = vp_get_vq_affinity,
>         .get_shm_region  = vp_get_shm_region,
> +       .reset_vq        = vp_modern_reset_vq,
> +       .enable_reset_vq = vp_modern_enable_reset_vq,
>  };
>
>  /* the PCI probing function */
> --
> 2.31.0
>


^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: [PATCH v5 20/22] virtio_net: set the default max ring num
  2022-02-14  8:14 ` [PATCH v5 20/22] virtio_net: set the default max ring num Xuan Zhuo
@ 2022-02-16  4:14   ` Jason Wang
  2022-02-16  7:46     ` Xuan Zhuo
  0 siblings, 1 reply; 42+ messages in thread
From: Jason Wang @ 2022-02-16  4:14 UTC (permalink / raw)
  To: Xuan Zhuo
  Cc: virtualization, netdev, Michael S. Tsirkin, David S. Miller,
	Jakub Kicinski, Alexei Starovoitov, Daniel Borkmann,
	Jesper Dangaard Brouer, John Fastabend, bpf

On Mon, Feb 14, 2022 at 4:14 PM Xuan Zhuo <xuanzhuo@linux.alibaba.com> wrote:
>
> Sets the default maximum ring num based on virtio_set_max_ring_num().
>
> The default maximum ring num is 1024.

Having a default value is pretty useful; I see 32K is used by default for IFCVF.

Rethinking this, how about having a different default value based on the link speed?

Without SPEED_DUPLEX, we use 1024. Otherwise

10g 4096
40g 8192

etc.

(The numbers are just copied from the 10g/40g default parameters of
other vendors; a rough sketch of such a mapping is below.)
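
For illustration only (virtnet_default_max_ring_num() is a
hypothetical helper and the values are the ones quoted above):

        static u16 virtnet_default_max_ring_num(u32 speed)
        {
                switch (speed) {
                case SPEED_10000:
                        return 4096;
                case SPEED_40000:
                        return 8192;
                default:        /* speed unknown or not reported */
                        return 1024;
                }
        }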

Thanks

>
> Signed-off-by: Xuan Zhuo <xuanzhuo@linux.alibaba.com>
> ---
>  drivers/net/virtio_net.c | 4 ++++
>  1 file changed, 4 insertions(+)
>
> diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
> index a4ffd7cdf623..77e61fe0b2ce 100644
> --- a/drivers/net/virtio_net.c
> +++ b/drivers/net/virtio_net.c
> @@ -35,6 +35,8 @@ module_param(napi_tx, bool, 0644);
>  #define GOOD_PACKET_LEN (ETH_HLEN + VLAN_HLEN + ETH_DATA_LEN)
>  #define GOOD_COPY_LEN  128
>
> +#define VIRTNET_DEFAULT_MAX_RING_NUM 1024
> +
>  #define VIRTNET_RX_PAD (NET_IP_ALIGN + NET_SKB_PAD)
>
>  /* Amount of XDP headroom to prepend to packets for use by xdp_adjust_head */
> @@ -3045,6 +3047,8 @@ static int virtnet_find_vqs(struct virtnet_info *vi)
>                         ctx[rxq2vq(i)] = true;
>         }
>
> +       virtio_set_max_ring_num(vi->vdev, VIRTNET_DEFAULT_MAX_RING_NUM);
> +
>         ret = virtio_find_vqs_ctx(vi->vdev, total_vqs, vqs, callbacks,
>                                   names, ctx, NULL);
>         if (ret)
> --
> 2.31.0
>


^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: [PATCH v5 08/22] virtio_ring: queue_reset: add vring_release_virtqueue()
  2022-02-14  8:14 ` [PATCH v5 08/22] virtio_ring: queue_reset: add vring_release_virtqueue() Xuan Zhuo
@ 2022-02-16  4:14   ` Jason Wang
  0 siblings, 0 replies; 42+ messages in thread
From: Jason Wang @ 2022-02-16  4:14 UTC (permalink / raw)
  To: Xuan Zhuo
  Cc: virtualization, netdev, Michael S. Tsirkin, David S. Miller,
	Jakub Kicinski, Alexei Starovoitov, Daniel Borkmann,
	Jesper Dangaard Brouer, John Fastabend, bpf

On Mon, Feb 14, 2022 at 4:14 PM Xuan Zhuo <xuanzhuo@linux.alibaba.com> wrote:
>
> Added vring_release_virtqueue() to release the ring of the vq.
>
> In this process, vq is removed from the vdev->vqs queue. And the memory
> of the ring is released
>
> Signed-off-by: Xuan Zhuo <xuanzhuo@linux.alibaba.com>
> ---
>  drivers/virtio/virtio_ring.c | 18 +++++++++++++++++-
>  include/linux/virtio.h       | 12 ++++++++++++
>  2 files changed, 29 insertions(+), 1 deletion(-)
>
> diff --git a/drivers/virtio/virtio_ring.c b/drivers/virtio/virtio_ring.c
> index c5dd17c7dd4a..b37753bdbbc4 100644
> --- a/drivers/virtio/virtio_ring.c
> +++ b/drivers/virtio/virtio_ring.c
> @@ -1730,6 +1730,7 @@ static struct virtqueue *vring_create_virtqueue_packed(
>         vq->vq.vdev = vdev;
>         vq->vq.num_free = num;
>         vq->vq.index = index;
> +       vq->vq.reset = VIRTQUEUE_RESET_STAGE_NONE;

Since we don't have a similar check for detach_unused_buf(), I guess
it should be sufficient to document the API requirement. Otherwise we
would probably need barriers/ordering, which are not worthwhile just
for catching bad API usage.

>         vq->we_own_ring = true;
>         vq->notify = notify;
>         vq->weak_barriers = weak_barriers;
> @@ -2218,6 +2219,7 @@ static int __vring_init_virtqueue(struct virtqueue *_vq,
>         vq->vq.vdev = vdev;
>         vq->vq.num_free = vring.num;
>         vq->vq.index = index;
> +       vq->vq.reset = VIRTQUEUE_RESET_STAGE_NONE;
>         vq->we_own_ring = false;
>         vq->notify = notify;
>         vq->weak_barriers = weak_barriers;
> @@ -2397,11 +2399,25 @@ void vring_del_virtqueue(struct virtqueue *_vq)
>  {
>         struct vring_virtqueue *vq = to_vvq(_vq);
>
> -       __vring_del_virtqueue(vq);
> +       if (_vq->reset != VIRTQUEUE_RESET_STAGE_RELEASE)
> +               __vring_del_virtqueue(vq);
>         kfree(vq);
>  }
>  EXPORT_SYMBOL_GPL(vring_del_virtqueue);
>
> +void vring_release_virtqueue(struct virtqueue *_vq)
> +{

If we agree that we need an allocation routine, we probably need to
rename this to vring_free_virtqueue().

Thanks

> +       struct vring_virtqueue *vq = to_vvq(_vq);
> +
> +       if (_vq->reset != VIRTQUEUE_RESET_STAGE_DEVICE)
> +               return;
> +
> +       __vring_del_virtqueue(vq);
> +
> +       _vq->reset = VIRTQUEUE_RESET_STAGE_RELEASE;
> +}
> +EXPORT_SYMBOL_GPL(vring_release_virtqueue);
> +
>  /* Manipulates transport-specific feature bits. */
>  void vring_transport_features(struct virtio_device *vdev)
>  {
> diff --git a/include/linux/virtio.h b/include/linux/virtio.h
> index 72292a62cd90..cdb2a551257c 100644
> --- a/include/linux/virtio.h
> +++ b/include/linux/virtio.h
> @@ -10,6 +10,12 @@
>  #include <linux/mod_devicetable.h>
>  #include <linux/gfp.h>
>
> +enum virtqueue_reset_stage {
> +       VIRTQUEUE_RESET_STAGE_NONE,
> +       VIRTQUEUE_RESET_STAGE_DEVICE,
> +       VIRTQUEUE_RESET_STAGE_RELEASE,
> +};
> +
>  /**
>   * virtqueue - a queue to register buffers for sending or receiving.
>   * @list: the chain of virtqueues for this device
> @@ -32,6 +38,7 @@ struct virtqueue {
>         unsigned int index;
>         unsigned int num_free;
>         void *priv;
> +       enum virtqueue_reset_stage reset;
>  };
>
>  int virtqueue_add_outbuf(struct virtqueue *vq,
> @@ -196,4 +203,9 @@ void unregister_virtio_driver(struct virtio_driver *drv);
>  #define module_virtio_driver(__virtio_driver) \
>         module_driver(__virtio_driver, register_virtio_driver, \
>                         unregister_virtio_driver)
> +/*
> + * Resets a virtqueue. Just frees the ring, not free vq.
> + * This function must be called after reset_vq().
> + */
> +void vring_release_virtqueue(struct virtqueue *vq);
>  #endif /* _LINUX_VIRTIO_H */
> --
> 2.31.0
>


^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: [PATCH v5 22/22] virtio_net: support set_ringparam
  2022-02-14  8:14 ` [PATCH v5 22/22] virtio_net: support set_ringparam Xuan Zhuo
@ 2022-02-16  4:14   ` Jason Wang
  2022-02-16  7:21     ` Xuan Zhuo
  0 siblings, 1 reply; 42+ messages in thread
From: Jason Wang @ 2022-02-16  4:14 UTC (permalink / raw)
  To: Xuan Zhuo
  Cc: virtualization, netdev, Michael S. Tsirkin, David S. Miller,
	Jakub Kicinski, Alexei Starovoitov, Daniel Borkmann,
	Jesper Dangaard Brouer, John Fastabend, bpf

On Mon, Feb 14, 2022 at 4:15 PM Xuan Zhuo <xuanzhuo@linux.alibaba.com> wrote:
>
> Support set_ringparam based on virtio queue reset.
>
> The rx,tx_pending required to be passed must be power of 2.
>
> Signed-off-by: Xuan Zhuo <xuanzhuo@linux.alibaba.com>
> ---
>  drivers/net/virtio_net.c | 50 ++++++++++++++++++++++++++++++++++++++++
>  1 file changed, 50 insertions(+)
>
> diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
> index f9bb760c6dbd..bf460ea87354 100644
> --- a/drivers/net/virtio_net.c
> +++ b/drivers/net/virtio_net.c
> @@ -2308,6 +2308,55 @@ static void virtnet_get_ringparam(struct net_device *dev,
>         ring->tx_pending = virtqueue_get_vring_size(vi->sq[0].vq);
>  }
>
> +static int virtnet_set_ringparam(struct net_device *dev,
> +                                struct ethtool_ringparam *ring,
> +                                struct kernel_ethtool_ringparam *kernel_ring,
> +                                struct netlink_ext_ack *extack)
> +{
> +       struct virtnet_info *vi = netdev_priv(dev);
> +       u32 rx_pending, tx_pending;
> +       int i, err;
> +
> +       if (ring->rx_mini_pending || ring->rx_jumbo_pending)
> +               return -EINVAL;
> +
> +       rx_pending = virtqueue_get_vring_size(vi->rq[0].vq);
> +       tx_pending = virtqueue_get_vring_size(vi->sq[0].vq);
> +
> +       if (ring->rx_pending == rx_pending &&
> +           ring->tx_pending == tx_pending)
> +               return 0;
> +
> +       if (ring->rx_pending > virtqueue_get_vring_max_size(vi->rq[0].vq))
> +               return -EINVAL;
> +
> +       if (ring->tx_pending > virtqueue_get_vring_max_size(vi->sq[0].vq))
> +               return -EINVAL;
> +
> +       if (!is_power_of_2(ring->rx_pending))
> +               return -EINVAL;
> +
> +       if (!is_power_of_2(ring->tx_pending))
> +               return -EINVAL;

We'd better leave those checks to the virtio core, which knows that
packed virtqueues don't have this limitation.

> +
> +       for (i = 0; i < vi->max_queue_pairs; i++) {
> +               if (ring->tx_pending != tx_pending) {
> +                       virtio_set_max_ring_num(vi->vdev, ring->tx_pending);

The name is kind of confusing; I guess it should not be the maximum
ring num. Also, this needs to be done after the reset, and it would be
even better to disallow such a change while the virtqueue is not in
the reset state.

> +                       err = virtnet_tx_vq_reset(vi, i);
> +                       if (err)
> +                               return err;
> +               }
> +
> +               if (ring->rx_pending != rx_pending) {
> +                       virtio_set_max_ring_num(vi->vdev, ring->rx_pending);
> +                       err = virtnet_rx_vq_reset(vi, i);
> +                       if (err)
> +                               return err;
> +               }
> +       }
> +
> +       return 0;
> +}
>
>  static void virtnet_get_drvinfo(struct net_device *dev,
>                                 struct ethtool_drvinfo *info)
> @@ -2541,6 +2590,7 @@ static const struct ethtool_ops virtnet_ethtool_ops = {
>         .get_drvinfo = virtnet_get_drvinfo,
>         .get_link = ethtool_op_get_link,
>         .get_ringparam = virtnet_get_ringparam,
> +       .set_ringparam = virtnet_set_ringparam,
>         .get_strings = virtnet_get_strings,
>         .get_sset_count = virtnet_get_sset_count,
>         .get_ethtool_stats = virtnet_get_ethtool_stats,
> --
> 2.31.0
>


^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: [PATCH v5 22/22] virtio_net: support set_ringparam
  2022-02-16  4:14   ` Jason Wang
@ 2022-02-16  7:21     ` Xuan Zhuo
  0 siblings, 0 replies; 42+ messages in thread
From: Xuan Zhuo @ 2022-02-16  7:21 UTC (permalink / raw)
  To: Jason Wang
  Cc: virtualization, netdev, Michael S. Tsirkin, David S. Miller,
	Jakub Kicinski, Alexei Starovoitov, Daniel Borkmann,
	Jesper Dangaard Brouer, John Fastabend, bpf

On Wed, 16 Feb 2022 12:14:39 +0800, Jason Wang <jasowang@redhat.com> wrote:
> On Mon, Feb 14, 2022 at 4:15 PM Xuan Zhuo <xuanzhuo@linux.alibaba.com> wrote:
> >
> > Support set_ringparam based on virtio queue reset.
> >
> > The rx,tx_pending required to be passed must be power of 2.
> >
> > Signed-off-by: Xuan Zhuo <xuanzhuo@linux.alibaba.com>
> > ---
> >  drivers/net/virtio_net.c | 50 ++++++++++++++++++++++++++++++++++++++++
> >  1 file changed, 50 insertions(+)
> >
> > diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
> > index f9bb760c6dbd..bf460ea87354 100644
> > --- a/drivers/net/virtio_net.c
> > +++ b/drivers/net/virtio_net.c
> > @@ -2308,6 +2308,55 @@ static void virtnet_get_ringparam(struct net_device *dev,
> >         ring->tx_pending = virtqueue_get_vring_size(vi->sq[0].vq);
> >  }
> >
> > +static int virtnet_set_ringparam(struct net_device *dev,
> > +                                struct ethtool_ringparam *ring,
> > +                                struct kernel_ethtool_ringparam *kernel_ring,
> > +                                struct netlink_ext_ack *extack)
> > +{
> > +       struct virtnet_info *vi = netdev_priv(dev);
> > +       u32 rx_pending, tx_pending;
> > +       int i, err;
> > +
> > +       if (ring->rx_mini_pending || ring->rx_jumbo_pending)
> > +               return -EINVAL;
> > +
> > +       rx_pending = virtqueue_get_vring_size(vi->rq[0].vq);
> > +       tx_pending = virtqueue_get_vring_size(vi->sq[0].vq);
> > +
> > +       if (ring->rx_pending == rx_pending &&
> > +           ring->tx_pending == tx_pending)
> > +               return 0;
> > +
> > +       if (ring->rx_pending > virtqueue_get_vring_max_size(vi->rq[0].vq))
> > +               return -EINVAL;
> > +
> > +       if (ring->tx_pending > virtqueue_get_vring_max_size(vi->sq[0].vq))
> > +               return -EINVAL;
> > +
> > +       if (!is_power_of_2(ring->rx_pending))
> > +               return -EINVAL;
> > +
> > +       if (!is_power_of_2(ring->tx_pending))
> > +               return -EINVAL;
>
> We'd better leave those checks to the virtio core, which knows that a
> packed virtqueue doesn't have this limitation.

OK.

>
> > +
> > +       for (i = 0; i < vi->max_queue_pairs; i++) {
> > +               if (ring->tx_pending != tx_pending) {
> > +                       virtio_set_max_ring_num(vi->vdev, ring->tx_pending);
>
> The name is kind of confusing; I guess it should not be the maximum
> ring num. And this needs to be done after the reset, and it would be even
> better to disallow such a change when the virtqueue is not reset.

OK.

Thanks.
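
To make the suggested ordering concrete, the loop inside
virtnet_set_ringparam() could look roughly like the sketch below, reusing
the disable/enable helpers from patch 17; this is only a sketch, not the
final code of the series.

        for (i = 0; i < vi->max_queue_pairs; i++) {
                if (ring->tx_pending != tx_pending) {
                        /* Reset first; change the ring num only while the
                         * vq is inactive. */
                        err = virtnet_tx_vq_disable(vi, vi->sq + i);
                        if (err)
                                return err;

                        virtio_set_max_ring_num(vi->vdev, ring->tx_pending);

                        err = virtnet_tx_vq_enable(vi, vi->sq + i);
                        if (err)
                                return err;
                }

                if (ring->rx_pending != rx_pending) {
                        err = virtnet_rx_vq_disable(vi, vi->rq + i);
                        if (err)
                                return err;

                        virtio_set_max_ring_num(vi->vdev, ring->rx_pending);

                        err = virtnet_rx_vq_enable(vi, vi->rq + i);
                        if (err)
                                return err;
                }
        }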

>
> > +                       err = virtnet_tx_vq_reset(vi, i);
> > +                       if (err)
> > +                               return err;
> > +               }
> > +
> > +               if (ring->rx_pending != rx_pending) {
> > +                       virtio_set_max_ring_num(vi->vdev, ring->rx_pending);
> > +                       err = virtnet_rx_vq_reset(vi, i);
> > +                       if (err)
> > +                               return err;
> > +               }
> > +       }
> > +
> > +       return 0;
> > +}
> >
> >  static void virtnet_get_drvinfo(struct net_device *dev,
> >                                 struct ethtool_drvinfo *info)
> > @@ -2541,6 +2590,7 @@ static const struct ethtool_ops virtnet_ethtool_ops = {
> >         .get_drvinfo = virtnet_get_drvinfo,
> >         .get_link = ethtool_op_get_link,
> >         .get_ringparam = virtnet_get_ringparam,
> > +       .set_ringparam = virtnet_set_ringparam,
> >         .get_strings = virtnet_get_strings,
> >         .get_sset_count = virtnet_get_sset_count,
> >         .get_ethtool_stats = virtnet_get_ethtool_stats,
> > --
> > 2.31.0
> >
>

^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: [PATCH v5 20/22] virtio_net: set the default max ring num
  2022-02-16  4:14   ` Jason Wang
@ 2022-02-16  7:46     ` Xuan Zhuo
  2022-02-17  7:21       ` Jason Wang
  0 siblings, 1 reply; 42+ messages in thread
From: Xuan Zhuo @ 2022-02-16  7:46 UTC (permalink / raw)
  To: Jason Wang
  Cc: virtualization, netdev, Michael S. Tsirkin, David S. Miller,
	Jakub Kicinski, Alexei Starovoitov, Daniel Borkmann,
	Jesper Dangaard Brouer, John Fastabend, bpf

On Wed, 16 Feb 2022 12:14:31 +0800, Jason Wang <jasowang@redhat.com> wrote:
> On Mon, Feb 14, 2022 at 4:14 PM Xuan Zhuo <xuanzhuo@linux.alibaba.com> wrote:
> >
> > Sets the default maximum ring num based on virtio_set_max_ring_num().
> >
> > The default maximum ring num is 1024.
>
> Having a default value is pretty useful, I see 32K is used by default for IFCVF.
>
> Rethink this, how about having a different default value based on the speed?
>
> Without SPEED_DUPLEX, we use 1024. Otherwise
>
> 10g 4096
> 40g 8192

We can define different default values of tx and rx by the way. This way I can
just use it in the new interface of find_vqs().

without SPEED_DUPLEX:  tx 512 rx 1024

Thanks.
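
For illustration, choosing the defaults from the negotiated link speed could
look roughly like the sketch below; the helper name and the exact numbers are
placeholders taken from this discussion, not code from the series.

/* Sketch only: pick default ring sizes from the negotiated link speed. */
static void virtnet_default_ring_num(struct virtnet_info *vi,
                                     u16 *tx_num, u16 *rx_num)
{
        /* Defaults when no speed information is available. */
        *tx_num = 512;
        *rx_num = 1024;

        if (!virtio_has_feature(vi->vdev, VIRTIO_NET_F_SPEED_DUPLEX))
                return;

        switch (vi->speed) {
        case SPEED_10000:
                *tx_num = *rx_num = 4096;
                break;
        case SPEED_40000:
                *tx_num = *rx_num = 8192;
                break;
        }
}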


>
> etc.
>
> (The number are just copied from the 10g/40g default parameter from
> other vendors)
>
> Thanks
>
> >
> > Signed-off-by: Xuan Zhuo <xuanzhuo@linux.alibaba.com>
> > ---
> >  drivers/net/virtio_net.c | 4 ++++
> >  1 file changed, 4 insertions(+)
> >
> > diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
> > index a4ffd7cdf623..77e61fe0b2ce 100644
> > --- a/drivers/net/virtio_net.c
> > +++ b/drivers/net/virtio_net.c
> > @@ -35,6 +35,8 @@ module_param(napi_tx, bool, 0644);
> >  #define GOOD_PACKET_LEN (ETH_HLEN + VLAN_HLEN + ETH_DATA_LEN)
> >  #define GOOD_COPY_LEN  128
> >
> > +#define VIRTNET_DEFAULT_MAX_RING_NUM 1024
> > +
> >  #define VIRTNET_RX_PAD (NET_IP_ALIGN + NET_SKB_PAD)
> >
> >  /* Amount of XDP headroom to prepend to packets for use by xdp_adjust_head */
> > @@ -3045,6 +3047,8 @@ static int virtnet_find_vqs(struct virtnet_info *vi)
> >                         ctx[rxq2vq(i)] = true;
> >         }
> >
> > +       virtio_set_max_ring_num(vi->vdev, VIRTNET_DEFAULT_MAX_RING_NUM);
> > +
> >         ret = virtio_find_vqs_ctx(vi->vdev, total_vqs, vqs, callbacks,
> >                                   names, ctx, NULL);
> >         if (ret)
> > --
> > 2.31.0
> >
>

^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: [PATCH v5 19/22] virtio: add helper virtio_set_max_ring_num()
  2022-02-16  4:14   ` Jason Wang
@ 2022-02-16  7:54     ` Xuan Zhuo
  0 siblings, 0 replies; 42+ messages in thread
From: Xuan Zhuo @ 2022-02-16  7:54 UTC (permalink / raw)
  To: Jason Wang
  Cc: virtualization, netdev, Michael S. Tsirkin, David S. Miller,
	Jakub Kicinski, Alexei Starovoitov, Daniel Borkmann,
	Jesper Dangaard Brouer, John Fastabend, bpf

On Wed, 16 Feb 2022 12:14:04 +0800, Jason Wang <jasowang@redhat.com> wrote:
> On Mon, Feb 14, 2022 at 4:15 PM Xuan Zhuo <xuanzhuo@linux.alibaba.com> wrote:
> >
> > Added helper virtio_set_max_ring_num() to set the upper limit of ring
> > num when creating a virtqueue.
> >
> > It can be used to limit the ring num before the find_vqs() call, or to
> > change the ring num when re-enabling a reset queue.
>
> Is there a chance that RX and TX may want different ring sizes? If
> yes, it might be even better to have a per-vq limit via find_vqs()?
>
> >
> > Signed-off-by: Xuan Zhuo <xuanzhuo@linux.alibaba.com>
> > ---
> >  drivers/virtio/virtio_ring.c  |  6 ++++++
> >  include/linux/virtio.h        |  1 +
> >  include/linux/virtio_config.h | 30 ++++++++++++++++++++++++++++++
> >  3 files changed, 37 insertions(+)
> >
> > diff --git a/drivers/virtio/virtio_ring.c b/drivers/virtio/virtio_ring.c
> > index 1a123b5e5371..a77a82883e44 100644
> > --- a/drivers/virtio/virtio_ring.c
> > +++ b/drivers/virtio/virtio_ring.c
> > @@ -943,6 +943,9 @@ static struct virtqueue *vring_create_virtqueue_split(
> >         size_t queue_size_in_bytes;
> >         struct vring vring;
> >
> > +       if (vdev->max_ring_num && num > vdev->max_ring_num)
> > +               num = vdev->max_ring_num;
> > +
> >         /* We assume num is a power of 2. */
> >         if (num & (num - 1)) {
> >                 dev_warn(&vdev->dev, "Bad virtqueue length %u\n", num);
> > @@ -1692,6 +1695,9 @@ static struct virtqueue *vring_create_virtqueue_packed(
> >         dma_addr_t ring_dma_addr, driver_event_dma_addr, device_event_dma_addr;
> >         size_t ring_size_in_bytes, event_size_in_bytes;
> >
> > +       if (vdev->max_ring_num && num > vdev->max_ring_num)
> > +               num = vdev->max_ring_num;
> > +
> >         ring_size_in_bytes = num * sizeof(struct vring_packed_desc);
> >
> >         ring = vring_alloc_queue(vdev, ring_size_in_bytes,
> > diff --git a/include/linux/virtio.h b/include/linux/virtio.h
> > index 1153b093c53d..45525beb2ec4 100644
> > --- a/include/linux/virtio.h
> > +++ b/include/linux/virtio.h
> > @@ -127,6 +127,7 @@ struct virtio_device {
> >         struct list_head vqs;
> >         u64 features;
> >         void *priv;
> > +       u16 max_ring_num;
> >  };
> >
> >  static inline struct virtio_device *dev_to_virtio(struct device *_dev)
> > diff --git a/include/linux/virtio_config.h b/include/linux/virtio_config.h
> > index cd7f7f44ce38..d7cb2d0341ee 100644
> > --- a/include/linux/virtio_config.h
> > +++ b/include/linux/virtio_config.h
> > @@ -200,6 +200,36 @@ static inline bool virtio_has_dma_quirk(const struct virtio_device *vdev)
> >         return !virtio_has_feature(vdev, VIRTIO_F_ACCESS_PLATFORM);
> >  }
> >
> > +/**
> > + * virtio_set_max_ring_num - set max ring num
> > + * @vdev: the device
> > + * @num: max ring num. Zero clear the limit.
> > + *
> > + * When creating a virtqueue, use this value as the upper limit of ring num.
> > + *
> > + * Returns 0 on success or error status
> > + */
> > +static inline
> > +int virtio_set_max_ring_num(struct virtio_device *vdev, u16 num)
> > +{
>
> Having a dedicated helper for a per-device parameter usually means the
> use cases are greatly limited. For example, it seems this can only be
> used when DRIVER_OK is not set?
>
> And in patch 17 this function is called even if we only modify the RX
> size; this is probably another call for a more flexible API, as I
> suggested, like exporting vring allocation/deallocation helpers and
> extending find_vqs()?
>

I understand.

Thanks.
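
For illustration, a per-vq limit expressed at find_vqs() time could look
roughly like the sketch below from inside virtnet_find_vqs();
virtio_find_vqs_ctx_size() is hypothetical and nothing like it exists in this
series, and rx_ring_num/tx_ring_num are placeholders for the chosen defaults.

        u32 *sizes;

        sizes = kcalloc(total_vqs, sizeof(*sizes), GFP_KERNEL);
        if (!sizes)
                return -ENOMEM;

        /* Unset entries (e.g. the ctrl vq) would keep the device default. */
        for (i = 0; i < vi->max_queue_pairs; i++) {
                sizes[rxq2vq(i)] = rx_ring_num;        /* e.g. 1024 */
                sizes[txq2vq(i)] = tx_ring_num;        /* e.g. 512 */
        }

        /* Hypothetical variant of virtio_find_vqs_ctx() taking a per-vq
         * size array instead of a single per-device limit. */
        ret = virtio_find_vqs_ctx_size(vi->vdev, total_vqs, vqs, callbacks,
                                       names, ctx, sizes, NULL);
        kfree(sizes);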

> Thanks
>
>
> > +       if (!num) {
> > +               vdev->max_ring_num = num;
> > +               return 0;
> > +       }
> > +
> > +       if (!virtio_has_feature(vdev, VIRTIO_F_RING_PACKED)) {
> > +               if (!is_power_of_2(num)) {
> > +                       num = __rounddown_pow_of_two(num);
> > +
> > +                       if (!num)
> > +                               return -EINVAL;
> > +               }
> > +       }
> > +
> > +       vdev->max_ring_num = num;
> > +       return 0;
> > +}
> > +
> >  static inline
> >  struct virtqueue *virtio_find_single_vq(struct virtio_device *vdev,
> >                                         vq_callback_t *c, const char *n)
> > --
> > 2.31.0
> >
>

^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: [PATCH v5 17/22] virtio_net: support rx/tx queue reset
  2022-02-16  4:14   ` Jason Wang
@ 2022-02-16  7:56     ` Xuan Zhuo
  2022-02-16  8:35       ` Michael S. Tsirkin
  0 siblings, 1 reply; 42+ messages in thread
From: Xuan Zhuo @ 2022-02-16  7:56 UTC (permalink / raw)
  To: Jason Wang
  Cc: virtualization, netdev, Michael S. Tsirkin, David S. Miller,
	Jakub Kicinski, Alexei Starovoitov, Daniel Borkmann,
	Jesper Dangaard Brouer, John Fastabend, bpf

On Wed, 16 Feb 2022 12:14:11 +0800, Jason Wang <jasowang@redhat.com> wrote:
> On Mon, Feb 14, 2022 at 4:14 PM Xuan Zhuo <xuanzhuo@linux.alibaba.com> wrote:
> >
> > This patch implements the reset function of the rx, tx queues.
> >
> > Based on this function, it is possible to modify the ring num of the
> > queue and to quickly recycle the buffers in the queue.
> >
> > During queue disable, in theory, as long as virtio supports queue
> > reset, nothing should fail.
> >
> > However, during queue enable, failures may occur due to memory
> > allocation.  In this case, the vq is not available, but we still have
> > to execute napi_enable(): napi_disable() is similar to taking a lock,
> > so napi_enable() must be called after every napi_disable().
> >
> > Signed-off-by: Xuan Zhuo <xuanzhuo@linux.alibaba.com>
> > ---
> >  drivers/net/virtio_net.c | 123 +++++++++++++++++++++++++++++++++++++++
> >  1 file changed, 123 insertions(+)
> >
> > diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
> > index 9a1445236e23..a4ffd7cdf623 100644
> > --- a/drivers/net/virtio_net.c
> > +++ b/drivers/net/virtio_net.c
> > @@ -251,6 +251,11 @@ struct padded_vnet_hdr {
> >         char padding[4];
> >  };
> >
> > +static void virtnet_sq_free_unused_bufs(struct virtnet_info *vi,
> > +                                       struct send_queue *sq);
> > +static void virtnet_rq_free_unused_bufs(struct virtnet_info *vi,
> > +                                       struct receive_queue *rq);
> > +
> >  static bool is_xdp_frame(void *ptr)
> >  {
> >         return (unsigned long)ptr & VIRTIO_XDP_FLAG;
> > @@ -1369,6 +1374,9 @@ static void virtnet_napi_enable(struct virtqueue *vq, struct napi_struct *napi)
> >  {
> >         napi_enable(napi);
> >
> > +       if (vq->reset)
> > +               return;
> > +
> >         /* If all buffers were filled by other side before we napi_enabled, we
> >          * won't get another interrupt, so process any outstanding packets now.
> >          * Call local_bh_enable after to trigger softIRQ processing.
> > @@ -1413,6 +1421,10 @@ static void refill_work(struct work_struct *work)
> >                 struct receive_queue *rq = &vi->rq[i];
> >
> >                 napi_disable(&rq->napi);
> > +               if (rq->vq->reset) {
> > +                       virtnet_napi_enable(rq->vq, &rq->napi);
> > +                       continue;
> > +               }
> >                 still_empty = !try_fill_recv(vi, rq, GFP_KERNEL);
> >                 virtnet_napi_enable(rq->vq, &rq->napi);
> >
> > @@ -1523,6 +1535,9 @@ static void virtnet_poll_cleantx(struct receive_queue *rq)
> >         if (!sq->napi.weight || is_xdp_raw_buffer_queue(vi, index))
> >                 return;
> >
> > +       if (sq->vq->reset)
> > +               return;
> > +
> >         if (__netif_tx_trylock(txq)) {
> >                 do {
> >                         virtqueue_disable_cb(sq->vq);
> > @@ -1769,6 +1784,114 @@ static netdev_tx_t start_xmit(struct sk_buff *skb, struct net_device *dev)
> >         return NETDEV_TX_OK;
> >  }
> >
> > +static int virtnet_rx_vq_disable(struct virtnet_info *vi,
> > +                                struct receive_queue *rq)
> > +{
> > +       int err;
> > +
> > +       napi_disable(&rq->napi);
> > +
> > +       err = virtio_reset_vq(rq->vq);
> > +       if (err)
> > +               goto err;
> > +
> > +       virtnet_rq_free_unused_bufs(vi, rq);
> > +
> > +       vring_release_virtqueue(rq->vq);
> > +
> > +       return 0;
> > +
> > +err:
> > +       virtnet_napi_enable(rq->vq, &rq->napi);
> > +       return err;
> > +}
> > +
> > +static int virtnet_tx_vq_disable(struct virtnet_info *vi,
> > +                                struct send_queue *sq)
> > +{
> > +       struct netdev_queue *txq;
> > +       int err, qindex;
> > +
> > +       qindex = sq - vi->sq;
> > +
> > +       txq = netdev_get_tx_queue(vi->dev, qindex);
> > +       __netif_tx_lock_bh(txq);
> > +
> > +       netif_stop_subqueue(vi->dev, qindex);
> > +       virtnet_napi_tx_disable(&sq->napi);
> > +
> > +       err = virtio_reset_vq(sq->vq);
> > +       if (err) {
> > +               virtnet_napi_tx_enable(vi, sq->vq, &sq->napi);
> > +               netif_start_subqueue(vi->dev, qindex);
> > +
> > +               __netif_tx_unlock_bh(txq);
> > +               return err;
> > +       }
> > +       __netif_tx_unlock_bh(txq);
> > +
> > +       virtnet_sq_free_unused_bufs(vi, sq);
> > +
> > +       vring_release_virtqueue(sq->vq);
> > +
> > +       return 0;
> > +}
> > +
> > +static int virtnet_tx_vq_enable(struct virtnet_info *vi, struct send_queue *sq)
> > +{
> > +       int err;
> > +
> > +       err = virtio_enable_resetq(sq->vq);
> > +       if (!err)
> > +               netif_start_subqueue(vi->dev, sq - vi->sq);
> > +
> > +       virtnet_napi_tx_enable(vi, sq->vq, &sq->napi);
> > +
> > +       return err;
> > +}
> > +
> > +static int virtnet_rx_vq_enable(struct virtnet_info *vi,
> > +                               struct receive_queue *rq)
> > +{
> > +       int err;
>
> So the API should be designed in a consistent way.
>
> In rx_vq_disable() we do:
>
> reset()
> detach_unused_bufs()
> vring_release_virtqueue()
>
> here it's better to do exactly the reverse
>
> vring_attach_virtqueue() // this is the helper I guess in patch 5,
> reverse of the vring_release_virtqueue()
> try_refill_recv() // reverse of the detach_unused_bufs()
> enable_reset() // reverse of the reset

Such an api is ok

1. reset()
2. detach_unused_bufs()
3. vring_release_virtqueue()
   ---------------
4. vring_attach_virtqueue()
5. try_refill_recv()
6. enable_reset()


But what if we just want to recycle the buffers without modifying the ring num? As
you mentioned before, in the case where the ring num is not modified, we don't
have to reallocate, and can use the original vring.

1. reset()
2. detach_unused_bufs()
   ---------------
3. vring_reset_virtqueue() // just reset, no reallocate
4. try_refill_recv()
5. enable_reset()

Thanks.
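
As a sketch of the symmetric enable path, the rx side could look roughly like
the code below; vring_attach_virtqueue() is the hypothetical counterpart of
vring_release_virtqueue() and is not part of this series, while
try_fill_recv() and the refill work already exist in virtio_net.c.

static int virtnet_rx_vq_enable(struct virtnet_info *vi,
                                struct receive_queue *rq)
{
        int err;

        /* Reverse of vring_release_virtqueue(): reallocate/attach the ring. */
        err = vring_attach_virtqueue(rq->vq);

        /* Reverse of freeing the unused buffers. */
        if (!err && !try_fill_recv(vi, rq, GFP_KERNEL))
                schedule_delayed_work(&vi->refill, 0);

        /* Reverse of virtio_reset_vq(). */
        if (!err)
                err = virtio_enable_resetq(rq->vq);

        /* napi_disable() was done in the disable path, so napi_enable()
         * must run even on error. */
        virtnet_napi_enable(rq->vq, &rq->napi);

        return err;
}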

>
> The same goes for the tx (no need for refill in that case).
>
> > +
> > +       err = virtio_enable_resetq(rq->vq);
> > +
> > +       virtnet_napi_enable(rq->vq, &rq->napi);
> > +
> > +       return err;
> > +}
> > +
> > +static int virtnet_rx_vq_reset(struct virtnet_info *vi, int i)
> > +{
> > +       int err;
> > +
> > +       err = virtnet_rx_vq_disable(vi, vi->rq + i);
> > +       if (err)
> > +               return err;
> > +
> > +       err = virtnet_rx_vq_enable(vi, vi->rq + i);
> > +       if (err)
> > +               netdev_err(vi->dev,
> > +                          "enable rx reset vq fail: rx queue index: %d err: %d\n", i, err);
> > +       return err;
> > +}
> > +
> > +static int virtnet_tx_vq_reset(struct virtnet_info *vi, int i)
> > +{
> > +       int err;
> > +
> > +       err = virtnet_tx_vq_disable(vi, vi->sq + i);
> > +       if (err)
> > +               return err;
> > +
> > +       err = virtnet_tx_vq_enable(vi, vi->sq + i);
> > +       if (err)
> > +               netdev_err(vi->dev,
> > +                          "enable tx reset vq fail: tx queue index: %d err: %d\n", i, err);
> > +       return err;
> > +}
> > +
> >  /*
> >   * Send command via the control virtqueue and check status.  Commands
> >   * supported by the hypervisor, as indicated by feature bits, should
> > --
> > 2.31.0
> >
>

^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: [PATCH v5 14/22] virtio_pci: queue_reset: support VIRTIO_F_RING_RESET
  2022-02-16  4:14   ` Jason Wang
@ 2022-02-16  8:03     ` Xuan Zhuo
  2022-02-17  7:25       ` Jason Wang
  0 siblings, 1 reply; 42+ messages in thread
From: Xuan Zhuo @ 2022-02-16  8:03 UTC (permalink / raw)
  To: Jason Wang
  Cc: virtualization, netdev, Michael S. Tsirkin, David S. Miller,
	Jakub Kicinski, Alexei Starovoitov, Daniel Borkmann,
	Jesper Dangaard Brouer, John Fastabend, bpf

On Wed, 16 Feb 2022 12:14:25 +0800, Jason Wang <jasowang@redhat.com> wrote:
> On Mon, Feb 14, 2022 at 4:14 PM Xuan Zhuo <xuanzhuo@linux.alibaba.com> wrote:
> >
> > This patch implements virtio pci support for QUEUE RESET.
> >
> > Performing reset on a queue is divided into these steps:
> >
> > 1. reset_vq: reset one vq
> > 2. recycle the buffer from vq by virtqueue_detach_unused_buf()
> > 3. release the ring of the vq by vring_release_virtqueue()
> > 4. enable_reset_vq: re-enable the reset queue
> >
> > This patch implements reset_vq, enable_reset_vq in the pci scenario.
> >
> > Signed-off-by: Xuan Zhuo <xuanzhuo@linux.alibaba.com>
> > ---
> >  drivers/virtio/virtio_pci_common.c |  8 ++--
> >  drivers/virtio/virtio_pci_modern.c | 60 ++++++++++++++++++++++++++++++
> >  2 files changed, 65 insertions(+), 3 deletions(-)
> >
> > diff --git a/drivers/virtio/virtio_pci_common.c b/drivers/virtio/virtio_pci_common.c
> > index 5a4f750a0b97..9ea319b1d404 100644
> > --- a/drivers/virtio/virtio_pci_common.c
> > +++ b/drivers/virtio/virtio_pci_common.c
> > @@ -255,9 +255,11 @@ static void vp_del_vq(struct virtqueue *vq)
> >         struct virtio_pci_vq_info *info = vp_dev->vqs[vq->index];
> >         unsigned long flags;
> >
> > -       spin_lock_irqsave(&vp_dev->lock, flags);
> > -       list_del(&info->node);
> > -       spin_unlock_irqrestore(&vp_dev->lock, flags);
> > +       if (!vq->reset) {
> > +               spin_lock_irqsave(&vp_dev->lock, flags);
> > +               list_del(&info->node);
> > +               spin_unlock_irqrestore(&vp_dev->lock, flags);
> > +       }
> >
> >         vp_dev->del_vq(info);
> >         kfree(info);
> > diff --git a/drivers/virtio/virtio_pci_modern.c b/drivers/virtio/virtio_pci_modern.c
> > index bed3e9b84272..7d28f4c36fc2 100644
> > --- a/drivers/virtio/virtio_pci_modern.c
> > +++ b/drivers/virtio/virtio_pci_modern.c
> > @@ -34,6 +34,9 @@ static void vp_transport_features(struct virtio_device *vdev, u64 features)
> >         if ((features & BIT_ULL(VIRTIO_F_SR_IOV)) &&
> >                         pci_find_ext_capability(pci_dev, PCI_EXT_CAP_ID_SRIOV))
> >                 __virtio_set_bit(vdev, VIRTIO_F_SR_IOV);
> > +
> > +       if (features & BIT_ULL(VIRTIO_F_RING_RESET))
> > +               __virtio_set_bit(vdev, VIRTIO_F_RING_RESET);
> >  }
> >
> >  /* virtio config->finalize_features() implementation */
> > @@ -176,6 +179,59 @@ static void vp_reset(struct virtio_device *vdev)
> >         vp_disable_cbs(vdev);
> >  }
> >
> > +static int vp_modern_reset_vq(struct virtqueue *vq)
> > +{
> > +       struct virtio_pci_device *vp_dev = to_vp_device(vq->vdev);
> > +       struct virtio_pci_modern_device *mdev = &vp_dev->mdev;
> > +       struct virtio_pci_vq_info *info;
> > +       unsigned long flags;
> > +
> > +       if (!virtio_has_feature(vq->vdev, VIRTIO_F_RING_RESET))
> > +               return -ENOENT;
> > +
> > +       vp_modern_set_queue_reset(mdev, vq->index);
> > +
> > +       info = vp_dev->vqs[vq->index];
> > +
>
> Any reason that we don't need to disable irq here as the previous versions did?

Based on the spec, for the case of one interrupt per queue, there will be no
more interrupts after the reset queue operation. Whether the interrupt is turned
off or not has no effect. I turned off the interrupt before just to be safe.

And for irq sharing scenarios, I don't want to turn off shared interrupts for a
queue.

And the following list_del has been guaranteed to be safe, so I removed the code
for closing interrupts in the previous version.

Thanks.

>
>
> > +       /* delete vq from irq handler */
> > +       spin_lock_irqsave(&vp_dev->lock, flags);
> > +       list_del(&info->node);
> > +       spin_unlock_irqrestore(&vp_dev->lock, flags);
> > +
> > +       INIT_LIST_HEAD(&info->node);
> > +
> > +       vq->reset = VIRTQUEUE_RESET_STAGE_DEVICE;
> > +
> > +       return 0;
> > +}
> > +
> > +static int vp_modern_enable_reset_vq(struct virtqueue *vq)
> > +{
> > +       struct virtio_pci_device *vp_dev = to_vp_device(vq->vdev);
> > +       struct virtio_pci_modern_device *mdev = &vp_dev->mdev;
> > +       struct virtio_pci_vq_info *info;
> > +       struct virtqueue *_vq;
> > +
> > +       if (vq->reset != VIRTQUEUE_RESET_STAGE_RELEASE)
> > +               return -EBUSY;
> > +
> > +       /* check queue reset status */
> > +       if (vp_modern_get_queue_reset(mdev, vq->index) != 1)
> > +               return -EBUSY;
> > +
> > +       info = vp_dev->vqs[vq->index];
> > +       _vq = vp_setup_vq(vq->vdev, vq->index, NULL, NULL, NULL,
> > +                        info->msix_vector);
>
> So we only care about modern devices, which means using vp_setup_vq()
> with NULL seems tricky.
>
> As replied in another thread, I would simply ask the caller to call
> the vring reallocation helper. See the reply for patch 17.
>
> Thanks
>
>
> > +       if (IS_ERR(_vq)) {
> > +               vq->reset = VIRTQUEUE_RESET_STAGE_RELEASE;
> > +               return PTR_ERR(_vq);
> > +       }
> > +
> > +       vp_modern_set_queue_enable(&vp_dev->mdev, vq->index, true);
> > +
> > +       return 0;
> > +}
> > +
> >  static u16 vp_config_vector(struct virtio_pci_device *vp_dev, u16 vector)
> >  {
> >         return vp_modern_config_vector(&vp_dev->mdev, vector);
> > @@ -397,6 +453,8 @@ static const struct virtio_config_ops virtio_pci_config_nodev_ops = {
> >         .set_vq_affinity = vp_set_vq_affinity,
> >         .get_vq_affinity = vp_get_vq_affinity,
> >         .get_shm_region  = vp_get_shm_region,
> > +       .reset_vq        = vp_modern_reset_vq,
> > +       .enable_reset_vq = vp_modern_enable_reset_vq,
> >  };
> >
> >  static const struct virtio_config_ops virtio_pci_config_ops = {
> > @@ -415,6 +473,8 @@ static const struct virtio_config_ops virtio_pci_config_ops = {
> >         .set_vq_affinity = vp_set_vq_affinity,
> >         .get_vq_affinity = vp_get_vq_affinity,
> >         .get_shm_region  = vp_get_shm_region,
> > +       .reset_vq        = vp_modern_reset_vq,
> > +       .enable_reset_vq = vp_modern_enable_reset_vq,
> >  };
> >
> >  /* the PCI probing function */
> > --
> > 2.31.0
> >
>

^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: [PATCH v5 17/22] virtio_net: support rx/tx queue reset
  2022-02-16  7:56     ` Xuan Zhuo
@ 2022-02-16  8:35       ` Michael S. Tsirkin
  2022-02-16  8:42         ` Xuan Zhuo
  0 siblings, 1 reply; 42+ messages in thread
From: Michael S. Tsirkin @ 2022-02-16  8:35 UTC (permalink / raw)
  To: Xuan Zhuo
  Cc: Jason Wang, virtualization, netdev, David S. Miller,
	Jakub Kicinski, Alexei Starovoitov, Daniel Borkmann,
	Jesper Dangaard Brouer, John Fastabend, bpf

On Wed, Feb 16, 2022 at 03:56:13PM +0800, Xuan Zhuo wrote:
> On Wed, 16 Feb 2022 12:14:11 +0800, Jason Wang <jasowang@redhat.com> wrote:
> > On Mon, Feb 14, 2022 at 4:14 PM Xuan Zhuo <xuanzhuo@linux.alibaba.com> wrote:
> > >
> > > This patch implements the reset function of the rx, tx queues.
> > >
> > > Based on this function, it is possible to modify the ring num of the
> > > queue and to quickly recycle the buffers in the queue.
> > >
> > > During queue disable, in theory, as long as virtio supports queue
> > > reset, nothing should fail.
> > >
> > > However, during queue enable, failures may occur due to memory
> > > allocation.  In this case, the vq is not available, but we still have
> > > to execute napi_enable(): napi_disable() is similar to taking a lock,
> > > so napi_enable() must be called after every napi_disable().
> > >
> > > Signed-off-by: Xuan Zhuo <xuanzhuo@linux.alibaba.com>
> > > ---
> > >  drivers/net/virtio_net.c | 123 +++++++++++++++++++++++++++++++++++++++
> > >  1 file changed, 123 insertions(+)
> > >
> > > diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
> > > index 9a1445236e23..a4ffd7cdf623 100644
> > > --- a/drivers/net/virtio_net.c
> > > +++ b/drivers/net/virtio_net.c
> > > @@ -251,6 +251,11 @@ struct padded_vnet_hdr {
> > >         char padding[4];
> > >  };
> > >
> > > +static void virtnet_sq_free_unused_bufs(struct virtnet_info *vi,
> > > +                                       struct send_queue *sq);
> > > +static void virtnet_rq_free_unused_bufs(struct virtnet_info *vi,
> > > +                                       struct receive_queue *rq);
> > > +
> > >  static bool is_xdp_frame(void *ptr)
> > >  {
> > >         return (unsigned long)ptr & VIRTIO_XDP_FLAG;
> > > @@ -1369,6 +1374,9 @@ static void virtnet_napi_enable(struct virtqueue *vq, struct napi_struct *napi)
> > >  {
> > >         napi_enable(napi);
> > >
> > > +       if (vq->reset)
> > > +               return;
> > > +
> > >         /* If all buffers were filled by other side before we napi_enabled, we
> > >          * won't get another interrupt, so process any outstanding packets now.
> > >          * Call local_bh_enable after to trigger softIRQ processing.
> > > @@ -1413,6 +1421,10 @@ static void refill_work(struct work_struct *work)
> > >                 struct receive_queue *rq = &vi->rq[i];
> > >
> > >                 napi_disable(&rq->napi);
> > > +               if (rq->vq->reset) {
> > > +                       virtnet_napi_enable(rq->vq, &rq->napi);
> > > +                       continue;
> > > +               }
> > >                 still_empty = !try_fill_recv(vi, rq, GFP_KERNEL);
> > >                 virtnet_napi_enable(rq->vq, &rq->napi);
> > >
> > > @@ -1523,6 +1535,9 @@ static void virtnet_poll_cleantx(struct receive_queue *rq)
> > >         if (!sq->napi.weight || is_xdp_raw_buffer_queue(vi, index))
> > >                 return;
> > >
> > > +       if (sq->vq->reset)
> > > +               return;
> > > +
> > >         if (__netif_tx_trylock(txq)) {
> > >                 do {
> > >                         virtqueue_disable_cb(sq->vq);
> > > @@ -1769,6 +1784,114 @@ static netdev_tx_t start_xmit(struct sk_buff *skb, struct net_device *dev)
> > >         return NETDEV_TX_OK;
> > >  }
> > >
> > > +static int virtnet_rx_vq_disable(struct virtnet_info *vi,
> > > +                                struct receive_queue *rq)
> > > +{
> > > +       int err;
> > > +
> > > +       napi_disable(&rq->napi);
> > > +
> > > +       err = virtio_reset_vq(rq->vq);
> > > +       if (err)
> > > +               goto err;
> > > +
> > > +       virtnet_rq_free_unused_bufs(vi, rq);
> > > +
> > > +       vring_release_virtqueue(rq->vq);
> > > +
> > > +       return 0;
> > > +
> > > +err:
> > > +       virtnet_napi_enable(rq->vq, &rq->napi);
> > > +       return err;
> > > +}
> > > +
> > > +static int virtnet_tx_vq_disable(struct virtnet_info *vi,
> > > +                                struct send_queue *sq)
> > > +{
> > > +       struct netdev_queue *txq;
> > > +       int err, qindex;
> > > +
> > > +       qindex = sq - vi->sq;
> > > +
> > > +       txq = netdev_get_tx_queue(vi->dev, qindex);
> > > +       __netif_tx_lock_bh(txq);
> > > +
> > > +       netif_stop_subqueue(vi->dev, qindex);
> > > +       virtnet_napi_tx_disable(&sq->napi);
> > > +
> > > +       err = virtio_reset_vq(sq->vq);
> > > +       if (err) {
> > > +               virtnet_napi_tx_enable(vi, sq->vq, &sq->napi);
> > > +               netif_start_subqueue(vi->dev, qindex);
> > > +
> > > +               __netif_tx_unlock_bh(txq);
> > > +               return err;
> > > +       }
> > > +       __netif_tx_unlock_bh(txq);
> > > +
> > > +       virtnet_sq_free_unused_bufs(vi, sq);
> > > +
> > > +       vring_release_virtqueue(sq->vq);
> > > +
> > > +       return 0;
> > > +}
> > > +
> > > +static int virtnet_tx_vq_enable(struct virtnet_info *vi, struct send_queue *sq)
> > > +{
> > > +       int err;
> > > +
> > > +       err = virtio_enable_resetq(sq->vq);
> > > +       if (!err)
> > > +               netif_start_subqueue(vi->dev, sq - vi->sq);
> > > +
> > > +       virtnet_napi_tx_enable(vi, sq->vq, &sq->napi);
> > > +
> > > +       return err;
> > > +}
> > > +
> > > +static int virtnet_rx_vq_enable(struct virtnet_info *vi,
> > > +                               struct receive_queue *rq)
> > > +{
> > > +       int err;
> >
> > So the API should be designed in a consistent way.
> >
> > In rx_vq_disable() we do:
> >
> > reset()
> > detach_unused_bufs()
> > vring_release_virtqueue()
> >
> > here it's better to do exactly the reverse
> >
> > vring_attach_virtqueue() // this is the helper I guess in patch 5,
> > reverse of the vring_release_virtqueue()
> > try_refill_recv() // reverse of the detach_unused_bufs()
> > enable_reset() // reverse of the reset
> 
> Such an api is ok
> 
> 1. reset()
> 2. detach_unused_bufs()
> 3. vring_release_virtqueue()
>    ---------------
> 4. vring_attach_virtqueue()
> 5. try_refill_recv()
> 6. enable_reset()
> 
> 
> But what if we just want to recycle the buffers without modifying the ring num? As
> you mentioned before, in the case where the ring num is not modified, we don't
> have to reallocate, and can use the original vring.
> 
> 1. reset()
> 2. detach_unused_bufs()
>    ---------------
> 3. vring_reset_virtqueue() // just reset, no reallocate
> 4. try_refill_recv()
> 5. enable_reset()
> 
> Thanks.

Further, can we queue the buffers instead of detach_unused_bufs
and just requeue them instead of try_refill_recv?

> >
> > The same goes for the tx (no need for refill in that case).
> >
> > > +
> > > +       err = virtio_enable_resetq(rq->vq);
> > > +
> > > +       virtnet_napi_enable(rq->vq, &rq->napi);
> > > +
> > > +       return err;
> > > +}
> > > +
> > > +static int virtnet_rx_vq_reset(struct virtnet_info *vi, int i)
> > > +{
> > > +       int err;
> > > +
> > > +       err = virtnet_rx_vq_disable(vi, vi->rq + i);
> > > +       if (err)
> > > +               return err;
> > > +
> > > +       err = virtnet_rx_vq_enable(vi, vi->rq + i);
> > > +       if (err)
> > > +               netdev_err(vi->dev,
> > > +                          "enable rx reset vq fail: rx queue index: %d err: %d\n", i, err);
> > > +       return err;
> > > +}
> > > +
> > > +static int virtnet_tx_vq_reset(struct virtnet_info *vi, int i)
> > > +{
> > > +       int err;
> > > +
> > > +       err = virtnet_tx_vq_disable(vi, vi->sq + i);
> > > +       if (err)
> > > +               return err;
> > > +
> > > +       err = virtnet_tx_vq_enable(vi, vi->sq + i);
> > > +       if (err)
> > > +               netdev_err(vi->dev,
> > > +                          "enable tx reset vq fail: tx queue index: %d err: %d\n", i, err);
> > > +       return err;
> > > +}
> > > +
> > >  /*
> > >   * Send command via the control virtqueue and check status.  Commands
> > >   * supported by the hypervisor, as indicated by feature bits, should
> > > --
> > > 2.31.0
> > >
> >


^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: [PATCH v5 17/22] virtio_net: support rx/tx queue reset
  2022-02-16  8:35       ` Michael S. Tsirkin
@ 2022-02-16  8:42         ` Xuan Zhuo
  0 siblings, 0 replies; 42+ messages in thread
From: Xuan Zhuo @ 2022-02-16  8:42 UTC (permalink / raw)
  To: Michael S. Tsirkin
  Cc: Jason Wang, virtualization, netdev, David S. Miller,
	Jakub Kicinski, Alexei Starovoitov, Daniel Borkmann,
	Jesper Dangaard Brouer, John Fastabend, bpf

On Wed, 16 Feb 2022 03:35:01 -0500, "Michael S. Tsirkin" <mst@redhat.com> wrote:
> On Wed, Feb 16, 2022 at 03:56:13PM +0800, Xuan Zhuo wrote:
> > On Wed, 16 Feb 2022 12:14:11 +0800, Jason Wang <jasowang@redhat.com> wrote:
> > > On Mon, Feb 14, 2022 at 4:14 PM Xuan Zhuo <xuanzhuo@linux.alibaba.com> wrote:
> > > >
> > > > This patch implements the reset function of the rx, tx queues.
> > > >
> > > > Based on this function, it is possible to modify the ring num of the
> > > > queue and to quickly recycle the buffers in the queue.
> > > >
> > > > During queue disable, in theory, as long as virtio supports queue
> > > > reset, nothing should fail.
> > > >
> > > > However, during queue enable, failures may occur due to memory
> > > > allocation.  In this case, the vq is not available, but we still have
> > > > to execute napi_enable(): napi_disable() is similar to taking a lock,
> > > > so napi_enable() must be called after every napi_disable().
> > > >
> > > > Signed-off-by: Xuan Zhuo <xuanzhuo@linux.alibaba.com>
> > > > ---
> > > >  drivers/net/virtio_net.c | 123 +++++++++++++++++++++++++++++++++++++++
> > > >  1 file changed, 123 insertions(+)
> > > >
> > > > diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
> > > > index 9a1445236e23..a4ffd7cdf623 100644
> > > > --- a/drivers/net/virtio_net.c
> > > > +++ b/drivers/net/virtio_net.c
> > > > @@ -251,6 +251,11 @@ struct padded_vnet_hdr {
> > > >         char padding[4];
> > > >  };
> > > >
> > > > +static void virtnet_sq_free_unused_bufs(struct virtnet_info *vi,
> > > > +                                       struct send_queue *sq);
> > > > +static void virtnet_rq_free_unused_bufs(struct virtnet_info *vi,
> > > > +                                       struct receive_queue *rq);
> > > > +
> > > >  static bool is_xdp_frame(void *ptr)
> > > >  {
> > > >         return (unsigned long)ptr & VIRTIO_XDP_FLAG;
> > > > @@ -1369,6 +1374,9 @@ static void virtnet_napi_enable(struct virtqueue *vq, struct napi_struct *napi)
> > > >  {
> > > >         napi_enable(napi);
> > > >
> > > > +       if (vq->reset)
> > > > +               return;
> > > > +
> > > >         /* If all buffers were filled by other side before we napi_enabled, we
> > > >          * won't get another interrupt, so process any outstanding packets now.
> > > >          * Call local_bh_enable after to trigger softIRQ processing.
> > > > @@ -1413,6 +1421,10 @@ static void refill_work(struct work_struct *work)
> > > >                 struct receive_queue *rq = &vi->rq[i];
> > > >
> > > >                 napi_disable(&rq->napi);
> > > > +               if (rq->vq->reset) {
> > > > +                       virtnet_napi_enable(rq->vq, &rq->napi);
> > > > +                       continue;
> > > > +               }
> > > >                 still_empty = !try_fill_recv(vi, rq, GFP_KERNEL);
> > > >                 virtnet_napi_enable(rq->vq, &rq->napi);
> > > >
> > > > @@ -1523,6 +1535,9 @@ static void virtnet_poll_cleantx(struct receive_queue *rq)
> > > >         if (!sq->napi.weight || is_xdp_raw_buffer_queue(vi, index))
> > > >                 return;
> > > >
> > > > +       if (sq->vq->reset)
> > > > +               return;
> > > > +
> > > >         if (__netif_tx_trylock(txq)) {
> > > >                 do {
> > > >                         virtqueue_disable_cb(sq->vq);
> > > > @@ -1769,6 +1784,114 @@ static netdev_tx_t start_xmit(struct sk_buff *skb, struct net_device *dev)
> > > >         return NETDEV_TX_OK;
> > > >  }
> > > >
> > > > +static int virtnet_rx_vq_disable(struct virtnet_info *vi,
> > > > +                                struct receive_queue *rq)
> > > > +{
> > > > +       int err;
> > > > +
> > > > +       napi_disable(&rq->napi);
> > > > +
> > > > +       err = virtio_reset_vq(rq->vq);
> > > > +       if (err)
> > > > +               goto err;
> > > > +
> > > > +       virtnet_rq_free_unused_bufs(vi, rq);
> > > > +
> > > > +       vring_release_virtqueue(rq->vq);
> > > > +
> > > > +       return 0;
> > > > +
> > > > +err:
> > > > +       virtnet_napi_enable(rq->vq, &rq->napi);
> > > > +       return err;
> > > > +}
> > > > +
> > > > +static int virtnet_tx_vq_disable(struct virtnet_info *vi,
> > > > +                                struct send_queue *sq)
> > > > +{
> > > > +       struct netdev_queue *txq;
> > > > +       int err, qindex;
> > > > +
> > > > +       qindex = sq - vi->sq;
> > > > +
> > > > +       txq = netdev_get_tx_queue(vi->dev, qindex);
> > > > +       __netif_tx_lock_bh(txq);
> > > > +
> > > > +       netif_stop_subqueue(vi->dev, qindex);
> > > > +       virtnet_napi_tx_disable(&sq->napi);
> > > > +
> > > > +       err = virtio_reset_vq(sq->vq);
> > > > +       if (err) {
> > > > +               virtnet_napi_tx_enable(vi, sq->vq, &sq->napi);
> > > > +               netif_start_subqueue(vi->dev, qindex);
> > > > +
> > > > +               __netif_tx_unlock_bh(txq);
> > > > +               return err;
> > > > +       }
> > > > +       __netif_tx_unlock_bh(txq);
> > > > +
> > > > +       virtnet_sq_free_unused_bufs(vi, sq);
> > > > +
> > > > +       vring_release_virtqueue(sq->vq);
> > > > +
> > > > +       return 0;
> > > > +}
> > > > +
> > > > +static int virtnet_tx_vq_enable(struct virtnet_info *vi, struct send_queue *sq)
> > > > +{
> > > > +       int err;
> > > > +
> > > > +       err = virtio_enable_resetq(sq->vq);
> > > > +       if (!err)
> > > > +               netif_start_subqueue(vi->dev, sq - vi->sq);
> > > > +
> > > > +       virtnet_napi_tx_enable(vi, sq->vq, &sq->napi);
> > > > +
> > > > +       return err;
> > > > +}
> > > > +
> > > > +static int virtnet_rx_vq_enable(struct virtnet_info *vi,
> > > > +                               struct receive_queue *rq)
> > > > +{
> > > > +       int err;
> > >
> > > So the API should be designed in a consistent way.
> > >
> > > In rx_vq_disable() we do:
> > >
> > > reset()
> > > detach_unused_bufs()
> > > vring_release_virtqueue()
> > >
> > > here it's better to do exactly the reverse
> > >
> > > vring_attach_virtqueue() // this is the helper I guess in patch 5,
> > > reverse of the vring_release_virtqueue()
> > > try_refill_recv() // reverse of the detach_unused_bufs()
> > > enable_reset() // reverse of the reset
> >
> > Such an api is ok
> >
> > 1. reset()
> > 2. detach_unused_bufs()
> > 3. vring_release_virtqueue()
> >    ---------------
> > 4. vring_attach_virtqueue()
> > 5. try_refill_recv()
> > 6. enable_reset()
> >
> >
> > But what if we just want to recycle the buffers without modifying the ring num? As
> > you mentioned before, in the case where the ring num is not modified, we don't
> > have to reallocate, and can use the original vring.
> >
> > 1. reset()
> > 2. detach_unused_bufs()
> >    ---------------
> > 3. vring_reset_virtqueue() // just reset, no reallocate
> > 4. try_refill_recv()
> > 5. enable_reset()
> >
> > Thanks.
>
> Further, can we queue the buffers instead of detach_unused_bufs
> and just requeue them instead of try_refill_recv?

I think this is a good note; for supporting set_ringparam, this will be more
friendly.

I think I can implement this after the support for AF_XDP is done.

Thanks.
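
For illustration, the rough shape of that idea for the simplest case (one
contiguous area per rx buffer) is sketched below; the real code would need
the same per-mode handling as try_fill_recv() (small/big/mergeable buffers),
so treat this purely as a sketch around the reset of rq->vq.

        struct scatterlist sg;
        void **stash, *buf;
        int n = 0, i;

        stash = kcalloc(virtqueue_get_vring_size(rq->vq), sizeof(*stash),
                        GFP_KERNEL);
        if (!stash)
                return -ENOMEM;

        /* Keep the old buffers instead of freeing them. */
        while ((buf = virtqueue_detach_unused_buf(rq->vq)))
                stash[n++] = buf;

        /* ... reset and re-enable the vq here ... */

        /* Give the same buffers back to the device. */
        for (i = 0; i < n; i++) {
                sg_init_one(&sg, stash[i], GOOD_PACKET_LEN);
                virtqueue_add_inbuf(rq->vq, &sg, 1, stash[i], GFP_KERNEL);
        }
        virtqueue_kick(rq->vq);
        kfree(stash);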

>
> > >
> > > The same goes for the tx (no need for refill in that case).
> > >
> > > > +
> > > > +       err = virtio_enable_resetq(rq->vq);
> > > > +
> > > > +       virtnet_napi_enable(rq->vq, &rq->napi);
> > > > +
> > > > +       return err;
> > > > +}
> > > > +
> > > > +static int virtnet_rx_vq_reset(struct virtnet_info *vi, int i)
> > > > +{
> > > > +       int err;
> > > > +
> > > > +       err = virtnet_rx_vq_disable(vi, vi->rq + i);
> > > > +       if (err)
> > > > +               return err;
> > > > +
> > > > +       err = virtnet_rx_vq_enable(vi, vi->rq + i);
> > > > +       if (err)
> > > > +               netdev_err(vi->dev,
> > > > +                          "enable rx reset vq fail: rx queue index: %d err: %d\n", i, err);
> > > > +       return err;
> > > > +}
> > > > +
> > > > +static int virtnet_tx_vq_reset(struct virtnet_info *vi, int i)
> > > > +{
> > > > +       int err;
> > > > +
> > > > +       err = virtnet_tx_vq_disable(vi, vi->sq + i);
> > > > +       if (err)
> > > > +               return err;
> > > > +
> > > > +       err = virtnet_tx_vq_enable(vi, vi->sq + i);
> > > > +       if (err)
> > > > +               netdev_err(vi->dev,
> > > > +                          "enable tx reset vq fail: tx queue index: %d err: %d\n", i, err);
> > > > +       return err;
> > > > +}
> > > > +
> > > >  /*
> > > >   * Send command via the control virtqueue and check status.  Commands
> > > >   * supported by the hypervisor, as indicated by feature bits, should
> > > > --
> > > > 2.31.0
> > > >
> > >
>

^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: [PATCH v5 20/22] virtio_net: set the default max ring num
  2022-02-16  7:46     ` Xuan Zhuo
@ 2022-02-17  7:21       ` Jason Wang
  2022-02-17  9:30         ` Xuan Zhuo
  0 siblings, 1 reply; 42+ messages in thread
From: Jason Wang @ 2022-02-17  7:21 UTC (permalink / raw)
  To: Xuan Zhuo
  Cc: virtualization, netdev, Michael S. Tsirkin, David S. Miller,
	Jakub Kicinski, Alexei Starovoitov, Daniel Borkmann,
	Jesper Dangaard Brouer, John Fastabend, bpf

On Wed, Feb 16, 2022 at 3:52 PM Xuan Zhuo <xuanzhuo@linux.alibaba.com> wrote:
>
> On Wed, 16 Feb 2022 12:14:31 +0800, Jason Wang <jasowang@redhat.com> wrote:
> > On Mon, Feb 14, 2022 at 4:14 PM Xuan Zhuo <xuanzhuo@linux.alibaba.com> wrote:
> > >
> > > Sets the default maximum ring num based on virtio_set_max_ring_num().
> > >
> > > The default maximum ring num is 1024.
> >
> > Having a default value is pretty useful, I see 32K is used by default for IFCVF.
> >
> > Rethink this, how about having a different default value based on the speed?
> >
> > Without SPEED_DUPLEX, we use 1024. Otherwise
> >
> > 10g 4096
> > 40g 8192
>
> We can define different default values of tx and rx by the way. This way I can
> just use it in the new interface of find_vqs().
>
> without SPEED_DUPLEX:  tx 512 rx 1024
>

Any reason that TX is smaller than RX?

Thanks

> Thanks.
>
>
> >
> > etc.
> >
> > (The number are just copied from the 10g/40g default parameter from
> > other vendors)
> >
> > Thanks
> >
> > >
> > > Signed-off-by: Xuan Zhuo <xuanzhuo@linux.alibaba.com>
> > > ---
> > >  drivers/net/virtio_net.c | 4 ++++
> > >  1 file changed, 4 insertions(+)
> > >
> > > diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
> > > index a4ffd7cdf623..77e61fe0b2ce 100644
> > > --- a/drivers/net/virtio_net.c
> > > +++ b/drivers/net/virtio_net.c
> > > @@ -35,6 +35,8 @@ module_param(napi_tx, bool, 0644);
> > >  #define GOOD_PACKET_LEN (ETH_HLEN + VLAN_HLEN + ETH_DATA_LEN)
> > >  #define GOOD_COPY_LEN  128
> > >
> > > +#define VIRTNET_DEFAULT_MAX_RING_NUM 1024
> > > +
> > >  #define VIRTNET_RX_PAD (NET_IP_ALIGN + NET_SKB_PAD)
> > >
> > >  /* Amount of XDP headroom to prepend to packets for use by xdp_adjust_head */
> > > @@ -3045,6 +3047,8 @@ static int virtnet_find_vqs(struct virtnet_info *vi)
> > >                         ctx[rxq2vq(i)] = true;
> > >         }
> > >
> > > +       virtio_set_max_ring_num(vi->vdev, VIRTNET_DEFAULT_MAX_RING_NUM);
> > > +
> > >         ret = virtio_find_vqs_ctx(vi->vdev, total_vqs, vqs, callbacks,
> > >                                   names, ctx, NULL);
> > >         if (ret)
> > > --
> > > 2.31.0
> > >
> >
>


^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: [PATCH v5 14/22] virtio_pci: queue_reset: support VIRTIO_F_RING_RESET
  2022-02-16  8:03     ` Xuan Zhuo
@ 2022-02-17  7:25       ` Jason Wang
  0 siblings, 0 replies; 42+ messages in thread
From: Jason Wang @ 2022-02-17  7:25 UTC (permalink / raw)
  To: Xuan Zhuo
  Cc: virtualization, netdev, Michael S. Tsirkin, David S. Miller,
	Jakub Kicinski, Alexei Starovoitov, Daniel Borkmann,
	Jesper Dangaard Brouer, John Fastabend, bpf

On Wed, Feb 16, 2022 at 4:08 PM Xuan Zhuo <xuanzhuo@linux.alibaba.com> wrote:
>
> On Wed, 16 Feb 2022 12:14:25 +0800, Jason Wang <jasowang@redhat.com> wrote:
> > On Mon, Feb 14, 2022 at 4:14 PM Xuan Zhuo <xuanzhuo@linux.alibaba.com> wrote:
> > >
> > > This patch implements virtio pci support for QUEUE RESET.
> > >
> > > Performing reset on a queue is divided into these steps:
> > >
> > > 1. reset_vq: reset one vq
> > > 2. recycle the buffer from vq by virtqueue_detach_unused_buf()
> > > 3. release the ring of the vq by vring_release_virtqueue()
> > > 4. enable_reset_vq: re-enable the reset queue
> > >
> > > This patch implements reset_vq, enable_reset_vq in the pci scenario.
> > >
> > > Signed-off-by: Xuan Zhuo <xuanzhuo@linux.alibaba.com>
> > > ---
> > >  drivers/virtio/virtio_pci_common.c |  8 ++--
> > >  drivers/virtio/virtio_pci_modern.c | 60 ++++++++++++++++++++++++++++++
> > >  2 files changed, 65 insertions(+), 3 deletions(-)
> > >
> > > diff --git a/drivers/virtio/virtio_pci_common.c b/drivers/virtio/virtio_pci_common.c
> > > index 5a4f750a0b97..9ea319b1d404 100644
> > > --- a/drivers/virtio/virtio_pci_common.c
> > > +++ b/drivers/virtio/virtio_pci_common.c
> > > @@ -255,9 +255,11 @@ static void vp_del_vq(struct virtqueue *vq)
> > >         struct virtio_pci_vq_info *info = vp_dev->vqs[vq->index];
> > >         unsigned long flags;
> > >
> > > -       spin_lock_irqsave(&vp_dev->lock, flags);
> > > -       list_del(&info->node);
> > > -       spin_unlock_irqrestore(&vp_dev->lock, flags);
> > > +       if (!vq->reset) {
> > > +               spin_lock_irqsave(&vp_dev->lock, flags);
> > > +               list_del(&info->node);
> > > +               spin_unlock_irqrestore(&vp_dev->lock, flags);
> > > +       }
> > >
> > >         vp_dev->del_vq(info);
> > >         kfree(info);
> > > diff --git a/drivers/virtio/virtio_pci_modern.c b/drivers/virtio/virtio_pci_modern.c
> > > index bed3e9b84272..7d28f4c36fc2 100644
> > > --- a/drivers/virtio/virtio_pci_modern.c
> > > +++ b/drivers/virtio/virtio_pci_modern.c
> > > @@ -34,6 +34,9 @@ static void vp_transport_features(struct virtio_device *vdev, u64 features)
> > >         if ((features & BIT_ULL(VIRTIO_F_SR_IOV)) &&
> > >                         pci_find_ext_capability(pci_dev, PCI_EXT_CAP_ID_SRIOV))
> > >                 __virtio_set_bit(vdev, VIRTIO_F_SR_IOV);
> > > +
> > > +       if (features & BIT_ULL(VIRTIO_F_RING_RESET))
> > > +               __virtio_set_bit(vdev, VIRTIO_F_RING_RESET);
> > >  }
> > >
> > >  /* virtio config->finalize_features() implementation */
> > > @@ -176,6 +179,59 @@ static void vp_reset(struct virtio_device *vdev)
> > >         vp_disable_cbs(vdev);
> > >  }
> > >
> > > +static int vp_modern_reset_vq(struct virtqueue *vq)
> > > +{
> > > +       struct virtio_pci_device *vp_dev = to_vp_device(vq->vdev);
> > > +       struct virtio_pci_modern_device *mdev = &vp_dev->mdev;
> > > +       struct virtio_pci_vq_info *info;
> > > +       unsigned long flags;
> > > +
> > > +       if (!virtio_has_feature(vq->vdev, VIRTIO_F_RING_RESET))
> > > +               return -ENOENT;
> > > +
> > > +       vp_modern_set_queue_reset(mdev, vq->index);
> > > +
> > > +       info = vp_dev->vqs[vq->index];
> > > +
> >
> > Any reason that we don't need to disable irq here as the previous versions did?
>
> Based on the spec, for the case of one interrupt per queue, there will be no
> more interrupts after the reset queue operation. Whether the interrupt is turned
> off or not has no effect. I turned off the interrupt before just to be safe.

So:

1) CPU0 -> get an interrupt
2) CPU1 -> vp_modern_reset_vq
3) CPU0 -> do_IRQ()

We still need to synchronize with the irq handler in this case?

Thanks
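
For illustration, vp_modern_reset_vq() could synchronize with the handler in
the per-vq MSI-X case roughly as in the sketch below; whether
synchronize_irq() is the right primitive here is exactly the open question,
so this is only a sketch, not a proposed patch.

        vp_modern_set_queue_reset(mdev, vq->index);

        /* Wait for a handler that may already be running on another CPU
         * before the vq is torn down. */
        if (vp_dev->per_vq_vectors && info->msix_vector != VIRTIO_MSI_NO_VECTOR)
                synchronize_irq(pci_irq_vector(vp_dev->pci_dev,
                                               info->msix_vector));

        spin_lock_irqsave(&vp_dev->lock, flags);
        list_del(&info->node);
        spin_unlock_irqrestore(&vp_dev->lock, flags);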

>
> And for irq sharing scenarios, I don't want to turn off shared interrupts for a
> queue.
>
> And the following list_del has been guaranteed to be safe, so I removed the code
> for closing interrupts in the previous version.
>
> Thanks.
>
> >
> >
> > > +       /* delete vq from irq handler */
> > > +       spin_lock_irqsave(&vp_dev->lock, flags);
> > > +       list_del(&info->node);
> > > +       spin_unlock_irqrestore(&vp_dev->lock, flags);
> > > +
> > > +       INIT_LIST_HEAD(&info->node);
> > > +
> > > +       vq->reset = VIRTQUEUE_RESET_STAGE_DEVICE;
> > > +
> > > +       return 0;
> > > +}
> > > +
> > > +static int vp_modern_enable_reset_vq(struct virtqueue *vq)
> > > +{
> > > +       struct virtio_pci_device *vp_dev = to_vp_device(vq->vdev);
> > > +       struct virtio_pci_modern_device *mdev = &vp_dev->mdev;
> > > +       struct virtio_pci_vq_info *info;
> > > +       struct virtqueue *_vq;
> > > +
> > > +       if (vq->reset != VIRTQUEUE_RESET_STAGE_RELEASE)
> > > +               return -EBUSY;
> > > +
> > > +       /* check queue reset status */
> > > +       if (vp_modern_get_queue_reset(mdev, vq->index) != 1)
> > > +               return -EBUSY;
> > > +
> > > +       info = vp_dev->vqs[vq->index];
> > > +       _vq = vp_setup_vq(vq->vdev, vq->index, NULL, NULL, NULL,
> > > +                        info->msix_vector);
> >
> > So we only care about modern devices, which means using vp_setup_vq()
> > with NULL seems tricky.
> >
> > As replied in another thread, I would simply ask the caller to call
> > the vring reallocation helper. See the reply for patch 17.

Right.

Thanks.

> >
> > Thanks
> >
> >
> > > +       if (IS_ERR(_vq)) {
> > > +               vq->reset = VIRTQUEUE_RESET_STAGE_RELEASE;
> > > +               return PTR_ERR(_vq);
> > > +       }
> > > +
> > > +       vp_modern_set_queue_enable(&vp_dev->mdev, vq->index, true);
> > > +
> > > +       return 0;
> > > +}
> > > +
> > >  static u16 vp_config_vector(struct virtio_pci_device *vp_dev, u16 vector)
> > >  {
> > >         return vp_modern_config_vector(&vp_dev->mdev, vector);
> > > @@ -397,6 +453,8 @@ static const struct virtio_config_ops virtio_pci_config_nodev_ops = {
> > >         .set_vq_affinity = vp_set_vq_affinity,
> > >         .get_vq_affinity = vp_get_vq_affinity,
> > >         .get_shm_region  = vp_get_shm_region,
> > > +       .reset_vq        = vp_modern_reset_vq,
> > > +       .enable_reset_vq = vp_modern_enable_reset_vq,
> > >  };
> > >
> > >  static const struct virtio_config_ops virtio_pci_config_ops = {
> > > @@ -415,6 +473,8 @@ static const struct virtio_config_ops virtio_pci_config_ops = {
> > >         .set_vq_affinity = vp_set_vq_affinity,
> > >         .get_vq_affinity = vp_get_vq_affinity,
> > >         .get_shm_region  = vp_get_shm_region,
> > > +       .reset_vq        = vp_modern_reset_vq,
> > > +       .enable_reset_vq = vp_modern_enable_reset_vq,
> > >  };
> > >
> > >  /* the PCI probing function */
> > > --
> > > 2.31.0
> > >
> >
>


^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: [PATCH v5 20/22] virtio_net: set the default max ring num
  2022-02-17  7:21       ` Jason Wang
@ 2022-02-17  9:30         ` Xuan Zhuo
  2022-02-21  3:40           ` Jason Wang
  0 siblings, 1 reply; 42+ messages in thread
From: Xuan Zhuo @ 2022-02-17  9:30 UTC (permalink / raw)
  To: Jason Wang
  Cc: virtualization, netdev, Michael S. Tsirkin, David S. Miller,
	Jakub Kicinski, Alexei Starovoitov, Daniel Borkmann,
	Jesper Dangaard Brouer, John Fastabend, bpf

On Thu, 17 Feb 2022 15:21:26 +0800, Jason Wang <jasowang@redhat.com> wrote:
> On Wed, Feb 16, 2022 at 3:52 PM Xuan Zhuo <xuanzhuo@linux.alibaba.com> wrote:
> >
> > On Wed, 16 Feb 2022 12:14:31 +0800, Jason Wang <jasowang@redhat.com> wrote:
> > > On Mon, Feb 14, 2022 at 4:14 PM Xuan Zhuo <xuanzhuo@linux.alibaba.com> wrote:
> > > >
> > > > Sets the default maximum ring num based on virtio_set_max_ring_num().
> > > >
> > > > The default maximum ring num is 1024.
> > >
> > > Having a default value is pretty useful, I see 32K is used by default for IFCVF.
> > >
> > > Rethink this, how about having a different default value based on the speed?
> > >
> > > Without SPEED_DUPLEX, we use 1024. Otherwise
> > >
> > > 10g 4096
> > > 40g 8192
> >
> > We can define different default values of tx and rx by the way. This way I can
> > just use it in the new interface of find_vqs().
> >
> > without SPEED_DUPLEX:  tx 512 rx 1024
> >
>
> Any reason that TX is smaller than RX?
>

I've seen some NIC drivers with default tx smaller than rx.

One problem I have now is that inside virtnet_probe(), init_vqs() is called
before the speed/duplex is read. I'm not sure: can the logic that reads
speed/duplex be moved before init_vqs()? Is there any risk?

Can you help me?

Thanks.

> Thanks
>
> > Thanks.
> >
> >
> > >
> > > etc.
> > >
> > > (The number are just copied from the 10g/40g default parameter from
> > > other vendors)
> > >
> > > Thanks
> > >
> > > >
> > > > Signed-off-by: Xuan Zhuo <xuanzhuo@linux.alibaba.com>
> > > > ---
> > > >  drivers/net/virtio_net.c | 4 ++++
> > > >  1 file changed, 4 insertions(+)
> > > >
> > > > diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
> > > > index a4ffd7cdf623..77e61fe0b2ce 100644
> > > > --- a/drivers/net/virtio_net.c
> > > > +++ b/drivers/net/virtio_net.c
> > > > @@ -35,6 +35,8 @@ module_param(napi_tx, bool, 0644);
> > > >  #define GOOD_PACKET_LEN (ETH_HLEN + VLAN_HLEN + ETH_DATA_LEN)
> > > >  #define GOOD_COPY_LEN  128
> > > >
> > > > +#define VIRTNET_DEFAULT_MAX_RING_NUM 1024
> > > > +
> > > >  #define VIRTNET_RX_PAD (NET_IP_ALIGN + NET_SKB_PAD)
> > > >
> > > >  /* Amount of XDP headroom to prepend to packets for use by xdp_adjust_head */
> > > > @@ -3045,6 +3047,8 @@ static int virtnet_find_vqs(struct virtnet_info *vi)
> > > >                         ctx[rxq2vq(i)] = true;
> > > >         }
> > > >
> > > > +       virtio_set_max_ring_num(vi->vdev, VIRTNET_DEFAULT_MAX_RING_NUM);
> > > > +
> > > >         ret = virtio_find_vqs_ctx(vi->vdev, total_vqs, vqs, callbacks,
> > > >                                   names, ctx, NULL);
> > > >         if (ret)
> > > > --
> > > > 2.31.0
> > > >
> > >
> >
>

^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: [PATCH v5 20/22] virtio_net: set the default max ring num
  2022-02-17  9:30         ` Xuan Zhuo
@ 2022-02-21  3:40           ` Jason Wang
  2022-02-21  7:00             ` Jason Wang
  0 siblings, 1 reply; 42+ messages in thread
From: Jason Wang @ 2022-02-21  3:40 UTC (permalink / raw)
  To: Xuan Zhuo
  Cc: virtualization, netdev, Michael S. Tsirkin, David S. Miller,
	Jakub Kicinski, Alexei Starovoitov, Daniel Borkmann,
	Jesper Dangaard Brouer, John Fastabend, bpf


On 2022/2/17 5:30 PM, Xuan Zhuo wrote:
> On Thu, 17 Feb 2022 15:21:26 +0800, Jason Wang <jasowang@redhat.com> wrote:
>> On Wed, Feb 16, 2022 at 3:52 PM Xuan Zhuo <xuanzhuo@linux.alibaba.com> wrote:
>>> On Wed, 16 Feb 2022 12:14:31 +0800, Jason Wang <jasowang@redhat.com> wrote:
>>>> On Mon, Feb 14, 2022 at 4:14 PM Xuan Zhuo <xuanzhuo@linux.alibaba.com> wrote:
>>>>> Sets the default maximum ring num based on virtio_set_max_ring_num().
>>>>>
>>>>> The default maximum ring num is 1024.
>>>> Having a default value is pretty useful, I see 32K is used by default for IFCVF.
>>>>
>>>> Rethink this, how about having a different default value based on the speed?
>>>>
>>>> Without SPEED_DUPLEX, we use 1024. Otherwise
>>>>
>>>> 10g 4096
>>>> 40g 8192
>>> We can define different default values of tx and rx by the way. This way I can
>>> just use it in the new interface of find_vqs().
>>>
>>> without SPEED_DUPLEX:  tx 512 rx 1024
>>>
>> Any reason that TX is smaller than RX?
>>
> I've seen some NIC drivers with default tx smaller than rx.


Interesting, do they use combined channels?


>
> One problem I have now is that inside virtnet_probe(), init_vqs() is called
> before the speed/duplex is read. I'm not sure: can the logic that reads
> speed/duplex be moved before init_vqs()? Is there any risk?
>
> Can you help me?


The feature has been negotiated during probe(), so I don't see any risk.
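
Something like the following is what I had in mind (just a sketch; the helper
name and the exact thresholds are illustrative, not from this series):

/* Sketch only: pick the default max ring num from the negotiated link
 * speed (1024 without SPEED_DUPLEX, 4096 for 10G, 8192 for 40G).
 */
static u32 virtnet_default_max_ring_num(struct virtnet_info *vi)
{
	if (!virtio_has_feature(vi->vdev, VIRTIO_NET_F_SPEED_DUPLEX) ||
	    vi->speed == SPEED_UNKNOWN)
		return 1024;

	if (vi->speed >= SPEED_40000)
		return 8192;
	if (vi->speed >= SPEED_10000)
		return 4096;

	return 1024;
}

The result would then be what virtnet_find_vqs() passes to
virtio_set_max_ring_num() instead of the fixed VIRTNET_DEFAULT_MAX_RING_NUM.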

Thanks


>
> Thanks.
>
>> Thanks
>>
>>> Thanks.
>>>
>>>
>>>> etc.
>>>>
>>>> (The number are just copied from the 10g/40g default parameter from
>>>> other vendors)
>>>>
>>>> Thanks
>>>>
>>>>> Signed-off-by: Xuan Zhuo <xuanzhuo@linux.alibaba.com>
>>>>> ---
>>>>>   drivers/net/virtio_net.c | 4 ++++
>>>>>   1 file changed, 4 insertions(+)
>>>>>
>>>>> diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
>>>>> index a4ffd7cdf623..77e61fe0b2ce 100644
>>>>> --- a/drivers/net/virtio_net.c
>>>>> +++ b/drivers/net/virtio_net.c
>>>>> @@ -35,6 +35,8 @@ module_param(napi_tx, bool, 0644);
>>>>>   #define GOOD_PACKET_LEN (ETH_HLEN + VLAN_HLEN + ETH_DATA_LEN)
>>>>>   #define GOOD_COPY_LEN  128
>>>>>
>>>>> +#define VIRTNET_DEFAULT_MAX_RING_NUM 1024
>>>>> +
>>>>>   #define VIRTNET_RX_PAD (NET_IP_ALIGN + NET_SKB_PAD)
>>>>>
>>>>>   /* Amount of XDP headroom to prepend to packets for use by xdp_adjust_head */
>>>>> @@ -3045,6 +3047,8 @@ static int virtnet_find_vqs(struct virtnet_info *vi)
>>>>>                          ctx[rxq2vq(i)] = true;
>>>>>          }
>>>>>
>>>>> +       virtio_set_max_ring_num(vi->vdev, VIRTNET_DEFAULT_MAX_RING_NUM);
>>>>> +
>>>>>          ret = virtio_find_vqs_ctx(vi->vdev, total_vqs, vqs, callbacks,
>>>>>                                    names, ctx, NULL);
>>>>>          if (ret)
>>>>> --
>>>>> 2.31.0
>>>>>



* Re: [PATCH v5 20/22] virtio_net: set the default max ring num
  2022-02-21  3:40           ` Jason Wang
@ 2022-02-21  7:00             ` Jason Wang
  0 siblings, 0 replies; 42+ messages in thread
From: Jason Wang @ 2022-02-21  7:00 UTC (permalink / raw)
  To: Xuan Zhuo
  Cc: virtualization, netdev, Michael S. Tsirkin, David S. Miller,
	Jakub Kicinski, Alexei Starovoitov, Daniel Borkmann,
	Jesper Dangaard Brouer, John Fastabend, bpf, Zhu, Lingshan


On 2022/2/21 11:40 AM, Jason Wang wrote:
>
> On 2022/2/17 5:30 PM, Xuan Zhuo wrote:
>> On Thu, 17 Feb 2022 15:21:26 +0800, Jason Wang <jasowang@redhat.com> 
>> wrote:
>>> On Wed, Feb 16, 2022 at 3:52 PM Xuan Zhuo 
>>> <xuanzhuo@linux.alibaba.com> wrote:
>>>> On Wed, 16 Feb 2022 12:14:31 +0800, Jason Wang 
>>>> <jasowang@redhat.com> wrote:
>>>>> On Mon, Feb 14, 2022 at 4:14 PM Xuan Zhuo 
>>>>> <xuanzhuo@linux.alibaba.com> wrote:
>>>>>> Sets the default maximum ring num based on 
>>>>>> virtio_set_max_ring_num().
>>>>>>
>>>>>> The default maximum ring num is 1024.
>>>>> Having a default value is pretty useful, I see 32K is used by 
>>>>> default for IFCVF.
>>>>>
>>>>> Rethink this, how about having a different default value based on 
>>>>> the speed?
>>>>>
>>>>> Without SPEED_DUPLEX, we use 1024. Otherwise
>>>>>
>>>>> 10g 4096
>>>>> 40g 8192
>>>> We can define different default values of tx and rx by the way. 
>>>> This way I can
>>>> just use it in the new interface of find_vqs().
>>>>
>>>> without SPEED_DUPLEX:  tx 512 rx 1024
>>>>
>>> Any reason that TX is smaller than RX?
>>>
>> I've seen some NIC drivers with default tx smaller than rx.
>
>
> Interesting, do they use combined channels?


Adding Ling Shan.

I see 32K is used for IFCVF by default; this is another argument for this
patch:

# ethtool -g eth0
Ring parameters for eth0:
Pre-set maximums:
RX:        32768
RX Mini:    0
RX Jumbo:    0
TX:        32768
Current hardware settings:
RX:        32768
RX Mini:    0
RX Jumbo:    0
TX:        32768
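
As a side note, this is also how the get_ringparam side could report those
numbers once virtqueue_get_vring_max_size() (patch 18/22) is in place. Just a
sketch, using the older two-argument ethtool get_ringparam prototype for
brevity (recent kernels add kernel_ethtool_ringparam and extack parameters):

/* Sketch only: map "Pre-set maximums" to the new
 * virtqueue_get_vring_max_size() helper and "Current hardware settings"
 * to the existing virtqueue_get_vring_size().
 */
static void virtnet_get_ringparam_sketch(struct net_device *dev,
					 struct ethtool_ringparam *ring)
{
	struct virtnet_info *vi = netdev_priv(dev);

	ring->rx_max_pending = virtqueue_get_vring_max_size(vi->rq[0].vq);
	ring->tx_max_pending = virtqueue_get_vring_max_size(vi->sq[0].vq);
	ring->rx_pending = virtqueue_get_vring_size(vi->rq[0].vq);
	ring->tx_pending = virtqueue_get_vring_size(vi->sq[0].vq);
}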

Thanks


>
>
>>
>> One problem I have now is that inside virtnet_probe, init_vqs is 
>> before getting
>> speed/duplex. I'm not sure, can the logic to get speed/duplex be put 
>> before
>> init_vqs? Is there any risk?
>>
>> Can you help me?
>
>
> The feature has been negotiated during probe(), so I don't see any risk.
>
> Thanks
>
>
>>
>> Thanks.
>>
>>> Thanks
>>>
>>>> Thanks.
>>>>
>>>>
>>>>> etc.
>>>>>
>>>>> (The number are just copied from the 10g/40g default parameter from
>>>>> other vendors)
>>>>>
>>>>> Thanks
>>>>>
>>>>>> Signed-off-by: Xuan Zhuo <xuanzhuo@linux.alibaba.com>
>>>>>> ---
>>>>>>   drivers/net/virtio_net.c | 4 ++++
>>>>>>   1 file changed, 4 insertions(+)
>>>>>>
>>>>>> diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
>>>>>> index a4ffd7cdf623..77e61fe0b2ce 100644
>>>>>> --- a/drivers/net/virtio_net.c
>>>>>> +++ b/drivers/net/virtio_net.c
>>>>>> @@ -35,6 +35,8 @@ module_param(napi_tx, bool, 0644);
>>>>>>   #define GOOD_PACKET_LEN (ETH_HLEN + VLAN_HLEN + ETH_DATA_LEN)
>>>>>>   #define GOOD_COPY_LEN  128
>>>>>>
>>>>>> +#define VIRTNET_DEFAULT_MAX_RING_NUM 1024
>>>>>> +
>>>>>>   #define VIRTNET_RX_PAD (NET_IP_ALIGN + NET_SKB_PAD)
>>>>>>
>>>>>>   /* Amount of XDP headroom to prepend to packets for use by 
>>>>>> xdp_adjust_head */
>>>>>> @@ -3045,6 +3047,8 @@ static int virtnet_find_vqs(struct 
>>>>>> virtnet_info *vi)
>>>>>>                          ctx[rxq2vq(i)] = true;
>>>>>>          }
>>>>>>
>>>>>> +       virtio_set_max_ring_num(vi->vdev, 
>>>>>> VIRTNET_DEFAULT_MAX_RING_NUM);
>>>>>> +
>>>>>>          ret = virtio_find_vqs_ctx(vi->vdev, total_vqs, vqs, 
>>>>>> callbacks,
>>>>>>                                    names, ctx, NULL);
>>>>>>          if (ret)
>>>>>> -- 
>>>>>> 2.31.0
>>>>>>



end of thread, other threads:[~2022-02-21  7:00 UTC | newest]

Thread overview: 42+ messages
2022-02-14  8:13 [PATCH v5 00/22] virtio pci support VIRTIO_F_RING_RESET Xuan Zhuo
2022-02-14  8:13 ` [PATCH v5 01/22] virtio_pci: struct virtio_pci_common_cfg add queue_notify_data Xuan Zhuo
2022-02-14  8:13 ` [PATCH v5 02/22] virtio: queue_reset: add VIRTIO_F_RING_RESET Xuan Zhuo
2022-02-14  8:13 ` [PATCH v5 03/22] virtio_ring: queue_reset: add function vring_setup_virtqueue() Xuan Zhuo
2022-02-14  8:13 ` [PATCH v5 04/22] virtio_ring: queue_reset: split: add __vring_init_virtqueue() Xuan Zhuo
2022-02-14  8:13 ` [PATCH v5 05/22] virtio_ring: queue_reset: split: support enable reset queue Xuan Zhuo
2022-02-14  8:14 ` [PATCH v5 06/22] virtio_ring: queue_reset: packed: " Xuan Zhuo
2022-02-16  4:14   ` Jason Wang
2022-02-14  8:14 ` [PATCH v5 07/22] virtio_ring: queue_reset: extract the release function of the vq ring Xuan Zhuo
2022-02-14  8:14 ` [PATCH v5 08/22] virtio_ring: queue_reset: add vring_release_virtqueue() Xuan Zhuo
2022-02-16  4:14   ` Jason Wang
2022-02-14  8:14 ` [PATCH v5 09/22] virtio: queue_reset: struct virtio_config_ops add callbacks for queue_reset Xuan Zhuo
2022-02-14  8:14 ` [PATCH v5 10/22] virtio_pci: queue_reset: update struct virtio_pci_common_cfg and option functions Xuan Zhuo
2022-02-14  8:14 ` [PATCH v5 11/22] virtio_pci: queue_reset: release vq by vp_dev->vqs Xuan Zhuo
2022-02-14  8:14 ` [PATCH v5 12/22] virtio_pci: queue_reset: setup_vq() support vring_setup_virtqueue() Xuan Zhuo
2022-02-14  8:14 ` [PATCH v5 13/22] virtio_pci: queue_reset: reserve vq->priv for re-enable queue Xuan Zhuo
2022-02-14  8:14 ` [PATCH v5 14/22] virtio_pci: queue_reset: support VIRTIO_F_RING_RESET Xuan Zhuo
2022-02-16  4:14   ` Jason Wang
2022-02-16  8:03     ` Xuan Zhuo
2022-02-17  7:25       ` Jason Wang
2022-02-14  8:14 ` [PATCH v5 15/22] virtio: queue_reset: add helper Xuan Zhuo
2022-02-14  8:14 ` [PATCH v5 16/22] virtio_net: split free_unused_bufs() Xuan Zhuo
2022-02-14  8:14 ` [PATCH v5 17/22] virtio_net: support rx/tx queue reset Xuan Zhuo
2022-02-16  4:14   ` Jason Wang
2022-02-16  7:56     ` Xuan Zhuo
2022-02-16  8:35       ` Michael S. Tsirkin
2022-02-16  8:42         ` Xuan Zhuo
2022-02-14  8:14 ` [PATCH v5 18/22] virtio: add helper virtqueue_get_vring_max_size() Xuan Zhuo
2022-02-14  8:14 ` [PATCH v5 19/22] virtio: add helper virtio_set_max_ring_num() Xuan Zhuo
2022-02-16  4:14   ` Jason Wang
2022-02-16  7:54     ` Xuan Zhuo
2022-02-14  8:14 ` [PATCH v5 20/22] virtio_net: set the default max ring num Xuan Zhuo
2022-02-16  4:14   ` Jason Wang
2022-02-16  7:46     ` Xuan Zhuo
2022-02-17  7:21       ` Jason Wang
2022-02-17  9:30         ` Xuan Zhuo
2022-02-21  3:40           ` Jason Wang
2022-02-21  7:00             ` Jason Wang
2022-02-14  8:14 ` [PATCH v5 21/22] virtio_net: get max ring size by virtqueue_get_vring_max_size() Xuan Zhuo
2022-02-14  8:14 ` [PATCH v5 22/22] virtio_net: support set_ringparam Xuan Zhuo
2022-02-16  4:14   ` Jason Wang
2022-02-16  7:21     ` Xuan Zhuo
