* [PATCH net-next v5 0/4] virtio-net: support dynamic coalescing moderation
From: Heng Qi @ 2023-11-27  2:55 UTC
  To: virtualization, netdev
  Cc: jasowang, mst, kuba, edumazet, pabeni, davem, hawk,
	john.fastabend, ast, horms, xuanzhuo, yinjun.zhang

virtio-net already supports per-queue moderation parameter settings.
Building on this, we use the Linux dimlib to support dynamic
coalescing moderation for virtio-net.

Due to scheduling constraints, we only support and test rx dim
for now.
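
For reference, the core dimlib pattern that rx dim follows is sketched
below (a minimal sketch; rq->calls and the stats fields are the ones
added in patch 4):

	/* Feed one sample per completed napi poll; dimlib schedules
	 * rq->dim.work when it decides to switch moderation profiles.
	 */
	struct dim_sample sample = {};

	dim_update_sample(rq->calls, /* rx notifications seen so far */
			  u64_stats_read(&rq->stats.packets),
			  u64_stats_read(&rq->stats.bytes),
			  &sample);
	net_dim(&rq->dim, sample);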

Some test results:

I. Sockperf UDP
=================================================
1. Env
rxq_0 with affinity to cpu_0.

2. Cmd
client: taskset -c 0 sockperf tp -p 8989 -i $IP -t 10 -m 16B
server: taskset -c 0 sockperf sr -p 8989

3. Result
dim off: 1143277.00 rxpps, throughput 17.844 MBps, CPU usage is 100%.
dim on:  1124161.00 rxpps, throughput 17.610 MBps, CPU usage is 83.5%.
=================================================

II. Redis
=================================================
1. Env
There are 8 rxqs, and rxq_i with affinity to cpu_i.

2. Result
When all CPUs are at 100%, the ops/sec of the memtier_benchmark client is:
dim off:  978437.23
dim on:  1143638.28
=================================================

III. Nginx
=================================================
1. Env
There are 8 rxqs and rxq_i with affinity to cpu_i.

2. Result
When all CPUs are at 100%, the requests/sec of the wrk client is:
dim off:  877931.67
dim on:  1019160.31
=================================================

IV. Latency of sockperf udp
=================================================
1. Rx cmd
taskset -c 0 sockperf sr -p 8989

2. Tx cmd
taskset -c 0 sockperf pp -i ${ip} -p 8989 -t 10

The results below are averaged over 5 runs of this command.

3. Result
dim off: 17.7735 usec
dim on:  18.0110 usec
=================================================

Changelog:
v4->v5:
- Patch(4/4):
   - Fix possible synchronization issues by using cancel_work() instead
     of cancel_work_sync().
   - Reduce if/else nesting levels.

v3->v4:
- Patch(5/5): drop.

v2->v3:
- Patch(4/5): some minor modifications.

v1->v2:
- Patch(2/5): a minor fix.
- Patch(4/5):
   - Improve the judgment of dim switch conditions.
   - Cancel the work when the vq is reset.
- Patch(5/5): drop the tx dim implementation.

Heng Qi (4):
  virtio-net: returns whether napi is complete
  virtio-net: separate rx/tx coalescing moderation cmds
  virtio-net: extract virtqueue coalescing cmd for reuse
  virtio-net: support rx netdim

 drivers/net/virtio_net.c | 287 +++++++++++++++++++++++++++++++++------
 1 file changed, 242 insertions(+), 45 deletions(-)

-- 
2.19.1.6.gb485710b


* [PATCH net-next v5 1/4] virtio-net: returns whether napi is complete
From: Heng Qi @ 2023-11-27  2:55 UTC
  To: virtualization, netdev
  Cc: jasowang, mst, kuba, edumazet, pabeni, davem, hawk,
	john.fastabend, ast, horms, xuanzhuo, yinjun.zhang

rx netdim needs to count the traffic during a complete napi process,
and only after the napi ends should it update and compare samples to
make decisions. Let virtqueue_napi_complete() return true if napi is
complete, and false otherwise.
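
A caller can then gate end-of-poll work on the return value; a sketch of
how patch 4 in this series consumes it:

	if (received < budget &&
	    virtqueue_napi_complete(napi, rq->vq, received) &&
	    rq->dim_enabled)
		/* napi really completed and was not rescheduled */
		virtnet_rx_dim_update(vi, rq);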

Signed-off-by: Heng Qi <hengqi@linux.alibaba.com>
Acked-by: Jason Wang <jasowang@redhat.com>
---
 drivers/net/virtio_net.c | 6 +++++-
 1 file changed, 5 insertions(+), 1 deletion(-)

diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
index d16f592c2061..0ad2894e6a5e 100644
--- a/drivers/net/virtio_net.c
+++ b/drivers/net/virtio_net.c
@@ -431,7 +431,7 @@ static void virtqueue_napi_schedule(struct napi_struct *napi,
 	}
 }
 
-static void virtqueue_napi_complete(struct napi_struct *napi,
+static bool virtqueue_napi_complete(struct napi_struct *napi,
 				    struct virtqueue *vq, int processed)
 {
 	int opaque;
@@ -440,9 +440,13 @@ static void virtqueue_napi_complete(struct napi_struct *napi,
 	if (napi_complete_done(napi, processed)) {
 		if (unlikely(virtqueue_poll(vq, opaque)))
 			virtqueue_napi_schedule(napi, vq);
+		else
+			return true;
 	} else {
 		virtqueue_disable_cb(vq);
 	}
+
+	return false;
 }
 
 static void skb_xmit_done(struct virtqueue *vq)
-- 
2.19.1.6.gb485710b


* [PATCH net-next v5 2/4] virtio-net: separate rx/tx coalescing moderation cmds
From: Heng Qi @ 2023-11-27  2:55 UTC
  To: virtualization, netdev
  Cc: jasowang, mst, kuba, edumazet, pabeni, davem, hawk,
	john.fastabend, ast, horms, xuanzhuo, yinjun.zhang

This patch separates the rx and tx global coalescing moderation
commands to support netdim switches in subsequent patches.

Signed-off-by: Heng Qi <hengqi@linux.alibaba.com>
Acked-by: Jason Wang <jasowang@redhat.com>
---
 drivers/net/virtio_net.c | 31 ++++++++++++++++++++++++++++---
 1 file changed, 28 insertions(+), 3 deletions(-)

diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
index 0ad2894e6a5e..0285301caf78 100644
--- a/drivers/net/virtio_net.c
+++ b/drivers/net/virtio_net.c
@@ -3266,10 +3266,10 @@ static int virtnet_get_link_ksettings(struct net_device *dev,
 	return 0;
 }
 
-static int virtnet_send_notf_coal_cmds(struct virtnet_info *vi,
-				       struct ethtool_coalesce *ec)
+static int virtnet_send_tx_notf_coal_cmds(struct virtnet_info *vi,
+					  struct ethtool_coalesce *ec)
 {
-	struct scatterlist sgs_tx, sgs_rx;
+	struct scatterlist sgs_tx;
 	int i;
 
 	vi->ctrl->coal_tx.tx_usecs = cpu_to_le32(ec->tx_coalesce_usecs);
@@ -3289,6 +3289,15 @@ static int virtnet_send_notf_coal_cmds(struct virtnet_info *vi,
 		vi->sq[i].intr_coal.max_packets = ec->tx_max_coalesced_frames;
 	}
 
+	return 0;
+}
+
+static int virtnet_send_rx_notf_coal_cmds(struct virtnet_info *vi,
+					  struct ethtool_coalesce *ec)
+{
+	struct scatterlist sgs_rx;
+	int i;
+
 	vi->ctrl->coal_rx.rx_usecs = cpu_to_le32(ec->rx_coalesce_usecs);
 	vi->ctrl->coal_rx.rx_max_packets = cpu_to_le32(ec->rx_max_coalesced_frames);
 	sg_init_one(&sgs_rx, &vi->ctrl->coal_rx, sizeof(vi->ctrl->coal_rx));
@@ -3309,6 +3318,22 @@ static int virtnet_send_notf_coal_cmds(struct virtnet_info *vi,
 	return 0;
 }
 
+static int virtnet_send_notf_coal_cmds(struct virtnet_info *vi,
+				       struct ethtool_coalesce *ec)
+{
+	int err;
+
+	err = virtnet_send_tx_notf_coal_cmds(vi, ec);
+	if (err)
+		return err;
+
+	err = virtnet_send_rx_notf_coal_cmds(vi, ec);
+	if (err)
+		return err;
+
+	return 0;
+}
+
 static int virtnet_send_ctrl_coal_vq_cmd(struct virtnet_info *vi,
 					 u16 vqn, u32 max_usecs, u32 max_packets)
 {
-- 
2.19.1.6.gb485710b


* [PATCH net-next v5 3/4] virtio-net: extract virtqueue coalescing cmd for reuse
From: Heng Qi @ 2023-11-27  2:55 UTC
  To: virtualization, netdev
  Cc: jasowang, mst, kuba, edumazet, pabeni, davem, hawk,
	john.fastabend, ast, horms, xuanzhuo, yinjun.zhang

Extract the commands that set virtqueue coalescing parameters so they
can be reused by ethtool -Q, vq resize and netdim.
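
The resulting helpers give all three users one call shape (sketch; qnum,
usecs and pkts stand for the caller's values):

	/* rx side: sends the ctrl cmd and caches the values in vi->rq[] */
	err = virtnet_send_rx_ctrl_coal_vq_cmd(vi, qnum, usecs, pkts);
	/* tx side: same, caching in vi->sq[] */
	err = virtnet_send_tx_ctrl_coal_vq_cmd(vi, qnum, usecs, pkts);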

Signed-off-by: Heng Qi <hengqi@linux.alibaba.com>
Acked-by: Jason Wang <jasowang@redhat.com>
---
 drivers/net/virtio_net.c | 106 +++++++++++++++++++++++----------------
 1 file changed, 64 insertions(+), 42 deletions(-)

diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
index 0285301caf78..69fe09e99b3c 100644
--- a/drivers/net/virtio_net.c
+++ b/drivers/net/virtio_net.c
@@ -2849,6 +2849,58 @@ static void virtnet_cpu_notif_remove(struct virtnet_info *vi)
 					    &vi->node_dead);
 }
 
+static int virtnet_send_ctrl_coal_vq_cmd(struct virtnet_info *vi,
+					 u16 vqn, u32 max_usecs, u32 max_packets)
+{
+	struct scatterlist sgs;
+
+	vi->ctrl->coal_vq.vqn = cpu_to_le16(vqn);
+	vi->ctrl->coal_vq.coal.max_usecs = cpu_to_le32(max_usecs);
+	vi->ctrl->coal_vq.coal.max_packets = cpu_to_le32(max_packets);
+	sg_init_one(&sgs, &vi->ctrl->coal_vq, sizeof(vi->ctrl->coal_vq));
+
+	if (!virtnet_send_command(vi, VIRTIO_NET_CTRL_NOTF_COAL,
+				  VIRTIO_NET_CTRL_NOTF_COAL_VQ_SET,
+				  &sgs))
+		return -EINVAL;
+
+	return 0;
+}
+
+static int virtnet_send_rx_ctrl_coal_vq_cmd(struct virtnet_info *vi,
+					    u16 queue, u32 max_usecs,
+					    u32 max_packets)
+{
+	int err;
+
+	err = virtnet_send_ctrl_coal_vq_cmd(vi, rxq2vq(queue),
+					    max_usecs, max_packets);
+	if (err)
+		return err;
+
+	vi->rq[queue].intr_coal.max_usecs = max_usecs;
+	vi->rq[queue].intr_coal.max_packets = max_packets;
+
+	return 0;
+}
+
+static int virtnet_send_tx_ctrl_coal_vq_cmd(struct virtnet_info *vi,
+					    u16 queue, u32 max_usecs,
+					    u32 max_packets)
+{
+	int err;
+
+	err = virtnet_send_ctrl_coal_vq_cmd(vi, txq2vq(queue),
+					    max_usecs, max_packets);
+	if (err)
+		return err;
+
+	vi->sq[queue].intr_coal.max_usecs = max_usecs;
+	vi->sq[queue].intr_coal.max_packets = max_packets;
+
+	return 0;
+}
+
 static void virtnet_get_ringparam(struct net_device *dev,
 				  struct ethtool_ringparam *ring,
 				  struct kernel_ethtool_ringparam *kernel_ring,
@@ -2906,14 +2958,11 @@ static int virtnet_set_ringparam(struct net_device *dev,
 			 * through the VIRTIO_NET_CTRL_NOTF_COAL_TX_SET command, or, if the driver
 			 * did not set any TX coalescing parameters, to 0.
 			 */
-			err = virtnet_send_ctrl_coal_vq_cmd(vi, txq2vq(i),
-							    vi->intr_coal_tx.max_usecs,
-							    vi->intr_coal_tx.max_packets);
+			err = virtnet_send_tx_ctrl_coal_vq_cmd(vi, i,
+							       vi->intr_coal_tx.max_usecs,
+							       vi->intr_coal_tx.max_packets);
 			if (err)
 				return err;
-
-			vi->sq[i].intr_coal.max_usecs = vi->intr_coal_tx.max_usecs;
-			vi->sq[i].intr_coal.max_packets = vi->intr_coal_tx.max_packets;
 		}
 
 		if (ring->rx_pending != rx_pending) {
@@ -2922,14 +2971,11 @@ static int virtnet_set_ringparam(struct net_device *dev,
 				return err;
 
 			/* The reason is same as the transmit virtqueue reset */
-			err = virtnet_send_ctrl_coal_vq_cmd(vi, rxq2vq(i),
-							    vi->intr_coal_rx.max_usecs,
-							    vi->intr_coal_rx.max_packets);
+			err = virtnet_send_rx_ctrl_coal_vq_cmd(vi, i,
+							       vi->intr_coal_rx.max_usecs,
+							       vi->intr_coal_rx.max_packets);
 			if (err)
 				return err;
-
-			vi->rq[i].intr_coal.max_usecs = vi->intr_coal_rx.max_usecs;
-			vi->rq[i].intr_coal.max_packets = vi->intr_coal_rx.max_packets;
 		}
 	}
 
@@ -3334,48 +3380,24 @@ static int virtnet_send_notf_coal_cmds(struct virtnet_info *vi,
 	return 0;
 }
 
-static int virtnet_send_ctrl_coal_vq_cmd(struct virtnet_info *vi,
-					 u16 vqn, u32 max_usecs, u32 max_packets)
-{
-	struct scatterlist sgs;
-
-	vi->ctrl->coal_vq.vqn = cpu_to_le16(vqn);
-	vi->ctrl->coal_vq.coal.max_usecs = cpu_to_le32(max_usecs);
-	vi->ctrl->coal_vq.coal.max_packets = cpu_to_le32(max_packets);
-	sg_init_one(&sgs, &vi->ctrl->coal_vq, sizeof(vi->ctrl->coal_vq));
-
-	if (!virtnet_send_command(vi, VIRTIO_NET_CTRL_NOTF_COAL,
-				  VIRTIO_NET_CTRL_NOTF_COAL_VQ_SET,
-				  &sgs))
-		return -EINVAL;
-
-	return 0;
-}
-
 static int virtnet_send_notf_coal_vq_cmds(struct virtnet_info *vi,
 					  struct ethtool_coalesce *ec,
 					  u16 queue)
 {
 	int err;
 
-	err = virtnet_send_ctrl_coal_vq_cmd(vi, rxq2vq(queue),
-					    ec->rx_coalesce_usecs,
-					    ec->rx_max_coalesced_frames);
+	err = virtnet_send_rx_ctrl_coal_vq_cmd(vi, queue,
+					       ec->rx_coalesce_usecs,
+					       ec->rx_max_coalesced_frames);
 	if (err)
 		return err;
 
-	vi->rq[queue].intr_coal.max_usecs = ec->rx_coalesce_usecs;
-	vi->rq[queue].intr_coal.max_packets = ec->rx_max_coalesced_frames;
-
-	err = virtnet_send_ctrl_coal_vq_cmd(vi, txq2vq(queue),
-					    ec->tx_coalesce_usecs,
-					    ec->tx_max_coalesced_frames);
+	err = virtnet_send_tx_ctrl_coal_vq_cmd(vi, queue,
+					       ec->tx_coalesce_usecs,
+					       ec->tx_max_coalesced_frames);
 	if (err)
 		return err;
 
-	vi->sq[queue].intr_coal.max_usecs = ec->tx_coalesce_usecs;
-	vi->sq[queue].intr_coal.max_packets = ec->tx_max_coalesced_frames;
-
 	return 0;
 }
 
-- 
2.19.1.6.gb485710b


* [PATCH net-next v5 4/4] virtio-net: support rx netdim
From: Heng Qi @ 2023-11-27  2:55 UTC
  To: virtualization, netdev
  Cc: jasowang, mst, kuba, edumazet, pabeni, davem, hawk,
	john.fastabend, ast, horms, xuanzhuo, yinjun.zhang

By comparing the traffic information across complete napi processes,
let the virtio-net driver automatically adjust the coalescing
moderation parameters of each receive queue.
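
Once applied, rx dim is controlled through the standard ethtool
adaptive-rx knob (eth0 is a placeholder), e.g.:

	ethtool -C eth0 adaptive-rx on                              # all rx queues
	ethtool --per-queue eth0 queue_mask 0x1 --coalesce adaptive-rx on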

Signed-off-by: Heng Qi <hengqi@linux.alibaba.com>
---
v4->v5:
- Fix possible synchronization issues by using cancel_work().
- Reduce if/else nesting levels.

v2->v3:
- Some minor modifications.

v1->v2:
- Improved the judgment of dim switch conditions.
- Cancel the work when the vq is reset.

 drivers/net/virtio_net.c | 166 ++++++++++++++++++++++++++++++++++++---
 1 file changed, 156 insertions(+), 10 deletions(-)

diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
index 69fe09e99b3c..d93a0d287d49 100644
--- a/drivers/net/virtio_net.c
+++ b/drivers/net/virtio_net.c
@@ -19,6 +19,7 @@
 #include <linux/average.h>
 #include <linux/filter.h>
 #include <linux/kernel.h>
+#include <linux/dim.h>
 #include <net/route.h>
 #include <net/xdp.h>
 #include <net/net_failover.h>
@@ -172,6 +173,17 @@ struct receive_queue {
 
 	struct virtnet_rq_stats stats;
 
+	/* The number of rx notifications */
+	u16 calls;
+
+	/* Is dynamic interrupt moderation enabled? */
+	bool dim_enabled;
+
+	/* Dynamic Interrupt Moderation */
+	struct dim dim;
+
+	u32 packets_in_napi;
+
 	struct virtnet_interrupt_coalesce intr_coal;
 
 	/* Chain pages by the private ptr. */
@@ -305,6 +317,9 @@ struct virtnet_info {
 	u8 duplex;
 	u32 speed;
 
+	/* Is rx dynamic interrupt moderation enabled? */
+	bool rx_dim_enabled;
+
 	/* Interrupt coalescing settings */
 	struct virtnet_interrupt_coalesce intr_coal_tx;
 	struct virtnet_interrupt_coalesce intr_coal_rx;
@@ -2001,6 +2016,7 @@ static void skb_recv_done(struct virtqueue *rvq)
 	struct virtnet_info *vi = rvq->vdev->priv;
 	struct receive_queue *rq = &vi->rq[vq2rxq(rvq)];
 
+	rq->calls++;
 	virtqueue_napi_schedule(&rq->napi, rvq);
 }
 
@@ -2141,6 +2157,26 @@ static void virtnet_poll_cleantx(struct receive_queue *rq)
 	}
 }
 
+static void virtnet_rx_dim_work(struct work_struct *work);
+
+static void virtnet_rx_dim_update(struct virtnet_info *vi, struct receive_queue *rq)
+{
+	struct dim_sample cur_sample = {};
+
+	if (!rq->packets_in_napi)
+		return;
+
+	u64_stats_update_begin(&rq->stats.syncp);
+	dim_update_sample(rq->calls,
+			  u64_stats_read(&rq->stats.packets),
+			  u64_stats_read(&rq->stats.bytes),
+			  &cur_sample);
+	u64_stats_update_end(&rq->stats.syncp);
+
+	net_dim(&rq->dim, cur_sample);
+	rq->packets_in_napi = 0;
+}
+
 static int virtnet_poll(struct napi_struct *napi, int budget)
 {
 	struct receive_queue *rq =
@@ -2149,17 +2185,22 @@ static int virtnet_poll(struct napi_struct *napi, int budget)
 	struct send_queue *sq;
 	unsigned int received;
 	unsigned int xdp_xmit = 0;
+	bool napi_complete;
 
 	virtnet_poll_cleantx(rq);
 
 	received = virtnet_receive(rq, budget, &xdp_xmit);
+	rq->packets_in_napi += received;
 
 	if (xdp_xmit & VIRTIO_XDP_REDIR)
 		xdp_do_flush();
 
 	/* Out of packets? */
-	if (received < budget)
-		virtqueue_napi_complete(napi, rq->vq, received);
+	if (received < budget) {
+		napi_complete = virtqueue_napi_complete(napi, rq->vq, received);
+		if (napi_complete && rq->dim_enabled)
+			virtnet_rx_dim_update(vi, rq);
+	}
 
 	if (xdp_xmit & VIRTIO_XDP_TX) {
 		sq = virtnet_xdp_get_sq(vi);
@@ -2230,8 +2271,11 @@ static int virtnet_open(struct net_device *dev)
 	disable_delayed_refill(vi);
 	cancel_delayed_work_sync(&vi->refill);
 
-	for (i--; i >= 0; i--)
+	for (i--; i >= 0; i--) {
 		virtnet_disable_queue_pair(vi, i);
+		cancel_work(&vi->rq[i].dim.work);
+	}
+
 	return err;
 }
 
@@ -2393,8 +2437,10 @@ static int virtnet_rx_resize(struct virtnet_info *vi,
 
 	qindex = rq - vi->rq;
 
-	if (running)
+	if (running) {
 		napi_disable(&rq->napi);
+		cancel_work(&rq->dim.work);
+	}
 
 	err = virtqueue_resize(rq->vq, ring_num, virtnet_rq_free_unused_buf);
 	if (err)
@@ -2641,8 +2687,10 @@ static int virtnet_close(struct net_device *dev)
 	/* Make sure refill_work doesn't re-enable napi! */
 	cancel_delayed_work_sync(&vi->refill);
 
-	for (i = 0; i < vi->max_queue_pairs; i++)
+	for (i = 0; i < vi->max_queue_pairs; i++) {
 		virtnet_disable_queue_pair(vi, i);
+		cancel_work(&vi->rq[i].dim.work);
+	}
 
 	return 0;
 }
@@ -3341,9 +3389,35 @@ static int virtnet_send_tx_notf_coal_cmds(struct virtnet_info *vi,
 static int virtnet_send_rx_notf_coal_cmds(struct virtnet_info *vi,
 					  struct ethtool_coalesce *ec)
 {
+	bool rx_ctrl_dim_on = !!ec->use_adaptive_rx_coalesce;
 	struct scatterlist sgs_rx;
 	int i;
 
+	if (rx_ctrl_dim_on && !virtio_has_feature(vi->vdev, VIRTIO_NET_F_VQ_NOTF_COAL))
+		return -EOPNOTSUPP;
+
+	if (rx_ctrl_dim_on && (ec->rx_coalesce_usecs != vi->intr_coal_rx.max_usecs ||
+			       ec->rx_max_coalesced_frames != vi->intr_coal_rx.max_packets))
+		return -EINVAL;
+
+	if (rx_ctrl_dim_on && !vi->rx_dim_enabled) {
+		vi->rx_dim_enabled = true;
+		for (i = 0; i < vi->max_queue_pairs; i++)
+			vi->rq[i].dim_enabled = true;
+
+		return 0;
+	}
+
+	if (!rx_ctrl_dim_on && vi->rx_dim_enabled) {
+		vi->rx_dim_enabled = false;
+		for (i = 0; i < vi->max_queue_pairs; i++)
+			vi->rq[i].dim_enabled = false;
+	}
+
+	/* Since the per-queue coalescing params can be set,
+	 * we need to apply the new global params even if they
+	 * are not updated.
+	 */
 	vi->ctrl->coal_rx.rx_usecs = cpu_to_le32(ec->rx_coalesce_usecs);
 	vi->ctrl->coal_rx.rx_max_packets = cpu_to_le32(ec->rx_max_coalesced_frames);
 	sg_init_one(&sgs_rx, &vi->ctrl->coal_rx, sizeof(vi->ctrl->coal_rx));
@@ -3353,7 +3427,6 @@ static int virtnet_send_rx_notf_coal_cmds(struct virtnet_info *vi,
 				  &sgs_rx))
 		return -EINVAL;
 
-	/* Save parameters */
 	vi->intr_coal_rx.max_usecs = ec->rx_coalesce_usecs;
 	vi->intr_coal_rx.max_packets = ec->rx_max_coalesced_frames;
 	for (i = 0; i < vi->max_queue_pairs; i++) {
@@ -3380,18 +3453,55 @@ static int virtnet_send_notf_coal_cmds(struct virtnet_info *vi,
 	return 0;
 }
 
-static int virtnet_send_notf_coal_vq_cmds(struct virtnet_info *vi,
-					  struct ethtool_coalesce *ec,
-					  u16 queue)
+static int virtnet_send_rx_notf_coal_vq_cmds(struct virtnet_info *vi,
+					     struct ethtool_coalesce *ec,
+					     u16 queue)
 {
+	bool rx_ctrl_dim_on = !!ec->use_adaptive_rx_coalesce;
+	bool cur_rx_dim = vi->rq[queue].dim_enabled;
+	u32 max_usecs, max_packets;
+	bool update = false;
 	int err;
 
+	max_usecs = vi->rq[queue].intr_coal.max_usecs;
+	max_packets = vi->rq[queue].intr_coal.max_packets;
+	if (ec->rx_coalesce_usecs != max_usecs ||
+	    ec->rx_max_coalesced_frames != max_packets)
+		update = true;
+
+	if (rx_ctrl_dim_on && update)
+		return -EINVAL;
+
+	if (rx_ctrl_dim_on && !cur_rx_dim) {
+		vi->rq[queue].dim_enabled = true;
+		return 0;
+	}
+
+	if (!rx_ctrl_dim_on && cur_rx_dim)
+		vi->rq[queue].dim_enabled = false;
+
+	if (!update)
+		return 0;
+
 	err = virtnet_send_rx_ctrl_coal_vq_cmd(vi, queue,
 					       ec->rx_coalesce_usecs,
 					       ec->rx_max_coalesced_frames);
 	if (err)
 		return err;
 
+	return 0;
+}
+
+static int virtnet_send_notf_coal_vq_cmds(struct virtnet_info *vi,
+					  struct ethtool_coalesce *ec,
+					  u16 queue)
+{
+	int err;
+
+	err = virtnet_send_rx_notf_coal_vq_cmds(vi, ec, queue);
+	if (err)
+		return err;
+
 	err = virtnet_send_tx_ctrl_coal_vq_cmd(vi, queue,
 					       ec->tx_coalesce_usecs,
 					       ec->tx_max_coalesced_frames);
@@ -3401,6 +3511,32 @@ static int virtnet_send_notf_coal_vq_cmds(struct virtnet_info *vi,
 	return 0;
 }
 
+static void virtnet_rx_dim_work(struct work_struct *work)
+{
+	struct dim *dim = container_of(work, struct dim, work);
+	struct receive_queue *rq = container_of(dim,
+			struct receive_queue, dim);
+	struct virtnet_info *vi = rq->vq->vdev->priv;
+	struct net_device *dev = vi->dev;
+	struct dim_cq_moder update_moder;
+	int qnum = rq - vi->rq, err;
+
+	update_moder = net_dim_get_rx_moderation(dim->mode, dim->profile_ix);
+	if (update_moder.usec != vi->rq[qnum].intr_coal.max_usecs ||
+	    update_moder.pkts != vi->rq[qnum].intr_coal.max_packets) {
+		rtnl_lock();
+		err = virtnet_send_rx_ctrl_coal_vq_cmd(vi, qnum,
+						       update_moder.usec,
+						       update_moder.pkts);
+		if (err)
+			pr_debug("%s: Failed to send dim parameters on rxq%d\n",
+				 dev->name, (int)(rq - vi->rq));
+		rtnl_unlock();
+	}
+
+	dim->state = DIM_START_MEASURE;
+}
+
 static int virtnet_coal_params_supported(struct ethtool_coalesce *ec)
 {
 	/* usecs coalescing is supported only if VIRTIO_NET_F_NOTF_COAL
@@ -3482,6 +3618,7 @@ static int virtnet_get_coalesce(struct net_device *dev,
 		ec->tx_coalesce_usecs = vi->intr_coal_tx.max_usecs;
 		ec->tx_max_coalesced_frames = vi->intr_coal_tx.max_packets;
 		ec->rx_max_coalesced_frames = vi->intr_coal_rx.max_packets;
+		ec->use_adaptive_rx_coalesce = vi->rx_dim_enabled;
 	} else {
 		ec->rx_max_coalesced_frames = 1;
 
@@ -3539,6 +3676,7 @@ static int virtnet_get_per_queue_coalesce(struct net_device *dev,
 		ec->tx_coalesce_usecs = vi->sq[queue].intr_coal.max_usecs;
 		ec->tx_max_coalesced_frames = vi->sq[queue].intr_coal.max_packets;
 		ec->rx_max_coalesced_frames = vi->rq[queue].intr_coal.max_packets;
+		ec->use_adaptive_rx_coalesce = vi->rq[queue].dim_enabled;
 	} else {
 		ec->rx_max_coalesced_frames = 1;
 
@@ -3664,7 +3802,7 @@ static int virtnet_set_rxnfc(struct net_device *dev, struct ethtool_rxnfc *info)
 
 static const struct ethtool_ops virtnet_ethtool_ops = {
 	.supported_coalesce_params = ETHTOOL_COALESCE_MAX_FRAMES |
-		ETHTOOL_COALESCE_USECS,
+		ETHTOOL_COALESCE_USECS | ETHTOOL_COALESCE_USE_ADAPTIVE_RX,
 	.get_drvinfo = virtnet_get_drvinfo,
 	.get_link = ethtool_op_get_link,
 	.get_ringparam = virtnet_get_ringparam,
@@ -4254,6 +4392,9 @@ static int virtnet_alloc_queues(struct virtnet_info *vi)
 					 virtnet_poll_tx,
 					 napi_tx ? napi_weight : 0);
 
+		INIT_WORK(&vi->rq[i].dim.work, virtnet_rx_dim_work);
+		vi->rq[i].dim.mode = DIM_CQ_PERIOD_MODE_START_FROM_EQE;
+
 		sg_init_table(vi->rq[i].sg, ARRAY_SIZE(vi->rq[i].sg));
 		ewma_pkt_len_init(&vi->rq[i].mrg_avg_pkt_len);
 		sg_init_table(vi->sq[i].sg, ARRAY_SIZE(vi->sq[i].sg));
@@ -4714,6 +4855,8 @@ static int virtnet_probe(struct virtio_device *vdev)
 free_vqs:
 	virtio_reset_device(vdev);
 	cancel_delayed_work_sync(&vi->refill);
+	for (i = 0; i < vi->max_queue_pairs; i++)
+		cancel_work(&vi->rq[i].dim.work);
 	free_receive_page_frags(vi);
 	virtnet_del_vqs(vi);
 free:
@@ -4738,11 +4881,14 @@ static void remove_vq_common(struct virtnet_info *vi)
 static void virtnet_remove(struct virtio_device *vdev)
 {
 	struct virtnet_info *vi = vdev->priv;
+	int i;
 
 	virtnet_cpu_notif_remove(vi);
 
 	/* Make sure no work handler is accessing the device. */
 	flush_work(&vi->config_work);
+	for (i = 0; i < vi->max_queue_pairs; i++)
+		cancel_work(&vi->rq[i].dim.work);
 
 	unregister_netdev(vi->dev);
 
-- 
2.19.1.6.gb485710b


* Re: [PATCH net-next v5 4/4] virtio-net: support rx netdim
From: Paolo Abeni @ 2023-11-30  9:33 UTC
  To: Heng Qi, virtualization, netdev
  Cc: jasowang, mst, kuba, edumazet, davem, hawk, john.fastabend, ast,
	horms, xuanzhuo, yinjun.zhang

On Mon, 2023-11-27 at 10:55 +0800, Heng Qi wrote:
> @@ -4738,11 +4881,14 @@ static void remove_vq_common(struct virtnet_info *vi)
>  static void virtnet_remove(struct virtio_device *vdev)
>  {
>  	struct virtnet_info *vi = vdev->priv;
> +	int i;
>  
>  	virtnet_cpu_notif_remove(vi);
>  
>  	/* Make sure no work handler is accessing the device. */
>  	flush_work(&vi->config_work);
> +	for (i = 0; i < vi->max_queue_pairs; i++)
> +		cancel_work(&vi->rq[i].dim.work);

If the dim work is still running here, what prevents it from completing
after the following unregister/free netdev?

It looks like you need to call cancel_work_sync() here?

Additionally, the later remove_vq_common() will needlessly call
cancel_work() again; it is probably better to consolidate into a single
(sync) call there.

Cheers,

Paolo


* Re: [PATCH net-next v5 4/4] virtio-net: support rx netdim
From: Heng Qi @ 2023-11-30 12:09 UTC
  To: Paolo Abeni, virtualization, netdev
  Cc: jasowang, mst, kuba, edumazet, davem, hawk, john.fastabend, ast,
	horms, xuanzhuo, yinjun.zhang



On 2023/11/30 17:33, Paolo Abeni wrote:
> On Mon, 2023-11-27 at 10:55 +0800, Heng Qi wrote:
>> @@ -4738,11 +4881,14 @@ static void remove_vq_common(struct virtnet_info *vi)
>>   static void virtnet_remove(struct virtio_device *vdev)
>>   {
>>   	struct virtnet_info *vi = vdev->priv;
>> +	int i;
>>   
>>   	virtnet_cpu_notif_remove(vi);
>>   
>>   	/* Make sure no work handler is accessing the device. */
>>   	flush_work(&vi->config_work);
>> +	for (i = 0; i < vi->max_queue_pairs; i++)
>> +		cancel_work(&vi->rq[i].dim.work);
> If the dim work is still running here, what prevents it from completing
> after the following unregister/free netdev?

Yes, no one here is trying to stop it. The situation is like
unregistering/freeing the netdev while RSS is being set, so I think
this is ok.

>
> It looks like you need to call cancel_work_sync() here?

In v4, Yinjun Zhang mentioned that _sync() can cause deadlock[1].
Therefore, cancel_work() is used here instead of cancel_work_sync() to 
avoid possible deadlock.

[1] 
https://lore.kernel.org/all/20231122092939.1005591-1-yinjun.zhang@corigine.com/

>
> Additionally, the later remove_vq_common() will needlessly call
> cancel_work() again;

Yes. remove_vq_common() now does not call cancel_work().

> it is probably better to consolidate into a single (sync)
> call there.

Do you mean adding it in virtnet_freeze()?
cancel_work() already exists in the path virtnet_freeze() ->
virtnet_freeze_down() -> virtnet_close().

Thanks!

>
> Cheers,
>
> Paolo


* Re: [PATCH net-next v5 4/4] virtio-net: support rx netdim
From: Paolo Abeni @ 2023-11-30 12:23 UTC
  To: Heng Qi, virtualization, netdev
  Cc: jasowang, mst, kuba, edumazet, davem, hawk, john.fastabend, ast,
	horms, xuanzhuo, yinjun.zhang

On Thu, 2023-11-30 at 20:09 +0800, Heng Qi wrote:
> 
> On 2023/11/30 17:33, Paolo Abeni wrote:
> > On Mon, 2023-11-27 at 10:55 +0800, Heng Qi wrote:
> > > @@ -4738,11 +4881,14 @@ static void remove_vq_common(struct virtnet_info *vi)
> > >   static void virtnet_remove(struct virtio_device *vdev)
> > >   {
> > >   	struct virtnet_info *vi = vdev->priv;
> > > +	int i;
> > >   
> > >   	virtnet_cpu_notif_remove(vi);
> > >   
> > >   	/* Make sure no work handler is accessing the device. */
> > >   	flush_work(&vi->config_work);
> > > +	for (i = 0; i < vi->max_queue_pairs; i++)
> > > +		cancel_work(&vi->rq[i].dim.work);
> > If the dim work is still running here, what prevents it from completing
> > after the following unregister/free netdev?
> 
> Yes, no one here is trying to stop it.

So it will cause UaF, right?

> The situation is like
> unregistering/freeing the netdev while RSS is being set, so I think
> this is ok.
 
Could you please elaborate more on that point?

> > It looks like you need to call cancel_work_sync() here?
> 
> In v4, Yinjun Zhang mentioned that _sync() can cause deadlock[1].
> Therefore, cancel_work() is used here instead of cancel_work_sync() to 
> avoid possible deadlock.
> 
> [1] 
> https://lore.kernel.org/all/20231122092939.1005591-1-yinjun.zhang@corigine.com/

Here the call to cancel_work() happens while the caller does not hold
the rtnl lock, so the deadlock reported above will not be triggered.

> > Additionally, the later remove_vq_common() will needlessly call
> > cancel_work() again;
> 
> Yes. remove_vq_common() now does not call cancel_work().

I'm sorry, I misread the context in a previous chunk.

The other point should still apply.

Cheers,

Paolo


* Re: [PATCH net-next v5 4/4] virtio-net: support rx netdim
From: Heng Qi @ 2023-11-30 12:42 UTC
  To: Paolo Abeni, virtualization, netdev
  Cc: jasowang, mst, kuba, edumazet, davem, hawk, john.fastabend, ast,
	horms, xuanzhuo, yinjun.zhang



On 2023/11/30 20:23, Paolo Abeni wrote:
> On Thu, 2023-11-30 at 20:09 +0800, Heng Qi wrote:
>> On 2023/11/30 17:33, Paolo Abeni wrote:
>>> On Mon, 2023-11-27 at 10:55 +0800, Heng Qi wrote:
>>>> @@ -4738,11 +4881,14 @@ static void remove_vq_common(struct virtnet_info *vi)
>>>>    static void virtnet_remove(struct virtio_device *vdev)
>>>>    {
>>>>    	struct virtnet_info *vi = vdev->priv;
>>>> +	int i;
>>>>    
>>>>    	virtnet_cpu_notif_remove(vi);
>>>>    
>>>>    	/* Make sure no work handler is accessing the device. */
>>>>    	flush_work(&vi->config_work);
>>>> +	for (i = 0; i < vi->max_queue_pairs; i++)
>>>> +		cancel_work(&vi->rq[i].dim.work);
>>> If the dim work is still running here, what prevents it from completing
>>> after the following unregister/free netdev?
>> Yes, no one here is trying to stop it.
> So it will cause UaF, right?
>
>> The situation is like
>> unregistering/freeing the netdev while RSS is being set, so I think
>> this is ok.
>   
> Could you please elaborate more on that point?

If I'm not wrong, I think the following 2 scenarios are similar:

Scene 1:
1. User uses ethtool to configure rss settings.
2. The ethtool core holds rtnl_lock.
3. virtnet_remove() is called.
4. virtnet_send_command() is called.

Scene 2:
1. virtnet_poll() queues a virtnet_rx_dim_work().
2. virtnet_rx_dim_work() is called and holds rtnl_lock.
3. virtnet_remove() is called.
4. virtnet_send_command() is called.

So I think it's ok to use cancel_work() here.
What do you think? :)

>
>>> It looks like you need to call cancel_work_sync() here?
>> In v4, Yinjun Zhang mentioned that _sync() can cause deadlock[1].
>> Therefore, cancel_work() is used here instead of cancel_work_sync() to
>> avoid possible deadlock.
>>
>> [1]
>> https://lore.kernel.org/all/20231122092939.1005591-1-yinjun.zhang@corigine.com/
> Here the call to cancel_work() happens while the caller does not held
> the rtnl lock, the deadlock reported above will not be triggered.

v4 used cancel_work_sync() and I did reproduce the deadlock:

rtnl_lock held -> .ndo_stop() -> cancel_work_sync() ->
virtnet_rx_dim_work();
the work acquires the rtnl_lock again, and a deadlock occurs.
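
Spelled out (a rough reconstruction of the scenario):

	rtnl_lock()                              /* e.g. dev_close() path */
	  -> .ndo_stop()
	    -> cancel_work_sync(&rq->dim.work)   /* waits for the work */
	         ... virtnet_rx_dim_work()
	               -> rtnl_lock()            /* blocks on rtnl: deadlock */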

I tested the ctrl cmd/.remove/.ndo_stop()/dim_work scenarios under heavy
concurrency, and cancel_work() works well.

Thanks!

>
>>> Additionally, the later remove_vq_common() will needlessly call
>>> cancel_work() again;
>> Yes. remove_vq_common() now does not call cancel_work().
> I'm sorry, I missread the context in a previous chunk.
>
> The other point should still apply.
>
> Cheers,
>
> Paolo


* RE: [PATCH net-next v5 4/4] virtio-net: support rx netdim
From: Yinjun Zhang @ 2023-12-01  2:11 UTC
  To: Heng Qi, Paolo Abeni, virtualization, netdev
  Cc: jasowang, mst, kuba, edumazet, davem, hawk, john.fastabend, ast,
	horms, xuanzhuo

On Thursday, November 30, 2023 8:42 PM, Heng Qi wrote:
<...>
> >>>>    static void virtnet_remove(struct virtio_device *vdev)
> >>>>    {
> >>>>            struct virtnet_info *vi = vdev->priv;
> >>>> +  int i;
> >>>>
> >>>>            virtnet_cpu_notif_remove(vi);
> >>>>
> >>>>            /* Make sure no work handler is accessing the device. */
> >>>>            flush_work(&vi->config_work);
> >>>> +  for (i = 0; i < vi->max_queue_pairs; i++)
> >>>> +          cancel_work(&vi->rq[i].dim.work);
<...> 
> There's cancel_work_sync() in v4 and I did reproduce the deadlock.
> 
> rtnl_lock held -> .ndo_stop() -> cancel_work_sync() ->
> virtnet_rx_dim_work(),
> the work acquires the rtnl_lock again, then a deadlock occurs.
> 
> I tested the scenario of ctrl cmd/.remove/.ndo_stop()/dim_work when there
> is
> a big concurrency, and cancel_work() works well.

I think the question here is why you need to call `cancel_work()` in
`remove()` at all. You already call it in `close()`, and the call stack is:
remove() -> unregister_netdev() -> rtnl_lock() -> ndo_stop() -> close()

And similarly, you don't need it in the unwind path in `probe()` either.

> 
<...>
