* Re: [PATCH netdev 2/5] virtio-net: support XDP_TX when not more queues
@ 2021-01-05 12:14 kernel test robot
From: kernel test robot @ 2021-01-05 12:14 UTC
  To: kbuild

[-- Attachment #1: Type: text/plain, Size: 3876 bytes --]

CC: kbuild-all@lists.01.org
In-Reply-To: <aa8d42a567f9e97a5071cad4ba88abc3ac5ac760.1609837120.git.xuanzhuo@linux.alibaba.com>
References: <aa8d42a567f9e97a5071cad4ba88abc3ac5ac760.1609837120.git.xuanzhuo@linux.alibaba.com>
TO: Xuan Zhuo <xuanzhuo@linux.alibaba.com>
TO: netdev@vger.kernel.org
CC: dust.li@linux.alibaba.com
CC: tonylu@linux.alibaba.com
CC: "Michael S. Tsirkin" <mst@redhat.com>
CC: Jason Wang <jasowang@redhat.com>
CC: Jakub Kicinski <kuba@kernel.org>
CC: "Björn Töpel" <bjorn.topel@intel.com>
CC: Magnus Karlsson <magnus.karlsson@intel.com>
CC: Jonathan Lemon <jonathan.lemon@gmail.com>
CC: Alexei Starovoitov <ast@kernel.org>

Hi Xuan,

Thank you for the patch! Perhaps something to improve:

[auto build test WARNING on ipvs/master]
[also build test WARNING on linus/master v5.11-rc2 next-20210104]
[cannot apply to sparc-next/master]
[If your patch is applied to the wrong git tree, kindly drop us a note.
And when submitting a patch, we suggest using '--base' as documented in
https://git-scm.com/docs/git-format-patch]
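
For example (an illustrative invocation, not taken from this report), the base
tree information can be recorded when generating the series with something like:

        git format-patch --base=auto -5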

url:    https://github.com/0day-ci/linux/commits/Xuan-Zhuo/virtio-net-support-xdp-socket-zero-copy-xmit/20210105-171505
base:   https://git.kernel.org/pub/scm/linux/kernel/git/horms/ipvs.git master
:::::: branch date: 3 hours ago
:::::: commit date: 3 hours ago
config: x86_64-randconfig-s032-20210105 (attached as .config)
compiler: gcc-9 (Debian 9.3.0-15) 9.3.0
reproduce:
        # apt-get install sparse
        # sparse version: v0.6.3-208-g46a52ca4-dirty
        # https://github.com/0day-ci/linux/commit/a2d366b9711956a7a3309fe64c206567fc7bdc9a
        git remote add linux-review https://github.com/0day-ci/linux
        git fetch --no-tags linux-review Xuan-Zhuo/virtio-net-support-xdp-socket-zero-copy-xmit/20210105-171505
        git checkout a2d366b9711956a7a3309fe64c206567fc7bdc9a
        # save the attached .config to linux build tree
        make W=1 C=1 CF='-fdiagnostic-prefix -D__CHECK_ENDIAN__' ARCH=x86_64 

If you fix the issue, kindly add the following tag as appropriate:
Reported-by: kernel test robot <lkp@intel.com>


"sparse warnings: (new ones prefixed by >>)"
>> drivers/net/virtio_net.c:498:9: sparse: sparse: context imbalance in 'virtnet_get_xdp_sq' - different lock contexts for basic block
   drivers/net/virtio_net.c:507:22: sparse: sparse: context imbalance in 'virtnet_put_xdp_sq' - unexpected unlock

vim +/virtnet_get_xdp_sq +498 drivers/net/virtio_net.c

56434a01b12e99e John Fastabend  2016-12-15  484  
a2d366b9711956a Xuan Zhuo       2021-01-05  485  static struct send_queue *virtnet_get_xdp_sq(struct virtnet_info *vi)
2a43565c0646532 Toshiaki Makita 2018-07-23  486  {
2a43565c0646532 Toshiaki Makita 2018-07-23  487  	unsigned int qp;
a2d366b9711956a Xuan Zhuo       2021-01-05  488  	struct netdev_queue *txq;
2a43565c0646532 Toshiaki Makita 2018-07-23  489  
a2d366b9711956a Xuan Zhuo       2021-01-05  490  	if (vi->curr_queue_pairs > nr_cpu_ids) {
2a43565c0646532 Toshiaki Makita 2018-07-23  491  		qp = vi->curr_queue_pairs - vi->xdp_queue_pairs + smp_processor_id();
a2d366b9711956a Xuan Zhuo       2021-01-05  492  	} else {
a2d366b9711956a Xuan Zhuo       2021-01-05  493  		qp = smp_processor_id() % vi->curr_queue_pairs;
a2d366b9711956a Xuan Zhuo       2021-01-05  494  		txq = netdev_get_tx_queue(vi->dev, qp);
a2d366b9711956a Xuan Zhuo       2021-01-05  495  		__netif_tx_lock(txq, raw_smp_processor_id());
a2d366b9711956a Xuan Zhuo       2021-01-05  496  	}
a2d366b9711956a Xuan Zhuo       2021-01-05  497  
2a43565c0646532 Toshiaki Makita 2018-07-23 @498  	return &vi->sq[qp];
2a43565c0646532 Toshiaki Makita 2018-07-23  499  }
2a43565c0646532 Toshiaki Makita 2018-07-23  500  
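
The context imbalance follows from the conditional locking above:
virtnet_get_xdp_sq() takes the tx lock only on the shared-queue path, and
virtnet_put_xdp_sq() unlocks only under the same condition, so sparse sees the
function's exit block reached with different lock contexts. As a sketch only
(not part of the posted patch, and assuming the __netif_tx_acquire() /
__netif_tx_release() sparse no-op helpers from <linux/netdevice.h> are
acceptable here), one way such a warning is sometimes balanced is to make both
branches touch the same context:

        static struct send_queue *virtnet_get_xdp_sq(struct virtnet_info *vi)
        {
                struct netdev_queue *txq;
                unsigned int qp;

                if (vi->curr_queue_pairs > nr_cpu_ids) {
                        /* Dedicated per-CPU XDP queue: no real lock needed,
                         * but tell sparse the context was "acquired" so both
                         * branches leave the function in the same state.
                         */
                        qp = vi->curr_queue_pairs - vi->xdp_queue_pairs +
                             smp_processor_id();
                        txq = netdev_get_tx_queue(vi->dev, qp);
                        __netif_tx_acquire(txq);
                } else {
                        /* Shared queue: serialize against the normal xmit path. */
                        qp = smp_processor_id() % vi->curr_queue_pairs;
                        txq = netdev_get_tx_queue(vi->dev, qp);
                        __netif_tx_lock(txq, raw_smp_processor_id());
                }

                return &vi->sq[qp];
        }

virtnet_put_xdp_sq() would then mirror this, calling __netif_tx_release() on
the dedicated-queue branch and __netif_tx_unlock() on the shared-queue branch.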

---
0-DAY CI Kernel Test Service, Intel Corporation
https://lists.01.org/hyperkitty/list/kbuild-all@lists.01.org

[-- Attachment #2: config.gz --]
[-- Type: application/gzip, Size: 32997 bytes --]


* [PATCH netdev 2/5] virtio-net: support XDP_TX when not more queues
  2021-01-05  9:11 [PATCH netdev 0/5] virtio-net support xdp socket zero copy xmit Xuan Zhuo
@ 2021-01-05  9:11 ` Xuan Zhuo
From: Xuan Zhuo @ 2021-01-05  9:11 UTC
  To: netdev
  Cc: dust.li, tonylu, Michael S. Tsirkin, Jason Wang, David S. Miller,
	Jakub Kicinski, Björn Töpel, Magnus Karlsson,
	Jonathan Lemon, Alexei Starovoitov, Daniel Borkmann,
	Jesper Dangaard Brouer, John Fastabend, Andrii Nakryiko,
	Martin KaFai Lau, Song Liu, Yonghong Song, KP Singh,
	open list:VIRTIO CORE AND NET DRIVERS, open list,
	open list:XDP SOCKETS (AF_XDP)

The number of queues implemented by many virtio backends is limited,
especially on machines with a large number of CPUs. In such cases it is
often impossible to allocate a separate queue per CPU for XDP_TX.

This patch allows XDP_TX to run on the ordinary transmit queues,
serialized by the per-queue tx lock, when there are not enough queues
to dedicate one to each CPU.
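
For illustration only (the numbers here are made up, not from any particular
device): with 8 CPUs, 4 queue pairs in normal use and 8 extra XDP queues,
curr_queue_pairs = 12 > nr_cpu_ids = 8, so CPU 5 keeps its dedicated queue
12 - 8 + 5 = 9 and no lock is taken; if the device only offers 4 queue pairs
in total, CPU 5 falls back to queue 5 % 4 = 1, which CPU 1 also maps to, so
the per-queue tx lock serializes them.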

Signed-off-by: Xuan Zhuo <xuanzhuo@linux.alibaba.com>
---
 drivers/net/virtio_net.c | 42 ++++++++++++++++++++++++++++++++----------
 1 file changed, 32 insertions(+), 10 deletions(-)

diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
index f65eea6..f2349b8 100644
--- a/drivers/net/virtio_net.c
+++ b/drivers/net/virtio_net.c
@@ -194,6 +194,7 @@ struct virtnet_info {
 
 	/* # of XDP queue pairs currently used by the driver */
 	u16 xdp_queue_pairs;
+	bool xdp_enable;
 
 	/* I like... big packets and I cannot lie! */
 	bool big_packets;
@@ -481,14 +482,34 @@ static int __virtnet_xdp_xmit_one(struct virtnet_info *vi,
 	return 0;
 }
 
-static struct send_queue *virtnet_xdp_sq(struct virtnet_info *vi)
+static struct send_queue *virtnet_get_xdp_sq(struct virtnet_info *vi)
 {
 	unsigned int qp;
+	struct netdev_queue *txq;
+
+	if (vi->curr_queue_pairs > nr_cpu_ids) {
+		qp = vi->curr_queue_pairs - vi->xdp_queue_pairs + smp_processor_id();
+	} else {
+		qp = smp_processor_id() % vi->curr_queue_pairs;
+		txq = netdev_get_tx_queue(vi->dev, qp);
+		__netif_tx_lock(txq, raw_smp_processor_id());
+	}
 
-	qp = vi->curr_queue_pairs - vi->xdp_queue_pairs + smp_processor_id();
 	return &vi->sq[qp];
 }
 
+static void virtnet_put_xdp_sq(struct virtnet_info *vi)
+{
+	unsigned int qp;
+	struct netdev_queue *txq;
+
+	if (vi->curr_queue_pairs <= nr_cpu_ids) {
+		qp = smp_processor_id() % vi->curr_queue_pairs;
+		txq = netdev_get_tx_queue(vi->dev, qp);
+		__netif_tx_unlock(txq);
+	}
+}
+
 static int virtnet_xdp_xmit(struct net_device *dev,
 			    int n, struct xdp_frame **frames, u32 flags)
 {
@@ -512,7 +533,7 @@ static int virtnet_xdp_xmit(struct net_device *dev,
 	if (!xdp_prog)
 		return -ENXIO;
 
-	sq = virtnet_xdp_sq(vi);
+	sq = virtnet_get_xdp_sq(vi);
 
 	if (unlikely(flags & ~XDP_XMIT_FLAGS_MASK)) {
 		ret = -EINVAL;
@@ -560,12 +581,13 @@ static int virtnet_xdp_xmit(struct net_device *dev,
 	sq->stats.kicks += kicks;
 	u64_stats_update_end(&sq->stats.syncp);
 
+	virtnet_put_xdp_sq(vi);
 	return ret;
 }
 
 static unsigned int virtnet_get_headroom(struct virtnet_info *vi)
 {
-	return vi->xdp_queue_pairs ? VIRTIO_XDP_HEADROOM : 0;
+	return vi->xdp_enable ? VIRTIO_XDP_HEADROOM : 0;
 }
 
 /* We copy the packet for XDP in the following cases:
@@ -1457,12 +1479,13 @@ static int virtnet_poll(struct napi_struct *napi, int budget)
 		xdp_do_flush();
 
 	if (xdp_xmit & VIRTIO_XDP_TX) {
-		sq = virtnet_xdp_sq(vi);
+		sq = virtnet_get_xdp_sq(vi);
 		if (virtqueue_kick_prepare(sq->vq) && virtqueue_notify(sq->vq)) {
 			u64_stats_update_begin(&sq->stats.syncp);
 			sq->stats.kicks++;
 			u64_stats_update_end(&sq->stats.syncp);
 		}
+		virtnet_put_xdp_sq(vi);
 	}
 
 	return received;
@@ -2415,10 +2438,7 @@ static int virtnet_xdp_set(struct net_device *dev, struct bpf_prog *prog,
 
 	/* XDP requires extra queues for XDP_TX */
 	if (curr_qp + xdp_qp > vi->max_queue_pairs) {
-		NL_SET_ERR_MSG_MOD(extack, "Too few free TX rings available");
-		netdev_warn(dev, "request %i queues but max is %i\n",
-			    curr_qp + xdp_qp, vi->max_queue_pairs);
-		return -ENOMEM;
+		xdp_qp = 0;
 	}
 
 	old_prog = rtnl_dereference(vi->rq[0].xdp_prog);
@@ -2451,12 +2471,14 @@ static int virtnet_xdp_set(struct net_device *dev, struct bpf_prog *prog,
 	netif_set_real_num_rx_queues(dev, curr_qp + xdp_qp);
 	vi->xdp_queue_pairs = xdp_qp;
 
+	vi->xdp_enable = false;
 	if (prog) {
 		for (i = 0; i < vi->max_queue_pairs; i++) {
 			rcu_assign_pointer(vi->rq[i].xdp_prog, prog);
 			if (i == 0 && !old_prog)
 				virtnet_clear_guest_offloads(vi);
 		}
+		vi->xdp_enable = true;
 	}
 
 	for (i = 0; i < vi->max_queue_pairs; i++) {
@@ -2524,7 +2546,7 @@ static int virtnet_set_features(struct net_device *dev,
 	int err;
 
 	if ((dev->features ^ features) & NETIF_F_LRO) {
-		if (vi->xdp_queue_pairs)
+		if (vi->xdp_enable)
 			return -EBUSY;
 
 		if (features & NETIF_F_LRO)
-- 
1.8.3.1


