* Re: [PATCH net-next v3 7/8] virtio-net: poll tx call xsk zerocopy xmit
  [not found] <1617786614.454336-5-xuanzhuo@linux.alibaba.com>

From: Jason Wang @ 2021-04-07  9:16 UTC
To: Xuan Zhuo
Cc: Michael S. Tsirkin, David S. Miller, Jakub Kicinski, Björn Töpel,
    Magnus Karlsson, Jonathan Lemon, Alexei Starovoitov, Daniel Borkmann,
    Jesper Dangaard Brouer, John Fastabend, Andrii Nakryiko,
    Martin KaFai Lau, Song Liu, Yonghong Song, KP Singh, virtualization,
    bpf, Dust Li, netdev

On 2021/4/7 5:10 PM, Xuan Zhuo wrote:
> On Tue, 6 Apr 2021 15:03:29 +0800, Jason Wang <jasowang@redhat.com> wrote:
>> On 2021/3/31 3:11 PM, Xuan Zhuo wrote:
>>> Make poll tx call virtnet_xsk_run(); the data in the xsk tx queue
>>> will then be continuously consumed by napi.
>>>
>>> Signed-off-by: Xuan Zhuo <xuanzhuo@linux.alibaba.com>
>>> Reviewed-by: Dust Li <dust.li@linux.alibaba.com>
>>
>> I think we need to squash this into patch 4; it looks more like a bug
>> fix to me.
>>
>>> ---
>>>  drivers/net/virtio_net.c | 20 +++++++++++++++++---
>>>  1 file changed, 17 insertions(+), 3 deletions(-)
>>>
>>> diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
>>> index d7e95f55478d..fac7d0020013 100644
>>> --- a/drivers/net/virtio_net.c
>>> +++ b/drivers/net/virtio_net.c
>>> @@ -264,6 +264,9 @@ struct padded_vnet_hdr {
>>>  	char padding[4];
>>>  };
>>>
>>> +static int virtnet_xsk_run(struct send_queue *sq, struct xsk_buff_pool *pool,
>>> +			   int budget, bool in_napi);
>>> +
>>>  static bool is_xdp_frame(void *ptr)
>>>  {
>>>  	return (unsigned long)ptr & VIRTIO_XDP_FLAG;
>>> @@ -1553,7 +1556,9 @@ static int virtnet_poll_tx(struct napi_struct *napi, int budget)
>>>  	struct send_queue *sq = container_of(napi, struct send_queue, napi);
>>>  	struct virtnet_info *vi = sq->vq->vdev->priv;
>>>  	unsigned int index = vq2txq(sq->vq);
>>> +	struct xsk_buff_pool *pool;
>>>  	struct netdev_queue *txq;
>>> +	int work = 0;
>>>
>>>  	if (unlikely(is_xdp_raw_buffer_queue(vi, index))) {
>>>  		/* We don't need to enable cb for XDP */
>>> @@ -1563,15 +1568,24 @@ static int virtnet_poll_tx(struct napi_struct *napi, int budget)
>>>
>>>  	txq = netdev_get_tx_queue(vi->dev, index);
>>>  	__netif_tx_lock(txq, raw_smp_processor_id());
>>> -	free_old_xmit_skbs(sq, true);
>>> +	rcu_read_lock();
>>> +	pool = rcu_dereference(sq->xsk.pool);
>>> +	if (pool) {
>>> +		work = virtnet_xsk_run(sq, pool, budget, true);
>>> +		rcu_read_unlock();
>>> +	} else {
>>> +		rcu_read_unlock();
>>> +		free_old_xmit_skbs(sq, true);
>>> +	}
>>>  	__netif_tx_unlock(txq);
>>>
>>> -	virtqueue_napi_complete(napi, sq->vq, 0);
>>> +	if (work < budget)
>>> +		virtqueue_napi_complete(napi, sq->vq, 0);
>>>
>>>  	if (sq->vq->num_free >= 2 + MAX_SKB_FRAGS)
>>>  		netif_tx_wake_queue(txq);
>>>
>>> -	return 0;
>>> +	return work;
>>
>> Need a separate patch to "fix" the budget returned by poll_tx here.
>
> I will merge #5, #7 and #8 into #4, which is indeed more reasonable,
> though the resulting patch may be too big.
>
> But I don't understand what you are talking about here: what is the
> separate patch, if this is squashed into patch 4?

So you modify the behaviour of the NAPI poll to return the amount of
work done (previously 0 was returned). Do we need to do that for the
non-XSK part as well (which seems to be the behaviour of other NIC
drivers)? If yes, this part should be a separate patch, to be more
bisect-friendly (a sketch of that follows this message).

Thanks

>
>> Thanks
>>
>>
>>> }
>>>
>>> static int xmit_skb(struct send_queue *sq, struct sk_buff *skb)
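A minimal sketch of the separate non-XSK "fix" Jason is asking for,
illustration only: it assumes free_old_xmit_skbs() is changed to return
the number of packets it reclaimed (in this version of the driver it
returns nothing); everything else follows the function as it appears in
the quoted diff.

/* Illustrative sketch only: report real work from the non-XSK tx poll
 * instead of a hard-coded 0. Assumes free_old_xmit_skbs() is changed
 * to return a packet count, which is not true in this version.
 */
static int virtnet_poll_tx(struct napi_struct *napi, int budget)
{
	struct send_queue *sq = container_of(napi, struct send_queue, napi);
	struct virtnet_info *vi = sq->vq->vdev->priv;
	unsigned int index = vq2txq(sq->vq);
	struct netdev_queue *txq;
	int work;

	if (unlikely(is_xdp_raw_buffer_queue(vi, index))) {
		/* We don't need to enable cb for XDP */
		napi_complete_done(napi, 0);
		return 0;
	}

	txq = netdev_get_tx_queue(vi->dev, index);
	__netif_tx_lock(txq, raw_smp_processor_id());
	work = free_old_xmit_skbs(sq, true);	/* assumed to return a count */
	__netif_tx_unlock(txq);

	/* NAPI contract: only leave polling mode when below budget. */
	work = min(work, budget);
	if (work < budget)
		virtqueue_napi_complete(napi, sq->vq, 0);

	if (sq->vq->num_free >= 2 + MAX_SKB_FRAGS)
		netif_tx_wake_queue(txq);

	return work;
}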
* [PATCH net-next v3 0/8] virtio-net support xdp socket zero copy xmit

From: Xuan Zhuo @ 2021-03-31  7:11 UTC
To: netdev
Cc: Michael S. Tsirkin, Jason Wang, David S. Miller, Jakub Kicinski,
    Björn Töpel, Magnus Karlsson, Jonathan Lemon, Alexei Starovoitov,
    Daniel Borkmann, Jesper Dangaard Brouer, John Fastabend,
    Andrii Nakryiko, Martin KaFai Lau, Song Liu, Yonghong Song,
    KP Singh, virtualization, bpf

XDP socket (xsk) is an excellent kernel-bypass network transmission
framework. Its zero-copy feature needs support from the driver, and the
resulting performance is very good; mlx5 and Intel ixgbe already
support it. This patch set allows virtio-net to support xsk's zero-copy
xmit feature.

xsk's zero-copy rx requires major changes to virtio-net, so I hope to
submit it after this patch set is accepted.

Compared with other drivers, virtio-net does not directly obtain the
DMA address, so I first obtain the xsk page and then pass the page to
virtio (see the sketch after this message).

When recycling the sent packets, we previously had to distinguish
between skb and xdp. Now we have to distinguish between skb, xdp and
xsk.

Patches #0 and #1 make some adjustments to xsk.

---------------- Performance Testing ------------

A udp packet tool implemented on top of the xsk interface vs sockperf
(kernel udp):

xsk zero copy xmit in virtio-net:
  CPU      PPS      MSGSIZE   vhost-cpu
  7.9%     511804   64        100%
  13.3%    484373   1500      100%

sockperf:
  CPU      PPS      MSGSIZE   vhost-cpu
  100%     375227   64        89.1%
  100%     307322   1500      81.5%

Xuan Zhuo (8):
  xsk: XDP_SETUP_XSK_POOL support option check_dma
  xsk: support get page by addr
  virtio-net: xsk zero copy xmit setup
  virtio-net: xsk zero copy xmit implement wakeup and xmit
  virtio-net: xsk zero copy xmit support xsk unaligned mode
  virtio-net: xsk zero copy xmit kick by threshold
  virtio-net: poll tx call xsk zerocopy xmit
  virtio-net: free old xmit handle xsk

 drivers/net/virtio_net.c   | 449 +++++++++++++++++++++++++++++++++----
 include/linux/netdevice.h  |   1 +
 include/net/xdp_sock_drv.h |  11 +
 net/xdp/xsk_buff_pool.c    |   3 +-
 4 files changed, 415 insertions(+), 49 deletions(-)

--
2.31.0
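A minimal, illustrative sketch of the page hand-off described in the
cover letter above — not code from the series. xsk_buff_raw_get_data()
is a real helper from include/net/xdp_sock_drv.h; virtnet_xsk_xmit_one(),
the send_queue layout, and the completion token are assumptions, and
vnet-header handling is omitted.

/* Illustrative only: hand one aligned-mode xsk tx descriptor to the
 * virtqueue as a page + offset, since virtio-net never sees the
 * pool's DMA mapping directly. The skb/xdp/xsk completion tagging
 * (patch 8's job) is reduced to a placeholder token here.
 */
static int virtnet_xsk_xmit_one(struct send_queue *sq,	/* assumed */
				struct xsk_buff_pool *pool,
				struct xdp_desc *desc)
{
	void *data = xsk_buff_raw_get_data(pool, desc->addr);
	struct page *page = virt_to_page(data);
	struct scatterlist sg;

	sg_init_table(&sg, 1);
	sg_set_page(&sg, page, desc->len, offset_in_page(data));

	/* Placeholder token; a real version would tag it as xsk. */
	return virtqueue_add_outbuf(sq->vq, &sg, 1,
				    (void *)(unsigned long)desc->addr,
				    GFP_ATOMIC);
}

In aligned mode an xsk chunk never crosses a page boundary, so one
scatterlist entry suffices; the unaligned mode handled in patch 5 would
presumably need to split a buffer that straddles two pages.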
* [PATCH net-next v3 7/8] virtio-net: poll tx call xsk zerocopy xmit

From: Xuan Zhuo @ 2021-03-31  7:11 UTC
To: netdev
Cc: Michael S. Tsirkin, Jason Wang, David S. Miller, Jakub Kicinski,
    Björn Töpel, Magnus Karlsson, Jonathan Lemon, Alexei Starovoitov,
    Daniel Borkmann, Jesper Dangaard Brouer, John Fastabend,
    Andrii Nakryiko, Martin KaFai Lau, Song Liu, Yonghong Song,
    KP Singh, virtualization, bpf, Dust Li

Make poll tx call virtnet_xsk_run(); the data in the xsk tx queue will
then be continuously consumed by napi.

Signed-off-by: Xuan Zhuo <xuanzhuo@linux.alibaba.com>
Reviewed-by: Dust Li <dust.li@linux.alibaba.com>
---
 drivers/net/virtio_net.c | 20 +++++++++++++++++---
 1 file changed, 17 insertions(+), 3 deletions(-)

diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
index d7e95f55478d..fac7d0020013 100644
--- a/drivers/net/virtio_net.c
+++ b/drivers/net/virtio_net.c
@@ -264,6 +264,9 @@ struct padded_vnet_hdr {
 	char padding[4];
 };
 
+static int virtnet_xsk_run(struct send_queue *sq, struct xsk_buff_pool *pool,
+			   int budget, bool in_napi);
+
 static bool is_xdp_frame(void *ptr)
 {
 	return (unsigned long)ptr & VIRTIO_XDP_FLAG;
@@ -1553,7 +1556,9 @@ static int virtnet_poll_tx(struct napi_struct *napi, int budget)
 	struct send_queue *sq = container_of(napi, struct send_queue, napi);
 	struct virtnet_info *vi = sq->vq->vdev->priv;
 	unsigned int index = vq2txq(sq->vq);
+	struct xsk_buff_pool *pool;
 	struct netdev_queue *txq;
+	int work = 0;
 
 	if (unlikely(is_xdp_raw_buffer_queue(vi, index))) {
 		/* We don't need to enable cb for XDP */
@@ -1563,15 +1568,24 @@ static int virtnet_poll_tx(struct napi_struct *napi, int budget)
 
 	txq = netdev_get_tx_queue(vi->dev, index);
 	__netif_tx_lock(txq, raw_smp_processor_id());
-	free_old_xmit_skbs(sq, true);
+	rcu_read_lock();
+	pool = rcu_dereference(sq->xsk.pool);
+	if (pool) {
+		work = virtnet_xsk_run(sq, pool, budget, true);
+		rcu_read_unlock();
+	} else {
+		rcu_read_unlock();
+		free_old_xmit_skbs(sq, true);
+	}
 	__netif_tx_unlock(txq);
 
-	virtqueue_napi_complete(napi, sq->vq, 0);
+	if (work < budget)
+		virtqueue_napi_complete(napi, sq->vq, 0);
 
 	if (sq->vq->num_free >= 2 + MAX_SKB_FRAGS)
 		netif_tx_wake_queue(txq);
 
-	return 0;
+	return work;
 }
 
 static int xmit_skb(struct send_queue *sq, struct sk_buff *skb)
--
2.31.0
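virtnet_xsk_run() itself is defined in an earlier patch of the series
that is not shown in this thread. As a rough, hypothetical outline of
what such a function has to do: xsk_tx_peek_desc() and xsk_tx_release()
are real helpers from include/net/xdp_sock_drv.h, while
virtnet_xsk_xmit_one() is the assumed per-descriptor helper sketched
earlier, after the cover letter; the series' actual code may differ.

/* Hypothetical outline, not the series' implementation: reclaim
 * completed tx entries, then queue up to `budget` descriptors from
 * the xsk tx ring onto the virtqueue.
 */
static int virtnet_xsk_run(struct send_queue *sq, struct xsk_buff_pool *pool,
			   int budget, bool in_napi)
{
	struct xdp_desc desc;
	int sent = 0;

	free_old_xmit_skbs(sq, in_napi);

	while (sent < budget && xsk_tx_peek_desc(pool, &desc)) {
		if (virtnet_xsk_xmit_one(sq, pool, &desc))	/* assumed */
			break;	/* virtqueue full; retry on next poll */
		sent++;
	}
	xsk_tx_release(pool);	/* commit the descriptors we consumed */

	if (sent)
		virtqueue_kick(sq->vq);

	return sent;
}

Returning `sent` is what makes the `work < budget` check in the patch
meaningful: a full budget keeps the queue in NAPI polling mode.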
* Re: [PATCH net-next v3 7/8] virtio-net: poll tx call xsk zerocopy xmit

From: Jason Wang @ 2021-04-06  7:03 UTC
To: Xuan Zhuo, netdev
Cc: Michael S. Tsirkin, David S. Miller, Jakub Kicinski, Björn Töpel,
    Magnus Karlsson, Jonathan Lemon, Alexei Starovoitov, Daniel Borkmann,
    Jesper Dangaard Brouer, John Fastabend, Andrii Nakryiko,
    Martin KaFai Lau, Song Liu, Yonghong Song, KP Singh, virtualization,
    bpf, Dust Li

On 2021/3/31 3:11 PM, Xuan Zhuo wrote:
> Make poll tx call virtnet_xsk_run(); the data in the xsk tx queue will
> then be continuously consumed by napi.
>
> Signed-off-by: Xuan Zhuo <xuanzhuo@linux.alibaba.com>
> Reviewed-by: Dust Li <dust.li@linux.alibaba.com>

I think we need to squash this into patch 4; it looks more like a bug
fix to me.

> ---
>  drivers/net/virtio_net.c | 20 +++++++++++++++++---
>  1 file changed, 17 insertions(+), 3 deletions(-)
>
> diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
> index d7e95f55478d..fac7d0020013 100644
> --- a/drivers/net/virtio_net.c
> +++ b/drivers/net/virtio_net.c
> @@ -264,6 +264,9 @@ struct padded_vnet_hdr {
>  	char padding[4];
>  };
>
> +static int virtnet_xsk_run(struct send_queue *sq, struct xsk_buff_pool *pool,
> +			   int budget, bool in_napi);
> +
>  static bool is_xdp_frame(void *ptr)
>  {
>  	return (unsigned long)ptr & VIRTIO_XDP_FLAG;
> @@ -1553,7 +1556,9 @@ static int virtnet_poll_tx(struct napi_struct *napi, int budget)
>  	struct send_queue *sq = container_of(napi, struct send_queue, napi);
>  	struct virtnet_info *vi = sq->vq->vdev->priv;
>  	unsigned int index = vq2txq(sq->vq);
> +	struct xsk_buff_pool *pool;
>  	struct netdev_queue *txq;
> +	int work = 0;
>
>  	if (unlikely(is_xdp_raw_buffer_queue(vi, index))) {
>  		/* We don't need to enable cb for XDP */
> @@ -1563,15 +1568,24 @@ static int virtnet_poll_tx(struct napi_struct *napi, int budget)
>
>  	txq = netdev_get_tx_queue(vi->dev, index);
>  	__netif_tx_lock(txq, raw_smp_processor_id());
> -	free_old_xmit_skbs(sq, true);
> +	rcu_read_lock();
> +	pool = rcu_dereference(sq->xsk.pool);
> +	if (pool) {
> +		work = virtnet_xsk_run(sq, pool, budget, true);
> +		rcu_read_unlock();
> +	} else {
> +		rcu_read_unlock();
> +		free_old_xmit_skbs(sq, true);
> +	}
>  	__netif_tx_unlock(txq);
>
> -	virtqueue_napi_complete(napi, sq->vq, 0);
> +	if (work < budget)
> +		virtqueue_napi_complete(napi, sq->vq, 0);
>
>  	if (sq->vq->num_free >= 2 + MAX_SKB_FRAGS)
>  		netif_tx_wake_queue(txq);
>
> -	return 0;
> +	return work;

Need a separate patch to "fix" the budget returned by poll_tx here.

Thanks

> }
>
> static int xmit_skb(struct send_queue *sq, struct sk_buff *skb)
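The rcu_dereference(sq->xsk.pool) in the quoted diff implies a writer
side somewhere in patch 3 of the series. A hypothetical sketch of that
teardown half, to show why the reader above only needs rcu_read_lock()
around the pointer use — virtnet_xsk_pool_disable() and the exact field
layout are assumptions, not code from the series.

/* Hypothetical writer-side sketch: unpublish the pool, then wait out
 * any NAPI pollers still inside their rcu_read_lock() section before
 * the pool can actually be torn down.
 */
static int virtnet_xsk_pool_disable(struct net_device *dev, u16 qid)
{
	struct virtnet_info *vi = netdev_priv(dev);
	struct send_queue *sq = &vi->sq[qid];

	rcu_assign_pointer(sq->xsk.pool, NULL);
	synchronize_net();	/* waits for readers in virtnet_poll_tx() */
	return 0;
}

This is also why the diff can drop rcu_read_lock() before calling
free_old_xmit_skbs(): once the pool pointer has been read (or found
NULL), nothing in the slow path touches RCU-protected state.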