From mboxrd@z Thu Jan 1 00:00:00 1970
Received: from mx3-rdu2.redhat.com ([66.187.233.73]:55508 "EHLO mx1.redhat.com"
	rhost-flags-OK-OK-OK-FAIL) by vger.kernel.org with ESMTP id S1751030AbeCIJpJ
	(ORCPT ); Fri, 9 Mar 2018 04:45:09 -0500
Date: Fri, 9 Mar 2018 10:44:58 +0100
From: Jesper Dangaard Brouer
To: Jason Wang
Cc: netdev@vger.kernel.org, Björn Töpel, magnus.karlsson@intel.com,
	eugenia@mellanox.com, John Fastabend, Eran Ben Elisha, Saeed Mahameed,
	galp@mellanox.com, Daniel Borkmann, Alexei Starovoitov, Tariq Toukan,
	brouer@redhat.com
Subject: Re: [bpf-next V2 PATCH 07/15] virtio_net: convert to use generic
	xdp_frame and xdp_return_frame API
Message-ID: <20180309104458.213a9914@redhat.com>
In-Reply-To: <323a325d-4eec-7a7a-efda-c0576a5b04ca@redhat.com>
References: <152051439383.7018.11827926732878918934.stgit@firesoul>
	<152051449683.7018.8602154327710769401.stgit@firesoul>
	<323a325d-4eec-7a7a-efda-c0576a5b04ca@redhat.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8BIT
Sender: netdev-owner@vger.kernel.org

On Fri, 9 Mar 2018 16:03:28 +0800
Jason Wang wrote:

> On 2018-03-08 21:08, Jesper Dangaard Brouer wrote:
> > The virtio_net driver assumes XDP frames are always released based on
> > page refcnt (via put_page). Thus, it only queues the XDP data pointer
> > address and uses virt_to_head_page() to retrieve struct page.
> >
> > Use the XDP return API to get away from such assumptions. Instead
> > queue an xdp_frame, which allows us to use the xdp_return_frame API
> > when releasing the frame.
> >
> > Signed-off-by: Jesper Dangaard Brouer
> > ---
> >  drivers/net/virtio_net.c |   31 +++++++++++++++++++++----------
> >  1 file changed, 21 insertions(+), 10 deletions(-)
> >
> > diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
> > index 23374603e4d9..6c4220450506 100644
> > --- a/drivers/net/virtio_net.c
> > +++ b/drivers/net/virtio_net.c
> > @@ -419,30 +419,41 @@ static bool __virtnet_xdp_xmit(struct virtnet_info *vi,
> >                                 struct xdp_buff *xdp)
> >  {
> >          struct virtio_net_hdr_mrg_rxbuf *hdr;
> > -        unsigned int len;
> > +        struct xdp_frame *xdpf, *xdpf_sent;
> >          struct send_queue *sq;
> > +        unsigned int len;
> >          unsigned int qp;
> > -        void *xdp_sent;
> >          int err;
> >
> >          qp = vi->curr_queue_pairs - vi->xdp_queue_pairs + smp_processor_id();
> >          sq = &vi->sq[qp];
> >
> >          /* Free up any pending old buffers before queueing new ones. */
> > -        while ((xdp_sent = virtqueue_get_buf(sq->vq, &len)) != NULL) {
> > -                struct page *sent_page = virt_to_head_page(xdp_sent);
> > +        while ((xdpf_sent = virtqueue_get_buf(sq->vq, &len)) != NULL)
> > +                xdp_return_frame(xdpf_sent->data, &xdpf_sent->mem);
> >
> > -                put_page(sent_page);
> > -        }
> > +        xdpf = convert_to_xdp_frame(xdp);
> > +        if (unlikely(!xdpf))
> > +                return -EOVERFLOW;
> > +
> > +        /* virtqueue want to use data area in-front of packet */
> > +        if (unlikely(xdpf->metasize > 0))
> > +                return -EOPNOTSUPP;
> > +
> > +        if (unlikely(xdpf->headroom < vi->hdr_len))
> > +                return -EOVERFLOW;
>
> I think this needs another independent patch. For now we can simply drop
> the packet, but we probably need to think more to address it completely,
> e.g. allocate the header part either dynamically or statically.

Okay, so we can follow up later if we want to handle this case better
than dropping. (A rough sketch of that idea is in the PS at the bottom.)

> >
> > -        xdp->data -= vi->hdr_len;
> > +        /* Make room for virtqueue hdr (also change xdpf->headroom?) */
> > +        xdpf->data -= vi->hdr_len;
> >          /* Zero header and leave csum up to XDP layers */
> > -        hdr = xdp->data;
> > +        hdr = xdpf->data;
> >          memset(hdr, 0, vi->hdr_len);
> > +        hdr->hdr.hdr_len = xdpf->len; /* Q: is this needed? */
>
> Maybe another patch but not a must; hdr_len is a hint for the linear part
> of the skb used in the host. A backend implementation may simply ignore
> this value.

So, should I leave it out for now? Or is it okay to keep it?

> > +        xdpf->len   += vi->hdr_len;
> >
> > -        sg_init_one(sq->sg, xdp->data, xdp->data_end - xdp->data);
> > +        sg_init_one(sq->sg, xdpf->data, xdpf->len);

When _later_ introducing bulking, can we use something other than
sg_init_one() to send/queue multiple raw XDP frames?
(I'm asking as I don't know this "sg_*" API usage.)
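E.g. would something roughly like the sketch below be the way to describe
several frames in one scatterlist?  (Completely untested; "frames", "n" and
VIRTNET_XDP_BULK are made-up names here, only the sg_* and virtqueue_*
calls are existing API.)

        struct scatterlist sg[VIRTNET_XDP_BULK]; /* hypothetical bulk size */
        int i, err;

        sg_init_table(sg, n);
        for (i = 0; i < n; i++)
                sg_set_buf(&sg[i], frames[i]->data, frames[i]->len);

        /* Caveat: this posts ONE buffer made up of n segments, and
         * virtqueue_get_buf() later returns a single cookie for it.  To be
         * able to complete/free the frames individually, bulking presumably
         * still needs one virtqueue_add_outbuf() call per frame, batched
         * before a single virtqueue_kick().
         */
        err = virtqueue_add_outbuf(sq->vq, sg, n, frames[0], GFP_ATOMIC);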
> >
> > -        err = virtqueue_add_outbuf(sq->vq, sq->sg, 1, xdp->data, GFP_ATOMIC);
> > +        err = virtqueue_add_outbuf(sq->vq, sq->sg, 1, xdpf, GFP_ATOMIC);
> >          if (unlikely(err))
> >                  return false; /* Caller handle free/refcnt */

-- 
Best regards,
  Jesper Dangaard Brouer
  MSc.CS, Principal Kernel Engineer at Red Hat
  LinkedIn: http://www.linkedin.com/in/brouer
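PS: Regarding the xdpf->headroom < vi->hdr_len case above: if we do follow
up on Jason's "allocate the header part" idea, I imagine something roughly
like the sketch below, so frames with too little headroom would not have to
be dropped.  (Untested and not part of this patch; the per-queue "xdp_hdr"
slot is made up, and a real version would need one header slot per
in-flight frame, not per queue.)

        struct virtio_net_hdr_mrg_rxbuf *hdr = &sq->xdp_hdr; /* hypothetical */

        /* Zero the header and describe the packet as two segments:
         * the out-of-band virtio-net header plus the XDP frame payload.
         */
        memset(hdr, 0, vi->hdr_len);
        sg_init_table(sq->sg, 2);
        sg_set_buf(&sq->sg[0], hdr, vi->hdr_len);
        sg_set_buf(&sq->sg[1], xdpf->data, xdpf->len);
        err = virtqueue_add_outbuf(sq->vq, sq->sg, 2, xdpf, GFP_ATOMIC);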