From: Jason Wang <jasowang@redhat.com>
To: mst@redhat.com, jasowang@redhat.com
Cc: virtualization@lists.linux-foundation.org,
	netdev@vger.kernel.org, linux-kernel@vger.kernel.org,
	bpf@vger.kernel.org, Jesper Dangaard Brouer <brouer@redhat.com>
Subject: [PATCH net-next 1/2] virtio-net: don't reserve space for vnet header for XDP
Date: Wed,  6 May 2020 14:16:32 +0800
Message-ID: <20200506061633.16327-1-jasowang@redhat.com>

We used to reserve space for the vnet header before
xdp.data_hard_start. But this is useless: the packet may be modified
by the XDP program, which can invalidate the information stored in
the header, and currently there is no way for an XDP program to know
that the vnet header exists in the first place.

So let's simply not reserve space for the vnet header in this case.
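
Put differently, the pointer arithmetic in receive_small() changes
roughly as sketched below (names taken from the diff; this is only an
illustration of the offsets, the net effect being that xdp.data stays
put while data_hard_start moves back by vi->hdr_len):

	/* Before */
	xdp.data_hard_start = buf + VIRTNET_RX_PAD + vi->hdr_len;
	xdp.data = xdp.data_hard_start + xdp_headroom;
	/* headroom usable by XDP: xdp_headroom */

	/* After */
	xdp.data_hard_start = buf + VIRTNET_RX_PAD;
	xdp.data = xdp.data_hard_start + xdp_headroom + vi->hdr_len;
	/* headroom usable by XDP: xdp_headroom + vi->hdr_len */

xdp.data itself is unchanged, so the extra vi->hdr_len bytes simply
become headroom that an XDP program can claim with
bpf_xdp_adjust_head().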

Cc: Jesper Dangaard Brouer <brouer@redhat.com>
Signed-off-by: Jason Wang <jasowang@redhat.com>
---
 drivers/net/virtio_net.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
index 11f722460513..98dd75b665a5 100644
--- a/drivers/net/virtio_net.c
+++ b/drivers/net/virtio_net.c
@@ -684,8 +684,8 @@ static struct sk_buff *receive_small(struct net_device *dev,
 			page = xdp_page;
 		}
 
-		xdp.data_hard_start = buf + VIRTNET_RX_PAD + vi->hdr_len;
-		xdp.data = xdp.data_hard_start + xdp_headroom;
+		xdp.data_hard_start = buf + VIRTNET_RX_PAD;
+		xdp.data = xdp.data_hard_start + xdp_headroom + vi->hdr_len;
 		xdp.data_end = xdp.data + len;
 		xdp.data_meta = xdp.data;
 		xdp.rxq = &rq->xdp_rxq;
@@ -845,7 +845,7 @@ static struct sk_buff *receive_mergeable(struct net_device *dev,
 		 * the descriptor on if we get an XDP_TX return code.
 		 */
 		data = page_address(xdp_page) + offset;
-		xdp.data_hard_start = data - VIRTIO_XDP_HEADROOM + vi->hdr_len;
+		xdp.data_hard_start = data - VIRTIO_XDP_HEADROOM;
 		xdp.data = data + vi->hdr_len;
 		xdp.data_end = xdp.data + (len - vi->hdr_len);
 		xdp.data_meta = xdp.data;
-- 
2.20.1


Thread overview: 14+ messages
2020-05-06  6:16 Jason Wang [this message]
2020-05-06  6:16 ` [PATCH net-next 2/2] virtio-net: fix the XDP truesize calculation for mergeable buffers Jason Wang
2020-05-06  7:37   ` Michael S. Tsirkin
2020-05-06  8:21     ` Jason Wang
2020-05-06 12:08       ` Michael S. Tsirkin
2020-05-08  1:54         ` Jason Wang
2020-05-06  7:53 ` [PATCH net-next 1/2] virtio-net: don't reserve space for vnet header for XDP Michael S. Tsirkin
2020-05-06  8:19   ` Jason Wang
2020-05-06  9:54     ` Michael S. Tsirkin
2020-05-08  1:56       ` Jason Wang
2020-05-06  8:21 ` Jesper Dangaard Brouer
2020-05-06  8:34   ` Jason Wang
2020-05-06  9:46     ` Michael S. Tsirkin
2020-05-08  1:59       ` Jason Wang
