From: Sasha Levin <sashal@kernel.org>
To: linux-kernel@vger.kernel.org, stable@vger.kernel.org
Cc: "Jesper Dangaard Brouer" <brouer@redhat.com>,
	"Mao Wenan" <maowenan@huawei.com>,
	"Alexei Starovoitov" <ast@kernel.org>,
	"Toshiaki Makita" <toshiaki.makita1@gmail.com>,
	"Toke Høiland-Jørgensen" <toke@redhat.com>,
	"Sasha Levin" <sashal@kernel.org>,
	netdev@vger.kernel.org, bpf@vger.kernel.org
Subject: [PATCH AUTOSEL 5.7 204/274] veth: Adjust hard_start offset on redirect XDP frames
Date: Mon,  8 Jun 2020 19:04:57 -0400
Message-ID: <20200608230607.3361041-204-sashal@kernel.org>
In-Reply-To: <20200608230607.3361041-1-sashal@kernel.org>

From: Jesper Dangaard Brouer <brouer@redhat.com>

[ Upstream commit 5c8572251fabc5bb49fd623c064e95a9daf6a3e3 ]

When native XDP redirects into a veth device, the frame arrives as an
xdp_frame structure. It is then processed in veth_xdp_rcv_one(), which
can run a new XDP bpf_prog on the packet. Doing so requires converting
the xdp_frame to an xdp_buff, but the tricky part is that the xdp_frame
memory area is located at the top (data_hard_start) of the memory area
that the xdp_buff will point into.
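
For reference, the layout and the frame-to-buff conversion look roughly
like this (a simplified sketch, not the literal kernel source):

	/* Layout of the memory behind an xdp_frame:
	 *
	 *   |< struct xdp_frame >|<-- headroom -->|<-- payload -->|
	 *   ^                                     ^               ^
	 *   top of buffer                frame->data   + frame->len
	 */
	xdp.data      = frame->data;
	xdp.data_end  = frame->data + frame->len;
	xdp.data_meta = frame->data - frame->metasize;
	/* the question is where xdp.data_hard_start should point */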

The current code tried to protect the xdp_frame area by assigning
xdp_buff.data_hard_start past this memory. This results in 32 bytes
less headroom to expand into via the BPF helper bpf_xdp_adjust_head().

This protection step is actually not needed, because the BPF helper
bpf_xdp_adjust_head() already reserves this area and doesn't allow a
BPF program to expand into it. Thus, it is safe to point
data_hard_start directly at the xdp_frame memory area.
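
For context, the reservation enforced by bpf_xdp_adjust_head() looks
roughly like this (a paraphrased sketch of the check in
net/core/filter.c, quoted from memory rather than verbatim):

	/* Refuse to move xdp->data below the end of the xdp_frame
	 * struct sitting at data_hard_start, so a BPF program can
	 * never overwrite the frame metadata:
	 */
	void *xdp_frame_end = xdp->data_hard_start +
			      sizeof(struct xdp_frame);
	void *data = xdp->data + offset;

	if (unlikely(data < xdp_frame_end ||
		     data > xdp->data_end - ETH_HLEN))
		return -EINVAL;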

Fixes: 9fc8d518d9d5 ("veth: Handle xdp_frames in xdp napi ring")
Reported-by: Mao Wenan <maowenan@huawei.com>
Signed-off-by: Jesper Dangaard Brouer <brouer@redhat.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Toshiaki Makita <toshiaki.makita1@gmail.com>
Acked-by: Toke Høiland-Jørgensen <toke@redhat.com>
Link: https://lore.kernel.org/bpf/158945338331.97035.5923525383710752178.stgit@firesoul
Signed-off-by: Sasha Levin <sashal@kernel.org>
---
 drivers/net/veth.c | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/drivers/net/veth.c b/drivers/net/veth.c
index aece0e5eec8c..d5691bb84448 100644
--- a/drivers/net/veth.c
+++ b/drivers/net/veth.c
@@ -564,13 +564,15 @@ static struct sk_buff *veth_xdp_rcv_one(struct veth_rq *rq,
 					struct veth_stats *stats)
 {
 	void *hard_start = frame->data - frame->headroom;
-	void *head = hard_start - sizeof(struct xdp_frame);
 	int len = frame->len, delta = 0;
 	struct xdp_frame orig_frame;
 	struct bpf_prog *xdp_prog;
 	unsigned int headroom;
 	struct sk_buff *skb;
 
+	/* bpf_xdp_adjust_head() assures BPF cannot access xdp_frame area */
+	hard_start -= sizeof(struct xdp_frame);
+
 	rcu_read_lock();
 	xdp_prog = rcu_dereference(rq->xdp_prog);
 	if (likely(xdp_prog)) {
@@ -592,7 +594,6 @@ static struct sk_buff *veth_xdp_rcv_one(struct veth_rq *rq,
 			break;
 		case XDP_TX:
 			orig_frame = *frame;
-			xdp.data_hard_start = head;
 			xdp.rxq->mem = frame->mem;
 			if (unlikely(veth_xdp_tx(rq, &xdp, bq) < 0)) {
 				trace_xdp_exception(rq->dev, xdp_prog, act);
@@ -605,7 +606,6 @@ static struct sk_buff *veth_xdp_rcv_one(struct veth_rq *rq,
 			goto xdp_xmit;
 		case XDP_REDIRECT:
 			orig_frame = *frame;
-			xdp.data_hard_start = head;
 			xdp.rxq->mem = frame->mem;
 			if (xdp_do_redirect(rq->dev, &xdp, xdp_prog)) {
 				frame = &orig_frame;
@@ -629,7 +629,7 @@ static struct sk_buff *veth_xdp_rcv_one(struct veth_rq *rq,
 	rcu_read_unlock();
 
 	headroom = sizeof(struct xdp_frame) + frame->headroom - delta;
-	skb = veth_build_skb(head, headroom, len, 0);
+	skb = veth_build_skb(hard_start, headroom, len, 0);
 	if (!skb) {
 		xdp_return_frame(frame);
 		stats->rx_drops++;
-- 
2.25.1

