From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Mon, 21 Nov 2022 10:25:47 -0800
In-Reply-To: <20221121182552.2152891-1-sdf@google.com>
Mime-Version: 1.0
References: <20221121182552.2152891-1-sdf@google.com>
X-Mailer: git-send-email 2.38.1.584.g0f3c55d4c2-goog
Message-ID: <20221121182552.2152891-4-sdf@google.com>
Subject: [PATCH bpf-next v2 3/8] veth: Introduce veth_xdp_buff wrapper for xdp_buff
From: Stanislav Fomichev <sdf@google.com>
To: bpf@vger.kernel.org
Cc: ast@kernel.org, daniel@iogearbox.net, andrii@kernel.org,
	martin.lau@linux.dev, song@kernel.org, yhs@fb.com,
	john.fastabend@gmail.com, kpsingh@kernel.org, sdf@google.com,
	haoluo@google.com, jolsa@kernel.org, Jakub Kicinski,
	Willem de Bruijn, Jesper Dangaard Brouer, Anatoly Burakov,
	Alexander Lobakin, Magnus Karlsson, Maryam
	Tahhan, xdp-hints@xdp-project.net, netdev@vger.kernel.org
Content-Type: text/plain; charset="UTF-8"
X-Mailing-List: bpf@vger.kernel.org

No functional changes. Boilerplate to allow stuffing more data after
xdp_buff.

Cc: Martin KaFai Lau
Cc: Jakub Kicinski
Cc: Willem de Bruijn
Cc: Jesper Dangaard Brouer
Cc: Anatoly Burakov
Cc: Alexander Lobakin
Cc: Magnus Karlsson
Cc: Maryam Tahhan
Cc: xdp-hints@xdp-project.net
Cc: netdev@vger.kernel.org
Signed-off-by: Stanislav Fomichev <sdf@google.com>
---
 drivers/net/veth.c | 56 +++++++++++++++++++++++++---------------------
 1 file changed, 31 insertions(+), 25 deletions(-)

diff --git a/drivers/net/veth.c b/drivers/net/veth.c
index 2a4592780141..bbabc592d431 100644
--- a/drivers/net/veth.c
+++ b/drivers/net/veth.c
@@ -116,6 +116,10 @@ static struct {
 	{ "peer_ifindex" },
 };
 
+struct veth_xdp_buff {
+	struct xdp_buff xdp;
+};
+
 static int veth_get_link_ksettings(struct net_device *dev,
 				   struct ethtool_link_ksettings *cmd)
 {
@@ -592,23 +596,24 @@ static struct xdp_frame *veth_xdp_rcv_one(struct veth_rq *rq,
 	rcu_read_lock();
 	xdp_prog = rcu_dereference(rq->xdp_prog);
 	if (likely(xdp_prog)) {
-		struct xdp_buff xdp;
+		struct veth_xdp_buff vxbuf;
+		struct xdp_buff *xdp = &vxbuf.xdp;
 		u32 act;
 
-		xdp_convert_frame_to_buff(frame, &xdp);
-		xdp.rxq = &rq->xdp_rxq;
+		xdp_convert_frame_to_buff(frame, xdp);
+		xdp->rxq = &rq->xdp_rxq;
 
-		act = bpf_prog_run_xdp(xdp_prog, &xdp);
+		act = bpf_prog_run_xdp(xdp_prog, xdp);
 
 		switch (act) {
 		case XDP_PASS:
-			if (xdp_update_frame_from_buff(&xdp, frame))
+			if (xdp_update_frame_from_buff(xdp, frame))
 				goto err_xdp;
 			break;
 		case XDP_TX:
 			orig_frame = *frame;
-			xdp.rxq->mem = frame->mem;
-			if (unlikely(veth_xdp_tx(rq, &xdp, bq) < 0)) {
+			xdp->rxq->mem = frame->mem;
+			if (unlikely(veth_xdp_tx(rq, xdp, bq) < 0)) {
 				trace_xdp_exception(rq->dev, xdp_prog, act);
 				frame = &orig_frame;
 				stats->rx_drops++;
@@ -619,8 +624,8 @@ static struct xdp_frame *veth_xdp_rcv_one(struct veth_rq *rq,
 		goto xdp_xmit;
 	case XDP_REDIRECT:
 		orig_frame = *frame;
-		xdp.rxq->mem = frame->mem;
-		if (xdp_do_redirect(rq->dev, &xdp, xdp_prog)) {
+		xdp->rxq->mem = frame->mem;
+		if (xdp_do_redirect(rq->dev, xdp, xdp_prog)) {
 			frame = &orig_frame;
 			stats->rx_drops++;
 			goto err_xdp;
@@ -801,7 +806,8 @@ static struct sk_buff *veth_xdp_rcv_skb(struct veth_rq *rq,
 {
 	void *orig_data, *orig_data_end;
 	struct bpf_prog *xdp_prog;
-	struct xdp_buff xdp;
+	struct veth_xdp_buff vxbuf;
+	struct xdp_buff *xdp = &vxbuf.xdp;
 	u32 act, metalen;
 	int off;
 
@@ -815,22 +821,22 @@ static struct sk_buff *veth_xdp_rcv_skb(struct veth_rq *rq,
 	}
 
 	__skb_push(skb, skb->data - skb_mac_header(skb));
-	if (veth_convert_skb_to_xdp_buff(rq, &xdp, &skb))
+	if (veth_convert_skb_to_xdp_buff(rq, xdp, &skb))
 		goto drop;
 
-	orig_data = xdp.data;
-	orig_data_end = xdp.data_end;
+	orig_data = xdp->data;
+	orig_data_end = xdp->data_end;
 
-	act = bpf_prog_run_xdp(xdp_prog, &xdp);
+	act = bpf_prog_run_xdp(xdp_prog, xdp);
 
 	switch (act) {
 	case XDP_PASS:
 		break;
 	case XDP_TX:
-		veth_xdp_get(&xdp);
+		veth_xdp_get(xdp);
 		consume_skb(skb);
-		xdp.rxq->mem = rq->xdp_mem;
-		if (unlikely(veth_xdp_tx(rq, &xdp, bq) < 0)) {
+		xdp->rxq->mem = rq->xdp_mem;
+		if (unlikely(veth_xdp_tx(rq, xdp, bq) < 0)) {
 			trace_xdp_exception(rq->dev, xdp_prog, act);
 			stats->rx_drops++;
 			goto err_xdp;
@@ -839,10 +845,10 @@ static struct sk_buff *veth_xdp_rcv_skb(struct veth_rq *rq,
 		rcu_read_unlock();
 		goto xdp_xmit;
 	case XDP_REDIRECT:
-		veth_xdp_get(&xdp);
+		veth_xdp_get(xdp);
 		consume_skb(skb);
-		xdp.rxq->mem = rq->xdp_mem;
-		if (xdp_do_redirect(rq->dev, &xdp, xdp_prog)) {
+		xdp->rxq->mem = rq->xdp_mem;
+		if (xdp_do_redirect(rq->dev, xdp, xdp_prog)) {
 			stats->rx_drops++;
 			goto err_xdp;
 		}
@@ -862,7 +868,7 @@ static struct sk_buff *veth_xdp_rcv_skb(struct veth_rq *rq,
 	rcu_read_unlock();
 
 	/* check if bpf_xdp_adjust_head was used */
-	off = orig_data - xdp.data;
+	off = orig_data - xdp->data;
 	if (off > 0)
 		__skb_push(skb, off);
 	else if (off < 0)
@@ -871,21 +877,21 @@ static struct sk_buff *veth_xdp_rcv_skb(struct veth_rq *rq,
 	skb_reset_mac_header(skb);
 
 	/* check if bpf_xdp_adjust_tail was used */
-	off = xdp.data_end - orig_data_end;
+	off = xdp->data_end - orig_data_end;
 	if (off != 0)
 		__skb_put(skb, off); /* positive on grow, negative on shrink */
 
 	/* XDP frag metadata (e.g. nr_frags) are updated in eBPF helpers
 	 * (e.g. bpf_xdp_adjust_tail), we need to update data_len here.
 	 */
-	if (xdp_buff_has_frags(&xdp))
+	if (xdp_buff_has_frags(xdp))
 		skb->data_len = skb_shinfo(skb)->xdp_frags_size;
 	else
 		skb->data_len = 0;
 
 	skb->protocol = eth_type_trans(skb, rq->dev);
 
-	metalen = xdp.data - xdp.data_meta;
+	metalen = xdp->data - xdp->data_meta;
 	if (metalen)
 		skb_metadata_set(skb, metalen);
 out:
@@ -898,7 +904,7 @@ static struct sk_buff *veth_xdp_rcv_skb(struct veth_rq *rq,
 	return NULL;
 err_xdp:
 	rcu_read_unlock();
-	xdp_return_buff(&xdp);
+	xdp_return_buff(xdp);
 xdp_xmit:
 	return NULL;
 }
-- 
2.38.1.584.g0f3c55d4c2-goog
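
To make the "stuffing more data after xdp_buff" idea concrete, here is a
minimal standalone C sketch of the embed-and-container_of pattern the new
wrapper enables: generic code keeps seeing a plain struct xdp_buff *, while
driver paths that know they allocated a struct veth_xdp_buff can recover the
wrapper and any extra fields placed after the embedded buffer. The stand-in
struct xdp_buff fields and the extra_cookie member below are illustrative
assumptions, not something this patch adds; the patch itself only introduces
the empty wrapper.

/* Standalone userspace sketch (not kernel code) of the wrapper pattern. */
#include <stddef.h>
#include <stdio.h>

struct xdp_buff {		/* stand-in for the kernel's struct xdp_buff */
	void *data;
	void *data_end;
};

struct veth_xdp_buff {
	struct xdp_buff xdp;	/* embedded buffer, kept as the first member */
	/* later changes can stuff more data here, e.g.: */
	unsigned long extra_cookie;	/* hypothetical extra field */
};

/* Simplified container_of(): recover the wrapper from a member pointer. */
#define container_of(ptr, type, member) \
	((type *)((char *)(ptr) - offsetof(type, member)))

/* Generic code only takes struct xdp_buff *... */
static void generic_consumer(struct xdp_buff *xdp)
{
	/* ...but a driver hook can get back at the wrapper. */
	struct veth_xdp_buff *vxbuf =
		container_of(xdp, struct veth_xdp_buff, xdp);

	printf("cookie = %lu\n", vxbuf->extra_cookie);
}

int main(void)
{
	struct veth_xdp_buff vxbuf = { .extra_cookie = 42 };

	generic_consumer(&vxbuf.xdp);	/* prints "cookie = 42" */
	return 0;
}

Keeping the xdp_buff as the first member is not required for container_of()
to work, but it keeps a pointer to the wrapper and a pointer to the embedded
buffer interchangeable in practice, which is convenient for code like the
veth RX paths converted above.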