From: Stanislav Fomichev <sdf@google.com>
To: Tariq Toukan <ttoukan.linux@gmail.com>
Cc: bpf@vger.kernel.org, ast@kernel.org, daniel@iogearbox.net,
	andrii@kernel.org, martin.lau@linux.dev, song@kernel.org,
	yhs@fb.com, john.fastabend@gmail.com, kpsingh@kernel.org,
	haoluo@google.com, jolsa@kernel.org,
	"Toke Høiland-Jørgensen" <toke@redhat.com>,
	"Tariq Toukan" <tariqt@nvidia.com>,
	"Saeed Mahameed" <saeedm@nvidia.com>,
	"David Ahern" <dsahern@gmail.com>,
	"Jakub Kicinski" <kuba@kernel.org>,
	"Willem de Bruijn" <willemb@google.com>,
	"Jesper Dangaard Brouer" <brouer@redhat.com>,
	"Anatoly Burakov" <anatoly.burakov@intel.com>,
	"Alexander Lobakin" <alexandr.lobakin@intel.com>,
	"Magnus Karlsson" <magnus.karlsson@gmail.com>,
	"Maryam Tahhan" <mtahhan@redhat.com>,
	xdp-hints@xdp-project.net, netdev@vger.kernel.org
Subject: Re: [PATCH bpf-next v7 15/17] net/mlx5e: Introduce wrapper for xdp_buff
Date: Thu, 12 Jan 2023 11:10:24 -0800
Message-ID: <CAKH8qBsUOdRax0m5XM8guudSX_VYpJuMz_mzdMJegDsq4_ezwA@mail.gmail.com>
In-Reply-To: <a0bac9bd-6772-64d4-8fd5-756ff4d8c2ad@gmail.com>

On Thu, Jan 12, 2023 at 12:07 AM Tariq Toukan <ttoukan.linux@gmail.com> wrote:
>
>
>
> On 12/01/2023 2:32, Stanislav Fomichev wrote:
> > From: Toke Høiland-Jørgensen <toke@redhat.com>
> >
> > Preparation for implementing HW metadata kfuncs. No functional change.
> >
> > Cc: Tariq Toukan <tariqt@nvidia.com>
> > Cc: Saeed Mahameed <saeedm@nvidia.com>
> > Cc: John Fastabend <john.fastabend@gmail.com>
> > Cc: David Ahern <dsahern@gmail.com>
> > Cc: Martin KaFai Lau <martin.lau@linux.dev>
> > Cc: Jakub Kicinski <kuba@kernel.org>
> > Cc: Willem de Bruijn <willemb@google.com>
> > Cc: Jesper Dangaard Brouer <brouer@redhat.com>
> > Cc: Anatoly Burakov <anatoly.burakov@intel.com>
> > Cc: Alexander Lobakin <alexandr.lobakin@intel.com>
> > Cc: Magnus Karlsson <magnus.karlsson@gmail.com>
> > Cc: Maryam Tahhan <mtahhan@redhat.com>
> > Cc: xdp-hints@xdp-project.net
> > Cc: netdev@vger.kernel.org
> > Signed-off-by: Toke Høiland-Jørgensen <toke@redhat.com>
> > Signed-off-by: Stanislav Fomichev <sdf@google.com>
> > ---
> >   drivers/net/ethernet/mellanox/mlx5/core/en.h  |  1 +
> >   .../net/ethernet/mellanox/mlx5/core/en/xdp.c  |  3 +-
> >   .../net/ethernet/mellanox/mlx5/core/en/xdp.h  |  6 +-
> >   .../ethernet/mellanox/mlx5/core/en/xsk/rx.c   | 25 ++++----
> >   .../net/ethernet/mellanox/mlx5/core/en_rx.c   | 58 +++++++++----------
> >   5 files changed, 50 insertions(+), 43 deletions(-)
> >
> > diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en.h b/drivers/net/ethernet/mellanox/mlx5/core/en.h
> > index 2d77fb8a8a01..af663978d1b4 100644
> > --- a/drivers/net/ethernet/mellanox/mlx5/core/en.h
> > +++ b/drivers/net/ethernet/mellanox/mlx5/core/en.h
> > @@ -469,6 +469,7 @@ struct mlx5e_txqsq {
> >   union mlx5e_alloc_unit {
> >       struct page *page;
> >       struct xdp_buff *xsk;
> > +     struct mlx5e_xdp_buff *mxbuf;
>
> In the XSK files below you mix usage of alloc_units[page_idx].mxbuf and
> alloc_units[page_idx].xsk, even though both fields share the same memory
> through the union.
>
> Since struct mlx5e_xdp_buff wraps struct xdp_buff, I think you just need
> to change the type of the existing xsk field from struct xdp_buff *xsk to
> struct mlx5e_xdp_buff *xsk and align the usage.
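
[ For illustration only: the change suggested above would look roughly like
  the sketch below. This is a sketch of Tariq's idea, not part of the posted
  patch. ]

union mlx5e_alloc_unit {
	struct page *page;
	/* Carry the wrapper type directly so no separate mxbuf alias is
	 * needed; struct mlx5e_xdp_buff embeds struct xdp_buff as its
	 * first member.
	 */
	struct mlx5e_xdp_buff *xsk;
};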

Hmmm, good point. I'm actually not sure how it works currently.
mlx5e_alloc_unit.mxbuf doesn't seem to be initialized anywhere? Toke,
am I missing something?

I'm thinking about something like this:

diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en.h b/drivers/net/ethernet/mellanox/mlx5/core/en.h
index af663978d1b4..2d77fb8a8a01 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en.h
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en.h
@@ -469,7 +469,6 @@ struct mlx5e_txqsq {
 union mlx5e_alloc_unit {
        struct page *page;
        struct xdp_buff *xsk;
-       struct mlx5e_xdp_buff *mxbuf;
 };

 /* XDP packets can be transmitted in different ways. On completion, we need to
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/xsk/rx.c b/drivers/net/ethernet/mellanox/mlx5/core/en/xsk/rx.c
index 9cff82d764e3..fd0805df34b7 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en/xsk/rx.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en/xsk/rx.c
@@ -234,7 +234,8 @@ struct sk_buff *mlx5e_xsk_skb_from_cqe_mpwrq_linear(struct mlx5e_rq *rq,
                                                    u32 head_offset,
                                                    u32 page_idx)
 {
-       struct mlx5e_xdp_buff *mxbuf = wi->alloc_units[page_idx].mxbuf;
+       struct mlx5e_xdp_buff *mxbuf = container_of(wi->alloc_units[page_idx].xsk,
+                                                   struct mlx5e_xdp_buff, xdp);
        struct bpf_prog *prog;

        /* Check packet size. Note LRO doesn't use linear SKB */
@@ -286,7 +287,8 @@ struct sk_buff *mlx5e_xsk_skb_from_cqe_linear(struct mlx5e_rq *rq,
                                              struct mlx5e_wqe_frag_info *wi,
                                              u32 cqe_bcnt)
 {
-       struct mlx5e_xdp_buff *mxbuf = wi->au->mxbuf;
+       struct mlx5e_xdp_buff *mxbuf = container_of(wi->au->xsk,
+                                                   struct mlx5e_xdp_buff, xdp);
        struct bpf_prog *prog;

        /* wi->offset is not used in this function, because xdp->data and the

Does it look sensible?
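
[ Context for readers: container_of() works here because struct
  mlx5e_xdp_buff embeds struct xdp_buff as its first member, and for XSK any
  extra wrapper fields have to fit in the cb area added in patch 14, which is
  what the XSK_CHECK_PRIV_TYPE() check in mlx5e_xsk_alloc_rx_mpwqe() enforces.
  Below is a standalone sketch of the pattern; the helper name is hypothetical
  and not part of the patch. ]

struct mlx5e_xdp_buff {
	struct xdp_buff xdp;	/* keep as the first member */
};

/* Hypothetical helper, illustration only: recover the wrapper from the
 * embedded xdp_buff pointer stored in union mlx5e_alloc_unit.
 */
static inline struct mlx5e_xdp_buff *to_mxbuf(struct xdp_buff *xdp)
{
	return container_of(xdp, struct mlx5e_xdp_buff, xdp);
}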


> >   };
> >
> >   /* XDP packets can be transmitted in different ways. On completion, we need to
> > diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/xdp.c b/drivers/net/ethernet/mellanox/mlx5/core/en/xdp.c
> > index 20507ef2f956..31bb6806bf5d 100644
> > --- a/drivers/net/ethernet/mellanox/mlx5/core/en/xdp.c
> > +++ b/drivers/net/ethernet/mellanox/mlx5/core/en/xdp.c
> > @@ -158,8 +158,9 @@ mlx5e_xmit_xdp_buff(struct mlx5e_xdpsq *sq, struct mlx5e_rq *rq,
> >
> >   /* returns true if packet was consumed by xdp */
> >   bool mlx5e_xdp_handle(struct mlx5e_rq *rq, struct page *page,
> > -                   struct bpf_prog *prog, struct xdp_buff *xdp)
> > +                   struct bpf_prog *prog, struct mlx5e_xdp_buff *mxbuf)
> >   {
> > +     struct xdp_buff *xdp = &mxbuf->xdp;
> >       u32 act;
> >       int err;
> >
> > diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/xdp.h b/drivers/net/ethernet/mellanox/mlx5/core/en/xdp.h
> > index bc2d9034af5b..389818bf6833 100644
> > --- a/drivers/net/ethernet/mellanox/mlx5/core/en/xdp.h
> > +++ b/drivers/net/ethernet/mellanox/mlx5/core/en/xdp.h
> > @@ -44,10 +44,14 @@
> >       (MLX5E_XDP_INLINE_WQE_MAX_DS_CNT * MLX5_SEND_WQE_DS - \
> >        sizeof(struct mlx5_wqe_inline_seg))
> >
> > +struct mlx5e_xdp_buff {
> > +     struct xdp_buff xdp;
> > +};
> > +
> >   struct mlx5e_xsk_param;
> >   int mlx5e_xdp_max_mtu(struct mlx5e_params *params, struct mlx5e_xsk_param *xsk);
> >   bool mlx5e_xdp_handle(struct mlx5e_rq *rq, struct page *page,
> > -                   struct bpf_prog *prog, struct xdp_buff *xdp);
> > +                   struct bpf_prog *prog, struct mlx5e_xdp_buff *mlctx);
> >   void mlx5e_xdp_mpwqe_complete(struct mlx5e_xdpsq *sq);
> >   bool mlx5e_poll_xdpsq_cq(struct mlx5e_cq *cq);
> >   void mlx5e_free_xdpsq_descs(struct mlx5e_xdpsq *sq);
> > diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en/xsk/rx.c b/drivers/net/ethernet/mellanox/mlx5/core/en/xsk/rx.c
> > index c91b54d9ff27..9cff82d764e3 100644
> > --- a/drivers/net/ethernet/mellanox/mlx5/core/en/xsk/rx.c
> > +++ b/drivers/net/ethernet/mellanox/mlx5/core/en/xsk/rx.c
> > @@ -22,6 +22,7 @@ int mlx5e_xsk_alloc_rx_mpwqe(struct mlx5e_rq *rq, u16 ix)
> >               goto err;
> >
> >       BUILD_BUG_ON(sizeof(wi->alloc_units[0]) != sizeof(wi->alloc_units[0].xsk));
> > +     XSK_CHECK_PRIV_TYPE(struct mlx5e_xdp_buff);
> >       batch = xsk_buff_alloc_batch(rq->xsk_pool, (struct xdp_buff **)wi->alloc_units,
> >                                    rq->mpwqe.pages_per_wqe);
> >
> > @@ -233,7 +234,7 @@ struct sk_buff *mlx5e_xsk_skb_from_cqe_mpwrq_linear(struct mlx5e_rq *rq,
> >                                                   u32 head_offset,
> >                                                   u32 page_idx)
> >   {
> > -     struct xdp_buff *xdp = wi->alloc_units[page_idx].xsk;
> > +     struct mlx5e_xdp_buff *mxbuf = wi->alloc_units[page_idx].mxbuf;
> >       struct bpf_prog *prog;
> >
> >       /* Check packet size. Note LRO doesn't use linear SKB */
> > @@ -249,9 +250,9 @@ struct sk_buff *mlx5e_xsk_skb_from_cqe_mpwrq_linear(struct mlx5e_rq *rq,
> >        */
> >       WARN_ON_ONCE(head_offset);
> >
> > -     xsk_buff_set_size(xdp, cqe_bcnt);
> > -     xsk_buff_dma_sync_for_cpu(xdp, rq->xsk_pool);
> > -     net_prefetch(xdp->data);
> > +     xsk_buff_set_size(&mxbuf->xdp, cqe_bcnt);
> > +     xsk_buff_dma_sync_for_cpu(&mxbuf->xdp, rq->xsk_pool);
> > +     net_prefetch(mxbuf->xdp.data);
> >
> >       /* Possible flows:
> >        * - XDP_REDIRECT to XSKMAP:
> > @@ -269,7 +270,7 @@ struct sk_buff *mlx5e_xsk_skb_from_cqe_mpwrq_linear(struct mlx5e_rq *rq,
> >        */
> >
> >       prog = rcu_dereference(rq->xdp_prog);
> > -     if (likely(prog && mlx5e_xdp_handle(rq, NULL, prog, xdp))) {
> > +     if (likely(prog && mlx5e_xdp_handle(rq, NULL, prog, mxbuf))) {
> >               if (likely(__test_and_clear_bit(MLX5E_RQ_FLAG_XDP_XMIT, rq->flags)))
> >                       __set_bit(page_idx, wi->xdp_xmit_bitmap); /* non-atomic */
> >               return NULL; /* page/packet was consumed by XDP */
> > @@ -278,14 +279,14 @@ struct sk_buff *mlx5e_xsk_skb_from_cqe_mpwrq_linear(struct mlx5e_rq *rq,
> >       /* XDP_PASS: copy the data from the UMEM to a new SKB and reuse the
> >        * frame. On SKB allocation failure, NULL is returned.
> >        */
> > -     return mlx5e_xsk_construct_skb(rq, xdp);
> > +     return mlx5e_xsk_construct_skb(rq, &mxbuf->xdp);
> >   }
> >
> >   struct sk_buff *mlx5e_xsk_skb_from_cqe_linear(struct mlx5e_rq *rq,
> >                                             struct mlx5e_wqe_frag_info *wi,
> >                                             u32 cqe_bcnt)
> >   {
> > -     struct xdp_buff *xdp = wi->au->xsk;
> > +     struct mlx5e_xdp_buff *mxbuf = wi->au->mxbuf;
> >       struct bpf_prog *prog;
> >
> >       /* wi->offset is not used in this function, because xdp->data and the
> > @@ -295,17 +296,17 @@ struct sk_buff *mlx5e_xsk_skb_from_cqe_linear(struct mlx5e_rq *rq,
> >        */
> >       WARN_ON_ONCE(wi->offset);
> >
> > -     xsk_buff_set_size(xdp, cqe_bcnt);
> > -     xsk_buff_dma_sync_for_cpu(xdp, rq->xsk_pool);
> > -     net_prefetch(xdp->data);
> > +     xsk_buff_set_size(&mxbuf->xdp, cqe_bcnt);
> > +     xsk_buff_dma_sync_for_cpu(&mxbuf->xdp, rq->xsk_pool);
> > +     net_prefetch(mxbuf->xdp.data);
> >
> >       prog = rcu_dereference(rq->xdp_prog);
> > -     if (likely(prog && mlx5e_xdp_handle(rq, NULL, prog, xdp)))
> > +     if (likely(prog && mlx5e_xdp_handle(rq, NULL, prog, mxbuf)))
> >               return NULL; /* page/packet was consumed by XDP */
> >
> >       /* XDP_PASS: copy the data from the UMEM to a new SKB. The frame reuse
> >        * will be handled by mlx5e_free_rx_wqe.
> >        * On SKB allocation failure, NULL is returned.
> >        */
> > -     return mlx5e_xsk_construct_skb(rq, xdp);
> > +     return mlx5e_xsk_construct_skb(rq, &mxbuf->xdp);
> >   }
> > diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c b/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c
> > index c8820ab22169..6affdddf5bcf 100644
> > --- a/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c
> > +++ b/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c
> > @@ -1575,11 +1575,11 @@ struct sk_buff *mlx5e_build_linear_skb(struct mlx5e_rq *rq, void *va,
> >       return skb;
> >   }
> >
> > -static void mlx5e_fill_xdp_buff(struct mlx5e_rq *rq, void *va, u16 headroom,
> > -                             u32 len, struct xdp_buff *xdp)
> > +static void mlx5e_fill_mxbuf(struct mlx5e_rq *rq, void *va, u16 headroom,
> > +                          u32 len, struct mlx5e_xdp_buff *mxbuf)
> >   {
> > -     xdp_init_buff(xdp, rq->buff.frame0_sz, &rq->xdp_rxq);
> > -     xdp_prepare_buff(xdp, va, headroom, len, true);
> > +     xdp_init_buff(&mxbuf->xdp, rq->buff.frame0_sz, &rq->xdp_rxq);
> > +     xdp_prepare_buff(&mxbuf->xdp, va, headroom, len, true);
> >   }
> >
> >   static struct sk_buff *
> > @@ -1606,16 +1606,16 @@ mlx5e_skb_from_cqe_linear(struct mlx5e_rq *rq, struct mlx5e_wqe_frag_info *wi,
> >
> >       prog = rcu_dereference(rq->xdp_prog);
> >       if (prog) {
> > -             struct xdp_buff xdp;
> > +             struct mlx5e_xdp_buff mxbuf;
> >
> >               net_prefetchw(va); /* xdp_frame data area */
> > -             mlx5e_fill_xdp_buff(rq, va, rx_headroom, cqe_bcnt, &xdp);
> > -             if (mlx5e_xdp_handle(rq, au->page, prog, &xdp))
> > +             mlx5e_fill_mxbuf(rq, va, rx_headroom, cqe_bcnt, &mxbuf);
> > +             if (mlx5e_xdp_handle(rq, au->page, prog, &mxbuf))
> >                       return NULL; /* page/packet was consumed by XDP */
> >
> > -             rx_headroom = xdp.data - xdp.data_hard_start;
> > -             metasize = xdp.data - xdp.data_meta;
> > -             cqe_bcnt = xdp.data_end - xdp.data;
> > +             rx_headroom = mxbuf.xdp.data - mxbuf.xdp.data_hard_start;
> > +             metasize = mxbuf.xdp.data - mxbuf.xdp.data_meta;
> > +             cqe_bcnt = mxbuf.xdp.data_end - mxbuf.xdp.data;
> >       }
> >       frag_size = MLX5_SKB_FRAG_SZ(rx_headroom + cqe_bcnt);
> >       skb = mlx5e_build_linear_skb(rq, va, frag_size, rx_headroom, cqe_bcnt, metasize);
> > @@ -1637,9 +1637,9 @@ mlx5e_skb_from_cqe_nonlinear(struct mlx5e_rq *rq, struct mlx5e_wqe_frag_info *wi
> >       union mlx5e_alloc_unit *au = wi->au;
> >       u16 rx_headroom = rq->buff.headroom;
> >       struct skb_shared_info *sinfo;
> > +     struct mlx5e_xdp_buff mxbuf;
> >       u32 frag_consumed_bytes;
> >       struct bpf_prog *prog;
> > -     struct xdp_buff xdp;
> >       struct sk_buff *skb;
> >       dma_addr_t addr;
> >       u32 truesize;
> > @@ -1654,8 +1654,8 @@ mlx5e_skb_from_cqe_nonlinear(struct mlx5e_rq *rq, struct mlx5e_wqe_frag_info *wi
> >       net_prefetchw(va); /* xdp_frame data area */
> >       net_prefetch(va + rx_headroom);
> >
> > -     mlx5e_fill_xdp_buff(rq, va, rx_headroom, frag_consumed_bytes, &xdp);
> > -     sinfo = xdp_get_shared_info_from_buff(&xdp);
> > +     mlx5e_fill_mxbuf(rq, va, rx_headroom, frag_consumed_bytes, &mxbuf);
> > +     sinfo = xdp_get_shared_info_from_buff(&mxbuf.xdp);
> >       truesize = 0;
> >
> >       cqe_bcnt -= frag_consumed_bytes;
> > @@ -1673,13 +1673,13 @@ mlx5e_skb_from_cqe_nonlinear(struct mlx5e_rq *rq, struct mlx5e_wqe_frag_info *wi
> >               dma_sync_single_for_cpu(rq->pdev, addr + wi->offset,
> >                                       frag_consumed_bytes, rq->buff.map_dir);
> >
> > -             if (!xdp_buff_has_frags(&xdp)) {
> > +             if (!xdp_buff_has_frags(&mxbuf.xdp)) {
> >                       /* Init on the first fragment to avoid cold cache access
> >                        * when possible.
> >                        */
> >                       sinfo->nr_frags = 0;
> >                       sinfo->xdp_frags_size = 0;
> > -                     xdp_buff_set_frags_flag(&xdp);
> > +                     xdp_buff_set_frags_flag(&mxbuf.xdp);
> >               }
> >
> >               frag = &sinfo->frags[sinfo->nr_frags++];
> > @@ -1688,7 +1688,7 @@ mlx5e_skb_from_cqe_nonlinear(struct mlx5e_rq *rq, struct mlx5e_wqe_frag_info *wi
> >               skb_frag_size_set(frag, frag_consumed_bytes);
> >
> >               if (page_is_pfmemalloc(au->page))
> > -                     xdp_buff_set_frag_pfmemalloc(&xdp);
> > +                     xdp_buff_set_frag_pfmemalloc(&mxbuf.xdp);
> >
> >               sinfo->xdp_frags_size += frag_consumed_bytes;
> >               truesize += frag_info->frag_stride;
> > @@ -1701,7 +1701,7 @@ mlx5e_skb_from_cqe_nonlinear(struct mlx5e_rq *rq, struct mlx5e_wqe_frag_info *wi
> >       au = head_wi->au;
> >
> >       prog = rcu_dereference(rq->xdp_prog);
> > -     if (prog && mlx5e_xdp_handle(rq, au->page, prog, &xdp)) {
> > +     if (prog && mlx5e_xdp_handle(rq, au->page, prog, &mxbuf)) {
> >               if (test_bit(MLX5E_RQ_FLAG_XDP_XMIT, rq->flags)) {
> >                       int i;
> >
> > @@ -1711,22 +1711,22 @@ mlx5e_skb_from_cqe_nonlinear(struct mlx5e_rq *rq, struct mlx5e_wqe_frag_info *wi
> >               return NULL; /* page/packet was consumed by XDP */
> >       }
> >
> > -     skb = mlx5e_build_linear_skb(rq, xdp.data_hard_start, rq->buff.frame0_sz,
> > -                                  xdp.data - xdp.data_hard_start,
> > -                                  xdp.data_end - xdp.data,
> > -                                  xdp.data - xdp.data_meta);
> > +     skb = mlx5e_build_linear_skb(rq, mxbuf.xdp.data_hard_start, rq->buff.frame0_sz,
> > +                                  mxbuf.xdp.data - mxbuf.xdp.data_hard_start,
> > +                                  mxbuf.xdp.data_end - mxbuf.xdp.data,
> > +                                  mxbuf.xdp.data - mxbuf.xdp.data_meta);
> >       if (unlikely(!skb))
> >               return NULL;
> >
> >       page_ref_inc(au->page);
> >
> > -     if (unlikely(xdp_buff_has_frags(&xdp))) {
> > +     if (unlikely(xdp_buff_has_frags(&mxbuf.xdp))) {
> >               int i;
> >
> >               /* sinfo->nr_frags is reset by build_skb, calculate again. */
> >               xdp_update_skb_shared_info(skb, wi - head_wi - 1,
> >                                          sinfo->xdp_frags_size, truesize,
> > -                                        xdp_buff_is_frag_pfmemalloc(&xdp));
> > +                                        xdp_buff_is_frag_pfmemalloc(&mxbuf.xdp));
> >
> >               for (i = 0; i < sinfo->nr_frags; i++) {
> >                       skb_frag_t *frag = &sinfo->frags[i];
> > @@ -2007,19 +2007,19 @@ mlx5e_skb_from_cqe_mpwrq_linear(struct mlx5e_rq *rq, struct mlx5e_mpw_info *wi,
> >
> >       prog = rcu_dereference(rq->xdp_prog);
> >       if (prog) {
> > -             struct xdp_buff xdp;
> > +             struct mlx5e_xdp_buff mxbuf;
> >
> >               net_prefetchw(va); /* xdp_frame data area */
> > -             mlx5e_fill_xdp_buff(rq, va, rx_headroom, cqe_bcnt, &xdp);
> > -             if (mlx5e_xdp_handle(rq, au->page, prog, &xdp)) {
> > +             mlx5e_fill_mxbuf(rq, va, rx_headroom, cqe_bcnt, &mxbuf);
> > +             if (mlx5e_xdp_handle(rq, au->page, prog, &mxbuf)) {
> >                       if (__test_and_clear_bit(MLX5E_RQ_FLAG_XDP_XMIT, rq->flags))
> >                               __set_bit(page_idx, wi->xdp_xmit_bitmap); /* non-atomic */
> >                       return NULL; /* page/packet was consumed by XDP */
> >               }
> >
> > -             rx_headroom = xdp.data - xdp.data_hard_start;
> > -             metasize = xdp.data - xdp.data_meta;
> > -             cqe_bcnt = xdp.data_end - xdp.data;
> > +             rx_headroom = mxbuf.xdp.data - mxbuf.xdp.data_hard_start;
> > +             metasize = mxbuf.xdp.data - mxbuf.xdp.data_meta;
> > +             cqe_bcnt = mxbuf.xdp.data_end - mxbuf.xdp.data;
> >       }
> >       frag_size = MLX5_SKB_FRAG_SZ(rx_headroom + cqe_bcnt);
> >       skb = mlx5e_build_linear_skb(rq, va, frag_size, rx_headroom, cqe_bcnt, metasize);

Thread overview: 43+ messages
2023-01-12  0:32 [PATCH bpf-next v7 00/17] xdp: hints via kfuncs Stanislav Fomichev
2023-01-12  0:32 ` [PATCH bpf-next v7 01/17] bpf: Document XDP RX metadata Stanislav Fomichev
2023-01-16 13:09   ` Jesper Dangaard Brouer
2023-01-17 20:33     ` Stanislav Fomichev
2023-01-18 14:28       ` Jesper Dangaard Brouer
2023-01-18 17:55         ` Stanislav Fomichev
2023-01-12  0:32 ` [PATCH bpf-next v7 02/17] bpf: Rename bpf_{prog,map}_is_dev_bound to is_offloaded Stanislav Fomichev
2023-01-12  0:32 ` [PATCH bpf-next v7 03/17] bpf: Move offload initialization into late_initcall Stanislav Fomichev
2023-01-12  0:32 ` [PATCH bpf-next v7 04/17] bpf: Reshuffle some parts of bpf/offload.c Stanislav Fomichev
2023-01-12  0:32 ` [PATCH bpf-next v7 05/17] bpf: Introduce device-bound XDP programs Stanislav Fomichev
2023-01-12  0:32 ` [PATCH bpf-next v7 06/17] selftests/bpf: Update expected test_offload.py messages Stanislav Fomichev
2023-01-12  0:32 ` [PATCH bpf-next v7 07/17] bpf: XDP metadata RX kfuncs Stanislav Fomichev
2023-01-12  0:32 ` [PATCH bpf-next v7 08/17] bpf: Support consuming XDP HW metadata from fext programs Stanislav Fomichev
2023-01-12  0:32 ` [PATCH bpf-next v7 09/17] veth: Introduce veth_xdp_buff wrapper for xdp_buff Stanislav Fomichev
2023-01-12  0:32 ` [PATCH bpf-next v7 10/17] veth: Support RX XDP metadata Stanislav Fomichev
2023-01-16 16:21   ` Jesper Dangaard Brouer
2023-01-17 20:33     ` Stanislav Fomichev
2023-01-18 15:57       ` Jesper Dangaard Brouer
2023-01-12  0:32 ` [PATCH bpf-next v7 11/17] selftests/bpf: Verify xdp_metadata xdp->af_xdp path Stanislav Fomichev
2023-01-12  0:32 ` [PATCH bpf-next v7 12/17] net/mlx4_en: Introduce wrapper for xdp_buff Stanislav Fomichev
2023-01-12  0:32 ` [PATCH bpf-next v7 13/17] net/mlx4_en: Support RX XDP metadata Stanislav Fomichev
2023-01-12  0:32 ` [PATCH bpf-next v7 14/17] xsk: Add cb area to struct xdp_buff_xsk Stanislav Fomichev
2023-01-12  0:32 ` [PATCH bpf-next v7 15/17] net/mlx5e: Introduce wrapper for xdp_buff Stanislav Fomichev
2023-01-12  8:07   ` Tariq Toukan
2023-01-12 19:10     ` Stanislav Fomichev [this message]
2023-01-12 21:09       ` [xdp-hints] " Toke Høiland-Jørgensen
2023-01-12 21:55         ` Toke Høiland-Jørgensen
2023-01-12 22:18           ` Stanislav Fomichev
2023-01-12 22:29             ` Toke Høiland-Jørgensen
2023-01-13 20:55               ` Tariq Toukan
2023-01-13 20:53           ` Tariq Toukan
2023-01-13 21:31             ` Toke Høiland-Jørgensen
2023-01-15  6:59               ` Tariq Toukan
2023-01-15 11:13                 ` Toke Høiland-Jørgensen
2023-01-12  0:32 ` [PATCH bpf-next v7 16/17] net/mlx5e: Support RX XDP metadata Stanislav Fomichev
2023-01-12  8:13   ` Tariq Toukan
2023-01-12 19:09     ` Stanislav Fomichev
2023-01-13 20:25       ` Tariq Toukan
2023-01-12  0:32 ` [PATCH bpf-next v7 17/17] selftests/bpf: Simple program to dump XDP RX metadata Stanislav Fomichev
2023-01-12  7:29 ` [PATCH bpf-next v7 00/17] xdp: hints via kfuncs Martin KaFai Lau
2023-01-12  8:19   ` Tariq Toukan
2023-01-12 18:09     ` Stanislav Fomichev
2023-01-12 18:20       ` Martin KaFai Lau
