From: "Eelco Chaudron" <echaudro@redhat.com>
To: "John Fastabend" <john.fastabend@gmail.com>
Cc: "Lorenzo Bianconi" <lorenzo@kernel.org>,
	bpf@vger.kernel.org, netdev@vger.kernel.org,
	lorenzo.bianconi@redhat.com, davem@davemloft.net,
	kuba@kernel.org, ast@kernel.org, daniel@iogearbox.net,
	shayagr@amazon.com, sameehj@amazon.com, dsahern@kernel.org,
	brouer@redhat.com, jasowang@redhat.com,
	alexander.duyck@gmail.com, saeed@kernel.org,
	maciej.fijalkowski@intel.com
Subject: Re: [PATCH v8 bpf-next 00/14] mvneta: introduce XDP multi-buffer support
Date: Tue, 13 Apr 2021 17:16:00 +0200	[thread overview]
Message-ID: <FD3E6E08-DE78-4FBA-96F6-646C93E88631@redhat.com> (raw)
In-Reply-To: <606fa62f6fe99_c8b920884@john-XPS-13-9370.notmuch>



On 9 Apr 2021, at 2:56, John Fastabend wrote:

> Lorenzo Bianconi wrote:
>> This series introduces XDP multi-buffer support. The mvneta driver is
>> the first to support these new "non-linear" xdp_{buff,frame}.
>> Reviewers, please focus on how these new types of xdp_{buff,frame}
>> packets traverse the different layers and on the layout design. It is
>> on purpose that BPF-helpers are kept simple, as we don't want to
>> expose the internal layout to allow later changes.
>>
>> For now, to keep the design simple and to maintain performance, the
>> XDP BPF-prog (still) only has access to the first buffer. It is left
>> for later (another patchset) to add payload access across multiple
>> buffers. This patchset should still allow for these future extensions.
>> The goal is to lift the MTU restriction that comes with XDP, but
>> maintain the same performance as before.
>>
>> The main idea for the new multi-buffer layout is to reuse the same
>> layout used for non-linear SKBs. We introduced an "xdp_shared_info"
>> data structure at the end of the first buffer to link together
>> subsequent buffers. xdp_shared_info will alias skb_shared_info,
>> allowing us to keep most of the frags in the same cache-line (while
>> with skb_shared_info only the first fragment will be placed in the
>> first "shared_info" cache-line). Moreover, we introduced some
>> xdp_shared_info helpers aligned to the skb_frag* ones.
>> Converting an xdp_frame to an SKB and delivering it to the network
>> stack is shown in patch 07/14. When building the SKB, the
>> xdp_shared_info structure is converted into an skb_shared_info one.
>>
>> A multi-buffer bit (mb) has been introduced in the xdp_{buff,frame}
>> structure to notify the bpf/network layer whether this is an xdp
>> multi-buffer frame (mb = 1) or not (mb = 0).
>> The mb bit will be set by an xdp multi-buffer capable driver only for
>> non-linear frames, maintaining the capability to receive linear frames
>> without any extra cost, since the xdp_shared_info structure at the end
>> of the first buffer is initialized only if mb is set.
>>
>> Typical use cases for this series are:
>> - Jumbo-frames
>> - Packet header split (please see Google's use-case @ NetDevConf 0x14,
>>   [0])
>> - TSO
>>
>> A new frame_length field has been introduced in the XDP ctx in order
>> to notify the eBPF layer about the total frame size (linear + paged
>> parts).
>>
>> The bpf_xdp_adjust_tail and bpf_xdp_copy helpers have been modified to
>> take xdp multi-buff frames into account.
>
> I just read the commit messages for v8 so far. But, I'm still
> wondering how to handle use cases where we want to put extra bytes at
> the end of the packet, or really anywhere in the general case. We can
> extend the tail with the above, but is there any way to then write
> into that extra space?
>
> I think most use cases will only want headers, so we can likely make
> it a callout to a helper. Could we add something like
> xdp_get_bytes(start, end) to pull in the bytes?
>
> My dumb pseudoprogram being something like,
>
>   trailer[16] = {0,1,2,3,4,5,6,7,8,9,a,b,c,d,e,f}
>   trailer_size = 16;
>   old_end = xdp->length;
>   new_end = xdp->length + trailer_size;
>
>   err = bpf_xdp_adjust_tail(xdp, trailer_size);
>   if (err) return err;
>
>   err = xdp_get_bytes(xdp, old_end, new_end);
>   if (err) return err;
>
>   memcpy(xdp->data + old_end, trailer, trailer_size);
>
> Do you think that could work if we code up xdp_get_bytes()? Does the
> driver have enough context to adjust xdp to map to my get_bytes()
> call? I think so but we should check.
>
>

I was thinking of doing something like the below, but I have no cycles 
to work on it:

void *bpf_xdp_access_bytes(struct xdp_buff *xdp_md, u32 offset, int *len, void *buffer)
      Description
              This function returns a pointer to the packet data, which
              can be accessed linearly for a maximum of *len* bytes.

              *offset* marks the starting point in the packet for which
              you would like to get a data pointer.

              *len* points to an initialized integer which tells the
              helper how many bytes from *offset* you would like to
              access. Supplying a value of 0 or less tells the helper
              to report back how many bytes are available linearly from
              the offset (in this case the value of *buffer* is
              ignored). On return, the helper will update this value
              with the length available to access linearly at the
              address returned.

              *buffer* points to an optional buffer which MUST be the
              same size as *\*len* and will be used to copy in the data
              if it's not available linearly.

      Return
              Returns a pointer to the packet data requested,
              accessible with a maximum length of *\*len*. NULL is
              returned on failure.

              Note that if a *buffer* is supplied and the data is not
              available linearly, the content is copied. In this case a
              pointer to *buffer* is returned.


int bpf_xdp_store_bytes(struct xdp_buff *xdp_md, u32 offset, const void *from, u32 len)
      Description
              Store *len* bytes from address *from* into the packet
              associated with *xdp_md*, at *offset*. This function
              takes care of copying data to multi-buffer XDP packets.

              A call to this helper may change the underlying packet
              buffer. Therefore, at load time, all checks on pointers
              previously done by the verifier are invalidated and must
              be performed again, if the helper is used in combination
              with direct packet access.

      Return
              0 on success, or a negative error in case of failure.

>>
>> More info about the main idea behind this approach can be found
>> here [1][2].
>
> Thanks for working on this!


