BPF Archive on lore.kernel.org
From: Jesper Dangaard Brouer <brouer@redhat.com>
To: "Björn Töpel" <bjorn.topel@intel.com>
Cc: "Björn Töpel" <bjorn.topel@gmail.com>,
	ast@kernel.org, daniel@iogearbox.net, davem@davemloft.net,
	kuba@kernel.org, hawk@kernel.org, john.fastabend@gmail.com,
	netdev@vger.kernel.org, bpf@vger.kernel.org,
	magnus.karlsson@intel.com, jonathan.lemon@gmail.com,
	jeffrey.t.kirsher@intel.com, maximmi@mellanox.com,
	maciej.fijalkowski@intel.com, brouer@redhat.com
Subject: Re: [PATCH bpf-next v4 01/15] xsk: fix xsk_umem_xdp_frame_sz()
Date: Thu, 21 May 2020 06:29:47 +0200
Message-ID: <20200521062947.71d9cddd@carbon> (raw)
In-Reply-To: <17701885-c91d-5bfc-b96d-29263a0d08ab@intel.com>

On Wed, 20 May 2020 16:34:05 +0200
Björn Töpel <bjorn.topel@intel.com> wrote:

> On 2020-05-20 15:18, Jesper Dangaard Brouer wrote:
> > On Wed, 20 May 2020 11:47:28 +0200
> > Björn Töpel <bjorn.topel@gmail.com> wrote:
> >   
> >> From: Björn Töpel <bjorn.topel@intel.com>
> >>
> >> When calculating the "data_hard_end" for an XDP buffer coming from
> >> AF_XDP zero-copy mode, the return value of xsk_umem_xdp_frame_sz() is
> >> added to "data_hard_start".
> >>
> >> Currently, the chunk size of the UMEM is returned by
> >> xsk_umem_xdp_frame_sz(). This is not correct if the fixed UMEM
> >> headroom is non-zero. Fix this by returning the chunk_size without the
> >> UMEM headroom.
> >>
> >> Fixes: 2a637c5b1aaf ("xdp: For Intel AF_XDP drivers add XDP frame_sz")
> >> Signed-off-by: Björn Töpel <bjorn.topel@intel.com>
> >> ---
> >>   include/net/xdp_sock.h | 2 +-
> >>   1 file changed, 1 insertion(+), 1 deletion(-)
> >>
> >> diff --git a/include/net/xdp_sock.h b/include/net/xdp_sock.h
> >> index abd72de25fa4..6b1137ce1692 100644
> >> --- a/include/net/xdp_sock.h
> >> +++ b/include/net/xdp_sock.h
> >> @@ -239,7 +239,7 @@ static inline u64 xsk_umem_adjust_offset(struct xdp_umem *umem, u64 address,
> >>   
> >>   static inline u32 xsk_umem_xdp_frame_sz(struct xdp_umem *umem)
> >>   {
> >> -	return umem->chunk_size_nohr + umem->headroom;
> >> +	return umem->chunk_size_nohr;  
> > 
> > Hmm, is this correct?
> > 
> > As you write, "xdp_data_hard_end" is calculated as an offset from the
> > xdp->data_hard_start pointer based on frame_sz.  Will your
> > xdp->data_hard_start + frame_sz point to the packet end?
> >  
> 
> Yes, I believe this is correct.
> 
> Say that a user uses a chunk size of 2k and a umem headroom of, say,
> 64. This means that the kernel should leave (at least) the first 64B of
> the chunk untouched.
> 
> umem->headroom | XDP_PACKET_HEADROOM | packet |          |
>                 ^                     ^        ^      ^   ^
>                 a                     b        c      d   e
> 
> a: data_hard_start
> b: data
> c: data_end
> d: data_hard_end, (e - 320)
> e: hardlimit of chunk, a + umem->chunk_size_nohr
> 
> Prior to this fix, the umem->headroom was *included* in frame_sz.

Thanks for the nice ASCII art description. I can now see that you are
right. We should add this kind of documentation, perhaps as a comment
in the code?
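
Something along the lines of the below is what I have in mind, just an
untested sketch of such a comment next to the helper (reusing your
diagram), not something taken from this patch:

  /*
   * Layout of an AF_XDP zero-copy chunk:
   *
   *  |<-------------------- chunk size -------------------->|
   *  | umem->headroom | XDP_PACKET_HEADROOM | packet |      |
   *  ^                ^                     ^        ^  ^   ^
   *  chunk start      a                     b        c  d   e
   *
   *  a: xdp->data_hard_start
   *  b: xdp->data
   *  c: xdp->data_end
   *  d: xdp_data_hard_end(xdp), i.e. e minus
   *     SKB_DATA_ALIGN(sizeof(struct skb_shared_info))
   *  e: chunk hard limit, a + umem->chunk_size_nohr
   *
   * Thus xsk_umem_xdp_frame_sz() must not include umem->headroom,
   * otherwise data_hard_start + frame_sz points past the chunk.
   */
  static inline u32 xsk_umem_xdp_frame_sz(struct xdp_umem *umem)
  {
          return umem->chunk_size_nohr;
  }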


> > #define xdp_data_hard_end(xdp)                          \
> >          ((xdp)->data_hard_start + (xdp)->frame_sz -     \
> >           SKB_DATA_ALIGN(sizeof(struct skb_shared_info)))
> > 
> > Note the macro reserves the last 320 bytes (for skb_shared_info), but
> > AF_XDP zero-copy mode will never create an SKB that uses this
> > area.  Thus, in principle we could allow XDP-progs to extend/grow the
> > tail into this area, but I don't think there is any use-case for this,
> > as it's much easier to access packet-data in the userspace application.
> > (So it might not be worth the complexity to give AF_XDP
> > bpf_xdp_adjust_tail access to this area by e.g. "lying" via adding 320
> > bytes to frame_sz.)
> >   
> 
> I agree, and in the picture (well...) above that would be "d". IOW
> data_hard_end is 320 bytes "off" from the real end.

Yes, we agree.
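
To put numbers on your 2k/64B example (my own back-of-the-envelope
arithmetic, not from the patch, assuming the usual 320 bytes for
skb_shared_info):

  frame_sz        = chunk_size_nohr = 2048 - 64 = 1984
  data_hard_start = chunk start + 64
  data_hard_end   = data_hard_start + 1984 - 320
  chunk end       = data_hard_start + 1984

So an XDP program gets 1664 usable bytes between data_hard_start and
data_hard_end, and the last 320 bytes of the chunk simply stay unused
in zero-copy mode.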

-- 
Best regards,
  Jesper Dangaard Brouer
  MSc.CS, Principal Kernel Engineer at Red Hat
  LinkedIn: http://www.linkedin.com/in/brouer


Thread overview: 27+ messages
2020-05-20  9:47 [PATCH bpf-next v4 00/15] Introduce AF_XDP buffer allocation API Björn Töpel
2020-05-20  9:47 ` [PATCH bpf-next v4 01/15] xsk: fix xsk_umem_xdp_frame_sz() Björn Töpel
2020-05-20 13:18   ` Jesper Dangaard Brouer
2020-05-20 14:34     ` Björn Töpel
2020-05-21  4:29       ` Jesper Dangaard Brouer [this message]
2020-05-21 18:06         ` Björn Töpel
2020-05-20  9:47 ` [PATCH bpf-next v4 02/15] xsk: move xskmap.c to net/xdp/ Björn Töpel
2020-05-20  9:47 ` [PATCH bpf-next v4 03/15] xsk: move driver interface to xdp_sock_drv.h Björn Töpel
2020-05-20 16:57   ` Jakub Kicinski
2020-05-20 17:18     ` Björn Töpel
2020-05-20  9:47 ` [PATCH bpf-next v4 04/15] xsk: move defines only used by AF_XDP internals to xsk.h Björn Töpel
2020-05-20  9:47 ` [PATCH bpf-next v4 05/15] xsk: introduce AF_XDP buffer allocation API Björn Töpel
2020-05-20  9:47 ` [PATCH bpf-next v4 06/15] i40e: refactor rx_bi accesses Björn Töpel
2020-05-20  9:47 ` [PATCH bpf-next v4 07/15] i40e: separate kernel allocated rx_bi rings from AF_XDP rings Björn Töpel
2020-05-20 17:02   ` Jakub Kicinski
2020-05-20 17:19     ` Björn Töpel
2020-05-20  9:47 ` [PATCH bpf-next v4 08/15] i40e, xsk: migrate to new MEM_TYPE_XSK_BUFF_POOL Björn Töpel
2020-05-20  9:47 ` [PATCH bpf-next v4 09/15] ice, " Björn Töpel
2020-05-20 17:03   ` Jakub Kicinski
2020-05-20 17:20     ` Björn Töpel
2020-05-20 18:49     ` Björn Töpel
2020-05-20  9:47 ` [PATCH bpf-next v4 10/15] ixgbe, " Björn Töpel
2020-05-20  9:47 ` [PATCH bpf-next v4 11/15] mlx5, " Björn Töpel
2020-05-20  9:47 ` [PATCH bpf-next v4 12/15] xsk: remove MEM_TYPE_ZERO_COPY and corresponding code Björn Töpel
2020-05-20  9:47 ` [PATCH bpf-next v4 13/15] xdp: simplify xdp_return_{frame,frame_rx_napi,buff} Björn Töpel
2020-05-20  9:47 ` [PATCH bpf-next v4 14/15] xsk: explicitly inline functions and move definitions Björn Töpel
2020-05-20  9:47 ` [PATCH bpf-next v4 15/15] MAINTAINERS, xsk: update AF_XDP section after moves/adds Björn Töpel
