Date: Fri, 22 Jan 2021 11:47:45 +0000
From: Alexander Lobakin
To: Eric Dumazet
Cc: Alexander Lobakin, Xuan Zhuo, "Michael S. Tsirkin", Jason Wang,
    "David S. Miller", Jakub Kicinski, Björn Töpel, Magnus Karlsson,
    Jonathan Lemon, Alexei Starovoitov, Daniel Borkmann,
    Jesper Dangaard Brouer, John Fastabend, Andrii Nakryiko,
    Martin KaFai Lau, Song Liu, Yonghong Song, KP Singh,
    virtualization@lists.linux-foundation.org, bpf@vger.kernel.org,
    netdev@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: Re: [PATCH bpf-next v3 3/3] xsk: build skb by page
Message-ID: <20210122114729.1758-1-alobakin@pm.me>
References: <340f1dfa40416dd966a56e08507daba82d633088.1611236588.git.xuanzhuo@linux.alibaba.com>

From: Eric Dumazet
Date: Thu, 21 Jan 2021 16:41:33 +0100

> On 1/21/21 2:47 PM, Xuan Zhuo wrote:
> > This patch constructs the skb directly from pages to save memory copy
> > overhead.
> >
> > This is implemented on top of IFF_TX_SKB_NO_LINEAR. Only when the
> > network card's priv_flags has IFF_TX_SKB_NO_LINEAR set will pages be
> > used to build the skb directly. If this feature is not supported, it
> > is still necessary to copy data to construct the skb.
> >
> > ---------------- Performance Testing ------------
> >
> > The test environment is an Aliyun ECS server.
> > Test cmd:
> > ```
> > xdpsock -i eth0 -t -S -s
> > ```
> >
> > Test result data:
> >
> > size    64        512       1024      1500
> > copy    1916747   1775988   1600203   1440054
> > page    1974058   1953655   1945463   1904478
> > percent 3.0%      10.0%     21.58%    32.3%
> >
> > Signed-off-by: Xuan Zhuo
> > Reviewed-by: Dust Li
> > ---
> >  net/xdp/xsk.c | 104 ++++++++++++++++++++++++++++++++++++++++++++++++----------
> >  1 file changed, 86 insertions(+), 18 deletions(-)
> >
> > diff --git a/net/xdp/xsk.c b/net/xdp/xsk.c
> > index 4a83117..38af7f1 100644
> > --- a/net/xdp/xsk.c
> > +++ b/net/xdp/xsk.c
> > @@ -430,6 +430,87 @@ static void xsk_destruct_skb(struct sk_buff *skb)
> >  	sock_wfree(skb);
> >  }
> >
> > +static struct sk_buff *xsk_build_skb_zerocopy(struct xdp_sock *xs,
> > +					      struct xdp_desc *desc)
> > +{
> > +	u32 len, offset, copy, copied;
> > +	struct sk_buff *skb;
> > +	struct page *page;
> > +	void *buffer;
> > +	int err, i;
> > +	u64 addr;
> > +
> > +	skb = sock_alloc_send_skb(&xs->sk, 0, 1, &err);
> > +	if (unlikely(!skb))
> > +		return ERR_PTR(err);
> > +
> > +	addr = desc->addr;
> > +	len = desc->len;
> > +
> > +	buffer = xsk_buff_raw_get_data(xs->pool, addr);
> > +	offset = offset_in_page(buffer);
> > +	addr = buffer - xs->pool->addrs;
> > +
> > +	for (copied = 0, i = 0; copied < len; i++) {
> > +		page = xs->pool->umem->pgs[addr >> PAGE_SHIFT];
> > +
> > +		get_page(page);
> > +
> > +		copy = min_t(u32, PAGE_SIZE - offset, len - copied);
> > +
> > +		skb_fill_page_desc(skb, i, page, offset, copy);
> > +
> > +		copied += copy;
> > +		addr += copy;
> > +		offset = 0;
> > +	}
> > +
> > +	skb->len += len;
> > +	skb->data_len += len;
>
> > +	skb->truesize += len;
>
> This is not the truesize, unfortunately.
>
> We need to account for the number of pages, not number of bytes.

The easiest solution is:

	skb->truesize += PAGE_SIZE * i;

i would be equal to skb_shinfo(skb)->nr_frags after exiting the loop.
A sketch of the hunk with that change folded in is at the end of this
mail.

> > +
> > +	refcount_add(len, &xs->sk.sk_wmem_alloc);
> > +
> > +	return skb;
> > +}
> > +

Al
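For illustration only, here is how the frag loop and the length
accounting could look with the page-based truesize folded in. This is
an untested sketch; everything except the truesize line and the added
comments is taken verbatim from the patch above:

	for (copied = 0, i = 0; copied < len; i++) {
		/* Look up the umem page backing this part of the descriptor */
		page = xs->pool->umem->pgs[addr >> PAGE_SHIFT];

		get_page(page);

		/* Cap the chunk at both the page boundary and the
		 * remaining descriptor length
		 */
		copy = min_t(u32, PAGE_SIZE - offset, len - copied);

		skb_fill_page_desc(skb, i, page, offset, copy);

		copied += copy;
		addr += copy;
		offset = 0;
	}

	skb->len += len;
	skb->data_len += len;
	/* Each frag pins a whole umem page no matter how much of it is
	 * actually used, so account full pages rather than payload
	 * bytes. At this point i == skb_shinfo(skb)->nr_frags.
	 */
	skb->truesize += PAGE_SIZE * i;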