Date: Fri, 22 Jan 2021 11:55:35 +0000
From: Alexander Lobakin <alobakin@pm.me>
To: Eric Dumazet
Cc: Alexander Lobakin, Xuan Zhuo, "Michael S. Tsirkin", Jason Wang,
    "David S. Miller", Jakub Kicinski, Björn Töpel, Magnus Karlsson,
    Jonathan Lemon, Alexei Starovoitov, Daniel Borkmann,
    Jesper Dangaard Brouer, John Fastabend, Andrii Nakryiko,
    Martin KaFai Lau, Song Liu, Yonghong Song, KP Singh,
    virtualization@lists.linux-foundation.org, bpf@vger.kernel.org,
    netdev@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: Re: [PATCH bpf-next v3 3/3] xsk: build skb by page
Message-ID: <20210122115519.2183-1-alobakin@pm.me>
In-Reply-To: <20210122114729.1758-1-alobakin@pm.me>
References: <340f1dfa40416dd966a56e08507daba82d633088.1611236588.git.xuanzhuo@linux.alibaba.com>
 <20210122114729.1758-1-alobakin@pm.me>

From: Alexander Lobakin
Date: Fri, 22 Jan 2021 11:47:45 +0000

> From: Eric Dumazet
> Date: Thu, 21 Jan 2021 16:41:33 +0100
>
> > On 1/21/21 2:47 PM, Xuan Zhuo wrote:
> > > This patch constructs the skb based on pages to save the overhead
> > > of copying the payload.
> > >
> > > This function is implemented based on IFF_TX_SKB_NO_LINEAR. Only
> > > when the network card's priv_flags includes IFF_TX_SKB_NO_LINEAR
> > > will pages be used to construct the skb directly. If this feature
> > > is not supported, it is still necessary to copy the data to
> > > construct the skb.
> > >
> > > ---------------- Performance Testing ------------
> > >
> > > The test environment is an Aliyun ECS server.
> > > Test cmd:
> > > ```
> > > xdpsock -i eth0 -t -S -s <size>
> > > ```
> > >
> > > Test result data:
> > >
> > > size     64       512      1024     1500
> > > copy     1916747  1775988  1600203  1440054
> > > page     1974058  1953655  1945463  1904478
> > > percent  3.0%     10.0%    21.58%   32.3%
> > >
> > > Signed-off-by: Xuan Zhuo
> > > Reviewed-by: Dust Li
> > > ---
> > >  net/xdp/xsk.c | 104 ++++++++++++++++++++++++++++++++++++++++++++++++----------
> > >  1 file changed, 86 insertions(+), 18 deletions(-)
> > >
> > > diff --git a/net/xdp/xsk.c b/net/xdp/xsk.c
> > > index 4a83117..38af7f1 100644
> > > --- a/net/xdp/xsk.c
> > > +++ b/net/xdp/xsk.c
> > > @@ -430,6 +430,87 @@ static void xsk_destruct_skb(struct sk_buff *skb)
> > >  	sock_wfree(skb);
> > >  }
> > >
> > > +static struct sk_buff *xsk_build_skb_zerocopy(struct xdp_sock *xs,
> > > +					      struct xdp_desc *desc)
> > > +{
> > > +	u32 len, offset, copy, copied;
> > > +	struct sk_buff *skb;
> > > +	struct page *page;
> > > +	void *buffer;
> > > +	int err, i;
> > > +	u64 addr;
> > > +
> > > +	skb = sock_alloc_send_skb(&xs->sk, 0, 1, &err);
> > > +	if (unlikely(!skb))
> > > +		return ERR_PTR(err);
> > > +
> > > +	addr = desc->addr;
> > > +	len = desc->len;
> > > +
> > > +	buffer = xsk_buff_raw_get_data(xs->pool, addr);
> > > +	offset = offset_in_page(buffer);
> > > +	addr = buffer - xs->pool->addrs;
> > > +
> > > +	for (copied = 0, i = 0; copied < len; i++) {
> > > +		page = xs->pool->umem->pgs[addr >> PAGE_SHIFT];
> > > +
> > > +		get_page(page);
> > > +
> > > +		copy = min_t(u32, PAGE_SIZE - offset, len - copied);
> > > +
> > > +		skb_fill_page_desc(skb, i, page, offset, copy);
> > > +
> > > +		copied += copy;
> > > +		addr += copy;
> > > +		offset = 0;
> > > +	}
> > > +
> > > +	skb->len += len;
> > > +	skb->data_len += len;
> >
> > > +	skb->truesize += len;
> >
> > This is not the truesize, unfortunately.
> >
> > We need to account for the number of pages, not the number of bytes.
>
> The easiest solution is:
>
> 	skb->truesize += PAGE_SIZE * i;
>
> i would be equal to skb_shinfo(skb)->nr_frags after exiting the loop.

Oops, please ignore this. I forgot that XSK buffers are not
"one per page". We need to count the number of pages manually and
then do:

	skb->truesize += PAGE_SIZE * npages;

Right.

> > > +
> > > +	refcount_add(len, &xs->sk.sk_wmem_alloc);
> > > +
> > > +	return skb;
> > > +}
> > > +
>
> Al

Thanks,
Al
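
P.S. For illustration only, a rough sketch of the manual page counting
I mean (untested; `npages` is just a name I made up here, it is not in
the patch):

	/* The payload starts at offset_in_page(buffer) and spans len
	 * bytes, so count the umem pages it actually touches and
	 * charge whole pages, not payload bytes, to truesize.
	 */
	u32 npages = DIV_ROUND_UP(offset_in_page(buffer) + len, PAGE_SIZE);

	skb->truesize += PAGE_SIZE * npages;

This way the accounting follows pages, as Eric pointed out, instead of
the number of bytes in the descriptor.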