Date: Tue, 12 Jan 2021 10:56:06 +0000
From: Alexander Lobakin
To: Eric Dumazet
Cc: Alexander Lobakin, "David S. Miller", Jakub Kicinski, Edward Cree,
    Jonathan Lemon, Willem de Bruijn, Miaohe Lin, Steffen Klassert,
    Guillaume Nault, Yadu Kishore, Al Viro, netdev, LKML
Subject: Re: [PATCH net-next 0/5] skbuff: introduce skbuff_heads bulking and reusing
Message-ID: <20210112105529.3592-1-alobakin@pm.me>
References: <20210111182655.12159-1-alobakin@pm.me>

From: Eric Dumazet
Date: Tue, 12 Jan 2021 09:20:39 +0100

> On Mon, Jan 11, 2021 at 7:27 PM Alexander Lobakin wrote:
>>
>> Inspired by cpu_map_kthread_run() and _kfree_skb_defer() logics.
>>
>> Currently, all sorts of skb allocation always allocate skbuff_heads
>> one by one via kmem_cache_alloc().
>> On the other hand, we have the percpu napi_alloc_cache to store
>> skbuff_heads queued up for freeing and flush them in bulk.
>>
>> We can use this struct to cache and bulk not only freeing, but also
>> allocation of new skbuff_heads, as well as to reuse cached-to-free
>> heads instead of allocating new ones.
>> As accessing napi_alloc_cache implies NAPI softirq context, do this
>> only for __napi_alloc_skb() and its derivatives (napi_alloc_skb()
>> and napi_get_frags()). The rough number of their call sites is 69,
>> which is quite a number.
>>
>> iperf3 showed a nice bump from 910 to 935 Mbits while performing
>> UDP VLAN NAT on a 1.2 GHz MIPS board. The boost is likely to be
>> way bigger on more powerful hosts and NICs with tens of Mpps.
>
> What is the latency cost of these bulk allocations, and for TCP traffic
> on which GRO is the norm?
>
> Adding caches is increasing cache footprint when the cache is populated.
>
> I wonder if your iperf3 numbers are simply wrong because of lack of
> GRO in this UDP VLAN NAT case.

Ah, I should've mentioned that I use UDP GRO Fraglists, so these numbers
are for GRO. My board has been giving the full 1 Gbps (link speed) for TCP
for more than a year, so I can't really rely on TCP passthrough to measure
the gains or regressions.

> We are adding a lot of additional code, thus icache pressure, that
> iperf3 tests can not really measure.

I'm not sure if the MIPS arch can provide enough debug information to
measure icache pressure, but I'll try to check this.

> Most linux devices simply handle one packet at a time (one packet per
> interrupt)