From: Eric Dumazet
Date: Tue, 12 Jan 2021 13:23:16 +0100
Subject: Re: [PATCH net-next 0/5] skbuff: introduce skbuff_heads bulking and reusing
To: Alexander Lobakin
Cc: Edward Cree, "David S. Miller", Jakub Kicinski, Jonathan Lemon,
 Willem de Bruijn, Miaohe Lin, Steffen Klassert, Guillaume Nault,
 Yadu Kishore, Al Viro, netdev, LKML
References: <20210111182655.12159-1-alobakin@pm.me> <20210112110802.3914-1-alobakin@pm.me>
In-Reply-To: <20210112110802.3914-1-alobakin@pm.me>
List-ID: linux-kernel@vger.kernel.org

On Tue, Jan 12, 2021 at 12:08 PM Alexander Lobakin wrote:
>
> From: Edward Cree
> Date: Tue, 12 Jan 2021 09:54:04 +0000
>
> > Without wishing to weigh in on whether this caching is a good idea...
>
> Well, we already have a cache to bulk-flush "consumed" skbs
> (although kmem_cache_free() is generally lighter than
> kmem_cache_alloc()), and a page frag cache to allocate skb->head
> that also bulks the operations, since it holds a (compound) page of
> size min(SZ_32K, PAGE_SIZE).
> If they didn't give any visible boosts, I don't think they would
> have made it into mainline.
>
> > Wouldn't it be simpler, rather than having two separate "alloc"
> > and "flush" caches, to have a single larger cache, such that
> > whenever it becomes full we bulk-flush the top half, and when it's
> > empty we bulk-alloc the bottom half? That should mean fewer
> > branches, fewer instructions etc. than having to decide which
> > cache to act upon every time.
>
> I thought about a unified cache, but couldn't decide whether to
> flush or to allocate heads, and how many to process. Your suggestion
> answers these questions and generally seems great. I'll try that
> one, thanks!

The thing is: kmalloc() is supposed to have batches already, and nice
per-cpu caches. This looks like an mm issue; are we sure we want to
work around it?

I would like a full analysis of why SLAB/SLUB does not work well for
your test workload. More details, more numbers... before we accept
yet another 'networking optimization' adding more code to the 'fast'
path.

More code means more latencies when all the code needs to be brought
into cpu caches.