Subject: Re: [RFC PATCH 03/30] Lazy percpu counters
References: <20220830214919.53220-1-surenb@google.com>
 <20220830214919.53220-4-surenb@google.com>
 <20220831100249.f2o27ri7ho4ma3pe@suse.de>
In-Reply-To: <20220831100249.f2o27ri7ho4ma3pe@suse.de>
From: Suren Baghdasaryan
Date: Wed, 31 Aug 2022 08:37:14 -0700
To: Mel Gorman
Cc: Andrew Morton, Kent Overstreet, Michal Hocko, Vlastimil Babka,
 Johannes Weiner, Roman Gushchin, Davidlohr Bueso, Matthew Wilcox,
 "Liam R. Howlett", David Vernet, Peter Zijlstra, Juri Lelli,
 Laurent Dufour, Peter Xu, David Hildenbrand, Jens Axboe,
 mcgrof@kernel.org, masahiroy@kernel.org, nathan@kernel.org,
 changbin.du@intel.com, ytcoode@gmail.com, Vincent Guittot,
 Dietmar Eggemann, Steven Rostedt, Benjamin Segall,
 Daniel Bristot de Oliveira, Valentin Schneider, Christopher Lameter,
 Pekka Enberg, Joonsoo Kim, 42.hyeyoo@gmail.com, Alexander Potapenko,
 Marco Elver, dvyukov@google.com, Shakeel Butt, Muchun Song,
 arnd@arndb.de, jbaron@akamai.com, David Rientjes, Minchan Kim,
 Kalesh Singh, kernel-team, linux-mm, iommu@lists.linux.dev,
 kasan-dev@googlegroups.com, io-uring@vger.kernel.org,
 linux-arch@vger.kernel.org, xen-devel@lists.xenproject.org,
 linux-bcache@vger.kernel.org, linux-modules@vger.kernel.org, LKML

On Wed, Aug 31, 2022 at 3:02 AM Mel Gorman wrote:
>
> On Tue, Aug 30, 2022 at 02:48:52PM -0700, Suren Baghdasaryan wrote:
> > From: Kent Overstreet
> >
> > This patch adds lib/lazy-percpu-counter.c, which implements counters
> > that start out as atomics, but lazily switch to percpu mode if the
> > update rate crosses some threshold (arbitrarily set at 256 per second).
> >
> > Signed-off-by: Kent Overstreet
>
> Why not use percpu_counter? It has a per-cpu counter that is synchronised
> when a batch threshold (default 32) is exceeded and can explicitly sync
> the counters when required, assuming the synchronised count is only needed
> when reading debugfs.

The intent is to use atomic counters for places that are not updated very
often. This saves the memory the counters would otherwise require.
Originally I had a config option to choose which counter type to use, but
with lazy counters we sacrifice memory for performance only when needed,
while keeping the other counters small.

>
> --
> Mel Gorman
> SUSE Labs