From: Yu Zhao <yuzhao@google.com>
Date: Tue, 19 Apr 2022 16:25:18 -0600
Subject: Re: [PATCH v10 06/14] mm: multi-gen LRU: minimal implementation
To: Barry Song <21cnbao@gmail.com>
Cc: Stephen Rothwell, Linux-MM, Andi Kleen, Andrew Morton, Aneesh Kumar,
 Catalin Marinas, Dave Hansen, Hillf Danton, Jens Axboe, Jesse Barnes,
 Johannes Weiner, Jonathan Corbet, Linus Torvalds, Matthew Wilcox,
 Mel Gorman, Michael Larabel, Michal Hocko, Mike Rapoport, Rik van Riel,
 Vlastimil Babka, Will Deacon, Ying Huang, LAK, Linux Doc Mailing List,
 LKML, Kernel Page Reclaim v2, x86, Brian Geffon, Jan Alexander Steffens,
 Oleksandr Natalenko, Steven Barrett, Suleiman Souhlal, Daniel Byrne,
 Donald Carr, Holger Hoffstätte, Konstantin Kharlamov, Shuang Zhai,
 Sofia Trinh, Vaibhav Jain
On Mon, Apr 18, 2022 at 10:36 PM Barry Song <21cnbao@gmail.com> wrote:
>
> On Tue, Apr 19, 2022 at 4:25 PM Barry Song <21cnbao@gmail.com> wrote:
> >
> > On Tue, Apr 19, 2022 at 12:54 PM Yu Zhao wrote:
> > >
> > > On Mon, Apr 18, 2022 at 3:58 AM Barry Song <21cnbao@gmail.com> wrote:
> > > >
> > > > On Thu, Apr 7, 2022 at 3:16 PM Yu Zhao wrote:
> > > > >
> > > > > To avoid confusion, the terms "promotion" and "demotion" will be
> > > > > applied to the multi-gen LRU, as a new convention; the terms
> > > > > "activation" and "deactivation" will be applied to the
> > > > > active/inactive LRU, as usual.
> > > > >
> > > > > The aging produces young generations. Given an lruvec, it
> > > > > increments max_seq when max_seq-min_seq+1 approaches MIN_NR_GENS.
> > > > > The aging promotes hot pages to the youngest generation when it
> > > > > finds them accessed through page tables; the demotion of cold pages
> > > > > happens consequently when it increments max_seq. The aging has the
> > > > > complexity O(nr_hot_pages), since it is only interested in hot
> > > > > pages. Promotion in the aging path does not require any LRU list
> > > > > operations, only the updates of the gen counter and
> > > > > lrugen->nr_pages[]; demotion, unless as the result of the increment
> > > > > of max_seq, requires LRU list operations, e.g., lru_deactivate_fn().
> > > > >
> > > > > The eviction consumes old generations. Given an lruvec, it
> > > > > increments min_seq when the lists indexed by min_seq%MAX_NR_GENS
> > > > > become empty. A feedback loop modeled after the PID controller
> > > > > monitors refaults over anon and file types and decides which type
> > > > > to evict when both types are available from the same generation.
> > > > >
> > > > > Each generation is divided into multiple tiers. Tiers represent
> > > > > different ranges of numbers of accesses through file descriptors.
> > > > > A page accessed N times through file descriptors is in tier
> > > > > order_base_2(N). Tiers do not have dedicated lrugen->lists[], only
> > > > > bits in folio->flags. In contrast to moving across generations,
> > > > > which requires the LRU lock, moving across tiers only involves
> > > > > operations on folio->flags. The feedback loop also monitors
> > > > > refaults over all tiers and decides when to protect pages in which
> > > > > tiers (N>1), using the first tier (N=0,1) as a baseline. The first
> > > > > tier contains single-use unmapped clean pages, which are most
> > > > > likely the best choices. The eviction moves a page to the next
> > > > > generation, i.e., min_seq+1, if the feedback loop decides so. This
> > > > > approach has the following advantages:
> > > > > 1. It removes the cost of activation in the buffered access path by
> > > > >    inferring whether pages accessed multiple times through file
> > > > >    descriptors are statistically hot and thus worth protecting in
> > > > >    the eviction path.
> > > > > 2. It takes pages accessed through page tables into account and
> > > > >    avoids overprotecting pages accessed multiple times through file
> > > > >    descriptors. (Pages accessed through page tables are in the
> > > > >    first tier, since N=0.)
> > > > > 3. More tiers provide better protection for pages accessed more
> > > > >    than twice through file descriptors, when under heavy buffered
> > > > >    I/O workloads.
> > > >
> > > > Hi Yu,
> > > > As I told you before, I tried to change the current LRU (not MGLRU)
> > > > by only promoting unmapped file pages to the head of the inactive
> > > > list, rather than the active list, on their second access:
> > > > https://lore.kernel.org/lkml/CAGsJ_4y=TkCGoWWtWSAptW4RDFUEBeYXwfwu=fUFvV4Sa4VA4A@mail.gmail.com/
> > > > I have already seen some very good results from the decrease in CPU
> > > > consumption of kswapd and direct reclaim in the testing.
> > >
> > > Glad to hear. I suspected you'd see some good results with that change :)
> > >
> > > > in mglru, it seems "twice" isn't a concern at all; an unmapped file
> > > > page accessed twice is not much different from one accessed once, as
> > > > you only begin to increase refs from the third access:
> > >
> > > refs are *additional* accesses:
> > > PG_referenced: N=1
> > > PG_referenced+PG_workingset: N=2
> > > PG_referenced+PG_workingset+refs: N=3,4,5
> > >
> > > When N=2, order_base_2(N)=1. So pages accessed twice are in the second
> > > tier. Therefore they are "different".
> > >
> > > More details [1]:
> > >
> > > +/*
> > > + * Each generation is divided into multiple tiers. Tiers represent different
> > > + * ranges of numbers of accesses through file descriptors. A page accessed N
> > > + * times through file descriptors is in tier order_base_2(N). A page in the
> > > + * first tier (N=0,1) is marked by PG_referenced unless it was faulted in
> > > + * through page tables or read ahead. A page in any other tier (N>1) is marked
> > > + * by PG_referenced and PG_workingset.
> > > + *
> > > + * In contrast to moving across generations which requires the LRU lock, moving
> > > + * across tiers only requires operations on folio->flags and therefore has a
> > > + * negligible cost in the buffered access path. In the eviction path,
> > > + * comparisons of refaulted/(evicted+protected) from the first tier and the
> > > + * rest infer whether pages accessed multiple times through file descriptors
> > > + * are statistically hot and thus worth protecting.
> > > + *
> > > + * MAX_NR_TIERS is set to 4 so that the multi-gen LRU can support twice the
> > > + * number of categories of the active/inactive LRU when keeping track of
> > > + * accesses through file descriptors. It requires MAX_NR_TIERS-2 additional
> > > + * bits in folio->flags.
> > > + */
> > > +#define MAX_NR_TIERS 4U
> > >
> > > [1] https://lore.kernel.org/linux-mm/20220407031525.2368067-7-yuzhao@google.com/
> > >
> > > > +static void folio_inc_refs(struct folio *folio)
> > > > +{
> > > > +	unsigned long refs;
> > > > +	unsigned long old_flags, new_flags;
> > > > +
> > > > +	if (folio_test_unevictable(folio))
> > > > +		return;
> > > > +
> > > > +	/* see the comment on MAX_NR_TIERS */
> > > > +	do {
> > > > +		new_flags = old_flags = READ_ONCE(folio->flags);
> > > > +
> > > > +		if (!(new_flags & BIT(PG_referenced))) {
> > > > +			new_flags |= BIT(PG_referenced);
> > > > +			continue;
> > > > +		}
> > > > +
> > > > +		if (!(new_flags & BIT(PG_workingset))) {
> > > > +			new_flags |= BIT(PG_workingset);
> > > > +			continue;
> > > > +		}
> > > > +
> > > > +		refs = new_flags & LRU_REFS_MASK;
> > > > +		refs = min(refs + BIT(LRU_REFS_PGOFF), LRU_REFS_MASK);
> > > > +
> > > > +		new_flags &= ~LRU_REFS_MASK;
> > > > +		new_flags |= refs;
> > > > +	} while (new_flags != old_flags &&
> > > > +		 cmpxchg(&folio->flags, old_flags, new_flags) != old_flags);
> > > > +}
> > > >
> > > > So my question is: what makes you so confident that "twice" doesn't
> > > > need any special treatment, while the vanilla kernel promotes this
> > > > kind of page to the head of the active list instead? I am asking this
> > > > because I am considering reclaiming unmapped file pages which are
> > > > only accessed twice when they get to the tail of the inactive list.
> > >
> > > Per above, pages accessed twice are in their own tier. Hope this
> > > clarifies it.
> >
> > Yep, I found the trick here; the "+1" is the magic behind the code, haha.
> >
> > +static int folio_lru_tier(struct folio *folio)
> > +{
> > +	int refs;
> > +	unsigned long flags = READ_ONCE(folio->flags);
> > +
> > +	refs = (flags & LRU_REFS_FLAGS) == LRU_REFS_FLAGS ?
> > +	       ((flags & LRU_REFS_MASK) >> LRU_REFS_PGOFF) + 1 : 0;
> > +
> > +	return lru_tier_from_refs(refs);
> > +}
> > +
> >
> > TBH, this might need some comments; otherwise, it is easy to
> > misunderstand that protection only begins from the 3rd access :-)
>
> Anyway, it would be much more straightforward to have the below if we
> could also increase refs for the 1st and 2nd accesses in folio_inc_refs():

It would be, if there were abundant spare bits in page->flags. On some
machines there aren't, so we have to reuse PG_referenced and
PG_workingset.
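To make the scheme concrete, here is a stand-alone userspace sketch; the
bit positions and the counter width below are made up for illustration
(the real ones come from the kernel's page->flags layout), and only the
logic mirrors folio_inc_refs() and folio_lru_tier() quoted above:

#include <stdio.h>

/* Hypothetical bit layout, for illustration only */
#define PG_REFERENCED	(1UL << 0)
#define PG_WORKINGSET	(1UL << 1)
#define REFS_SHIFT	2
#define REFS_WIDTH	2				/* counts N=3..6 */
#define REFS_MASK	(((1UL << REFS_WIDTH) - 1) << REFS_SHIFT)

/* Model of folio_inc_refs(): the 1st access sets PG_referenced, the 2nd
 * sets PG_workingset, and only the 3rd and later bump the refs counter,
 * so N=1 and N=2 cost no extra flag bits. */
static void inc_refs(unsigned long *flags)
{
	unsigned long refs;

	if (!(*flags & PG_REFERENCED)) {
		*flags |= PG_REFERENCED;
		return;
	}
	if (!(*flags & PG_WORKINGSET)) {
		*flags |= PG_WORKINGSET;
		return;
	}
	refs = *flags & REFS_MASK;
	if (refs < REFS_MASK)			/* saturate at the max */
		refs += 1UL << REFS_SHIFT;
	*flags = (*flags & ~REFS_MASK) | refs;
}

/* Model of folio_lru_tier(): the counter stores N-2, so the "+1" yields
 * N-1 *additional* accesses, and order_base_2(refs + 1) then equals
 * order_base_2(N), matching the commit message. */
static int lru_tier(unsigned long flags)
{
	unsigned long both = PG_REFERENCED | PG_WORKINGSET;
	int refs = (flags & both) == both ?
		   (int)((flags & REFS_MASK) >> REFS_SHIFT) + 1 : 0;
	int tier = 0;

	while ((1 << tier) < refs + 1)	/* order_base_2(refs + 1) */
		tier++;
	return tier;
}

int main(void)
{
	unsigned long flags = 0;
	int n;

	for (n = 1; n <= 6; n++) {
		inc_refs(&flags);
		printf("N=%d -> tier %d\n", n, lru_tier(flags));
	}
	return 0;	/* prints tiers 0 1 2 2 3 3, i.e., order_base_2(N) */
}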
> +static int folio_lru_tier(struct folio *folio)
> +{
> +	int refs;
> +	unsigned long flags = READ_ONCE(folio->flags);
> +
> +	refs = (flags & LRU_REFS_MASK) >> LRU_REFS_PGOFF;
> +
> +	return lru_tier_from_refs(refs);
> +}
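Going back to the type selection in the commit message above ("a
feedback loop modeled after the PID controller"): roughly, when both
anon and file pages are available from the oldest generation, the
eviction compares their refault ratios and evicts from the type that
refaults less. Below is a userspace sketch of just that proportional
comparison; the struct and function names are made up here, and the
real code also smooths its counters across generations and folds in
swappiness:

#include <stdbool.h>
#include <stdio.h>

/* Hypothetical per-type refault statistics */
struct ctrl_pos {
	unsigned long refaulted;	/* refaults seen for this type */
	unsigned long total;		/* evicted + protected pages */
	unsigned long gain;		/* weight of this type */
};

/* true if 'pv' has a gain-weighted refault ratio no higher than the
 * set point 'sp', i.e., 'pv' is the cheaper type to evict from */
static bool positive_ctrl_err(const struct ctrl_pos *sp,
			      const struct ctrl_pos *pv)
{
	/* cross-multiplied form of:
	 * pv->refaulted / (pv->total * pv->gain) <=
	 * sp->refaulted / (sp->total * sp->gain) */
	return pv->refaulted * sp->total * sp->gain <=
	       sp->refaulted * pv->total * pv->gain;
}

int main(void)
{
	struct ctrl_pos anon = { .refaulted = 40, .total = 1000, .gain = 1 };
	struct ctrl_pos file = { .refaulted = 5,  .total = 1000, .gain = 1 };

	/* file pages refault far less here, so evict file first */
	printf("evict %s\n",
	       positive_ctrl_err(&anon, &file) ? "file" : "anon");
	return 0;
}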