From: Yu Zhao
Date: Tue, 26 Apr 2022 16:39:07 -0600
Subject: Re: [PATCH v10 05/14] mm: multi-gen LRU: groundwork
To: Andrew Morton
Cc: Stephen Rothwell, Linux-MM, Andi Kleen, Aneesh Kumar, Barry Song <21cnbao@gmail.com>,
 Catalin Marinas, Dave Hansen, Hillf Danton, Jens Axboe, Jesse Barnes, Johannes Weiner,
 Jonathan Corbet, Linus Torvalds, Matthew Wilcox, Mel Gorman, Michael Larabel, Michal Hocko,
 Mike Rapoport, Rik van Riel, Vlastimil Babka, Will Deacon, Ying Huang, Linux ARM,
 "open list:DOCUMENTATION", linux-kernel, Kernel Page Reclaim v2, "the arch/x86 maintainers",
 Brian Geffon, Jan Alexander Steffens, Oleksandr Natalenko, Steven Barrett, Suleiman Souhlal,
 Daniel Byrne, Donald Carr, Holger Hoffstätte, Konstantin Kharlamov, Shuang Zhai, Sofia Trinh,
 Vaibhav Jain
In-Reply-To: <20220411191615.a34959bdcc25ef3f9c16a7ce@linux-foundation.org>
References: <20220407031525.2368067-1-yuzhao@google.com> <20220407031525.2368067-6-yuzhao@google.com> <20220411191615.a34959bdcc25ef3f9c16a7ce@linux-foundation.org>

On Mon, Apr 11, 2022 at 8:16 PM Andrew Morton wrote:
>
> On Wed, 6 Apr 2022 21:15:17 -0600 Yu Zhao wrote:
>
> > Evictable pages are divided into multiple generations for each lruvec.
> > The youngest generation number is stored in lrugen->max_seq for both
> > anon and file types as they are aged on an equal footing. The oldest
> > generation numbers are stored in lrugen->min_seq[] separately for anon
> > and file types as clean file pages can be evicted regardless of swap
> > constraints. These three variables are monotonically increasing.
> >
> > ...
> >
> > +static inline bool lru_gen_del_folio(struct lruvec *lruvec, struct folio *folio, bool reclaiming)
>
> There's a lot of function inlining here. Fortunately the compiler will
> ignore it all, because some of it looks wrong. Please review (and
> remeasure!). If inlining is really justified, use __always_inline, and
> document the reasons for doing so.

I totally expect modern compilers to make better decisions than I do.
And personally, I'd never use __always_inline; instead, I'd strongly
recommend FDO/LTO.

> > +{
> > +        int gen;
> > +        unsigned long old_flags, new_flags;
> > +
> > +        do {
> > +                new_flags = old_flags = READ_ONCE(folio->flags);
> > +                if (!(new_flags & LRU_GEN_MASK))
> > +                        return false;
> > +
> > +                VM_BUG_ON_FOLIO(folio_test_active(folio), folio);
> > +                VM_BUG_ON_FOLIO(folio_test_unevictable(folio), folio);
> > +
> > +                gen = ((new_flags & LRU_GEN_MASK) >> LRU_GEN_PGOFF) - 1;
> > +
> > +                new_flags &= ~LRU_GEN_MASK;
> > +                /* for shrink_page_list() */
> > +                if (reclaiming)
> > +                        new_flags &= ~(BIT(PG_referenced) | BIT(PG_reclaim));
> > +                else if (lru_gen_is_active(lruvec, gen))
> > +                        new_flags |= BIT(PG_active);
> > +        } while (cmpxchg(&folio->flags, old_flags, new_flags) != old_flags);
>
> Clearly the cmpxchg loop is handling races against a concurrent
> updater. But it's unclear who that updater is, what the dynamics are,
> and why the designer didn't use, say, spin_lock(). The way to
> clarify such things is with code comments!

Right. set_mask_bits() should suffice here.
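Roughly along these lines (an untested sketch for illustration only: it keeps
the names from the quoted hunk, assumes the caller holds the LRU lock so the
gen bits cannot be cleared underneath us, and relies on set_mask_bits(ptr,
mask, bits) atomically doing *ptr = (*ptr & ~mask) | bits):

static inline bool lru_gen_del_folio(struct lruvec *lruvec, struct folio *folio,
                                     bool reclaiming)
{
        int gen;
        unsigned long mask = LRU_GEN_MASK;
        unsigned long bits = 0;
        unsigned long flags = READ_ONCE(folio->flags);

        /* not on any lrugen->lists[]: the stored gen counter is 0 */
        if (!(flags & LRU_GEN_MASK))
                return false;

        VM_BUG_ON_FOLIO(folio_test_active(folio), folio);
        VM_BUG_ON_FOLIO(folio_test_unevictable(folio), folio);

        /* the stored value is gen+1; stable here because the LRU lock is held */
        gen = ((flags & LRU_GEN_MASK) >> LRU_GEN_PGOFF) - 1;

        /* for shrink_page_list() */
        if (reclaiming)
                mask |= BIT(PG_referenced) | BIT(PG_reclaim);
        else if (lru_gen_is_active(lruvec, gen))
                bits = BIT(PG_active);

        /* atomically clear the masked bits and set the new ones */
        set_mask_bits(&folio->flags, mask, bits);

        /* list removal etc. from the original function would follow, using gen */
        return true;
}

The conditionals just feed the mask/bits pair, so the open-coded retry loop
disappears.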
> > +#endif /* !__GENERATING_BOUNDS_H */
> > +
> > +/*
> > + * Evictable pages are divided into multiple generations. The youngest and the
> > + * oldest generation numbers, max_seq and min_seq, are monotonically increasing.
> > + * They form a sliding window of a variable size [MIN_NR_GENS, MAX_NR_GENS]. An
> > + * offset within MAX_NR_GENS, gen, indexes the LRU list of the corresponding
>
> The "within MAX_NR_GENS, gen," text here is unclear?

Will update: "i.e., gen".

> > + * generation. The gen counter in folio->flags stores gen+1 while a page is on
> > + * one of lrugen->lists[]. Otherwise it stores 0.
> > + *
> > + * A page is added to the youngest generation on faulting. The aging needs to
> > + * check the accessed bit at least twice before handing this page over to the
> > + * eviction. The first check takes care of the accessed bit set on the initial
> > + * fault; the second check makes sure this page hasn't been used since then.
> > + * This process, AKA second chance, requires a minimum of two generations,
> > + * hence MIN_NR_GENS. And to maintain ABI compatibility with the active/inactive
>
> Where is the ABI compatibility issue? Is it in some way in which the
> legacy LRU is presented to userspace?

Will update: yes, active/inactive LRU sizes in /proc/vmstat.

> > + * LRU, these two generations are considered active; the rest of generations, if
> > + * they exist, are considered inactive. See lru_gen_is_active(). PG_active is
> > + * always cleared while a page is on one of lrugen->lists[] so that the aging
> > + * needs not to worry about it. And it's set again when a page considered active
> > + * is isolated for non-reclaiming purposes, e.g., migration. See
> > + * lru_gen_add_folio() and lru_gen_del_folio().
> > + *
> > + * MAX_NR_GENS is set to 4 so that the multi-gen LRU can support twice of the
>
> "twice the number of"?

Will update.

> > + * categories of the active/inactive LRU when keeping track of accesses through
> > + * page tables. It requires order_base_2(MAX_NR_GENS+1) bits in folio->flags.
> > + */
>
> Helpful comment, overall.
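For concreteness, the arithmetic the comment describes boils down to something
like the sketch below. Illustrative only: lru_gen_from_seq() is a helper name
assumed here, and the lruvec->lrugen.max_seq field path follows the
lrugen->max_seq naming in the commit message; the rest is taken directly from
the quoted comment.

/* map a monotonically increasing generation number to an lrugen->lists[] index */
static inline int lru_gen_from_seq(unsigned long seq)
{
        return seq % MAX_NR_GENS;
}

/* folio->flags stores gen+1; 0 means the folio is not on any lrugen->lists[] */
static inline int folio_lru_gen(struct folio *folio)
{
        unsigned long flags = READ_ONCE(folio->flags);

        return ((flags & LRU_GEN_MASK) >> LRU_GEN_PGOFF) - 1;
}

/* the two youngest generations count as "active" for the legacy active/inactive ABI */
static inline bool lru_gen_is_active(struct lruvec *lruvec, int gen)
{
        unsigned long max_seq = READ_ONCE(lruvec->lrugen.max_seq);

        return gen == lru_gen_from_seq(max_seq) ||
               gen == lru_gen_from_seq(max_seq - 1);
}

That is, PG_active never needs to be maintained while a folio is on
lrugen->lists[]; whether the folio counts as active falls out of how close its
generation is to max_seq.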
> > ...
> >
> > --- a/mm/Kconfig
> > +++ b/mm/Kconfig
> > @@ -909,6 +909,14 @@ config ANON_VMA_NAME
> >            area from being merged with adjacent virtual memory areas due to the
> >            difference in their name.
> >
> > +config LRU_GEN
> > +        bool "Multi-Gen LRU"
> > +        depends on MMU
> > +        # the following options can use up the spare bits in page flags
> > +        depends on !MAXSMP && (64BIT || !SPARSEMEM || SPARSEMEM_VMEMMAP)
> > +        help
> > +          A high performance LRU implementation to overcommit memory.
> > +
> >  source "mm/damon/Kconfig"
>
> This is a problem. I had to jump through hoops just to be able to
> compile-test this. Turns out I had to figure out how to disable
> MAXSMP.
>
> Can we please figure out a way to ensure that more testers are at least
> compile testing this? Allnoconfig, defconfig, allyesconfig, allmodconfig.
>
> Also, I suggest that we actually make MGLRU the default while in linux-next.

The !MAXSMP is to work around [1], which I haven't had the time to fix.
That BUILD_BUG_ON() shouldn't assert sizeof(struct page) == 64, since the
true size depends on WANT_PAGE_VIRTUAL as well as
LAST_CPUPID_NOT_IN_PAGE_FLAGS.

My plan is here [2].

[1] https://lore.kernel.org/r/20190905154603.10349-4-aneesh.kumar@linux.ibm.com/
[2] https://lore.kernel.org/r/Ygl1Gf+ATBuI%2Fm2q@google.com/