Date: Mon, 15 Mar 2021 20:24:56 -0600
From: Yu Zhao
To: Dave Hansen
Cc: linux-mm@kvack.org, Alex Shi, Andrew Morton, Dave Hansen,
 Hillf Danton, Johannes Weiner, Joonsoo Kim, Matthew Wilcox,
 Mel Gorman, Michal Hocko, Roman Gushchin, Vlastimil Babka,
 Wei Yang, Yang Shi, Ying Huang, linux-kernel@vger.kernel.org,
 page-reclaim@google.com
Subject: Re: [PATCH v1 00/14] Multigenerational LRU
In-Reply-To: <5f621dd6-4bbd-dbf7-8fa1-d63d9a5bfc16@intel.com>
References: <20210313075747.3781593-1-yuzhao@google.com>
 <5f621dd6-4bbd-dbf7-8fa1-d63d9a5bfc16@intel.com>

On Mon, Mar 15, 2021 at 11:00:06AM -0700, Dave Hansen wrote:
> On 3/12/21 11:57 PM, Yu Zhao wrote:
> > Background
> > ==========
> > DRAM is a major factor in total cost of ownership, and improving
> > memory overcommit brings a high return on investment. Over the past
> > decade of research and experimentation in memory overcommit, we
> > observed a distinct trend across millions of servers and clients:
> > the size of page cache has been decreasing because of the growing
> > popularity of cloud storage. Nowadays anon pages account for more
> > than 90% of our memory consumption and page cache contains mostly
> > executable pages.
>
> This makes a compelling argument that current reclaim is not well
> optimized for anonymous memory with low rates of sharing. Basically,
> anonymous rmap is very powerful, but we're not getting enough bang
> for our buck out of it.
>
> I also understand that the workloads you reference are
> anonymous-heavy and that page cache isn't a *major* component.
>
> But, what happens to page-cache-heavy workloads? Does this just
> effectively force databases that want to use shmem over to hugetlbfs?

No, they should benefit too. In terms of page reclaim, shmem pages are
basically considered anon: they are on the anon LRU, and dirty shmem
pages can only be swapped (we can safely assume clean shmem pages are
virtually nonexistent), in contrast to file pages that have backing
storage and need to be written back.

I should have phrased it better: our accounting is based on what the
kernel provides, i.e., the anon/file (LRU) sizes you listed below.

> How bad does this scanning get in the worst case if there's a lot of
> sharing?

Actually the improvement is larger when there is more sharing: the
higher the map_count, the larger the improvement. Let's assume we have
a shmem page mapped by two processes. To reclaim this page, we need to
make sure neither of the two PTEs (one in each set of page tables) has
the accessed bit set.

The current page reclaim uses the rmap, i.e., rmap_walk_file(). It
first looks up the two VMAs (from the two processes mapping this shmem
file) in the interval tree of this shmem file, and then from each VMA
it goes through PGD/PUD/PMD to reach the PTE. The page can't be
reclaimed if either of the PTEs has the accessed bit set, so the cost
of this scanning is more than proportional to the number of accesses
when there is a lot of sharing.
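To put a rough cost model on that, here is a small userspace-style
sketch of what the per-page check boils down to (all structs and names
below are made up purely for illustration; the real code is the rmap
walk in mm/rmap.c):

#include <stdbool.h>

struct mm;				/* opaque; one per process */

struct vma {
	struct mm *mm;
	unsigned long vm_start;
	struct vma *next_sharer;	/* next VMA mapping the same page;
					 * the kernel finds these via the
					 * file's VMA interval tree */
};

struct page {
	unsigned long index;		/* page offset into the shmem file */
	struct vma *mappings;		/* head of the list of sharers */
};

/* Stub standing in for the PGD/PUD/PMD/PTE walk plus accessed-bit test. */
static bool pte_accessed(struct mm *mm, unsigned long addr)
{
	(void)mm;
	(void)addr;
	return false;
}

static bool page_was_referenced(struct page *page)
{
	struct vma *v;

	/* One full page-table walk per process mapping the page: the
	 * cost grows with map_count, i.e., with the amount of sharing,
	 * even for sharers that never touched the page. */
	for (v = page->mappings; v; v = v->next_sharer) {
		unsigned long addr = v->vm_start + (page->index << 12);

		if (pte_accessed(v->mm, addr))
			return true;	/* referenced: not reclaimable now */
	}
	return false;
}

(The real walk computes the address with vma_address(), which also
accounts for vma->vm_pgoff; the << 12 above simply assumes 4KB pages.)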
Why does this series make it better? We track the usage of page
tables. Specifically, we work alongside switch_mm(): if one of the
processes above hasn't been scheduled since the last scan, we don't
need to scan its page tables at all. So the cost is roughly
proportional to the number of accesses, regardless of how many
processes share the page. And instead of checking pages one by one, we
scan page tables in large batches. However, page tables can be very
sparse -- this is not a problem for the rmap, because it knows exactly
where each PTE is (via vma_address()); we only know ranges (via
vma->vm_start/vm_end). This is where the accessed bit on non-leaf PMDs
can be of help (see the rough sketch at the end of this mail).

But I guess you are wondering what the downsides are. Well, we haven't
seen any (yet). We do have page-cache (non-shmem) heavy workloads, but
not at a scale large enough to make any statistically meaningful
observations. We are very interested in working with anybody who has
page-cache (non-shmem) heavy workloads and is willing to try out this
series.

> I'm kinda surprised by this, but my 16GB laptop has a lot more page
> cache than I would have guessed:
>
> > Active(anon):    4065088 kB
> > Inactive(anon):  3981928 kB
> > Active(file):    2260580 kB
> > Inactive(file):  3738096 kB
> > AnonPages:       6624776 kB
> > Mapped:           692036 kB
> > Shmem:            776276 kB
>
> Most of it isn't mapped, but it's far from all being used for text.

We have categorized two groups:

1) Average users who haven't experienced memory pressure since their
   systems booted. The boot process fills up the page cache with
   one-off file pages, and they remain there until the user
   experiences memory pressure. This can be confirmed by looking at
   those counters on a freshly rebooted and idle system. My guess is
   this is the case for your laptop.

2) Engineering users who store git repos and compile locally. They
   complained about their browsers being janky because, with the
   current page reclaim, anon memory got swapped even though their
   systems had a lot of stale file pages in the page cache. They are
   what we consider part of the page-cache (non-shmem) heavy group.
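To make the switch_mm() point above a bit more concrete, here is a
rough sketch of the bookkeeping (again, names and layout are made up
for illustration only; this is not the actual patch code):

#include <stdbool.h>

/* Illustrative model only. Each mm records when it last ran; the
 * scanner records when it last finished. An mm that hasn't run since
 * the last scan cannot have set any accessed bits, so its page tables
 * don't need to be walked at all. */
struct mm_state {
	unsigned long long last_switched_in;	/* bumped from switch_mm() */
};

static unsigned long long last_scan_done;	/* bumped after each scan */

static bool mm_needs_scanning(const struct mm_state *mm)
{
	return mm->last_switched_in > last_scan_done;
}

/* During the walk itself, on architectures that set the accessed bit
 * at non-leaf levels, a clear accessed bit on a non-leaf PMD means no
 * address was translated through it since the bit was cleared, so the
 * whole range of PTEs underneath can be skipped -- that is what makes
 * walking sparse page tables affordable. */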