From: Barry Song <21cnbao@gmail.com>
To: Yu Zhao <yuzhao@google.com>
Cc: "Andrew Morton" <akpm@linux-foundation.org>,
"Linus Torvalds" <torvalds@linux-foundation.org>,
"Andi Kleen" <ak@linux.intel.com>,
"Aneesh Kumar" <aneesh.kumar@linux.ibm.com>,
"Catalin Marinas" <catalin.marinas@arm.com>,
"Dave Hansen" <dave.hansen@linux.intel.com>,
"Hillf Danton" <hdanton@sina.com>, "Jens Axboe" <axboe@kernel.dk>,
"Jesse Barnes" <jsbarnes@google.com>,
"Johannes Weiner" <hannes@cmpxchg.org>,
"Jonathan Corbet" <corbet@lwn.net>,
"Matthew Wilcox" <willy@infradead.org>,
"Mel Gorman" <mgorman@suse.de>,
"Michael Larabel" <Michael@michaellarabel.com>,
"Michal Hocko" <mhocko@kernel.org>,
"Mike Rapoport" <rppt@kernel.org>,
"Rik van Riel" <riel@surriel.com>,
"Vlastimil Babka" <vbabka@suse.cz>,
"Will Deacon" <will@kernel.org>,
"Ying Huang" <ying.huang@intel.com>,
LAK <linux-arm-kernel@lists.infradead.org>,
"Linux Doc Mailing List" <linux-doc@vger.kernel.org>,
LKML <linux-kernel@vger.kernel.org>,
Linux-MM <linux-mm@kvack.org>,
page-reclaim@google.com, x86 <x86@kernel.org>,
"Brian Geffon" <bgeffon@google.com>,
"Jan Alexander Steffens" <heftig@archlinux.org>,
"Oleksandr Natalenko" <oleksandr@natalenko.name>,
"Steven Barrett" <steven@liquorix.net>,
"Suleiman Souhlal" <suleiman@google.com>,
"Daniel Byrne" <djbyrne@mtu.edu>,
"Donald Carr" <d@chaos-reins.com>,
"Holger Hoffstätte" <holger@applied-asynchrony.com>,
"Konstantin Kharlamov" <Hi-Angel@yandex.ru>,
"Shuang Zhai" <szhai2@cs.rochester.edu>,
"Sofia Trinh" <sofia.trinh@edi.works>,
"Vaibhav Jain" <vaibhav@linux.ibm.com>
Subject: Re: [PATCH v9 01/14] mm: x86, arm64: add arch_has_hw_pte_young()
Date: Fri, 11 Mar 2022 23:55:19 +1300
Message-ID: <CAGsJ_4yt_q4=pPW1M6fHN9HrV5JuTo9_9GQ0wv4-VT7tivU1+Q@mail.gmail.com>
In-Reply-To: <20220309021230.721028-2-yuzhao@google.com>
On Wed, Mar 9, 2022 at 3:47 PM Yu Zhao <yuzhao@google.com> wrote:
>
> Some architectures automatically set the accessed bit in PTEs, e.g.,
> x86 and arm64 v8.2. On architectures that do not have this capability,
> clearing the accessed bit in a PTE usually triggers a page fault
> following the TLB miss of this PTE (to emulate the accessed bit).
>
> Being aware of this capability can help make better decisions, e.g.,
> whether to spread the work out over a period of time to reduce bursty
> page faults when trying to clear the accessed bit in many PTEs.
>
> Note that theoretically this capability can be unreliable, e.g.,
> hotplugged CPUs might be different from builtin ones. Therefore it
> should not be used in architecture-independent code that involves
> correctness, e.g., to determine whether TLB flushes are required (in
> combination with the accessed bit).
>
> Signed-off-by: Yu Zhao <yuzhao@google.com>
> Acked-by: Brian Geffon <bgeffon@google.com>
> Acked-by: Jan Alexander Steffens (heftig) <heftig@archlinux.org>
> Acked-by: Oleksandr Natalenko <oleksandr@natalenko.name>
> Acked-by: Steven Barrett <steven@liquorix.net>
> Acked-by: Suleiman Souhlal <suleiman@google.com>
> Acked-by: Will Deacon <will@kernel.org>
> Tested-by: Daniel Byrne <djbyrne@mtu.edu>
> Tested-by: Donald Carr <d@chaos-reins.com>
> Tested-by: Holger Hoffstätte <holger@applied-asynchrony.com>
> Tested-by: Konstantin Kharlamov <Hi-Angel@yandex.ru>
> Tested-by: Shuang Zhai <szhai2@cs.rochester.edu>
> Tested-by: Sofia Trinh <sofia.trinh@edi.works>
> Tested-by: Vaibhav Jain <vaibhav@linux.ibm.com>
> ---
Reviewed-by: Barry Song <baohua@kernel.org>
I guess arch_has_hw_pte_young() isn't called that often in either
mm/memory.c or mm/vmscan.c; otherwise, moving to a static key might
help. Is that the case?
> arch/arm64/include/asm/pgtable.h | 14 ++------------
> arch/x86/include/asm/pgtable.h | 6 +++---
> include/linux/pgtable.h | 13 +++++++++++++
> mm/memory.c | 14 +-------------
> 4 files changed, 19 insertions(+), 28 deletions(-)
>
> diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgtable.h
> index c4ba047a82d2..990358eca359 100644
> --- a/arch/arm64/include/asm/pgtable.h
> +++ b/arch/arm64/include/asm/pgtable.h
> @@ -999,23 +999,13 @@ static inline void update_mmu_cache(struct vm_area_struct *vma,
> * page after fork() + CoW for pfn mappings. We don't always have a
> * hardware-managed access flag on arm64.
> */
> -static inline bool arch_faults_on_old_pte(void)
> -{
> - WARN_ON(preemptible());
> -
> - return !cpu_has_hw_af();
> -}
> -#define arch_faults_on_old_pte arch_faults_on_old_pte
> +#define arch_has_hw_pte_young cpu_has_hw_af
>
> /*
> * Experimentally, it's cheap to set the access flag in hardware and we
> * benefit from prefaulting mappings as 'old' to start with.
> */
> -static inline bool arch_wants_old_prefaulted_pte(void)
> -{
> - return !arch_faults_on_old_pte();
> -}
> -#define arch_wants_old_prefaulted_pte arch_wants_old_prefaulted_pte
> +#define arch_wants_old_prefaulted_pte cpu_has_hw_af
>
> static inline pgprot_t arch_filter_pgprot(pgprot_t prot)
> {
> diff --git a/arch/x86/include/asm/pgtable.h b/arch/x86/include/asm/pgtable.h
> index 8a9432fb3802..60b6ce45c2e3 100644
> --- a/arch/x86/include/asm/pgtable.h
> +++ b/arch/x86/include/asm/pgtable.h
> @@ -1423,10 +1423,10 @@ static inline bool arch_has_pfn_modify_check(void)
> return boot_cpu_has_bug(X86_BUG_L1TF);
> }
>
> -#define arch_faults_on_old_pte arch_faults_on_old_pte
> -static inline bool arch_faults_on_old_pte(void)
> +#define arch_has_hw_pte_young arch_has_hw_pte_young
> +static inline bool arch_has_hw_pte_young(void)
> {
> - return false;
> + return true;
> }
>
> #endif /* __ASSEMBLY__ */
> diff --git a/include/linux/pgtable.h b/include/linux/pgtable.h
> index f4f4077b97aa..79f64dcff07d 100644
> --- a/include/linux/pgtable.h
> +++ b/include/linux/pgtable.h
> @@ -259,6 +259,19 @@ static inline int pmdp_clear_flush_young(struct vm_area_struct *vma,
> #endif /* CONFIG_TRANSPARENT_HUGEPAGE */
> #endif
>
> +#ifndef arch_has_hw_pte_young
> +/*
> + * Return whether the accessed bit is supported on the local CPU.
> + *
> + * This stub assumes accessing through an old PTE triggers a page fault.
> + * Architectures that automatically set the access bit should overwrite it.
> + */
> +static inline bool arch_has_hw_pte_young(void)
> +{
> + return false;
> +}
> +#endif
> +
> #ifndef __HAVE_ARCH_PTEP_CLEAR
> static inline void ptep_clear(struct mm_struct *mm, unsigned long addr,
> pte_t *ptep)
> diff --git a/mm/memory.c b/mm/memory.c
> index c125c4969913..a7379196a47e 100644
> --- a/mm/memory.c
> +++ b/mm/memory.c
> @@ -122,18 +122,6 @@ int randomize_va_space __read_mostly =
> 2;
> #endif
>
> -#ifndef arch_faults_on_old_pte
> -static inline bool arch_faults_on_old_pte(void)
> -{
> - /*
> - * Those arches which don't have hw access flag feature need to
> - * implement their own helper. By default, "true" means pagefault
> - * will be hit on old pte.
> - */
> - return true;
> -}
> -#endif
> -
> #ifndef arch_wants_old_prefaulted_pte
> static inline bool arch_wants_old_prefaulted_pte(void)
> {
> @@ -2778,7 +2766,7 @@ static inline bool cow_user_page(struct page *dst, struct page *src,
> * On architectures with software "accessed" bits, we would
> * take a double page fault, so mark it accessed here.
> */
> - if (arch_faults_on_old_pte() && !pte_young(vmf->orig_pte)) {
> + if (!arch_has_hw_pte_young() && !pte_young(vmf->orig_pte)) {
> pte_t entry;
>
> vmf->pte = pte_offset_map_lock(mm, vmf->pmd, addr, &vmf->ptl);
> --
> 2.35.1.616.g0bdcbb4464-goog
>
Thanks
Barry