From mboxrd@z Thu Jan  1 00:00:00 1970
Date: Wed, 6 Apr 2022 21:15:13 -0600
In-Reply-To: <20220407031525.2368067-1-yuzhao@google.com>
Message-Id: <20220407031525.2368067-2-yuzhao@google.com>
Mime-Version: 1.0
References: <20220407031525.2368067-1-yuzhao@google.com>
X-Mailer: git-send-email 2.35.1.1094.g7c7d902a7c-goog
Subject: [PATCH v10 01/14] mm: x86, arm64: add arch_has_hw_pte_young()
From: Yu Zhao <yuzhao@google.com>
To: Stephen Rothwell, linux-mm@kvack.org
Cc: Andi Kleen, Andrew Morton, Aneesh Kumar, Barry Song <21cnbao@gmail.com>,
    Catalin Marinas, Dave Hansen, Hillf Danton, Jens Axboe, Jesse Barnes,
    Johannes Weiner, Jonathan Corbet, Linus Torvalds, Matthew Wilcox,
    Mel Gorman, Michael Larabel, Michal Hocko, Mike Rapoport,
    Rik van Riel, Vlastimil Babka, Will Deacon, Ying Huang,
    linux-arm-kernel@lists.infradead.org, linux-doc@vger.kernel.org,
    linux-kernel@vger.kernel.org,
    page-reclaim@google.com, x86@kernel.org, Yu Zhao, Barry Song,
    Brian Geffon, Jan Alexander Steffens, Oleksandr Natalenko,
    Steven Barrett, Suleiman Souhlal, Daniel Byrne, Donald Carr,
    Holger Hoffstätte, Konstantin Kharlamov, Shuang Zhai, Sofia Trinh,
    Vaibhav Jain
Content-Type: text/plain; charset="UTF-8"
X-Mailing-List: linux-kernel@vger.kernel.org

Some architectures automatically set the accessed bit in PTEs, e.g.,
x86 and arm64 v8.2. On architectures that do not have this capability,
clearing the accessed bit in a PTE usually triggers a page fault
following the TLB miss of this PTE (to emulate the accessed bit).

Being aware of this capability can help make better decisions, e.g.,
whether to spread the work out over a period of time to reduce bursty
page faults when trying to clear the accessed bit in many PTEs.

Note that theoretically this capability can be unreliable, e.g.,
hotplugged CPUs might be different from builtin ones. Therefore it
should not be used in architecture-independent code that involves
correctness, e.g., to determine whether TLB flushes are required (in
combination with the accessed bit).

Signed-off-by: Yu Zhao <yuzhao@google.com>
Reviewed-by: Barry Song
Acked-by: Brian Geffon
Acked-by: Jan Alexander Steffens (heftig)
Acked-by: Oleksandr Natalenko
Acked-by: Steven Barrett
Acked-by: Suleiman Souhlal
Acked-by: Will Deacon
Tested-by: Daniel Byrne
Tested-by: Donald Carr
Tested-by: Holger Hoffstätte
Tested-by: Konstantin Kharlamov
Tested-by: Shuang Zhai
Tested-by: Sofia Trinh
Tested-by: Vaibhav Jain
---
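[Editorial illustration, not part of the patch: the "spread the work
out" decision described in the message above is the intended sort of
caller for the new helper. A minimal sketch of such a policy follows;
only arch_has_hw_pte_young() comes from this series, while
aging_batch_size() and the AGING_BATCH_* constants are hypothetical
names made up for illustration.]

  #include <linux/pgtable.h>

  /* Hypothetical batch sizes, not from this series. */
  #define AGING_BATCH_CHEAP	4096	/* HW-managed bit: bulk clears cost nothing later */
  #define AGING_BATCH_COSTLY	64	/* SW-emulated bit: each clear may fault later */

  /*
   * Pick how many PTEs to age per pass. On x86 and arm64 v8.2+ the CPU
   * sets the accessed bit itself, so bulk clearing causes no follow-up
   * page faults; elsewhere every cleared bit can trigger a page fault
   * on the next access, so the work is better spread over time.
   */
  static unsigned long aging_batch_size(void)
  {
  	return arch_has_hw_pte_young() ? AGING_BATCH_CHEAP : AGING_BATCH_COSTLY;
  }

[Per the caveat in the message, this is a performance hint only: a
hotplugged CPU may lack the feature that boot CPUs have, so correctness
decisions such as whether a TLB flush is needed must not depend on it.]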
 arch/arm64/include/asm/pgtable.h | 14 ++------------
 arch/x86/include/asm/pgtable.h   |  6 +++---
 include/linux/pgtable.h          | 13 +++++++++++++
 mm/memory.c                      | 14 +-------------
 4 files changed, 19 insertions(+), 28 deletions(-)

diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgtable.h
index 94e147e5456c..85d509a08ce3 100644
--- a/arch/arm64/include/asm/pgtable.h
+++ b/arch/arm64/include/asm/pgtable.h
@@ -999,23 +999,13 @@ static inline void update_mmu_cache(struct vm_area_struct *vma,
  * page after fork() + CoW for pfn mappings. We don't always have a
  * hardware-managed access flag on arm64.
  */
-static inline bool arch_faults_on_old_pte(void)
-{
-	WARN_ON(preemptible());
-
-	return !cpu_has_hw_af();
-}
-#define arch_faults_on_old_pte arch_faults_on_old_pte
+#define arch_has_hw_pte_young cpu_has_hw_af
 
 /*
  * Experimentally, it's cheap to set the access flag in hardware and we
  * benefit from prefaulting mappings as 'old' to start with.
  */
-static inline bool arch_wants_old_prefaulted_pte(void)
-{
-	return !arch_faults_on_old_pte();
-}
-#define arch_wants_old_prefaulted_pte arch_wants_old_prefaulted_pte
+#define arch_wants_old_prefaulted_pte cpu_has_hw_af
 
 static inline bool pud_sect_supported(void)
 {
diff --git a/arch/x86/include/asm/pgtable.h b/arch/x86/include/asm/pgtable.h
index 62ab07e24aef..016606a0cf20 100644
--- a/arch/x86/include/asm/pgtable.h
+++ b/arch/x86/include/asm/pgtable.h
@@ -1424,10 +1424,10 @@ static inline bool arch_has_pfn_modify_check(void)
 	return boot_cpu_has_bug(X86_BUG_L1TF);
 }
 
-#define arch_faults_on_old_pte arch_faults_on_old_pte
-static inline bool arch_faults_on_old_pte(void)
+#define arch_has_hw_pte_young arch_has_hw_pte_young
+static inline bool arch_has_hw_pte_young(void)
 {
-	return false;
+	return true;
 }
 
 #endif /* __ASSEMBLY__ */
diff --git a/include/linux/pgtable.h b/include/linux/pgtable.h
index f4f4077b97aa..79f64dcff07d 100644
--- a/include/linux/pgtable.h
+++ b/include/linux/pgtable.h
@@ -259,6 +259,19 @@ static inline int pmdp_clear_flush_young(struct vm_area_struct *vma,
 #endif /* CONFIG_TRANSPARENT_HUGEPAGE */
 #endif
 
+#ifndef arch_has_hw_pte_young
+/*
+ * Return whether the accessed bit is supported on the local CPU.
+ *
+ * This stub assumes accessing through an old PTE triggers a page fault.
+ * Architectures that automatically set the access bit should overwrite it.
+ */
+static inline bool arch_has_hw_pte_young(void)
+{
+	return false;
+}
+#endif
+
 #ifndef __HAVE_ARCH_PTEP_CLEAR
 static inline void ptep_clear(struct mm_struct *mm, unsigned long addr,
 			      pte_t *ptep)
diff --git a/mm/memory.c b/mm/memory.c
index 76e3af9639d9..44a1ec7a2cac 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -122,18 +122,6 @@ int randomize_va_space __read_mostly =
 					2;
 #endif
 
-#ifndef arch_faults_on_old_pte
-static inline bool arch_faults_on_old_pte(void)
-{
-	/*
-	 * Those arches which don't have hw access flag feature need to
-	 * implement their own helper. By default, "true" means pagefault
-	 * will be hit on old pte.
-	 */
-	return true;
-}
-#endif
-
 #ifndef arch_wants_old_prefaulted_pte
 static inline bool arch_wants_old_prefaulted_pte(void)
 {
@@ -2784,7 +2772,7 @@ static inline bool cow_user_page(struct page *dst, struct page *src,
 	 * On architectures with software "accessed" bits, we would
 	 * take a double page fault, so mark it accessed here.
 	 */
-	if (arch_faults_on_old_pte() && !pte_young(vmf->orig_pte)) {
+	if (!arch_has_hw_pte_young() && !pte_young(vmf->orig_pte)) {
 		pte_t entry;
 
 		vmf->pte = pte_offset_map_lock(mm, vmf->pmd, addr, &vmf->ptl);
-- 
2.35.1.1094.g7c7d902a7c-goog