From: Ryan Roberts <ryan.roberts@arm.com>
To: Catalin Marinas <catalin.marinas@arm.com>,
	Will Deacon <will@kernel.org>, Ard Biesheuvel <ardb@kernel.org>,
	Marc Zyngier <maz@kernel.org>, James Morse <james.morse@arm.com>,
	Andrey Ryabinin <ryabinin.a.a@gmail.com>,
	Andrew Morton <akpm@linux-foundation.org>,
	Matthew Wilcox <willy@infradead.org>,
	Mark Rutland <mark.rutland@arm.com>,
	David Hildenbrand <david@redhat.com>,
	Kefeng Wang <wangkefeng.wang@huawei.com>,
	John Hubbard <jhubbard@nvidia.com>, Zi Yan <ziy@nvidia.com>,
	Barry Song <21cnbao@gmail.com>,
	Alistair Popple <apopple@nvidia.com>,
	Yang Shi <shy828301@gmail.com>,
	Nicholas Piggin <npiggin@gmail.com>,
	Christophe Leroy <christophe.leroy@csgroup.eu>,
	"Aneesh Kumar K.V" <aneesh.kumar@kernel.org>,
	"Naveen N. Rao" <naveen.n.rao@linux.ibm.com>,
	Thomas Gleixner <tglx@linutronix.de>,
	Ingo Molnar <mingo@redhat.com>, Borislav Petkov <bp@alien8.de>,
	Dave Hansen <dave.hansen@linux.intel.com>,
	"H. Peter Anvin" <hpa@zytor.com>
Cc: Ryan Roberts <ryan.roberts@arm.com>,
	linux-arm-kernel@lists.infradead.org, x86@kernel.org,
	linuxppc-dev@lists.ozlabs.org, linux-mm@kvack.org,
	linux-kernel@vger.kernel.org
Subject: [PATCH v5 25/25] arm64/mm: Automatically fold contpte mappings
Date: Fri,  2 Feb 2024 08:07:56 +0000
Message-ID: <20240202080756.1453939-26-ryan.roberts@arm.com>
In-Reply-To: <20240202080756.1453939-1-ryan.roberts@arm.com>

There are situations where a change to a single PTE could cause the
contpte block in which it resides to become foldable (i.e. could be
repainted with the contiguous bit). Such situations arise, for example,
when user space temporarily changes protections for individual pages
via mprotect, as can be the case for certain garbage collectors.

We would like to detect when such a PTE change occurs. However, this
can be expensive due to the amount of checking required. Therefore,
only perform the checks when an individual PTE is modified via mprotect
(ptep_modify_prot_commit() -> set_pte_at() -> set_ptes(nr=1)) and only
when we are setting the final PTE in a contpte-aligned block.
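
As a minimal sketch (a hypothetical helper, not code from this patch):
with 4K pages CONT_PTES is 16, so a contpte block covers 64K and the
check only fires for the last of the 16 entries, both virtually and
physically. contpte_try_fold() below implements the same test using
PAGE_SHIFT and pte_pfn():

	/* Sketch only: assumes 4K pages (PAGE_SHIFT == 12, CONT_PTES == 16). */
	static bool last_entry_in_contpte_block(unsigned long addr, unsigned long pfn)
	{
		const unsigned long contmask = 16 - 1;

		/* e.g. addr == 0x4000f000: ((addr >> 12) & 0xf) == 0xf */
		return ((addr >> 12) & contmask) == contmask &&
		       (pfn & contmask) == contmask;
	}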

Signed-off-by: Ryan Roberts <ryan.roberts@arm.com>
---
 arch/arm64/include/asm/pgtable.h | 26 +++++++++++++
 arch/arm64/mm/contpte.c          | 64 ++++++++++++++++++++++++++++++++
 2 files changed, 90 insertions(+)

diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgtable.h
index cdc310880a3b..d3357fe4eb89 100644
--- a/arch/arm64/include/asm/pgtable.h
+++ b/arch/arm64/include/asm/pgtable.h
@@ -1192,6 +1192,8 @@ void vmemmap_update_pte(unsigned long addr, pte_t *ptep, pte_t pte);
  * where it is possible and makes sense to do so. The PTE_CONT bit is considered
  * a private implementation detail of the public ptep API (see below).
  */
+extern void __contpte_try_fold(struct mm_struct *mm, unsigned long addr,
+				pte_t *ptep, pte_t pte);
 extern void __contpte_try_unfold(struct mm_struct *mm, unsigned long addr,
 				pte_t *ptep, pte_t pte);
 extern pte_t contpte_ptep_get(pte_t *ptep, pte_t orig_pte);
@@ -1213,6 +1215,29 @@ extern int contpte_ptep_set_access_flags(struct vm_area_struct *vma,
 				unsigned long addr, pte_t *ptep,
 				pte_t entry, int dirty);
 
+static __always_inline void contpte_try_fold(struct mm_struct *mm,
+				unsigned long addr, pte_t *ptep, pte_t pte)
+{
+	/*
+	 * Only bother trying if both the virtual and physical addresses are
+	 * aligned and correspond to the last entry in a contig range. The core
+	 * code mostly modifies ranges from low to high, so this is likely the
+	 * last modification in the contig range and a good time to fold.
+	 * We can't fold special mappings, because there is no associated folio.
+	 */
+
+	const unsigned long contmask = CONT_PTES - 1;
+	bool valign = ((addr >> PAGE_SHIFT) & contmask) == contmask;
+
+	if (unlikely(valign)) {
+		bool palign = (pte_pfn(pte) & contmask) == contmask;
+
+		if (unlikely(palign &&
+		    pte_valid(pte) && !pte_cont(pte) && !pte_special(pte)))
+			__contpte_try_fold(mm, addr, ptep, pte);
+	}
+}
+
 static __always_inline void contpte_try_unfold(struct mm_struct *mm,
 				unsigned long addr, pte_t *ptep, pte_t pte)
 {
@@ -1287,6 +1312,7 @@ static __always_inline void set_ptes(struct mm_struct *mm, unsigned long addr,
 	if (likely(nr == 1)) {
 		contpte_try_unfold(mm, addr, ptep, __ptep_get(ptep));
 		__set_ptes(mm, addr, ptep, pte, 1);
+		contpte_try_fold(mm, addr, ptep, pte);
 	} else {
 		contpte_set_ptes(mm, addr, ptep, pte, nr);
 	}
diff --git a/arch/arm64/mm/contpte.c b/arch/arm64/mm/contpte.c
index 80346108450b..2c7dafd0552a 100644
--- a/arch/arm64/mm/contpte.c
+++ b/arch/arm64/mm/contpte.c
@@ -67,6 +67,70 @@ static void contpte_convert(struct mm_struct *mm, unsigned long addr,
 	__set_ptes(mm, start_addr, start_ptep, pte, CONT_PTES);
 }
 
+void __contpte_try_fold(struct mm_struct *mm, unsigned long addr,
+			pte_t *ptep, pte_t pte)
+{
+	/*
+	 * We have already checked that the virtual and physical addresses are
+	 * correctly aligned for a contpte mapping in contpte_try_fold() so the
+	 * remaining checks are to ensure that the contpte range is fully
+	 * covered by a single folio, and ensure that all the ptes are valid
+	 * with contiguous PFNs and matching prots. We ignore the state of the
+	 * access and dirty bits for the purpose of deciding if it's a contiguous
+	 * range; the folding process will generate a single contpte entry which
+	 * has a single access and dirty bit. Those 2 bits are the logical OR of
+	 * their respective bits in the constituent pte entries. In order to
+	 * ensure the contpte range is covered by a single folio, we must
+	 * recover the folio from the pfn, but special mappings don't have a
+	 * folio backing them. Fortunately contpte_try_fold() already checked
+	 * that the pte is not special - we never try to fold special mappings.
+	 * Note we can't use vm_normal_page() for this since we don't have the
+	 * vma.
+	 */
+
+	unsigned long folio_saddr, folio_eaddr;
+	unsigned long cont_saddr, cont_eaddr;
+	pte_t expected_pte, subpte;
+	struct folio *folio;
+	struct page *page;
+	unsigned long pfn;
+	pte_t *orig_ptep;
+	pgprot_t prot;
+
+	int i;
+
+	if (!mm_is_user(mm))
+		return;
+
+	page = pte_page(pte);
+	folio = page_folio(page);
+	folio_saddr = addr - (page - &folio->page) * PAGE_SIZE;
+	folio_eaddr = folio_saddr + folio_nr_pages(folio) * PAGE_SIZE;
+	cont_saddr = ALIGN_DOWN(addr, CONT_PTE_SIZE);
+	cont_eaddr = cont_saddr + CONT_PTE_SIZE;
+
+	if (folio_saddr > cont_saddr || folio_eaddr < cont_eaddr)
+		return;
+
+	pfn = pte_pfn(pte) - ((addr - cont_saddr) >> PAGE_SHIFT);
+	prot = pte_pgprot(pte_mkold(pte_mkclean(pte)));
+	expected_pte = pfn_pte(pfn, prot);
+	orig_ptep = ptep;
+	ptep = contpte_align_down(ptep);
+
+	for (i = 0; i < CONT_PTES; i++) {
+		subpte = pte_mkold(pte_mkclean(__ptep_get(ptep)));
+		if (!pte_same(subpte, expected_pte))
+			return;
+		expected_pte = pte_advance_pfn(expected_pte, 1);
+		ptep++;
+	}
+
+	pte = pte_mkcont(pte);
+	contpte_convert(mm, addr, orig_ptep, pte);
+}
+EXPORT_SYMBOL(__contpte_try_fold);
+
 void __contpte_try_unfold(struct mm_struct *mm, unsigned long addr,
 			pte_t *ptep, pte_t pte)
 {
-- 
2.25.1




Thread overview: 80+ messages
2024-02-02  8:07 [PATCH v5 00/25] Transparent Contiguous PTEs for User Mappings Ryan Roberts
2024-02-02  8:07 ` [PATCH v5 01/25] mm: Clarify the spec for set_ptes() Ryan Roberts
2024-02-12 12:03   ` David Hildenbrand
2024-02-02  8:07 ` [PATCH v5 02/25] mm: thp: Batch-collapse PMD with set_ptes() Ryan Roberts
2024-02-02  8:07 ` [PATCH v5 03/25] mm: Make pte_next_pfn() a wrapper around pte_advance_pfn() Ryan Roberts
2024-02-12 12:14   ` David Hildenbrand
2024-02-12 14:10     ` Ryan Roberts
2024-02-12 14:29       ` David Hildenbrand
2024-02-12 21:34         ` Ryan Roberts
2024-02-13  9:54           ` David Hildenbrand
2024-02-02  8:07 ` [PATCH v5 04/25] arm/mm: Convert pte_next_pfn() to pte_advance_pfn() Ryan Roberts
2024-02-02  8:07 ` [PATCH v5 05/25] arm64/mm: " Ryan Roberts
2024-02-02  8:07 ` [PATCH v5 06/25] powerpc/mm: " Ryan Roberts
2024-02-02  8:07 ` [PATCH v5 07/25] x86/mm: " Ryan Roberts
2024-02-02  8:07 ` [PATCH v5 08/25] mm: Remove pte_next_pfn() and replace with pte_advance_pfn() Ryan Roberts
2024-02-02  8:07 ` [PATCH v5 09/25] arm64/mm: set_pte(): New layer to manage contig bit Ryan Roberts
2024-02-02  8:07 ` [PATCH v5 10/25] arm64/mm: set_ptes()/set_pte_at(): " Ryan Roberts
2024-02-02  8:07 ` [PATCH v5 11/25] arm64/mm: pte_clear(): " Ryan Roberts
2024-02-02  8:07 ` [PATCH v5 12/25] arm64/mm: ptep_get_and_clear(): " Ryan Roberts
2024-02-02  8:07 ` [PATCH v5 13/25] arm64/mm: ptep_test_and_clear_young(): " Ryan Roberts
2024-02-02  8:07 ` [PATCH v5 14/25] arm64/mm: ptep_clear_flush_young(): " Ryan Roberts
2024-02-02  8:07 ` [PATCH v5 15/25] arm64/mm: ptep_set_wrprotect(): " Ryan Roberts
2024-02-02  8:07 ` [PATCH v5 16/25] arm64/mm: ptep_set_access_flags(): " Ryan Roberts
2024-02-02  8:07 ` [PATCH v5 17/25] arm64/mm: ptep_get(): " Ryan Roberts
2024-02-02  8:07 ` [PATCH v5 18/25] arm64/mm: Split __flush_tlb_range() to elide trailing DSB Ryan Roberts
2024-02-12 12:44   ` David Hildenbrand
2024-02-12 13:05     ` Ryan Roberts
2024-02-12 13:15       ` David Hildenbrand
2024-02-12 13:27         ` Ryan Roberts
2024-02-02  8:07 ` [PATCH v5 19/25] arm64/mm: Wire up PTE_CONT for user mappings Ryan Roberts
2024-02-12 12:00   ` Mark Rutland
2024-02-12 12:59     ` Ryan Roberts
2024-02-12 13:54       ` David Hildenbrand
2024-02-12 14:45         ` Ryan Roberts
2024-02-12 15:26           ` David Hildenbrand
2024-02-12 15:34             ` Ryan Roberts
2024-02-12 16:24               ` David Hildenbrand
2024-02-13 15:29                 ` Ryan Roberts
2024-02-12 15:30       ` Ryan Roberts
2024-02-12 20:38         ` Ryan Roberts
2024-02-13 10:01           ` David Hildenbrand
2024-02-13 12:06           ` Ryan Roberts
2024-02-13 12:19             ` David Hildenbrand
2024-02-13 13:06               ` Ryan Roberts
2024-02-13 13:13                 ` David Hildenbrand
2024-02-13 13:20                   ` Ryan Roberts
2024-02-13 13:22                     ` David Hildenbrand
2024-02-13 13:24                       ` Ryan Roberts
2024-02-13 13:33                     ` Ard Biesheuvel
2024-02-13 13:45                       ` David Hildenbrand
2024-02-13 14:02                         ` Ryan Roberts
2024-02-13 14:05                           ` David Hildenbrand
2024-02-13 14:08                             ` Ard Biesheuvel
2024-02-13 14:21                               ` Ryan Roberts
2024-02-13 12:02       ` Mark Rutland
2024-02-13 13:03         ` Ryan Roberts
2024-02-02  8:07 ` [PATCH v5 20/25] arm64/mm: Implement new wrprotect_ptes() batch API Ryan Roberts
2024-02-13 16:31   ` Mark Rutland
2024-02-13 16:36     ` Ryan Roberts
2024-02-02  8:07 ` [PATCH v5 21/25] arm64/mm: Implement new [get_and_]clear_full_ptes() batch APIs Ryan Roberts
2024-02-13 16:43   ` Mark Rutland
2024-02-13 16:48     ` Ryan Roberts
2024-02-13 16:53       ` Mark Rutland
2024-02-02  8:07 ` [PATCH v5 22/25] mm: Add pte_batch_hint() to reduce scanning in folio_pte_batch() Ryan Roberts
2024-02-12 13:43   ` David Hildenbrand
2024-02-12 15:00     ` Ryan Roberts
2024-02-12 15:47     ` Ryan Roberts
2024-02-12 16:27       ` David Hildenbrand
2024-02-02  8:07 ` [PATCH v5 23/25] arm64/mm: Implement pte_batch_hint() Ryan Roberts
2024-02-12 13:46   ` David Hildenbrand
2024-02-13 16:54   ` Mark Rutland
2024-02-02  8:07 ` [PATCH v5 24/25] arm64/mm: __always_inline to improve fork() perf Ryan Roberts
2024-02-13 16:55   ` Mark Rutland
2024-02-02  8:07 ` Ryan Roberts [this message]
2024-02-13 17:44   ` [PATCH v5 25/25] arm64/mm: Automatically fold contpte mappings Mark Rutland
2024-02-13 18:05     ` Ryan Roberts
2024-02-08 17:34 ` [PATCH v5 00/25] Transparent Contiguous PTEs for User Mappings Mark Rutland
2024-02-09  8:54   ` Ryan Roberts
2024-02-09 22:16     ` David Hildenbrand
2024-02-09 23:52       ` Ryan Roberts
