* + mm-swap_pte_batch-add-an-output-argument-to-reture-if-all-swap-entries-are-exclusive.patch added to mm-unstable branch
@ 2024-04-10 21:08 Andrew Morton
From: Andrew Morton @ 2024-04-10 21:08 UTC (permalink / raw)
  To: mm-commits, ziy, yuzhao, yosryahmed, ying.huang, xiang, willy,
	ryan.roberts, kasong, hughd, hannes, hanchuanhua, david, chrisl,
	baolin.wang, v-songbaohua, akpm


The patch titled
     Subject: mm: swap_pte_batch: add an output argument to return if all swap entries are exclusive
has been added to the -mm mm-unstable branch.  Its filename is
     mm-swap_pte_batch-add-an-output-argument-to-reture-if-all-swap-entries-are-exclusive.patch

This patch will shortly appear at
     https://git.kernel.org/pub/scm/linux/kernel/git/akpm/25-new.git/tree/patches/mm-swap_pte_batch-add-an-output-argument-to-reture-if-all-swap-entries-are-exclusive.patch

This patch will later appear in the mm-unstable branch at
    git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm

Before you just go and hit "reply", please:
   a) Consider who else should be cc'ed
   b) Prefer to cc a suitable mailing list as well
   c) Ideally: find the original patch on the mailing list and do a
      reply-to-all to that, adding suitable additional cc's

*** Remember to use Documentation/process/submit-checklist.rst when testing your code ***

The -mm tree is included into linux-next via the mm-everything
branch at git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm
and is updated there every 2-3 working days

------------------------------------------------------
From: Barry Song <v-songbaohua@oppo.com>
Subject: mm: swap_pte_batch: add an output argument to return if all swap entries are exclusive
Date: Tue, 9 Apr 2024 20:26:29 +1200

Add a boolean argument named any_shared.  If any of the swap entries are
non-exclusive, set any_shared to true.  The function do_swap_page() can
then utilize this information to determine whether the entire large folio
can be reused.
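
As an illustration only: this patch's two call sites pass NULL, and the
do_swap_page() use mentioned above comes separately, but a minimal sketch
of how a swap-in path could consume the new argument might look as below.
The surrounding variables (vmf, nr_pages, exclusive) are assumed purely
for the example.

	bool any_shared = false;	/* callee ORs into this, so start false */
	int nr;

	/*
	 * Batch over the contiguous swap PTEs backing the large folio;
	 * *any_shared ends up true if at least one entry in the batch is
	 * not marked swap-exclusive.
	 */
	nr = swap_pte_batch(vmf->pte, nr_pages, vmf->orig_pte, &any_shared);

	/*
	 * Only when every entry in the batch is exclusive can the whole
	 * large folio be reused by this process.
	 */
	if (nr == nr_pages && !any_shared)
		exclusive = true;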

Link: https://lkml.kernel.org/r/20240409082631.187483-4-21cnbao@gmail.com
Signed-off-by: Barry Song <v-songbaohua@oppo.com>
Cc: Baolin Wang <baolin.wang@linux.alibaba.com>
Cc: Chris Li <chrisl@kernel.org>
Cc: Chuanhua Han <hanchuanhua@oppo.com>
Cc: David Hildenbrand <david@redhat.com>
Cc: Gao Xiang <xiang@kernel.org>
Cc: "Huang, Ying" <ying.huang@intel.com>
Cc: Hugh Dickins <hughd@google.com>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Kairui Song <kasong@tencent.com>
Cc: Matthew Wilcox (Oracle) <willy@infradead.org>
Cc: Ryan Roberts <ryan.roberts@arm.com>
Cc: Yosry Ahmed <yosryahmed@google.com>
Cc: Yu Zhao <yuzhao@google.com>
Cc: Zi Yan <ziy@nvidia.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
---

 mm/internal.h |    9 ++++++++-
 mm/madvise.c  |    2 +-
 mm/memory.c   |    2 +-
 3 files changed, 10 insertions(+), 3 deletions(-)

--- a/mm/internal.h~mm-swap_pte_batch-add-an-output-argument-to-reture-if-all-swap-entries-are-exclusive
+++ a/mm/internal.h
@@ -241,7 +241,8 @@ static inline pte_t pte_next_swp_offset(
  *
  * Return: the number of table entries in the batch.
  */
-static inline int swap_pte_batch(pte_t *start_ptep, int max_nr, pte_t pte)
+static inline int swap_pte_batch(pte_t *start_ptep, int max_nr, pte_t pte,
+				bool *any_shared)
 {
 	pte_t expected_pte = pte_next_swp_offset(pte);
 	const pte_t *end_ptep = start_ptep + max_nr;
@@ -251,12 +252,18 @@ static inline int swap_pte_batch(pte_t *
 	VM_WARN_ON(!is_swap_pte(pte));
 	VM_WARN_ON(non_swap_entry(pte_to_swp_entry(pte)));
 
+	if (any_shared)
+		*any_shared |= !pte_swp_exclusive(pte);
+
 	while (ptep < end_ptep) {
 		pte = ptep_get(ptep);
 
 		if (!pte_same(pte, expected_pte))
 			break;
 
+		if (any_shared)
+			*any_shared |= !pte_swp_exclusive(pte);
+
 		expected_pte = pte_next_swp_offset(expected_pte);
 		ptep++;
 	}
--- a/mm/madvise.c~mm-swap_pte_batch-add-an-output-argument-to-reture-if-all-swap-entries-are-exclusive
+++ a/mm/madvise.c
@@ -671,7 +671,7 @@ static int madvise_free_pte_range(pmd_t
 			entry = pte_to_swp_entry(ptent);
 			if (!non_swap_entry(entry)) {
 				max_nr = (end - addr) / PAGE_SIZE;
-				nr = swap_pte_batch(pte, max_nr, ptent);
+				nr = swap_pte_batch(pte, max_nr, ptent, NULL);
 				nr_swap -= nr;
 				free_swap_and_cache_nr(entry, nr);
 				clear_not_present_full_ptes(mm, addr, pte, nr, tlb->fullmm);
--- a/mm/memory.c~mm-swap_pte_batch-add-an-output-argument-to-reture-if-all-swap-entries-are-exclusive
+++ a/mm/memory.c
@@ -1637,7 +1637,7 @@ static unsigned long zap_pte_range(struc
 			folio_put(folio);
 		} else if (!non_swap_entry(entry)) {
 			max_nr = (end - addr) / PAGE_SIZE;
-			nr = swap_pte_batch(pte, max_nr, ptent);
+			nr = swap_pte_batch(pte, max_nr, ptent, NULL);
 			/* Genuine swap entries, hence a private anon pages */
 			if (!should_zap_cows(details))
 				continue;
_

Patches currently in -mm which might be from v-songbaohua@oppo.com are

arm64-mm-swap-support-thp_swap-on-hardware-with-mte.patch
mm-hold-ptl-from-the-first-pte-while-reclaiming-a-large-folio.patch
mm-alloc_anon_folio-avoid-doing-vma_thp_gfp_mask-in-fallback-cases.patch
mm-add-per-order-mthp-anon_alloc-and-anon_alloc_fallback-counters.patch
mm-add-per-order-mthp-anon_alloc-and-anon_alloc_fallback-counters-fix.patch
mm-add-per-order-mthp-anon_swpout-and-anon_swpout_fallback-counters.patch
mm-swap_pte_batch-add-an-output-argument-to-reture-if-all-swap-entries-are-exclusive.patch
mm-add-per-order-mthp-swpin_refault-counter.patch

