From: Barry Song <21cnbao@gmail.com>
To: akpm@linux-foundation.org, linux-mm@kvack.org
Cc: baolin.wang@linux.alibaba.com, chrisl@kernel.org,
	david@redhat.com, hanchuanhua@oppo.com, hannes@cmpxchg.org,
	hughd@google.com, kasong@tencent.com, ryan.roberts@arm.com,
	surenb@google.com, v-songbaohua@oppo.com, willy@infradead.org,
	xiang@kernel.org, ying.huang@intel.com, yosryahmed@google.com,
	yuzhao@google.com, ziy@nvidia.com, linux-kernel@vger.kernel.org
Subject: [PATCH v2 1/5] mm: swap: introduce swap_free_nr() for batched swap_free()
Date: Tue,  9 Apr 2024 20:26:27 +1200	[thread overview]
Message-ID: <20240409082631.187483-2-21cnbao@gmail.com> (raw)
In-Reply-To: <20240409082631.187483-1-21cnbao@gmail.com>

From: Chuanhua Han <hanchuanhua@oppo.com>

While swapping in a large folio, we need to free the swap entries covering
the whole folio. To avoid repeatedly acquiring and releasing the swap locks
for each individual entry, introduce swap_free_nr(), an API that frees a
contiguous range of swap entries in batches.
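
For illustration only (not part of the patch itself): a minimal sketch of
the call pattern this API is meant to replace in a swap-in path, assuming
'entry' is the swap entry of the first subpage and 'nr_pages' covers the
whole folio.

	/* before: one swap_free() call, and one lock round trip, per subpage */
	for (i = 0; i < nr_pages; i++)
		swap_free(swp_entry(swp_type(entry), swp_offset(entry) + i));

	/* after: a single batched call */
	swap_free_nr(entry, nr_pages);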

Signed-off-by: Chuanhua Han <hanchuanhua@oppo.com>
Co-developed-by: Barry Song <v-songbaohua@oppo.com>
Signed-off-by: Barry Song <v-songbaohua@oppo.com>
---
 include/linux/swap.h |  5 +++++
 mm/swapfile.c        | 51 ++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 56 insertions(+)

diff --git a/include/linux/swap.h b/include/linux/swap.h
index 11c53692f65f..b7a107e983b8 100644
--- a/include/linux/swap.h
+++ b/include/linux/swap.h
@@ -483,6 +483,7 @@ extern void swap_shmem_alloc(swp_entry_t);
 extern int swap_duplicate(swp_entry_t);
 extern int swapcache_prepare(swp_entry_t);
 extern void swap_free(swp_entry_t);
+extern void swap_free_nr(swp_entry_t entry, int nr_pages);
 extern void swapcache_free_entries(swp_entry_t *entries, int n);
 extern void free_swap_and_cache_nr(swp_entry_t entry, int nr);
 int swap_type_of(dev_t device, sector_t offset);
@@ -564,6 +565,10 @@ static inline void swap_free(swp_entry_t swp)
 {
 }
 
+static inline void swap_free_nr(swp_entry_t entry, int nr_pages)
+{
+}
+
 static inline void put_swap_folio(struct folio *folio, swp_entry_t swp)
 {
 }
diff --git a/mm/swapfile.c b/mm/swapfile.c
index 28642c188c93..f4c65aeb088d 100644
--- a/mm/swapfile.c
+++ b/mm/swapfile.c
@@ -1356,6 +1356,57 @@ void swap_free(swp_entry_t entry)
 		__swap_entry_free(p, entry);
 }
 
+/*
+ * Cap the number of swap entries freed per batch so that the on-stack
+ * usage bitmap below stays bounded in size.
+ */
+#define SWAP_BATCH_NR (SWAPFILE_CLUSTER > 512 ? 512 : SWAPFILE_CLUSTER)
+
+/*
+ * Called after swapping in a large folio to batch-free its swap
+ * entries; @entry must be the entry of the first subpage and its
+ * offset must be aligned with @nr_pages.
+ */
+void swap_free_nr(swp_entry_t entry, int nr_pages)
+{
+	int i, j;
+	struct swap_cluster_info *ci;
+	struct swap_info_struct *p;
+	unsigned int type = swp_type(entry);
+	unsigned long offset = swp_offset(entry);
+	int batch_nr, remain_nr;
+	DECLARE_BITMAP(usage, SWAP_BATCH_NR) = { 0 };
+
+	/* all swap entries are within a cluster for mTHP */
+	VM_BUG_ON(offset % SWAPFILE_CLUSTER + nr_pages > SWAPFILE_CLUSTER);
+
+	if (nr_pages == 1) {
+		swap_free(entry);
+		return;
+	}
+
+	remain_nr = nr_pages;
+	p = _swap_info_get(entry);
+	if (p) {
+		for (i = 0; i < nr_pages; i += batch_nr) {
+			batch_nr = min_t(int, SWAP_BATCH_NR, remain_nr);
+
+			ci = lock_cluster_or_swap_info(p, offset);
+			for (j = 0; j < batch_nr; j++) {
+				if (__swap_entry_free_locked(p, offset + i + j, 1))
+					__bitmap_set(usage, j, 1);
+			}
+			unlock_cluster_or_swap_info(p, ci);
+
+			for_each_clear_bit(j, usage, batch_nr)
+				free_swap_slot(swp_entry(type, offset + i + j));
+
+			bitmap_clear(usage, 0, SWAP_BATCH_NR);
+			remain_nr -= batch_nr;
+		}
+	}
+}
+
 /*
  * Called after dropping swapcache to decrease refcnt to swap entries.
  */
-- 
2.34.1
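
To make the batching arithmetic concrete, here is an illustrative
walk-through; the numbers are an assumed configuration, not taken from the
patch. On a kernel where SWAPFILE_CLUSTER exceeds 512, for example arm64
with 16K base pages where a PMD-sized folio spans 2048 subpages,
SWAP_BATCH_NR clamps to 512 and swap_free_nr() walks the folio in four
batches, keeping the on-stack usage bitmap at 512 bits (64 bytes):

	/*
	 * Illustrative walk-through, assuming SWAPFILE_CLUSTER == 2048
	 * (so SWAP_BATCH_NR == 512) and nr_pages == 2048:
	 *
	 *   i = 0     batch_nr = 512   frees entries at offset + [0, 511]
	 *   i = 512   batch_nr = 512   frees entries at offset + [512, 1023]
	 *   i = 1024  batch_nr = 512   frees entries at offset + [1024, 1535]
	 *   i = 1536  batch_nr = 512   frees entries at offset + [1536, 2047]
	 *
	 * Each batch takes lock_cluster_or_swap_info() once, marks entries
	 * that still have users in the 512-bit stack bitmap, then calls
	 * free_swap_slot() for every entry whose swap count dropped to zero.
	 */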

