From: Chris Li <chrisl@kernel.org>
To: Barry Song <21cnbao@gmail.com>
Cc: ryan.roberts@arm.com, akpm@linux-foundation.org,
david@redhat.com, linux-mm@kvack.org,
linux-kernel@vger.kernel.org, mhocko@suse.com,
shy828301@gmail.com, wangkefeng.wang@huawei.com,
willy@infradead.org, xiang@kernel.org, ying.huang@intel.com,
yuzhao@google.com, surenb@google.com, steven.price@arm.com,
Chuanhua Han <hanchuanhua@oppo.com>,
Barry Song <v-songbaohua@oppo.com>
Subject: Re: [PATCH RFC 2/6] mm: swap: introduce swap_nr_free() for batched swap_free()
Date: Fri, 26 Jan 2024 15:17:06 -0800 [thread overview]
Message-ID: <CAF8kJuOPXyAxmmh9QO1SdU=8GWtMhPjaWgGtQ8gvnNyfbSZbig@mail.gmail.com> (raw)
In-Reply-To: <20240118111036.72641-3-21cnbao@gmail.com>
On Thu, Jan 18, 2024 at 3:11 AM Barry Song <21cnbao@gmail.com> wrote:
>
> From: Chuanhua Han <hanchuanhua@oppo.com>
>
> While swapping in a large folio, we need to free swaps related to the whole
> folio. To avoid frequently acquiring and releasing swap locks, it is better
> to introduce an API for batched free.
>
> Signed-off-by: Chuanhua Han <hanchuanhua@oppo.com>
> Co-developed-by: Barry Song <v-songbaohua@oppo.com>
> Signed-off-by: Barry Song <v-songbaohua@oppo.com>
> ---
> include/linux/swap.h | 6 ++++++
> mm/swapfile.c | 29 +++++++++++++++++++++++++++++
> 2 files changed, 35 insertions(+)
>
> diff --git a/include/linux/swap.h b/include/linux/swap.h
> index 4db00ddad261..31a4ee2dcd1c 100644
> --- a/include/linux/swap.h
> +++ b/include/linux/swap.h
> @@ -478,6 +478,7 @@ extern void swap_shmem_alloc(swp_entry_t);
> extern int swap_duplicate(swp_entry_t);
> extern int swapcache_prepare(swp_entry_t);
> extern void swap_free(swp_entry_t);
> +extern void swap_nr_free(swp_entry_t entry, int nr_pages);
> extern void swapcache_free_entries(swp_entry_t *entries, int n);
> extern int free_swap_and_cache(swp_entry_t);
> int swap_type_of(dev_t device, sector_t offset);
> @@ -553,6 +554,11 @@ static inline void swap_free(swp_entry_t swp)
> {
> }
>
> +void swap_nr_free(swp_entry_t entry, int nr_pages)
> +{
> +
> +}
> +
> static inline void put_swap_folio(struct folio *folio, swp_entry_t swp)
> {
> }
> diff --git a/mm/swapfile.c b/mm/swapfile.c
> index 556ff7347d5f..6321bda96b77 100644
> --- a/mm/swapfile.c
> +++ b/mm/swapfile.c
> @@ -1335,6 +1335,35 @@ void swap_free(swp_entry_t entry)
> __swap_entry_free(p, entry);
> }
>
> +void swap_nr_free(swp_entry_t entry, int nr_pages)
> +{
> + int i;
> + struct swap_cluster_info *ci;
> + struct swap_info_struct *p;
> + unsigned type = swp_type(entry);
> + unsigned long offset = swp_offset(entry);
> + DECLARE_BITMAP(usage, SWAPFILE_CLUSTER) = { 0 };
> +
> + VM_BUG_ON(offset % SWAPFILE_CLUSTER + nr_pages > SWAPFILE_CLUSTER);
The VM_BUG_ON here seems a bit too developer-oriented. Maybe warn once and
fall back to freeing the entries one by one?
How big are SWAPFILE_CLUSTER and nr_pages typically on arm?
I ask because if nr_pages > 64, that is a totally
different game: we could completely bypass the swap slot cache.
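The suggested fallback could look roughly like this (an illustrative,
uncompiled sketch against the helpers used in this patch, not a tested
implementation):

```c
void swap_nr_free(swp_entry_t entry, int nr_pages)
{
	unsigned type = swp_type(entry);
	unsigned long offset = swp_offset(entry);

	/* Warn once instead of BUG, then degrade to one-by-one freeing. */
	if (WARN_ON_ONCE(offset % SWAPFILE_CLUSTER + nr_pages >
			 SWAPFILE_CLUSTER)) {
		int i;

		for (i = 0; i < nr_pages; i++)
			swap_free(swp_entry(type, offset + i));
		return;
	}
	/* ... batched path as in the patch ... */
}
```

swap_free() already handles a single entry correctly even when the range
crosses a cluster boundary, so the slow path stays trivially correct.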
> +
> + if (nr_pages == 1) {
> + swap_free(entry);
> + return;
> + }
> +
> + p = _swap_info_get(entry);
> +
> + ci = lock_cluster(p, offset);
> + for (i = 0; i < nr_pages; i++) {
> + if (__swap_entry_free_locked(p, offset + i, 1))
> + __bitmap_set(usage, i, 1);
> + }
> + unlock_cluster(ci);
> +
> + for_each_clear_bit(i, usage, nr_pages)
> + free_swap_slot(swp_entry(type, offset + i));
Notice that free_swap_slot() internally does per-CPU cache batching as
well: every call to free_swap_slot() takes the per-CPU swap slot cache and
its cache->lock. So there is double batching here.
If the typical batch size here is bigger than 64 entries, we can go
directly to batching swap_entry_free() and avoid the free_swap_slot()
batching altogether. Unlike swapcache_free_entries(), the swap slots here
all come from one swap device, so there is no need to sort and group the
swap slots by swap device.
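That direct path might be sketched like this (illustrative only: it assumes
swap_entry_free() keeps the signature it has today as the per-entry helper
behind swapcache_free_entries(), and the locking/cluster bookkeeping would
need a careful look before this could be real code):

```c
	DECLARE_BITMAP(to_free, SWAPFILE_CLUSTER) = { 0 };

	/* All entries share one device and one cluster: no sort/group
	 * step as in swapcache_free_entries() is needed. */
	ci = lock_cluster(p, offset);
	for (i = 0; i < nr_pages; i++) {
		if (!__swap_entry_free_locked(p, offset + i, 1))
			__bitmap_set(to_free, i, 1);
	}
	unlock_cluster(ci);

	/* Free the entries that dropped to zero directly, bypassing the
	 * per-CPU slot cache that free_swap_slot() would take per entry. */
	for_each_set_bit(i, to_free, nr_pages)
		swap_entry_free(p, swp_entry(type, offset + i));
```

This trades the slot-cache reuse of freed entries for fewer lock
acquisitions, which only pays off for large batches.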
Chris
> +}
> +
> /*
> * Called after dropping swapcache to decrease refcnt to swap entries.
> */
> --
> 2.34.1
>
>