Subject: + mm-free-idle-swap-cache-page-after-cow.patch added to -mm tree
From: akpm @ 2021-06-01 23:59 UTC (permalink / raw)
  To: aarcange, dave.hansen, hannes, hughd, mgorman, mhocko,
	mm-commits, peterx, riel, tim.c.chen, torvalds, willy,
	ying.huang


The patch titled
     Subject: mm: free idle swap cache page after COW
has been added to the -mm tree.  Its filename is
     mm-free-idle-swap-cache-page-after-cow.patch

This patch should soon appear at
    https://ozlabs.org/~akpm/mmots/broken-out/mm-free-idle-swap-cache-page-after-cow.patch
and later at
    https://ozlabs.org/~akpm/mmotm/broken-out/mm-free-idle-swap-cache-page-after-cow.patch

Before you just go and hit "reply", please:
   a) Consider who else should be cc'ed
   b) Prefer to cc a suitable mailing list as well
   c) Ideally: find the original patch on the mailing list and do a
      reply-to-all to that, adding suitable additional cc's

*** Remember to use Documentation/process/submit-checklist.rst when testing your code ***

The -mm tree is included in linux-next and is updated
there every 3-4 working days

------------------------------------------------------
From: Huang Ying <ying.huang@intel.com>
Subject: mm: free idle swap cache page after COW

Since commit 09854ba94c6a ("mm: do_wp_page() simplification"), after COW,
an idle swap cache page (one where neither the page nor the corresponding
swap entry is mapped by any process) is left in the LRU list, possibly in
the active list or at the head of the inactive list.  The page reclaimer
may therefore spend considerable effort reclaiming these actually-unused
pages.

To help page reclaim, this patch tries to free the idle swap cache page
after COW.  To avoid introducing much overhead on the hot COW code path:

a) the non-swap case has almost zero overhead, because PageSwapCache()
   is checked first.

b) the page lock is acquired with trylock only.

To test the patch, we used the pmbench memory accessing benchmark with a
working set larger than the available memory, on a 2-socket Intel server
with an NVMe SSD as the swap device.  Test results show that the pmbench
score increases by up to 23.8%, while the swap cache size and the swapin
throughput decrease.

Link: https://lkml.kernel.org/r/20210601053143.1380078-1-ying.huang@intel.com
Signed-off-by: "Huang, Ying" <ying.huang@intel.com>
Suggested-by: Johannes Weiner <hannes@cmpxchg.org>	[use free_swap_cache()]
Acked-by: Johannes Weiner <hannes@cmpxchg.org>
Cc: Hugh Dickins <hughd@google.com>
Cc: Matthew Wilcox <willy@infradead.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Xu <peterx@redhat.com>
Cc: Mel Gorman <mgorman@suse.de>
Cc: Rik van Riel <riel@surriel.com>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Cc: Michal Hocko <mhocko@kernel.org>
Cc: Dave Hansen <dave.hansen@intel.com>
Cc: Tim Chen <tim.c.chen@intel.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
---

 include/linux/swap.h |    5 +++++
 mm/memory.c          |    2 ++
 mm/swap_state.c      |    2 +-
 3 files changed, 8 insertions(+), 1 deletion(-)

--- a/include/linux/swap.h~mm-free-idle-swap-cache-page-after-cow
+++ a/include/linux/swap.h
@@ -446,6 +446,7 @@ extern void __delete_from_swap_cache(str
 extern void delete_from_swap_cache(struct page *);
 extern void clear_shadow_from_swap_cache(int type, unsigned long begin,
 				unsigned long end);
+extern void free_swap_cache(struct page *);
 extern void free_page_and_swap_cache(struct page *);
 extern void free_pages_and_swap_cache(struct page **, int);
 extern struct page *lookup_swap_cache(swp_entry_t entry,
@@ -551,6 +552,10 @@ static inline void put_swap_device(struc
 #define free_pages_and_swap_cache(pages, nr) \
 	release_pages((pages), (nr));
 
+static inline void free_swap_cache(struct page *page)
+{
+}
+
 static inline void show_swap_cache_info(void)
 {
 }
--- a/mm/memory.c~mm-free-idle-swap-cache-page-after-cow
+++ a/mm/memory.c
@@ -3012,6 +3012,8 @@ static vm_fault_t wp_page_copy(struct vm
 				munlock_vma_page(old_page);
 			unlock_page(old_page);
 		}
+		if (page_copied)
+			free_swap_cache(old_page);
 		put_page(old_page);
 	}
 	return page_copied ? VM_FAULT_WRITE : 0;
--- a/mm/swap_state.c~mm-free-idle-swap-cache-page-after-cow
+++ a/mm/swap_state.c
@@ -285,7 +285,7 @@ void clear_shadow_from_swap_cache(int ty
  * try_to_free_swap() _with_ the lock.
  * 					- Marcelo
  */
-static inline void free_swap_cache(struct page *page)
+void free_swap_cache(struct page *page)
 {
 	if (PageSwapCache(page) && !page_mapped(page) && trylock_page(page)) {
 		try_to_free_swap(page);
_

Patches currently in -mm which might be from ying.huang@intel.com are

mm-swap-remove-unnecessary-smp_rmb-in-swap_type_to_swap_info.patch
mm-free-idle-swap-cache-page-after-cow.patch

