linux-kernel.vger.kernel.org archive mirror
* [PATCH v4] mm/swap: fix race when skipping swapcache
@ 2024-02-19  8:20 Kairui Song
  2024-02-19  8:47 ` Huang, Ying
                   ` (4 more replies)
  0 siblings, 5 replies; 15+ messages in thread
From: Kairui Song @ 2024-02-19  8:20 UTC (permalink / raw)
  To: linux-mm
  Cc: Andrew Morton, Huang, Ying, Chris Li, Minchan Kim, Barry Song,
	Yu Zhao, SeongJae Park, David Hildenbrand, Hugh Dickins,
	Johannes Weiner, Matthew Wilcox, Michal Hocko, Yosry Ahmed,
	Aaron Lu, stable, linux-kernel, Kairui Song

From: Kairui Song <kasong@tencent.com>

When skipping the swapcache for SWP_SYNCHRONOUS_IO, if two or more threads
swap in the same entry at the same time, they get different pages (A, B).
Before one thread (T0) finishes the swapin and installs page (A)
into the PTE, another thread (T1) could finish the swapin of page (B),
swap_free the entry, then swap out the possibly modified page
reusing the same entry. This breaks the pte_same check in (T0) because
the PTE value is unchanged, causing an ABA problem. Thread (T0) will
install a stale page (A) into the PTE and cause data corruption.

One possible callstack is like this:

CPU0                                 CPU1
----                                 ----
do_swap_page()                       do_swap_page() with same entry
<direct swapin path>                 <direct swapin path>
<alloc page A>                       <alloc page B>
swap_read_folio() <- read to page A  swap_read_folio() <- read to page B
<slow on later locks or interrupt>   <finished swapin first>
...                                  set_pte_at()
                                     swap_free() <- entry is free
                                     <write to page B, now page A is stale>
                                     <swap out page B to same swap entry>
pte_same() <- Check pass, PTE seems
              unchanged, but page A
              is stale!
swap_free() <- page B content lost!
set_pte_at() <- stale page A installed!

Besides, for ZRAM, swap_free() allows the swap device to discard
the entry content, so even if page (B) is not modified, data loss
can still occur if swap_read_folio() on CPU0 happens later than
swap_free() on CPU1.

To fix this, reuse swapcache_prepare(), which pins the swap entry using
the cache flag, allowing only one thread to swap it in and preventing
any parallel code from putting the entry into the swap cache. The pin is
released after the page table lock is dropped.

Racers simply loop and wait, since this is a rare and very short event.
A schedule_timeout_uninterruptible(1) call is added to avoid repeated
page faults wasting too much CPU, causing livelock or adding too much
noise to perf statistics. A similar livelock issue was described in
commit 029c4628b2eb ("mm: swap: get rid of livelock in swapin readahead").
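
For illustration only, the pinning idea can be sketched in userspace with
an atomic per-entry flag standing in for SWAP_HAS_CACHE: the first racer
to set the flag proceeds with the swapin while later racers back off
briefly and retry, roughly mirroring swapcache_prepare()/swapcache_clear().
All names below are made up for this sketch and are not kernel APIs:

#include <stdatomic.h>
#include <stdbool.h>
#include <unistd.h>

static atomic_flag entry_pinned = ATOMIC_FLAG_INIT;

/* Plays the role of swapcache_prepare(): only the first racer succeeds. */
static bool pin_entry(void)
{
        return !atomic_flag_test_and_set(&entry_pinned);
}

/* Plays the role of swapcache_clear(): release after the PTE is installed. */
static void unpin_entry(void)
{
        atomic_flag_clear(&entry_pinned);
}

static void fault_in_entry(void)
{
        while (!pin_entry()) {
                /* Racer: back off briefly, like schedule_timeout_uninterruptible(1). */
                usleep(1000);
        }
        /* ... allocate a page, read it from the swap device, install the PTE ... */
        unpin_entry();
}

int main(void)
{
        fault_in_entry();
        return 0;
}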

Reproducer:

This race can be triggered easily using a well-constructed reproducer
and a patched brd (with a delay in the read path) [1]:

With the latest 6.8 mainline, race-caused data loss can be observed easily:
$ gcc -g -lpthread test-thread-swap-race.c && ./a.out
  Polulating 32MB of memory region...
  Keep swapping out...
  Starting round 0...
  Spawning 65536 workers...
  32746 workers spawned, wait for done...
  Round 0: Error on 0x5aa00, expected 32746, got 32743, 3 data loss!
  Round 0: Error on 0x395200, expected 32746, got 32743, 3 data loss!
  Round 0: Error on 0x3fd000, expected 32746, got 32737, 9 data loss!
  Round 0 Failed, 15 data loss!

This reproducer spawns multiple threads sharing the same memory region,
using a small swap device. Each pair of threads updates the mapped pages
one by one in opposite directions, trying to create a race, while one
dedicated thread keeps swapping the data out using madvise.

The reproducer hits the race about once every 5 minutes, so the race
is entirely possible in production.
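
For reference, a rough sketch of the reproducer structure described above
(the real test lives at [1]); the region size, thread setup, helper names
and verification step here are illustrative only, and MADV_PAGEOUT needs a
reasonably recent kernel and glibc:

#include <pthread.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>

#define REGION_SIZE (32UL << 20)        /* 32MB shared region */
#define NWORDS      (REGION_SIZE / sizeof(uint32_t))

static uint32_t *region;
static uint32_t round_val;

/* Writers walk the region in opposite directions to maximize PTE races. */
static void *writer(void *arg)
{
        long forward = (long)arg;

        for (size_t k = 0; k < NWORDS; k++)
                region[forward ? k : NWORDS - 1 - k] = round_val;
        return NULL;
}

/* Dedicated thread keeps pushing the region out to swap. */
static void *swapper(void *arg)
{
        (void)arg;
        for (;;)
                madvise(region, REGION_SIZE, MADV_PAGEOUT);
        return NULL;
}

int main(void)
{
        pthread_t a, b, s;

        region = mmap(NULL, REGION_SIZE, PROT_READ | PROT_WRITE,
                      MAP_SHARED | MAP_ANONYMOUS, -1, 0);
        if (region == MAP_FAILED)
                return 1;
        memset(region, 0, REGION_SIZE);

        pthread_create(&s, NULL, swapper, NULL);
        round_val = 1;
        pthread_create(&a, NULL, writer, (void *)1L);
        pthread_create(&b, NULL, writer, (void *)0L);
        pthread_join(a, NULL);
        pthread_join(b, NULL);

        /* Every word should now hold round_val; anything else would be the
         * lost-update corruption described above. */
        for (size_t i = 0; i < NWORDS; i++)
                if (region[i] != round_val)
                        printf("data loss at offset %#zx\n", i * sizeof(uint32_t));
        return 0;
}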

After this patch, I ran the reproducer for a few hundred rounds and
observed no data loss.

Performance overhead is minimal; a microbenchmark swapping in 10G from a
32G zram shows:

Before:     10934698 us
After:      11157121 us
Cached:     13155355 us (Dropping SWP_SYNCHRONOUS_IO flag)

Fixes: 0bcac06f27d7 ("mm, swap: skip swapcache for swapin of synchronous device")
Link: https://github.com/ryncsn/emm-test-project/tree/master/swap-stress-race [1]
Reported-by: "Huang, Ying" <ying.huang@intel.com>
Closes: https://lore.kernel.org/lkml/87bk92gqpx.fsf_-_@yhuang6-desk2.ccr.corp.intel.com/
Signed-off-by: Kairui Song <kasong@tencent.com>
Cc: stable@vger.kernel.org

---
V3: https://lore.kernel.org/all/20240216095105.14502-1-ryncsn@gmail.com/
Update from V3:
- Use schedule_timeout_uninterruptible(1) for now instead of schedule() to
  prevent the busy faulting task from holding the CPU and causing livelock [Huang, Ying]

V2: https://lore.kernel.org/all/20240206182559.32264-1-ryncsn@gmail.com/
Update from V2:
- Add a schedule() if raced to prevent repeated page faults wasting CPU
  and adding noise to perf statistics.
- Use a bool to state the special case instead of reusing existing
  variables, fixing error handling [Minchan Kim].

V1: https://lore.kernel.org/all/20240205110959.4021-1-ryncsn@gmail.com/
Update from V1:
- Add some words on the ZRAM case: it discards swap content on swap_free,
  so the race window is a bit different but the cure is the same. [Barry Song]
- Update comments to make them cleaner [Huang, Ying]
- Add a function placeholder to fix the CONFIG_SWAP=n build [SeongJae Park]
- Update the commit message and summary, referring to SWP_SYNCHRONOUS_IO
  instead of "direct swapin path" [Yu Zhao]
- Update commit message.
- Collect Reviews and Acks.

 include/linux/swap.h |  5 +++++
 mm/memory.c          | 20 ++++++++++++++++++++
 mm/swap.h            |  5 +++++
 mm/swapfile.c        | 13 +++++++++++++
 4 files changed, 43 insertions(+)

diff --git a/include/linux/swap.h b/include/linux/swap.h
index 4db00ddad261..8d28f6091a32 100644
--- a/include/linux/swap.h
+++ b/include/linux/swap.h
@@ -549,6 +549,11 @@ static inline int swap_duplicate(swp_entry_t swp)
 	return 0;
 }
 
+static inline int swapcache_prepare(swp_entry_t swp)
+{
+	return 0;
+}
+
 static inline void swap_free(swp_entry_t swp)
 {
 }
diff --git a/mm/memory.c b/mm/memory.c
index 7e1f4849463a..a99f5e7be9a5 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -3799,6 +3799,7 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
 	struct page *page;
 	struct swap_info_struct *si = NULL;
 	rmap_t rmap_flags = RMAP_NONE;
+	bool need_clear_cache = false;
 	bool exclusive = false;
 	swp_entry_t entry;
 	pte_t pte;
@@ -3867,6 +3868,20 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
 	if (!folio) {
 		if (data_race(si->flags & SWP_SYNCHRONOUS_IO) &&
 		    __swap_count(entry) == 1) {
+			/*
+			 * Prevent parallel swapin from proceeding with
+			 * the cache flag. Otherwise, another thread may
+			 * finish swapin first, free the entry, and swapout
+			 * reusing the same entry. It's undetectable as
+			 * pte_same() returns true due to entry reuse.
+			 */
+			if (swapcache_prepare(entry)) {
+				/* Relax a bit to prevent rapid repeated page faults */
+				schedule_timeout_uninterruptible(1);
+				goto out;
+			}
+			need_clear_cache = true;
+
 			/* skip swapcache */
 			folio = vma_alloc_folio(GFP_HIGHUSER_MOVABLE, 0,
 						vma, vmf->address, false);
@@ -4117,6 +4132,9 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
 	if (vmf->pte)
 		pte_unmap_unlock(vmf->pte, vmf->ptl);
 out:
+	/* Clear the swap cache pin for direct swapin after PTL unlock */
+	if (need_clear_cache)
+		swapcache_clear(si, entry);
 	if (si)
 		put_swap_device(si);
 	return ret;
@@ -4131,6 +4149,8 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
 		folio_unlock(swapcache);
 		folio_put(swapcache);
 	}
+	if (need_clear_cache)
+		swapcache_clear(si, entry);
 	if (si)
 		put_swap_device(si);
 	return ret;
diff --git a/mm/swap.h b/mm/swap.h
index 758c46ca671e..fc2f6ade7f80 100644
--- a/mm/swap.h
+++ b/mm/swap.h
@@ -41,6 +41,7 @@ void __delete_from_swap_cache(struct folio *folio,
 void delete_from_swap_cache(struct folio *folio);
 void clear_shadow_from_swap_cache(int type, unsigned long begin,
 				  unsigned long end);
+void swapcache_clear(struct swap_info_struct *si, swp_entry_t entry);
 struct folio *swap_cache_get_folio(swp_entry_t entry,
 		struct vm_area_struct *vma, unsigned long addr);
 struct folio *filemap_get_incore_folio(struct address_space *mapping,
@@ -97,6 +98,10 @@ static inline int swap_writepage(struct page *p, struct writeback_control *wbc)
 	return 0;
 }
 
+static inline void swapcache_clear(struct swap_info_struct *si, swp_entry_t entry)
+{
+}
+
 static inline struct folio *swap_cache_get_folio(swp_entry_t entry,
 		struct vm_area_struct *vma, unsigned long addr)
 {
diff --git a/mm/swapfile.c b/mm/swapfile.c
index 556ff7347d5f..746aa9da5302 100644
--- a/mm/swapfile.c
+++ b/mm/swapfile.c
@@ -3365,6 +3365,19 @@ int swapcache_prepare(swp_entry_t entry)
 	return __swap_duplicate(entry, SWAP_HAS_CACHE);
 }
 
+void swapcache_clear(struct swap_info_struct *si, swp_entry_t entry)
+{
+	struct swap_cluster_info *ci;
+	unsigned long offset = swp_offset(entry);
+	unsigned char usage;
+
+	ci = lock_cluster_or_swap_info(si, offset);
+	usage = __swap_entry_free_locked(si, offset, SWAP_HAS_CACHE);
+	unlock_cluster_or_swap_info(si, ci);
+	if (!usage)
+		free_swap_slot(entry);
+}
+
 struct swap_info_struct *swp_swap_info(swp_entry_t entry)
 {
 	return swap_type_to_swap_info(swp_type(entry));
-- 
2.43.0



* Re: [PATCH v4] mm/swap: fix race when skipping swapcache
  2024-02-19  8:20 [PATCH v4] mm/swap: fix race when skipping swapcache Kairui Song
@ 2024-02-19  8:47 ` Huang, Ying
  2024-02-19  9:02 ` David Hildenbrand
                   ` (3 subsequent siblings)
  4 siblings, 0 replies; 15+ messages in thread
From: Huang, Ying @ 2024-02-19  8:47 UTC (permalink / raw)
  To: Kairui Song
  Cc: linux-mm, Kairui Song, Andrew Morton, Chris Li, Minchan Kim,
	Barry Song, Yu Zhao, SeongJae Park, David Hildenbrand,
	Hugh Dickins, Johannes Weiner, Matthew Wilcox, Michal Hocko,
	Yosry Ahmed, Aaron Lu, stable, linux-kernel

Kairui Song <ryncsn@gmail.com> writes:

> From: Kairui Song <kasong@tencent.com>
>

LGTM. Feel free to add,

Reviewed-by: "Huang, Ying" <ying.huang@intel.com>

--
Best Regards,
Huang, Ying


* Re: [PATCH v4] mm/swap: fix race when skipping swapcache
  2024-02-19  8:20 [PATCH v4] mm/swap: fix race when skipping swapcache Kairui Song
  2024-02-19  8:47 ` Huang, Ying
@ 2024-02-19  9:02 ` David Hildenbrand
  2024-02-19 16:37 ` Chris Li
                   ` (2 subsequent siblings)
  4 siblings, 0 replies; 15+ messages in thread
From: David Hildenbrand @ 2024-02-19  9:02 UTC (permalink / raw)
  To: Kairui Song, linux-mm
  Cc: Andrew Morton, Huang, Ying, Chris Li, Minchan Kim, Barry Song,
	Yu Zhao, SeongJae Park, Hugh Dickins, Johannes Weiner,
	Matthew Wilcox, Michal Hocko, Yosry Ahmed, Aaron Lu, stable,
	linux-kernel

On 19.02.24 09:20, Kairui Song wrote:
> From: Kairui Song <kasong@tencent.com>
> 

Acked-by: David Hildenbrand <david@redhat.com>

-- 
Cheers,

David / dhildenb



* Re: [PATCH v4] mm/swap: fix race when skipping swapcache
  2024-02-19  8:20 [PATCH v4] mm/swap: fix race when skipping swapcache Kairui Song
  2024-02-19  8:47 ` Huang, Ying
  2024-02-19  9:02 ` David Hildenbrand
@ 2024-02-19 16:37 ` Chris Li
  2024-02-19 22:10 ` Barry Song
  2024-02-20  1:31 ` Andrew Morton
  4 siblings, 0 replies; 15+ messages in thread
From: Chris Li @ 2024-02-19 16:37 UTC (permalink / raw)
  To: Kairui Song
  Cc: linux-mm, Andrew Morton, Huang, Ying, Minchan Kim, Barry Song,
	Yu Zhao, SeongJae Park, David Hildenbrand, Hugh Dickins,
	Johannes Weiner, Matthew Wilcox, Michal Hocko, Yosry Ahmed,
	Aaron Lu, stable, linux-kernel

Acked-by: Chris Li <chrisl@kernel.org>

Chris


* Re: [PATCH v4] mm/swap: fix race when skipping swapcache
  2024-02-19  8:20 [PATCH v4] mm/swap: fix race when skipping swapcache Kairui Song
                   ` (2 preceding siblings ...)
  2024-02-19 16:37 ` Chris Li
@ 2024-02-19 22:10 ` Barry Song
  2024-02-20  1:09   ` Huang, Ying
  2024-02-20  4:49   ` Chengming Zhou
  2024-02-20  1:31 ` Andrew Morton
  4 siblings, 2 replies; 15+ messages in thread
From: Barry Song @ 2024-02-19 22:10 UTC (permalink / raw)
  To: Kairui Song
  Cc: linux-mm, Andrew Morton, Huang, Ying, Chris Li, Minchan Kim,
	Barry Song, Yu Zhao, SeongJae Park, David Hildenbrand,
	Hugh Dickins, Johannes Weiner, Matthew Wilcox, Michal Hocko,
	Yosry Ahmed, Aaron Lu, stable, linux-kernel

On Mon, Feb 19, 2024 at 9:21 PM Kairui Song <ryncsn@gmail.com> wrote:
>
> @@ -3867,6 +3868,20 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
>         if (!folio) {
>                 if (data_race(si->flags & SWP_SYNCHRONOUS_IO) &&
>                     __swap_count(entry) == 1) {
> +                       /*
> +                        * Prevent parallel swapin from proceeding with
> +                        * the cache flag. Otherwise, another thread may
> +                        * finish swapin first, free the entry, and swapout
> +                        * reusing the same entry. It's undetectable as
> +                        * pte_same() returns true due to entry reuse.
> +                        */
> +                       if (swapcache_prepare(entry)) {
> +                               /* Relax a bit to prevent rapid repeated page faults */
> +                               schedule_timeout_uninterruptible(1);

Not an ideal model. Imagine two tasks:

task A - low priority, running on a LITTLE core
task B - high priority, with real-time requirements such as audio or
         graphics, running on a big core.

The original code lets B win even if it is a bit later than A, as its CPU
is much faster at finishing swap_read_folio, for example from zRAM, so
task B's swap-in can finish very soon.

With the patch, B will wait a tick and its real-time performance will be
negatively affected from time to time, whenever the low-priority and
high-priority tasks fault on the same PTE and the high-priority task is a
bit later than the low-priority one. This is a kind of priority inversion.

When we support large folio swap-in, things can get worse. For example,
when swapping in 16 or even more pages in one do_swap_page, the chance of
task A and task B landing in the same range of 16 PTEs increases even
though they are not located in the same PTE.

Please consider that this is not a blocker for this patch. But I will put
the problem on my list and run some real tests on Android phones later.


Thanks
Barry


* Re: [PATCH v4] mm/swap: fix race when skipping swapcache
  2024-02-19 22:10 ` Barry Song
@ 2024-02-20  1:09   ` Huang, Ying
  2024-02-20  4:49   ` Chengming Zhou
  1 sibling, 0 replies; 15+ messages in thread
From: Huang, Ying @ 2024-02-20  1:09 UTC (permalink / raw)
  To: Barry Song
  Cc: Kairui Song, linux-mm, Andrew Morton, Chris Li, Minchan Kim,
	Barry Song, Yu Zhao, SeongJae Park, David Hildenbrand,
	Hugh Dickins, Johannes Weiner, Matthew Wilcox, Michal Hocko,
	Yosry Ahmed, Aaron Lu, stable, linux-kernel

Barry Song <21cnbao@gmail.com> writes:

> On Mon, Feb 19, 2024 at 9:21 PM Kairui Song <ryncsn@gmail.com> wrote:
>>
>> @@ -3867,6 +3868,20 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
>>         if (!folio) {
>>                 if (data_race(si->flags & SWP_SYNCHRONOUS_IO) &&
>>                     __swap_count(entry) == 1) {
>> +                       /*
>> +                        * Prevent parallel swapin from proceeding with
>> +                        * the cache flag. Otherwise, another thread may
>> +                        * finish swapin first, free the entry, and swapout
>> +                        * reusing the same entry. It's undetectable as
>> +                        * pte_same() returns true due to entry reuse.
>> +                        */
>> +                       if (swapcache_prepare(entry)) {
>> +                               /* Relax a bit to prevent rapid repeated page faults */
>> +                               schedule_timeout_uninterruptible(1);
>
> Not a ideal model, imaging two tasks,
>
> task A  - low priority running on a LITTLE core
> task B - high priority and have real-time requirements such as audio,
>               graphics running on a big core.
>
> The original code will make B win even if it is a bit later than A as its CPU is
> much faster to finish swap_read_folio for example from zRAM. task B's
> swap-in can finish very soon.
>
> With the patch, B will wait a tick and its real-time performance will be
> negatively affected from time to time once low priority and high priority
> tasks fault in the same PTE and high priority tasks are a bit later than
> low priority tasks. This is a kind of priority inversion.
>
> When we support large folio swap-in, things can get worse. For example,
> to swap-in 16 or even more pages in one do_swap_page, the chance for
> task A and task B located in the same range of 16 PTEs will increase
> though they are not located in the same PTE.
>
> Please consider this is not a blocker for this patch. But I will put the problem
> in my list and run some real tests on Android phones later.

Yes.  This may hurt performance.  But this is necessary to solve a
livelock problem similar to the one fixed in commit 029c4628b2eb ("mm:
swap: get rid of livelock in swapin readahead").  Please consider that
too.
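
For context, a minimal sketch of the refault path being discussed
(condensed from the patch above, not compilable on its own); the comment
spells out why a bare schedule() would not be enough:

        if (swapcache_prepare(entry)) {
                /*
                 * Another thread holds the SWAP_HAS_CACHE pin. With a bare
                 * schedule(), a busy faulting task may be picked again
                 * immediately and refault in a tight loop, which is the
                 * livelock pattern of commit 029c4628b2eb. Sleeping for one
                 * tick guarantees the pin holder gets CPU time to finish
                 * swap_read_folio() and release the pin.
                 */
                schedule_timeout_uninterruptible(1);
                goto out;       /* the fault is simply retried */
        }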

--
Best Regards,
Huang, Ying

>> +                               goto out;
>> +                       }
>> +                       need_clear_cache = true;
>> +
>>                         /* skip swapcache */
>>                         folio = vma_alloc_folio(GFP_HIGHUSER_MOVABLE, 0,
>>                                                 vma, vmf->address, false);
>> @@ -4117,6 +4132,9 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
>>         if (vmf->pte)
>>                 pte_unmap_unlock(vmf->pte, vmf->ptl);
>>  out:
>> +       /* Clear the swap cache pin for direct swapin after PTL unlock */
>> +       if (need_clear_cache)
>> +               swapcache_clear(si, entry);
>>         if (si)
>>                 put_swap_device(si);
>>         return ret;
>> @@ -4131,6 +4149,8 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
>>                 folio_unlock(swapcache);
>>                 folio_put(swapcache);
>>         }
>> +       if (need_clear_cache)
>> +               swapcache_clear(si, entry);
>>         if (si)
>>                 put_swap_device(si);
>>         return ret;
>> diff --git a/mm/swap.h b/mm/swap.h
>> index 758c46ca671e..fc2f6ade7f80 100644
>> --- a/mm/swap.h
>> +++ b/mm/swap.h
>> @@ -41,6 +41,7 @@ void __delete_from_swap_cache(struct folio *folio,
>>  void delete_from_swap_cache(struct folio *folio);
>>  void clear_shadow_from_swap_cache(int type, unsigned long begin,
>>                                   unsigned long end);
>> +void swapcache_clear(struct swap_info_struct *si, swp_entry_t entry);
>>  struct folio *swap_cache_get_folio(swp_entry_t entry,
>>                 struct vm_area_struct *vma, unsigned long addr);
>>  struct folio *filemap_get_incore_folio(struct address_space *mapping,
>> @@ -97,6 +98,10 @@ static inline int swap_writepage(struct page *p, struct writeback_control *wbc)
>>         return 0;
>>  }
>>
>> +static inline void swapcache_clear(struct swap_info_struct *si, swp_entry_t entry)
>> +{
>> +}
>> +
>>  static inline struct folio *swap_cache_get_folio(swp_entry_t entry,
>>                 struct vm_area_struct *vma, unsigned long addr)
>>  {
>> diff --git a/mm/swapfile.c b/mm/swapfile.c
>> index 556ff7347d5f..746aa9da5302 100644
>> --- a/mm/swapfile.c
>> +++ b/mm/swapfile.c
>> @@ -3365,6 +3365,19 @@ int swapcache_prepare(swp_entry_t entry)
>>         return __swap_duplicate(entry, SWAP_HAS_CACHE);
>>  }
>>
>> +void swapcache_clear(struct swap_info_struct *si, swp_entry_t entry)
>> +{
>> +       struct swap_cluster_info *ci;
>> +       unsigned long offset = swp_offset(entry);
>> +       unsigned char usage;
>> +
>> +       ci = lock_cluster_or_swap_info(si, offset);
>> +       usage = __swap_entry_free_locked(si, offset, SWAP_HAS_CACHE);
>> +       unlock_cluster_or_swap_info(si, ci);
>> +       if (!usage)
>> +               free_swap_slot(entry);
>> +}
>> +
>>  struct swap_info_struct *swp_swap_info(swp_entry_t entry)
>>  {
>>         return swap_type_to_swap_info(swp_type(entry));
>> --
>> 2.43.0
>>
>>
>
> Thanks
> Barry

^ permalink raw reply	[flat|nested] 15+ messages in thread

* Re: [PATCH v4] mm/swap: fix race when skipping swapcache
  2024-02-19  8:20 [PATCH v4] mm/swap: fix race when skipping swapcache Kairui Song
                   ` (3 preceding siblings ...)
  2024-02-19 22:10 ` Barry Song
@ 2024-02-20  1:31 ` Andrew Morton
  2024-02-20  3:42   ` Kairui Song
  4 siblings, 1 reply; 15+ messages in thread
From: Andrew Morton @ 2024-02-20  1:31 UTC (permalink / raw)
  To: Kairui Song
  Cc: Kairui Song, linux-mm, Huang, Ying, Chris Li, Minchan Kim,
	Barry Song, Yu Zhao, SeongJae Park, David Hildenbrand,
	Hugh Dickins, Johannes Weiner, Matthew Wilcox, Michal Hocko,
	Yosry Ahmed, Aaron Lu, stable, linux-kernel

On Mon, 19 Feb 2024 16:20:40 +0800 Kairui Song <ryncsn@gmail.com> wrote:

> From: Kairui Song <kasong@tencent.com>
> 
> When skipping swapcache for SWP_SYNCHRONOUS_IO, if two or more threads
> swapin the same entry at the same time, they get different pages (A, B).
> Before one thread (T0) finishes the swapin and installs page (A)
> to the PTE, another thread (T1) could finish swapin of page (B),
> swap_free the entry, then swap out the possibly modified page
> reusing the same entry. It breaks the pte_same check in (T0) because
> PTE value is unchanged, causing ABA problem. Thread (T0) will
> install a stalled page (A) into the PTE and cause data corruption.
> 
> @@ -3867,6 +3868,20 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
>  	if (!folio) {
>  		if (data_race(si->flags & SWP_SYNCHRONOUS_IO) &&
>  		    __swap_count(entry) == 1) {
> +			/*
> +			 * Prevent parallel swapin from proceeding with
> +			 * the cache flag. Otherwise, another thread may
> +			 * finish swapin first, free the entry, and swapout
> +			 * reusing the same entry. It's undetectable as
> +			 * pte_same() returns true due to entry reuse.
> +			 */
> +			if (swapcache_prepare(entry)) {
> +				/* Relax a bit to prevent rapid repeated page faults */
> +				schedule_timeout_uninterruptible(1);

Well this is unpleasant.  How often can we expect this to occur?

> +				goto out;
> +			}
> +			need_clear_cache = true;
> +
>  			/* skip swapcache */
>  			folio = vma_alloc_folio(GFP_HIGHUSER_MOVABLE, 0,
>  						vma, vmf->address, false);


^ permalink raw reply	[flat|nested] 15+ messages in thread

* Re: [PATCH v4] mm/swap: fix race when skipping swapcache
  2024-02-20  1:31 ` Andrew Morton
@ 2024-02-20  3:42   ` Kairui Song
  2024-02-20  4:01     ` Barry Song
  0 siblings, 1 reply; 15+ messages in thread
From: Kairui Song @ 2024-02-20  3:42 UTC (permalink / raw)
  To: Andrew Morton
  Cc: linux-mm, Huang, Ying, Chris Li, Minchan Kim, Barry Song,
	Yu Zhao, SeongJae Park, David Hildenbrand, Hugh Dickins,
	Johannes Weiner, Matthew Wilcox, Michal Hocko, Yosry Ahmed,
	Aaron Lu, stable, linux-kernel

On Tue, Feb 20, 2024 at 9:31 AM Andrew Morton <akpm@linux-foundation.org> wrote:
>
> On Mon, 19 Feb 2024 16:20:40 +0800 Kairui Song <ryncsn@gmail.com> wrote:
>
> > From: Kairui Song <kasong@tencent.com>
> >
> > When skipping swapcache for SWP_SYNCHRONOUS_IO, if two or more threads
> > swapin the same entry at the same time, they get different pages (A, B).
> > Before one thread (T0) finishes the swapin and installs page (A)
> > to the PTE, another thread (T1) could finish swapin of page (B),
> > swap_free the entry, then swap out the possibly modified page
> > reusing the same entry. It breaks the pte_same check in (T0) because
> > PTE value is unchanged, causing ABA problem. Thread (T0) will
> > install a stalled page (A) into the PTE and cause data corruption.
> >
> > @@ -3867,6 +3868,20 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
> >       if (!folio) {
> >               if (data_race(si->flags & SWP_SYNCHRONOUS_IO) &&
> >                   __swap_count(entry) == 1) {
> > +                     /*
> > +                      * Prevent parallel swapin from proceeding with
> > +                      * the cache flag. Otherwise, another thread may
> > +                      * finish swapin first, free the entry, and swapout
> > +                      * reusing the same entry. It's undetectable as
> > +                      * pte_same() returns true due to entry reuse.
> > +                      */
> > +                     if (swapcache_prepare(entry)) {
> > +                             /* Relax a bit to prevent rapid repeated page faults */
> > +                             schedule_timeout_uninterruptible(1);
>
> Well this is unpleasant.  How often can we expect this to occur?
>

The chance is very low. Using the current mainline kernel and ZRAM,
even with threads set to race on purpose using the reproducer I
provided, it occurred 1528 times for 647132 page faults (~0.2%).

If I run MySQL and sysbench with 128 threads and a 16G buffer pool,
with a 6G cgroup limit and 32G ZRAM, it occurred 1372 times in 40 min,
with 109930201 page faults in total (~0.001%).

^ permalink raw reply	[flat|nested] 15+ messages in thread

* Re: [PATCH v4] mm/swap: fix race when skipping swapcache
  2024-02-20  3:42   ` Kairui Song
@ 2024-02-20  4:01     ` Barry Song
  2024-02-20  4:56       ` Kairui Song
  0 siblings, 1 reply; 15+ messages in thread
From: Barry Song @ 2024-02-20  4:01 UTC (permalink / raw)
  To: Kairui Song
  Cc: Andrew Morton, linux-mm, Huang, Ying, Chris Li, Minchan Kim,
	Barry Song, Yu Zhao, SeongJae Park, David Hildenbrand,
	Hugh Dickins, Johannes Weiner, Matthew Wilcox, Michal Hocko,
	Yosry Ahmed, Aaron Lu, stable, linux-kernel

On Tue, Feb 20, 2024 at 4:42 PM Kairui Song <ryncsn@gmail.com> wrote:
>
> On Tue, Feb 20, 2024 at 9:31 AM Andrew Morton <akpm@linux-foundation.org> wrote:
> >
> > On Mon, 19 Feb 2024 16:20:40 +0800 Kairui Song <ryncsn@gmail.com> wrote:
> >
> > > From: Kairui Song <kasong@tencent.com>
> > >
> > > When skipping swapcache for SWP_SYNCHRONOUS_IO, if two or more threads
> > > swapin the same entry at the same time, they get different pages (A, B).
> > > Before one thread (T0) finishes the swapin and installs page (A)
> > > to the PTE, another thread (T1) could finish swapin of page (B),
> > > swap_free the entry, then swap out the possibly modified page
> > > reusing the same entry. It breaks the pte_same check in (T0) because
> > > PTE value is unchanged, causing ABA problem. Thread (T0) will
> > > install a stalled page (A) into the PTE and cause data corruption.
> > >
> > > @@ -3867,6 +3868,20 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
> > >       if (!folio) {
> > >               if (data_race(si->flags & SWP_SYNCHRONOUS_IO) &&
> > >                   __swap_count(entry) == 1) {
> > > +                     /*
> > > +                      * Prevent parallel swapin from proceeding with
> > > +                      * the cache flag. Otherwise, another thread may
> > > +                      * finish swapin first, free the entry, and swapout
> > > +                      * reusing the same entry. It's undetectable as
> > > +                      * pte_same() returns true due to entry reuse.
> > > +                      */
> > > +                     if (swapcache_prepare(entry)) {
> > > +                             /* Relax a bit to prevent rapid repeated page faults */
> > > +                             schedule_timeout_uninterruptible(1);
> >
> > Well this is unpleasant.  How often can we expect this to occur?
> >
>
> The chance is very low, using the current mainline kernel and ZRAM,
> even with threads set to race on purpose using the reproducer I
> provides, for 647132 page faults it occured 1528 times (~0.2%).
>
> If I run MySQL and sysbench with 128 threads and 16G buffer pool, with
> 6G cgroup limit and 32G ZRAM, it occured 1372 times for 40 min,
> 109930201 page faults in total (~0.001%).

it might not be a problem for throughput. But for real-time and tail
latency, this hurts. For example, this might increase dropped frames in
the UI, which is an important parameter for evaluating performance :-)

BTW, I wonder if Ying's previous proposal - moving swapcache_prepare()
after swap_read_folio() - will further help decrease the number?

Thanks
Barry

^ permalink raw reply	[flat|nested] 15+ messages in thread

* Re: [PATCH v4] mm/swap: fix race when skipping swapcache
  2024-02-19 22:10 ` Barry Song
  2024-02-20  1:09   ` Huang, Ying
@ 2024-02-20  4:49   ` Chengming Zhou
       [not found]     ` <CAMgjq7A9Gy0QK1f379wR6gp=dJM9DLSqMDq8OH5nMF_XxiOggQ@mail.gmail.com>
  1 sibling, 1 reply; 15+ messages in thread
From: Chengming Zhou @ 2024-02-20  4:49 UTC (permalink / raw)
  To: Barry Song, Kairui Song, Huang, Ying
  Cc: linux-mm, Andrew Morton, Chris Li, Minchan Kim, Barry Song,
	Yu Zhao, SeongJae Park, David Hildenbrand, Hugh Dickins,
	Johannes Weiner, Matthew Wilcox, Michal Hocko, Yosry Ahmed,
	Aaron Lu, stable, linux-kernel

On 2024/2/20 06:10, Barry Song wrote:
> On Mon, Feb 19, 2024 at 9:21 PM Kairui Song <ryncsn@gmail.com> wrote:
>>
>> From: Kairui Song <kasong@tencent.com>
>>
>> When skipping swapcache for SWP_SYNCHRONOUS_IO, if two or more threads
>> swapin the same entry at the same time, they get different pages (A, B).
>> Before one thread (T0) finishes the swapin and installs page (A)
>> to the PTE, another thread (T1) could finish swapin of page (B),
>> swap_free the entry, then swap out the possibly modified page
>> reusing the same entry. It breaks the pte_same check in (T0) because
>> PTE value is unchanged, causing ABA problem. Thread (T0) will
>> install a stalled page (A) into the PTE and cause data corruption.
>>
>> One possible callstack is like this:
>>
>> CPU0                                 CPU1
>> ----                                 ----
>> do_swap_page()                       do_swap_page() with same entry
>> <direct swapin path>                 <direct swapin path>
>> <alloc page A>                       <alloc page B>
>> swap_read_folio() <- read to page A  swap_read_folio() <- read to page B
>> <slow on later locks or interrupt>   <finished swapin first>
>> ..                                  set_pte_at()
>>                                      swap_free() <- entry is free
>>                                      <write to page B, now page A stalled>
>>                                      <swap out page B to same swap entry>
>> pte_same() <- Check pass, PTE seems
>>               unchanged, but page A
>>               is stalled!
>> swap_free() <- page B content lost!
>> set_pte_at() <- staled page A installed!
>>
>> And besides, for ZRAM, swap_free() allows the swap device to discard
>> the entry content, so even if page (B) is not modified, if
>> swap_read_folio() on CPU0 happens later than swap_free() on CPU1,
>> it may also cause data loss.
>>
>> To fix this, reuse swapcache_prepare which will pin the swap entry using
>> the cache flag, and allow only one thread to swap it in, also prevent
>> any parallel code from putting the entry in the cache. Release the pin
>> after PT unlocked.
>>
>> Racers just loop and wait since it's a rare and very short event.
>> A schedule_timeout_uninterruptible(1) call is added to avoid repeated
>> page faults wasting too much CPU, causing livelock or adding too much
>> noise to perf statistics. A similar livelock issue was described in
>> commit 029c4628b2eb ("mm: swap: get rid of livelock in swapin readahead")
>>
>> Reproducer:
>>
>> This race issue can be triggered easily using a well constructed
>> reproducer and patched brd (with a delay in read path) [1]:
>>
>> With latest 6.8 mainline, race caused data loss can be observed easily:
>> $ gcc -g -lpthread test-thread-swap-race.c && ./a.out
>>   Polulating 32MB of memory region...
>>   Keep swapping out...
>>   Starting round 0...
>>   Spawning 65536 workers...
>>   32746 workers spawned, wait for done...
>>   Round 0: Error on 0x5aa00, expected 32746, got 32743, 3 data loss!
>>   Round 0: Error on 0x395200, expected 32746, got 32743, 3 data loss!
>>   Round 0: Error on 0x3fd000, expected 32746, got 32737, 9 data loss!
>>   Round 0 Failed, 15 data loss!
>>
>> This reproducer spawns multiple threads sharing the same memory region
>> using a small swap device. Every two threads updates mapped pages one by
>> one in opposite direction trying to create a race, with one dedicated
>> thread keep swapping out the data out using madvise.
>>
>> The reproducer created a reproduce rate of about once every 5 minutes,
>> so the race should be totally possible in production.
>>
>> After this patch, I ran the reproducer for over a few hundred rounds
>> and no data loss observed.
>>
>> Performance overhead is minimal, microbenchmark swapin 10G from 32G
>> zram:
>>
>> Before:     10934698 us
>> After:      11157121 us
>> Cached:     13155355 us (Dropping SWP_SYNCHRONOUS_IO flag)
>>
>> Fixes: 0bcac06f27d7 ("mm, swap: skip swapcache for swapin of synchronous device")
>> Link: https://github.com/ryncsn/emm-test-project/tree/master/swap-stress-race [1]
>> Reported-by: "Huang, Ying" <ying.huang@intel.com>
>> Closes: https://lore.kernel.org/lkml/87bk92gqpx.fsf_-_@yhuang6-desk2.ccr.corp.intel.com/
>> Signed-off-by: Kairui Song <kasong@tencent.com>
>> Cc: stable@vger.kernel.org
>>
>> ---
>> V3: https://lore.kernel.org/all/20240216095105.14502-1-ryncsn@gmail.com/
>> Update from V3:
>> - Use schedule_timeout_uninterruptible(1) for now instead of schedule() to
>>   prevent the busy faulting task holds CPU and livelocks [Huang, Ying]
>>
>> V2: https://lore.kernel.org/all/20240206182559.32264-1-ryncsn@gmail.com/
>> Update from V2:
>> - Add a schedule() if raced to prevent repeated page faults wasting CPU
>>   and add noise to perf statistics.
>> - Use a bool to state the special case instead of reusing existing
>>   variables fixing error handling [Minchan Kim].
>>
>> V1: https://lore.kernel.org/all/20240205110959.4021-1-ryncsn@gmail.com/
>> Update from V1:
>> - Add some words on ZRAM case, it will discard swap content on swap_free
>>   so the race window is a bit different but cure is the same. [Barry Song]
>> - Update comments make it cleaner [Huang, Ying]
>> - Add a function place holder to fix CONFIG_SWAP=n built [SeongJae Park]
>> - Update the commit message and summary, refer to SWP_SYNCHRONOUS_IO
>>   instead of "direct swapin path" [Yu Zhao]
>> - Update commit message.
>> - Collect Review and Acks.
>>
>>  include/linux/swap.h |  5 +++++
>>  mm/memory.c          | 20 ++++++++++++++++++++
>>  mm/swap.h            |  5 +++++
>>  mm/swapfile.c        | 13 +++++++++++++
>>  4 files changed, 43 insertions(+)
>>
>> diff --git a/include/linux/swap.h b/include/linux/swap.h
>> index 4db00ddad261..8d28f6091a32 100644
>> --- a/include/linux/swap.h
>> +++ b/include/linux/swap.h
>> @@ -549,6 +549,11 @@ static inline int swap_duplicate(swp_entry_t swp)
>>         return 0;
>>  }
>>
>> +static inline int swapcache_prepare(swp_entry_t swp)
>> +{
>> +       return 0;
>> +}
>> +
>>  static inline void swap_free(swp_entry_t swp)
>>  {
>>  }
>> diff --git a/mm/memory.c b/mm/memory.c
>> index 7e1f4849463a..a99f5e7be9a5 100644
>> --- a/mm/memory.c
>> +++ b/mm/memory.c
>> @@ -3799,6 +3799,7 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
>>         struct page *page;
>>         struct swap_info_struct *si = NULL;
>>         rmap_t rmap_flags = RMAP_NONE;
>> +       bool need_clear_cache = false;
>>         bool exclusive = false;
>>         swp_entry_t entry;
>>         pte_t pte;
>> @@ -3867,6 +3868,20 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
>>         if (!folio) {
>>                 if (data_race(si->flags & SWP_SYNCHRONOUS_IO) &&
>>                     __swap_count(entry) == 1) {
>> +                       /*
>> +                        * Prevent parallel swapin from proceeding with
>> +                        * the cache flag. Otherwise, another thread may
>> +                        * finish swapin first, free the entry, and swapout
>> +                        * reusing the same entry. It's undetectable as
>> +                        * pte_same() returns true due to entry reuse.
>> +                        */
>> +                       if (swapcache_prepare(entry)) {
>> +                               /* Relax a bit to prevent rapid repeated page faults */
>> +                               schedule_timeout_uninterruptible(1);
> 
> Not a ideal model, imaging two tasks,
> 
> task A  - low priority running on a LITTLE core
> task B - high priority and have real-time requirements such as audio,
>               graphics running on a big core.
> 
> The original code will make B win even if it is a bit later than A as its CPU is
> much faster to finish swap_read_folio for example from zRAM. task B's
> swap-in can finish very soon.
> 
> With the patch, B will wait a tick and its real-time performance will be
> negatively affected from time to time once low priority and high priority
> tasks fault in the same PTE and high priority tasks are a bit later than
> low priority tasks. This is a kind of priority inversion.
> 
> When we support large folio swap-in, things can get worse. For example,
> to swap-in 16 or even more pages in one do_swap_page, the chance for
> task A and task B located in the same range of 16 PTEs will increase
> though they are not located in the same PTE.
> 
> Please consider this is not a blocker for this patch. But I will put the problem
> in my list and run some real tests on Android phones later.

Good point. Late to the discussion, but I'm wondering: why not take an
extra reference on the swap entry instead of using swapcache_prepare()?
Then the faster thread will succeed but can't free the swap entry, and
the slower thread will later find the changed pte value, fail, and free
the swap entry. Maybe I missed something?
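
A rough sketch of the alternative being suggested here, reusing the
existing swap_duplicate()/swap_free() helpers purely for illustration;
this is not a tested proposal, and the follow-up replies explain the
races it runs into:

        /* Hypothetical: pin the entry with an extra map reference */
        if (swap_duplicate(entry))
                goto out;
        swap_read_folio(folio, true, NULL);
        /* ... retake the PT lock once the read completes ... */
        vmf->pte = pte_offset_map_lock(vma->vm_mm, vmf->pmd, vmf->address,
                                       &vmf->ptl);
        if (!pte_same(ptep_get(vmf->pte), vmf->orig_pte))
                goto out_nomap;         /* the slower thread bails out */
        /* ... install the page ... */
        swap_free(entry);               /* drop the extra reference */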

Thanks.

^ permalink raw reply	[flat|nested] 15+ messages in thread

* Re: [PATCH v4] mm/swap: fix race when skipping swapcache
  2024-02-20  4:01     ` Barry Song
@ 2024-02-20  4:56       ` Kairui Song
  2024-02-20 10:26         ` Barry Song
  2024-02-20 16:32         ` Chris Li
  0 siblings, 2 replies; 15+ messages in thread
From: Kairui Song @ 2024-02-20  4:56 UTC (permalink / raw)
  To: Barry Song
  Cc: Andrew Morton, linux-mm, Huang, Ying, Chris Li, Minchan Kim,
	Barry Song, Yu Zhao, SeongJae Park, David Hildenbrand,
	Hugh Dickins, Johannes Weiner, Matthew Wilcox, Michal Hocko,
	Yosry Ahmed, Aaron Lu, stable, linux-kernel

On Tue, Feb 20, 2024 at 12:01 PM Barry Song <21cnbao@gmail.com> wrote:
>
> On Tue, Feb 20, 2024 at 4:42 PM Kairui Song <ryncsn@gmail.com> wrote:
> >
> > On Tue, Feb 20, 2024 at 9:31 AM Andrew Morton <akpm@linux-foundation.org> wrote:
> > >
> > > On Mon, 19 Feb 2024 16:20:40 +0800 Kairui Song <ryncsn@gmail.com> wrote:
> > >
> > > > From: Kairui Song <kasong@tencent.com>
> > > >
> > > > When skipping swapcache for SWP_SYNCHRONOUS_IO, if two or more threads
> > > > swapin the same entry at the same time, they get different pages (A, B).
> > > > Before one thread (T0) finishes the swapin and installs page (A)
> > > > to the PTE, another thread (T1) could finish swapin of page (B),
> > > > swap_free the entry, then swap out the possibly modified page
> > > > reusing the same entry. It breaks the pte_same check in (T0) because
> > > > PTE value is unchanged, causing ABA problem. Thread (T0) will
> > > > install a stalled page (A) into the PTE and cause data corruption.
> > > >
> > > > @@ -3867,6 +3868,20 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
> > > >       if (!folio) {
> > > >               if (data_race(si->flags & SWP_SYNCHRONOUS_IO) &&
> > > >                   __swap_count(entry) == 1) {
> > > > +                     /*
> > > > +                      * Prevent parallel swapin from proceeding with
> > > > +                      * the cache flag. Otherwise, another thread may
> > > > +                      * finish swapin first, free the entry, and swapout
> > > > +                      * reusing the same entry. It's undetectable as
> > > > +                      * pte_same() returns true due to entry reuse.
> > > > +                      */
> > > > +                     if (swapcache_prepare(entry)) {
> > > > +                             /* Relax a bit to prevent rapid repeated page faults */
> > > > +                             schedule_timeout_uninterruptible(1);
> > >
> > > Well this is unpleasant.  How often can we expect this to occur?
> > >
> >
> > The chance is very low, using the current mainline kernel and ZRAM,
> > even with threads set to race on purpose using the reproducer I
> > provides, for 647132 page faults it occured 1528 times (~0.2%).
> >
> > If I run MySQL and sysbench with 128 threads and 16G buffer pool, with
> > 6G cgroup limit and 32G ZRAM, it occured 1372 times for 40 min,
> > 109930201 page faults in total (~0.001%).
>

Hi Barry,

> it might not be a problem for throughput. but for real-time and tail latency,
> this hurts. For example, this might increase dropping frames of UI which
> is an important parameter to evaluate performance :-)
>

That's a real issue. As Chris mentioned before, I think we need to come
up with some clever data structure to solve this more naturally in the
future; a similar issue exists for cached swapin as well and has been
there for a while. On the other hand, maybe applications that are
extremely latency sensitive should try to avoid swapping on fault? A
swapin can cause other issues like reclaim, throttling or contention
with many other things, and these seem to have a higher chance of
happening than this race.
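
As a concrete illustration of avoiding swap-in faults altogether, a
latency-critical process can pin its mappings up front; a minimal
userspace sketch (not taken from this thread):

#include <stdio.h>
#include <sys/mman.h>

int main(void)
{
        /* Lock current and future mappings so they are never swapped out */
        if (mlockall(MCL_CURRENT | MCL_FUTURE)) {
                perror("mlockall");
                return 1;
        }
        /* ... latency-critical work runs without hitting swap-in ... */
        return 0;
}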

> BTW, I wonder if ying's previous proposal - moving swapcache_prepare()
> after swap_read_folio() will further help decrease the number?

We can move swapcache_prepare() after the folio allocation or the
cgroup charge, but I didn't see an observable change in the statistics;
for some workloads the reading is even worse. I think that's mostly
noise, or a higher swap-out rate, since all raced threads will now
allocate an extra folio. Applications that have many pages swapped out
due to a memory limit are already on the edge of triggering another
reclaim, so a dozen more folio allocations could just trigger that...

And we can't move it after swap_read_folio()... that is exactly the
window we want to protect.
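
To make the ordering constraint explicit, a condensed view of the
SWP_SYNCHRONOUS_IO path with the fix applied (simplified from the patch,
not compilable as-is):

        if (swapcache_prepare(entry))       /* pin the entry first ...      */
                goto out;
        need_clear_cache = true;
        folio = vma_alloc_folio(GFP_HIGHUSER_MOVABLE, 0, vma,
                                vmf->address, false);
        swap_read_folio(folio, true, NULL); /* ... so the entry cannot be   */
                                            /* freed and reused mid-read    */
        /* pte_same() check, set_pte_at(), swap_free() under the PT lock */
out:
        if (need_clear_cache)
                swapcache_clear(si, entry); /* unpin after the PTL is dropped */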

^ permalink raw reply	[flat|nested] 15+ messages in thread

* Re: [PATCH v4] mm/swap: fix race when skipping swapcache
       [not found]     ` <CAMgjq7A9Gy0QK1f379wR6gp=dJM9DLSqMDq8OH5nMF_XxiOggQ@mail.gmail.com>
@ 2024-02-20  5:37       ` Chengming Zhou
  0 siblings, 0 replies; 15+ messages in thread
From: Chengming Zhou @ 2024-02-20  5:37 UTC (permalink / raw)
  To: Kairui Song
  Cc: Barry Song, Huang, Ying, linux-mm, Andrew Morton, Chris Li,
	Minchan Kim, Barry Song, Yu Zhao, SeongJae Park,
	David Hildenbrand, Hugh Dickins, Johannes Weiner, Matthew Wilcox,
	Michal Hocko, Yosry Ahmed, Aaron Lu, stable, LKML

On 2024/2/20 13:32, Kairui Song wrote:
> On Tue, Feb 20, 2024 at 12:49 PM Chengming Zhou <zhouchengming@bytedance.com>
> wrote:
>>
>> On 2024/2/20 06:10, Barry Song wrote:
>>> On Mon, Feb 19, 2024 at 9:21 PM Kairui Song <ryncsn@gmail.com> wrote:
>>>>
>>>> From: Kairui Song <kasong@tencent.com>
>>>>
>>>> When skipping swapcache for SWP_SYNCHRONOUS_IO, if two or more threads
>>>> swapin the same entry at the same time, they get different pages (A,
> B).
>>>> Before one thread (T0) finishes the swapin and installs page (A)
>>>> to the PTE, another thread (T1) could finish swapin of page (B),
>>>> swap_free the entry, then swap out the possibly modified page
>>>> reusing the same entry. It breaks the pte_same check in (T0) because
>>>> PTE value is unchanged, causing ABA problem. Thread (T0) will
>>>> install a stalled page (A) into the PTE and cause data corruption.
>>>>
>>>> One possible callstack is like this:
>>>>
>>>> CPU0                                 CPU1
>>>> ----                                 ----
>>>> do_swap_page()                       do_swap_page() with same entry
>>>> <direct swapin path>                 <direct swapin path>
>>>> <alloc page A>                       <alloc page B>
>>>> swap_read_folio() <- read to page A  swap_read_folio() <- read to page
> B
>>>> <slow on later locks or interrupt>   <finished swapin first>
>>>> ..                                  set_pte_at()
>>>>                                      swap_free() <- entry is free
>>>>                                      <write to page B, now page A
> stalled>
>>>>                                      <swap out page B to same swap
> entry>
>>>> pte_same() <- Check pass, PTE seems
>>>>               unchanged, but page A
>>>>               is stalled!
>>>> swap_free() <- page B content lost!
>>>> set_pte_at() <- staled page A installed!
>>>>
>>>> And besides, for ZRAM, swap_free() allows the swap device to discard
>>>> the entry content, so even if page (B) is not modified, if
>>>> swap_read_folio() on CPU0 happens later than swap_free() on CPU1,
>>>> it may also cause data loss.
>>>>
>>>> To fix this, reuse swapcache_prepare which will pin the swap entry
> using
>>>> the cache flag, and allow only one thread to swap it in, also prevent
>>>> any parallel code from putting the entry in the cache. Release the pin
>>>> after PT unlocked.
>>>>
>>>> Racers just loop and wait since it's a rare and very short event.
>>>> A schedule_timeout_uninterruptible(1) call is added to avoid repeated
>>>> page faults wasting too much CPU, causing livelock or adding too much
>>>> noise to perf statistics. A similar livelock issue was described in
>>>> commit 029c4628b2eb ("mm: swap: get rid of livelock in swapin
> readahead")
>>>>
>>>> Reproducer:
>>>>
>>>> This race issue can be triggered easily using a well constructed
>>>> reproducer and patched brd (with a delay in read path) [1]:
>>>>
>>>> With latest 6.8 mainline, race caused data loss can be observed easily:
>>>> $ gcc -g -lpthread test-thread-swap-race.c && ./a.out
>>>>   Polulating 32MB of memory region...
>>>>   Keep swapping out...
>>>>   Starting round 0...
>>>>   Spawning 65536 workers...
>>>>   32746 workers spawned, wait for done...
>>>>   Round 0: Error on 0x5aa00, expected 32746, got 32743, 3 data loss!
>>>>   Round 0: Error on 0x395200, expected 32746, got 32743, 3 data loss!
>>>>   Round 0: Error on 0x3fd000, expected 32746, got 32737, 9 data loss!
>>>>   Round 0 Failed, 15 data loss!
>>>>
>>>> This reproducer spawns multiple threads sharing the same memory region
>>>> using a small swap device. Every two threads updates mapped pages one
> by
>>>> one in opposite direction trying to create a race, with one dedicated
>>>> thread keep swapping out the data out using madvise.
>>>>
>>>> The reproducer created a reproduce rate of about once every 5 minutes,
>>>> so the race should be totally possible in production.
>>>>
>>>> After this patch, I ran the reproducer for over a few hundred rounds
>>>> and no data loss observed.
>>>>
>>>> Performance overhead is minimal, microbenchmark swapin 10G from 32G
>>>> zram:
>>>>
>>>> Before:     10934698 us
>>>> After:      11157121 us
>>>> Cached:     13155355 us (Dropping SWP_SYNCHRONOUS_IO flag)
>>>>
>>>> Fixes: 0bcac06f27d7 ("mm, swap: skip swapcache for swapin of
> synchronous device")
>>>> Link:
> https://github.com/ryncsn/emm-test-project/tree/master/swap-stress-race [1]
>>>> Reported-by: "Huang, Ying" <ying.huang@intel.com>
>>>> Closes:
> https://lore.kernel.org/lkml/87bk92gqpx.fsf_-_@yhuang6-desk2.ccr.corp.intel.com/
>>>> Signed-off-by: Kairui Song <kasong@tencent.com>
>>>> Cc: stable@vger.kernel.org
>>>>
>>>> ---
>>>> V3:
> https://lore.kernel.org/all/20240216095105.14502-1-ryncsn@gmail.com/
>>>> Update from V3:
>>>> - Use schedule_timeout_uninterruptible(1) for now instead of
> schedule() to
>>>>   prevent the busy faulting task holds CPU and livelocks [Huang, Ying]
>>>>
>>>> V2:
> https://lore.kernel.org/all/20240206182559.32264-1-ryncsn@gmail.com/
>>>> Update from V2:
>>>> - Add a schedule() if raced to prevent repeated page faults wasting CPU
>>>>   and add noise to perf statistics.
>>>> - Use a bool to state the special case instead of reusing existing
>>>>   variables fixing error handling [Minchan Kim].
>>>>
>>>> V1: https://lore.kernel.org/all/20240205110959.4021-1-ryncsn@gmail.com/
>>>> Update from V1:
>>>> - Add some words on ZRAM case, it will discard swap content on
> swap_free
>>>>   so the race window is a bit different but cure is the same. [Barry
> Song]
>>>> - Update comments make it cleaner [Huang, Ying]
>>>> - Add a function place holder to fix CONFIG_SWAP=n built [SeongJae
> Park]
>>>> - Update the commit message and summary, refer to SWP_SYNCHRONOUS_IO
>>>>   instead of "direct swapin path" [Yu Zhao]
>>>> - Update commit message.
>>>> - Collect Review and Acks.
>>>>
>>>>  include/linux/swap.h |  5 +++++
>>>>  mm/memory.c          | 20 ++++++++++++++++++++
>>>>  mm/swap.h            |  5 +++++
>>>>  mm/swapfile.c        | 13 +++++++++++++
>>>>  4 files changed, 43 insertions(+)
>>>>
>>>> diff --git a/include/linux/swap.h b/include/linux/swap.h
>>>> index 4db00ddad261..8d28f6091a32 100644
>>>> --- a/include/linux/swap.h
>>>> +++ b/include/linux/swap.h
>>>> @@ -549,6 +549,11 @@ static inline int swap_duplicate(swp_entry_t swp)
>>>>         return 0;
>>>>  }
>>>>
>>>> +static inline int swapcache_prepare(swp_entry_t swp)
>>>> +{
>>>> +       return 0;
>>>> +}
>>>> +
>>>>  static inline void swap_free(swp_entry_t swp)
>>>>  {
>>>>  }
>>>> diff --git a/mm/memory.c b/mm/memory.c
>>>> index 7e1f4849463a..a99f5e7be9a5 100644
>>>> --- a/mm/memory.c
>>>> +++ b/mm/memory.c
>>>> @@ -3799,6 +3799,7 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
>>>>         struct page *page;
>>>>         struct swap_info_struct *si = NULL;
>>>>         rmap_t rmap_flags = RMAP_NONE;
>>>> +       bool need_clear_cache = false;
>>>>         bool exclusive = false;
>>>>         swp_entry_t entry;
>>>>         pte_t pte;
>>>> @@ -3867,6 +3868,20 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
>>>>         if (!folio) {
>>>>                 if (data_race(si->flags & SWP_SYNCHRONOUS_IO) &&
>>>>                     __swap_count(entry) == 1) {
>>>> +                       /*
>>>> +                        * Prevent parallel swapin from proceeding with
>>>> +                        * the cache flag. Otherwise, another thread
> may
>>>> +                        * finish swapin first, free the entry, and
> swapout
>>>> +                        * reusing the same entry. It's undetectable as
>>>> +                        * pte_same() returns true due to entry reuse.
>>>> +                        */
>>>> +                       if (swapcache_prepare(entry)) {
>>>> +                               /* Relax a bit to prevent rapid
> repeated page faults */
>>>> +                               schedule_timeout_uninterruptible(1);
>>>
>>> Not a ideal model, imaging two tasks,
>>>
>>> task A  - low priority running on a LITTLE core
>>> task B - high priority and have real-time requirements such as audio,
>>>               graphics running on a big core.
>>>
>>> The original code will make B win even if it is a bit later than A as
> its CPU is
>>> much faster to finish swap_read_folio for example from zRAM. task B's
>>> swap-in can finish very soon.
>>>
>>> With the patch, B will wait a tick and its real-time performance will be
>>> negatively affected from time to time once low priority and high
> priority
>>> tasks fault in the same PTE and high priority tasks are a bit later than
>>> low priority tasks. This is a kind of priority inversion.
>>>
>>> When we support large folio swap-in, things can get worse. For example,
>>> to swap-in 16 or even more pages in one do_swap_page, the chance for
>>> task A and task B located in the same range of 16 PTEs will increase
>>> though they are not located in the same PTE.
>>>
>>> Please consider this is not a blocker for this patch. But I will put
> the problem
>>> in my list and run some real tests on Android phones later.
>>
>> Good point. Late for the discussion, I'm wondering why not get an extra
> reference
>> on the swap entry, instead of swapcache_prepare()? Then the faster thread
> will
>> succeed, but can't free the swap entry. Later, the slower thread will
> find the
>> changed pte value and fail, and free the swap entry. Maybe I missed
> something?
> 
> Hi, Chengming
> 
> That was my initial purpose. Then found a lot of problems with it. Increase
> swap count here, it may race with another swap free and end up increasing
> the swap count of a freed entry.
> 
> That can be fixed with audits and new helpers, but there are many other
> potential issues too. One major problem is that after count bump, raced
> swap threads will fallback to cached swap in. Pages in swapcache can be
> swaped out without allocating an entry, making the problem we were trying
> to resolve more serious.

Thanks for your clarification! Right, there are many issues I just ignored...


^ permalink raw reply	[flat|nested] 15+ messages in thread

* Re: [PATCH v4] mm/swap: fix race when skipping swapcache
  2024-02-20  4:56       ` Kairui Song
@ 2024-02-20 10:26         ` Barry Song
  2024-02-20 16:32         ` Chris Li
  1 sibling, 0 replies; 15+ messages in thread
From: Barry Song @ 2024-02-20 10:26 UTC (permalink / raw)
  To: Kairui Song
  Cc: Andrew Morton, linux-mm, Huang, Ying, Chris Li, Minchan Kim,
	Barry Song, Yu Zhao, SeongJae Park, David Hildenbrand,
	Hugh Dickins, Johannes Weiner, Matthew Wilcox, Michal Hocko,
	Yosry Ahmed, Aaron Lu, stable, linux-kernel

On Tue, Feb 20, 2024 at 5:56 PM Kairui Song <ryncsn@gmail.com> wrote:
>
> On Tue, Feb 20, 2024 at 12:01 PM Barry Song <21cnbao@gmail.com> wrote:
> >
> > On Tue, Feb 20, 2024 at 4:42 PM Kairui Song <ryncsn@gmail.com> wrote:
> > >
> > > On Tue, Feb 20, 2024 at 9:31 AM Andrew Morton <akpm@linux-foundation.org> wrote:
> > > >
> > > > On Mon, 19 Feb 2024 16:20:40 +0800 Kairui Song <ryncsn@gmail.com> wrote:
> > > >
> > > > > From: Kairui Song <kasong@tencent.com>
> > > > >
> > > > > When skipping swapcache for SWP_SYNCHRONOUS_IO, if two or more threads
> > > > > swapin the same entry at the same time, they get different pages (A, B).
> > > > > Before one thread (T0) finishes the swapin and installs page (A)
> > > > > to the PTE, another thread (T1) could finish swapin of page (B),
> > > > > swap_free the entry, then swap out the possibly modified page
> > > > > reusing the same entry. It breaks the pte_same check in (T0) because
> > > > > PTE value is unchanged, causing ABA problem. Thread (T0) will
> > > > > install a stalled page (A) into the PTE and cause data corruption.
> > > > >
> > > > > @@ -3867,6 +3868,20 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
> > > > >       if (!folio) {
> > > > >               if (data_race(si->flags & SWP_SYNCHRONOUS_IO) &&
> > > > >                   __swap_count(entry) == 1) {
> > > > > +                     /*
> > > > > +                      * Prevent parallel swapin from proceeding with
> > > > > +                      * the cache flag. Otherwise, another thread may
> > > > > +                      * finish swapin first, free the entry, and swapout
> > > > > +                      * reusing the same entry. It's undetectable as
> > > > > +                      * pte_same() returns true due to entry reuse.
> > > > > +                      */
> > > > > +                     if (swapcache_prepare(entry)) {
> > > > > +                             /* Relax a bit to prevent rapid repeated page faults */
> > > > > +                             schedule_timeout_uninterruptible(1);
> > > >
> > > > Well this is unpleasant.  How often can we expect this to occur?
> > > >
> > >
> > > The chance is very low, using the current mainline kernel and ZRAM,
> > > even with threads set to race on purpose using the reproducer I
> > > provides, for 647132 page faults it occured 1528 times (~0.2%).
> > >
> > > If I run MySQL and sysbench with 128 threads and 16G buffer pool, with
> > > 6G cgroup limit and 32G ZRAM, it occured 1372 times for 40 min,
> > > 109930201 page faults in total (~0.001%).
> >
>
> Hi Barry,
>
> > it might not be a problem for throughput. but for real-time and tail latency,
> > this hurts. For example, this might increase dropping frames of UI which
> > is an important parameter to evaluate performance :-)
> >
>

Hi Kairui,

> That's a true issue, as Chris mentioned before I think we need to
> think of some clever data struct to solve this more naturally in the
> future, similar issue exists for cached swapin as well and it has been
> there for a while. On the other hand I think maybe applications that
> are extremely latency sensitive should try to avoid swap on fault? A
> swapin could cause other issues like reclaim, throttled or contention
> with many other things, these seem to have a higher chance than this
> race.

Ideally, if memory were very large, we could avoid swap or mlock a lot
of things. On Android phones, most anon memory is actually in swap,
since these are systems with limited memory. For example, users might
switch between a couple of applications; some cold app could be
entirely swapped out, yet those applications can be re-activated by the
user all of a sudden. We do mlock some limited amounts of memory, but
we don't abuse mlock() everywhere :-)
For a soft real-time system, a lot of other optimization is involved to
make sure RT/UI tasks get priority when acquiring locks, memory, etc.
Overall we live with swap but still try our best to give important
tasks low latency.

The current patch, to me, seems to add a new place where high-priority
tasks have no way to finish faster. But I do understand the percentage
is not high, and I have no doubt you have done your best work on this.

I'm just curious whether the number will increase many times over for
large folios swap-in, as the conflicting memory range is enlarged, and
what the impact on UI and RT tasks will be.

Thus, I have followed up on your work and made it support large folios
swap-in [1], as below. I will get phones to run it and update you with
the results after a couple of days (could be 3-4 weeks later).

Subject: [PATCH] mm: swap: introduce swapcache_prepare_nr and
 swapcache_clear_nr for large folios swap-in

Apply Kairui's work on large folios swap-in

Signed-off-by: Barry Song <v-songbaohua@oppo.com>
---

diff --git a/include/linux/swap.h b/include/linux/swap.h
index a293ef17c2b6..f1cf64c9ccb5 100644
--- a/include/linux/swap.h
+++ b/include/linux/swap.h
@@ -483,6 +483,7 @@ extern int add_swap_count_continuation(swp_entry_t, gfp_t);
 extern void swap_shmem_alloc(swp_entry_t);
 extern int swap_duplicate(swp_entry_t);
 extern int swapcache_prepare(swp_entry_t);
+extern int swapcache_prepare_nr(swp_entry_t, int nr);
 extern void swap_free(swp_entry_t);
 extern void swap_nr_free(swp_entry_t entry, int nr_pages);
 extern void swapcache_free_entries(swp_entry_t *entries, int n);
diff --git a/mm/memory.c b/mm/memory.c
index 2d27c087a39e..9cfd806a8236 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -3905,7 +3905,7 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
                                 * reusing the same entry. It's undetectable as
                                 * pte_same() returns true due to entry reuse.
                                 */
-                               if (swapcache_prepare(entry)) {
+                               if (swapcache_prepare_nr(entry, nr_pages)) {
                                        /* Relax a bit to prevent rapid repeated page faults */
                                        schedule_timeout_uninterruptible(1);
                                        goto out;
@@ -4194,7 +4194,7 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
 out:
        /* Clear the swap cache pin for direct swapin after PTL unlock */
        if (need_clear_cache)
-               swapcache_clear(si, entry);
+               swapcache_clear_nr(si, entry, nr_pages);
        if (si)
                put_swap_device(si);
        return ret;
@@ -4210,7 +4210,7 @@ vm_fault_t do_swap_page(struct vm_fault *vmf)
                folio_put(swapcache);
        }
        if (need_clear_cache)
-               swapcache_clear(si, entry);
+               swapcache_clear_nr(si, entry, nr_pages);
        if (si)
                put_swap_device(si);
        return ret;
diff --git a/mm/swap.h b/mm/swap.h
index 693d1b281559..a457496bd669 100644
--- a/mm/swap.h
+++ b/mm/swap.h
@@ -39,6 +39,7 @@ void delete_from_swap_cache(struct folio *folio);
 void clear_shadow_from_swap_cache(int type, unsigned long begin,
                                  unsigned long end);
 void swapcache_clear(struct swap_info_struct *si, swp_entry_t entry);
+void swapcache_clear_nr(struct swap_info_struct *si, swp_entry_t entry, int nr);
 struct folio *swap_cache_get_folio(swp_entry_t entry,
                struct vm_area_struct *vma, unsigned long addr);
 struct folio *filemap_get_incore_folio(struct address_space *mapping,
diff --git a/mm/swapfile.c b/mm/swapfile.c
index 6ecee63cf678..8c9d53f9f068 100644
--- a/mm/swapfile.c
+++ b/mm/swapfile.c
@@ -3322,65 +3322,76 @@ void si_swapinfo(struct sysinfo *val)
  * - swap-cache reference is requested but the entry is not used. -> ENOENT
  * - swap-mapped reference requested but needs continued swap count. -> ENOMEM
  */
-static int __swap_duplicate(swp_entry_t entry, unsigned char usage)
+
+static int __swap_duplicate_nr(swp_entry_t entry, int nr, unsigned char usage)
 {
        struct swap_info_struct *p;
        struct swap_cluster_info *ci;
        unsigned long offset;
-       unsigned char count;
-       unsigned char has_cache;
-       int err;
+       unsigned char count[SWAPFILE_CLUSTER];
+       unsigned char has_cache[SWAPFILE_CLUSTER];
+       int err, i;

        p = swp_swap_info(entry);

        offset = swp_offset(entry);
        ci = lock_cluster_or_swap_info(p, offset);

-       count = p->swap_map[offset];
+       for (i = 0; i < nr; i++) {
+               count[i] = p->swap_map[offset + i];

-       /*
-        * swapin_readahead() doesn't check if a swap entry is valid, so the
-        * swap entry could be SWAP_MAP_BAD. Check here with lock held.
-        */
-       if (unlikely(swap_count(count) == SWAP_MAP_BAD)) {
-               err = -ENOENT;
-               goto unlock_out;
+               /*
+                * swapin_readahead() doesn't check if a swap entry is valid, so the
+                * swap entry could be SWAP_MAP_BAD. Check here with lock held.
+                */
+               if (unlikely(swap_count(count[i]) == SWAP_MAP_BAD)) {
+                       err = -ENOENT;
+                       goto unlock_out;
+               }
+
+               has_cache[i] = count[i] & SWAP_HAS_CACHE;
+               count[i] &= ~SWAP_HAS_CACHE;
+               err = 0;
+
+               if (usage == SWAP_HAS_CACHE) {
+
+                       /* set SWAP_HAS_CACHE if there is no cache and entry is used */
+                       if (!has_cache[i] && count[i])
+                               has_cache[i] = SWAP_HAS_CACHE;
+                       else if (has_cache[i])          /* someone else added cache */
+                               err = -EEXIST;
+                       else                            /* no users remaining */
+                               err = -ENOENT;
+               } else if (count[i] || has_cache[i]) {
+
+                       if ((count[i] & ~COUNT_CONTINUED) < SWAP_MAP_MAX)
+                               count[i] += usage;
+                       else if ((count[i] & ~COUNT_CONTINUED) > SWAP_MAP_MAX)
+                               err = -EINVAL;
+                       else if (swap_count_continued(p, offset + i, count[i]))
+                               count[i] = COUNT_CONTINUED;
+                       else
+                               err = -ENOMEM;
+               } else
+                       err = -ENOENT;                  /* unused swap entry */
+
+               if (err)
+                       goto unlock_out;
        }

-       has_cache = count & SWAP_HAS_CACHE;
-       count &= ~SWAP_HAS_CACHE;
-       err = 0;
-
-       if (usage == SWAP_HAS_CACHE) {
-
-               /* set SWAP_HAS_CACHE if there is no cache and entry is used */
-               if (!has_cache && count)
-                       has_cache = SWAP_HAS_CACHE;
-               else if (has_cache)             /* someone else added cache */
-                       err = -EEXIST;
-               else                            /* no users remaining */
-                       err = -ENOENT;
-
-       } else if (count || has_cache) {
-
-               if ((count & ~COUNT_CONTINUED) < SWAP_MAP_MAX)
-                       count += usage;
-               else if ((count & ~COUNT_CONTINUED) > SWAP_MAP_MAX)
-                       err = -EINVAL;
-               else if (swap_count_continued(p, offset, count))
-                       count = COUNT_CONTINUED;
-               else
-                       err = -ENOMEM;
-       } else
-               err = -ENOENT;                  /* unused swap entry */
-
-       WRITE_ONCE(p->swap_map[offset], count | has_cache);
-
+       for (i = 0; i < nr; i++)
+               WRITE_ONCE(p->swap_map[offset + i], count[i] | has_cache[i]);
 unlock_out:
        unlock_cluster_or_swap_info(p, ci);
        return err;
 }

+
+static int __swap_duplicate(swp_entry_t entry, unsigned char usage)
+{
+       return __swap_duplicate_nr(entry, 1, usage);
+}
+
 /*
  * Help swapoff by noting that swap entry belongs to shmem/tmpfs
  * (in which case its reference count is never incremented).
@@ -3419,17 +3430,33 @@ int swapcache_prepare(swp_entry_t entry)
        return __swap_duplicate(entry, SWAP_HAS_CACHE);
 }

-void swapcache_clear(struct swap_info_struct *si, swp_entry_t entry)
+int swapcache_prepare_nr(swp_entry_t entry, int nr)
+{
+       return __swap_duplicate_nr(entry, nr, SWAP_HAS_CACHE);
+}
+
+void swapcache_clear_nr(struct swap_info_struct *si, swp_entry_t entry, int nr)
 {
        struct swap_cluster_info *ci;
        unsigned long offset = swp_offset(entry);
-       unsigned char usage;
+       unsigned char usage[SWAPFILE_CLUSTER];
+       int i;

        ci = lock_cluster_or_swap_info(si, offset);
-       usage = __swap_entry_free_locked(si, offset, SWAP_HAS_CACHE);
+       for (i = 0; i < nr; i++)
+               usage[i] = __swap_entry_free_locked(si, offset + i, SWAP_HAS_CACHE);
        unlock_cluster_or_swap_info(si, ci);
-       if (!usage)
-               free_swap_slot(entry);
+       for (i = 0; i < nr; i++) {
+               if (!usage[i]) {
+                       free_swap_slot(entry);
+                       entry.val++;
+               }
+       }
+}
+
+void swapcache_clear(struct swap_info_struct *si, swp_entry_t entry)
+{
+       swapcache_clear_nr(si, entry, 1);
 }

 struct swap_info_struct *swp_swap_info(swp_entry_t entry)

[1] https://lore.kernel.org/linux-mm/20240118111036.72641-1-21cnbao@gmail.com/

>
> > BTW, I wonder if ying's previous proposal - moving swapcache_prepare()
> > after swap_read_folio() will further help decrease the number?
>
> We can move the swapcache_prepare after folio alloc or cgroup charge,
> but I didn't see an observable change from statistics, for some
> workload the reading is even worse. I think that's mostly due to
> noise, or higher swap out rate since all raced threads will alloc an
> extra folio now. Applications that have many pages swapped out due to
> memory limit are already on the edge of triggering another reclaim, so
> a dozen more folio alloc could just trigger that...

Sometimes. The system might not be pushed to the edge because of this
app, but because users launch another app in the foreground and this
one moves to the background.

>
> And we can't move it after swap_read_folio()... That's exactly what we
> want to protect.

understood, thanks!

Barry

^ permalink raw reply related	[flat|nested] 15+ messages in thread

* Re: [PATCH v4] mm/swap: fix race when skipping swapcache
  2024-02-20  4:56       ` Kairui Song
  2024-02-20 10:26         ` Barry Song
@ 2024-02-20 16:32         ` Chris Li
       [not found]           ` <CAMgjq7BHAk_6ktCruKq_Yc30n++yhUyKTqzQuJ9fPvGVNNSdVA@mail.gmail.com>
  1 sibling, 1 reply; 15+ messages in thread
From: Chris Li @ 2024-02-20 16:32 UTC (permalink / raw)
  To: Kairui Song
  Cc: Barry Song, Andrew Morton, linux-mm, Huang, Ying, Minchan Kim,
	Barry Song, Yu Zhao, SeongJae Park, David Hildenbrand,
	Hugh Dickins, Johannes Weiner, Matthew Wilcox, Michal Hocko,
	Yosry Ahmed, Aaron Lu, stable, linux-kernel

On Mon, Feb 19, 2024 at 8:56 PM Kairui Song <ryncsn@gmail.com> wrote:

>
> Hi Barry,
>
> > it might not be a problem for throughput. but for real-time and tail latency,
> > this hurts. For example, this might increase dropping frames of UI which
> > is an important parameter to evaluate performance :-)
> >
>
> That's a true issue, as Chris mentioned before I think we need to
> think of some clever data struct to solve this more naturally in the
> future, similar issue exists for cached swapin as well and it has been
> there for a while. On the other hand I think maybe applications that
> are extremely latency sensitive should try to avoid swap on fault? A
> swapin could cause other issues like reclaim, throttled or contention
> with many other things, these seem to have a higher chance than this
> race.


Yes, I do think the best long term solution is to have some clever
data structure to solve the synchronization issue and allow racing
threads to make forward progress at the same time.

I have also explored some (failed) synchronization ideas, for example
keeping a run-time swap entry refcount separate from the swap_map
count. BTW, the zswap entry->refcount behaves like that: it is separate
from the swap entry and manages the temporary run-time usage count held
by the function. However, that idea has its own problems as well. It
needs an xarray to track the run-time refcount of a swap entry (only
stored in the xarray when a CPU fails to get the SWAP_HAS_CACHE bit),
and when we are done with the page fault, we still need to look up the
xarray to make sure there is no racing CPU and to put the refcount into
the xarray. That kind of defeats the purpose of avoiding the swap cache
in the first place: we still need to do the xarray lookup in the normal
path.
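
A very rough sketch of that (abandoned) idea, only to make the trade-off
concrete; none of these helpers exist upstream and the names are made up:

/* Hypothetical only: illustrative, not a proposal */
static DEFINE_XARRAY(swapin_extra_refs);        /* swap offset -> parked ref */

static int swapin_pin(swp_entry_t entry)
{
        if (!swapcache_prepare(entry))  /* winner takes SWAP_HAS_CACHE */
                return 0;
        /* loser parks a run-time reference instead of waiting */
        return xa_err(xa_store(&swapin_extra_refs, swp_offset(entry),
                               xa_mk_value(1), GFP_KERNEL));
}

static void swapin_unpin(struct swap_info_struct *si, swp_entry_t entry)
{
        /*
         * Even the winning fast path has to check the xarray here for a
         * parked reference, so every fault pays for a lookup again, which
         * defeats the point of skipping the swap cache.
         */
        if (xa_erase(&swapin_extra_refs, swp_offset(entry))) {
                /* hand the entry over to the parked racer here */
        }
        swapcache_clear(si, entry);
}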

I came to realize that, while this current fix is not perfect (I still
wish we had a better solution that does not pause the racing CPUs), it
is better than leaving this data corruption issue unfixed, and the
patch remains relatively simple. Yes, it has latency issues, but that
is still better than data corruption. It also doesn't stop us from
coming up with better solutions later on. If we want to address the
synchronization in a way that does not block other CPUs, it will
likely require a much bigger change.
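
For reference, the shape of the fix under discussion is roughly the
following (a heavily simplified sketch of the do_swap_page()
skip-swapcache path, paraphrased rather than the literal diff; error
paths are omitted, and the release helper is the swapcache_clear()
the patch introduces, if I recall the name correctly):

        /* si is the swap_info_struct for @entry */
        if (swapcache_prepare(entry)) {
                /*
                 * A parallel swapin (or the swap cache itself) already
                 * holds SWAP_HAS_CACHE for this entry; back off briefly
                 * and let the fault be retried instead of racing.
                 */
                schedule_timeout_uninterruptible(1);
                goto out;
        }
        need_clear_cache = true;

        /* ... allocate folio, swap_read_folio(), pte_same() check,
         * set_pte_at(), swap_free() ... */
out:
        if (need_clear_cache)
                swapcache_clear(si, entry); /* drop the pin once the PT lock is released */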

Unless we have a better suggestion, this seems to be the best among
the alternatives so far.

Chris

>
> > BTW, I wonder if ying's previous proposal - moving swapcache_prepare()
> > after swap_read_folio() will further help decrease the number?
>
> We can move the swapcache_prepare after folio alloc or cgroup charge,
> but I didn't see an observable change from statistics, for some
> workload the reading is even worse. I think that's mostly due to
> noise, or higher swap out rate since all raced threads will alloc an
> extra folio now. Applications that have many pages swapped out due to
> memory limit are already on the edge of triggering another reclaim, so
> a dozen more folio alloc could just trigger that...
>
> And we can't move it after swap_read_folio()... That's exactly what we
> want to protect.
>

^ permalink raw reply	[flat|nested] 15+ messages in thread

* Re: [PATCH v4] mm/swap: fix race when skipping swapcache
       [not found]           ` <CAMgjq7BHAk_6ktCruKq_Yc30n++yhUyKTqzQuJ9fPvGVNNSdVA@mail.gmail.com>
@ 2024-02-27 23:01             ` Chris Li
  0 siblings, 0 replies; 15+ messages in thread
From: Chris Li @ 2024-02-27 23:01 UTC (permalink / raw)
  To: Kairui Song
  Cc: Barry Song, Andrew Morton, linux-mm, Huang, Ying, Minchan Kim,
	Barry Song, Yu Zhao, SeongJae Park, David Hildenbrand,
	Hugh Dickins, Johannes Weiner, Matthew Wilcox, Michal Hocko,
	Yosry Ahmed, Aaron Lu, stable, LKML

On Tue, Feb 27, 2024 at 10:14 AM Kairui Song <ryncsn@gmail.com> wrote:
>
> On Wed, Feb 21, 2024 at 12:32 AM Chris Li <chrisl@kernel.org> wrote:
> >
> > On Mon, Feb 19, 2024 at 8:56 PM Kairui Song <ryncsn@gmail.com> wrote:
> >
> > >
> > > Hi Barry,
> > >
> > > > it might not be a problem for throughput. but for real-time and tail latency,
> > > > this hurts. For example, this might increase dropping frames of UI which
> > > > is an important parameter to evaluate performance :-)
> > > >
> > >
> > > That's a true issue. As Chris mentioned before, I think we need to
> > > come up with some clever data structure to solve this more naturally
> > > in the future; a similar issue exists for cached swapin as well and
> > > has been there for a while. On the other hand, maybe applications
> > > that are extremely latency sensitive should try to avoid swap on
> > > fault? A swapin could cause other issues like reclaim, throttling or
> > > contention with many other things, and those seem to have a higher
> > > chance of hitting than this race.
> >
> >
> > Yes, I do think the best long-term solution is to have some clever
> > data structure that solves the synchronization issue and allows racing
> > threads to make forward progress at the same time.
> >
> > I have also explored some (failed) synchronization ideas, for example
> > keeping a run-time swap entry refcount separate from the swap_map
> > count. BTW, the zswap entry->refcount behaves like that: it is separate
> > from the swap entry and manages the temporary run-time usage count held
> > by the function. However, that idea has its own problem: it needs an
> > xarray to track the run-time refcount of the swap entry (only stored in
> > the xarray when a CPU fails to get the SWAP_HAS_CACHE bit). When we are
> > done with the page fault, we still need to look up the xarray to make
> > sure there is no racing CPU and to put the refcount in the xarray. That
> > kind of defeats the purpose of avoiding the swap cache in the first
> > place; we still need to do the xarray lookup in the normal path.
> >
> > I came to realize that, while this current fix is not perfect (I still
> > wish we had a better solution that does not pause the racing CPUs), it
> > is better than leaving this data corruption issue unfixed, and the
> > patch remains relatively simple. Yes, it has latency issues, but that
> > is still better than data corruption. It also doesn't stop us from
> > coming up with better solutions later on. If we want to address the
> > synchronization in a way that does not block other CPUs, it will
> > likely require a much bigger change.
> >
> > Unless we have a better suggestion, this seems to be the best among
> > the alternatives so far.
> >
>
> Hi,
>
> Thanks for the comments. I've been trying some ideas locally, and I think a simple and straightforward solution exists: we just don't skip the swap cache xarray.

Yes, I have been pondering about that as well.

Notice that __read_swap_cache_async() has a similar
"schedule_timeout_uninterruptible(1)" when swapcache_prepare(entry)
fails to grab the SWAP_HAS_CACHE bit. So falling back to using the
swap cache does not automatically solve the latency issue; a similar
delay exists in the swap cache case as well.
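
Paraphrasing from memory (so a rough sketch rather than an exact quote
of the code), the existing retry there has roughly this shape:

        for (;;) {
                /* if the folio is already in the swap cache, return it */

                err = swapcache_prepare(entry);
                if (!err)
                        break;          /* we own SWAP_HAS_CACHE, go read the page */
                if (err != -EEXIST)
                        return NULL;    /* the entry went away, give up */
                /*
                 * Somebody else holds SWAP_HAS_CACHE but has not added
                 * the folio to the swap cache yet -- the same tiny sleep
                 * the skip-swapcache fix now uses.
                 */
                schedule_timeout_uninterruptible(1);
        }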

> The current reason we skip it is performance, but with some optimization the performance should be as good as skipping it (the current behavior). Note that even in the swap cache bypass path, we need to do one lookup and one modification (deleting the shadow); that can't be skipped. So the usage of the swap cache can be better organized and optimized.

> Once all swapins make use of the swap cache, a swapin can insert the folio into the swap cache xarray first, then set the cache bit in the swap map. I'm thinking about reusing the folio lock, or having an intermediate value in the xarray, so raced swapins can wait properly. There are some tricky parts in syncing with the swap map, though.

Inserting into the swap cache xarray first and setting the
SWAP_HAS_CACHE bit later will need more auditing for races. I assume
you take the swap device/cluster lock before inserting the folio into
the swap cache xarray?

Chris
>
> Currently working on a series, will send in a few weeks if it works.

^ permalink raw reply	[flat|nested] 15+ messages in thread

end of thread, other threads:[~2024-02-27 23:01 UTC | newest]

Thread overview: 15+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2024-02-19  8:20 [PATCH v4] mm/swap: fix race when skipping swapcache Kairui Song
2024-02-19  8:47 ` Huang, Ying
2024-02-19  9:02 ` David Hildenbrand
2024-02-19 16:37 ` Chris Li
2024-02-19 22:10 ` Barry Song
2024-02-20  1:09   ` Huang, Ying
2024-02-20  4:49   ` Chengming Zhou
     [not found]     ` <CAMgjq7A9Gy0QK1f379wR6gp=dJM9DLSqMDq8OH5nMF_XxiOggQ@mail.gmail.com>
2024-02-20  5:37       ` Chengming Zhou
2024-02-20  1:31 ` Andrew Morton
2024-02-20  3:42   ` Kairui Song
2024-02-20  4:01     ` Barry Song
2024-02-20  4:56       ` Kairui Song
2024-02-20 10:26         ` Barry Song
2024-02-20 16:32         ` Chris Li
     [not found]           ` <CAMgjq7BHAk_6ktCruKq_Yc30n++yhUyKTqzQuJ9fPvGVNNSdVA@mail.gmail.com>
2024-02-27 23:01             ` Chris Li
