linux-mm.kvack.org archive mirror
* [PATCH V3 0/2] mm: fix race condition in MADV_FREE
@ 2017-09-26 17:26 Shaohua Li
  2017-09-26 17:26 ` [PATCH V3 1/2] mm: avoid marking swap cached page as lazyfree Shaohua Li
  2017-09-26 17:26 ` [PATCH V3 2/2] mm: fix data corruption caused by lazyfree page Shaohua Li
  0 siblings, 2 replies; 9+ messages in thread
From: Shaohua Li @ 2017-09-26 17:26 UTC (permalink / raw)
  To: linux-mm; +Cc: asavkov, Kernel-team, Shaohua Li

From: Shaohua Li <shli@fb.com>

Artem Savkov reported a race condition[1] in MADV_FREE. MADV_FREE clears
the pte dirty bit and then marks the page lazyfree (clears SwapBacked).
No lock prevents page reclaim from adding the page to the swap cache
between these two steps. This leads to two problems:
- A page in the swap cache gets marked lazyfree (SwapBacked cleared).
  This confuses code paths such as page fault handling.
- The page is added to the swap cache and freed, but it is never swapped
  out because its pte isn't dirty. This causes data corruption.
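
A rough editorial sketch of the interleaving (function names are from
the 4.13-era mm code; the exact call points are simplified):

    madvise(MADV_FREE)                    page reclaim
    ------------------                    ------------
    madvise_free_pte_range()
      clears the pte dirty bit
                                          add_to_swap()
                                            sets PG_swapcache, page is clean
                                          try_to_unmap()
                                            installs swap ptes
    pagevec drain, lru_lazyfree_fn()
      clears PG_swapbacked
                                          pageout() skipped, page is "clean"

The first problem is the PG_swapcache + !PG_swapbacked combination; the
second is the swap slot that is never written.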

These two patches fix the issues.

I know Minchan suggested combining these into one patch, but I really
think the separation makes things clearer, because these are two
distinct issues even though they stem from the same race.

Thanks,
Shaohua

V2->V3:
- reword patch log and code comments, no code change

V1->V2:
- dirty page in add_to_swap instead of in shrink_page_list as suggested by Minchan

Shaohua Li (2):
  mm: avoid marking swap cached page as lazyfree
  mm: fix data corruption caused by lazyfree page

 mm/swap.c       |  4 ++--
 mm/swap_state.c | 11 +++++++++++
 2 files changed, 13 insertions(+), 2 deletions(-)

-- 
2.9.5


* [PATCH V3 1/2] mm: avoid marking swap cached page as lazyfree
  2017-09-26 17:26 [PATCH V3 0/2] mm: fix race condition in MADV_FREE Shaohua Li
@ 2017-09-26 17:26 ` Shaohua Li
  2017-09-26 19:25   ` Johannes Weiner
                     ` (2 more replies)
  2017-09-26 17:26 ` [PATCH V3 2/2] mm: fix data corruption caused by lazyfree page Shaohua Li
  1 sibling, 3 replies; 9+ messages in thread
From: Shaohua Li @ 2017-09-26 17:26 UTC (permalink / raw)
  To: linux-mm
  Cc: asavkov, Kernel-team, Shaohua Li, stable, Johannes Weiner,
	Michal Hocko, Hillf Danton, Minchan Kim, Hugh Dickins,
	Mel Gorman, Andrew Morton

From: Shaohua Li <shli@fb.com>

MADV_FREE clears the pte dirty bit and then marks the page lazyfree
(clears SwapBacked). No lock prevents page reclaim from adding the page
to the swap cache between these two steps. Page reclaim can add the page
to the swap cache and unmap it; after reclaim, the page is put back on
the LRU. At that point we may start draining the per-cpu pagevec and
mark the page lazyfree, so the page can end up with SwapBacked cleared
but PG_swapcache set. On the next refault at that virtual address,
do_swap_page finds the page in the swap cache, but PageSwapCache() is
false for it because SwapBacked isn't set, so do_swap_page bails out and
does nothing. The task keeps running into the fault handler.
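
(Editorial sketch of where the fault handler gives up, simplified from
the 4.13-era do_swap_page(); the check shown was written for the racing
swapoff/try_to_free_swap case, but it also fires for this broken page
state:)

	page = lookup_swap_cache(entry);
	...
	/*
	 * PageSwapCache() is PageSwapBacked(page) && PG_swapcache, and
	 * the lazyfree pagevec drain cleared SwapBacked, so this check
	 * fails even though the page really is in the swap cache.
	 */
	if (unlikely(!PageSwapCache(page) || page_private(page) != entry.val))
		goto out_page;	/* fault returns, the access refaults, forever */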

Reported-and-tested-by: Artem Savkov <asavkov@redhat.com>
Fixes: 802a3a92ad7a ("mm: reclaim MADV_FREE pages")
Signed-off-by: Shaohua Li <shli@fb.com>
Cc: stable@vger.kernel.org
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Michal Hocko <mhocko@suse.com>
Cc: Hillf Danton <hillf.zj@alibaba-inc.com>
Cc: Minchan Kim <minchan@kernel.org>
Cc: Hugh Dickins <hughd@google.com>
Cc: Mel Gorman <mgorman@techsingularity.net>
Cc: Andrew Morton <akpm@linux-foundation.org>
Reviewed-by: Rik van Riel <riel@redhat.com>
---
 mm/swap.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/mm/swap.c b/mm/swap.c
index 9295ae9..a77d68f 100644
--- a/mm/swap.c
+++ b/mm/swap.c
@@ -575,7 +575,7 @@ static void lru_lazyfree_fn(struct page *page, struct lruvec *lruvec,
 			    void *arg)
 {
 	if (PageLRU(page) && PageAnon(page) && PageSwapBacked(page) &&
-	    !PageUnevictable(page)) {
+	    !PageSwapCache(page) && !PageUnevictable(page)) {
 		bool active = PageActive(page);
 
 		del_page_from_lru_list(page, lruvec,
@@ -665,7 +665,7 @@ void deactivate_file_page(struct page *page)
 void mark_page_lazyfree(struct page *page)
 {
 	if (PageLRU(page) && PageAnon(page) && PageSwapBacked(page) &&
-	    !PageUnevictable(page)) {
+	    !PageSwapCache(page) && !PageUnevictable(page)) {
 		struct pagevec *pvec = &get_cpu_var(lru_lazyfree_pvecs);
 
 		get_page(page);
-- 
2.9.5


* [PATCH V3 2/2] mm: fix data corruption caused by lazyfree page
  2017-09-26 17:26 [PATCH V3 0/2] mm: fix race condition in MADV_FREE Shaohua Li
  2017-09-26 17:26 ` [PATCH V3 1/2] mm: avoid marking swap cached page as lazyfree Shaohua Li
@ 2017-09-26 17:26 ` Shaohua Li
  2017-09-26 19:40   ` Johannes Weiner
  2017-09-26 23:20   ` Minchan Kim
  1 sibling, 2 replies; 9+ messages in thread
From: Shaohua Li @ 2017-09-26 17:26 UTC (permalink / raw)
  To: linux-mm
  Cc: asavkov, Kernel-team, Shaohua Li, stable, Johannes Weiner,
	Hillf Danton, Minchan Kim, Hugh Dickins, Rik van Riel,
	Mel Gorman, Andrew Morton

From: Shaohua Li <shli@fb.com>

MADV_FREE clears the pte dirty bit and then marks the page lazyfree
(clears SwapBacked). No lock prevents page reclaim from adding the page
to the swap cache between these two steps. If page reclaim finds such a
page, it simply adds the page to the swap cache without paging it out,
because the page is marked clean. The next page fault then reads from
the swap slot, which was never written and thus doesn't contain the
original data, so we get data corruption. To fix the issue, mark the
page dirty so that reclaim pages it out.

However, we shouldn't dirty every page that is clean and in the swap
cache: a swapped-in page is in the swap cache and clean too. So we only
dirty pages that are added to the swap cache by page reclaim, which
cannot be swapin pages. As Minchan suggested, simply dirtying the page
in add_to_swap does the job.
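
(Editorial sketch of the relevant shrink_page_list() steps, heavily
simplified, showing where the clean lazyfree page slips through; the
real code has many more branches:)

	if (PageAnon(page) && PageSwapBacked(page) && !PageSwapCache(page))
		if (!add_to_swap(page))	/* allocates a swap slot; now dirties too */
			goto activate_locked;

	try_to_unmap(page, flags);	/* pte was clean, so the page stays clean */

	if (PageDirty(page))
		pageout(page, mapping);	/* skipped for the racing lazyfree page */

	__remove_mapping(mapping, page, true);	/* page freed; the swap slot the
						   ptes point at was never written */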

Reported-by: Artem Savkov <asavkov@redhat.com>
Fixes: 802a3a92ad7a ("mm: reclaim MADV_FREE pages")
Signed-off-by: Shaohua Li <shli@fb.com>
Cc: stable@vger.kernel.org
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Hillf Danton <hillf.zj@alibaba-inc.com>
Cc: Minchan Kim <minchan@kernel.org>
Cc: Hugh Dickins <hughd@google.com>
Cc: Rik van Riel <riel@redhat.com>
Cc: Mel Gorman <mgorman@techsingularity.net>
Cc: Andrew Morton <akpm@linux-foundation.org>
Acked-by: Michal Hocko <mhocko@suse.com>
---
 mm/swap_state.c | 11 +++++++++++
 1 file changed, 11 insertions(+)

diff --git a/mm/swap_state.c b/mm/swap_state.c
index 71ce2d1..ed91091 100644
--- a/mm/swap_state.c
+++ b/mm/swap_state.c
@@ -242,6 +242,17 @@ int add_to_swap(struct page *page)
 		 * clear SWAP_HAS_CACHE flag.
 		 */
 		goto fail;
+	/*
+	 * Normally the page is dirtied in unmap because its pte should be
+	 * dirty. A special case is an MADV_FREE page: its pte could have the
+	 * dirty bit cleared while the page's SwapBacked bit is still set,
+	 * because clearing the dirty bit and the SwapBacked bit is not done
+	 * under a lock. For such a page, unmap will not set the dirty bit,
+	 * so page reclaim will not write the page out. This can cause data
+	 * corruption when the page is swapped in later. Always setting the
+	 * dirty bit for the page solves the problem.
+	 */
+	set_page_dirty(page);
 
 	return 1;
 
-- 
2.9.5


* Re: [PATCH V3 1/2] mm: avoid marking swap cached page as lazyfree
  2017-09-26 17:26 ` [PATCH V3 1/2] mm: avoid marking swap cached page as lazyfree Shaohua Li
@ 2017-09-26 19:25   ` Johannes Weiner
  2017-09-26 20:23   ` Michal Hocko
  2017-09-26 23:20   ` Minchan Kim
  2 siblings, 0 replies; 9+ messages in thread
From: Johannes Weiner @ 2017-09-26 19:25 UTC (permalink / raw)
  To: Shaohua Li
  Cc: linux-mm, asavkov, Kernel-team, Shaohua Li, stable, Michal Hocko,
	Hillf Danton, Minchan Kim, Hugh Dickins, Mel Gorman,
	Andrew Morton

On Tue, Sep 26, 2017 at 10:26:25AM -0700, Shaohua Li wrote:
> From: Shaohua Li <shli@fb.com>
> 
> MADV_FREE clears the pte dirty bit and then marks the page lazyfree
> (clears SwapBacked). No lock prevents page reclaim from adding the page
> to the swap cache between these two steps. Page reclaim can add the page
> to the swap cache and unmap it; after reclaim, the page is put back on
> the LRU. At that point we may start draining the per-cpu pagevec and
> mark the page lazyfree, so the page can end up with SwapBacked cleared
> but PG_swapcache set. On the next refault at that virtual address,
> do_swap_page finds the page in the swap cache, but PageSwapCache() is
> false for it because SwapBacked isn't set, so do_swap_page bails out and
> does nothing. The task keeps running into the fault handler.

The patch lgtm, but for the changelog it probably makes sense to start
with the user-visible behavior, i.e. the endlessly looping swap fault
handler because it thinks it's racing with the swap slot being freed.

Makes it easier for other distro/vendor people to identify this for
backporting.

On that note, I think this should go into 4.13 and be tagged for 4.12
stable.

> Reported-and-tested-by: Artem Savkov <asavkov@redhat.com>
> Fixes: 802a3a92ad7a ("mm: reclaim MADV_FREE pages")
> Signed-off-by: Shaohua Li <shli@fb.com>
> Cc: stable@vger.kernel.org
> Cc: Johannes Weiner <hannes@cmpxchg.org>
> Cc: Michal Hocko <mhocko@suse.com>
> Cc: Hillf Danton <hillf.zj@alibaba-inc.com>
> Cc: Minchan Kim <minchan@kernel.org>
> Cc: Hugh Dickins <hughd@google.com>
> Cc: Mel Gorman <mgorman@techsingularity.net>
> Cc: Andrew Morton <akpm@linux-foundation.org>
> Reviewed-by: Rik van Riel <riel@redhat.com>

Acked-by: Johannes Weiner <hannes@cmpxchg.org>


* Re: [PATCH V3 2/2] mm: fix data corruption caused by lazyfree page
  2017-09-26 17:26 ` [PATCH V3 2/2] mm: fix data corruption caused by lazyfree page Shaohua Li
@ 2017-09-26 19:40   ` Johannes Weiner
  2017-09-26 19:46     ` Shaohua Li
  2017-09-26 23:20   ` Minchan Kim
  1 sibling, 1 reply; 9+ messages in thread
From: Johannes Weiner @ 2017-09-26 19:40 UTC (permalink / raw)
  To: Shaohua Li
  Cc: linux-mm, asavkov, Kernel-team, Shaohua Li, stable, Hillf Danton,
	Minchan Kim, Hugh Dickins, Rik van Riel, Mel Gorman,
	Andrew Morton

On Tue, Sep 26, 2017 at 10:26:26AM -0700, Shaohua Li wrote:
> From: Shaohua Li <shli@fb.com>
> 
> MADV_FREE clears the pte dirty bit and then marks the page lazyfree
> (clears SwapBacked). No lock prevents page reclaim from adding the page
> to the swap cache between these two steps. If page reclaim finds such a
> page, it simply adds the page to the swap cache without paging it out,
> because the page is marked clean. The next page fault then reads from
> the swap slot, which was never written and thus doesn't contain the
> original data, so we get data corruption. To fix the issue, mark the
> page dirty so that reclaim pages it out.

Reclaim and MADV_FREE hold the page lock when manipulating the dirty
and the swapcache state.

Instead of undoing a racing MADV_FREE in reclaim, wouldn't it be safe
to check the dirty bit before add_to_swap() and skip clean pages?
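
(Editorial illustration of the suggestion; page_pte_clean() is a made-up
helper, and as the follow-up notes, there is no easy existing way to
test the pte dirty state here short of an rmap walk:)

	if (PageAnon(page) && PageSwapBacked(page) && !PageSwapCache(page)) {
		if (page_pte_clean(page))	/* hypothetical helper */
			goto keep_locked;	/* leave it for lazyfree reclaim */
		if (!add_to_swap(page))
			goto activate_locked;
	}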


* Re: [PATCH V3 2/2] mm: fix data corruption caused by lazyfree page
  2017-09-26 19:40   ` Johannes Weiner
@ 2017-09-26 19:46     ` Shaohua Li
  0 siblings, 0 replies; 9+ messages in thread
From: Shaohua Li @ 2017-09-26 19:46 UTC (permalink / raw)
  To: Johannes Weiner
  Cc: linux-mm, asavkov, Kernel-team, Shaohua Li, stable, Hillf Danton,
	Minchan Kim, Hugh Dickins, Rik van Riel, Mel Gorman,
	Andrew Morton

On Tue, Sep 26, 2017 at 03:40:17PM -0400, Johannes Weiner wrote:
> On Tue, Sep 26, 2017 at 10:26:26AM -0700, Shaohua Li wrote:
> > From: Shaohua Li <shli@fb.com>
> > 
> > MADV_FREE clears the pte dirty bit and then marks the page lazyfree
> > (clears SwapBacked). No lock prevents page reclaim from adding the page
> > to the swap cache between these two steps. If page reclaim finds such a
> > page, it simply adds the page to the swap cache without paging it out,
> > because the page is marked clean. The next page fault then reads from
> > the swap slot, which was never written and thus doesn't contain the
> > original data, so we get data corruption. To fix the issue, mark the
> > page dirty so that reclaim pages it out.
> 
> Reclaim and MADV_FREE hold the page lock when manipulating the dirty
> and the swapcache state.
> 
> Instead of undoing a racing MADV_FREE in reclaim, wouldn't it be safe
> to check the dirty bit before add_to_swap() and skip clean pages?

That would work, but I don't see an easy/clean way to check the dirty bit.
Since the race is rare, I don't think that optimization is worthwhile.

Thanks,
Shaohua


* Re: [PATCH V3 1/2] mm: avoid marking swap cached page as lazyfree
  2017-09-26 17:26 ` [PATCH V3 1/2] mm: avoid marking swap cached page as lazyfree Shaohua Li
  2017-09-26 19:25   ` Johannes Weiner
@ 2017-09-26 20:23   ` Michal Hocko
  2017-09-26 23:20   ` Minchan Kim
  2 siblings, 0 replies; 9+ messages in thread
From: Michal Hocko @ 2017-09-26 20:23 UTC (permalink / raw)
  To: Shaohua Li
  Cc: linux-mm, asavkov, Kernel-team, Shaohua Li, stable,
	Johannes Weiner, Hillf Danton, Minchan Kim, Hugh Dickins,
	Mel Gorman, Andrew Morton

On Tue 26-09-17 10:26:25, Shaohua Li wrote:
> From: Shaohua Li <shli@fb.com>
> 
> MADV_FREE clears the pte dirty bit and then marks the page lazyfree
> (clears SwapBacked). No lock prevents page reclaim from adding the page
> to the swap cache between these two steps. Page reclaim can add the page
> to the swap cache and unmap it; after reclaim, the page is put back on
> the LRU. At that point we may start draining the per-cpu pagevec and
> mark the page lazyfree, so the page can end up with SwapBacked cleared
> but PG_swapcache set. On the next refault at that virtual address,
> do_swap_page finds the page in the swap cache, but PageSwapCache() is
> false for it because SwapBacked isn't set, so do_swap_page bails out and
> does nothing. The task keeps running into the fault handler.

Thanks for the clarification in the changelog. It is much clearer now!

> Reported-and-tested-by: Artem Savkov <asavkov@redhat.com>
> Fixes: 802a3a92ad7a ("mm: reclaim MADV_FREE pages")
> Signed-off-by: Shaohua Li <shli@fb.com>
> Cc: stable@vger.kernel.org
> Cc: Johannes Weiner <hannes@cmpxchg.org>
> Cc: Michal Hocko <mhocko@suse.com>
> Cc: Hillf Danton <hillf.zj@alibaba-inc.com>
> Cc: Minchan Kim <minchan@kernel.org>
> Cc: Hugh Dickins <hughd@google.com>
> Cc: Mel Gorman <mgorman@techsingularity.net>
> Cc: Andrew Morton <akpm@linux-foundation.org>
> Reviewed-by: Rik van Riel <riel@redhat.com>

Marking for stable as suggested by Johannes makes perfect sense to me.
Acked-by: Michal Hocko <mhocko@suse.com>

> ---
>  mm/swap.c | 4 ++--
>  1 file changed, 2 insertions(+), 2 deletions(-)
> 
> diff --git a/mm/swap.c b/mm/swap.c
> index 9295ae9..a77d68f 100644
> --- a/mm/swap.c
> +++ b/mm/swap.c
> @@ -575,7 +575,7 @@ static void lru_lazyfree_fn(struct page *page, struct lruvec *lruvec,
>  			    void *arg)
>  {
>  	if (PageLRU(page) && PageAnon(page) && PageSwapBacked(page) &&
> -	    !PageUnevictable(page)) {
> +	    !PageSwapCache(page) && !PageUnevictable(page)) {
>  		bool active = PageActive(page);
>  
>  		del_page_from_lru_list(page, lruvec,
> @@ -665,7 +665,7 @@ void deactivate_file_page(struct page *page)
>  void mark_page_lazyfree(struct page *page)
>  {
>  	if (PageLRU(page) && PageAnon(page) && PageSwapBacked(page) &&
> -	    !PageUnevictable(page)) {
> +	    !PageSwapCache(page) && !PageUnevictable(page)) {
>  		struct pagevec *pvec = &get_cpu_var(lru_lazyfree_pvecs);
>  
>  		get_page(page);
> -- 
> 2.9.5
> 

-- 
Michal Hocko
SUSE Labs


* Re: [PATCH V3 1/2] mm: avoid marking swap cached page as lazyfree
  2017-09-26 17:26 ` [PATCH V3 1/2] mm: avoid marking swap cached page as lazyfree Shaohua Li
  2017-09-26 19:25   ` Johannes Weiner
  2017-09-26 20:23   ` Michal Hocko
@ 2017-09-26 23:20   ` Minchan Kim
  2 siblings, 0 replies; 9+ messages in thread
From: Minchan Kim @ 2017-09-26 23:20 UTC (permalink / raw)
  To: Shaohua Li
  Cc: linux-mm, asavkov, Kernel-team, Shaohua Li, stable,
	Johannes Weiner, Michal Hocko, Hillf Danton, Hugh Dickins,
	Mel Gorman, Andrew Morton

On Tue, Sep 26, 2017 at 10:26:25AM -0700, Shaohua Li wrote:
> From: Shaohua Li <shli@fb.com>
> 
> MADV_FREE clears the pte dirty bit and then marks the page lazyfree
> (clears SwapBacked). No lock prevents page reclaim from adding the page
> to the swap cache between these two steps. Page reclaim can add the page
> to the swap cache and unmap it; after reclaim, the page is put back on
> the LRU. At that point we may start draining the per-cpu pagevec and
> mark the page lazyfree, so the page can end up with SwapBacked cleared
> but PG_swapcache set. On the next refault at that virtual address,
> do_swap_page finds the page in the swap cache, but PageSwapCache() is
> false for it because SwapBacked isn't set, so do_swap_page bails out and
> does nothing. The task keeps running into the fault handler.

With the new description I see why you want to separate this. Yup, it
should be separated; sorry for the noise. What I was missing was the
change to PageSwapCache(), which now checks PG_swapbacked as well as
PG_swapcache. I didn't notice that change.
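
(Editorial note: the PageSwapCache() change in question, roughly; once
PG_swapcache moved onto the owner_priv bit, the flag is only meaningful
while PG_swapbacked is set:)

	/* old behavior: a plain flag test */
	return test_bit(PG_swapcache, &page->flags);

	/* new behavior: only valid for swap-backed pages */
	return PageSwapBacked(page) && test_bit(PG_swapcache, &page->flags);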

Acked-by: Minchan Kim <minchan@kernel.org>


* Re: [PATCH V3 2/2] mm: fix data corruption caused by lazyfree page
  2017-09-26 17:26 ` [PATCH V3 2/2] mm: fix data corruption caused by lazyfree page Shaohua Li
  2017-09-26 19:40   ` Johannes Weiner
@ 2017-09-26 23:20   ` Minchan Kim
  1 sibling, 0 replies; 9+ messages in thread
From: Minchan Kim @ 2017-09-26 23:20 UTC (permalink / raw)
  To: Shaohua Li
  Cc: linux-mm, asavkov, Kernel-team, Shaohua Li, stable,
	Johannes Weiner, Hillf Danton, Hugh Dickins, Rik van Riel,
	Mel Gorman, Andrew Morton

On Tue, Sep 26, 2017 at 10:26:26AM -0700, Shaohua Li wrote:
> From: Shaohua Li <shli@fb.com>
> 
> MADV_FREE clears the pte dirty bit and then marks the page lazyfree
> (clears SwapBacked). No lock prevents page reclaim from adding the page
> to the swap cache between these two steps. If page reclaim finds such a
> page, it simply adds the page to the swap cache without paging it out,
> because the page is marked clean. The next page fault then reads from
> the swap slot, which was never written and thus doesn't contain the
> original data, so we get data corruption. To fix the issue, mark the
> page dirty so that reclaim pages it out.
> 
> However, we shouldn't dirty every page that is clean and in the swap
> cache: a swapped-in page is in the swap cache and clean too. So we only
> dirty pages that are added to the swap cache by page reclaim, which
> cannot be swapin pages. As Minchan suggested, simply dirtying the page
> in add_to_swap does the job.
> 
> Reported-by: Artem Savkov <asavkov@redhat.com>
> Fixes: 802a3a92ad7a ("mm: reclaim MADV_FREE pages")
> Signed-off-by: Shaohua Li <shli@fb.com>

Acked-by: Minchan Kim <minchan@kernel.org>

