linux-kernel.vger.kernel.org archive mirror
* Re: [PATCH] [PATCH] mm: disable preemption before swapcache_free
       [not found] <2018072514375722198958@wingtech.com>
@ 2018-07-25  7:40 ` Michal Hocko
       [not found]   ` <2018072515575576668668@wingtech.com>
  2018-07-25 21:16 ` Andrew Morton
  1 sibling, 1 reply; 10+ messages in thread
From: Michal Hocko @ 2018-07-25  7:40 UTC (permalink / raw)
  To: zhaowuyun
  Cc: mgorman, akpm, minchan, vinmenon, hannes, hillf.zj, linux-mm,
	linux-kernel

On Wed 25-07-18 14:37:58, zhaowuyun@wingtech.com wrote:
[...]
> Change-Id: I36d9df7ccff77c589b7157225410269c675a8504

What is this?

> Signed-off-by: zhaowuyun <zhaowuyun@wingtech.com>
> ---
> mm/vmscan.c | 9 +++++++++
> 1 file changed, 9 insertions(+)
>  
> diff --git a/mm/vmscan.c b/mm/vmscan.c
> index 2740973..acede002 100644
> --- a/mm/vmscan.c
> +++ b/mm/vmscan.c
> @@ -674,6 +674,12 @@ static int __remove_mapping(struct address_space *mapping, struct page *page,
>  	BUG_ON(!PageLocked(page));
>  	BUG_ON(mapping != page_mapping(page));
> +	/*
> +	 * preemption must be disabled so that the current task cannot be
> +	 * preempted before swapcache_free(swap) by a task doing the
> +	 * __read_swap_cache_async job on the same page
> +	 */
> +	preempt_disable();
>  	spin_lock_irqsave(&mapping->tree_lock, flags);

Hmm, but spin_lock_irqsave already implies the disabled preemption.
-- 
Michal Hocko
SUSE Labs

^ permalink raw reply	[flat|nested] 10+ messages in thread

* Re: Re: [PATCH] [PATCH] mm: disable preemption before swapcache_free
       [not found]   ` <2018072515575576668668@wingtech.com>
@ 2018-07-25  8:21     ` Michal Hocko
  2018-07-25 10:32       ` Michal Hocko
       [not found]       ` <2018072517530727482074@wingtech.com>
  0 siblings, 2 replies; 10+ messages in thread
From: Michal Hocko @ 2018-07-25  8:21 UTC (permalink / raw)
  To: zhaowuyun
  Cc: mgorman, akpm, minchan, vinmenon, hannes, hillf.zj, linux-mm,
	linux-kernel

[Please do not top post - thank you]
[CC Hugh - the original patch was http://lkml.kernel.org/r/2018072514375722198958@wingtech.com]

On Wed 25-07-18 15:57:55, zhaowuyun@wingtech.com wrote:
> That is a BUG we found in mm/vmscan.c at KERNEL VERSION 4.9.82

The code is quite similar in the current tree as well.

> Summary: TASK A (normal priority) doing __remove_mapping on a page is preempted by TASK B (RT priority) doing __read_swap_cache_async;
> task A is preempted before swapcache_free, leaving the SWAP_HAS_CACHE flag set,
> so task B, doing __read_swap_cache_async, never succeeds at swapcache_prepare(entry) because the swap cache entry already exists, and it loops forever because it is an RT thread...
> the spin lock is dropped before swapcache_free, so disable preemption until swapcache_free has executed ...

OK, I see your point now. I have missed the lock is dropped before
swapcache_free. How can preemption disabling prevent this race to happen
while the code is preempted by an IRQ?
-- 
Michal Hocko
SUSE Labs


* Re: Re: [PATCH] [PATCH] mm: disable preemption before swapcache_free
  2018-07-25  8:21     ` Michal Hocko
@ 2018-07-25 10:32       ` Michal Hocko
       [not found]       ` <2018072517530727482074@wingtech.com>
  1 sibling, 0 replies; 10+ messages in thread
From: Michal Hocko @ 2018-07-25 10:32 UTC (permalink / raw)
  To: zhaowuyun
  Cc: mgorman, akpm, minchan, vinmenon, hannes, hillf.zj, linux-mm,
	linux-kernel, Hugh Dickins

On Wed 25-07-18 10:21:00, Michal Hocko wrote:
> [Please do not top post - thank you]
> [CC Hugh - the original patch was http://lkml.kernel.org/r/2018072514375722198958@wingtech.com]

now for real

> On Wed 25-07-18 15:57:55, zhaowuyun@wingtech.com wrote:
> > That is a BUG we found in mm/vmscan.c at KERNEL VERSION 4.9.82
> 
> The code is quite similar in the current tree as well.
> 
> > Summary: TASK A (normal priority) doing __remove_mapping on a page is preempted by TASK B (RT priority) doing __read_swap_cache_async;
> > task A is preempted before swapcache_free, leaving the SWAP_HAS_CACHE flag set,
> > so task B, doing __read_swap_cache_async, never succeeds at swapcache_prepare(entry) because the swap cache entry already exists, and it loops forever because it is an RT thread...
> > the spin lock is dropped before swapcache_free, so disable preemption until swapcache_free has executed ...
> 
> OK, I see your point now. I have missed the lock is dropped before
> swapcache_free. How can preemption disabling prevent this race to happen
> while the code is preempted by an IRQ?

-- 
Michal Hocko
SUSE Labs


* Re: Re: [PATCH] [PATCH] mm: disable preemption before swapcache_free
       [not found]       ` <2018072517530727482074@wingtech.com>
@ 2018-07-25 10:34         ` Michal Hocko
  0 siblings, 0 replies; 10+ messages in thread
From: Michal Hocko @ 2018-07-25 10:34 UTC (permalink / raw)
  To: zhaowuyun
  Cc: mgorman, akpm, minchan, vinmenon, hannes, hillf.zj, linux-mm,
	linux-kernel, Hugh Dickins

On Wed 25-07-18 17:53:07, zhaowuyun@wingtech.com wrote:
> >[Please do not top post - thank you]
> >[CC Hugh - the original patch was http://lkml.kernel.org/r/2018072514375722198958@wingtech.com]
> >
> >On Wed 25-07-18 15:57:55, zhaowuyun@wingtech.com wrote:
> >> That is a BUG we found in mm/vmscan.c at KERNEL VERSION 4.9.82
> >
> >The code is quite similar in the current tree as well.
> >
> >> Summary: TASK A (normal priority) doing __remove_mapping on a page is preempted by TASK B (RT priority) doing __read_swap_cache_async;
> >> task A is preempted before swapcache_free, leaving the SWAP_HAS_CACHE flag set,
> >> so task B, doing __read_swap_cache_async, never succeeds at swapcache_prepare(entry) because the swap cache entry already exists, and it loops forever because it is an RT thread...
> >> the spin lock is dropped before swapcache_free, so disable preemption until swapcache_free has executed ...
> >
> >OK, I see your point now. I have missed the lock is dropped before
> >swapcache_free. How can preemption disabling prevent this race to happen
> >while the code is preempted by an IRQ?
> >--
> >Michal Hocko
> >SUSE Labs 
> 
> Hi Michal,
> 
> The path that does __read_swap_cache_async runs in process context, so I think disabling preemption is enough.

So what you are saying is that no IRQ or other non-process context will
loop in __read_swap_cache_async, so the livelock is not possible?
-- 
Michal Hocko
SUSE Labs


* Re: [PATCH] [PATCH] mm: disable preemption before swapcache_free
       [not found] <2018072514375722198958@wingtech.com>
  2018-07-25  7:40 ` [PATCH] [PATCH] mm: disable preemption before swapcache_free Michal Hocko
@ 2018-07-25 21:16 ` Andrew Morton
       [not found]   ` <2018072610214038358990@wingtech.com>
  1 sibling, 1 reply; 10+ messages in thread
From: Andrew Morton @ 2018-07-25 21:16 UTC (permalink / raw)
  To: zhaowuyun
  Cc: mgorman, minchan, vinmenon, mhocko, hannes, hillf.zj, linux-mm,
	linux-kernel, Hugh Dickins

On Wed, 25 Jul 2018 14:37:58 +0800 "zhaowuyun@wingtech.com" <zhaowuyun@wingtech.com> wrote:

> From: zhaowuyun <zhaowuyun@wingtech.com>
>  
> issue is that there are two processes, A and B: A is kworker/u16:8,
> normal priority; B is AudioTrack, RT priority; they are on the
> same CPU 3.
>
> Task A is preempted by task B in the moment
> after __delete_from_swap_cache(page) and before swapcache_free(swap).
>
> Task B does __read_swap_cache_async in the do {} while loop; it
> will never find the page in swapper_space because the page was removed
> by task A, and it will never succeed in swapcache_prepare because
> the entry already has SWAP_HAS_CACHE set (-EEXIST).
>
> Task B is then stuck in the loop infinitely because it is an RT task;
> no one can preempt it.
>
> so we need to disable preemption until swapcache_free has executed.

Yes, right, sorry, I must have merged cbab0e4eec299 in my sleep. 
cond_resched() is a no-op in the presence of realtime policy threads,
and using it to attempt to yield to a different thread in this fashion
is broken.

Disabling preemption on the other side of the race should fix things,
but it's using a bandaid to plug the leakage from the earlier bandaid. 
The proper way to coordinate threads is to use a sleeping lock, such
as a mutex, or some other wait/wakeup mechanism.

And once that's done, we can hopefully eliminate the do loop from
__read_swap_cache_async().  That loop also services ENOMEM from
radix_tree_insert(), but __add_to_swap_cache() appears to handle that
OK; we shouldn't just loop around retrying the insert, and
radix_tree_preload() should ensure that radix_tree_insert() never fails
anyway.  Unless we're calling __read_swap_cache_async() with screwy
gfp_flags from somewhere.




* Re: Re: [PATCH] [PATCH] mm: disable preemption before swapcache_free
       [not found]   ` <2018072610214038358990@wingtech.com>
@ 2018-07-26  6:06     ` Michal Hocko
       [not found]       ` <20180726150323057627100@wingtech.com>
  0 siblings, 1 reply; 10+ messages in thread
From: Michal Hocko @ 2018-07-26  6:06 UTC (permalink / raw)
  To: zhaowuyun
  Cc: akpm, mgorman, minchan, vinmenon, hannes, hillf.zj, linux-mm,
	linux-kernel, Hugh Dickins

On Thu 26-07-18 10:21:40, zhaowuyun@wingtech.com wrote:
[...]
> Our project really needs a fix to this issue

Could you be more specific about why? My understanding is that RT tasks
usually have all their memory mlocked; otherwise all the real-time
expectations are gone already.
-- 
Michal Hocko
SUSE Labs


* Re: Re: [PATCH] [PATCH] mm: disable preemption before swapcache_free
       [not found]       ` <20180726150323057627100@wingtech.com>
@ 2018-07-26  7:44         ` Michal Hocko
  2018-07-26 22:11         ` Andrew Morton
  1 sibling, 0 replies; 10+ messages in thread
From: Michal Hocko @ 2018-07-26  7:44 UTC (permalink / raw)
  To: zhaowuyun
  Cc: akpm, mgorman, minchan, vinmenon, hannes, hillf.zj, linux-mm,
	linux-kernel, Hugh Dickins

On Thu 26-07-18 15:03:23, zhaowuyun@wingtech.com wrote:
> >On Thu 26-07-18 10:21:40, zhaowuyun@wingtech.com wrote:
> >[...]
> >> Our project really needs a fix to this issue
> >
> >Could you be more specific why? My understanding is that RT tasks
> >usually have all the memory mlocked otherwise all the real time
> >expectations are gone already.
> >--
> >Michal Hocko
> >SUSE Labs 
> 
> 
> The RT thread is created by a process with normal priority, and the process was asleep;
> then some task needed the RT thread to do something, so the process created this thread and set it to RT policy.
> I think that is the reason why an RT task would read from swap.

OK, I see. This design is quite fragile though. You are opening the door
to priority inversions and whatnot.

Anyway, the underlying swap issue should be fixed. Unfortunately I do
not have a great idea how to do that properly.

-- 
Michal Hocko
SUSE Labs


* Re: [PATCH] [PATCH] mm: disable preemption before swapcache_free
       [not found]       ` <20180726150323057627100@wingtech.com>
  2018-07-26  7:44         ` Michal Hocko
@ 2018-07-26 22:11         ` Andrew Morton
       [not found]           ` <20180727140749669129112@wingtech.com>
  1 sibling, 1 reply; 10+ messages in thread
From: Andrew Morton @ 2018-07-26 22:11 UTC (permalink / raw)
  To: zhaowuyun
  Cc: Michal Hocko, mgorman, minchan, vinmenon, hannes, hillf.zj,
	linux-mm, linux-kernel, Hugh Dickins

On Thu, 26 Jul 2018 15:03:23 +0800 "zhaowuyun@wingtech.com" <zhaowuyun@wingtech.com> wrote:

> >On Thu 26-07-18 10:21:40, zhaowuyun@wingtech.com wrote:
> >[...]
> >> Our project really needs a fix to this issue
> >
> >Could you be more specific why? My understanding is that RT tasks
> >usually have all the memory mlocked otherwise all the real time
> >expectations are gone already.
> >--
> >Michal Hocko
> >SUSE Labs 
> 
> 
> The RT thread is created by a process with normal priority, and the process was asleep;
> then some task needed the RT thread to do something, so the process created this thread and set it to RT policy.
> I think that is the reason why an RT task would read from swap.

A simpler bandaid might be to replace the cond_resched() with msleep(1).


* Re: Re: [PATCH] [PATCH] mm: disable preemption before swapcache_free
       [not found]           ` <20180727140749669129112@wingtech.com>
@ 2018-08-04 23:07             ` Hugh Dickins
       [not found]               ` <20180807101540612373235@wingtech.com>
  0 siblings, 1 reply; 10+ messages in thread
From: Hugh Dickins @ 2018-08-04 23:07 UTC (permalink / raw)
  To: zhaowuyun
  Cc: akpm, Michal Hocko, mgorman, minchan, vinmenon, hannes, hillf.zj,
	linux-mm, linux-kernel, Hugh Dickins

On Fri, 27 Jul 2018, zhaowuyun@wingtech.com wrote:
> >On Thu, 26 Jul 2018 15:03:23 +0800 "zhaowuyun@wingtech.com" <zhaowuyun@wingtech.com> wrote:
> >
> >> >On Thu 26-07-18 10:21:40, zhaowuyun@wingtech.com wrote:
> >> >[...]
> >> >> Our project really needs a fix to this issue
> >> >
> >> >Could you be more specific why? My understanding is that RT tasks
> >> >usually have all the memory mlocked otherwise all the real time
> >> >expectations are gone already.
> >> >--
> >> >Michal Hocko
> >> >SUSE Labs
> >>
> >>
> >> The RT thread is created by a process with normal priority, and the process was asleep;
> >> then some task needed the RT thread to do something, so the process created this thread and set it to RT policy.
> >> I think that is the reason why an RT task would read from swap.
> >
> >A simpler bandaid might be to replace the cond_resched() with msleep(1). 
> 
> 
> Thanks for the suggestion, we will try that.

Andrew's msleep(1) may be a good enough bandaid for you. And I share
Michal's doubt about your design, in which an RT thread meets swap:
this may not be the last problem you have with that.

But this is a real bug when CONFIG_PREEMPT=y, RT threads or not: we
just didn't notice, because it's usually hidden by the cond_resched().
(I think that was added in 3.10, because in 2.6.29 I had been guilty of
inserting a discard, and wait for completion, in between allocating swap
and adding to swap cache; but Shao Hua fixed my discard in 3.12.) Thanks
a lot for making us aware of this bug.

After reminding myself of the issues here, I disagree with much of what
has been said: we shall "always" want the loop in __read_swap_cache_async()
(though some of its error handling is probably superfluous now, agreed);
and your disabling of preemption is not just a bandaid, it's exactly the
right approach.

We could do something heavier, perhaps rearranging the swapcache tree work
to be done under swap_lock as well as tree_lock (I'm talking 4.9-language),
but that's very unlikely to be an improvement. Disabling preemption yokes
the two spinlocks together in an efficient way, without overhead on other
paths; on rare occasions we spin around __read_swap_cache_async() instead
of spinning around to acquire a spinlock.

But your patch is incomplete. The same needs to be done in
delete_from_swap_cache(), and we would also need to protect against
preemption between the get_swap_page() and the add_to_swap_cache(), in
add_to_swap() and in shmem_writepage(). The latter gets messy, but 4.11
(where Tim Chen uses SWAP_HAS_CACHE more widely) gives a good hint:
__read_swap_cache_async() callers are only interested in swap entries
that are already in use and still in use. (Though swapoff has to be more
careful, partly because one of its jobs is to clear out swap-cache-only
entries, partly because the current interface would mistake a NULL for
no-entry as out-of-memory.)

Below is the patch I would do for 4.9 (diffed against 4.9.117), and I'm
showing that because it's the simplest case. Although the principles stay
the same, the codebase here has gone through several shifts, and 4.19 will
probably be different again. So I'll test and post a patch against 4.19-rc
in a few weeks time, and that can then be backported to stable: but will
need several manual backports because of the intervening changes.

I did wonder whether just to extend the irq-disabled section in
delete_from_swap_cache() etc: that's easy, and works, and is even better
protection against spinning too long; but it's not absolutely necessary,
so all in all, probably better avoided. I did wonder whether to remove
the cond_resched(), but it's not a bad place for one, so I've left it in.

When checking worst cases of looping around __read_swap_cache_async(),
after the patch, I was worried for a while. I had naively imagined that
going more than twice around the loop should be vanishingly rare, but
that is not so at all. But the bad cases I looked into were all the same:
after forking, two processes, on HT siblings, each serving do_swap_page(),
trying to bring the same swap into its mm, with a sparse swapper_space
tree: one process gets to do all the work of allocating new radix-tree
nodes and bringing them into D-cache, while the other just spins around
__read_swap_cache_async() seeing SWAP_HAS_CACHE but not yet the page in
the radix-tree. That's okay, that makes sense.

Hugh
---

 mm/swap_state.c |   26 +++++++++++++-------------
 mm/swapfile.c   |    8 +++++++-
 mm/vmscan.c     |    3 +++
 3 files changed, 23 insertions(+), 14 deletions(-)

--- 4.9.117/mm/swap_state.c	2016-12-11 11:17:54.000000000 -0800
+++ linux/mm/swap_state.c	2018-08-04 11:50:46.577788766 -0700
@@ -225,9 +225,11 @@ void delete_from_swap_cache(struct page
 	address_space = swap_address_space(entry);
 	spin_lock_irq(&address_space->tree_lock);
 	__delete_from_swap_cache(page);
+	/* Expedite swapcache_free() to help __read_swap_cache_async() */
+	preempt_disable();
 	spin_unlock_irq(&address_space->tree_lock);
-
 	swapcache_free(entry);
+	preempt_enable();
 	put_page(page);
 }
 
@@ -337,19 +339,17 @@ struct page *__read_swap_cache_async(swp
 		if (err == -EEXIST) {
 			radix_tree_preload_end();
 			/*
-			 * We might race against get_swap_page() and stumble
-			 * across a SWAP_HAS_CACHE swap_map entry whose page
-			 * has not been brought into the swapcache yet, while
-			 * the other end is scheduled away waiting on discard
-			 * I/O completion at scan_swap_map().
+			 * We might race against __delete_from_swap_cache() and
+			 * stumble across a swap_map entry whose SWAP_HAS_CACHE
+			 * has not yet been cleared: hence preempt_disable()
+			 * in __remove_mapping() and delete_from_swap_cache(),
+			 * so they cannot schedule away before clearing it.
 			 *
-			 * In order to avoid turning this transitory state
-			 * into a permanent loop around this -EEXIST case
-			 * if !CONFIG_PREEMPT and the I/O completion happens
-			 * to be waiting on the CPU waitqueue where we are now
-			 * busy looping, we just conditionally invoke the
-			 * scheduler here, if there are some more important
-			 * tasks to run.
+			 * We need similar protection against racing calls to
+			 * __read_swap_cache_async(): preempt_disable() before
+			 * swapcache_prepare() above, preempt_enable() after
+			 * __add_to_swap_cache() below: which are already in
+			 * radix_tree_maybe_preload(), radix_tree_preload_end().
 			 */
 			cond_resched();
 			continue;
--- 4.9.117/mm/swapfile.c	2018-08-04 11:40:08.463504848 -0700
+++ linux/mm/swapfile.c	2018-08-04 11:50:46.577788766 -0700
@@ -2670,7 +2670,13 @@ static int __swap_duplicate(swp_entry_t
 		/* set SWAP_HAS_CACHE if there is no cache and entry is used */
 		if (!has_cache && count)
 			has_cache = SWAP_HAS_CACHE;
-		else if (has_cache)		/* someone else added cache */
+		/*
+		 * __read_swap_cache_async() can usually skip entries without
+		 * real usage (including those in between being allocated and
+		 * added to swap cache); but swapoff (!SWP_WRITEOK) must not.
+		 */
+		else if (has_cache &&
+			 (count || !(p->flags & SWP_WRITEOK)))
 			err = -EEXIST;
 		else				/* no users remaining */
 			err = -ENOENT;
--- 4.9.117/mm/vmscan.c	2018-08-04 11:40:08.471504902 -0700
+++ linux/mm/vmscan.c	2018-08-04 11:50:46.577788766 -0700
@@ -709,8 +709,11 @@ static int __remove_mapping(struct addre
 		swp_entry_t swap = { .val = page_private(page) };
 		mem_cgroup_swapout(page, swap);
 		__delete_from_swap_cache(page);
+		/* Expedite swapcache_free() for __read_swap_cache_async() */
+		preempt_disable();
 		spin_unlock_irqrestore(&mapping->tree_lock, flags);
 		swapcache_free(swap);
+		preempt_enable();
 	} else {
 		void (*freepage)(struct page *);
 		void *shadow = NULL;


* Re: Re: [PATCH] [PATCH] mm: disable preemption before swapcache_free
       [not found]               ` <20180807101540612373235@wingtech.com>
@ 2018-08-07  3:23                 ` Hugh Dickins
  0 siblings, 0 replies; 10+ messages in thread
From: Hugh Dickins @ 2018-08-07  3:23 UTC (permalink / raw)
  To: zhaowuyun
  Cc: Hugh Dickins, akpm, Michal Hocko, mgorman, minchan, vinmenon,
	hannes, hillf.zj, linux-mm, linux-kernel

On Tue, 7 Aug 2018, zhaowuyun@wingtech.com wrote:
> 
> Thanks for affirming the preemption-disabling modification and for
> pointing out its incompleteness: delete_from_swap_cache() needs the same protection.
> I'm curious: why not put swapcache_free(swap) under the protection of mapping->tree_lock?

That would violate the long-established lock ordering (see not-always-
kept-up-to-date comments at the head of mm/rmap.c). In particular,
swap_lock (and its more recent descendants, such as swap_info->lock)
can be held with interrupts enabled, whereas taking tree_lock (later
called i_pages lock) involves disabling interrupts. So: there would
be quite a lot of modifications required to do swapcache_free(swap)
under mapping->tree_lock.

Generally easier would be to take tree_lock under swap lock: that fits
the established lock ordering, and is already done in just a few places
- or am I thinking of free_swap_and_cache() in the old days before
find_get_page() did lockless lookup? But you didn't suggest that way,
because it's more awkward in the __remove_mapping() case: I expect
that could be worked around with an initial PageSwapCache check,
taking swap locks there first (not inside swapcache_free()) -
__remove_mapping()'s BUG_ON(!PageLocked) implies that won't be racy.

But either way round, why? What would be the advantage in doing so?
A more conventional nesting of locks, easier to describe and understand,
yes. But from a performance point of view, thinking of lock contention,
nothing but disadvantage. And don't forget the get_swap_page() end:
there it would be harder to deal with both locks together (at least
in the shmem case).

Hugh
