* [PATCH] mm,oom: Don't call schedule_timeout_killable() with oom_lock held.
@ 2018-01-22 13:46 Tetsuo Handa
  2018-01-23  8:38 ` Michal Hocko
  0 siblings, 1 reply; 26+ messages in thread
From: Tetsuo Handa @ 2018-01-22 13:46 UTC (permalink / raw)
  To: akpm
  Cc: linux-mm, Tetsuo Handa, David Rientjes, Johannes Weiner,
	Michal Hocko, Roman Gushchin, Tejun Heo, Vladimir Davydov

When I was examining a bug which occurs under CPU + memory pressure, I
observed that a thread which called out_of_memory() can sleep for minutes
at schedule_timeout_killable(1) with oom_lock held when many threads are
doing direct reclaim.
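
As a userspace analogue of this starvation pattern, consider the following
minimal sketch (not kernel code; the thread count and sleep length are
illustrative, and it needs far more runnable threads than CPUs to show the
effect):

----------------------------------------
/*
 * Many CPU-bound threads model direct reclaim, pthread_mutex_trylock()
 * models mutex_trylock(&oom_lock), and usleep(1000) models the one-jiffy
 * schedule_timeout_killable(1) done with the lock held. On an overloaded
 * machine the lock holder's wakeup is delayed, so the lock can be held
 * far longer than the intended one tick. Build with: gcc -O2 -pthread
 */
#include <pthread.h>
#include <stdio.h>
#include <time.h>
#include <unistd.h>

#define NTHREADS 256	/* illustrative; use many more threads than CPUs */

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

static double now(void)
{
	struct timespec ts;

	clock_gettime(CLOCK_MONOTONIC, &ts);
	return ts.tv_sec + ts.tv_nsec / 1e9;
}

static void *worker(void *arg)
{
	(void)arg;	/* unused */
	for (;;) {
		if (pthread_mutex_trylock(&lock) == 0) {
			double t0 = now();

			usleep(1000);	/* "schedule_timeout_killable(1)" */
			pthread_mutex_unlock(&lock);
			if (now() - t0 > 0.1)
				printf("lock held for %.3f seconds\n",
				       now() - t0);
		}
		/* else: keep retrying, modeling direct reclaim as CPU-bound work */
	}
	return NULL;
}

int main(void)
{
	pthread_t th[NTHREADS];
	int i;

	for (i = 0; i < NTHREADS; i++)
		pthread_create(&th[i], NULL, worker, NULL);
	pause();
	return 0;
}
----------------------------------------

The traces below show the real counterpart of this pattern.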

--------------------
[  163.357628] b.out invoked oom-killer: gfp_mask=0x14280ca(GFP_HIGHUSER_MOVABLE|__GFP_ZERO), nodemask=(null), order=0, oom_score_adj=0
[  163.360946] CPU: 0 PID: 554 Comm: b.out Not tainted 4.15.0-rc8+ #216
(...snipped...)
[  163.470193] Out of memory: Kill process 548 (b.out) score 6 or sacrifice child
[  163.471813] Killed process 1191 (b.out) total-vm:2108kB, anon-rss:60kB, file-rss:4kB, shmem-rss:0kB
(...snipped...)
[  248.016033] sysrq: SysRq : Show State
(...snipped...)
[  249.625720] b.out           R  running task        0   554    538 0x00000004
[  249.627778] Call Trace:
[  249.628513]  __schedule+0x142/0x4b2
[  249.629394]  schedule+0x27/0x70
[  249.630114]  schedule_timeout+0xd1/0x160
[  249.631029]  ? oom_kill_process+0x396/0x400
[  249.632039]  ? __next_timer_interrupt+0xc0/0xc0
[  249.633087]  schedule_timeout_killable+0x15/0x20
[  249.634097]  out_of_memory+0xea/0x270
[  249.634901]  __alloc_pages_nodemask+0x715/0x880
[  249.635920]  handle_mm_fault+0x538/0xe40
[  249.636888]  ? __enqueue_entity+0x63/0x70
[  249.637787]  ? set_next_entity+0x4b/0x80
[  249.638687]  __do_page_fault+0x199/0x430
[  249.639535]  ? vmalloc_sync_all+0x180/0x180
[  249.640452]  do_page_fault+0x1a/0x1e
[  249.641283]  common_exception+0x82/0x8a
(...snipped...)
[  462.676366] oom_reaper: reaped process 1191 (b.out), now anon-rss:0kB, file-rss:0kB, shmem-rss:0kB
--------------------

--------------------
[  269.985819] b.out invoked oom-killer: gfp_mask=0x14200ca(GFP_HIGHUSER_MOVABLE), nodemask=(null), order=0, oom_score_adj=0
[  269.988570] CPU: 0 PID: 9079 Comm: b.out Not tainted 4.15.0-rc8+ #217
(...snipped...)
[  270.073050] Out of memory: Kill process 914 (b.out) score 9 or sacrifice child
[  270.074660] Killed process 2208 (b.out) total-vm:2108kB, anon-rss:64kB, file-rss:4kB, shmem-rss:0kB
[  297.562824] sysrq: SysRq : Show State
(...snipped...)
[  471.716610] b.out           R  running task        0  9079   7400 0x00000000
[  471.718203] Call Trace:
[  471.718784]  __schedule+0x142/0x4b2
[  471.719577]  schedule+0x27/0x70
[  471.720294]  schedule_timeout+0xd1/0x160
[  471.721207]  ? oom_kill_process+0x396/0x400
[  471.722151]  ? __next_timer_interrupt+0xc0/0xc0
[  471.723215]  schedule_timeout_killable+0x15/0x20
[  471.724350]  out_of_memory+0xea/0x270
[  471.725201]  __alloc_pages_nodemask+0x715/0x880
[  471.726238]  ? radix_tree_lookup_slot+0x1f/0x50
[  471.727253]  filemap_fault+0x346/0x510
[  471.728120]  ? filemap_map_pages+0x245/0x2d0
[  471.729105]  ? unlock_page+0x30/0x30
[  471.729987]  __xfs_filemap_fault.isra.18+0x2d/0xb0
[  471.731488]  ? unlock_page+0x30/0x30
[  471.732364]  xfs_filemap_fault+0xa/0x10
[  471.733260]  __do_fault+0x11/0x30
[  471.734033]  handle_mm_fault+0x8e8/0xe40
[  471.735200]  __do_page_fault+0x199/0x430
[  471.736163]  ? common_exception+0x82/0x8a
[  471.737102]  ? vmalloc_sync_all+0x180/0x180
[  471.738061]  do_page_fault+0x1a/0x1e
[  471.738881]  common_exception+0x82/0x8a
(...snipped...)
[  566.969400] oom_reaper: reaped process 2208 (b.out), now anon-rss:0kB, file-rss:0kB, shmem-rss:0kB
--------------------

Allowing the OOM reaper to start reclaiming memory without waiting for
the oom_lock is not sufficient if the OOM reaper did not reclaim enough
memory. We need to make sure that the thread which called out_of_memory()
will release oom_lock shortly. Thus, this patch brings the short sleep
to outside of the OOM killer.

For the __alloc_pages_may_oom() case, this patch uses an uninterruptible
sleep rather than a killable sleep, because threads with
fatal_signal_pending() won't be able to use memory reserves unless
tsk_is_oom_victim() becomes true.

Signed-off-by: Tetsuo Handa <penguin-kernel@I-love.SAKURA.ne.jp>
Cc: Roman Gushchin <guro@fb.com>
Cc: Michal Hocko <mhocko@suse.com>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Vladimir Davydov <vdavydov.dev@gmail.com>
Cc: David Rientjes <rientjes@google.com>
Cc: Tejun Heo <tj@kernel.org>
---
 mm/oom_kill.c   | 18 +++---------------
 mm/page_alloc.c |  3 ++-
 2 files changed, 5 insertions(+), 16 deletions(-)

diff --git a/mm/oom_kill.c b/mm/oom_kill.c
index 8219001..47212442 100644
--- a/mm/oom_kill.c
+++ b/mm/oom_kill.c
@@ -1078,7 +1078,6 @@ bool out_of_memory(struct oom_control *oc)
 {
 	unsigned long freed = 0;
 	enum oom_constraint constraint = CONSTRAINT_NONE;
-	bool delay = false; /* if set, delay next allocation attempt */
 
 	if (oom_killer_disabled)
 		return false;
@@ -1128,10 +1127,8 @@ bool out_of_memory(struct oom_control *oc)
 		return true;
 	}
 
-	if (mem_cgroup_select_oom_victim(oc) && oom_kill_memcg_victim(oc)) {
-		delay = true;
+	if (mem_cgroup_select_oom_victim(oc) && oom_kill_memcg_victim(oc))
 		goto out;
-	}
 
 	select_bad_process(oc);
 	/* Found nothing?!?! Either we hang forever, or we panic. */
@@ -1139,20 +1136,10 @@ bool out_of_memory(struct oom_control *oc)
 		dump_header(oc, NULL);
 		panic("Out of memory and no killable processes...\n");
 	}
-	if (oc->chosen_task && oc->chosen_task != INFLIGHT_VICTIM) {
+	if (oc->chosen_task && oc->chosen_task != INFLIGHT_VICTIM)
 		oom_kill_process(oc, !is_memcg_oom(oc) ? "Out of memory" :
 				 "Memory cgroup out of memory");
-		delay = true;
-	}
-
 out:
-	/*
-	 * Give the killed process a good chance to exit before trying
-	 * to allocate memory again.
-	 */
-	if (delay)
-		schedule_timeout_killable(1);
-
 	return !!oc->chosen_task;
 }
 
@@ -1178,4 +1165,5 @@ void pagefault_out_of_memory(void)
 		return;
 	out_of_memory(&oc);
 	mutex_unlock(&oom_lock);
+	schedule_timeout_killable(1);
 }
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 4093728..e93bff1 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -3355,7 +3355,6 @@ void warn_alloc(gfp_t gfp_mask, nodemask_t *nodemask, const char *fmt, ...)
 	 */
 	if (!mutex_trylock(&oom_lock)) {
 		*did_some_progress = 1;
-		schedule_timeout_uninterruptible(1);
 		return NULL;
 	}
 
@@ -4116,6 +4115,8 @@ bool gfp_pfmemalloc_allowed(gfp_t gfp_mask)
 	/* Retry as long as the OOM killer is making progress */
 	if (did_some_progress) {
 		no_progress_loops = 0;
+		if (!tsk_is_oom_victim(current))
+			schedule_timeout_uninterruptible(1);
 		goto retry;
 	}
 
-- 
1.8.3.1


* Re: [PATCH] mm,oom: Don't call schedule_timeout_killable() with oom_lock held.
  2018-01-22 13:46 [PATCH] mm,oom: Don't call schedule_timeout_killable() with oom_lock held Tetsuo Handa
@ 2018-01-23  8:38 ` Michal Hocko
  2018-01-23 12:07   ` Tetsuo Handa
  0 siblings, 1 reply; 26+ messages in thread
From: Michal Hocko @ 2018-01-23  8:38 UTC (permalink / raw)
  To: Tetsuo Handa
  Cc: akpm, linux-mm, David Rientjes, Johannes Weiner, Roman Gushchin,
	Tejun Heo, Vladimir Davydov

On Mon 22-01-18 22:46:22, Tetsuo Handa wrote:
> When I was examining a bug which occurs under CPU + memory pressure, I
> observed that a thread which called out_of_memory() can sleep for minutes
> at schedule_timeout_killable(1) with oom_lock held when many threads are
> doing direct reclaim.
> 
> [...]
> 
> Allowing the OOM reaper to start reclaiming memory without waiting for
> the oom_lock is not sufficient if the OOM reaper did not reclaim enough
> memory. We need to make sure that the thread which called out_of_memory()
> will release oom_lock shortly. Thus, this patch brings the short sleep
> to outside of the OOM killer.

And why does sleeping outside of the lock make any sense? The whole
point of the sleep is to give the victim some time to exit and if we
sleep outside of the lock then contending allocating paths hit the oom
path early.

To be completely honest, I am not in love with this
schedule_timeout_uninterruptible(1). It is an ugly hack. It used to be
much more important in the past when the oom victim test was too
fragile. I strongly suspect that it is not needed these days so rather
than moving the sleep around I would try to remove it altogether.

Also, your changelog silently skips over some important details. The
system must be really overloaded when a short sleep can take minutes.
I would strongly suspect that such an overloaded system doesn't need
a short sleep to hold the oom lock for too long. All you need is to be
preempted. So this patch doesn't really solve any _real_ problem.

[...]
-- 
Michal Hocko
SUSE Labs


* Re: [PATCH] mm,oom: Don't call schedule_timeout_killable() with oom_lock held.
  2018-01-23  8:38 ` Michal Hocko
@ 2018-01-23 12:07   ` Tetsuo Handa
  2018-01-23 12:42     ` Michal Hocko
  0 siblings, 1 reply; 26+ messages in thread
From: Tetsuo Handa @ 2018-01-23 12:07 UTC (permalink / raw)
  To: mhocko; +Cc: akpm, linux-mm, rientjes, hannes, guro, tj, vdavydov.dev, torvalds

Michal Hocko wrote:
> On Mon 22-01-18 22:46:22, Tetsuo Handa wrote:
> > When I was examining a bug which occurs under CPU + memory pressure, I
> > observed that a thread which called out_of_memory() can sleep for minutes
> > at schedule_timeout_killable(1) with oom_lock held when many threads are
> > doing direct reclaim.
> > 
> > [...]
> > 
> > Allowing the OOM reaper to start reclaiming memory without waiting for
> > the oom_lock is not sufficient if the OOM reaper did not reclaim enough
> > memory. We need to make sure that the thread which called out_of_memory()
> > will release oom_lock shortly. Thus, this patch brings the short sleep
> > to outside of the OOM killer.
> 
> And why does sleeping outside of the lock make any sense? The whole
> point of the sleep is to give the victim some time to exit and if we
> sleep outside of the lock then contending allocating paths hit the oom
> path early.

Why does it matter that subsequent threads call out_of_memory() again
without giving the OOM victim some time to exit? The MMF_OOM_SKIP test will
prevent subsequent out_of_memory() calls from selecting next OOM victims.

> 
> To be completely honest, I am not in love with this
> schedule_timeout_uninterruptible(1). It is an ugly hack. It used to be
> much more important in the past when the oom victim test was too
> fragile. I strongly suspect that it is not needed these days so rather
> than moving the sleep around I would try to remove it altogether.

But this schedule_timeout_uninterruptible(1) serves as a guaranteed
sleep for PF_WQ_WORKER threads
( http://lkml.kernel.org/r/20170830064019.mfihbeu3mm5ygcrb@dhcp22.suse.cz ).

    > If we are under memory pressure, __zone_watermark_ok() can return false.
    > If __zone_watermark_ok() == false, when is schedule_timeout_*() called explicitly?

    If all zones fail with the watermark check then we should hit the oom
    path and sleep there. We do not do so for all cases though.

Thus, you cannot simply remove it.
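
For reference, the guaranteed sleep being referred to follows roughly this
pattern (a simplified sketch based on the discussion cited above, not a
buildable unit):

	/*
	 * A workqueue worker that merely calls cond_resched() stays
	 * runnable, and workqueue concurrency management will not start
	 * another worker until the current one really sleeps, which can
	 * stall other work items. Hence the explicit short sleep for
	 * PF_WQ_WORKER threads.
	 */
	if (current->flags & PF_WQ_WORKER)
		schedule_timeout_uninterruptible(1);	/* guaranteed sleep */
	else
		cond_resched();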

> 
> Also, your changelog silently skips over some important details. The
> system must be really overloaded when a short sleep can take minutes.

Yes, the system was really overloaded, for I was testing the below
reproducer on an x86_32 kernel.

----------------------------------------
#define _FILE_OFFSET_BITS 64
#include <stdlib.h>
#include <sys/types.h>
#include <unistd.h>
#include <fcntl.h>

static void run(void)
{
	unsigned long long size;
	char *buf = NULL;
	unsigned long long i;
	/* Fork 24 children which keep appending to a file, generating
	 * page cache pressure and I/O until the filesystem fills up. */
	for (i = 0; i < 24; i++) {
		if (fork() == 0) {
			static char buf[4096];
			const int fd = open("file", O_CREAT | O_WRONLY | O_APPEND, 0600);
			while (write(fd, buf, sizeof(buf)) == sizeof(buf));
			close(fd);
			_exit(0);
		}
	}
	/* Grow an anonymous buffer for as long as realloc() succeeds
	 * (bounded by 512GB so that the loop terminates). */
	for (size = 1048576; size < 512ULL * (1 << 30); size += 1048576) {
		char *cp = realloc(buf, size);
		if (!cp) {
			size -= 1048576;
			break;
		}
		buf = cp;
	}
	/* Touch one byte in every page so that physical memory is
	 * actually consumed, eventually invoking the OOM killer. */
	for (i = 0; i < size; i += 4096)
		buf[i] = 0;
	_exit(0);
}

int main(int argc, char *argv[])
{
	/* The parent endlessly forks children which re-execute this
	 * program with a dummy argument, so that each child runs run(). */
	if (argc != 1)
		run();
	else
		while (1)
			if (fork() == 0)
				execlp(argv[0], argv[0], "", NULL);
	return 0;
}
----------------------------------------

> I would strongly suspect that such an overloaded system doesn't need
> a short sleep to hold the oom lock for too long. All you need is to be
> preempted. So this patch doesn't really solve any _real_ problem.

Preemption makes the OOM situation much worse. The only way to avoid all OOM
lockups caused by lack of CPU resources is to replace mutex_trylock(&oom_lock)
in __alloc_pages_may_oom() with mutex_lock(&oom_lock) (or similar) in order to
guarantee that all threads waiting for the OOM killer/reaper to make forward
progress yield enough CPU time.


* Re: [PATCH] mm,oom: Don't call schedule_timeout_killable() with oom_lock held.
  2018-01-23 12:07   ` Tetsuo Handa
@ 2018-01-23 12:42     ` Michal Hocko
  2018-01-24 13:28       ` Tetsuo Handa
  0 siblings, 1 reply; 26+ messages in thread
From: Michal Hocko @ 2018-01-23 12:42 UTC (permalink / raw)
  To: Tetsuo Handa
  Cc: akpm, linux-mm, rientjes, hannes, guro, tj, vdavydov.dev, torvalds

On Tue 23-01-18 21:07:03, Tetsuo Handa wrote:
> Michal Hocko wrote:
> > On Mon 22-01-18 22:46:22, Tetsuo Handa wrote:
> > > When I was examining a bug which occurs under CPU + memory pressure, I
> > > observed that a thread which called out_of_memory() can sleep for minutes
> > > at schedule_timeout_killable(1) with oom_lock held when many threads are
> > > doing direct reclaim.
> > > 
> > > [...]
> > > 
> > > Allowing the OOM reaper to start reclaiming memory without waiting for
> > > the oom_lock is not sufficient if the OOM reaper did not reclaim enough
> > > memory. We need to make sure that the thread which called out_of_memory()
> > > will release oom_lock shortly. Thus, this patch brings the short sleep
> > > to outside of the OOM killer.
> > 
> > And why does sleeping outside of the lock make any sense? The whole
> > point of the sleep is to give the victim some time to exit and if we
> > sleep outside of the lock then contending allocating paths hit the oom
> > path early.
> 
> Why does it matter that subsequent threads call out_of_memory() again
> without giving the OOM victim some time to exit? The MMF_OOM_SKIP test will
> prevent subsequent out_of_memory() calls from selecting next OOM victims.

Yes, it is possible that the _current_ code doesn't need this as I've
written below.

> > To be completely honest, I am not in love with this
> > schedule_timeout_uninterruptible(1). It is an ugly hack. It used to be
> > much more important in the past when the oom victim test was too
> > fragile. I strongly suspect that it is not needed these days so rather
> > than moving the sleep around I would try to remove it altogether.
> 
> But this schedule_timeout_uninterruptible(1) serves as a guaranteed
> sleep for PF_WQ_WORKER threads
> ( http://lkml.kernel.org/r/20170830064019.mfihbeu3mm5ygcrb@dhcp22.suse.cz ).
> 
>     > If we are under memory pressure, __zone_watermark_ok() can return false.
>     > If __zone_watermark_ok() == false, when is schedule_timeout_*() called explicitly?
> 
>     If all zones fail with the watermark check then we should hit the oom
>     path and sleep there. We do not do so for all cases though.
> 
> Thus, you cannot simply remove it.

Then I would rather make should_reclaim_retry more robust.

> > Also, your changelog silently skips over some important details. The
> > system must be really overloaded when a short sleep can take minutes.
> 
> Yes, the system was really overloaded, for I was testing the below
> reproducer on an x86_32 kernel.
[...]
> > I would strongly suspect that such an overloaded system doesn't need
> > a short sleep to hold the oom lock for too long. All you need is to be
> > preempted. So this patch doesn't really solve any _real_ problem.
> 
> Preemption makes the OOM situation much worse. The only way to avoid all OOM
> lockups caused by lack of CPU resources is to replace mutex_trylock(&oom_lock)
> in __alloc_pages_may_oom() with mutex_lock(&oom_lock) (or similar) in order to
> guarantee that all threads waiting for the OOM killer/reaper to make forward
> progress yield enough CPU time.

And how exactly does that help when the page allocator gets preempted by
somebody not doing any allocation?
-- 
Michal Hocko
SUSE Labs


* Re: [PATCH] mm,oom: Don't call schedule_timeout_killable() with oom_lock held.
  2018-01-23 12:42     ` Michal Hocko
@ 2018-01-24 13:28       ` Tetsuo Handa
  2018-02-13 11:58         ` Tetsuo Handa
  0 siblings, 1 reply; 26+ messages in thread
From: Tetsuo Handa @ 2018-01-24 13:28 UTC (permalink / raw)
  To: mhocko; +Cc: akpm, linux-mm, rientjes, hannes, guro, tj, vdavydov.dev, torvalds

Michal Hocko wrote:
> On Tue 23-01-18 21:07:03, Tetsuo Handa wrote:
> > Michal Hocko wrote:
> > > To be completely honest, I am not in love with this
> > > schedule_timeout_uninterruptible(1). It is an ugly hack. It used to be
> > > much more important in the past when the oom victim test was too
> > > fragile. I strongly suspect that it is not needed these days so rather
> > > than moving the sleep around I would try to remove it altogether.
> > 
> > But this schedule_timeout_uninterruptible(1) serves as a guaranteed
> > sleep for PF_WQ_WORKER threads
> > ( http://lkml.kernel.org/r/20170830064019.mfihbeu3mm5ygcrb@dhcp22.suse.cz ).
> > 
> >     > If we are under memory pressure, __zone_watermark_ok() can return false.
> >     > If __zone_watermark_ok() == false, when is schedule_timeout_*() called explicitly?
> > 
> >     If all zones fail with the watermark check then we should hit the oom
> >     path and sleep there. We do not do so for all cases though.
> > 
> > Thus, you cannot simply remove it.
> 
> Then I would rather make should_reclaim_retry more robust.

I'm OK with that if we can remove schedule_timeout_*() with oom_lock held.

> 
> > > Also, your changelog silently skips over some important details. The
> > > system must be really overloaded when a short sleep can take minutes.
> > 
> > Yes, the system was really overloaded, for I was testing the below
> > reproducer on an x86_32 kernel.
> [...]
> > > I would strongly suspect that such an overloaded system doesn't need
> > > a short sleep to hold the oom lock for too long. All you need is to be
> > > preempted. So this patch doesn't really solve any _real_ problem.
> > 
> > Preemption makes the OOM situation much worse. The only way to avoid all OOM
> > lockups caused by lack of CPU resources is to replace mutex_trylock(&oom_lock)
> > in __alloc_pages_may_oom() with mutex_lock(&oom_lock) (or similar) in order to
> > guarantee that all threads waiting for the OOM killer/reaper to make forward
> > progress yield enough CPU time.
> 
> And how exactly does that help when the page allocator gets preempted by
> somebody not doing any allocation?

The page allocator is not responsible for CPU time wasted on something
other than memory allocation requests. But wasting CPU time on allocation
requests while being unable to let the OOM killer/reaper make forward
progress is the page allocator's bug.

There are currently ways to artificially choke the OOM killer/reaper (e.g.
let a SCHED_IDLE priority thread which is allowed to run on only one
specific CPU invoke the OOM killer). To mitigate that, offloading the OOM
killer to a dedicated kernel thread (like the OOM reaper) which has
reasonable scheduling priority and is allowed to run on any idle CPU would
help. But such an enhancement is out of scope for this patch. This patch
is intended to avoid sleeping for minutes at schedule_timeout_killable(1)
with oom_lock held, which can be reproduced without using SCHED_IDLE
priority and/or runnable CPU restrictions.
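
For illustration, such an artificially choked context could be constructed
as follows (a hedged userspace sketch; the CPU number and allocation sizes
are arbitrary):

----------------------------------------
/*
 * Pin the process to a single CPU at SCHED_IDLE priority, then hog
 * memory so that the OOM killer is eventually invoked from a context
 * which gets almost no CPU time while anything else is runnable.
 */
#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(void)
{
	cpu_set_t set;
	struct sched_param param = { .sched_priority = 0 };

	CPU_ZERO(&set);
	CPU_SET(0, &set);	/* allowed to run on CPU 0 only */
	if (sched_setaffinity(0, sizeof(set), &set))
		perror("sched_setaffinity");
	if (sched_setscheduler(0, SCHED_IDLE, &param))
		perror("sched_setscheduler");

	/* Consume memory until allocation fails or the OOM killer fires. */
	for (;;) {
		char *p = malloc(1 << 20);

		if (!p)
			break;
		memset(p, 1, 1 << 20);
	}
	return 0;
}
----------------------------------------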


* Re: [PATCH] mm,oom: Don't call schedule_timeout_killable() with oom_lock held.
  2018-01-24 13:28       ` Tetsuo Handa
@ 2018-02-13 11:58         ` Tetsuo Handa
  2018-02-20 13:32           ` [PATCH] mm,page_alloc: wait for oom_lock than back off Tetsuo Handa
  0 siblings, 1 reply; 26+ messages in thread
From: Tetsuo Handa @ 2018-02-13 11:58 UTC (permalink / raw)
  To: mhocko; +Cc: akpm, linux-mm, rientjes, hannes, guro, tj, vdavydov.dev, torvalds

Michal, if you are busy, can I test my version until you get time?

Tetsuo Handa wrote:
> Michal Hocko wrote:
> > Then I would rather make should_reclaim_retry more robust.
> 
> I'm OK with that if we can remove schedule_timeout_*() with oom_lock held.
> [...]


* [PATCH] mm,page_alloc: wait for oom_lock than back off
  2018-02-13 11:58         ` Tetsuo Handa
@ 2018-02-20 13:32           ` Tetsuo Handa
  2018-02-20 13:40             ` Matthew Wilcox
  2018-02-20 14:49             ` Michal Hocko
  0 siblings, 2 replies; 26+ messages in thread
From: Tetsuo Handa @ 2018-02-20 13:32 UTC (permalink / raw)
  To: mhocko; +Cc: akpm, linux-mm, rientjes, hannes, guro, tj, vdavydov.dev, torvalds

From c3b6616238fcd65d5a0fdabcb4577c7e6f40d35e Mon Sep 17 00:00:00 2001
From: Tetsuo Handa <penguin-kernel@I-love.SAKURA.ne.jp>
Date: Tue, 20 Feb 2018 11:07:23 +0900
Subject: [PATCH] mm,page_alloc: wait for oom_lock than back off

This patch fixes a bug which is essentially the same as the bug fixed by
commit 400e22499dd92613 ("mm: don't warn about allocations which stall for
too long").

Currently __alloc_pages_may_oom() uses mutex_trylock(&oom_lock) based on
the assumption that the owner of oom_lock is making progress for us. But
it is possible to trigger an OOM lockup when many threads concurrently
call __alloc_pages_slowpath(), because all CPU resources are wasted on
pointless direct reclaim efforts. That is,
schedule_timeout_uninterruptible(1) in __alloc_pages_may_oom() does not
always give enough CPU time to the owner of the oom_lock.

It is possible that the owner of oom_lock is preempted by other threads.
Preemption makes the OOM situation much worse. But the page allocator is
not responsible for CPU time wasted on something other than memory
allocation requests. Wasting CPU time on memory allocation requests
without allowing the owner of oom_lock to make forward progress is a page
allocator's bug.

Therefore, this patch changes __alloc_pages_may_oom() to wait for oom_lock
in order to guarantee that threads waiting for the owner of oom_lock to
make forward progress do not consume CPU resources on pointless direct
reclaim efforts.

Signed-off-by: Tetsuo Handa <penguin-kernel@I-love.SAKURA.ne.jp>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Michal Hocko <mhocko@suse.com>
Cc: David Rientjes <rientjes@google.com>
---
 mm/page_alloc.c | 6 +-----
 1 file changed, 1 insertion(+), 5 deletions(-)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index e2b42f6..0cd48ae6 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -3350,11 +3350,7 @@ void warn_alloc(gfp_t gfp_mask, nodemask_t *nodemask, const char *fmt, ...)
 
 	*did_some_progress = 0;
 
-	/*
-	 * Acquire the oom lock.  If that fails, somebody else is
-	 * making progress for us.
-	 */
-	if (!mutex_trylock(&oom_lock)) {
+	if (mutex_lock_killable(&oom_lock)) {
 		*did_some_progress = 1;
 		schedule_timeout_uninterruptible(1);
 		return NULL;
-- 
1.8.3.1


* Re: [PATCH] mm,page_alloc: wait for oom_lock than back off
  2018-02-20 13:32           ` [PATCH] mm,page_alloc: wait for oom_lock than back off Tetsuo Handa
@ 2018-02-20 13:40             ` Matthew Wilcox
  2018-02-20 14:12               ` Tetsuo Handa
  2018-02-20 14:49             ` Michal Hocko
  1 sibling, 1 reply; 26+ messages in thread
From: Matthew Wilcox @ 2018-02-20 13:40 UTC (permalink / raw)
  To: Tetsuo Handa
  Cc: mhocko, akpm, linux-mm, rientjes, hannes, guro, tj, vdavydov.dev,
	torvalds

On Tue, Feb 20, 2018 at 10:32:56PM +0900, Tetsuo Handa wrote:
>  
> -	/*
> -	 * Acquire the oom lock.  If that fails, somebody else is
> -	 * making progress for us.
> -	 */
> -	if (!mutex_trylock(&oom_lock)) {
> +	if (mutex_lock_killable(&oom_lock)) {
>  		*did_some_progress = 1;
>  		schedule_timeout_uninterruptible(1);
>  		return NULL;

It looks odd to mutex_lock_killable() and then
schedule_timeout_uninterruptible().  Why not schedule_timeout_killable()?
If someone's sent a fatal signal, why delay for one jiffy?


* Re: [PATCH] mm,page_alloc: wait for oom_lock than back off
  2018-02-20 13:40             ` Matthew Wilcox
@ 2018-02-20 14:12               ` Tetsuo Handa
  0 siblings, 0 replies; 26+ messages in thread
From: Tetsuo Handa @ 2018-02-20 14:12 UTC (permalink / raw)
  To: willy
  Cc: mhocko, akpm, linux-mm, rientjes, hannes, guro, tj, vdavydov.dev,
	torvalds

Matthew Wilcox wrote:
> On Tue, Feb 20, 2018 at 10:32:56PM +0900, Tetsuo Handa wrote:
> >  
> > -	/*
> > -	 * Acquire the oom lock.  If that fails, somebody else is
> > -	 * making progress for us.
> > -	 */
> > -	if (!mutex_trylock(&oom_lock)) {
> > +	if (mutex_lock_killable(&oom_lock)) {
> >  		*did_some_progress = 1;
> >  		schedule_timeout_uninterruptible(1);
> >  		return NULL;
> 
> It looks odd to mutex_lock_killable() and then
> schedule_timeout_uninterruptible().  Why not schedule_timeout_killable()?
> If someone's sent a fatal signal, why delay for one jiffy?
> 

That sleep will be moved somewhere else by future patches. For now, let's
make sure that "killed but not yet marked as an OOM victim" threads won't
waste CPU resources by looping without permission to use memory reserves.
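
In other words, the combination is intended to behave as below (a hedged
sketch restating the quoted hunk, not the final form of the code):

	if (mutex_lock_killable(&oom_lock)) {
		/*
		 * A fatal signal is pending, but this thread is not yet an
		 * OOM victim and thus may not use memory reserves. Sleep
		 * for one jiffy instead of burning CPU in the retry loop;
		 * the caller treats this as progress and retries.
		 */
		*did_some_progress = 1;
		schedule_timeout_uninterruptible(1);
		return NULL;
	}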


* Re: [PATCH] mm,page_alloc: wait for oom_lock than back off
  2018-02-20 13:32           ` [PATCH] mm,page_alloc: wait for oom_lock than back off Tetsuo Handa
  2018-02-20 13:40             ` Matthew Wilcox
@ 2018-02-20 14:49             ` Michal Hocko
  2018-02-21 14:27               ` Tetsuo Handa
  1 sibling, 1 reply; 26+ messages in thread
From: Michal Hocko @ 2018-02-20 14:49 UTC (permalink / raw)
  To: Tetsuo Handa
  Cc: akpm, linux-mm, rientjes, hannes, guro, tj, vdavydov.dev, torvalds

On Tue 20-02-18 22:32:56, Tetsuo Handa wrote:
> From c3b6616238fcd65d5a0fdabcb4577c7e6f40d35e Mon Sep 17 00:00:00 2001
> From: Tetsuo Handa <penguin-kernel@I-love.SAKURA.ne.jp>
> Date: Tue, 20 Feb 2018 11:07:23 +0900
> Subject: [PATCH] mm,page_alloc: wait for oom_lock than back off
> 
> This patch fixes a bug which is essentially the same as the bug fixed by
> commit 400e22499dd92613 ("mm: don't warn about allocations which stall for
> too long").
> 
> Currently __alloc_pages_may_oom() uses mutex_trylock(&oom_lock) based on
> the assumption that the owner of oom_lock is making progress for us. But
> it is possible to trigger an OOM lockup when many threads concurrently
> call __alloc_pages_slowpath(), because all CPU resources are wasted on
> pointless direct reclaim efforts. That is,
> schedule_timeout_uninterruptible(1) in __alloc_pages_may_oom() does not
> always give enough CPU time to the owner of the oom_lock.
> 
> It is possible that the owner of oom_lock is preempted by other threads.
> Preemption makes the OOM situation much worse. But the page allocator is
> not responsible for CPU time wasted on something other than memory
> allocation requests. Wasting CPU time on memory allocation requests
> without allowing the owner of oom_lock to make forward progress is a page
> allocator's bug.
> 
> Therefore, this patch changes __alloc_pages_may_oom() to wait for oom_lock
> in order to guarantee that threads waiting for the owner of oom_lock to
> make forward progress do not consume CPU resources on pointless direct
> reclaim efforts.

So instead we will have many tasks sleeping on the lock, preventing the
oom reaper from making any forward progress. This is not a solution
without further steps. Also I would like to see a real-life workload that
would benefit from this.

> [...]

-- 
Michal Hocko
SUSE Labs


* Re: [PATCH] mm,page_alloc: wait for oom_lock than back off
  2018-02-20 14:49             ` Michal Hocko
@ 2018-02-21 14:27               ` Tetsuo Handa
  2018-02-22 13:06                 ` Michal Hocko
  0 siblings, 1 reply; 26+ messages in thread
From: Tetsuo Handa @ 2018-02-21 14:27 UTC (permalink / raw)
  To: mhocko; +Cc: akpm, linux-mm, rientjes, hannes, guro, tj, vdavydov.dev, torvalds

Michal Hocko wrote:
> On Tue 20-02-18 22:32:56, Tetsuo Handa wrote:
> > [...]
> 
> So instead we will have many tasks sleeping on the lock, preventing the
> oom reaper from making any forward progress. This is not a solution
> without further steps. Also I would like to see a real-life workload that
> would benefit from this.

Of course I will propose follow-up patches. We already discussed that it is
safe to use ALLOC_WMARK_MIN for the last-second allocation attempt with
oom_lock held, and ALLOC_OOM for an OOM victim's last-second allocation
attempt with oom_lock held. We don't need to serialize the whole
__oom_reap_task_mm() using oom_lock; we only need to serialize the setting
of MMF_OOM_SKIP using oom_lock. (We wouldn't need oom_lock serialization
for setting MMF_OOM_SKIP if everyone could agree to do the last-second
allocation attempt with oom_lock held after confirming that there is no
!MMF_OOM_SKIP mm. But we could not agree on that.) Even more, we could try
direct OOM reaping rather than schedule_timeout_killable(1) if blocking the
OOM reaper kernel thread is a problem, for we should be able to run
__oom_reap_task_mm() concurrently because we already allow exit_mmap()
and __oom_reap_task_mm() to run concurrently.
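
A sketch of that last-second allocation attempt (the exact flags and
placement are illustrative, not settled code):

	/*
	 * Last-second attempt with oom_lock held: ordinary tasks retry
	 * at the normal minimum watermark, while OOM victims may
	 * additionally dip into the OOM memory reserves via ALLOC_OOM.
	 */
	page = get_page_from_freelist((gfp_mask | __GFP_HARDWALL) &
				      ~__GFP_DIRECT_RECLAIM, order,
				      (tsk_is_oom_victim(current) ?
				       ALLOC_OOM : ALLOC_WMARK_MIN) |
				      ALLOC_CPUSET, ac);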

We know printk() from an OOM situation where a lot of threads are almost
busy-looping is a nightmare. That printk() with oom_lock held can start
utilizing the CPU resources saved by this patch (reducing preemption
during printk() and making printk() complete faster) is already a benefit.


* Re: [PATCH] mm,page_alloc: wait for oom_lock than back off
  2018-02-21 14:27               ` Tetsuo Handa
@ 2018-02-22 13:06                 ` Michal Hocko
  2018-02-24  8:00                   ` [PATCH v2] " Tetsuo Handa
  0 siblings, 1 reply; 26+ messages in thread
From: Michal Hocko @ 2018-02-22 13:06 UTC (permalink / raw)
  To: Tetsuo Handa
  Cc: akpm, linux-mm, rientjes, hannes, guro, tj, vdavydov.dev, torvalds

On Wed 21-02-18 23:27:05, Tetsuo Handa wrote:
> Michal Hocko wrote:
> > On Tue 20-02-18 22:32:56, Tetsuo Handa wrote:
> > > [...]
> > 
> > So instead we will have many tasks sleeping on the lock, preventing the
> > oom reaper from making any forward progress. This is not a solution
> > without further steps. Also I would like to see a real-life workload that
> > would benefit from this.
> 
> Of course I will propose follow-up patches.

The patch in its current form will cause worse behavior than we have
currently, because pending oom waiters simply block the oom reaper. So I
do not really see any reason to push this forward without other changes.

So NAK to this patch in its current form.
-- 
Michal Hocko
SUSE Labs


* [PATCH v2] mm,page_alloc: wait for oom_lock than back off
  2018-02-22 13:06                 ` Michal Hocko
@ 2018-02-24  8:00                   ` Tetsuo Handa
  2018-02-26  9:27                     ` Michal Hocko
  0 siblings, 1 reply; 26+ messages in thread
From: Tetsuo Handa @ 2018-02-24  8:00 UTC (permalink / raw)
  To: mhocko; +Cc: akpm, linux-mm, rientjes, hannes, guro, tj, vdavydov.dev, torvalds

From d922dd170c2bed01a775e8cca0871098aecc253d Mon Sep 17 00:00:00 2001
From: Tetsuo Handa <penguin-kernel@I-love.SAKURA.ne.jp>
Date: Sat, 24 Feb 2018 16:49:21 +0900
Subject: [PATCH v2] mm,page_alloc: wait for oom_lock than back off

This patch fixes a bug which is essentially the same as the bug fixed by
commit 400e22499dd92613 ("mm: don't warn about allocations which stall for
too long").

Currently __alloc_pages_may_oom() uses mutex_trylock(&oom_lock) based on
the assumption that the owner of oom_lock is making progress for us. But
it is possible to trigger an OOM lockup when many threads concurrently
call __alloc_pages_slowpath(), because all CPU resources are wasted on
pointless direct reclaim efforts. That is,
schedule_timeout_uninterruptible(1) in __alloc_pages_may_oom() does not
always give enough CPU time to the owner of the oom_lock.

It is possible that the owner of oom_lock is preempted by other threads.
Preemption makes the OOM situation much worse. But the page allocator is
not responsible for CPU time wasted on something other than memory
allocation requests. Wasting CPU time on memory allocation requests
without allowing the owner of oom_lock to make forward progress is a page
allocator's bug.

Therefore, this patch changes __alloc_pages_may_oom() to wait for oom_lock
in order to guarantee that threads waiting for the owner of oom_lock to
make forward progress do not consume CPU resources on pointless direct
reclaim efforts.

We know printk() from an OOM situation where a lot of threads are almost
busy-looping is a nightmare. As a side effect of this patch, printk() with
oom_lock held can start utilizing the CPU resources saved by this patch
(reducing preemption during printk() and making printk() complete faster).

By replacing !mutex_trylock(&oom_lock) with mutex_lock_killable(&oom_lock),
it becomes possible for many threads to prevent the OOM reaper from making
forward progress. Thus, this patch removes mutex_lock(&oom_lock) from the
OOM reaper.

Also, since nobody uses oom_lock serialization when setting MMF_OOM_SKIP
and we don't try a last-second allocation attempt after confirming that
there is no !MMF_OOM_SKIP OOM victim, the possibility of needlessly
selecting more OOM victims would increase if we continued using
ALLOC_WMARK_HIGH. Thus, this patch changes to use ALLOC_WMARK_MIN.

Also, since we don't want to sleep with oom_lock held, so that threads
waiting at mutex_lock_killable(&oom_lock) can try the last-second
allocation attempt (because the OOM reaper starts reclaiming memory without
waiting for oom_lock) and start selecting the next OOM victim if necessary,
this patch moves the short sleep from inside oom_lock to outside of it.

Signed-off-by: Tetsuo Handa <penguin-kernel@I-love.SAKURA.ne.jp>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Michal Hocko <mhocko@suse.com>
Cc: David Rientjes <rientjes@google.com>
---
 mm/oom_kill.c   | 35 +++--------------------------------
 mm/page_alloc.c | 30 ++++++++++++++++++------------
 2 files changed, 21 insertions(+), 44 deletions(-)

diff --git a/mm/oom_kill.c b/mm/oom_kill.c
index 8219001..802214f 100644
--- a/mm/oom_kill.c
+++ b/mm/oom_kill.c
@@ -491,22 +491,6 @@ static bool __oom_reap_task_mm(struct task_struct *tsk, struct mm_struct *mm)
 	struct vm_area_struct *vma;
 	bool ret = true;
 
-	/*
-	 * We have to make sure to not race with the victim exit path
-	 * and cause premature new oom victim selection:
-	 * __oom_reap_task_mm		exit_mm
-	 *   mmget_not_zero
-	 *				  mmput
-	 *				    atomic_dec_and_test
-	 *				  exit_oom_victim
-	 *				[...]
-	 *				out_of_memory
-	 *				  select_bad_process
-	 *				    # no TIF_MEMDIE task selects new victim
-	 *  unmap_page_range # frees some memory
-	 */
-	mutex_lock(&oom_lock);
-
 	if (!down_read_trylock(&mm->mmap_sem)) {
 		ret = false;
 		trace_skip_task_reaping(tsk->pid);
@@ -581,7 +565,6 @@ static bool __oom_reap_task_mm(struct task_struct *tsk, struct mm_struct *mm)
 
 	trace_finish_task_reaping(tsk->pid);
 unlock_oom:
-	mutex_unlock(&oom_lock);
 	return ret;
 }
 
@@ -1078,7 +1061,6 @@ bool out_of_memory(struct oom_control *oc)
 {
 	unsigned long freed = 0;
 	enum oom_constraint constraint = CONSTRAINT_NONE;
-	bool delay = false; /* if set, delay next allocation attempt */
 
 	if (oom_killer_disabled)
 		return false;
@@ -1128,10 +1110,8 @@ bool out_of_memory(struct oom_control *oc)
 		return true;
 	}
 
-	if (mem_cgroup_select_oom_victim(oc) && oom_kill_memcg_victim(oc)) {
-		delay = true;
+	if (mem_cgroup_select_oom_victim(oc) && oom_kill_memcg_victim(oc))
 		goto out;
-	}
 
 	select_bad_process(oc);
 	/* Found nothing?!?! Either we hang forever, or we panic. */
@@ -1139,20 +1119,10 @@ bool out_of_memory(struct oom_control *oc)
 		dump_header(oc, NULL);
 		panic("Out of memory and no killable processes...\n");
 	}
-	if (oc->chosen_task && oc->chosen_task != INFLIGHT_VICTIM) {
+	if (oc->chosen_task && oc->chosen_task != INFLIGHT_VICTIM)
 		oom_kill_process(oc, !is_memcg_oom(oc) ? "Out of memory" :
 				 "Memory cgroup out of memory");
-		delay = true;
-	}
-
 out:
-	/*
-	 * Give the killed process a good chance to exit before trying
-	 * to allocate memory again.
-	 */
-	if (delay)
-		schedule_timeout_killable(1);
-
 	return !!oc->chosen_task;
 }
 
@@ -1178,4 +1148,5 @@ void pagefault_out_of_memory(void)
 		return;
 	out_of_memory(&oc);
 	mutex_unlock(&oom_lock);
+	schedule_timeout_killable(1);
 }
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 2836bc9..23d1769 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -3438,26 +3438,26 @@ void warn_alloc(gfp_t gfp_mask, nodemask_t *nodemask, const char *fmt, ...)
 
 	*did_some_progress = 0;
 
-	/*
-	 * Acquire the oom lock.  If that fails, somebody else is
-	 * making progress for us.
-	 */
-	if (!mutex_trylock(&oom_lock)) {
+	if (mutex_lock_killable(&oom_lock)) {
 		*did_some_progress = 1;
-		schedule_timeout_uninterruptible(1);
 		return NULL;
 	}
 
 	/*
-	 * Go through the zonelist yet one more time, keep very high watermark
-	 * here, this is only to catch a parallel oom killing, we must fail if
-	 * we're still under heavy pressure. But make sure that this reclaim
-	 * attempt shall not depend on __GFP_DIRECT_RECLAIM && !__GFP_NORETRY
-	 * allocation which will never fail due to oom_lock already held.
+	 * This allocation attempt must not depend on __GFP_DIRECT_RECLAIM &&
+	 * !__GFP_NORETRY allocation which will never fail due to oom_lock
+	 * already held.
+	 *
+	 * Since neither the OOM reaper nor exit_mmap() waits for oom_lock when
+	 * setting MMF_OOM_SKIP on the OOM victim's mm, we might needlessly
+	 * select more OOM victims if we use ALLOC_WMARK_HIGH here. But since
+	 * this allocation attempt does not sleep, we will not fail to invoke
+	 * the OOM killer even if we choose ALLOC_WMARK_MIN here. Thus, we use
+	 * ALLOC_WMARK_MIN here.
 	 */
 	page = get_page_from_freelist((gfp_mask | __GFP_HARDWALL) &
 				      ~__GFP_DIRECT_RECLAIM, order,
-				      ALLOC_WMARK_HIGH|ALLOC_CPUSET, ac);
+				      ALLOC_WMARK_MIN | ALLOC_CPUSET, ac);
 	if (page)
 		goto out;
 
@@ -4205,6 +4205,12 @@ bool gfp_pfmemalloc_allowed(gfp_t gfp_mask)
 	/* Retry as long as the OOM killer is making progress */
 	if (did_some_progress) {
 		no_progress_loops = 0;
+		/*
+		 * This schedule_timeout_*() serves as a guaranteed sleep for
+		 * PF_WQ_WORKER threads when __zone_watermark_ok() == false.
+		 */
+		if (!tsk_is_oom_victim(current))
+			schedule_timeout_uninterruptible(1);
 		goto retry;
 	}
 
-- 
1.8.3.1
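
A note on the schedule_timeout_uninterruptible(1) added to the retry path:
the PF_WQ_WORKER comment refers to workqueue concurrency management. A
sketch of the concern (simplified, and assuming that a worker pool wakes
another kworker only when the current one actually sleeps):

--------------------
	/*
	 * A PF_WQ_WORKER thread doing
	 *
	 *	while (!__zone_watermark_ok(...))
	 *		cond_resched();
	 *
	 * never blocks, so the workqueue pool never starts another worker,
	 * and work items queued behind this one (possibly ones that would
	 * free memory) never run.  An explicit
	 * schedule_timeout_uninterruptible(1) guarantees a real sleep.
	 */
--------------------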


^ permalink raw reply related	[flat|nested] 26+ messages in thread

* Re: [PATCH v2] mm,page_alloc: wait for oom_lock than back off
  2018-02-24  8:00                   ` [PATCH v2] " Tetsuo Handa
@ 2018-02-26  9:27                     ` Michal Hocko
  2018-02-26 10:58                       ` Tetsuo Handa
  0 siblings, 1 reply; 26+ messages in thread
From: Michal Hocko @ 2018-02-26  9:27 UTC (permalink / raw)
  To: Tetsuo Handa
  Cc: akpm, linux-mm, rientjes, hannes, guro, tj, vdavydov.dev, torvalds

On Sat 24-02-18 17:00:51, Tetsuo Handa wrote:
> >From d922dd170c2bed01a775e8cca0871098aecc253d Mon Sep 17 00:00:00 2001
> From: Tetsuo Handa <penguin-kernel@I-love.SAKURA.ne.jp>
> Date: Sat, 24 Feb 2018 16:49:21 +0900
> Subject: [PATCH v2] mm,page_alloc: wait for oom_lock than back off
> 
> This patch fixes a bug which is essentially the same as the bug fixed by
> commit 400e22499dd92613 ("mm: don't warn about allocations which stall for
> too long").
> 
> Currently __alloc_pages_may_oom() uses mutex_trylock(&oom_lock), based on
> the assumption that the owner of oom_lock is making progress for us. But
> it is possible to trigger an OOM lockup when many threads concurrently
> call __alloc_pages_slowpath(), because all CPU resources are wasted on
> pointless direct reclaim efforts. That is,
> schedule_timeout_uninterruptible(1) in __alloc_pages_may_oom() does not
> always give enough CPU resources to the owner of oom_lock.
> 
> It is possible that the owner of oom_lock is preempted by other threads.
> Preemption makes the OOM situation much worse. But the page allocator is
> not responsible for CPU resources wasted on something other than memory
> allocation requests. Wasting CPU resources on memory allocation requests
> without allowing the owner of oom_lock to make forward progress is a page
> allocator bug.
> 
> Therefore, this patch changes __alloc_pages_may_oom() to wait for
> oom_lock, in order to guarantee that threads waiting for the owner of
> oom_lock to make forward progress do not consume CPU resources on
> pointless direct reclaim efforts.
> 
> We know that printk() in an OOM situation where a lot of threads are
> almost busy-looping is a nightmare. As a side effect of this patch,
> printk() with oom_lock held can start utilizing the CPU resources saved
> by this patch (and suffer less preemption during printk(), making
> printk() complete faster).
> 
> By replacing !mutex_trylock(&oom_lock) with mutex_lock_killable(&oom_lock),
> it becomes possible for many threads to prevent the OOM reaper from
> making forward progress. Thus, this patch removes mutex_lock(&oom_lock)
> from the OOM reaper.
> 
> Also, since nobody uses oom_lock for serialization when setting
> MMF_OOM_SKIP, and we don't retry the last second allocation attempt after
> confirming that there is no !MMF_OOM_SKIP OOM victim, the possibility of
> needlessly selecting more OOM victims increases if we continue using
> ALLOC_WMARK_HIGH. Thus, this patch changes it to ALLOC_WMARK_MIN.
> 
> Also, we don't want to sleep with oom_lock held, so that threads waiting
> at mutex_lock_killable(&oom_lock) can try the last second allocation
> attempt (because the OOM reaper starts reclaiming memory without waiting
> for oom_lock) and start selecting the next OOM victim if necessary.
> Therefore, this patch moves the short sleep from inside oom_lock to
> outside oom_lock.

This patch does three different things mangled into one patch. All that
with a patch description which talks a lot but doesn't really explain
those changes.

Moreover, you are effectively tuning for an artificial test case that
overloads the page allocator, and adding a central lock where many tasks
would block. I have already tried to explain that this is not a universal
win, and you had better have a real life example where this is really
helpful.

While I do agree that removing the oom_lock from __oom_reap_task_mm is a
sensible thing, changing the last allocation attempt to ALLOC_WMARK_MIN is
not all that straightforward, and it would require a much more detailed
explanation.

So the patch in its current form is not mergeable IMHO.
 
> Signed-off-by: Tetsuo Handa <penguin-kernel@I-love.SAKURA.ne.jp>
> Cc: Johannes Weiner <hannes@cmpxchg.org>
> Cc: Michal Hocko <mhocko@suse.com>
> Cc: David Rientjes <rientjes@google.com>
> ---
>  mm/oom_kill.c   | 35 +++--------------------------------
>  mm/page_alloc.c | 30 ++++++++++++++++++------------
>  2 files changed, 21 insertions(+), 44 deletions(-)
> 
> diff --git a/mm/oom_kill.c b/mm/oom_kill.c
> index 8219001..802214f 100644
> --- a/mm/oom_kill.c
> +++ b/mm/oom_kill.c
> @@ -491,22 +491,6 @@ static bool __oom_reap_task_mm(struct task_struct *tsk, struct mm_struct *mm)
>  	struct vm_area_struct *vma;
>  	bool ret = true;
>  
> -	/*
> -	 * We have to make sure to not race with the victim exit path
> -	 * and cause premature new oom victim selection:
> -	 * __oom_reap_task_mm		exit_mm
> -	 *   mmget_not_zero
> -	 *				  mmput
> -	 *				    atomic_dec_and_test
> -	 *				  exit_oom_victim
> -	 *				[...]
> -	 *				out_of_memory
> -	 *				  select_bad_process
> -	 *				    # no TIF_MEMDIE task selects new victim
> -	 *  unmap_page_range # frees some memory
> -	 */
> -	mutex_lock(&oom_lock);
> -
>  	if (!down_read_trylock(&mm->mmap_sem)) {
>  		ret = false;
>  		trace_skip_task_reaping(tsk->pid);
> @@ -581,7 +565,6 @@ static bool __oom_reap_task_mm(struct task_struct *tsk, struct mm_struct *mm)
>  
>  	trace_finish_task_reaping(tsk->pid);
>  unlock_oom:
> -	mutex_unlock(&oom_lock);
>  	return ret;
>  }
>  
> @@ -1078,7 +1061,6 @@ bool out_of_memory(struct oom_control *oc)
>  {
>  	unsigned long freed = 0;
>  	enum oom_constraint constraint = CONSTRAINT_NONE;
> -	bool delay = false; /* if set, delay next allocation attempt */
>  
>  	if (oom_killer_disabled)
>  		return false;
> @@ -1128,10 +1110,8 @@ bool out_of_memory(struct oom_control *oc)
>  		return true;
>  	}
>  
> -	if (mem_cgroup_select_oom_victim(oc) && oom_kill_memcg_victim(oc)) {
> -		delay = true;
> +	if (mem_cgroup_select_oom_victim(oc) && oom_kill_memcg_victim(oc))
>  		goto out;
> -	}
>  
>  	select_bad_process(oc);
>  	/* Found nothing?!?! Either we hang forever, or we panic. */
> @@ -1139,20 +1119,10 @@ bool out_of_memory(struct oom_control *oc)
>  		dump_header(oc, NULL);
>  		panic("Out of memory and no killable processes...\n");
>  	}
> -	if (oc->chosen_task && oc->chosen_task != INFLIGHT_VICTIM) {
> +	if (oc->chosen_task && oc->chosen_task != INFLIGHT_VICTIM)
>  		oom_kill_process(oc, !is_memcg_oom(oc) ? "Out of memory" :
>  				 "Memory cgroup out of memory");
> -		delay = true;
> -	}
> -
>  out:
> -	/*
> -	 * Give the killed process a good chance to exit before trying
> -	 * to allocate memory again.
> -	 */
> -	if (delay)
> -		schedule_timeout_killable(1);
> -
>  	return !!oc->chosen_task;
>  }
>  
> @@ -1178,4 +1148,5 @@ void pagefault_out_of_memory(void)
>  		return;
>  	out_of_memory(&oc);
>  	mutex_unlock(&oom_lock);
> +	schedule_timeout_killable(1);
>  }
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index 2836bc9..23d1769 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -3438,26 +3438,26 @@ void warn_alloc(gfp_t gfp_mask, nodemask_t *nodemask, const char *fmt, ...)
>  
>  	*did_some_progress = 0;
>  
> -	/*
> -	 * Acquire the oom lock.  If that fails, somebody else is
> -	 * making progress for us.
> -	 */
> -	if (!mutex_trylock(&oom_lock)) {
> +	if (mutex_lock_killable(&oom_lock)) {
>  		*did_some_progress = 1;
> -		schedule_timeout_uninterruptible(1);
>  		return NULL;
>  	}
>  
>  	/*
> -	 * Go through the zonelist yet one more time, keep very high watermark
> -	 * here, this is only to catch a parallel oom killing, we must fail if
> -	 * we're still under heavy pressure. But make sure that this reclaim
> -	 * attempt shall not depend on __GFP_DIRECT_RECLAIM && !__GFP_NORETRY
> -	 * allocation which will never fail due to oom_lock already held.
> +	 * This allocation attempt must not depend on __GFP_DIRECT_RECLAIM &&
> +	 * !__GFP_NORETRY allocation which will never fail due to oom_lock
> +	 * already held.
> +	 *
> +	 * Since neither the OOM reaper nor exit_mmap() waits for oom_lock when
> +	 * setting MMF_OOM_SKIP on the OOM victim's mm, we might needlessly
> +	 * select more OOM victims if we use ALLOC_WMARK_HIGH here. But since
> +	 * this allocation attempt does not sleep, we will not fail to invoke
> +	 * the OOM killer even if we choose ALLOC_WMARK_MIN here. Thus, we use
> +	 * ALLOC_WMARK_MIN here.
>  	 */
>  	page = get_page_from_freelist((gfp_mask | __GFP_HARDWALL) &
>  				      ~__GFP_DIRECT_RECLAIM, order,
> -				      ALLOC_WMARK_HIGH|ALLOC_CPUSET, ac);
> +				      ALLOC_WMARK_MIN | ALLOC_CPUSET, ac);
>  	if (page)
>  		goto out;
>  
> @@ -4205,6 +4205,12 @@ bool gfp_pfmemalloc_allowed(gfp_t gfp_mask)
>  	/* Retry as long as the OOM killer is making progress */
>  	if (did_some_progress) {
>  		no_progress_loops = 0;
> +		/*
> +		 * This schedule_timeout_*() serves as a guaranteed sleep for
> +		 * PF_WQ_WORKER threads when __zone_watermark_ok() == false.
> +		 */
> +		if (!tsk_is_oom_victim(current))
> +			schedule_timeout_uninterruptible(1);
>  		goto retry;
>  	}
>  
> -- 
> 1.8.3.1

-- 
Michal Hocko
SUSE Labs


^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [PATCH v2] mm,page_alloc: wait for oom_lock than back off
  2018-02-26  9:27                     ` Michal Hocko
@ 2018-02-26 10:58                       ` Tetsuo Handa
  2018-02-26 12:19                         ` Michal Hocko
  0 siblings, 1 reply; 26+ messages in thread
From: Tetsuo Handa @ 2018-02-26 10:58 UTC (permalink / raw)
  To: mhocko; +Cc: akpm, linux-mm, rientjes, hannes, guro, tj, vdavydov.dev, torvalds

Michal Hocko wrote:
> On Sat 24-02-18 17:00:51, Tetsuo Handa wrote:
> > >From d922dd170c2bed01a775e8cca0871098aecc253d Mon Sep 17 00:00:00 2001
> > From: Tetsuo Handa <penguin-kernel@I-love.SAKURA.ne.jp>
> > Date: Sat, 24 Feb 2018 16:49:21 +0900
> > Subject: [PATCH v2] mm,page_alloc: wait for oom_lock than back off
> > 
> > This patch fixes a bug which is essentially the same as the bug fixed by
> > commit 400e22499dd92613 ("mm: don't warn about allocations which stall for
> > too long").
> > 
> > Currently __alloc_pages_may_oom() uses mutex_trylock(&oom_lock), based on
> > the assumption that the owner of oom_lock is making progress for us. But
> > it is possible to trigger an OOM lockup when many threads concurrently
> > call __alloc_pages_slowpath(), because all CPU resources are wasted on
> > pointless direct reclaim efforts. That is,
> > schedule_timeout_uninterruptible(1) in __alloc_pages_may_oom() does not
> > always give enough CPU resources to the owner of oom_lock.
> > 
> > It is possible that the owner of oom_lock is preempted by other threads.
> > Preemption makes the OOM situation much worse. But the page allocator is
> > not responsible for CPU resources wasted on something other than memory
> > allocation requests. Wasting CPU resources on memory allocation requests
> > without allowing the owner of oom_lock to make forward progress is a page
> > allocator bug.
> > 
> > Therefore, this patch changes __alloc_pages_may_oom() to wait for
> > oom_lock, in order to guarantee that threads waiting for the owner of
> > oom_lock to make forward progress do not consume CPU resources on
> > pointless direct reclaim efforts.
> > 
> > We know that printk() in an OOM situation where a lot of threads are
> > almost busy-looping is a nightmare. As a side effect of this patch,
> > printk() with oom_lock held can start utilizing the CPU resources saved
> > by this patch (and suffer less preemption during printk(), making
> > printk() complete faster).
> > 
> > By replacing !mutex_trylock(&oom_lock) with mutex_lock_killable(&oom_lock),
> > it becomes possible for many threads to prevent the OOM reaper from
> > making forward progress. Thus, this patch removes mutex_lock(&oom_lock)
> > from the OOM reaper.
> > 
> > Also, since nobody uses oom_lock for serialization when setting
> > MMF_OOM_SKIP, and we don't retry the last second allocation attempt after
> > confirming that there is no !MMF_OOM_SKIP OOM victim, the possibility of
> > needlessly selecting more OOM victims increases if we continue using
> > ALLOC_WMARK_HIGH. Thus, this patch changes it to ALLOC_WMARK_MIN.
> > 
> > Also, we don't want to sleep with oom_lock held, so that threads waiting
> > at mutex_lock_killable(&oom_lock) can try the last second allocation
> > attempt (because the OOM reaper starts reclaiming memory without waiting
> > for oom_lock) and start selecting the next OOM victim if necessary.
> > Therefore, this patch moves the short sleep from inside oom_lock to
> > outside oom_lock.
> 
> This patch does three different things mangled into one patch. All that
> with a patch description which talks a lot but doesn't really explain
> those changes.
> 
> Moreover, you are effectively tuning for an artificial test case that
> overloads the page allocator, and adding a central lock where many tasks
> would block. I have already tried to explain that this is not a universal
> win, and you had better have a real life example where this is really
> helpful.
> 
> While I do agree that removing the oom_lock from __oom_reap_task_mm is a
> sensible thing, changing the last allocation attempt to ALLOC_WMARK_MIN is
> not all that straightforward, and it would require a much more detailed
> explanation.
> 
> So the patch in its current form is not mergeable IMHO.

Your comment is impossible to satisfy.
Please show me your version, for you are keeping me deadlocked.

I'm angry with the MM people's attitude of not being friendly to users who
are bothered by lockup / slowdown problems under memory pressure. They
just say "Your system is overloaded" and don't provide enough support for
checking whether those users are hitting a real bug rather than mere
overload.


^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [PATCH v2] mm,page_alloc: wait for oom_lock than back off
  2018-02-26 10:58                       ` Tetsuo Handa
@ 2018-02-26 12:19                         ` Michal Hocko
  2018-02-26 13:16                           ` Tetsuo Handa
  0 siblings, 1 reply; 26+ messages in thread
From: Michal Hocko @ 2018-02-26 12:19 UTC (permalink / raw)
  To: Tetsuo Handa
  Cc: akpm, linux-mm, rientjes, hannes, guro, tj, vdavydov.dev, torvalds

On Mon 26-02-18 19:58:19, Tetsuo Handa wrote:
> Michal Hocko wrote:
> > On Sat 24-02-18 17:00:51, Tetsuo Handa wrote:
> > > >From d922dd170c2bed01a775e8cca0871098aecc253d Mon Sep 17 00:00:00 2001
> > > From: Tetsuo Handa <penguin-kernel@I-love.SAKURA.ne.jp>
> > > Date: Sat, 24 Feb 2018 16:49:21 +0900
> > > Subject: [PATCH v2] mm,page_alloc: wait for oom_lock than back off
> > > 
> > > This patch fixes a bug which is essentially the same as the bug fixed by
> > > commit 400e22499dd92613 ("mm: don't warn about allocations which stall for
> > > too long").
> > > 
> > > Currently __alloc_pages_may_oom() uses mutex_trylock(&oom_lock), based on
> > > the assumption that the owner of oom_lock is making progress for us. But
> > > it is possible to trigger an OOM lockup when many threads concurrently
> > > call __alloc_pages_slowpath(), because all CPU resources are wasted on
> > > pointless direct reclaim efforts. That is,
> > > schedule_timeout_uninterruptible(1) in __alloc_pages_may_oom() does not
> > > always give enough CPU resources to the owner of oom_lock.
> > > 
> > > It is possible that the owner of oom_lock is preempted by other threads.
> > > Preemption makes the OOM situation much worse. But the page allocator is
> > > not responsible for CPU resources wasted on something other than memory
> > > allocation requests. Wasting CPU resources on memory allocation requests
> > > without allowing the owner of oom_lock to make forward progress is a page
> > > allocator bug.
> > > 
> > > Therefore, this patch changes __alloc_pages_may_oom() to wait for
> > > oom_lock, in order to guarantee that threads waiting for the owner of
> > > oom_lock to make forward progress do not consume CPU resources on
> > > pointless direct reclaim efforts.
> > > 
> > > We know that printk() in an OOM situation where a lot of threads are
> > > almost busy-looping is a nightmare. As a side effect of this patch,
> > > printk() with oom_lock held can start utilizing the CPU resources saved
> > > by this patch (and suffer less preemption during printk(), making
> > > printk() complete faster).
> > > 
> > > By replacing !mutex_trylock(&oom_lock) with mutex_lock_killable(&oom_lock),
> > > it becomes possible for many threads to prevent the OOM reaper from
> > > making forward progress. Thus, this patch removes mutex_lock(&oom_lock)
> > > from the OOM reaper.
> > > 
> > > Also, since nobody uses oom_lock for serialization when setting
> > > MMF_OOM_SKIP, and we don't retry the last second allocation attempt after
> > > confirming that there is no !MMF_OOM_SKIP OOM victim, the possibility of
> > > needlessly selecting more OOM victims increases if we continue using
> > > ALLOC_WMARK_HIGH. Thus, this patch changes it to ALLOC_WMARK_MIN.
> > > 
> > > Also, we don't want to sleep with oom_lock held, so that threads waiting
> > > at mutex_lock_killable(&oom_lock) can try the last second allocation
> > > attempt (because the OOM reaper starts reclaiming memory without waiting
> > > for oom_lock) and start selecting the next OOM victim if necessary.
> > > Therefore, this patch moves the short sleep from inside oom_lock to
> > > outside oom_lock.
> > 
> > This patch does three different things mangled into one patch. All that
> > with a patch description which talks a lot but doesn't really explain
> > those changes.
> > 
> > Moreover, you are effectively tuning for an artificial test case that
> > overloads the page allocator, and adding a central lock where many tasks
> > would block. I have already tried to explain that this is not a universal
> > win, and you had better have a real life example where this is really
> > helpful.
> > 
> > While I do agree that removing the oom_lock from __oom_reap_task_mm is a
> > sensible thing, changing the last allocation attempt to ALLOC_WMARK_MIN is
> > not all that straightforward, and it would require a much more detailed
> > explanation.
> > 
> > So the patch in its current form is not mergeable IMHO.
> 
> Your comment is impossible to satisfy.
> Please show me your version, for you are keeping me deadlocked.
> 
> I'm angry with the MM people's attitude of not being friendly to users who
> are bothered by lockup / slowdown problems under memory pressure. They
> just say "Your system is overloaded" and don't provide enough support for
> checking whether those users are hitting a real bug rather than mere
> overload.

You should listen much more and also try to understand the concerns that
we have. You are trying to touch a very subtle piece of code. That code is
not perfect at all, but it tends to work reasonably well under most
workloads out there. Now you are trying to handle corner cases you are
seeing, and that is a good thing in general. But the fix shouldn't
introduce new risks, and adding a single synchronization point into the
oom path is simply not without its own risks.
-- 
Michal Hocko
SUSE Labs


^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [PATCH v2] mm,page_alloc: wait for oom_lock than back off
  2018-02-26 12:19                         ` Michal Hocko
@ 2018-02-26 13:16                           ` Tetsuo Handa
  2018-03-02 11:10                             ` [PATCH v3] " Tetsuo Handa
  0 siblings, 1 reply; 26+ messages in thread
From: Tetsuo Handa @ 2018-02-26 13:16 UTC (permalink / raw)
  To: mhocko; +Cc: akpm, linux-mm, rientjes, hannes, guro, tj, vdavydov.dev, torvalds

Michal Hocko wrote:
> On Mon 26-02-18 19:58:19, Tetsuo Handa wrote:
> > Michal Hocko wrote:
> > > On Sat 24-02-18 17:00:51, Tetsuo Handa wrote:
> > > > >From d922dd170c2bed01a775e8cca0871098aecc253d Mon Sep 17 00:00:00 2001
> > > > From: Tetsuo Handa <penguin-kernel@I-love.SAKURA.ne.jp>
> > > > Date: Sat, 24 Feb 2018 16:49:21 +0900
> > > > Subject: [PATCH v2] mm,page_alloc: wait for oom_lock than back off
> > > > 
> > > > This patch fixes a bug which is essentially the same as the bug fixed by
> > > > commit 400e22499dd92613 ("mm: don't warn about allocations which stall for
> > > > too long").
> > > > 
> > > > Currently __alloc_pages_may_oom() uses mutex_trylock(&oom_lock), based on
> > > > the assumption that the owner of oom_lock is making progress for us. But
> > > > it is possible to trigger an OOM lockup when many threads concurrently
> > > > call __alloc_pages_slowpath(), because all CPU resources are wasted on
> > > > pointless direct reclaim efforts. That is,
> > > > schedule_timeout_uninterruptible(1) in __alloc_pages_may_oom() does not
> > > > always give enough CPU resources to the owner of oom_lock.
> > > > 
> > > > It is possible that the owner of oom_lock is preempted by other threads.
> > > > Preemption makes the OOM situation much worse. But the page allocator is
> > > > not responsible for CPU resources wasted on something other than memory
> > > > allocation requests. Wasting CPU resources on memory allocation requests
> > > > without allowing the owner of oom_lock to make forward progress is a page
> > > > allocator bug.
> > > > 
> > > > Therefore, this patch changes __alloc_pages_may_oom() to wait for
> > > > oom_lock, in order to guarantee that threads waiting for the owner of
> > > > oom_lock to make forward progress do not consume CPU resources on
> > > > pointless direct reclaim efforts.
> > > > 
> > > > We know that printk() in an OOM situation where a lot of threads are
> > > > almost busy-looping is a nightmare. As a side effect of this patch,
> > > > printk() with oom_lock held can start utilizing the CPU resources saved
> > > > by this patch (and suffer less preemption during printk(), making
> > > > printk() complete faster).
> > > > 
> > > > By replacing !mutex_trylock(&oom_lock) with mutex_lock_killable(&oom_lock),
> > > > it becomes possible for many threads to prevent the OOM reaper from
> > > > making forward progress. Thus, this patch removes mutex_lock(&oom_lock)
> > > > from the OOM reaper.
> > > > 
> > > > Also, since nobody uses oom_lock for serialization when setting
> > > > MMF_OOM_SKIP, and we don't retry the last second allocation attempt after
> > > > confirming that there is no !MMF_OOM_SKIP OOM victim, the possibility of
> > > > needlessly selecting more OOM victims increases if we continue using
> > > > ALLOC_WMARK_HIGH. Thus, this patch changes it to ALLOC_WMARK_MIN.
> > > > 
> > > > Also, we don't want to sleep with oom_lock held, so that threads waiting
> > > > at mutex_lock_killable(&oom_lock) can try the last second allocation
> > > > attempt (because the OOM reaper starts reclaiming memory without waiting
> > > > for oom_lock) and start selecting the next OOM victim if necessary.
> > > > Therefore, this patch moves the short sleep from inside oom_lock to
> > > > outside oom_lock.
> > > 
> > > This patch does three different things mangled into one patch. All that
> > > with a patch description which talks a lot but doesn't really explain
> > > those changes.
> > > 
> > > Moreover, you are effectively tuning for an artificial test case that
> > > overloads the page allocator, and adding a central lock where many tasks
> > > would block. I have already tried to explain that this is not a universal
> > > win, and you had better have a real life example where this is really
> > > helpful.
> > > 
> > > While I do agree that removing the oom_lock from __oom_reap_task_mm is a
> > > sensible thing, changing the last allocation attempt to ALLOC_WMARK_MIN is
> > > not all that straightforward, and it would require a much more detailed
> > > explanation.
> > > 
> > > So the patch in its current form is not mergeable IMHO.
> > 
> > Your comment is impossible to satisfy.
> > Please show me your version, for you are keeping me deadlocked.
> > 
> > I'm angry with the MM people's attitude of not being friendly to users who
> > are bothered by lockup / slowdown problems under memory pressure. They
> > just say "Your system is overloaded" and don't provide enough support for
> > checking whether those users are hitting a real bug rather than mere
> > overload.
> 
> You should listen much more and also try to understand the concerns that
> we have. You are trying to touch a very subtle piece of code. That code is
> not perfect at all, but it tends to work reasonably well under most
> workloads out there. Now you are trying to handle corner cases you are
> seeing, and that is a good thing in general. But the fix shouldn't
> introduce new risks, and adding a single synchronization point into the
> oom path is simply not without its own risks.

Then, please show me a real life example (by comparing trylock() versus
lock_killable()) where lock_killable() hurts. If you prove it, I'm willing
to make this optional (i.e. "use trylock() at the risk of a lockup, or use
lock_killable() at the risk of latency"). Without testing this patch in
the real world, we won't be able to prove it, though.
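
Such a comparison could even be prototyped in userspace before touching
the kernel; a minimal sketch (a hypothetical microbenchmark, not kernel
code; thread count, round count and delays are arbitrary, and the trylock
loser spins instead of sleeping a jiffy) contrasting the two contention
policies:

--------------------
/* Build with: cc -O2 -pthread demo.c */
#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>
#include <time.h>

#define NR_THREADS 64
#define NR_ROUNDS  100

static pthread_mutex_t mock_oom_lock = PTHREAD_MUTEX_INITIALIZER;
static atomic_long fake_reclaim_loops;
static int use_trylock; /* 1: back off like trylock, 0: wait for the lock */

static void *worker(void *arg)
{
	struct timespec cs = { 0, 1000000 }; /* 1ms mock out_of_memory() */

	(void)arg;
	for (int i = 0; i < NR_ROUNDS; i++) {
		if (use_trylock) {
			/* Loser burns CPU, like pointless direct reclaim. */
			while (pthread_mutex_trylock(&mock_oom_lock))
				atomic_fetch_add(&fake_reclaim_loops, 1);
		} else {
			/* Loser sleeps, leaving the CPU to the lock owner. */
			pthread_mutex_lock(&mock_oom_lock);
		}
		nanosleep(&cs, NULL); /* mock critical section */
		pthread_mutex_unlock(&mock_oom_lock);
	}
	return NULL;
}

int main(int argc, char **argv)
{
	pthread_t th[NR_THREADS];

	use_trylock = argc > 1; /* any argument selects trylock mode */
	for (int i = 0; i < NR_THREADS; i++)
		pthread_create(&th[i], NULL, worker, NULL);
	for (int i = 0; i < NR_THREADS; i++)
		pthread_join(th[i], NULL);
	printf("wasted fake reclaim iterations: %ld\n",
	       atomic_load(&fake_reclaim_loops));
	return 0;
}
--------------------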


^ permalink raw reply	[flat|nested] 26+ messages in thread

* [PATCH v3] mm,page_alloc: wait for oom_lock than back off
  2018-02-26 13:16                           ` Tetsuo Handa
@ 2018-03-02 11:10                             ` Tetsuo Handa
  2018-03-02 14:10                               ` Michal Hocko
  0 siblings, 1 reply; 26+ messages in thread
From: Tetsuo Handa @ 2018-03-02 11:10 UTC (permalink / raw)
  To: mhocko; +Cc: akpm, linux-mm, rientjes, hannes, guro, tj, vdavydov.dev, torvalds

>From e80aeb994a03c3ae108107ea4d4489bbd7d868e9 Mon Sep 17 00:00:00 2001
From: Tetsuo Handa <penguin-kernel@I-love.SAKURA.ne.jp>
Date: Fri, 2 Mar 2018 19:56:50 +0900
Subject: [PATCH v3] mm,page_alloc: wait for oom_lock than back off

This patch fixes a bug which is essentially the same as the bug fixed by
commit 400e22499dd92613 ("mm: don't warn about allocations which stall for
too long").

Currently __alloc_pages_may_oom() uses mutex_trylock(&oom_lock), based on
the assumption that the owner of oom_lock is making progress for us. But
it is possible to trigger an OOM lockup when many threads concurrently
call __alloc_pages_slowpath(), because all CPU resources are wasted on
pointless direct reclaim efforts. That is,
schedule_timeout_uninterruptible(1) in __alloc_pages_may_oom() does not
always give enough CPU resources to the owner of oom_lock.

It is possible that the owner of oom_lock is preempted by other threads.
Preemption makes the OOM situation much worse. But the page allocator is
not responsible for CPU resources wasted on something other than memory
allocation requests. Wasting CPU resources on memory allocation requests
without allowing the owner of oom_lock to make forward progress is a page
allocator bug.

Therefore, this patch changes __alloc_pages_may_oom() to wait for
oom_lock, in order to guarantee that threads waiting for the owner of
oom_lock to make forward progress do not consume CPU resources on
pointless direct reclaim efforts.

We know that printk() in an OOM situation where a lot of threads are
almost busy-looping is a nightmare. As a side effect of this patch,
printk() with oom_lock held can start utilizing the CPU resources saved
by this patch (and suffer less preemption during printk(), making
printk() complete faster).

By replacing !mutex_trylock(&oom_lock) with mutex_lock_killable(&oom_lock),
it becomes possible for many threads to prevent the OOM reaper from
making forward progress. Thus, this patch removes mutex_lock(&oom_lock)
from the OOM reaper.

Also, since nobody uses oom_lock for serialization when setting
MMF_OOM_SKIP, and we don't retry the last second allocation attempt after
confirming that there is no !MMF_OOM_SKIP OOM victim, the possibility of
needlessly selecting more OOM victims increases if we continue using
ALLOC_WMARK_HIGH. Thus, this patch changes it to ALLOC_WMARK_MIN.

Also, we don't want to sleep with oom_lock held, so that threads waiting
at mutex_lock_killable(&oom_lock) can try the last second allocation
attempt (because the OOM reaper starts reclaiming memory without waiting
for oom_lock) and start selecting the next OOM victim if necessary.
Therefore, this patch moves the short sleep from inside oom_lock to
outside oom_lock.

But since Michal is still worrying that adding a single synchronization
point into the OOM path is risky (without showing a real life example
where lock_killable() in the coldest OOM path hurts), the changes made by
this patch are enabled only when the oom_compat_mode=0 kernel command line
parameter is specified, so that users can test whether their workloads get
hurt by this patch.
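
As a usage note, testing the new behavior would then amount to appending
the parameter to the kernel command line, e.g. (an illustrative boot
loader entry; the kernel image path, version and root device are
hypothetical):

--------------------
linux /boot/vmlinuz-4.16.0-rc3 root=/dev/sda1 ro oom_compat_mode=0
--------------------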

Signed-off-by: Tetsuo Handa <penguin-kernel@I-love.SAKURA.ne.jp>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Michal Hocko <mhocko@suse.com>
Cc: David Rientjes <rientjes@google.com>
---
 include/linux/oom.h |  1 +
 mm/oom_kill.c       | 43 +++++++++++++++++----------
 mm/page_alloc.c     | 84 +++++++++++++++++++++++++++++++++++++++++------------
 3 files changed, 95 insertions(+), 33 deletions(-)

diff --git a/include/linux/oom.h b/include/linux/oom.h
index d4d41c0..58bfda1 100644
--- a/include/linux/oom.h
+++ b/include/linux/oom.h
@@ -125,4 +125,5 @@ extern unsigned long oom_badness(struct task_struct *p,
 extern int sysctl_oom_dump_tasks;
 extern int sysctl_oom_kill_allocating_task;
 extern int sysctl_panic_on_oom;
+extern int oom_compat_mode;
 #endif /* _INCLUDE_LINUX_OOM_H */
diff --git a/mm/oom_kill.c b/mm/oom_kill.c
index 8219001..f3afcba 100644
--- a/mm/oom_kill.c
+++ b/mm/oom_kill.c
@@ -55,6 +55,17 @@
 
 DEFINE_MUTEX(oom_lock);
 
+int oom_compat_mode __ro_after_init = 1;
+static int __init oom_compat_mode_setup(char *str)
+{
+	int rc = kstrtoint(str, 0, &oom_compat_mode);
+
+	if (rc)
+		return rc;
+	return 1;
+}
+__setup("oom_compat_mode=", oom_compat_mode_setup);
+
 #ifdef CONFIG_NUMA
 /**
  * has_intersects_mems_allowed() - check task eligiblity for kill
@@ -492,20 +503,19 @@ static bool __oom_reap_task_mm(struct task_struct *tsk, struct mm_struct *mm)
 	bool ret = true;
 
 	/*
-	 * We have to make sure to not race with the victim exit path
-	 * and cause premature new oom victim selection:
-	 * __oom_reap_task_mm		exit_mm
-	 *   mmget_not_zero
-	 *				  mmput
-	 *				    atomic_dec_and_test
-	 *				  exit_oom_victim
-	 *				[...]
-	 *				out_of_memory
-	 *				  select_bad_process
-	 *				    # no TIF_MEMDIE task selects new victim
-	 *  unmap_page_range # frees some memory
+	 * We have to make sure to not cause premature new oom victim selection:
+	 * __alloc_pages_may_oom()     __oom_reap_task_mm()
+	 *   mutex_trylock(&oom_lock)   # succeeds
+	 *                               unmap_page_range() # frees some memory
+	 *   get_page_from_freelist(ALLOC_WMARK_HIGH) # fails
+	 *                               set_bit(MMF_OOM_SKIP)
+	 *   out_of_memory()
+	 *     select_bad_process()
+	 *       test_bit(MMF_OOM_SKIP) # selects new oom victim
+	 *   mutex_unlock(&oom_lock)
 	 */
-	mutex_lock(&oom_lock);
+	if (oom_compat_mode)
+		mutex_lock(&oom_lock);
 
 	if (!down_read_trylock(&mm->mmap_sem)) {
 		ret = false;
@@ -581,7 +591,8 @@ static bool __oom_reap_task_mm(struct task_struct *tsk, struct mm_struct *mm)
 
 	trace_finish_task_reaping(tsk->pid);
 unlock_oom:
-	mutex_unlock(&oom_lock);
+	if (oom_compat_mode)
+		mutex_unlock(&oom_lock);
 	return ret;
 }
 
@@ -1150,7 +1161,7 @@ bool out_of_memory(struct oom_control *oc)
 	 * Give the killed process a good chance to exit before trying
 	 * to allocate memory again.
 	 */
-	if (delay)
+	if (delay && oom_compat_mode)
 		schedule_timeout_killable(1);
 
 	return !!oc->chosen_task;
@@ -1178,4 +1189,6 @@ void pagefault_out_of_memory(void)
 		return;
 	out_of_memory(&oc);
 	mutex_unlock(&oom_lock);
+	if (!oom_compat_mode)
+		schedule_timeout_killable(1);
 }
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 2836bc9..536eee9 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -3438,26 +3438,68 @@ void warn_alloc(gfp_t gfp_mask, nodemask_t *nodemask, const char *fmt, ...)
 
 	*did_some_progress = 0;
 
-	/*
-	 * Acquire the oom lock.  If that fails, somebody else is
-	 * making progress for us.
-	 */
-	if (!mutex_trylock(&oom_lock)) {
-		*did_some_progress = 1;
-		schedule_timeout_uninterruptible(1);
-		return NULL;
+	if (oom_compat_mode) {
+		/*
+		 * Acquire the oom lock. If that fails, somebody else is making
+		 * progress for us. But that assumption does not hold true if
+		 * we are depriving the owner of the oom lock of the CPU
+		 * resources by retrying direct reclaim/compaction.
+		 */
+		if (!mutex_trylock(&oom_lock)) {
+			*did_some_progress = 1;
+			schedule_timeout_uninterruptible(1);
+			return NULL;
+		}
+#ifdef CONFIG_PROVE_LOCKING
+		/*
+		 * Teach the lockdep that mutex_trylock() above acts like
+		 * mutex_lock(), for we are not allowed to depend on
+		 * __GFP_DIRECT_RECLAIM && !__GFP_NORETRY allocation here.
+		 */
+		mutex_release(&oom_lock.dep_map, 1, _THIS_IP_);
+		mutex_acquire(&oom_lock.dep_map, 0, 0, _THIS_IP_);
+#endif
+		/*
+		 * Go through the zonelist yet one more time, keep very high
+		 * watermark here, this is only to catch a parallel oom
+		 * killing, we must fail if we're still under heavy pressure.
+		 * But make sure that this allocation attempt must not depend
+		 * on __GFP_DIRECT_RECLAIM && !__GFP_NORETRY allocation which
+		 * will never fail due to oom_lock already held.
+		 */
+		page = get_page_from_freelist((gfp_mask | __GFP_HARDWALL) &
+					      ~__GFP_DIRECT_RECLAIM, order,
+					      ALLOC_WMARK_HIGH | ALLOC_CPUSET,
+					      ac);
+	} else {
+		/*
+		 * Wait for the oom lock, in order to make sure that we won't
+		 * deprive the owner of the oom lock of CPU resources for
+		 * making progress for us.
+		 */
+		if (mutex_lock_killable(&oom_lock)) {
+			*did_some_progress = 1;
+			return NULL;
+		}
+		/*
+		 * This allocation attempt must not depend on
+		 * __GFP_DIRECT_RECLAIM && !__GFP_NORETRY allocation which will
+		 * never fail due to oom_lock already held.
+		 *
+		 * Since neither the OOM reaper nor exit_mmap() waits for
+		 * oom_lock when setting MMF_OOM_SKIP on the OOM victim's mm,
+		 * we might needlessly select more OOM victims if we use
+		 * ALLOC_WMARK_HIGH here. But since this allocation attempt
+		 * does not sleep, we will not fail to invoke the OOM killer
+		 * even if we choose ALLOC_WMARK_MIN here. Thus, we use
+		 * ALLOC_WMARK_MIN here.
+		 */
+		page = get_page_from_freelist((gfp_mask | __GFP_HARDWALL) &
+					      ~__GFP_DIRECT_RECLAIM, order,
+					      ALLOC_WMARK_MIN | ALLOC_CPUSET,
+					      ac);
 	}
 
-	/*
-	 * Go through the zonelist yet one more time, keep very high watermark
-	 * here, this is only to catch a parallel oom killing, we must fail if
-	 * we're still under heavy pressure. But make sure that this reclaim
-	 * attempt shall not depend on __GFP_DIRECT_RECLAIM && !__GFP_NORETRY
-	 * allocation which will never fail due to oom_lock already held.
-	 */
-	page = get_page_from_freelist((gfp_mask | __GFP_HARDWALL) &
-				      ~__GFP_DIRECT_RECLAIM, order,
-				      ALLOC_WMARK_HIGH|ALLOC_CPUSET, ac);
 	if (page)
 		goto out;
 
@@ -4205,6 +4247,12 @@ bool gfp_pfmemalloc_allowed(gfp_t gfp_mask)
 	/* Retry as long as the OOM killer is making progress */
 	if (did_some_progress) {
 		no_progress_loops = 0;
+		/*
+		 * This schedule_timeout_*() serves as a guaranteed sleep for
+		 * PF_WQ_WORKER threads when __zone_watermark_ok() == false.
+		 */
+		if (!oom_compat_mode && !tsk_is_oom_victim(current))
+			schedule_timeout_uninterruptible(1);
 		goto retry;
 	}
 
-- 
1.8.3.1
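
A remark on the CONFIG_PROVE_LOCKING hunk above: the mutex_release()/
mutex_acquire() pair re-records the trylock acquisition in lockdep's
bookkeeping as if it were a blocking mutex_lock(), so dependency cycles
through oom_lock are still validated. In isolation the idiom looks like
this (a sketch of the annotation pattern only; the helper name is
hypothetical):

--------------------
static bool last_alloc_lock_oom_lock(void)
{
	if (!mutex_trylock(&oom_lock))
		return false;
	/*
	 * lockdep validates trylocks only loosely, because a trylock by
	 * itself cannot deadlock.  This path must not depend on anybody
	 * else releasing oom_lock, so drop the trylock acquisition from
	 * lockdep's ledger and re-record it as a blocking acquisition.
	 */
	mutex_release(&oom_lock.dep_map, 1, _THIS_IP_);
	mutex_acquire(&oom_lock.dep_map, 0, 0, _THIS_IP_);
	return true;
}
--------------------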


^ permalink raw reply related	[flat|nested] 26+ messages in thread

* Re: [PATCH v3] mm,page_alloc: wait for oom_lock than back off
  2018-03-02 11:10                             ` [PATCH v3] " Tetsuo Handa
@ 2018-03-02 14:10                               ` Michal Hocko
  2018-03-03  3:15                                 ` Tetsuo Handa
  0 siblings, 1 reply; 26+ messages in thread
From: Michal Hocko @ 2018-03-02 14:10 UTC (permalink / raw)
  To: Tetsuo Handa
  Cc: akpm, linux-mm, rientjes, hannes, guro, tj, vdavydov.dev, torvalds

On Fri 02-03-18 20:10:19, Tetsuo Handa wrote:
> >From e80aeb994a03c3ae108107ea4d4489bbd7d868e9 Mon Sep 17 00:00:00 2001
> From: Tetsuo Handa <penguin-kernel@I-love.SAKURA.ne.jp>
> Date: Fri, 2 Mar 2018 19:56:50 +0900
> Subject: [PATCH v3] mm,page_alloc: wait for oom_lock than back off
> 
> This patch fixes a bug which is essentially the same as the bug fixed by
> commit 400e22499dd92613 ("mm: don't warn about allocations which stall for
> too long").
> 
> Currently __alloc_pages_may_oom() uses mutex_trylock(&oom_lock), based on
> the assumption that the owner of oom_lock is making progress for us. But
> it is possible to trigger an OOM lockup when many threads concurrently
> call __alloc_pages_slowpath(), because all CPU resources are wasted on
> pointless direct reclaim efforts. That is,
> schedule_timeout_uninterruptible(1) in __alloc_pages_may_oom() does not
> always give enough CPU resources to the owner of oom_lock.
> 
> It is possible that the owner of oom_lock is preempted by other threads.
> Preemption makes the OOM situation much worse. But the page allocator is
> not responsible for CPU resources wasted on something other than memory
> allocation requests. Wasting CPU resources on memory allocation requests
> without allowing the owner of oom_lock to make forward progress is a page
> allocator bug.
> 
> Therefore, this patch changes __alloc_pages_may_oom() to wait for
> oom_lock, in order to guarantee that threads waiting for the owner of
> oom_lock to make forward progress do not consume CPU resources on
> pointless direct reclaim efforts.
> 
> We know that printk() in an OOM situation where a lot of threads are
> almost busy-looping is a nightmare. As a side effect of this patch,
> printk() with oom_lock held can start utilizing the CPU resources saved
> by this patch (and suffer less preemption during printk(), making
> printk() complete faster).
> 
> By replacing !mutex_trylock(&oom_lock) with mutex_lock_killable(&oom_lock),
> it becomes possible for many threads to prevent the OOM reaper from
> making forward progress. Thus, this patch removes mutex_lock(&oom_lock)
> from the OOM reaper.
> 
> Also, since nobody uses oom_lock for serialization when setting
> MMF_OOM_SKIP, and we don't retry the last second allocation attempt after
> confirming that there is no !MMF_OOM_SKIP OOM victim, the possibility of
> needlessly selecting more OOM victims increases if we continue using
> ALLOC_WMARK_HIGH. Thus, this patch changes it to ALLOC_WMARK_MIN.
> 
> Also, we don't want to sleep with oom_lock held, so that threads waiting
> at mutex_lock_killable(&oom_lock) can try the last second allocation
> attempt (because the OOM reaper starts reclaiming memory without waiting
> for oom_lock) and start selecting the next OOM victim if necessary.
> Therefore, this patch moves the short sleep from inside oom_lock to
> outside oom_lock.
> 
> But since Michal is still worrying that adding a single synchronization
> point into the OOM path is risky (without showing a real life example
> where lock_killable() in the coldest OOM path hurts), the changes made by
> this patch are enabled only when the oom_compat_mode=0 kernel command line
> parameter is specified, so that users can test whether their workloads get
> hurt by this patch.
> 
> Signed-off-by: Tetsuo Handa <penguin-kernel@I-love.SAKURA.ne.jp>
> Cc: Johannes Weiner <hannes@cmpxchg.org>
> Cc: Michal Hocko <mhocko@suse.com>
> Cc: David Rientjes <rientjes@google.com>

Nacked with passion. This is absolutely hideous. First of all, there is
absolutely no need for the kernel command line parameter. That is just
trying to dance around the fact that you are not able to argue for the
change and bring reasonable arguments to the table. We definitely do not
want two subtly different modes for the oom handling. Secondly, and
repeatedly, you are squashing multiple changes into a single patch. And
finally, this is too big of a hammer for something that doesn't even solve
the problem for PREEMPTIVE kernels, which are free to schedule regardless
of the sleep or the reclaim retry you are so passionate about.
-- 
Michal Hocko
SUSE Labs


^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [PATCH v3] mm,page_alloc: wait for oom_lock than back off
  2018-03-02 14:10                               ` Michal Hocko
@ 2018-03-03  3:15                                 ` Tetsuo Handa
  2018-03-21 10:39                                   ` Tetsuo Handa
  0 siblings, 1 reply; 26+ messages in thread
From: Tetsuo Handa @ 2018-03-03  3:15 UTC (permalink / raw)
  To: mhocko; +Cc: akpm, linux-mm, rientjes, hannes, guro, tj, vdavydov.dev, torvalds

Michal Hocko wrote:
> > But since Michal is still worrying that adding a single synchronization
> > point into the OOM path is risky (without showing a real life example
> > where lock_killable() in the coldest OOM path hurts), the changes made by
> > this patch are enabled only when the oom_compat_mode=0 kernel command line
> > parameter is specified, so that users can test whether their workloads get
> > hurt by this patch.
> > 
> Nacked with passion. This is absolutely hideous. First of all, there is
> absolutely no need for the kernel command line parameter. That is just
> trying to dance around the fact that you are not able to argue for the
> change and bring reasonable arguments to the table. We definitely do not
> want two subtly different modes for the oom handling. Secondly, and
> repeatedly, you are squashing multiple changes into a single patch. And
> finally, this is too big of a hammer for something that doesn't even solve
> the problem for PREEMPTIVE kernels, which are free to schedule regardless
> of the sleep or the reclaim retry you are so passionate about.

So, where is your version? Offload to a kernel thread like the OOM reaper?
Get rid of oom_lock? Just rejecting my proposal makes no progress.


^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [PATCH v3] mm,page_alloc: wait for oom_lock than back off
  2018-03-03  3:15                                 ` Tetsuo Handa
@ 2018-03-21 10:39                                   ` Tetsuo Handa
  2018-03-21 11:21                                     ` Michal Hocko
  0 siblings, 1 reply; 26+ messages in thread
From: Tetsuo Handa @ 2018-03-21 10:39 UTC (permalink / raw)
  To: mhocko; +Cc: akpm, linux-mm, rientjes, hannes, guro, tj, vdavydov.dev, torvalds

Tetsuo Handa wrote:
> Michal Hocko wrote:
> > > But since Michal is still worrying that adding a single synchronization
> > > point into the OOM path is risky (without showing a real life example
> > > where lock_killable() in the coldest OOM path hurts), the changes made by
> > > this patch are enabled only when the oom_compat_mode=0 kernel command line
> > > parameter is specified, so that users can test whether their workloads get
> > > hurt by this patch.
> > > 
> > Nacked with passion. This is absolutely hideous. First of all, there is
> > absolutely no need for the kernel command line parameter. That is just
> > trying to dance around the fact that you are not able to argue for the
> > change and bring reasonable arguments to the table. We definitely do not
> > want two subtly different modes for the oom handling. Secondly, and
> > repeatedly, you are squashing multiple changes into a single patch. And
> > finally, this is too big of a hammer for something that doesn't even solve
> > the problem for PREEMPTIVE kernels, which are free to schedule regardless
> > of the sleep or the reclaim retry you are so passionate about.
> 
> So, where is your version? Offload to a kernel thread like the OOM reaper?
> Get rid of oom_lock? Just rejecting my proposal makes no progress.
> 
Did you come up with some idea?
Even with CONFIG_PREEMPT=y, as far as I tested, the v2 patch significantly
reduces stalls compared to now. I believe there is no valid reason not to
test my v2 patch in linux-next.

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [PATCH v3] mm,page_alloc: wait for oom_lock than back off
  2018-03-21 10:39                                   ` Tetsuo Handa
@ 2018-03-21 11:21                                     ` Michal Hocko
  2018-03-21 11:35                                       ` Tetsuo Handa
  0 siblings, 1 reply; 26+ messages in thread
From: Michal Hocko @ 2018-03-21 11:21 UTC (permalink / raw)
  To: Tetsuo Handa
  Cc: akpm, linux-mm, rientjes, hannes, guro, tj, vdavydov.dev, torvalds

On Wed 21-03-18 19:39:32, Tetsuo Handa wrote:
> Tetsuo Handa wrote:
> > Michal Hocko wrote:
> > > > But since Michal is still worrying that adding a single synchronization
> > > > point into the OOM path is risky (without showing a real life example
> > > > where lock_killable() in the coldest OOM path hurts), the changes made by
> > > > this patch are enabled only when the oom_compat_mode=0 kernel command line
> > > > parameter is specified, so that users can test whether their workloads get
> > > > hurt by this patch.
> > > > 
> > > Nacked with passion. This is absolutely hideous. First of all, there is
> > > absolutely no need for the kernel command line parameter. That is just
> > > trying to dance around the fact that you are not able to argue for the
> > > change and bring reasonable arguments to the table. We definitely do not
> > > want two subtly different modes for the oom handling. Secondly, and
> > > repeatedly, you are squashing multiple changes into a single patch. And
> > > finally, this is too big of a hammer for something that doesn't even solve
> > > the problem for PREEMPTIVE kernels, which are free to schedule regardless
> > > of the sleep or the reclaim retry you are so passionate about.
> > 
> > So, where is your version? Offload to a kernel thread like the OOM reaper?
> > Get rid of oom_lock? Just rejecting my proposal makes no progress.
> > 
> Did you come up with some idea?
> Even with CONFIG_PREEMPT=y, as far as I tested, the v2 patch significantly
> reduces stalls compared to now. I believe there is no valid reason not to
> test my v2 patch in linux-next.

There are and I've mentioned them in my review feedback.

-- 
Michal Hocko
SUSE Labs

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [PATCH v3] mm,page_alloc: wait for oom_lock than back off
  2018-03-21 11:21                                     ` Michal Hocko
@ 2018-03-21 11:35                                       ` Tetsuo Handa
  2018-03-21 12:00                                         ` Michal Hocko
  0 siblings, 1 reply; 26+ messages in thread
From: Tetsuo Handa @ 2018-03-21 11:35 UTC (permalink / raw)
  To: mhocko; +Cc: akpm, linux-mm, rientjes, hannes, guro, tj, vdavydov.dev, torvalds

Michal Hocko wrote:
> On Wed 21-03-18 19:39:32, Tetsuo Handa wrote:
> > Tetsuo Handa wrote:
> > > Michal Hocko wrote:
> > > > > But since Michal is still worrying that adding a single synchronization
> > > > > point into the OOM path is risky (without showing a real life example
> > > > > where lock_killable() in the coldest OOM path hurts), the changes made by
> > > > > this patch are enabled only when the oom_compat_mode=0 kernel command line
> > > > > parameter is specified, so that users can test whether their workloads get
> > > > > hurt by this patch.
> > > > > 
> > > > Nacked with passion. This is absolutely hideous. First of all, there is
> > > > absolutely no need for the kernel command line parameter. That is just
> > > > trying to dance around the fact that you are not able to argue for the
> > > > change and bring reasonable arguments to the table. We definitely do not
> > > > want two subtly different modes for the oom handling. Secondly, and
> > > > repeatedly, you are squashing multiple changes into a single patch. And
> > > > finally, this is too big of a hammer for something that doesn't even solve
> > > > the problem for PREEMPTIVE kernels, which are free to schedule regardless
> > > > of the sleep or the reclaim retry you are so passionate about.
> > > 
> > > So, where is your version? Offload to a kernel thread like the OOM reaper?
> > > Get rid of oom_lock? Just rejecting my proposal makes no progress.
> > > 
> > Did you come up with some idea?
> > Even with CONFIG_PREEMPT=y, as far as I tested, the v2 patch significantly
> > reduces stalls compared to now. I believe there is no valid reason not to
> > test my v2 patch in linux-next.
> 
> There are and I've mentioned them in my review feedback.
> 
Where? When I tried to disable preemption while oom_lock is held, you
suggested not disabling preemption. Thus, I followed your feedback. Now
you complain about preemption again.

When I tried to replace only mutex_trylock() with mutex_lock_killable()
in v1, you said we need follow-up changes. Thus, I added follow-up changes
in v2.

What is still missing? I can't understand what you are saying.

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [PATCH v3] mm,page_alloc: wait for oom_lock than back off
  2018-03-21 11:35                                       ` Tetsuo Handa
@ 2018-03-21 12:00                                         ` Michal Hocko
  2018-03-21 12:20                                           ` Tetsuo Handa
  0 siblings, 1 reply; 26+ messages in thread
From: Michal Hocko @ 2018-03-21 12:00 UTC (permalink / raw)
  To: Tetsuo Handa
  Cc: akpm, linux-mm, rientjes, hannes, guro, tj, vdavydov.dev, torvalds

On Wed 21-03-18 20:35:47, Tetsuo Handa wrote:
> Michal Hocko wrote:
> > On Wed 21-03-18 19:39:32, Tetsuo Handa wrote:
> > > Tetsuo Handa wrote:
> > > > Michal Hocko wrote:
> > > > > > But since Michal is still worrying that adding a single synchronization
> > > > > > point into the OOM path is risky (without showing a real life example
> > > > > > where lock_killable() in the coldest OOM path hurts), the changes made by
> > > > > > this patch are enabled only when the oom_compat_mode=0 kernel command line
> > > > > > parameter is specified, so that users can test whether their workloads get
> > > > > > hurt by this patch.
> > > > > > 
> > > > > Nacked with passion. This is absolutely hideous. First of all, there is
> > > > > absolutely no need for the kernel command line parameter. That is just
> > > > > trying to dance around the fact that you are not able to argue for the
> > > > > change and bring reasonable arguments to the table. We definitely do not
> > > > > want two subtly different modes for the oom handling. Secondly, and
> > > > > repeatedly, you are squashing multiple changes into a single patch. And
> > > > > finally, this is too big of a hammer for something that doesn't even solve
> > > > > the problem for PREEMPTIVE kernels, which are free to schedule regardless
> > > > > of the sleep or the reclaim retry you are so passionate about.
> > > > 
> > > > So, where is your version? Offload to a kernel thread like the OOM reaper?
> > > > Get rid of oom_lock? Just rejecting my proposal makes no progress.
> > > > 
> > > Did you come up with some idea?
> > > Even with CONFIG_PREEMPT=y, as far as I tested, the v2 patch significantly
> > > reduces stalls compared to now. I believe there is no valid reason not to
> > > test my v2 patch in linux-next.
> > 
> > There are and I've mentioned them in my review feedback.
> > 
> Where? When I tried to disable preemption while oom_lock is held, you
> suggested not disabling preemption. Thus, I followed your feedback. Now
> you complain about preemption again.
> 
> When I tried to replace only mutex_trylock() with mutex_lock_killable()
> in v1, you said we need follow-up changes. Thus, I added follow-up changes
> in v2.
> 
> What is still missing? I can't understand what you are saying.

http://lkml.kernel.org/r/20180302141000.GB12772@dhcp22.suse.cz

There are several points I really disliked. Ignoring them is not going
to move this work forward.

-- 
Michal Hocko
SUSE Labs

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [PATCH v3] mm,page_alloc: wait for oom_lock than back off
  2018-03-21 12:00                                         ` Michal Hocko
@ 2018-03-21 12:20                                           ` Tetsuo Handa
  2018-03-21 12:31                                             ` Michal Hocko
  0 siblings, 1 reply; 26+ messages in thread
From: Tetsuo Handa @ 2018-03-21 12:20 UTC (permalink / raw)
  To: mhocko; +Cc: akpm, linux-mm, rientjes, hannes, guro, tj, vdavydov.dev, torvalds

Michal Hocko wrote:
> On Wed 21-03-18 20:35:47, Tetsuo Handa wrote:
> > Michal Hocko wrote:
> > > On Wed 21-03-18 19:39:32, Tetsuo Handa wrote:
> > > > Tetsuo Handa wrote:
> > > > > Michal Hocko wrote:
> > > > > > > But since Michal is still worrying that adding a single synchronization
> > > > > > > point into the OOM path is risky (without showing a real life example
> > > > > > > where lock_killable() in the coldest OOM path hurts), the changes made by
> > > > > > > this patch are enabled only when the oom_compat_mode=0 kernel command line
> > > > > > > parameter is specified, so that users can test whether their workloads get
> > > > > > > hurt by this patch.
> > > > > > > 
> > > > > > Nacked with passion. This is absolutely hideous. First of all, there
> > > > > > is absolutely no need for the kernel command line parameter. That is
> > > > > > just trying to dance around the fact that you are not able to argue
> > > > > > for the change and bring reasonable arguments to the table. We
> > > > > > definitely do not want two subtly different modes for the oom
> > > > > > handling. Secondly, and repeatedly, you are squashing multiple
> > > > > > changes into a single patch. And finally, this is too big of a hammer
> > > > > > for something that doesn't even solve the problem for preemptible
> > > > > > kernels, which are free to schedule regardless of the sleep or the
> > > > > > reclaim retry you are so passionate about.
> > > > > 
> > > > > So, where is your version? Offload to a kernel thread like the OOM reaper?
> > > > > Get rid of oom_lock? Just rejecting my proposal makes no progress.
> > > > > 
> > > > Did you come up with some idea?
> > > > Even with CONFIG_PREEMPT=y, as far as I tested, the v2 patch reduces
> > > > stalls significantly compared to the current code.
> > > > I believe there is no valid reason not to test my v2 patch in linux-next.
> > > 
> > > There are, and I've mentioned them in my review feedback.
> > > 
> > Where? When I tried to disable preemption while oom_lock was held,
> > you suggested not to disable preemption. Thus, I followed your feedback.
> > Now you complain about preemption again.
> > 
> > When I tried to replace only mutex_trylock() with mutex_lock_killable()
> > in v1, you said we needed follow-up changes. Thus, I added follow-up
> > changes in v2.
> > 
> > What is still missing? I can't understand what you are saying.
> 
> http://lkml.kernel.org/r/20180302141000.GB12772@dhcp22.suse.cz
> 
> There are several points I really disliked. Ignoring them is not going
> to move this work forward.

"And finally this is too big of a hammer for something that even doesn't solve
the problem for PREEMPTIVE kernels which are free to schedule regardless of the
sleep or the reclaim retry you are so passion about." is not a problem, for
preemption is there in the hope that preemption allows processes to do something
useful. But current code allows processes to simply waste CPU resources by
pointless direct reclaim and prevents the owner of oom_lock from making progress
(i.e. AB-BA deadlock).
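
To make this concrete, here is a minimal sketch of the pattern I am talking
about. It is modeled on the __alloc_pages_may_oom() / out_of_memory()
interaction of 4.15-era kernels, but it is simplified for illustration; the
helper name is mine and this is not the exact mainline code:

#include <linux/mutex.h>
#include <linux/oom.h>
#include <linux/sched.h>

/*
 * Simplified model of the trylock-and-back-off pattern.  Returns
 * false when the caller should retry direct reclaim.
 */
static bool oom_path_sketch(struct oom_control *oc)
{
	/*
	 * If another thread already holds oom_lock, back off into
	 * another round of direct reclaim.  Under heavy CPU pressure,
	 * many threads keep spinning through this path while the
	 * oom_lock owner sleeps at schedule_timeout_killable(1) with
	 * the lock held, so the OOM kill that would actually release
	 * memory can be delayed for minutes.
	 */
	if (!mutex_trylock(&oom_lock))
		return false;		/* back off; retry reclaim */

	out_of_memory(oc);		/* may sleep with oom_lock held */
	mutex_unlock(&oom_lock);
	return true;			/* caller retries the allocation */
}

Note that preemption does not fix this: preempted or not, the reclaim retry
loop does no useful work until the lock holder runs again.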

"Secondly, and repeatedly, you are squashing multiple changes into a single
patch." is a result of your feedback "This is not a solution without further
steps." While v2 patch will need to be split into multiple patches when merging,
you should give feedback based on what changes are missing. Doing multiple changes
into a single patch can be a reason not to merge but can not be a reason not to
test these changes.
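
For reference, the core of the v1 change was roughly the following single
hunk, waiting for oom_lock instead of backing off (reconstructed against the
4.15-era __alloc_pages_may_oom() for illustration; not an exact diff):

-	if (!mutex_trylock(&oom_lock)) {
-		*did_some_progress = 1;
-		schedule_timeout_uninterruptible(1);
-		return NULL;
-	}
+	/*
+	 * Wait for the current OOM killer invocation to finish instead
+	 * of burning CPU in another futile direct reclaim round.  Only
+	 * a fatal signal makes mutex_lock_killable() return nonzero.
+	 */
+	if (mutex_lock_killable(&oom_lock)) {
+		*did_some_progress = 1;
+		return NULL;
+	}

The v2 patch builds the follow-up steps on top of this core change.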

"We definitely do not want two subtly different modes for the oom handling." is
not there in v2 patch.

If you say "you are not able to argue for the change and bring reasonable
arguments to the table", then please show us arguments which are better than
mine. Nothing can justify the current code (i.e. the AB-BA deadlock). That
is what I am asking for with "So, where is your version?"

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [PATCH v3] mm,page_alloc: wait for oom_lock than back off
  2018-03-21 12:20                                           ` Tetsuo Handa
@ 2018-03-21 12:31                                             ` Michal Hocko
  0 siblings, 0 replies; 26+ messages in thread
From: Michal Hocko @ 2018-03-21 12:31 UTC (permalink / raw)
  To: Tetsuo Handa
  Cc: akpm, linux-mm, rientjes, hannes, guro, tj, vdavydov.dev, torvalds

On Wed 21-03-18 21:20:12, Tetsuo Handa wrote:
> Michal Hocko wrote:
> > On Wed 21-03-18 20:35:47, Tetsuo Handa wrote:
> > > Michal Hocko wrote:
> > > > On Wed 21-03-18 19:39:32, Tetsuo Handa wrote:
> > > > > Tetsuo Handa wrote:
> > > > > > Michal Hocko wrote:
> > > > > > > > But since Michal is still worried that adding a single
> > > > > > > > synchronization point into the OOM path is risky (without showing
> > > > > > > > a real-life example where lock_killable() in the coldest OOM path
> > > > > > > > hurts), the changes made by this patch will be enabled only when
> > > > > > > > the oom_compat_mode=0 kernel command line parameter is specified,
> > > > > > > > so that users can test whether their workloads are hurt by this
> > > > > > > > patch.
> > > > > > > > 
> > > > > > > Nacked with passion. This is absolutely hideous. First of all,
> > > > > > > there is absolutely no need for the kernel command line parameter.
> > > > > > > That is just trying to dance around the fact that you are not able
> > > > > > > to argue for the change and bring reasonable arguments to the table.
> > > > > > > We definitely do not want two subtly different modes for the oom
> > > > > > > handling. Secondly, and repeatedly, you are squashing multiple
> > > > > > > changes into a single patch. And finally, this is too big of a
> > > > > > > hammer for something that doesn't even solve the problem for
> > > > > > > preemptible kernels, which are free to schedule regardless of the
> > > > > > > sleep or the reclaim retry you are so passionate about.
> > > > > > 
> > > > > > So, where is your version? Offload to a kernel thread like the OOM reaper?
> > > > > > Get rid of oom_lock? Just rejecting my proposal makes no progress.
> > > > > > 
> > > > > Did you come up with some idea?
> > > > > Even with CONFIG_PREEMPT=y, as far as I tested, the v2 patch reduces
> > > > > stalls significantly compared to the current code.
> > > > > I believe there is no valid reason not to test my v2 patch in linux-next.
> > > > 
> > > > There are, and I've mentioned them in my review feedback.
> > > > 
> > > Where? When I tried to disable preemption while oom_lock was held,
> > > you suggested not to disable preemption. Thus, I followed your feedback.
> > > Now you complain about preemption again.
> > > 
> > > When I tried to replace only mutex_trylock() with mutex_lock_killable()
> > > in v1, you said we needed follow-up changes. Thus, I added follow-up
> > > changes in v2.
> > > 
> > > What is still missing? I can't understand what you are saying.
> > 
> > http://lkml.kernel.org/r/20180302141000.GB12772@dhcp22.suse.cz
> > 
> > There are several points I really disliked. Ignoring them is not going
> > to move this work forward.
> 
> "And finally this is too big of a hammer for something that even doesn't solve
> the problem for PREEMPTIVE kernels which are free to schedule regardless of the
> sleep or the reclaim retry you are so passion about." is not a problem, for
> preemption is there in the hope that preemption allows processes to do something
> useful. But current code allows processes to simply waste CPU resources by
> pointless direct reclaim and prevents the owner of oom_lock from making progress
> (i.e. AB-BA deadlock).
> 
> "Secondly, and repeatedly, you are squashing multiple changes into a single
> patch." is a result of your feedback "This is not a solution without further
> steps." While v2 patch will need to be split into multiple patches when merging,
> you should give feedback based on what changes are missing. Doing multiple changes
> into a single patch can be a reason not to merge but can not be a reason not to
> test these changes.
> 
> "We definitely do not want two subtly different modes for the oom handling." is
> not there in v2 patch.
> 
> If you say "you are not able to argue for the change and bring reasonable arguments
> on the table.", please show us your arguments which is better than mine. Nothing can
> justify current code (i.e. AB-BA deadlock). I'm asking your arguments by
> "So, where is your version?"

This is just a waste of time. I am done with this thread.

My nack still holds, and you should seriously reconsider the way you take
review feedback.

-- 
Michal Hocko
SUSE Labs

^ permalink raw reply	[flat|nested] 26+ messages in thread

end of thread, other threads:[~2018-03-21 12:31 UTC | newest]

Thread overview: 26+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2018-01-22 13:46 [PATCH] mm,oom: Don't call schedule_timeout_killable() with oom_lock held Tetsuo Handa
2018-01-23  8:38 ` Michal Hocko
2018-01-23 12:07   ` Tetsuo Handa
2018-01-23 12:42     ` Michal Hocko
2018-01-24 13:28       ` Tetsuo Handa
2018-02-13 11:58         ` Tetsuo Handa
2018-02-20 13:32           ` [PATCH] mm,page_alloc: wait for oom_lock than back off Tetsuo Handa
2018-02-20 13:40             ` Matthew Wilcox
2018-02-20 14:12               ` Tetsuo Handa
2018-02-20 14:49             ` Michal Hocko
2018-02-21 14:27               ` Tetsuo Handa
2018-02-22 13:06                 ` Michal Hocko
2018-02-24  8:00                   ` [PATCH v2] " Tetsuo Handa
2018-02-26  9:27                     ` Michal Hocko
2018-02-26 10:58                       ` Tetsuo Handa
2018-02-26 12:19                         ` Michal Hocko
2018-02-26 13:16                           ` Tetsuo Handa
2018-03-02 11:10                             ` [PATCH v3] " Tetsuo Handa
2018-03-02 14:10                               ` Michal Hocko
2018-03-03  3:15                                 ` Tetsuo Handa
2018-03-21 10:39                                   ` Tetsuo Handa
2018-03-21 11:21                                     ` Michal Hocko
2018-03-21 11:35                                       ` Tetsuo Handa
2018-03-21 12:00                                         ` Michal Hocko
2018-03-21 12:20                                           ` Tetsuo Handa
2018-03-21 12:31                                             ` Michal Hocko
