* [PATCH] mm,oom: Teach lockdep about oom_lock.
@ 2019-03-08 10:22 Tetsuo Handa
From: Tetsuo Handa @ 2019-03-08 10:22 UTC
  To: akpm; +Cc: linux-mm, Tetsuo Handa

Since we are not allowed to depend on blocking memory allocations when
oom_lock is already held, teach lockdep to consider that blocking memory
allocations might wait for oom_lock at as early a location as possible, and
teach lockdep to consider that oom_lock is acquired via mutex_lock() rather
than via mutex_trylock() (a trylock cannot block, so lockdep would otherwise
not check lock dependencies against it).

Also, since the OOM killer is disabled until the OOM reaper or exit_mmap()
sets MMF_OOM_SKIP, teach lockdep to consider that oom_lock is held when
__oom_reap_task_mm() is called.

This patch should not cause lockdep splats unless somebody is doing
something dangerous (e.g. blocking memory allocations from OOM notifiers or
from the OOM reaper).
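
For reference, the dep_map games below follow two patterns. A back-to-back
mutex_acquire()/mutex_release() on oom_lock's dep_map, without taking the
mutex, records "this context might block on oom_lock" for lockdep. After a
successful mutex_trylock(), a mutex_release()/mutex_acquire() pair converts
the trylock acquisition into a normal one, so that locks taken while holding
oom_lock are checked against contexts that wait for it. A minimal sketch of
the first pattern (the helper name is illustrative only, not part of this
patch; it assumes the 3-argument mutex_release() of this tree):

  #include <linux/lockdep.h>
  #include <linux/mutex.h>
  #include <linux/oom.h>	/* declares oom_lock */

  static inline void hint_might_wait_for_oom_lock(void)
  {
  	/*
  	 * Acquire and immediately release only the dep_map: lockdep sees
  	 * a potential blocking acquisition of oom_lock, but the mutex
  	 * itself is never taken.
  	 */
  	mutex_acquire(&oom_lock.dep_map, 0, 0, _THIS_IP_);
  	mutex_release(&oom_lock.dep_map, 1, _THIS_IP_);
  }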

Signed-off-by: Tetsuo Handa <penguin-kernel@I-love.SAKURA.ne.jp>
---
 mm/oom_kill.c   |  9 ++++++++-
 mm/page_alloc.c | 13 +++++++++++++
 2 files changed, 21 insertions(+), 1 deletion(-)

diff --git a/mm/oom_kill.c b/mm/oom_kill.c
index 3a24848..759aa4e 100644
--- a/mm/oom_kill.c
+++ b/mm/oom_kill.c
@@ -513,6 +513,7 @@ bool __oom_reap_task_mm(struct mm_struct *mm)
 	 */
 	set_bit(MMF_UNSTABLE, &mm->flags);
 
+	mutex_acquire(&oom_lock.dep_map, 0, 0, _THIS_IP_);
 	for (vma = mm->mmap ; vma; vma = vma->vm_next) {
 		if (!can_madv_dontneed_vma(vma))
 			continue;
@@ -544,6 +545,7 @@ bool __oom_reap_task_mm(struct mm_struct *mm)
 			tlb_finish_mmu(&tlb, range.start, range.end);
 		}
 	}
+	mutex_release(&oom_lock.dep_map, 1, _THIS_IP_);
 
 	return ret;
 }
@@ -1120,8 +1122,13 @@ void pagefault_out_of_memory(void)
 	if (mem_cgroup_oom_synchronize(true))
 		return;
 
-	if (!mutex_trylock(&oom_lock))
+	if (!mutex_trylock(&oom_lock)) {
+		mutex_acquire(&oom_lock.dep_map, 0, 0, _THIS_IP_);
+		mutex_release(&oom_lock.dep_map, 1, _THIS_IP_);
 		return;
+	}
+	mutex_release(&oom_lock.dep_map, 1, _THIS_IP_);
+	mutex_acquire(&oom_lock.dep_map, 0, 0, _THIS_IP_);
 	out_of_memory(&oc);
 	mutex_unlock(&oom_lock);
 }
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 6d0fa5b..25533214 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -3793,6 +3793,8 @@ __alloc_pages_may_oom(gfp_t gfp_mask, unsigned int order,
 		schedule_timeout_uninterruptible(1);
 		return NULL;
 	}
+	mutex_release(&oom_lock.dep_map, 1, _THIS_IP_);
+	mutex_acquire(&oom_lock.dep_map, 0, 0, _THIS_IP_);
 
 	/*
 	 * Go through the zonelist yet one more time, keep very high watermark
@@ -4651,6 +4653,17 @@ static inline bool prepare_alloc_pages(gfp_t gfp_mask, unsigned int order,
 	fs_reclaim_acquire(gfp_mask);
 	fs_reclaim_release(gfp_mask);
 
+	/*
+	 * Allocation requests which can call __alloc_pages_may_oom() might
+	 * effectively end up waiting for oom_lock before they can fail.
+	 */
+	if ((gfp_mask & __GFP_DIRECT_RECLAIM) && !(gfp_mask & __GFP_NORETRY) &&
+	    (!(gfp_mask & __GFP_RETRY_MAYFAIL) ||
+	     order <= PAGE_ALLOC_COSTLY_ORDER)) {
+		mutex_acquire(&oom_lock.dep_map, 0, 0, _THIS_IP_);
+		mutex_release(&oom_lock.dep_map, 1, _THIS_IP_);
+	}
+
 	might_sleep_if(gfp_mask & __GFP_DIRECT_RECLAIM);
 
 	if (should_fail_alloc_page(gfp_mask, order))
-- 
1.8.3.1
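
A side note on the gfp/order test added to prepare_alloc_pages() above: it
mirrors the conditions under which the allocation slowpath can end up in
__alloc_pages_may_oom(). A hedged sketch of the same predicate written as a
standalone helper (the helper name is illustrative and not part of the
patch):

  #include <linux/gfp.h>

  /* Sketch only: same condition as the prepare_alloc_pages() hunk above. */
  static inline bool alloc_may_reach_oom(gfp_t gfp_mask, unsigned int order)
  {
  	/* Only allocations allowed to direct-reclaim can invoke the OOM killer. */
  	if (!(gfp_mask & __GFP_DIRECT_RECLAIM))
  		return false;
  	/* __GFP_NORETRY requests bail out before __alloc_pages_may_oom(). */
  	if (gfp_mask & __GFP_NORETRY)
  		return false;
  	/* Costly __GFP_RETRY_MAYFAIL requests also give up without invoking OOM. */
  	if ((gfp_mask & __GFP_RETRY_MAYFAIL) && order > PAGE_ALLOC_COSTLY_ORDER)
  		return false;
  	return true;
  }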

