* [PATCH] hugetlb: select PREEMPT_COUNT if HUGETLB_PAGE for in_atomic use
@ 2021-03-11  2:13 Mike Kravetz
  2021-03-11  5:43 ` Andrew Morton
                   ` (3 more replies)
  0 siblings, 4 replies; 17+ messages in thread
From: Mike Kravetz @ 2021-03-11  2:13 UTC (permalink / raw)
  To: linux-mm, linux-kernel
  Cc: Michal Hocko, Paul E . McKenney, Shakeel Butt, tglx, john.ogness,
	urezki, ast, Eric Dumazet, Mina Almasry, peterz, Andrew Morton,
	Mike Kravetz

put_page does not correctly handle all calling contexts for hugetlb
pages.  This was recently discussed in the threads [1] and [2].

free_huge_page is the routine called for the final put_page of hugetlb
pages.  Since at least the beginning of git history, free_huge_page has
acquired the hugetlb_lock to move the page to a free list and possibly
perform other processing. When this code was originally written, the
hugetlb_lock should have been made irq safe.

For many years, nobody noticed this situation until lockdep code caught
free_huge_page being called from irq context.  By this time, another
lock (hugetlb subpool) was also taken in the free_huge_page path.  In
addition, hugetlb cgroup code had been added which could hold
hugetlb_lock for a considerable period of time.  Because of this, commit
c77c0a8ac4c5 ("mm/hugetlb: defer freeing of huge pages if in non-task
context") was added to address the issue of free_huge_page being called
from irq context.  That commit hands off free_huge_page processing to a
workqueue if !in_task.

The !in_task check handles the case of being called from irq context.
However, it does not take into account the case when called with irqs
disabled as in [1].

To complicate matters, functionality has been added to hugetlb
such that free_huge_page may block/sleep in certain situations.  The
hugetlb_lock is of course dropped before potentially blocking.

One way to handle all calling contexts is to have free_huge_page always
send pages to the workqueue for processing.  This idea was briefly
discussed here [3], but has some undesirable side effects.

Ideally, the hugetlb_lock should have been irq safe from the beginning
and any code added to the free_huge_page path should have taken this
into account.  However, this has not happened.  The code today does have
the ability to hand off requests to a workqueue.  It does this for calls
from irq context.  Changing the check in the code from !in_task to
in_atomic would handle the situations when called with irqs disabled.
However, it does not handle the case when called with a spinlock held.
Detecting that case is needed because the code could block/sleep.

Select PREEMPT_COUNT if HUGETLB_PAGE is enabled so that in_atomic can be
used to detect all atomic contexts where sleeping is not possible.

[1] https://lore.kernel.org/linux-mm/000000000000f1c03b05bc43aadc@google.com/
[2] https://lore.kernel.org/linux-mm/YEjji9oAwHuZaZEt@dhcp22.suse.cz/
[3] https://lore.kernel.org/linux-mm/YDzaAWK41K4gD35V@dhcp22.suse.cz/

Suggested-by: Michal Hocko <mhocko@suse.com>
Signed-off-by: Mike Kravetz <mike.kravetz@oracle.com>
---
 fs/Kconfig   |  1 +
 mm/hugetlb.c | 10 +++++-----
 2 files changed, 6 insertions(+), 5 deletions(-)

diff --git a/fs/Kconfig b/fs/Kconfig
index 462253ae483a..403d7a7a619a 100644
--- a/fs/Kconfig
+++ b/fs/Kconfig
@@ -235,6 +235,7 @@ config HUGETLBFS
 
 config HUGETLB_PAGE
 	def_bool HUGETLBFS
+	select PREEMPT_COUNT
 
 config MEMFD_CREATE
 	def_bool TMPFS || HUGETLBFS
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 33b0d8778551..5407e77ca803 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -1437,9 +1437,9 @@ static void __free_huge_page(struct page *page)
 }
 
 /*
- * As free_huge_page() can be called from a non-task context, we have
- * to defer the actual freeing in a workqueue to prevent potential
- * hugetlb_lock deadlock.
+ * If free_huge_page() is called from an atomic context, we have to defer
+ * the actual freeing in a workqueue.  This is to prevent possible sleeping
+ * while in atomic and potential hugetlb_lock deadlock.
  *
  * free_hpage_workfn() locklessly retrieves the linked list of pages to
  * be freed and frees them one-by-one. As the page->mapping pointer is
@@ -1467,9 +1467,9 @@ static DECLARE_WORK(free_hpage_work, free_hpage_workfn);
 void free_huge_page(struct page *page)
 {
 	/*
-	 * Defer freeing if in non-task context to avoid hugetlb_lock deadlock.
+	 * Defer freeing if in atomic context where sleeping is not allowed.
 	 */
-	if (!in_task()) {
+	if (in_atomic()) {
 		/*
 		 * Only call schedule_work() if hpage_freelist is previously
 		 * empty. Otherwise, schedule_work() had been called but the
-- 
2.29.2


