From: Andrew Morton <akpm@linux-foundation.org>
To: akpm@linux-foundation.org, almasrymina@google.com,
	aneesh.kumar@linux.ibm.com, david@redhat.com, guro@fb.com,
	hdanton@sina.com, iamjoonsoo.kim@lge.com, linmiaohe@huawei.com,
	linux-mm@kvack.org, longman@redhat.com, mhocko@suse.com,
	mike.kravetz@oracle.com, mm-commits@vger.kernel.org,
	naoya.horiguchi@nec.com, osalvador@suse.de, peterx@redhat.com,
	peterz@infradead.org, rientjes@google.com, shakeelb@google.com,
	song.bao.hua@hisilicon.com, songmuchun@bytedance.com,
	torvalds@linux-foundation.org, will@kernel.org,
	willy@infradead.org
Subject: [patch 045/143] hugetlb: make free_huge_page irq safe
Date: Tue, 04 May 2021 18:35:07 -0700
Message-ID: <20210505013507.148RdHj5B%akpm@linux-foundation.org>
In-Reply-To: <20210504183219.a3cc46aee4013d77402276c5@linux-foundation.org>

From: Mike Kravetz <mike.kravetz@oracle.com>
Subject: hugetlb: make free_huge_page irq safe

Commit c77c0a8ac4c5 ("mm/hugetlb: defer freeing of huge pages if in
non-task context") was added to address the issue of free_huge_page()
being called from irq context.  That commit hands off free_huge_page()
processing to a workqueue if !in_task().  However, this does not cover
all cases, as pointed out by the 0day bot lockdep report [1].

:  Possible interrupt unsafe locking scenario:
:
:        CPU0                    CPU1
:        ----                    ----
:   lock(hugetlb_lock);
:                                local_irq_disable();
:                                lock(slock-AF_INET);
:                                lock(hugetlb_lock);
:   <Interrupt>
:     lock(slock-AF_INET);

Shakeel later explained that this is very likely the TCP TX zerocopy
from hugetlb pages scenario, where the networking code drops the last
reference to a hugetlb page while IRQs are disabled.  The hugetlb
freeing path does not disable IRQs while holding hugetlb_lock, so this
lock dependency chain can lead to a deadlock.
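
To make the failure mode concrete, here is a minimal sketch of the
pattern lockdep is warning about.  It is illustrative only: my_lock is
a hypothetical stand-in for hugetlb_lock, and neither function is code
from the patch.

	#include <linux/spinlock.h>

	static DEFINE_SPINLOCK(my_lock);

	/* Task context, e.g. pool resizing: IRQs stay enabled. */
	static void task_side(void)
	{
		spin_lock(&my_lock);
		/*
		 * If an interrupt arrives here and its handler needs
		 * my_lock (directly, or through a chain of locks such
		 * as slock-AF_INET), the handler spins forever: the
		 * lock owner is the very task it interrupted on this
		 * CPU, which cannot run again to release the lock.
		 */
		spin_unlock(&my_lock);
	}

	/* IRQ context, e.g. TX completion dropping the last page
	 * reference and reaching free_huge_page() -> my_lock. */
	static void irq_side(void)
	{
		spin_lock(&my_lock);
		/* ... free the page ... */
		spin_unlock(&my_lock);
	}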

This commit addresses the issue by doing the following:
- Make hugetlb_lock irq safe.  This is mostly a simple matter of
  changing spin_*lock calls to spin_*lock_irq* calls; a sketch of the
  pattern is shown below.
- Make the subpool lock irq safe in the same manner.
- Revert the !in_task() check and workqueue handoff.

[1] https://lore.kernel.org/linux-mm/000000000000f1c03b05bc43aadc@google.com/
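
The conversion follows the standard kernel pattern: use spin_lock_irq()
and spin_unlock_irq() where the caller is known to run with IRQs
enabled, and spin_lock_irqsave()/spin_unlock_irqrestore() where the
lock may be taken with IRQs already disabled, as in free_huge_page().
A sketch of the two variants (demo_lock and both functions are
illustrative, not taken from the patch):

	#include <linux/spinlock.h>

	static DEFINE_SPINLOCK(demo_lock);	/* stand-in for hugetlb_lock */

	/* Caller known to be in task context with IRQs enabled. */
	static void adjust_pool(void)
	{
		spin_lock_irq(&demo_lock);	/* disables IRQs outright */
		/* ... modify pool counters ... */
		spin_unlock_irq(&demo_lock);	/* re-enables IRQs */
	}

	/*
	 * Caller may arrive with IRQs already disabled (e.g. an IRQ
	 * handler dropping the last page reference), so the previous
	 * IRQ state must be saved and restored, not blindly enabled.
	 */
	static void free_path(void)
	{
		unsigned long flags;

		spin_lock_irqsave(&demo_lock, flags);
		/* ... dequeue or free the page ... */
		spin_unlock_irqrestore(&demo_lock, flags);
	}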

Link: https://lkml.kernel.org/r/20210409205254.242291-8-mike.kravetz@oracle.com
Signed-off-by: Mike Kravetz <mike.kravetz@oracle.com>
Acked-by: Michal Hocko <mhocko@suse.com>
Reviewed-by: Muchun Song <songmuchun@bytedance.com>
Reviewed-by: Oscar Salvador <osalvador@suse.de>
Cc: "Aneesh Kumar K . V" <aneesh.kumar@linux.ibm.com>
Cc: Barry Song <song.bao.hua@hisilicon.com>
Cc: David Hildenbrand <david@redhat.com>
Cc: David Rientjes <rientjes@google.com>
Cc: Hillf Danton <hdanton@sina.com>
Cc: HORIGUCHI NAOYA <naoya.horiguchi@nec.com>
Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Cc: Matthew Wilcox <willy@infradead.org>
Cc: Miaohe Lin <linmiaohe@huawei.com>
Cc: Mina Almasry <almasrymina@google.com>
Cc: Peter Xu <peterx@redhat.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Roman Gushchin <guro@fb.com>
Cc: Shakeel Butt <shakeelb@google.com>
Cc: Waiman Long <longman@redhat.com>
Cc: Will Deacon <will@kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
---

 mm/hugetlb.c        |  169 +++++++++++++++---------------------------
 mm/hugetlb_cgroup.c |    8 -
 2 files changed, 67 insertions(+), 110 deletions(-)

--- a/mm/hugetlb.c~hugetlb-make-free_huge_page-irq-safe
+++ a/mm/hugetlb.c
@@ -94,9 +94,10 @@ static inline bool subpool_is_free(struc
 	return true;
 }
 
-static inline void unlock_or_release_subpool(struct hugepage_subpool *spool)
+static inline void unlock_or_release_subpool(struct hugepage_subpool *spool,
+						unsigned long irq_flags)
 {
-	spin_unlock(&spool->lock);
+	spin_unlock_irqrestore(&spool->lock, irq_flags);
 
 	/* If no pages are used, and no other handles to the subpool
 	 * remain, give up any reservations based on minimum size and
@@ -135,10 +136,12 @@ struct hugepage_subpool *hugepage_new_su
 
 void hugepage_put_subpool(struct hugepage_subpool *spool)
 {
-	spin_lock(&spool->lock);
+	unsigned long flags;
+
+	spin_lock_irqsave(&spool->lock, flags);
 	BUG_ON(!spool->count);
 	spool->count--;
-	unlock_or_release_subpool(spool);
+	unlock_or_release_subpool(spool, flags);
 }
 
 /*
@@ -157,7 +160,7 @@ static long hugepage_subpool_get_pages(s
 	if (!spool)
 		return ret;
 
-	spin_lock(&spool->lock);
+	spin_lock_irq(&spool->lock);
 
 	if (spool->max_hpages != -1) {		/* maximum size accounting */
 		if ((spool->used_hpages + delta) <= spool->max_hpages)
@@ -184,7 +187,7 @@ static long hugepage_subpool_get_pages(s
 	}
 
 unlock_ret:
-	spin_unlock(&spool->lock);
+	spin_unlock_irq(&spool->lock);
 	return ret;
 }
 
@@ -198,11 +201,12 @@ static long hugepage_subpool_put_pages(s
 				       long delta)
 {
 	long ret = delta;
+	unsigned long flags;
 
 	if (!spool)
 		return delta;
 
-	spin_lock(&spool->lock);
+	spin_lock_irqsave(&spool->lock, flags);
 
 	if (spool->max_hpages != -1)		/* maximum size accounting */
 		spool->used_hpages -= delta;
@@ -223,7 +227,7 @@ static long hugepage_subpool_put_pages(s
 	 * If hugetlbfs_put_super couldn't free spool due to an outstanding
 	 * quota reference, free it now.
 	 */
-	unlock_or_release_subpool(spool);
+	unlock_or_release_subpool(spool, flags);
 
 	return ret;
 }
@@ -1412,7 +1416,7 @@ struct hstate *size_to_hstate(unsigned l
 	return NULL;
 }
 
-static void __free_huge_page(struct page *page)
+void free_huge_page(struct page *page)
 {
 	/*
 	 * Can't pass hstate in here because it is called from the
@@ -1422,6 +1426,7 @@ static void __free_huge_page(struct page
 	int nid = page_to_nid(page);
 	struct hugepage_subpool *spool = hugetlb_page_subpool(page);
 	bool restore_reserve;
+	unsigned long flags;
 
 	VM_BUG_ON_PAGE(page_count(page), page);
 	VM_BUG_ON_PAGE(page_mapcount(page), page);
@@ -1450,7 +1455,7 @@ static void __free_huge_page(struct page
 			restore_reserve = true;
 	}
 
-	spin_lock(&hugetlb_lock);
+	spin_lock_irqsave(&hugetlb_lock, flags);
 	ClearHPageMigratable(page);
 	hugetlb_cgroup_uncharge_page(hstate_index(h),
 				     pages_per_huge_page(h), page);
@@ -1461,66 +1466,18 @@ static void __free_huge_page(struct page
 
 	if (HPageTemporary(page)) {
 		remove_hugetlb_page(h, page, false);
-		spin_unlock(&hugetlb_lock);
+		spin_unlock_irqrestore(&hugetlb_lock, flags);
 		update_and_free_page(h, page);
 	} else if (h->surplus_huge_pages_node[nid]) {
 		/* remove the page from active list */
 		remove_hugetlb_page(h, page, true);
-		spin_unlock(&hugetlb_lock);
+		spin_unlock_irqrestore(&hugetlb_lock, flags);
 		update_and_free_page(h, page);
 	} else {
 		arch_clear_hugepage_flags(page);
 		enqueue_huge_page(h, page);
-		spin_unlock(&hugetlb_lock);
-	}
-}
-
-/*
- * As free_huge_page() can be called from a non-task context, we have
- * to defer the actual freeing in a workqueue to prevent potential
- * hugetlb_lock deadlock.
- *
- * free_hpage_workfn() locklessly retrieves the linked list of pages to
- * be freed and frees them one-by-one. As the page->mapping pointer is
- * going to be cleared in __free_huge_page() anyway, it is reused as the
- * llist_node structure of a lockless linked list of huge pages to be freed.
- */
-static LLIST_HEAD(hpage_freelist);
-
-static void free_hpage_workfn(struct work_struct *work)
-{
-	struct llist_node *node;
-	struct page *page;
-
-	node = llist_del_all(&hpage_freelist);
-
-	while (node) {
-		page = container_of((struct address_space **)node,
-				     struct page, mapping);
-		node = node->next;
-		__free_huge_page(page);
-	}
-}
-static DECLARE_WORK(free_hpage_work, free_hpage_workfn);
-
-void free_huge_page(struct page *page)
-{
-	/*
-	 * Defer freeing if in non-task context to avoid hugetlb_lock deadlock.
-	 */
-	if (!in_task()) {
-		/*
-		 * Only call schedule_work() if hpage_freelist is previously
-		 * empty. Otherwise, schedule_work() had been called but the
-		 * workfn hasn't retrieved the list yet.
-		 */
-		if (llist_add((struct llist_node *)&page->mapping,
-			      &hpage_freelist))
-			schedule_work(&free_hpage_work);
-		return;
+		spin_unlock_irqrestore(&hugetlb_lock, flags);
 	}
-
-	__free_huge_page(page);
 }
 
 static void prep_new_huge_page(struct hstate *h, struct page *page, int nid)
@@ -1530,11 +1487,11 @@ static void prep_new_huge_page(struct hs
 	hugetlb_set_page_subpool(page, NULL);
 	set_hugetlb_cgroup(page, NULL);
 	set_hugetlb_cgroup_rsvd(page, NULL);
-	spin_lock(&hugetlb_lock);
+	spin_lock_irq(&hugetlb_lock);
 	h->nr_huge_pages++;
 	h->nr_huge_pages_node[nid]++;
 	ClearHPageFreed(page);
-	spin_unlock(&hugetlb_lock);
+	spin_unlock_irq(&hugetlb_lock);
 }
 
 static void prep_compound_gigantic_page(struct page *page, unsigned int order)
@@ -1780,7 +1737,7 @@ retry:
 	if (!PageHuge(page))
 		return 0;
 
-	spin_lock(&hugetlb_lock);
+	spin_lock_irq(&hugetlb_lock);
 	if (!PageHuge(page)) {
 		rc = 0;
 		goto out;
@@ -1797,7 +1754,7 @@ retry:
 		 * when it is dissolved.
 		 */
 		if (unlikely(!HPageFreed(head))) {
-			spin_unlock(&hugetlb_lock);
+			spin_unlock_irq(&hugetlb_lock);
 			cond_resched();
 
 			/*
@@ -1821,12 +1778,12 @@ retry:
 		}
 		remove_hugetlb_page(h, page, false);
 		h->max_huge_pages--;
-		spin_unlock(&hugetlb_lock);
+		spin_unlock_irq(&hugetlb_lock);
 		update_and_free_page(h, head);
 		return 0;
 	}
 out:
-	spin_unlock(&hugetlb_lock);
+	spin_unlock_irq(&hugetlb_lock);
 	return rc;
 }
 
@@ -1868,16 +1825,16 @@ static struct page *alloc_surplus_huge_p
 	if (hstate_is_gigantic(h))
 		return NULL;
 
-	spin_lock(&hugetlb_lock);
+	spin_lock_irq(&hugetlb_lock);
 	if (h->surplus_huge_pages >= h->nr_overcommit_huge_pages)
 		goto out_unlock;
-	spin_unlock(&hugetlb_lock);
+	spin_unlock_irq(&hugetlb_lock);
 
 	page = alloc_fresh_huge_page(h, gfp_mask, nid, nmask, NULL);
 	if (!page)
 		return NULL;
 
-	spin_lock(&hugetlb_lock);
+	spin_lock_irq(&hugetlb_lock);
 	/*
 	 * We could have raced with the pool size change.
 	 * Double check that and simply deallocate the new page
@@ -1887,7 +1844,7 @@ static struct page *alloc_surplus_huge_p
 	 */
 	if (h->surplus_huge_pages >= h->nr_overcommit_huge_pages) {
 		SetHPageTemporary(page);
-		spin_unlock(&hugetlb_lock);
+		spin_unlock_irq(&hugetlb_lock);
 		put_page(page);
 		return NULL;
 	} else {
@@ -1896,7 +1853,7 @@ static struct page *alloc_surplus_huge_p
 	}
 
 out_unlock:
-	spin_unlock(&hugetlb_lock);
+	spin_unlock_irq(&hugetlb_lock);
 
 	return page;
 }
@@ -1946,17 +1903,17 @@ struct page *alloc_buddy_huge_page_with_
 struct page *alloc_huge_page_nodemask(struct hstate *h, int preferred_nid,
 		nodemask_t *nmask, gfp_t gfp_mask)
 {
-	spin_lock(&hugetlb_lock);
+	spin_lock_irq(&hugetlb_lock);
 	if (h->free_huge_pages - h->resv_huge_pages > 0) {
 		struct page *page;
 
 		page = dequeue_huge_page_nodemask(h, gfp_mask, preferred_nid, nmask);
 		if (page) {
-			spin_unlock(&hugetlb_lock);
+			spin_unlock_irq(&hugetlb_lock);
 			return page;
 		}
 	}
-	spin_unlock(&hugetlb_lock);
+	spin_unlock_irq(&hugetlb_lock);
 
 	return alloc_migrate_huge_page(h, gfp_mask, preferred_nid, nmask);
 }
@@ -2004,7 +1961,7 @@ static int gather_surplus_pages(struct h
 
 	ret = -ENOMEM;
 retry:
-	spin_unlock(&hugetlb_lock);
+	spin_unlock_irq(&hugetlb_lock);
 	for (i = 0; i < needed; i++) {
 		page = alloc_surplus_huge_page(h, htlb_alloc_mask(h),
 				NUMA_NO_NODE, NULL);
@@ -2021,7 +1978,7 @@ retry:
 	 * After retaking hugetlb_lock, we need to recalculate 'needed'
 	 * because either resv_huge_pages or free_huge_pages may have changed.
 	 */
-	spin_lock(&hugetlb_lock);
+	spin_lock_irq(&hugetlb_lock);
 	needed = (h->resv_huge_pages + delta) -
 			(h->free_huge_pages + allocated);
 	if (needed > 0) {
@@ -2061,12 +2018,12 @@ retry:
 		enqueue_huge_page(h, page);
 	}
 free:
-	spin_unlock(&hugetlb_lock);
+	spin_unlock_irq(&hugetlb_lock);
 
 	/* Free unnecessary surplus pages to the buddy allocator */
 	list_for_each_entry_safe(page, tmp, &surplus_list, lru)
 		put_page(page);
-	spin_lock(&hugetlb_lock);
+	spin_lock_irq(&hugetlb_lock);
 
 	return ret;
 }
@@ -2116,9 +2073,9 @@ static void return_unused_surplus_pages(
 	}
 
 out:
-	spin_unlock(&hugetlb_lock);
+	spin_unlock_irq(&hugetlb_lock);
 	update_and_free_pages_bulk(h, &page_list);
-	spin_lock(&hugetlb_lock);
+	spin_lock_irq(&hugetlb_lock);
 }
 
 
@@ -2352,7 +2309,7 @@ struct page *alloc_huge_page(struct vm_a
 	if (ret)
 		goto out_uncharge_cgroup_reservation;
 
-	spin_lock(&hugetlb_lock);
+	spin_lock_irq(&hugetlb_lock);
 	/*
 	 * glb_chg is passed to indicate whether or not a page must be taken
 	 * from the global free pool (global change).  gbl_chg == 0 indicates
@@ -2360,7 +2317,7 @@ struct page *alloc_huge_page(struct vm_a
 	 */
 	page = dequeue_huge_page_vma(h, vma, addr, avoid_reserve, gbl_chg);
 	if (!page) {
-		spin_unlock(&hugetlb_lock);
+		spin_unlock_irq(&hugetlb_lock);
 		page = alloc_buddy_huge_page_with_mpol(h, vma, addr);
 		if (!page)
 			goto out_uncharge_cgroup;
@@ -2368,7 +2325,7 @@ struct page *alloc_huge_page(struct vm_a
 			SetHPageRestoreReserve(page);
 			h->resv_huge_pages--;
 		}
-		spin_lock(&hugetlb_lock);
+		spin_lock_irq(&hugetlb_lock);
 		list_add(&page->lru, &h->hugepage_activelist);
 		/* Fall through */
 	}
@@ -2381,7 +2338,7 @@ struct page *alloc_huge_page(struct vm_a
 						  h_cg, page);
 	}
 
-	spin_unlock(&hugetlb_lock);
+	spin_unlock_irq(&hugetlb_lock);
 
 	hugetlb_set_page_subpool(page, spool);
 
@@ -2593,9 +2550,9 @@ static void try_to_free_low(struct hstat
 	}
 
 out:
-	spin_unlock(&hugetlb_lock);
+	spin_unlock_irq(&hugetlb_lock);
 	update_and_free_pages_bulk(h, &page_list);
-	spin_lock(&hugetlb_lock);
+	spin_lock_irq(&hugetlb_lock);
 }
 #else
 static inline void try_to_free_low(struct hstate *h, unsigned long count,
@@ -2660,7 +2617,7 @@ static int set_max_huge_pages(struct hst
 	 * pages in hstate via the proc/sysfs interfaces.
 	 */
 	mutex_lock(&h->resize_lock);
-	spin_lock(&hugetlb_lock);
+	spin_lock_irq(&hugetlb_lock);
 
 	/*
 	 * Check for a node specific request.
@@ -2691,7 +2648,7 @@ static int set_max_huge_pages(struct hst
 	 */
 	if (hstate_is_gigantic(h) && !IS_ENABLED(CONFIG_CONTIG_ALLOC)) {
 		if (count > persistent_huge_pages(h)) {
-			spin_unlock(&hugetlb_lock);
+			spin_unlock_irq(&hugetlb_lock);
 			mutex_unlock(&h->resize_lock);
 			NODEMASK_FREE(node_alloc_noretry);
 			return -EINVAL;
@@ -2721,14 +2678,14 @@ static int set_max_huge_pages(struct hst
 		 * page, free_huge_page will handle it by freeing the page
 		 * and reducing the surplus.
 		 */
-		spin_unlock(&hugetlb_lock);
+		spin_unlock_irq(&hugetlb_lock);
 
 		/* yield cpu to avoid soft lockup */
 		cond_resched();
 
 		ret = alloc_pool_huge_page(h, nodes_allowed,
 						node_alloc_noretry);
-		spin_lock(&hugetlb_lock);
+		spin_lock_irq(&hugetlb_lock);
 		if (!ret)
 			goto out;
 
@@ -2767,9 +2724,9 @@ static int set_max_huge_pages(struct hst
 		list_add(&page->lru, &page_list);
 	}
 	/* free the pages after dropping lock */
-	spin_unlock(&hugetlb_lock);
+	spin_unlock_irq(&hugetlb_lock);
 	update_and_free_pages_bulk(h, &page_list);
-	spin_lock(&hugetlb_lock);
+	spin_lock_irq(&hugetlb_lock);
 
 	while (count < persistent_huge_pages(h)) {
 		if (!adjust_pool_surplus(h, nodes_allowed, 1))
@@ -2777,7 +2734,7 @@ static int set_max_huge_pages(struct hst
 	}
 out:
 	h->max_huge_pages = persistent_huge_pages(h);
-	spin_unlock(&hugetlb_lock);
+	spin_unlock_irq(&hugetlb_lock);
 	mutex_unlock(&h->resize_lock);
 
 	NODEMASK_FREE(node_alloc_noretry);
@@ -2933,9 +2890,9 @@ static ssize_t nr_overcommit_hugepages_s
 	if (err)
 		return err;
 
-	spin_lock(&hugetlb_lock);
+	spin_lock_irq(&hugetlb_lock);
 	h->nr_overcommit_huge_pages = input;
-	spin_unlock(&hugetlb_lock);
+	spin_unlock_irq(&hugetlb_lock);
 
 	return count;
 }
@@ -3522,9 +3479,9 @@ int hugetlb_overcommit_handler(struct ct
 		goto out;
 
 	if (write) {
-		spin_lock(&hugetlb_lock);
+		spin_lock_irq(&hugetlb_lock);
 		h->nr_overcommit_huge_pages = tmp;
-		spin_unlock(&hugetlb_lock);
+		spin_unlock_irq(&hugetlb_lock);
 	}
 out:
 	return ret;
@@ -3620,7 +3577,7 @@ static int hugetlb_acct_memory(struct hs
 	if (!delta)
 		return 0;
 
-	spin_lock(&hugetlb_lock);
+	spin_lock_irq(&hugetlb_lock);
 	/*
 	 * When cpuset is configured, it breaks the strict hugetlb page
 	 * reservation as the accounting is done on a global variable. Such
@@ -3659,7 +3616,7 @@ static int hugetlb_acct_memory(struct hs
 		return_unused_surplus_pages(h, (unsigned long) -delta);
 
 out:
-	spin_unlock(&hugetlb_lock);
+	spin_unlock_irq(&hugetlb_lock);
 	return ret;
 }
 
@@ -5687,7 +5644,7 @@ bool isolate_huge_page(struct page *page
 {
 	bool ret = true;
 
-	spin_lock(&hugetlb_lock);
+	spin_lock_irq(&hugetlb_lock);
 	if (!PageHeadHuge(page) ||
 	    !HPageMigratable(page) ||
 	    !get_page_unless_zero(page)) {
@@ -5697,16 +5654,16 @@ bool isolate_huge_page(struct page *page
 	ClearHPageMigratable(page);
 	list_move_tail(&page->lru, list);
 unlock:
-	spin_unlock(&hugetlb_lock);
+	spin_unlock_irq(&hugetlb_lock);
 	return ret;
 }
 
 void putback_active_hugepage(struct page *page)
 {
-	spin_lock(&hugetlb_lock);
+	spin_lock_irq(&hugetlb_lock);
 	SetHPageMigratable(page);
 	list_move_tail(&page->lru, &(page_hstate(page))->hugepage_activelist);
-	spin_unlock(&hugetlb_lock);
+	spin_unlock_irq(&hugetlb_lock);
 	put_page(page);
 }
 
@@ -5740,12 +5697,12 @@ void move_hugetlb_state(struct page *old
 		 */
 		if (new_nid == old_nid)
 			return;
-		spin_lock(&hugetlb_lock);
+		spin_lock_irq(&hugetlb_lock);
 		if (h->surplus_huge_pages_node[old_nid]) {
 			h->surplus_huge_pages_node[old_nid]--;
 			h->surplus_huge_pages_node[new_nid]++;
 		}
-		spin_unlock(&hugetlb_lock);
+		spin_unlock_irq(&hugetlb_lock);
 	}
 }
 
--- a/mm/hugetlb_cgroup.c~hugetlb-make-free_huge_page-irq-safe
+++ a/mm/hugetlb_cgroup.c
@@ -204,11 +204,11 @@ static void hugetlb_cgroup_css_offline(s
 	do {
 		idx = 0;
 		for_each_hstate(h) {
-			spin_lock(&hugetlb_lock);
+			spin_lock_irq(&hugetlb_lock);
 			list_for_each_entry(page, &h->hugepage_activelist, lru)
 				hugetlb_cgroup_move_parent(idx, h_cg, page);
 
-			spin_unlock(&hugetlb_lock);
+			spin_unlock_irq(&hugetlb_lock);
 			idx++;
 		}
 		cond_resched();
@@ -784,7 +784,7 @@ void hugetlb_cgroup_migrate(struct page
 	if (hugetlb_cgroup_disabled())
 		return;
 
-	spin_lock(&hugetlb_lock);
+	spin_lock_irq(&hugetlb_lock);
 	h_cg = hugetlb_cgroup_from_page(oldhpage);
 	h_cg_rsvd = hugetlb_cgroup_from_page_rsvd(oldhpage);
 	set_hugetlb_cgroup(oldhpage, NULL);
@@ -794,7 +794,7 @@ void hugetlb_cgroup_migrate(struct page
 	set_hugetlb_cgroup(newhpage, h_cg);
 	set_hugetlb_cgroup_rsvd(newhpage, h_cg_rsvd);
 	list_move(&newhpage->lru, &h->hugepage_activelist);
-	spin_unlock(&hugetlb_lock);
+	spin_unlock_irq(&hugetlb_lock);
 	return;
 }
 
_
