* incoming
@ 2021-02-05  2:31 Andrew Morton
  2021-02-05  2:32 ` [patch 01/18] mm: hugetlbfs: fix cannot migrate the fallocated HugeTLB page Andrew Morton
                   ` (17 more replies)
  0 siblings, 18 replies; 19+ messages in thread
From: Andrew Morton @ 2021-02-05  2:31 UTC (permalink / raw)
  To: Linus Torvalds; +Cc: mm-commits, linux-mm

18 patches, based on 5c279c4cf206e03995e04fd3404fa95ffd243a97.

Subsystems affected by this patch series:

  mm/hugetlb
  mm/compaction
  mm/vmalloc
  gcov
  mm/shmem
  mm/memblock
  mailmap
  mm/pagecache
  mm/kasan
  ubsan
  mm/hugetlb
  MAINTAINERS

Subsystem: mm/hugetlb

    Muchun Song <songmuchun@bytedance.com>:
      mm: hugetlbfs: fix cannot migrate the fallocated HugeTLB page
      mm: hugetlb: fix a race between freeing and dissolving the page
      mm: hugetlb: fix a race between isolating and freeing page
      mm: hugetlb: remove VM_BUG_ON_PAGE from page_huge_active
      mm: migrate: do not migrate HugeTLB page whose refcount is one

Subsystem: mm/compaction

    Rokudo Yan <wu-yan@tcl.com>:
      mm, compaction: move high_pfn to the for loop scope

Subsystem: mm/vmalloc

    Rick Edgecombe <rick.p.edgecombe@intel.com>:
      mm/vmalloc: separate put pages and flush VM flags

Subsystem: gcov

    Johannes Berg <johannes.berg@intel.com>:
      init/gcov: allow CONFIG_CONSTRUCTORS on UML to fix module gcov

Subsystem: mm/shmem

    Hugh Dickins <hughd@google.com>:
      mm: thp: fix MADV_REMOVE deadlock on shmem THP

Subsystem: mm/memblock

    Roman Gushchin <guro@fb.com>:
      memblock: do not start bottom-up allocations with kernel_end

Subsystem: mailmap

    Viresh Kumar <viresh.kumar@linaro.org>:
      mailmap: fix name/email for Viresh Kumar

    Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org>:
      mailmap: add entries for Manivannan Sadhasivam

Subsystem: mm/pagecache

    Waiman Long <longman@redhat.com>:
      mm/filemap: add missing mem_cgroup_uncharge() to __add_to_page_cache_locked()

Subsystem: mm/kasan

    Vincenzo Frascino <vincenzo.frascino@arm.com>:
    Patch series "kasan: Fix metadata detection for KASAN_HW_TAGS", v5:
      kasan: add explicit preconditions to kasan_report()
      kasan: make addr_has_metadata() return true for valid addresses

Subsystem: ubsan

    Nathan Chancellor <nathan@kernel.org>:
      ubsan: implement __ubsan_handle_alignment_assumption

Subsystem: mm/hugetlb

    Muchun Song <songmuchun@bytedance.com>:
      mm: hugetlb: fix missing put_page in gather_surplus_pages()

Subsystem: MAINTAINERS

    Nathan Chancellor <nathan@kernel.org>:
      MAINTAINERS/.mailmap: use my @kernel.org address

 .mailmap                |    5 ++++
 MAINTAINERS             |    2 -
 fs/hugetlbfs/inode.c    |    3 +-
 include/linux/hugetlb.h |    2 +
 include/linux/kasan.h   |    7 ++++++
 include/linux/vmalloc.h |    9 +-------
 init/Kconfig            |    1 
 init/main.c             |    8 ++++++-
 kernel/gcov/Kconfig     |    2 -
 lib/ubsan.c             |   31 ++++++++++++++++++++++++++++
 lib/ubsan.h             |    6 +++++
 mm/compaction.c         |    3 +-
 mm/filemap.c            |    4 +++
 mm/huge_memory.c        |   37 ++++++++++++++++++++-------------
 mm/hugetlb.c            |   53 ++++++++++++++++++++++++++++++++++++++++++------
 mm/kasan/kasan.h        |    2 -
 mm/memblock.c           |   49 +++++---------------------------------------
 mm/migrate.c            |    6 +++++
 18 files changed, 153 insertions(+), 77 deletions(-)



* [patch 01/18] mm: hugetlbfs: fix cannot migrate the fallocated HugeTLB page
  2021-02-05  2:31 incoming Andrew Morton
@ 2021-02-05  2:32 ` Andrew Morton
  2021-02-05  2:32 ` [patch 02/18] mm: hugetlb: fix a race between freeing and dissolving the page Andrew Morton
                   ` (16 subsequent siblings)
  17 siblings, 0 replies; 19+ messages in thread
From: Andrew Morton @ 2021-02-05  2:32 UTC (permalink / raw)
  To: akpm, david, linux-mm, mhocko, mike.kravetz, mm-commits,
	osalvador, shy828301, songmuchun, stable, torvalds

From: Muchun Song <songmuchun@bytedance.com>
Subject: mm: hugetlbfs: fix cannot migrate the fallocated HugeTLB page

If a new hugetlb page is allocated during fallocate() it will not be marked
as active (via set_page_huge_active()), which will result in a later
isolate_huge_page() failure when the page migration code would like to
move that page.  Such a failure would be unexpected and wrong.

Only export set_page_huge_active(); leave clear_page_huge_active() static,
because it has no external users.
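
For reference, a condensed sketch (not part of this patch) of why an
inactive page cannot be migrated: isolate_huge_page() gives up on any
page for which page_huge_active() is false, roughly:

	spin_lock(&hugetlb_lock);
	/* a page that was never marked active fails this check, so
	 * migration of the fallocated page is aborted here */
	if (!page_huge_active(page) || !get_page_unless_zero(page)) {
		ret = false;
		goto unlock;
	}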

Link: https://lkml.kernel.org/r/20210115124942.46403-3-songmuchun@bytedance.com
Fixes: 70c3547e36f5 ("hugetlbfs: add hugetlbfs_fallocate()")
Signed-off-by: Muchun Song <songmuchun@bytedance.com>
Acked-by: Michal Hocko <mhocko@suse.com>
Reviewed-by: Mike Kravetz <mike.kravetz@oracle.com>
Reviewed-by: Oscar Salvador <osalvador@suse.de>
Cc: David Hildenbrand <david@redhat.com>
Cc: Yang Shi <shy828301@gmail.com>
Cc: <stable@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
---

 fs/hugetlbfs/inode.c    |    3 ++-
 include/linux/hugetlb.h |    2 ++
 mm/hugetlb.c            |    2 +-
 3 files changed, 5 insertions(+), 2 deletions(-)

--- a/fs/hugetlbfs/inode.c~mm-hugetlbfs-fix-cannot-migrate-the-fallocated-hugetlb-page
+++ a/fs/hugetlbfs/inode.c
@@ -735,9 +735,10 @@ static long hugetlbfs_fallocate(struct f
 
 		mutex_unlock(&hugetlb_fault_mutex_table[hash]);
 
+		set_page_huge_active(page);
 		/*
 		 * unlock_page because locked by add_to_page_cache()
-		 * page_put due to reference from alloc_huge_page()
+		 * put_page() due to reference from alloc_huge_page()
 		 */
 		unlock_page(page);
 		put_page(page);
--- a/include/linux/hugetlb.h~mm-hugetlbfs-fix-cannot-migrate-the-fallocated-hugetlb-page
+++ a/include/linux/hugetlb.h
@@ -770,6 +770,8 @@ static inline void huge_ptep_modify_prot
 }
 #endif
 
+void set_page_huge_active(struct page *page);
+
 #else	/* CONFIG_HUGETLB_PAGE */
 struct hstate {};
 
--- a/mm/hugetlb.c~mm-hugetlbfs-fix-cannot-migrate-the-fallocated-hugetlb-page
+++ a/mm/hugetlb.c
@@ -1349,7 +1349,7 @@ bool page_huge_active(struct page *page)
 }
 
 /* never called for tail page */
-static void set_page_huge_active(struct page *page)
+void set_page_huge_active(struct page *page)
 {
 	VM_BUG_ON_PAGE(!PageHeadHuge(page), page);
 	SetPagePrivate(&page[1]);
_


* [patch 02/18] mm: hugetlb: fix a race between freeing and dissolving the page
  2021-02-05  2:31 incoming Andrew Morton
  2021-02-05  2:32 ` [patch 01/18] mm: hugetlbfs: fix cannot migrate the fallocated HugeTLB page Andrew Morton
@ 2021-02-05  2:32 ` Andrew Morton
  2021-02-05  2:32 ` [patch 03/18] mm: hugetlb: fix a race between isolating and freeing page Andrew Morton
                   ` (15 subsequent siblings)
  17 siblings, 0 replies; 19+ messages in thread
From: Andrew Morton @ 2021-02-05  2:32 UTC (permalink / raw)
  To: akpm, david, linux-mm, mhocko, mike.kravetz, mm-commits,
	osalvador, shy828301, songmuchun, stable, torvalds

From: Muchun Song <songmuchun@bytedance.com>
Subject: mm: hugetlb: fix a race between freeing and dissolving the page

There is a race condition between __free_huge_page()
and dissolve_free_huge_page().

CPU0:                         CPU1:

// page_count(page) == 1
put_page(page)
  __free_huge_page(page)
                              dissolve_free_huge_page(page)
                                spin_lock(&hugetlb_lock)
                                // PageHuge(page) && !page_count(page)
                                update_and_free_page(page)
                                // page is freed to the buddy
                                spin_unlock(&hugetlb_lock)
    spin_lock(&hugetlb_lock)
    clear_page_huge_active(page)
    enqueue_huge_page(page)
    // It is wrong, the page is already freed
    spin_unlock(&hugetlb_lock)

The race window is between put_page() and dissolve_free_huge_page():
dissolve_free_huge_page() can hand the page to the buddy allocator while
__free_huge_page() is still running, after which __free_huge_page()
corrupts page(s) that are already in the buddy allocator.

Make sure that the page is already on the free list when it is dissolved.
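
Condensed from the hunks below (a summary, not a replacement for the
diff): the free state is recorded in the page_private field of a tail
page, and dissolve_free_huge_page() retries until the page has actually
been enqueued:

	/* head[4].private records whether the page is on a freelist */
	static inline bool PageHugeFreed(struct page *head)
	{
		return page_private(head + 4) == -1UL;
	}

	/* in dissolve_free_huge_page(), under hugetlb_lock: */
	if (unlikely(!PageHugeFreed(head))) {
		spin_unlock(&hugetlb_lock);
		cond_resched();
		goto retry;	/* the freeing side has not enqueued it yet */
	}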

Link: https://lkml.kernel.org/r/20210115124942.46403-4-songmuchun@bytedance.com
Fixes: c8721bbbdd36 ("mm: memory-hotplug: enable memory hotplug to handle hugepage")
Signed-off-by: Muchun Song <songmuchun@bytedance.com>
Reviewed-by: Mike Kravetz <mike.kravetz@oracle.com>
Reviewed-by: Oscar Salvador <osalvador@suse.de>
Acked-by: Michal Hocko <mhocko@suse.com>
Cc: David Hildenbrand <david@redhat.com>
Cc: Yang Shi <shy828301@gmail.com>
Cc: <stable@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
---

 mm/hugetlb.c |   39 +++++++++++++++++++++++++++++++++++++++
 1 file changed, 39 insertions(+)

--- a/mm/hugetlb.c~mm-hugetlb-fix-a-race-between-freeing-and-dissolving-the-page
+++ a/mm/hugetlb.c
@@ -79,6 +79,21 @@ DEFINE_SPINLOCK(hugetlb_lock);
 static int num_fault_mutexes;
 struct mutex *hugetlb_fault_mutex_table ____cacheline_aligned_in_smp;
 
+static inline bool PageHugeFreed(struct page *head)
+{
+	return page_private(head + 4) == -1UL;
+}
+
+static inline void SetPageHugeFreed(struct page *head)
+{
+	set_page_private(head + 4, -1UL);
+}
+
+static inline void ClearPageHugeFreed(struct page *head)
+{
+	set_page_private(head + 4, 0);
+}
+
 /* Forward declaration */
 static int hugetlb_acct_memory(struct hstate *h, long delta);
 
@@ -1028,6 +1043,7 @@ static void enqueue_huge_page(struct hst
 	list_move(&page->lru, &h->hugepage_freelists[nid]);
 	h->free_huge_pages++;
 	h->free_huge_pages_node[nid]++;
+	SetPageHugeFreed(page);
 }
 
 static struct page *dequeue_huge_page_node_exact(struct hstate *h, int nid)
@@ -1044,6 +1060,7 @@ static struct page *dequeue_huge_page_no
 
 		list_move(&page->lru, &h->hugepage_activelist);
 		set_page_refcounted(page);
+		ClearPageHugeFreed(page);
 		h->free_huge_pages--;
 		h->free_huge_pages_node[nid]--;
 		return page;
@@ -1505,6 +1522,7 @@ static void prep_new_huge_page(struct hs
 	spin_lock(&hugetlb_lock);
 	h->nr_huge_pages++;
 	h->nr_huge_pages_node[nid]++;
+	ClearPageHugeFreed(page);
 	spin_unlock(&hugetlb_lock);
 }
 
@@ -1755,6 +1773,7 @@ int dissolve_free_huge_page(struct page
 {
 	int rc = -EBUSY;
 
+retry:
 	/* Not to disrupt normal path by vainly holding hugetlb_lock */
 	if (!PageHuge(page))
 		return 0;
@@ -1771,6 +1790,26 @@ int dissolve_free_huge_page(struct page
 		int nid = page_to_nid(head);
 		if (h->free_huge_pages - h->resv_huge_pages == 0)
 			goto out;
+
+		/*
+		 * We should make sure that the page is already on the free list
+		 * when it is dissolved.
+		 */
+		if (unlikely(!PageHugeFreed(head))) {
+			spin_unlock(&hugetlb_lock);
+			cond_resched();
+
+			/*
+			 * Theoretically, we should return -EBUSY when we
+			 * encounter this race. In fact, we have a chance
+			 * to successfully dissolve the page if we do a
+			 * retry. Because the race window is quite small.
+			 * If we seize this opportunity, it is an optimization
+			 * for increasing the success rate of dissolving page.
+			 */
+			goto retry;
+		}
+
 		/*
 		 * Move PageHWPoison flag from head page to the raw error page,
 		 * which makes any subpages rather than the error page reusable.
_


* [patch 03/18] mm: hugetlb: fix a race between isolating and freeing page
  2021-02-05  2:31 incoming Andrew Morton
  2021-02-05  2:32 ` [patch 01/18] mm: hugetlbfs: fix cannot migrate the fallocated HugeTLB page Andrew Morton
  2021-02-05  2:32 ` [patch 02/18] mm: hugetlb: fix a race between freeing and dissolving the page Andrew Morton
@ 2021-02-05  2:32 ` Andrew Morton
  2021-02-05  2:32 ` [patch 04/18] mm: hugetlb: remove VM_BUG_ON_PAGE from page_huge_active Andrew Morton
                   ` (14 subsequent siblings)
  17 siblings, 0 replies; 19+ messages in thread
From: Andrew Morton @ 2021-02-05  2:32 UTC (permalink / raw)
  To: akpm, david, linux-mm, mhocko, mike.kravetz, mm-commits,
	osalvador, shy828301, songmuchun, stable, torvalds

From: Muchun Song <songmuchun@bytedance.com>
Subject: mm: hugetlb: fix a race between isolating and freeing page

There is a race between isolate_huge_page() and __free_huge_page().

CPU0:                                       CPU1:

if (PageHuge(page))
                                            put_page(page)
                                              __free_huge_page(page)
                                                  spin_lock(&hugetlb_lock)
                                                  update_and_free_page(page)
                                                    set_compound_page_dtor(page,
                                                      NULL_COMPOUND_DTOR)
                                                  spin_unlock(&hugetlb_lock)
  isolate_huge_page(page)
    // trigger BUG_ON
    VM_BUG_ON_PAGE(!PageHead(page), page)
    spin_lock(&hugetlb_lock)
    page_huge_active(page)
      // trigger BUG_ON
      VM_BUG_ON_PAGE(!PageHuge(page), page)
    spin_unlock(&hugetlb_lock)

If we isolate a HugeTLB page on CPU0 while it is concurrently freed to
the buddy allocator on CPU1, CPU0 can trigger a BUG_ON, because the page
has already been freed to the buddy allocator.

Link: https://lkml.kernel.org/r/20210115124942.46403-5-songmuchun@bytedance.com
Fixes: c8721bbbdd36 ("mm: memory-hotplug: enable memory hotplug to handle hugepage")
Signed-off-by: Muchun Song <songmuchun@bytedance.com>
Reviewed-by: Mike Kravetz <mike.kravetz@oracle.com>
Acked-by: Michal Hocko <mhocko@suse.com>
Reviewed-by: Oscar Salvador <osalvador@suse.de>
Cc: David Hildenbrand <david@redhat.com>
Cc: Yang Shi <shy828301@gmail.com>
Cc: <stable@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
---

 mm/hugetlb.c |    4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

--- a/mm/hugetlb.c~mm-hugetlb-fix-a-race-between-isolating-and-freeing-page
+++ a/mm/hugetlb.c
@@ -5594,9 +5594,9 @@ bool isolate_huge_page(struct page *page
 {
 	bool ret = true;
 
-	VM_BUG_ON_PAGE(!PageHead(page), page);
 	spin_lock(&hugetlb_lock);
-	if (!page_huge_active(page) || !get_page_unless_zero(page)) {
+	if (!PageHeadHuge(page) || !page_huge_active(page) ||
+	    !get_page_unless_zero(page)) {
 		ret = false;
 		goto unlock;
 	}
_


* [patch 04/18] mm: hugetlb: remove VM_BUG_ON_PAGE from page_huge_active
  2021-02-05  2:31 incoming Andrew Morton
                   ` (2 preceding siblings ...)
  2021-02-05  2:32 ` [patch 03/18] mm: hugetlb: fix a race between isolating and freeing page Andrew Morton
@ 2021-02-05  2:32 ` Andrew Morton
  2021-02-05  2:32 ` [patch 05/18] mm: migrate: do not migrate HugeTLB page whose refcount is one Andrew Morton
                   ` (13 subsequent siblings)
  17 siblings, 0 replies; 19+ messages in thread
From: Andrew Morton @ 2021-02-05  2:32 UTC (permalink / raw)
  To: akpm, david, linux-mm, mhocko, mike.kravetz, mm-commits,
	osalvador, shy828301, songmuchun, stable, torvalds

From: Muchun Song <songmuchun@bytedance.com>
Subject: mm: hugetlb: remove VM_BUG_ON_PAGE from page_huge_active

page_huge_active() can be called from scan_movable_pages(), which does
not hold a reference count on the HugeTLB page.  So when we call
page_huge_active() from scan_movable_pages(), the HugeTLB page can be
freed in parallel.  Then we will trigger the BUG_ON in page_huge_active()
when CONFIG_DEBUG_VM is enabled.  Just remove the VM_BUG_ON_PAGE.
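
For context, an approximate sketch (from memory, not part of this patch)
of the caller's shape in scan_movable_pages(), which tests the flag
without holding a page reference:

	if (PageHuge(page)) {
		struct page *head = compound_head(page);

		/* no reference held: the page may be freed in parallel */
		if (page_huge_active(head))
			goto found;
	}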

Link: https://lkml.kernel.org/r/20210115124942.46403-6-songmuchun@bytedance.com
Fixes: 7e1f049efb86 ("mm: hugetlb: cleanup using paeg_huge_active()")
Signed-off-by: Muchun Song <songmuchun@bytedance.com>
Reviewed-by: Mike Kravetz <mike.kravetz@oracle.com>
Acked-by: Michal Hocko <mhocko@suse.com>
Reviewed-by: Oscar Salvador <osalvador@suse.de>
Cc: David Hildenbrand <david@redhat.com>
Cc: Yang Shi <shy828301@gmail.com>
Cc: <stable@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
---

 mm/hugetlb.c |    3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)

--- a/mm/hugetlb.c~mm-hugetlb-remove-vm_bug_on_page-from-page_huge_active
+++ a/mm/hugetlb.c
@@ -1361,8 +1361,7 @@ struct hstate *size_to_hstate(unsigned l
  */
 bool page_huge_active(struct page *page)
 {
-	VM_BUG_ON_PAGE(!PageHuge(page), page);
-	return PageHead(page) && PagePrivate(&page[1]);
+	return PageHeadHuge(page) && PagePrivate(&page[1]);
 }
 
 /* never called for tail page */
_


* [patch 05/18] mm: migrate: do not migrate HugeTLB page whose refcount is one
  2021-02-05  2:31 incoming Andrew Morton
                   ` (3 preceding siblings ...)
  2021-02-05  2:32 ` [patch 04/18] mm: hugetlb: remove VM_BUG_ON_PAGE from page_huge_active Andrew Morton
@ 2021-02-05  2:32 ` Andrew Morton
  2021-02-05  2:32 ` [patch 06/18] mm, compaction: move high_pfn to the for loop scope Andrew Morton
                   ` (12 subsequent siblings)
  17 siblings, 0 replies; 19+ messages in thread
From: Andrew Morton @ 2021-02-05  2:32 UTC (permalink / raw)
  To: akpm, david, linux-mm, mhocko, mike.kravetz, mm-commits,
	osalvador, shy828301, songmuchun, torvalds

From: Muchun Song <songmuchun@bytedance.com>
Subject: mm: migrate: do not migrate HugeTLB page whose refcount is one

All pages isolated for migration have an elevated reference count, so
seeing a reference count equal to 1 means that the last user of the page
has dropped its reference and the page has become unused; it doesn't make
much sense to migrate it anymore.  This has been done for regular pages
and this patch does the same for hugetlb pages.  Although the likelihood
of the race is rather small for hugetlb pages, it makes sense to keep the
two code paths in sync.
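
For comparison, the existing check for regular pages in unmap_and_move()
looks roughly like this (simplified, quoted from memory):

	if (page_count(page) == 1) {
		/* page was freed from under us, nothing left to migrate */
		ClearPageActive(page);
		ClearPageUnevictable(page);
		goto out;	/* reported as success */
	}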

Link: https://lkml.kernel.org/r/20210115124942.46403-2-songmuchun@bytedance.com
Signed-off-by: Muchun Song <songmuchun@bytedance.com>
Reviewed-by: Mike Kravetz <mike.kravetz@oracle.com>
Acked-by: Yang Shi <shy828301@gmail.com>
Acked-by: Michal Hocko <mhocko@suse.com>
Reviewed-by: David Hildenbrand <david@redhat.com>
Reviewed-by: Oscar Salvador <osalvador@suse.de>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
---

 mm/migrate.c |    6 ++++++
 1 file changed, 6 insertions(+)

--- a/mm/migrate.c~mm-migrate-do-not-migrate-hugetlb-page-whose-refcount-is-one
+++ a/mm/migrate.c
@@ -1280,6 +1280,12 @@ static int unmap_and_move_huge_page(new_
 		return -ENOSYS;
 	}
 
+	if (page_count(hpage) == 1) {
+		/* page was freed from under us. So we are done. */
+		putback_active_hugepage(hpage);
+		return MIGRATEPAGE_SUCCESS;
+	}
+
 	new_hpage = get_new_page(hpage, private);
 	if (!new_hpage)
 		return -ENOMEM;
_


* [patch 06/18] mm, compaction: move high_pfn to the for loop scope
  2021-02-05  2:31 incoming Andrew Morton
                   ` (4 preceding siblings ...)
  2021-02-05  2:32 ` [patch 05/18] mm: migrate: do not migrate HugeTLB page whose refcount is one Andrew Morton
@ 2021-02-05  2:32 ` Andrew Morton
  2021-02-05  2:32 ` [patch 07/18] mm/vmalloc: separate put pages and flush VM flags Andrew Morton
                   ` (11 subsequent siblings)
  17 siblings, 0 replies; 19+ messages in thread
From: Andrew Morton @ 2021-02-05  2:32 UTC (permalink / raw)
  To: akpm, linux-mm, mgorman, mm-commits, stable, torvalds, vbabka, wu-yan

From: Rokudo Yan <wu-yan@tcl.com>
Subject: mm, compaction: move high_pfn to the for loop scope

In fast_isolate_freepages(), high_pfn is used if a preferred page (PFN >=
low_pfn) is not found.  But high_pfn is not reset before searching each
free area, so when it is used as a freepage it may come from a free area
searched earlier, and move_freelist_head(freelist, freepage) will then
misbehave (e.g. corrupt the MOVABLE freelist):

Unable to handle kernel paging request at virtual address dead000000000200
Mem abort info:
  ESR = 0x96000044
  Exception class = DABT (current EL), IL = 32 bits
  SET = 0, FnV = 0
  EA = 0, S1PTW = 0
Data abort info:
  ISV = 0, ISS = 0x00000044
  CM = 0, WnR = 1
[dead000000000200] address between user and kernel address ranges

-000|list_cut_before(inline)
-000|move_freelist_head(inline)
-000|fast_isolate_freepages(inline)
-000|isolate_freepages(inline)
-000|compaction_alloc(?, ?)
-001|unmap_and_move(inline)
-001|migrate_pages([NSD:0xFFFFFF80088CBBD0] from = 0xFFFFFF80088CBD88, [NSD:0xFFFFFF80088CBBC8] get_new_p
-002|__read_once_size(inline)
-002|static_key_count(inline)
-002|static_key_false(inline)
-002|trace_mm_compaction_migratepages(inline)
-002|compact_zone(?, [NSD:0xFFFFFF80088CBCB0] capc = 0x0)
-003|kcompactd_do_work(inline)
-003|kcompactd([X19] p = 0xFFFFFF93227FBC40)
-004|kthread([X20] _create = 0xFFFFFFE1AFB26380)
-005|ret_from_fork(asm)
---|end of frame

The issue was reported on a smartphone with 6GB of RAM and 3GB of zram as
the swap device.

This patch fixes the issue by resetting high_pfn before searching each
free area, which ensures that the freepage and the freelist match when
move_freelist_head() is called in fast_isolate_freepages().
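
In outline (condensed and approximate, see the diff below), the
declaration moves into the loop so every free area starts with a clean
candidate:

	for (order = ...; ...; order--) {	/* loop header simplified */
		struct free_area *area = &cc->zone->free_area[order];
		unsigned long high_pfn = 0;	/* the fix: reset per area */

		if (!area->nr_free)
			continue;
		/* ... high_pfn is set while scanning this area only, so
		 * move_freelist_head() always sees a matching freelist ... */
	}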

Link: http://lkml.kernel.org/r/20190118175136.31341-12-mgorman@techsingularity.net
Link: https://lkml.kernel.org/r/20210112094720.1238444-1-wu-yan@tcl.com
Fixes: 5a811889de10f1eb ("mm, compaction: use free lists to quickly locate a migration target")
Signed-off-by: Rokudo Yan <wu-yan@tcl.com>
Acked-by: Mel Gorman <mgorman@techsingularity.net>
Acked-by: Vlastimil Babka <vbabka@suse.cz>
Cc: <stable@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
---

 mm/compaction.c |    3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

--- a/mm/compaction.c~mm-compaction-move-high_pfn-to-the-for-loop-scope
+++ a/mm/compaction.c
@@ -1342,7 +1342,7 @@ fast_isolate_freepages(struct compact_co
 {
 	unsigned int limit = min(1U, freelist_scan_limit(cc) >> 1);
 	unsigned int nr_scanned = 0;
-	unsigned long low_pfn, min_pfn, high_pfn = 0, highest = 0;
+	unsigned long low_pfn, min_pfn, highest = 0;
 	unsigned long nr_isolated = 0;
 	unsigned long distance;
 	struct page *page = NULL;
@@ -1387,6 +1387,7 @@ fast_isolate_freepages(struct compact_co
 		struct page *freepage;
 		unsigned long flags;
 		unsigned int order_scanned = 0;
+		unsigned long high_pfn = 0;
 
 		if (!area->nr_free)
 			continue;
_


* [patch 07/18] mm/vmalloc: separate put pages and flush VM flags
  2021-02-05  2:31 incoming Andrew Morton
                   ` (5 preceding siblings ...)
  2021-02-05  2:32 ` [patch 06/18] mm, compaction: move high_pfn to the for loop scope Andrew Morton
@ 2021-02-05  2:32 ` Andrew Morton
  2021-02-05  2:32 ` [patch 08/18] init/gcov: allow CONFIG_CONSTRUCTORS on UML to fix module gcov Andrew Morton
                   ` (10 subsequent siblings)
  17 siblings, 0 replies; 19+ messages in thread
From: Andrew Morton @ 2021-02-05  2:32 UTC (permalink / raw)
  To: akpm, dja, hch, linmiaohe, linux-mm, mm-commits,
	rick.p.edgecombe, stable, torvalds, willy

From: Rick Edgecombe <rick.p.edgecombe@intel.com>
Subject: mm/vmalloc: separate put pages and flush VM flags

When VM_MAP_PUT_PAGES was added, it was defined with the same value as
VM_FLUSH_RESET_PERMS.  This doesn't seem like it will cause any big
functional problems other than some excess flushing for VM_MAP_PUT_PAGES
allocations.

Redefine VM_MAP_PUT_PAGES to have its own value.  Also, rearrange things
so flags are less likely to be missed in the future.
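
To illustrate the collision with a minimal (hypothetical) example: two
flag macros sharing a bit value make a test for one also match the other,
so freeing a VM_MAP_PUT_PAGES mapping also took the flush/reset path:

	#define VM_FLUSH_RESET_PERMS	0x00000100
	#define VM_MAP_PUT_PAGES	0x00000100	/* old, colliding value */

	if (vm->flags & VM_FLUSH_RESET_PERMS)	/* also true for PUT_PAGES */
		/* ... reset direct map and flush TLB (excess flushing) ... */;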

Link: https://lkml.kernel.org/r/20210122233706.9304-1-rick.p.edgecombe@intel.com
Fixes: b944afc9d64d ("mm: add a VM_MAP_PUT_PAGES flag for vmap")
Signed-off-by: Rick Edgecombe <rick.p.edgecombe@intel.com>
Suggested-by: Matthew Wilcox <willy@infradead.org>
Cc: Miaohe Lin <linmiaohe@huawei.com>
Cc: Christoph Hellwig <hch@lst.de>
Cc: Daniel Axtens <dja@axtens.net>
Cc: <stable@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
---

 include/linux/vmalloc.h |    9 ++-------
 1 file changed, 2 insertions(+), 7 deletions(-)

--- a/include/linux/vmalloc.h~mm-vmalloc-separate-put-pages-and-flush-vm-flags
+++ a/include/linux/vmalloc.h
@@ -24,7 +24,8 @@ struct notifier_block;		/* in notifier.h
 #define VM_UNINITIALIZED	0x00000020	/* vm_struct is not fully initialized */
 #define VM_NO_GUARD		0x00000040      /* don't add guard page */
 #define VM_KASAN		0x00000080      /* has allocated kasan shadow memory */
-#define VM_MAP_PUT_PAGES	0x00000100	/* put pages and free array in vfree */
+#define VM_FLUSH_RESET_PERMS	0x00000100	/* reset direct map and flush TLB on unmap, can't be freed in atomic context */
+#define VM_MAP_PUT_PAGES	0x00000200	/* put pages and free array in vfree */
 
 /*
  * VM_KASAN is used slighly differently depending on CONFIG_KASAN_VMALLOC.
@@ -37,12 +38,6 @@ struct notifier_block;		/* in notifier.h
  * determine which allocations need the module shadow freed.
  */
 
-/*
- * Memory with VM_FLUSH_RESET_PERMS cannot be freed in an interrupt or with
- * vfree_atomic().
- */
-#define VM_FLUSH_RESET_PERMS	0x00000100      /* Reset direct map and flush TLB on unmap */
-
 /* bits [20..32] reserved for arch specific ioremap internals */
 
 /*
_


* [patch 08/18] init/gcov: allow CONFIG_CONSTRUCTORS on UML to fix module gcov
  2021-02-05  2:31 incoming Andrew Morton
                   ` (6 preceding siblings ...)
  2021-02-05  2:32 ` [patch 07/18] mm/vmalloc: separate put pages and flush VM flags Andrew Morton
@ 2021-02-05  2:32 ` Andrew Morton
  2021-02-05  2:32 ` [patch 09/18] mm: thp: fix MADV_REMOVE deadlock on shmem THP Andrew Morton
                   ` (9 subsequent siblings)
  17 siblings, 0 replies; 19+ messages in thread
From: Andrew Morton @ 2021-02-05  2:32 UTC (permalink / raw)
  To: akpm, arnd, jeyu, johannes.berg, linux-mm, mm-commits, oberpar, torvalds

From: Johannes Berg <johannes.berg@intel.com>
Subject: init/gcov: allow CONFIG_CONSTRUCTORS on UML to fix module gcov

On ARCH=um, loading a module doesn't result in its constructors getting
called, which breaks module gcov since the debugfs files are never
registered.  On the other hand, in-kernel constructors have already been
called by the dynamic linker, so we can't call them again.

Get out of this conundrum by allowing CONFIG_CONSTRUCTORS to be selected,
but avoiding the in-kernel constructor calls.

Also remove the "if !UML" from GCOV selecting CONSTRUCTORS now, since we
really do want CONSTRUCTORS, just not kernel binary ones.

Link: https://lkml.kernel.org/r/20210120172041.c246a2cac2fb.I1358f584b76f1898373adfed77f4462c8705b736@changeid
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
Reviewed-by: Peter Oberparleiter <oberpar@linux.ibm.com>
Cc: Arnd Bergmann <arnd@arndb.de>
Cc: Jessica Yu <jeyu@kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
---

 init/Kconfig        |    1 -
 init/main.c         |    8 +++++++-
 kernel/gcov/Kconfig |    2 +-
 3 files changed, 8 insertions(+), 3 deletions(-)

--- a/init/Kconfig~init-gcov-allow-config_constructors-on-uml-to-fix-module-gcov
+++ a/init/Kconfig
@@ -76,7 +76,6 @@ config CC_HAS_ASM_INLINE
 
 config CONSTRUCTORS
 	bool
-	depends on !UML
 
 config IRQ_WORK
 	bool
--- a/init/main.c~init-gcov-allow-config_constructors-on-uml-to-fix-module-gcov
+++ a/init/main.c
@@ -1066,7 +1066,13 @@ asmlinkage __visible void __init __no_sa
 /* Call all constructor functions linked into the kernel. */
 static void __init do_ctors(void)
 {
-#ifdef CONFIG_CONSTRUCTORS
+/*
+ * For UML, the constructors have already been called by the
+ * normal setup code as it's just a normal ELF binary, so we
+ * cannot do it again - but we do need CONFIG_CONSTRUCTORS
+ * even on UML for modules.
+ */
+#if defined(CONFIG_CONSTRUCTORS) && !defined(CONFIG_UML)
 	ctor_fn_t *fn = (ctor_fn_t *) __ctors_start;
 
 	for (; fn < (ctor_fn_t *) __ctors_end; fn++)
--- a/kernel/gcov/Kconfig~init-gcov-allow-config_constructors-on-uml-to-fix-module-gcov
+++ a/kernel/gcov/Kconfig
@@ -4,7 +4,7 @@ menu "GCOV-based kernel profiling"
 config GCOV_KERNEL
 	bool "Enable gcov-based kernel profiling"
 	depends on DEBUG_FS
-	select CONSTRUCTORS if !UML
+	select CONSTRUCTORS
 	default n
 	help
 	This option enables gcov-based code profiling (e.g. for code coverage
_


* [patch 09/18] mm: thp: fix MADV_REMOVE deadlock on shmem THP
  2021-02-05  2:31 incoming Andrew Morton
                   ` (7 preceding siblings ...)
  2021-02-05  2:32 ` [patch 08/18] init/gcov: allow CONFIG_CONSTRUCTORS on UML to fix module gcov Andrew Morton
@ 2021-02-05  2:32 ` Andrew Morton
  2021-02-05  2:32 ` [patch 10/18] memblock: do not start bottom-up allocations with kernel_end Andrew Morton
                   ` (8 subsequent siblings)
  17 siblings, 0 replies; 19+ messages in thread
From: Andrew Morton @ 2021-02-05  2:32 UTC (permalink / raw)
  To: aarcange, akpm, hughd, linux-mm, mm-commits,
	sergey.senozhatsky.work, stable, torvalds

From: Hugh Dickins <hughd@google.com>
Subject: mm: thp: fix MADV_REMOVE deadlock on shmem THP

Sergey reported deadlock between kswapd correctly doing its usual
lock_page(page) followed by down_read(page->mapping->i_mmap_rwsem), and
madvise(MADV_REMOVE) on an madvise(MADV_HUGEPAGE) area doing
down_write(page->mapping->i_mmap_rwsem) followed by lock_page(page).
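
Sketched from the description above, the ABBA ordering is:

CPU0 (kswapd):                      CPU1 (madvise(MADV_REMOVE)):

lock_page(page)
                                    down_write(i_mmap_rwsem)
down_read(i_mmap_rwsem)
  // blocks on CPU1's writer
                                    __split_huge_pmd()
                                      lock_page(page)
                                      // blocks on CPU0 -> deadlock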

This happened when shmem_fallocate(punch hole)'s unmap_mapping_range()
reaches zap_pmd_range()'s call to __split_huge_pmd().  The same deadlock
could occur when partially truncating a mapped huge tmpfs file, or using
fallocate(FALLOC_FL_PUNCH_HOLE) on it.

__split_huge_pmd()'s page lock was added in 5.8, to make sure that any
concurrent use of reuse_swap_page() (holding page lock) could not catch
the anon THP's mapcounts and swapcounts while they were being split.

Fortunately, reuse_swap_page() is never applied to a shmem or file THP
(not even by khugepaged, which checks PageSwapCache before calling), and
anonymous THPs are never created in shmem or file areas: so that
__split_huge_pmd()'s page lock can only be necessary for anonymous THPs,
on which there is no risk of deadlock with i_mmap_rwsem.

Link: https://lkml.kernel.org/r/alpine.LSU.2.11.2101161409470.2022@eggly.anvils
Fixes: c444eb564fb1 ("mm: thp: make the THP mapcount atomic against __split_huge_pmd_locked()")
Signed-off-by: Hugh Dickins <hughd@google.com>
Reported-by: Sergey Senozhatsky <sergey.senozhatsky.work@gmail.com>
Reviewed-by: Andrea Arcangeli <aarcange@redhat.com>
Cc: <stable@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
---

 mm/huge_memory.c |   37 +++++++++++++++++++++++--------------
 1 file changed, 23 insertions(+), 14 deletions(-)

--- a/mm/huge_memory.c~mm-thp-fix-madv_remove-deadlock-on-shmem-thp
+++ a/mm/huge_memory.c
@@ -2202,7 +2202,7 @@ void __split_huge_pmd(struct vm_area_str
 {
 	spinlock_t *ptl;
 	struct mmu_notifier_range range;
-	bool was_locked = false;
+	bool do_unlock_page = false;
 	pmd_t _pmd;
 
 	mmu_notifier_range_init(&range, MMU_NOTIFY_CLEAR, 0, vma, vma->vm_mm,
@@ -2218,7 +2218,6 @@ void __split_huge_pmd(struct vm_area_str
 	VM_BUG_ON(freeze && !page);
 	if (page) {
 		VM_WARN_ON_ONCE(!PageLocked(page));
-		was_locked = true;
 		if (page != pmd_page(*pmd))
 			goto out;
 	}
@@ -2227,19 +2226,29 @@ repeat:
 	if (pmd_trans_huge(*pmd)) {
 		if (!page) {
 			page = pmd_page(*pmd);
-			if (unlikely(!trylock_page(page))) {
-				get_page(page);
-				_pmd = *pmd;
-				spin_unlock(ptl);
-				lock_page(page);
-				spin_lock(ptl);
-				if (unlikely(!pmd_same(*pmd, _pmd))) {
-					unlock_page(page);
+			/*
+			 * An anonymous page must be locked, to ensure that a
+			 * concurrent reuse_swap_page() sees stable mapcount;
+			 * but reuse_swap_page() is not used on shmem or file,
+			 * and page lock must not be taken when zap_pmd_range()
+			 * calls __split_huge_pmd() while i_mmap_lock is held.
+			 */
+			if (PageAnon(page)) {
+				if (unlikely(!trylock_page(page))) {
+					get_page(page);
+					_pmd = *pmd;
+					spin_unlock(ptl);
+					lock_page(page);
+					spin_lock(ptl);
+					if (unlikely(!pmd_same(*pmd, _pmd))) {
+						unlock_page(page);
+						put_page(page);
+						page = NULL;
+						goto repeat;
+					}
 					put_page(page);
-					page = NULL;
-					goto repeat;
 				}
-				put_page(page);
+				do_unlock_page = true;
 			}
 		}
 		if (PageMlocked(page))
@@ -2249,7 +2258,7 @@ repeat:
 	__split_huge_pmd_locked(vma, pmd, range.start, freeze);
 out:
 	spin_unlock(ptl);
-	if (!was_locked && page)
+	if (do_unlock_page)
 		unlock_page(page);
 	/*
 	 * No need to double call mmu_notifier->invalidate_range() callback.
_


* [patch 10/18] memblock: do not start bottom-up allocations with kernel_end
  2021-02-05  2:31 incoming Andrew Morton
                   ` (8 preceding siblings ...)
  2021-02-05  2:32 ` [patch 09/18] mm: thp: fix MADV_REMOVE deadlock on shmem THP Andrew Morton
@ 2021-02-05  2:32 ` Andrew Morton
  2021-02-05  2:32 ` [patch 11/18] mailmap: fix name/email for Viresh Kumar Andrew Morton
                   ` (7 subsequent siblings)
  17 siblings, 0 replies; 19+ messages in thread
From: Andrew Morton @ 2021-02-05  2:32 UTC (permalink / raw)
  To: akpm, bauerman, guro, iamjoonsoo.kim, linux-mm, mhocko,
	mm-commits, riel, rppt, stable, torvalds, vvghjk1234

From: Roman Gushchin <guro@fb.com>
Subject: memblock: do not start bottom-up allocations with kernel_end

With KASLR the kernel image is placed at a random location, so starting
the bottom-up allocation at kernel_end can result in an allocation
failure and a warning like this one:

[    0.002920] hugetlb_cma: reserve 2048 MiB, up to 2048 MiB per node
[    0.002921] ------------[ cut here ]------------
[    0.002922] memblock: bottom-up allocation failed, memory hotremove may be affected
[    0.002937] WARNING: CPU: 0 PID: 0 at mm/memblock.c:332 memblock_find_in_range_node+0x178/0x25a
[    0.002937] Modules linked in:
[    0.002939] CPU: 0 PID: 0 Comm: swapper Not tainted 5.10.0+ #1169
[    0.002940] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.14.0-1.fc33 04/01/2014
[    0.002942] RIP: 0010:memblock_find_in_range_node+0x178/0x25a
[    0.002944] Code: e9 6d ff ff ff 48 85 c0 0f 85 da 00 00 00 80 3d 9b 35 df 00 00 75 15 48 c7 c7 c0 75 59 88 c6 05 8b 35 df 00 01 e8 25 8a fa ff <0f> 0b 48 c7 44 24 20 ff ff ff ff 44 89 e6 44 89 ea 48 c7 c1 70 5c
[    0.002945] RSP: 0000:ffffffff88803d18 EFLAGS: 00010086 ORIG_RAX: 0000000000000000
[    0.002947] RAX: 0000000000000000 RBX: 0000000240000000 RCX: 00000000ffffdfff
[    0.002948] RDX: 00000000ffffdfff RSI: 00000000ffffffea RDI: 0000000000000046
[    0.002948] RBP: 0000000100000000 R08: ffffffff88922788 R09: 0000000000009ffb
[    0.002949] R10: 00000000ffffe000 R11: 3fffffffffffffff R12: 0000000000000000
[    0.002950] R13: 0000000000000000 R14: 0000000080000000 R15: 00000001fb42c000
[    0.002952] FS:  0000000000000000(0000) GS:ffffffff88f71000(0000) knlGS:0000000000000000
[    0.002953] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[    0.002954] CR2: ffffa080fb401000 CR3: 00000001fa80a000 CR4: 00000000000406b0
[    0.002956] Call Trace:
[    0.002961]  ? memblock_alloc_range_nid+0x8d/0x11e
[    0.002963]  ? cma_declare_contiguous_nid+0x2c4/0x38c
[    0.002964]  ? hugetlb_cma_reserve+0xdc/0x128
[    0.002968]  ? flush_tlb_one_kernel+0xc/0x20
[    0.002969]  ? native_set_fixmap+0x82/0xd0
[    0.002971]  ? flat_get_apic_id+0x5/0x10
[    0.002973]  ? register_lapic_address+0x8e/0x97
[    0.002975]  ? setup_arch+0x8a5/0xc3f
[    0.002978]  ? start_kernel+0x66/0x547
[    0.002980]  ? load_ucode_bsp+0x4c/0xcd
[    0.002982]  ? secondary_startup_64_no_verify+0xb0/0xbb
[    0.002986] random: get_random_bytes called from __warn+0xab/0x110 with crng_init=0
[    0.002988] ---[ end trace f151227d0b39be70 ]---

At the same time, the kernel image is protected with memblock_reserve(),
so we can just start searching at PAGE_SIZE.  In this case the bottom-up
allocation has the same chance of success as a top-down allocation, so
there is no reason to fall back in the case of a failure.  Altogether
this simplifies the logic.

Link: https://lkml.kernel.org/r/20201217201214.3414100-2-guro@fb.com
Fixes: 8fabc623238e ("powerpc: Ensure that swiotlb buffer is allocated from low memory")
Signed-off-by: Roman Gushchin <guro@fb.com>
Reviewed-by: Mike Rapoport <rppt@linux.ibm.com>
Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Cc: Michal Hocko <mhocko@kernel.org>
Cc: Rik van Riel <riel@surriel.com>
Cc: Wonhyuk Yang <vvghjk1234@gmail.com>
Cc: Thiago Jung Bauermann <bauerman@linux.ibm.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
---

 mm/memblock.c |   49 +++++-------------------------------------------
 1 file changed, 6 insertions(+), 43 deletions(-)

--- a/mm/memblock.c~memblock-do-not-start-bottom-up-allocations-with-kernel_end
+++ a/mm/memblock.c
@@ -275,14 +275,6 @@ __memblock_find_range_top_down(phys_addr
  *
  * Find @size free area aligned to @align in the specified range and node.
  *
- * When allocation direction is bottom-up, the @start should be greater
- * than the end of the kernel image. Otherwise, it will be trimmed. The
- * reason is that we want the bottom-up allocation just near the kernel
- * image so it is highly likely that the allocated memory and the kernel
- * will reside in the same node.
- *
- * If bottom-up allocation failed, will try to allocate memory top-down.
- *
  * Return:
  * Found address on success, 0 on failure.
  */
@@ -291,8 +283,6 @@ static phys_addr_t __init_memblock membl
 					phys_addr_t end, int nid,
 					enum memblock_flags flags)
 {
-	phys_addr_t kernel_end, ret;
-
 	/* pump up @end */
 	if (end == MEMBLOCK_ALLOC_ACCESSIBLE ||
 	    end == MEMBLOCK_ALLOC_KASAN)
@@ -301,40 +291,13 @@ static phys_addr_t __init_memblock membl
 	/* avoid allocating the first page */
 	start = max_t(phys_addr_t, start, PAGE_SIZE);
 	end = max(start, end);
-	kernel_end = __pa_symbol(_end);
-
-	/*
-	 * try bottom-up allocation only when bottom-up mode
-	 * is set and @end is above the kernel image.
-	 */
-	if (memblock_bottom_up() && end > kernel_end) {
-		phys_addr_t bottom_up_start;
-
-		/* make sure we will allocate above the kernel */
-		bottom_up_start = max(start, kernel_end);
-
-		/* ok, try bottom-up allocation first */
-		ret = __memblock_find_range_bottom_up(bottom_up_start, end,
-						      size, align, nid, flags);
-		if (ret)
-			return ret;
-
-		/*
-		 * we always limit bottom-up allocation above the kernel,
-		 * but top-down allocation doesn't have the limit, so
-		 * retrying top-down allocation may succeed when bottom-up
-		 * allocation failed.
-		 *
-		 * bottom-up allocation is expected to be fail very rarely,
-		 * so we use WARN_ONCE() here to see the stack trace if
-		 * fail happens.
-		 */
-		WARN_ONCE(IS_ENABLED(CONFIG_MEMORY_HOTREMOVE),
-			  "memblock: bottom-up allocation failed, memory hotremove may be affected\n");
-	}
 
-	return __memblock_find_range_top_down(start, end, size, align, nid,
-					      flags);
+	if (memblock_bottom_up())
+		return __memblock_find_range_bottom_up(start, end, size, align,
+						       nid, flags);
+	else
+		return __memblock_find_range_top_down(start, end, size, align,
+						      nid, flags);
 }
 
 /**
_


* [patch 11/18] mailmap: fix name/email for Viresh Kumar
  2021-02-05  2:31 incoming Andrew Morton
                   ` (9 preceding siblings ...)
  2021-02-05  2:32 ` [patch 10/18] memblock: do not start bottom-up allocations with kernel_end Andrew Morton
@ 2021-02-05  2:32 ` Andrew Morton
  2021-02-05  2:32 ` [patch 12/18] mailmap: add entries for Manivannan Sadhasivam Andrew Morton
                   ` (6 subsequent siblings)
  17 siblings, 0 replies; 19+ messages in thread
From: Andrew Morton @ 2021-02-05  2:32 UTC (permalink / raw)
  To: akpm, linux-mm, mm-commits, torvalds, viresh.kumar

From: Viresh Kumar <viresh.kumar@linaro.org>
Subject: mailmap: fix name/email for Viresh Kumar

For some of the patches the email address was misspelled as linaro.com
instead of linaro.org, and for others the name was written as "viresh
kumar" (all lowercase).  Fix both with the help of mailmap entries.

Link: https://lkml.kernel.org/r/d6b80b210d7fe0ddc1d4d0b22eff9708c72ef8b3.1612178938.git.viresh.kumar@linaro.org
Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
---

 .mailmap |    2 ++
 1 file changed, 2 insertions(+)

--- a/.mailmap~mailmap-fix-name-email-for-viresh-kumar
+++ a/.mailmap
@@ -334,6 +334,8 @@ Vinod Koul <vkoul@kernel.org> <vkoul@inf
 Viresh Kumar <vireshk@kernel.org> <viresh.kumar2@arm.com>
 Viresh Kumar <vireshk@kernel.org> <viresh.kumar@st.com>
 Viresh Kumar <vireshk@kernel.org> <viresh.linux@gmail.com>
+Viresh Kumar <viresh.kumar@linaro.org> <viresh.kumar@linaro.org>
+Viresh Kumar <viresh.kumar@linaro.org> <viresh.kumar@linaro.com>
 Vivien Didelot <vivien.didelot@gmail.com> <vivien.didelot@savoirfairelinux.com>
 Vlad Dogaru <ddvlad@gmail.com> <vlad.dogaru@intel.com>
 Vladimir Davydov <vdavydov.dev@gmail.com> <vdavydov@parallels.com>
_


* [patch 12/18] mailmap: add entries for Manivannan Sadhasivam
  2021-02-05  2:31 incoming Andrew Morton
                   ` (10 preceding siblings ...)
  2021-02-05  2:32 ` [patch 11/18] mailmap: fix name/email for Viresh Kumar Andrew Morton
@ 2021-02-05  2:32 ` Andrew Morton
  2021-02-05  2:32 ` [patch 13/18] mm/filemap: add missing mem_cgroup_uncharge() to __add_to_page_cache_locked() Andrew Morton
                   ` (5 subsequent siblings)
  17 siblings, 0 replies; 19+ messages in thread
From: Andrew Morton @ 2021-02-05  2:32 UTC (permalink / raw)
  To: akpm, linux-mm, manivannan.sadhasivam, mm-commits, torvalds

From: Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org>
Subject: mailmap: add entries for Manivannan Sadhasivam

Map my personal and work addresses to my kernel.org address.

Link: https://lkml.kernel.org/r/20210201104640.108556-1-manivannan.sadhasivam@linaro.org
Signed-off-by: Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
---

 .mailmap |    2 ++
 1 file changed, 2 insertions(+)

--- a/.mailmap~mailmap-add-entries-for-manivannan-sadhasivam
+++ a/.mailmap
@@ -199,6 +199,8 @@ Li Yang <leoyang.li@nxp.com> <leoli@free
 Li Yang <leoyang.li@nxp.com> <leo@zh-kernel.org>
 Lukasz Luba <lukasz.luba@arm.com> <l.luba@partner.samsung.com>
 Maciej W. Rozycki <macro@mips.com> <macro@imgtec.com>
+Manivannan Sadhasivam <mani@kernel.org> <manivannanece23@gmail.com>
+Manivannan Sadhasivam <mani@kernel.org> <manivannan.sadhasivam@linaro.org>
 Marcin Nowakowski <marcin.nowakowski@mips.com> <marcin.nowakowski@imgtec.com>
 Marc Zyngier <maz@kernel.org> <marc.zyngier@arm.com>
 Mark Brown <broonie@sirena.org.uk>
_


* [patch 13/18] mm/filemap: add missing mem_cgroup_uncharge() to __add_to_page_cache_locked()
  2021-02-05  2:31 incoming Andrew Morton
                   ` (11 preceding siblings ...)
  2021-02-05  2:32 ` [patch 12/18] mailmap: add entries for Manivannan Sadhasivam Andrew Morton
@ 2021-02-05  2:32 ` Andrew Morton
  2021-02-05  2:32 ` [patch 14/18] kasan: add explicit preconditions to kasan_report() Andrew Morton
                   ` (4 subsequent siblings)
  17 siblings, 0 replies; 19+ messages in thread
From: Andrew Morton @ 2021-02-05  2:32 UTC (permalink / raw)
  To: akpm, alex.shi, hannes, linmiaohe, linux-mm, longman, mhocko,
	mm-commits, smuchun, stable, torvalds, willy

From: Waiman Long <longman@redhat.com>
Subject: mm/filemap: add missing mem_cgroup_uncharge() to __add_to_page_cache_locked()

commit 3fea5a499d57 ("mm: memcontrol: convert page cache to a new
mem_cgroup_charge() API") introduced a bug in __add_to_page_cache_locked()
causing the following splat:

 [ 1570.068330] page dumped because: VM_BUG_ON_PAGE(page_memcg(page))
 [ 1570.068333] pages's memcg:ffff8889a4116000
 [ 1570.068343] ------------[ cut here ]------------
 [ 1570.068346] kernel BUG at mm/memcontrol.c:2924!
 [ 1570.068355] invalid opcode: 0000 [#1] SMP KASAN PTI
 [ 1570.068359] CPU: 35 PID: 12345 Comm: cat Tainted: G S      W I       5.11.0-rc4-debug+ #1
 [ 1570.068363] Hardware name: HP HP Z8 G4 Workstation/81C7, BIOS P60 v01.25 12/06/2017
 [ 1570.068365] RIP: 0010:commit_charge+0xf4/0x130
   :
 [ 1570.068375] RSP: 0018:ffff8881b38d70e8 EFLAGS: 00010286
 [ 1570.068379] RAX: 0000000000000000 RBX: ffffea00260ddd00 RCX: 0000000000000027
 [ 1570.068382] RDX: 0000000000000000 RSI: 0000000000000004 RDI: ffff88907ebe05a8
 [ 1570.068384] RBP: ffffea00260ddd00 R08: ffffed120fd7c0b6 R09: ffffed120fd7c0b6
 [ 1570.068386] R10: ffff88907ebe05ab R11: ffffed120fd7c0b5 R12: ffffea00260ddd38
 [ 1570.068389] R13: ffff8889a4116000 R14: ffff8889a4116000 R15: 0000000000000001
 [ 1570.068391] FS:  00007ff039638680(0000) GS:ffff88907ea00000(0000) knlGS:0000000000000000
 [ 1570.068394] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
 [ 1570.068396] CR2: 00007f36f354cc20 CR3: 00000008a0126006 CR4: 00000000007706e0
 [ 1570.068398] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
 [ 1570.068400] DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
 [ 1570.068402] PKRU: 55555554
 [ 1570.068404] Call Trace:
 [ 1570.068407]  mem_cgroup_charge+0x175/0x770
 [ 1570.068413]  __add_to_page_cache_locked+0x712/0xad0
 [ 1570.068439]  add_to_page_cache_lru+0xc5/0x1f0
 [ 1570.068461]  cachefiles_read_or_alloc_pages+0x895/0x2e10 [cachefiles]
 [ 1570.068524]  __fscache_read_or_alloc_pages+0x6c0/0xa00 [fscache]
 [ 1570.068540]  __nfs_readpages_from_fscache+0x16d/0x630 [nfs]
 [ 1570.068585]  nfs_readpages+0x24e/0x540 [nfs]
 [ 1570.068693]  read_pages+0x5b1/0xc40
 [ 1570.068711]  page_cache_ra_unbounded+0x460/0x750
 [ 1570.068729]  generic_file_buffered_read_get_pages+0x290/0x1710
 [ 1570.068756]  generic_file_buffered_read+0x2a9/0xc30
 [ 1570.068832]  nfs_file_read+0x13f/0x230 [nfs]
 [ 1570.068872]  new_sync_read+0x3af/0x610
 [ 1570.068901]  vfs_read+0x339/0x4b0
 [ 1570.068909]  ksys_read+0xf1/0x1c0
 [ 1570.068920]  do_syscall_64+0x33/0x40
 [ 1570.068926]  entry_SYSCALL_64_after_hwframe+0x44/0xa9
 [ 1570.068930] RIP: 0033:0x7ff039135595

Before that commit, there was a try_charge() and a commit_charge() in
__add_to_page_cache_locked().  These two separate charge functions were
replaced by a single mem_cgroup_charge().  However, the matching
mem_cgroup_uncharge() was not added for the case where the xarray
insertion fails and the page is released back to the pool.  Fix this by
adding a mem_cgroup_uncharge() call when an insertion error happens.
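
Condensed from the diff below, the resulting pattern is: remember whether
the charge succeeded, and undo it on any later failure:

	bool charged = false;

	error = mem_cgroup_charge(page, current->mm, gfp);
	if (error)
		goto error;
	charged = true;

	/* ... xarray insertion ... */

	if (xas_error(&xas)) {
		error = xas_error(&xas);
		if (charged)
			mem_cgroup_uncharge(page);	/* balance the charge */
		goto error;
	}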

Link: https://lkml.kernel.org/r/20210125042441.20030-1-longman@redhat.com
Fixes: 3fea5a499d57 ("mm: memcontrol: convert page cache to a new mem_cgroup_charge() API")
Signed-off-by: Waiman Long <longman@redhat.com>
Reviewed-by: Alex Shi <alex.shi@linux.alibaba.com>
Acked-by: Johannes Weiner <hannes@cmpxchg.org>
Cc: Matthew Wilcox <willy@infradead.org>
Cc: Miaohe Lin <linmiaohe@huawei.com>
Cc: Muchun Song <smuchun@gmail.com>
Cc: Michal Hocko <mhocko@suse.com>
Cc: <stable@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
---

 mm/filemap.c |    4 ++++
 1 file changed, 4 insertions(+)

--- a/mm/filemap.c~mm-filemap-adding-missing-mem_cgroup_uncharge-to-__add_to_page_cache_locked
+++ a/mm/filemap.c
@@ -835,6 +835,7 @@ noinline int __add_to_page_cache_locked(
 	XA_STATE(xas, &mapping->i_pages, offset);
 	int huge = PageHuge(page);
 	int error;
+	bool charged = false;
 
 	VM_BUG_ON_PAGE(!PageLocked(page), page);
 	VM_BUG_ON_PAGE(PageSwapBacked(page), page);
@@ -848,6 +849,7 @@ noinline int __add_to_page_cache_locked(
 		error = mem_cgroup_charge(page, current->mm, gfp);
 		if (error)
 			goto error;
+		charged = true;
 	}
 
 	gfp &= GFP_RECLAIM_MASK;
@@ -896,6 +898,8 @@ unlock:
 
 	if (xas_error(&xas)) {
 		error = xas_error(&xas);
+		if (charged)
+			mem_cgroup_uncharge(page);
 		goto error;
 	}
 
_


* [patch 14/18] kasan: add explicit preconditions to kasan_report()
  2021-02-05  2:31 incoming Andrew Morton
                   ` (12 preceding siblings ...)
  2021-02-05  2:32 ` [patch 13/18] mm/filemap: add missing mem_cgroup_uncharge() to __add_to_page_cache_locked() Andrew Morton
@ 2021-02-05  2:32 ` Andrew Morton
  2021-02-05  2:32 ` [patch 15/18] kasan: make addr_has_metadata() return true for valid addresses Andrew Morton
                   ` (3 subsequent siblings)
  17 siblings, 0 replies; 19+ messages in thread
From: Andrew Morton @ 2021-02-05  2:32 UTC (permalink / raw)
  To: akpm, andreyknvl, aryabinin, catalin.marinas, dvyukov, glider,
	leonro, linux-mm, mark.rutland, mm-commits, naresh.kamboju,
	paulmck, torvalds, vincenzo.frascino, will

From: Vincenzo Frascino <vincenzo.frascino@arm.com>
Subject: kasan: add explicit preconditions to kasan_report()

Patch series "kasan: Fix metadata detection for KASAN_HW_TAGS", v5.

With the introduction of KASAN_HW_TAGS, kasan_report() currently assumes
that every location in memory has valid metadata associated with it,
because addr_has_metadata() always returns true.

As a consequence, an invalid address (e.g. a NULL pointer) passed to
kasan_report() when KASAN_HW_TAGS is enabled leads to a kernel panic.

Example below, based on arm64:

 ==================================================================
 BUG: KASAN: invalid-access in 0x0
 Read at addr 0000000000000000 by task swapper/0/1
 Unable to handle kernel NULL pointer dereference at virtual address 0000000000000000
 Mem abort info:
   ESR = 0x96000004
   EC = 0x25: DABT (current EL), IL = 32 bits
   SET = 0, FnV = 0
   EA = 0, S1PTW = 0
 Data abort info:
   ISV = 0, ISS = 0x00000004
   CM = 0, WnR = 0

...

 Call trace:
  mte_get_mem_tag+0x24/0x40
  kasan_report+0x1a4/0x410
  alsa_sound_last_init+0x8c/0xa4
  do_one_initcall+0x50/0x1b0
  kernel_init_freeable+0x1d4/0x23c
  kernel_init+0x14/0x118
  ret_from_fork+0x10/0x34
 Code: d65f03c0 9000f021 f9428021 b6cfff61 (d9600000)
 ---[ end trace 377c8bb45bdd3a1a ]---
 hrtimer: interrupt took 48694256 ns
 note: swapper/0[1] exited with preempt_count 1
 Kernel panic - not syncing: Attempted to kill init! exitcode=0x0000000b
 SMP: stopping secondary CPUs
 Kernel Offset: 0x35abaf140000 from 0xffff800010000000
 PHYS_OFFSET: 0x40000000
 CPU features: 0x0a7e0152,61c0a030
 Memory Limit: none
 ---[ end Kernel panic - not syncing: Attempted to kill init! exitcode=0x0000000b ]---

This series fixes the behavior of addr_has_metadata() that now returns
true only when the address is valid.


This patch (of 2):

With the introduction of KASAN_HW_TAGS, kasan_report() accesses the
metadata only when addr_has_metadata() succeeds.

Add a comment that makes the function's preconditions explicit.

Link: https://lkml.kernel.org/r/20210126134409.47894-1-vincenzo.frascino@arm.com
Link: https://lkml.kernel.org/r/20210126134409.47894-2-vincenzo.frascino@arm.com
Signed-off-by: Vincenzo Frascino <vincenzo.frascino@arm.com>
Reviewed-by: Andrey Konovalov <andreyknvl@google.com>
Cc: Andrey Ryabinin <aryabinin@virtuozzo.com>
Cc: Alexander Potapenko <glider@google.com>
Cc: Dmitry Vyukov <dvyukov@google.com>
Cc: Leon Romanovsky <leonro@mellanox.com>
Cc: Andrey Konovalov <andreyknvl@google.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will@kernel.org>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: "Paul E . McKenney" <paulmck@kernel.org>
Cc: Naresh Kamboju <naresh.kamboju@linaro.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
---

 include/linux/kasan.h |    7 +++++++
 1 file changed, 7 insertions(+)

--- a/include/linux/kasan.h~kasan-add-explicit-preconditions-to-kasan_report
+++ a/include/linux/kasan.h
@@ -333,6 +333,13 @@ static inline void *kasan_reset_tag(cons
 	return (void *)arch_kasan_reset_tag(addr);
 }
 
+/**
+ * kasan_report - print a report about a bad memory access detected by KASAN
+ * @addr: address of the bad access
+ * @size: size of the bad access
+ * @is_write: whether the bad access is a write or a read
+ * @ip: instruction pointer for the accessibility check or the bad access itself
+ */
 bool kasan_report(unsigned long addr, size_t size,
 		bool is_write, unsigned long ip);
 
_


* [patch 15/18] kasan: make addr_has_metadata() return true for valid addresses
  2021-02-05  2:31 incoming Andrew Morton
                   ` (13 preceding siblings ...)
  2021-02-05  2:32 ` [patch 14/18] kasan: add explicit preconditions to kasan_report() Andrew Morton
@ 2021-02-05  2:32 ` Andrew Morton
  2021-02-05  2:32 ` [patch 16/18] ubsan: implement __ubsan_handle_alignment_assumption Andrew Morton
                   ` (2 subsequent siblings)
  17 siblings, 0 replies; 19+ messages in thread
From: Andrew Morton @ 2021-02-05  2:32 UTC (permalink / raw)
  To: akpm, andreyknvl, aryabinin, catalin.marinas, dvyukov, glider,
	leonro, linux-mm, mark.rutland, mm-commits, naresh.kamboju,
	paulmck, torvalds, vincenzo.frascino, will

From: Vincenzo Frascino <vincenzo.frascino@arm.com>
Subject: kasan: make addr_has_metadata() return true for valid addresses

Currently, addr_has_metadata() returns true for every address.  An
invalid address (e.g. NULL) passed to the function when KASAN_HW_TAGS is
enabled leads to a kernel panic.

Make addr_has_metadata() return true for valid addresses only.

Note: KASAN_HW_TAGS support for vmalloc will be added with a future patch.

Link: https://lkml.kernel.org/r/20210126134409.47894-3-vincenzo.frascino@arm.com
Fixes: 2e903b91479782b7 ("kasan, arm64: implement HW_TAGS runtime")
Signed-off-by: Vincenzo Frascino <vincenzo.frascino@arm.com>
Reviewed-by: Andrey Konovalov <andreyknvl@google.com>
Cc: Andrey Ryabinin <aryabinin@virtuozzo.com>
Cc: Alexander Potapenko <glider@google.com>
Cc: Dmitry Vyukov <dvyukov@google.com>
Cc: Leon Romanovsky <leonro@mellanox.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Naresh Kamboju <naresh.kamboju@linaro.org>
Cc: "Paul E . McKenney" <paulmck@kernel.org>
Cc: Will Deacon <will@kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
---

 mm/kasan/kasan.h |    2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

--- a/mm/kasan/kasan.h~kasan-make-addr_has_metadata-return-true-for-valid-addresses
+++ a/mm/kasan/kasan.h
@@ -209,7 +209,7 @@ bool check_memory_region(unsigned long a
 
 static inline bool addr_has_metadata(const void *addr)
 {
-	return true;
+	return (is_vmalloc_addr(addr) || virt_addr_valid(addr));
 }
 
 #endif /* CONFIG_KASAN_GENERIC || CONFIG_KASAN_SW_TAGS */
_


* [patch 16/18] ubsan: implement __ubsan_handle_alignment_assumption
  2021-02-05  2:31 incoming Andrew Morton
                   ` (14 preceding siblings ...)
  2021-02-05  2:32 ` [patch 15/18] kasan: make addr_has_metadata() return true for valid addresses Andrew Morton
@ 2021-02-05  2:32 ` Andrew Morton
  2021-02-05  2:33 ` [patch 17/18] mm: hugetlb: fix missing put_page in gather_surplus_pages() Andrew Morton
  2021-02-05  2:33 ` [patch 18/18] MAINTAINERS/.mailmap: use my @kernel.org address Andrew Morton
  17 siblings, 0 replies; 19+ messages in thread
From: Andrew Morton @ 2021-02-05  2:32 UTC (permalink / raw)
  To: akpm, keescook, linux-mm, mm-commits, nathan, ndesaulniers, torvalds

From: Nathan Chancellor <nathan@kernel.org>
Subject: ubsan: implement __ubsan_handle_alignment_assumption

When building ARCH=mips 32r2el_defconfig with CONFIG_UBSAN_ALIGNMENT:

ld.lld: error: undefined symbol: __ubsan_handle_alignment_assumption
>>> referenced by slab.h:557 (include/linux/slab.h:557)
>>>               main.o:(do_initcalls) in archive init/built-in.a
>>> referenced by slab.h:448 (include/linux/slab.h:448)
>>>               do_mounts_rd.o:(rd_load_image) in archive init/built-in.a
>>> referenced by slab.h:448 (include/linux/slab.h:448)
>>>               do_mounts_rd.o:(identify_ramdisk_image) in archive init/built-in.a
>>> referenced 1579 more times

Implement this for the kernel based on LLVM's
handleAlignmentAssumptionImpl because the kernel is not linked against
the compiler runtime.
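
For context, this is the handler the compiler emits calls to for
__builtin_assume_aligned() under alignment sanitization; an illustrative
(hypothetical) example that would trigger it at runtime:

	void *example(void *ptr)
	{
		/* with CONFIG_UBSAN_ALIGNMENT, the compiler instruments
		 * this assumption; if ptr is not actually 64-byte aligned,
		 * __ubsan_handle_alignment_assumption() now reports it
		 * instead of the kernel failing to link */
		return __builtin_assume_aligned(ptr, 64);
	}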

Link: https://github.com/ClangBuiltLinux/linux/issues/1245
Link: https://github.com/llvm/llvm-project/blob/llvmorg-11.0.1/compiler-rt/lib/ubsan/ubsan_handlers.cpp#L151-L190
Link: https://lkml.kernel.org/r/20210127224451.2587372-1-nathan@kernel.org
Signed-off-by: Nathan Chancellor <nathan@kernel.org>
Acked-by: Kees Cook <keescook@chromium.org>
Reviewed-by: Nick Desaulniers <ndesaulniers@google.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
---

 lib/ubsan.c |   31 +++++++++++++++++++++++++++++++
 lib/ubsan.h |    6 ++++++
 2 files changed, 37 insertions(+)

--- a/lib/ubsan.c~ubsan-implement-__ubsan_handle_alignment_assumption
+++ a/lib/ubsan.c
@@ -427,3 +427,34 @@ void __ubsan_handle_load_invalid_value(v
 	ubsan_epilogue();
 }
 EXPORT_SYMBOL(__ubsan_handle_load_invalid_value);
+
+void __ubsan_handle_alignment_assumption(void *_data, unsigned long ptr,
+					 unsigned long align,
+					 unsigned long offset);
+void __ubsan_handle_alignment_assumption(void *_data, unsigned long ptr,
+					 unsigned long align,
+					 unsigned long offset)
+{
+	struct alignment_assumption_data *data = _data;
+	unsigned long real_ptr;
+
+	if (suppress_report(&data->location))
+		return;
+
+	ubsan_prologue(&data->location, "alignment-assumption");
+
+	if (offset)
+		pr_err("assumption of %lu byte alignment (with offset of %lu byte) for pointer of type %s failed",
+		       align, offset, data->type->type_name);
+	else
+		pr_err("assumption of %lu byte alignment for pointer of type %s failed",
+		       align, data->type->type_name);
+
+	real_ptr = ptr - offset;
+	pr_err("%saddress is %lu aligned, misalignment offset is %lu bytes",
+	       offset ? "offset " : "", BIT(real_ptr ? __ffs(real_ptr) : 0),
+	       real_ptr & (align - 1));
+
+	ubsan_epilogue();
+}
+EXPORT_SYMBOL(__ubsan_handle_alignment_assumption);
--- a/lib/ubsan.h~ubsan-implement-__ubsan_handle_alignment_assumption
+++ a/lib/ubsan.h
@@ -78,6 +78,12 @@ struct invalid_value_data {
 	struct type_descriptor *type;
 };
 
+struct alignment_assumption_data {
+	struct source_location location;
+	struct source_location assumption_location;
+	struct type_descriptor *type;
+};
+
 #if defined(CONFIG_ARCH_SUPPORTS_INT128)
 typedef __int128 s_max;
 typedef unsigned __int128 u_max;
_

* [patch 17/18] mm: hugetlb: fix missing put_page in gather_surplus_pages()
  2021-02-05  2:31 incoming Andrew Morton
                   ` (15 preceding siblings ...)
  2021-02-05  2:32 ` [patch 16/18] ubsan: implement __ubsan_handle_alignment_assumption Andrew Morton
@ 2021-02-05  2:33 ` Andrew Morton
  2021-02-05  2:33 ` [patch 18/18] MAINTAINERS/.mailmap: use my @kernel.org address Andrew Morton
  17 siblings, 0 replies; 19+ messages in thread
From: Andrew Morton @ 2021-02-05  2:33 UTC (permalink / raw)
  To: akpm, linmiaohe, linux-mm, mike.kravetz, mm-commits, songmuchun,
	stable, torvalds

From: Muchun Song <songmuchun@bytedance.com>
Subject: mm: hugetlb: fix missing put_page in gather_surplus_pages()

The VM_BUG_ON_PAGE macro is compiled away when !CONFIG_DEBUG_VM, and in
that configuration its argument generates no code at all, even if the
expression has side effects.  Here that meant put_page_testzero() was
never called, so each surplus page kept the buddy allocator's extra
reference.  Evaluate put_page_testzero() unconditionally and only assert
on its result.
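
For context, a condensed, paraphrased sketch of the relevant macros
(trimmed from include/linux/mmdebug.h and include/linux/build_bug.h):

	#ifdef CONFIG_DEBUG_VM
	#define VM_BUG_ON_PAGE(cond, page)				\
		do {							\
			if (unlikely(cond)) {				\
				dump_page(page, "VM_BUG_ON_PAGE");	\
				BUG();					\
			}						\
		} while (0)
	#else
	/* sizeof() never evaluates its operand, so cond runs no code */
	#define VM_BUG_ON_PAGE(cond, page) BUILD_BUG_ON_INVALID(cond)
	#define BUILD_BUG_ON_INVALID(e) ((void)(sizeof((__force long)(e))))
	#endif

This is also why expressions with side effects must never be passed as
the condition of VM_BUG_ON*() and friends.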

Link: https://lkml.kernel.org/r/20210126031009.96266-1-songmuchun@bytedance.com
Fixes: e5dfacebe4a4 ("mm/hugetlb.c: just use put_page_testzero() instead of page_count()")
Signed-off-by: Muchun Song <songmuchun@bytedance.com>
Reviewed-by: Mike Kravetz <mike.kravetz@oracle.com>
Reviewed-by: Miaohe Lin <linmiaohe@huawei.com>
Cc: <stable@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
---

 mm/hugetlb.c |    5 ++++-
 1 file changed, 4 insertions(+), 1 deletion(-)

--- a/mm/hugetlb.c~mm-hugetlb-fix-missing-put_page-in-gather_surplus_pages
+++ a/mm/hugetlb.c
@@ -2047,13 +2047,16 @@ retry:
 
 	/* Free the needed pages to the hugetlb pool */
 	list_for_each_entry_safe(page, tmp, &surplus_list, lru) {
+		int zeroed;
+
 		if ((--needed) < 0)
 			break;
 		/*
 		 * This page is now managed by the hugetlb allocator and has
 		 * no users -- drop the buddy allocator's reference.
 		 */
-		VM_BUG_ON_PAGE(!put_page_testzero(page), page);
+		zeroed = put_page_testzero(page);
+		VM_BUG_ON_PAGE(!zeroed, page);
 		enqueue_huge_page(h, page);
 	}
 free:
_

* [patch 18/18] MAINTAINERS/.mailmap: use my @kernel.org address
  2021-02-05  2:31 incoming Andrew Morton
                   ` (16 preceding siblings ...)
  2021-02-05  2:33 ` [patch 17/18] mm: hugetlb: fix missing put_page in gather_surplus_pages() Andrew Morton
@ 2021-02-05  2:33 ` Andrew Morton
  17 siblings, 0 replies; 19+ messages in thread
From: Andrew Morton @ 2021-02-05  2:33 UTC (permalink / raw)
  To: akpm, linux-mm, lukas.bulwahn, mm-commits, nathan, ndesaulniers,
	ojeda, sedat.dilek, torvalds

From: Nathan Chancellor <nathan@kernel.org>
Subject: MAINTAINERS/.mailmap: use my @kernel.org address

Use my @kernel.org address for all points of contact so that I am always
reachable.

Link: https://lkml.kernel.org/r/20210126212730.2097108-1-nathan@kernel.org
Signed-off-by: Nathan Chancellor <nathan@kernel.org>
Acked-by: Nick Desaulniers <ndesaulniers@google.com>
Acked-by: Miguel Ojeda <ojeda@kernel.org>
Cc: Sedat Dilek <sedat.dilek@gmail.com>
Cc: Lukas Bulwahn <lukas.bulwahn@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
---

 .mailmap    |    1 +
 MAINTAINERS |    2 +-
 2 files changed, 2 insertions(+), 1 deletion(-)

--- a/.mailmap~maintainers-mailmap-use-my-kernelorg-address
+++ a/.mailmap
@@ -246,6 +246,7 @@ Morten Welinder <welinder@anemone.rentec
 Morten Welinder <welinder@darter.rentec.com>
 Morten Welinder <welinder@troll.com>
 Mythri P K <mythripk@ti.com>
+Nathan Chancellor <nathan@kernel.org> <natechancellor@gmail.com>
 Nguyen Anh Quynh <aquynh@gmail.com>
 Nicolas Ferre <nicolas.ferre@microchip.com> <nicolas.ferre@atmel.com>
 Nicolas Pitre <nico@fluxnic.net> <nicolas.pitre@linaro.org>
--- a/MAINTAINERS~maintainers-mailmap-use-my-kernelorg-address
+++ a/MAINTAINERS
@@ -4304,7 +4304,7 @@ S:	Maintained
 F:	.clang-format
 
 CLANG/LLVM BUILD SUPPORT
-M:	Nathan Chancellor <natechancellor@gmail.com>
+M:	Nathan Chancellor <nathan@kernel.org>
 M:	Nick Desaulniers <ndesaulniers@google.com>
 L:	clang-built-linux@googlegroups.com
 S:	Supported
_
