All of lore.kernel.org
* incoming
@ 2022-04-08 20:08 Andrew Morton
  2022-04-08 20:08   ` Andrew Morton
                   ` (8 more replies)
  0 siblings, 9 replies; 21+ messages in thread
From: Andrew Morton @ 2022-04-08 20:08 UTC (permalink / raw)
  To: Linus Torvalds; +Cc: linux-mm, mm-commits, patches

9 patches, based on d00c50b35101b862c3db270ffeba53a63a1063d9.

Subsystems affected by this patch series:

  mm/migration
  mm/highmem
  lz4
  mm/sparsemem
  mm/mremap
  mm/mempolicy
  mailmap
  mm/memcg
  MAINTAINERS

Subsystem: mm/migration

    Zi Yan <ziy@nvidia.com>:
      mm: migrate: use thp_order instead of HPAGE_PMD_ORDER for new page allocation.

Subsystem: mm/highmem

    Max Filippov <jcmvbkbc@gmail.com>:
      highmem: fix checks in __kmap_local_sched_{in,out}

Subsystem: lz4

    Guo Xuenan <guoxuenan@huawei.com>:
      lz4: fix LZ4_decompress_safe_partial read out of bound

Subsystem: mm/sparsemem

    Waiman Long <longman@redhat.com>:
      mm/sparsemem: fix 'mem_section' will never be NULL gcc 12 warning

Subsystem: mm/mremap

    Paolo Bonzini <pbonzini@redhat.com>:
      mm/mremap.c: avoid pointless invalidate_range_start/end on mremap(old_size=0)

Subsystem: mm/mempolicy

    Miaohe Lin <linmiaohe@huawei.com>:
      mm/mempolicy: fix mpol_new leak in shared_policy_replace

Subsystem: mailmap

    Vasily Averin <vasily.averin@linux.dev>:
      mailmap: update Vasily Averin's email address

Subsystem: mm/memcg

    Andrew Morton <akpm@linux-foundation.org>:
      mm/list_lru.c: revert "mm/list_lru: optimize memcg_reparent_list_lru_node()"

Subsystem: MAINTAINERS

    Tom Rix <trix@redhat.com>:
      MAINTAINERS: add Tom as clang reviewer

 .mailmap                 |    4 ++++
 MAINTAINERS              |    1 +
 include/linux/mmzone.h   |   11 +++++++----
 lib/lz4/lz4_decompress.c |    8 ++++++--
 mm/highmem.c             |    4 ++--
 mm/list_lru.c            |    6 ------
 mm/mempolicy.c           |    3 ++-
 mm/migrate.c             |    2 +-
 mm/mremap.c              |    3 +++
 9 files changed, 26 insertions(+), 16 deletions(-)


^ permalink raw reply	[flat|nested] 21+ messages in thread

* [patch 1/9] mm: migrate: use thp_order instead of HPAGE_PMD_ORDER for new page allocation.
  2022-04-08 20:08 incoming Andrew Morton
@ 2022-04-08 20:08   ` Andrew Morton
  2022-04-08 20:08   ` Andrew Morton
                     ` (7 subsequent siblings)
  8 siblings, 0 replies; 21+ messages in thread
From: Andrew Morton @ 2022-04-08 20:08 UTC (permalink / raw)
  To: willy, naoya.horiguchi, mhocko, ziy, akpm, patches, linux-mm,
	mm-commits, torvalds, akpm

From: Zi Yan <ziy@nvidia.com>
Subject: mm: migrate: use thp_order instead of HPAGE_PMD_ORDER for new page allocation.

Fix a VM_BUG_ON_FOLIO(folio_nr_pages(old) != nr_pages) crash.

With folio support, the system can contain THPs of orders other than
HPAGE_PMD_ORDER, in the form of folios.  Use thp_order() to correctly
determine the source page order during migration.
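
As a minimal sketch of the idea (illustrative only, not the kernel's
thp_order() implementation; the helper name is made up), the allocation
order is taken from the source page itself rather than hard-coded to the
PMD order, so a folio of any order gets a destination of matching size:

    /* compound_order() reports the order of the source folio via its
     * head page; a base (non-compound) page reports order 0.
     */
    static unsigned int migration_src_order(struct page *page)
    {
            return compound_order(compound_head(page));
    }

    order = migration_src_order(page);   /* instead of HPAGE_PMD_ORDER */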

Link: https://lkml.kernel.org/r/20220404165325.1883267-1-zi.yan@sent.com
Link: https://lore.kernel.org/linux-mm/20220404132908.GA785673@u2004/
Fixes: d68eccad3706 ("mm/filemap: Allow large folios to be added to the page cache")
Reported-by: Naoya Horiguchi <naoya.horiguchi@linux.dev>
Signed-off-by: Zi Yan <ziy@nvidia.com>
Cc: Matthew Wilcox <willy@infradead.org>
Cc: Michal Hocko <mhocko@kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
---

 mm/mempolicy.c |    2 +-
 mm/migrate.c   |    2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)

--- a/mm/mempolicy.c~mm-migrate-use-thp_order-instead-of-hpage_pmd_order-for-new-page-allocation
+++ a/mm/mempolicy.c
@@ -1209,7 +1209,7 @@ static struct page *new_page(struct page
 		struct page *thp;
 
 		thp = alloc_hugepage_vma(GFP_TRANSHUGE, vma, address,
-					 HPAGE_PMD_ORDER);
+					 thp_order(page));
 		if (!thp)
 			return NULL;
 		prep_transhuge_page(thp);
--- a/mm/migrate.c~mm-migrate-use-thp_order-instead-of-hpage_pmd_order-for-new-page-allocation
+++ a/mm/migrate.c
@@ -1547,7 +1547,7 @@ struct page *alloc_migration_target(stru
 		 */
 		gfp_mask &= ~__GFP_RECLAIM;
 		gfp_mask |= GFP_TRANSHUGE;
-		order = HPAGE_PMD_ORDER;
+		order = thp_order(page);
 	}
 	zidx = zone_idx(page_zone(page));
 	if (is_highmem_idx(zidx) || zidx == ZONE_MOVABLE)
_

^ permalink raw reply	[flat|nested] 21+ messages in thread

* [patch 2/9] highmem: fix checks in __kmap_local_sched_{in,out}
  2022-04-08 20:08 incoming Andrew Morton
@ 2022-04-08 20:08   ` Andrew Morton
  2022-04-08 20:08   ` Andrew Morton
                     ` (7 subsequent siblings)
  8 siblings, 0 replies; 21+ messages in thread
From: Andrew Morton @ 2022-04-08 20:08 UTC (permalink / raw)
  To: tglx, stable, peterz, jcmvbkbc, akpm, patches, linux-mm,
	mm-commits, torvalds, akpm

From: Max Filippov <jcmvbkbc@gmail.com>
Subject: highmem: fix checks in __kmap_local_sched_{in,out}

When CONFIG_DEBUG_KMAP_LOCAL is enabled, __kmap_local_sched_{in,out} check
that the even slots in tsk->kmap_ctrl.pteval are unmapped.  The slots are
initialized with the value 0, but the check is done with pte_none.  A zero
pte, however, does not necessarily mean that pte_none will return true;
e.g. on xtensa it returns false, resulting in the following runtime
warnings:

 WARNING: CPU: 0 PID: 101 at mm/highmem.c:627 __kmap_local_sched_out+0x51/0x108
 CPU: 0 PID: 101 Comm: touch Not tainted 5.17.0-rc7-00010-gd3a1cdde80d2-dirty #13
 Call Trace:
   dump_stack+0xc/0x40
   __warn+0x8f/0x174
   warn_slowpath_fmt+0x48/0xac
   __kmap_local_sched_out+0x51/0x108
   __schedule+0x71a/0x9c4
   preempt_schedule_irq+0xa0/0xe0
   common_exception_return+0x5c/0x93
   do_wp_page+0x30e/0x330
   handle_mm_fault+0xa70/0xc3c
   do_page_fault+0x1d8/0x3c4
   common_exception+0x7f/0x7f

 WARNING: CPU: 0 PID: 101 at mm/highmem.c:664 __kmap_local_sched_in+0x50/0xe0
 CPU: 0 PID: 101 Comm: touch Tainted: G        W         5.17.0-rc7-00010-gd3a1cdde80d2-dirty #13
 Call Trace:
   dump_stack+0xc/0x40
   __warn+0x8f/0x174
   warn_slowpath_fmt+0x48/0xac
   __kmap_local_sched_in+0x50/0xe0
   finish_task_switch$isra$0+0x1ce/0x2f8
   __schedule+0x86e/0x9c4
   preempt_schedule_irq+0xa0/0xe0
   common_exception_return+0x5c/0x93
   do_wp_page+0x30e/0x330
   handle_mm_fault+0xa70/0xc3c
   do_page_fault+0x1d8/0x3c4
   common_exception+0x7f/0x7f

Fix it by replacing !pte_none(pteval) with pte_val(pteval) != 0.
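
For illustration, a hypothetical architecture definition (a sketch, not
the actual xtensa code) shows why a zero-initialized slot can fail the
old check while passing the new one:

  /* Hypothetical encoding: "no mapping" is a non-zero bit pattern, so an
   * all-zero pte is *not* recognized by pte_none().
   */
  #define _PAGE_NONE_BITS	0x06
  #define pte_none(pte)	(pte_val(pte) == _PAGE_NONE_BITS)

  static void check_guard_slot(void)
  {
  	pte_t pteval = __pte(0);		/* freshly initialized kmap slot */

  	WARN_ON_ONCE(!pte_none(pteval));	/* old check: fires spuriously   */
  	WARN_ON_ONCE(pte_val(pteval) != 0);	/* new check: stays quiet        */
  }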

Link: https://lkml.kernel.org/r/20220403235159.3498065-1-jcmvbkbc@gmail.com
Fixes: 5fbda3ecd14a ("sched: highmem: Store local kmaps in task struct")
Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
Cc: "Peter Zijlstra (Intel)" <peterz@infradead.org>
Cc: <stable@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
---

 mm/highmem.c |    4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

--- a/mm/highmem.c~highmem-fix-checks-in-__kmap_local_sched_inout
+++ a/mm/highmem.c
@@ -624,7 +624,7 @@ void __kmap_local_sched_out(void)
 
 		/* With debug all even slots are unmapped and act as guard */
 		if (IS_ENABLED(CONFIG_DEBUG_KMAP_LOCAL) && !(i & 0x01)) {
-			WARN_ON_ONCE(!pte_none(pteval));
+			WARN_ON_ONCE(pte_val(pteval) != 0);
 			continue;
 		}
 		if (WARN_ON_ONCE(pte_none(pteval)))
@@ -661,7 +661,7 @@ void __kmap_local_sched_in(void)
 
 		/* With debug all even slots are unmapped and act as guard */
 		if (IS_ENABLED(CONFIG_DEBUG_KMAP_LOCAL) && !(i & 0x01)) {
-			WARN_ON_ONCE(!pte_none(pteval));
+			WARN_ON_ONCE(pte_val(pteval) != 0);
 			continue;
 		}
 		if (WARN_ON_ONCE(pte_none(pteval)))
_

^ permalink raw reply	[flat|nested] 21+ messages in thread

* [patch 3/9] lz4: fix LZ4_decompress_safe_partial read out of bound
  2022-04-08 20:08 incoming Andrew Morton
@ 2022-04-08 20:08   ` Andrew Morton
  2022-04-08 20:08   ` Andrew Morton
                     ` (7 subsequent siblings)
  8 siblings, 0 replies; 21+ messages in thread
From: Andrew Morton @ 2022-04-08 20:08 UTC (permalink / raw)
  To: terrelln, stable, hsiangkao, cy.fan, cyan, guoxuenan, akpm,
	patches, linux-mm, mm-commits, torvalds, akpm

From: Guo Xuenan <guoxuenan@huawei.com>
Subject: lz4: fix LZ4_decompress_safe_partial read out of bound

When partialDecoding, it is EOF if we've either filled the output buffer
or can't proceed with reading an offset for the following match.

In some extreme corner cases, when the compressed data is suitably
corrupted, a UAF will occur.  As reported by KASAN [1],
LZ4_decompress_safe_partial may lead to an out-of-bounds read during
decoding.  lz4 upstream has fixed it [2] and this issue has been
discussed here [3] before.

The current decompression routine was ported from lz4 v1.8.3; bumping
lib/lz4 to v1.9.x is certainly a huge amount of work to be done later,
so we'd better fix this first.

[1] https://lore.kernel.org/all/000000000000830d1205cf7f0477@google.com/
[2] https://github.com/lz4/lz4/commit/c5d6f8a8be3927c0bec91bcc58667a6cfad244ad#
[3] https://lore.kernel.org/all/CC666AE8-4CA4-4951-B6FB-A2EFDE3AC03B@fb.com/
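
As a rough sketch of why the extra bound check matters (assuming the
usual LZ4 sequence layout, where a 2-byte little-endian match offset
follows the literals; ip and iend follow the decoder's convention of
input cursor and end of the compressed buffer):

  /* Sketch, not the kernel's code: with fewer than two input bytes
   * left, the offset of the next match cannot be read safely, so
   * partial decoding must treat this as end-of-input.
   */
  if (partialDecoding && ip >= iend - 2)
          break;                  /* stop instead of reading past iend  */
  offset = LZ4_readLE16(ip);      /* needs ip and ip + 1 to be in bounds */
  ip += 2;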

Link: https://lkml.kernel.org/r/20211111105048.2006070-1-guoxuenan@huawei.com
Reported-by: syzbot+63d688f1d899c588fb71@syzkaller.appspotmail.com
Signed-off-by: Guo Xuenan <guoxuenan@huawei.com>
Reviewed-by: Nick Terrell <terrelln@fb.com>
Acked-by: Gao Xiang <hsiangkao@linux.alibaba.com>
Cc: Yann Collet <cyan@fb.com>
Cc: Chengyang Fan <cy.fan@huawei.com>
Cc: <stable@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
---

 lib/lz4/lz4_decompress.c |    8 ++++++--
 1 file changed, 6 insertions(+), 2 deletions(-)

--- a/lib/lz4/lz4_decompress.c~lz4-fix-lz4_decompress_safe_partial-read-out-of-bound
+++ a/lib/lz4/lz4_decompress.c
@@ -271,8 +271,12 @@ static FORCE_INLINE int LZ4_decompress_g
 			ip += length;
 			op += length;
 
-			/* Necessarily EOF, due to parsing restrictions */
-			if (!partialDecoding || (cpy == oend))
+			/* Necessarily EOF when !partialDecoding.
+			 * When partialDecoding, it is EOF if we've either
+			 * filled the output buffer or
+			 * can't proceed with reading an offset for following match.
+			 */
+			if (!partialDecoding || (cpy == oend) || (ip >= (iend - 2)))
 				break;
 		} else {
 			/* may overwrite up to WILDCOPYLENGTH beyond cpy */
_

^ permalink raw reply	[flat|nested] 21+ messages in thread

* [patch 4/9] mm/sparsemem: fix 'mem_section' will never be NULL gcc 12 warning
  2022-04-08 20:08 incoming Andrew Morton
@ 2022-04-08 20:09   ` Andrew Morton
  2022-04-08 20:08   ` Andrew Morton
                     ` (7 subsequent siblings)
  8 siblings, 0 replies; 21+ messages in thread
From: Andrew Morton @ 2022-04-08 20:09 UTC (permalink / raw)
  To: mingo, kirill.shutemov, jforbes, aquini, longman, akpm, patches,
	linux-mm, mm-commits, torvalds, akpm

From: Waiman Long <longman@redhat.com>
Subject: mm/sparsemem: fix 'mem_section' will never be NULL gcc 12 warning

The gcc 12 compiler reports a "'mem_section' will never be NULL" warning
on the following code:

    static inline struct mem_section *__nr_to_section(unsigned long nr)
    {
    #ifdef CONFIG_SPARSEMEM_EXTREME
        if (!mem_section)
                return NULL;
    #endif
        if (!mem_section[SECTION_NR_TO_ROOT(nr)])
                return NULL;
       :

It happens with CONFIG_SPARSEMEM_EXTREME off.  The mem_section definition
is

    #ifdef CONFIG_SPARSEMEM_EXTREME
    extern struct mem_section **mem_section;
    #else
    extern struct mem_section mem_section[NR_SECTION_ROOTS][SECTIONS_PER_ROOT];
    #endif

In the !CONFIG_SPARSEMEM_EXTREME case, mem_section
is a static 2-dimensional array and so the check
"!mem_section[SECTION_NR_TO_ROOT(nr)]" doesn't make sense.

Fix this warning by moving the "!mem_section[SECTION_NR_TO_ROOT(nr)]"
check up inside the CONFIG_SPARSEMEM_EXTREME block and adding an explicit
NR_SECTION_ROOTS check to make sure that there is no out-of-bounds array
access.

Link: https://lkml.kernel.org/r/20220331180246.2746210-1-longman@redhat.com
Fixes: 3e347261a80b ("sparsemem extreme implementation")
Signed-off-by: Waiman Long <longman@redhat.com>
Reported-by: Justin Forbes <jforbes@redhat.com>
Cc: "Kirill A . Shutemov" <kirill.shutemov@linux.intel.com>
Cc: Ingo Molnar <mingo@kernel.org>
Cc: Rafael Aquini <aquini@redhat.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
---

 include/linux/mmzone.h |   11 +++++++----
 1 file changed, 7 insertions(+), 4 deletions(-)

--- a/include/linux/mmzone.h~mm-sparsemem-fix-mem_section-will-never-be-null-gcc-12-warning
+++ a/include/linux/mmzone.h
@@ -1397,13 +1397,16 @@ static inline unsigned long *section_to_
 
 static inline struct mem_section *__nr_to_section(unsigned long nr)
 {
+	unsigned long root = SECTION_NR_TO_ROOT(nr);
+
+	if (unlikely(root >= NR_SECTION_ROOTS))
+		return NULL;
+
 #ifdef CONFIG_SPARSEMEM_EXTREME
-	if (!mem_section)
+	if (!mem_section || !mem_section[root])
 		return NULL;
 #endif
-	if (!mem_section[SECTION_NR_TO_ROOT(nr)])
-		return NULL;
-	return &mem_section[SECTION_NR_TO_ROOT(nr)][nr & SECTION_ROOT_MASK];
+	return &mem_section[root][nr & SECTION_ROOT_MASK];
 }
 extern size_t mem_section_usage_size(void);
 
_

^ permalink raw reply	[flat|nested] 21+ messages in thread

* [patch 5/9] mm/mremap.c: avoid pointless invalidate_range_start/end on mremap(old_size=0)
  2022-04-08 20:08 incoming Andrew Morton
@ 2022-04-08 20:09   ` Andrew Morton
  2022-04-08 20:08   ` Andrew Morton
                     ` (7 subsequent siblings)
  8 siblings, 0 replies; 21+ messages in thread
From: Andrew Morton @ 2022-04-08 20:09 UTC (permalink / raw)
  To: stable, seanjc, pbonzini, akpm, patches, linux-mm, mm-commits,
	torvalds, akpm

From: Paolo Bonzini <pbonzini@redhat.com>
Subject: mm/mremap.c: avoid pointless invalidate_range_start/end on mremap(old_size=0)

If an mremap() syscall with old_size=0 ends up in move_page_tables(), it
will call invalidate_range_start()/invalidate_range_end() unnecessarily,
i.e.  with an empty range.

This causes a WARN in KVM's mmu_notifier.  In the past, empty ranges have
been diagnosed to be off-by-one bugs, hence the WARNing.  Given the low
(so far) number of unique reports, the benefits of detecting more buggy
callers seem to outweigh the cost of having to fix cases such as this one,
where userspace is doing something silly.  In this particular case, an
early return from move_page_tables() is enough to fix the issue.
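
For context, a hypothetical userspace trigger (a sketch, not the syzbot
reproducer) that reaches move_page_tables() with len == 0 and hence an
empty invalidation range:

  /* mremap() with old_size == 0 creates a second mapping of the same
   * shared pages; no page tables actually need to be moved.
   */
  void *old = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                   MAP_SHARED | MAP_ANONYMOUS, -1, 0);
  void *dup = mremap(old, 0, 4096, MREMAP_MAYMOVE);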

Link: https://lkml.kernel.org/r/20220329173155.172439-1-pbonzini@redhat.com
Reported-by: syzbot+6bde52d89cfdf9f61425@syzkaller.appspotmail.com
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Cc: Sean Christopherson <seanjc@google.com>
Cc: <stable@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
---

 mm/mremap.c |    3 +++
 1 file changed, 3 insertions(+)

--- a/mm/mremap.c~mm-avoid-pointless-invalidate_range_start-end-on-mremapold_size=0
+++ a/mm/mremap.c
@@ -486,6 +486,9 @@ unsigned long move_page_tables(struct vm
 	pmd_t *old_pmd, *new_pmd;
 	pud_t *old_pud, *new_pud;
 
+	if (!len)
+		return 0;
+
 	old_end = old_addr + len;
 	flush_cache_range(vma, old_addr, old_end);
 
_

^ permalink raw reply	[flat|nested] 21+ messages in thread

* [patch 6/9] mm/mempolicy: fix mpol_new leak in shared_policy_replace
  2022-04-08 20:08 incoming Andrew Morton
@ 2022-04-08 20:09   ` Andrew Morton
  2022-04-08 20:08   ` Andrew Morton
                     ` (7 subsequent siblings)
  8 siblings, 0 replies; 21+ messages in thread
From: Andrew Morton @ 2022-04-08 20:09 UTC (permalink / raw)
  To: stable, mhocko, mgorman, kosaki.motohiro, linmiaohe, akpm,
	patches, linux-mm, mm-commits, torvalds, akpm

From: Miaohe Lin <linmiaohe@huawei.com>
Subject: mm/mempolicy: fix mpol_new leak in shared_policy_replace

If mpol_new is allocated but not used in the restart loop, it will be
freed via mpol_put before returning to the caller.  But its refcnt is not
initialized yet, so mpol_put cannot do the right thing and may leak the
unused mpol_new.  This happens if the mempolicy of the shared shmem file
is updated while sp->lock has been dropped during the memory allocation.
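
A short sketch of why the missing initialization leaks (the release path
below is an assumed simplification, not the exact mempolicy code):

  /* The policy is returned to the slab only when its reference count
   * drops to zero.
   */
  void mpol_put(struct mempolicy *pol)
  {
          if (pol && atomic_dec_and_test(&pol->refcnt))
                  kmem_cache_free(policy_cache, pol);
  }

  /* mpol_new comes straight from kmem_cache_alloc(), so its refcnt holds
   * stale slab data; the put above may never see the count reach zero
   * and the object may never be freed.  Setting refcnt to 1 right after
   * allocation makes the later mpol_put() balance out.
   */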

This issue could be triggered easily with the code snippet below if many
processes are doing the same work at the same time:

  shmid = shmget((key_t)5566, 1024 * PAGE_SIZE, 0666|IPC_CREAT);
  shm = shmat(shmid, 0, 0);
  loop many times {
    mbind(shm, 1024 * PAGE_SIZE, MPOL_LOCAL, mask, maxnode, 0);
    mbind(shm + 128 * PAGE_SIZE, 128 * PAGE_SIZE, MPOL_DEFAULT, mask,
          maxnode, 0);
  }

Link: https://lkml.kernel.org/r/20220329111416.27954-1-linmiaohe@huawei.com
Fixes: 42288fe366c4 ("mm: mempolicy: Convert shared_policy mutex to spinlock")
Signed-off-by: Miaohe Lin <linmiaohe@huawei.com>
Acked-by: Michal Hocko <mhocko@suse.com>
Cc: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
Cc: Mel Gorman <mgorman@suse.de>
Cc: <stable@vger.kernel.org>	[3.8]
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
---

 mm/mempolicy.c |    1 +
 1 file changed, 1 insertion(+)

--- a/mm/mempolicy.c~mm-mempolicy-fix-mpol_new-leak-in-shared_policy_replace
+++ a/mm/mempolicy.c
@@ -2733,6 +2733,7 @@ alloc_new:
 	mpol_new = kmem_cache_alloc(policy_cache, GFP_KERNEL);
 	if (!mpol_new)
 		goto err_out;
+	atomic_set(&mpol_new->refcnt, 1);
 	goto restart;
 }
 
_

^ permalink raw reply	[flat|nested] 21+ messages in thread

* [patch 7/9] mailmap: update Vasily Averin's email address
  2022-04-08 20:08 incoming Andrew Morton
@ 2022-04-08 20:09   ` Andrew Morton
  2022-04-08 20:08   ` Andrew Morton
                     ` (7 subsequent siblings)
  8 siblings, 0 replies; 21+ messages in thread
From: Andrew Morton @ 2022-04-08 20:09 UTC (permalink / raw)
  To: vasily.averin, akpm, patches, linux-mm, mm-commits, torvalds, akpm

From: Vasily Averin <vasily.averin@linux.dev>
Subject: mailmap: update Vasily Averin's email address

I'm moving to a @linux.dev account. Map my old addresses.

Link: https://lkml.kernel.org/r/737c7c2b-cdab-63ee-be90-cb33316c9657@linux.dev
Signed-off-by: Vasily Averin <vasily.averin@linux.dev>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
---

 .mailmap |    4 ++++
 1 file changed, 4 insertions(+)

--- a/.mailmap~mailmap-update-vasily-averins-email-address
+++ a/.mailmap
@@ -391,6 +391,10 @@ Uwe Kleine-König <ukleinek@strlen.de>
 Uwe Kleine-König <ukl@pengutronix.de>
 Uwe Kleine-König <Uwe.Kleine-Koenig@digi.com>
 Valdis Kletnieks <Valdis.Kletnieks@vt.edu>
+Vasily Averin <vasily.averin@linux.dev> <vvs@virtuozzo.com>
+Vasily Averin <vasily.averin@linux.dev> <vvs@openvz.org>
+Vasily Averin <vasily.averin@linux.dev> <vvs@parallels.com>
+Vasily Averin <vasily.averin@linux.dev> <vvs@sw.ru>
 Vinod Koul <vkoul@kernel.org> <vinod.koul@intel.com>
 Vinod Koul <vkoul@kernel.org> <vinod.koul@linux.intel.com>
 Vinod Koul <vkoul@kernel.org> <vkoul@infradead.org>
_

^ permalink raw reply	[flat|nested] 21+ messages in thread

* [patch 8/9] mm/list_lru.c: revert "mm/list_lru: optimize memcg_reparent_list_lru_node()"
  2022-04-08 20:08 incoming Andrew Morton
@ 2022-04-08 20:09   ` Andrew Morton
  2022-04-08 20:08   ` Andrew Morton
                     ` (7 subsequent siblings)
  8 siblings, 0 replies; 21+ messages in thread
From: Andrew Morton @ 2022-04-08 20:09 UTC (permalink / raw)
  To: songmuchun, shakeelb, roman.gushchin, mhocko, longman, hannes,
	akpm, patches, linux-mm, mm-commits, torvalds, akpm

From: Andrew Morton <akpm@linux-foundation.org>
Subject: mm/list_lru.c: revert "mm/list_lru: optimize memcg_reparent_list_lru_node()"

405cc51fc1049c73 ("mm/list_lru: optimize memcg_reparent_list_lru_node()")
has subtle races which are proving ugly to fix.  Revert the original
optimization.  If quantitative testing indicates that we have a
significant problem here then other implementations can be looked at.
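
For background, one plausible interleaving of the kind being guarded
against (illustrative only; the races are not spelled out here, so this
is an assumed example of why a lockless emptiness check is fragile):

  /*
   *   memcg_reparent_list_lru_node()          list_lru_add()
   *   -------------------------------         --------------------------
   *   READ_ONCE(nlru->nr_items) == 0
   *                                            add item to child's list
   *                                            nlru->nr_items++
   *   return (splice to parent skipped)
   *
   * The newly added item would then remain on the child memcg's list
   * after reparenting has supposedly finished.
   */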

Fixes: 405cc51fc1049c73 ("mm/list_lru: optimize memcg_reparent_list_lru_node()")
Acked-by: Shakeel Butt <shakeelb@google.com>
Reviewed-by: Muchun Song <songmuchun@bytedance.com>
Acked-by: Michal Hocko <mhocko@suse.com>
Cc: Waiman Long <longman@redhat.com>
Cc: Roman Gushchin <roman.gushchin@linux.dev>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
---

 mm/list_lru.c |    6 ------
 1 file changed, 6 deletions(-)

--- a/mm/list_lru.c~mm-list_lruc-revert-mm-list_lru-optimize-memcg_reparent_list_lru_node
+++ a/mm/list_lru.c
@@ -395,12 +395,6 @@ static void memcg_reparent_list_lru_node
 	struct list_lru_one *src, *dst;
 
 	/*
-	 * If there is no lru entry in this nlru, we can skip it immediately.
-	 */
-	if (!READ_ONCE(nlru->nr_items))
-		return;
-
-	/*
 	 * Since list_lru_{add,del} may be called under an IRQ-safe lock,
 	 * we have to use IRQ-safe primitives here to avoid deadlock.
 	 */
_

^ permalink raw reply	[flat|nested] 21+ messages in thread

* [patch 9/9] MAINTAINERS: add Tom as clang reviewer
  2022-04-08 20:08 incoming Andrew Morton
@ 2022-04-08 20:09   ` Andrew Morton
  2022-04-08 20:08   ` Andrew Morton
                     ` (7 subsequent siblings)
  8 siblings, 0 replies; 21+ messages in thread
From: Andrew Morton @ 2022-04-08 20:09 UTC (permalink / raw)
  To: ndesaulniers, nathan, trix, akpm, patches, linux-mm, mm-commits,
	torvalds, akpm

From: Tom Rix <trix@redhat.com>
Subject: MAINTAINERS: add Tom as clang reviewer

I have been helping with build breaks and other clang things and would
like to help with the reviews.

Link: https://lkml.kernel.org/r/20220407175715.3378998-1-trix@redhat.com
Signed-off-by: Tom Rix <trix@redhat.com>
Acked-by: Nathan Chancellor <nathan@kernel.org>
Acked-by: Nick Desaulniers <ndesaulniers@google.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
---

 MAINTAINERS |    1 +
 1 file changed, 1 insertion(+)

--- a/MAINTAINERS~maintainers-add-self-as-clang-reviewer
+++ a/MAINTAINERS
@@ -4791,6 +4791,7 @@ F:	.clang-format
 CLANG/LLVM BUILD SUPPORT
 M:	Nathan Chancellor <nathan@kernel.org>
 M:	Nick Desaulniers <ndesaulniers@google.com>
+R:	Tom Rix <trix@redhat.com>
 L:	llvm@lists.linux.dev
 S:	Supported
 W:	https://clangbuiltlinux.github.io/
_

^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: [patch 1/9] mm: migrate: use thp_order instead of HPAGE_PMD_ORDER for new page allocation.
  2022-04-08 20:08   ` Andrew Morton
  (?)
@ 2022-04-08 20:10   ` Matthew Wilcox
  -1 siblings, 0 replies; 21+ messages in thread
From: Matthew Wilcox @ 2022-04-08 20:10 UTC (permalink / raw)
  To: Andrew Morton
  Cc: naoya.horiguchi, mhocko, ziy, patches, linux-mm, mm-commits, torvalds

On Fri, Apr 08, 2022 at 01:08:52PM -0700, Andrew Morton wrote:
> From: Zi Yan <ziy@nvidia.com>
> Subject: mm: migrate: use thp_order instead of HPAGE_PMD_ORDER for new page allocation.
> 
> Fix a VM_BUG_ON_FOLIO(folio_nr_pages(old) != nr_pages) crash.
> 
> With folios support, it is possible to have other than HPAGE_PMD_ORDER
> THPs, in the form of folios, in the system.  Use thp_order() to correctly
> determine the source page order during migration.

This one's now obsolete after the folio pull request Linus merged
earlier today.

> Link: https://lkml.kernel.org/r/20220404165325.1883267-1-zi.yan@sent.com
> Link: https://lore.kernel.org/linux-mm/20220404132908.GA785673@u2004/
> Fixes: d68eccad3706 ("mm/filemap: Allow large folios to be added to the page cache")
> Reported-by: Naoya Horiguchi <naoya.horiguchi@linux.dev>
> Signed-off-by: Zi Yan <ziy@nvidia.com>
> Cc: Matthew Wilcox <willy@infradead.org>
> Cc: Michal Hocko <mhocko@kernel.org>
> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
> ---
> 
>  mm/mempolicy.c |    2 +-
>  mm/migrate.c   |    2 +-
>  2 files changed, 2 insertions(+), 2 deletions(-)
> 
> --- a/mm/mempolicy.c~mm-migrate-use-thp_order-instead-of-hpage_pmd_order-for-new-page-allocation
> +++ a/mm/mempolicy.c
> @@ -1209,7 +1209,7 @@ static struct page *new_page(struct page
>  		struct page *thp;
>  
>  		thp = alloc_hugepage_vma(GFP_TRANSHUGE, vma, address,
> -					 HPAGE_PMD_ORDER);
> +					 thp_order(page));
>  		if (!thp)
>  			return NULL;
>  		prep_transhuge_page(thp);
> --- a/mm/migrate.c~mm-migrate-use-thp_order-instead-of-hpage_pmd_order-for-new-page-allocation
> +++ a/mm/migrate.c
> @@ -1547,7 +1547,7 @@ struct page *alloc_migration_target(stru
>  		 */
>  		gfp_mask &= ~__GFP_RECLAIM;
>  		gfp_mask |= GFP_TRANSHUGE;
> -		order = HPAGE_PMD_ORDER;
> +		order = thp_order(page);
>  	}
>  	zidx = zone_idx(page_zone(page));
>  	if (is_highmem_idx(zidx) || zidx == ZONE_MOVABLE)
> _
> 

^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: [patch 4/9] mm/sparsemem: fix 'mem_section' will never be NULL gcc 12 warning
  2022-04-08 20:09   ` Andrew Morton
  (?)
@ 2022-04-11  9:00   ` Oscar Salvador
  -1 siblings, 0 replies; 21+ messages in thread
From: Oscar Salvador @ 2022-04-11  9:00 UTC (permalink / raw)
  To: Andrew Morton
  Cc: mingo, kirill.shutemov, jforbes, aquini, longman, patches,
	linux-mm, mm-commits, torvalds

On Fri, Apr 08, 2022 at 01:09:01PM -0700, Andrew Morton wrote:
> From: Waiman Long <longman@redhat.com>
> Subject: mm/sparsemem: fix 'mem_section' will never be NULL gcc 12 warning
> 
 ...
> Link: https://lkml.kernel.org/r/20220331180246.2746210-1-longman@redhat.com
> Fixes: 3e347261a80b ("sparsemem extreme implementation")
> Signed-off-by: Waiman Long <longman@redhat.com>
> Reported-by: Justin Forbes <jforbes@redhat.com>
> Cc: "Kirill A . Shutemov" <kirill.shutemov@linux.intel.com>
> Cc: Ingo Molnar <mingo@kernel.org>
> Cc: Rafael Aquini <aquini@redhat.com>
> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>

Reviewed-by: Oscar Salvador <osalvador@suse.de>


-- 
Oscar Salvador
SUSE Labs

^ permalink raw reply	[flat|nested] 21+ messages in thread

end of thread, other threads:[~2022-04-11  9:00 UTC | newest]

Thread overview: 21+ messages
2022-04-08 20:08 incoming Andrew Morton
2022-04-08 20:08 ` [patch 1/9] mm: migrate: use thp_order instead of HPAGE_PMD_ORDER for new page allocation Andrew Morton
2022-04-08 20:10   ` Matthew Wilcox
2022-04-08 20:08 ` [patch 2/9] highmem: fix checks in __kmap_local_sched_{in,out} Andrew Morton
2022-04-08 20:08 ` [patch 3/9] lz4: fix LZ4_decompress_safe_partial read out of bound Andrew Morton
2022-04-08 20:09 ` [patch 4/9] mm/sparsemem: fix 'mem_section' will never be NULL gcc 12 warning Andrew Morton
2022-04-11  9:00   ` Oscar Salvador
2022-04-08 20:09 ` [patch 5/9] mm/mremap.c: avoid pointless invalidate_range_start/end on mremap(old_size=0) Andrew Morton
2022-04-08 20:09 ` [patch 6/9] mm/mempolicy: fix mpol_new leak in shared_policy_replace Andrew Morton
2022-04-08 20:09 ` [patch 7/9] mailmap: update Vasily Averin's email address Andrew Morton
2022-04-08 20:09 ` [patch 8/9] mm/list_lru.c: revert "mm/list_lru: optimize memcg_reparent_list_lru_node()" Andrew Morton
2022-04-08 20:09 ` [patch 9/9] MAINTAINERS: add Tom as clang reviewer Andrew Morton
