* incoming
@ 2021-01-12 23:48 Andrew Morton
  2021-01-12 23:49 ` [patch 01/10] mm, slub: consider rest of partial list if acquire_slab() fails Andrew Morton
                   ` (10 more replies)
  0 siblings, 11 replies; 12+ messages in thread
From: Andrew Morton @ 2021-01-12 23:48 UTC (permalink / raw)
  To: Linus Torvalds; +Cc: linux-mm, mm-commits

10 patches, based on e609571b5ffa3528bf85292de1ceaddac342bc1c.

Subsystems affected by this patch series:

  mm/slub
  mm/pagealloc
  mm/memcg
  mm/kasan
  mm/vmalloc
  mm/migration
  mm/hugetlb
  MAINTAINERS
  mm/memory-failure
  mm/process_vm_access

Subsystem: mm/slub

    Jann Horn <jannh@google.com>:
      mm, slub: consider rest of partial list if acquire_slab() fails

Subsystem: mm/pagealloc

    Hailong liu <liu.hailong6@zte.com.cn>:
      mm/page_alloc: add a missing mm_page_alloc_zone_locked() tracepoint

Subsystem: mm/memcg

    Hugh Dickins <hughd@google.com>:
      mm/memcontrol: fix warning in mem_cgroup_page_lruvec()

Subsystem: mm/kasan

    Hailong Liu <liu.hailong6@zte.com.cn>:
      arm/kasan: fix the array size of kasan_early_shadow_pte[]

Subsystem: mm/vmalloc

    Miaohe Lin <linmiaohe@huawei.com>:
      mm/vmalloc.c: fix potential memory leak

Subsystem: mm/migration

    Jan Stancek <jstancek@redhat.com>:
      mm: migrate: initialize err in do_migrate_pages

Subsystem: mm/hugetlb

    Miaohe Lin <linmiaohe@huawei.com>:
      mm/hugetlb: fix potential missing huge page size info

Subsystem: MAINTAINERS

    Vlastimil Babka <vbabka@suse.cz>:
      MAINTAINERS: add Vlastimil as slab allocators maintainer

Subsystem: mm/memory-failure

    Oscar Salvador <osalvador@suse.de>:
      mm,hwpoison: fix printing of page flags

Subsystem: mm/process_vm_access

    Andrew Morton <akpm@linux-foundation.org>:
      mm/process_vm_access.c: include compat.h

 MAINTAINERS                |    1 +
 include/linux/kasan.h      |    6 +++++-
 include/linux/memcontrol.h |    2 +-
 mm/hugetlb.c               |    2 +-
 mm/kasan/init.c            |    3 ++-
 mm/memory-failure.c        |    2 +-
 mm/mempolicy.c             |    2 +-
 mm/page_alloc.c            |   31 ++++++++++++++++---------------
 mm/process_vm_access.c     |    1 +
 mm/slub.c                  |    2 +-
 mm/vmalloc.c               |    4 +++-
 11 files changed, 33 insertions(+), 23 deletions(-)




* [patch 01/10] mm, slub: consider rest of partial list if acquire_slab() fails
  2021-01-12 23:48 incoming Andrew Morton
@ 2021-01-12 23:49 ` Andrew Morton
  2021-01-12 23:49 ` [patch 02/10] mm/page_alloc: add a missing mm_page_alloc_zone_locked() tracepoint Andrew Morton
                   ` (9 subsequent siblings)
  10 siblings, 0 replies; 12+ messages in thread
From: Andrew Morton @ 2021-01-12 23:49 UTC (permalink / raw)
  To: akpm, cl, iamjoonsoo.kim, jannh, linux-mm, mm-commits, penberg,
	rientjes, torvalds

From: Jann Horn <jannh@google.com>
Subject: mm, slub: consider rest of partial list if acquire_slab() fails

acquire_slab() fails if there is contention on the freelist of the page
(probably because some other CPU is concurrently freeing an object from
the page).  In that case, it might make sense to look for a different page
(since there might be more remote frees to the page from other CPUs, and
we don't want contention on struct page).

However, the current code accidentally stops looking at the partial list
completely in that case.  Especially on kernels without CONFIG_NUMA set,
this means that get_partial() fails and new_slab_objects() falls back to
new_slab(), allocating new pages.  This could lead to an unnecessary
increase in memory fragmentation.
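
For context, here is a simplified sketch of the partial-list walk in
get_partial_node() (illustrative only, with locking and bookkeeping
elided; not the exact kernel source).  It shows why "break" abandons the
whole partial list while "continue" merely skips the contended page:

    /* Simplified sketch of get_partial_node()'s loop, for illustration. */
    list_for_each_entry_safe(page, page2, &n->partial, slab_list) {
            void *t = acquire_slab(s, n, page, object == NULL, &objects);

            if (!t)
                    /*
                     * The freelist cmpxchg raced with a remote free.
                     * "break" gave up on every remaining partial page;
                     * "continue" just moves on to the next one.
                     */
                    continue;

            available += objects;
            /* ... keep the first acquired page, stop once enough slabs ... */
    }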

Link: https://lkml.kernel.org/r/20201228130853.1871516-1-jannh@google.com
Fixes: 7ced37197196 ("slub: Acquire_slab() avoid loop")
Signed-off-by: Jann Horn <jannh@google.com>
Acked-by: David Rientjes <rientjes@google.com>
Acked-by: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Cc: Christoph Lameter <cl@linux.com>
Cc: Pekka Enberg <penberg@kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
---

 mm/slub.c |    2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

--- a/mm/slub.c~mm-slub-consider-rest-of-partial-list-if-acquire_slab-fails
+++ a/mm/slub.c
@@ -1973,7 +1973,7 @@ static void *get_partial_node(struct kme
 
 		t = acquire_slab(s, n, page, object == NULL, &objects);
 		if (!t)
-			break;
+			continue; /* cmpxchg raced */
 
 		available += objects;
 		if (!object) {
_



* [patch 02/10] mm/page_alloc: add a missing mm_page_alloc_zone_locked() tracepoint
  2021-01-12 23:48 incoming Andrew Morton
  2021-01-12 23:49 ` [patch 01/10] mm, slub: consider rest of partial list if acquire_slab() fails Andrew Morton
@ 2021-01-12 23:49 ` Andrew Morton
  2021-01-12 23:49 ` [patch 03/10] mm/memcontrol: fix warning in mem_cgroup_page_lruvec() Andrew Morton
                   ` (8 subsequent siblings)
  10 siblings, 0 replies; 12+ messages in thread
From: Andrew Morton @ 2021-01-12 23:49 UTC (permalink / raw)
  To: akpm, linux-mm, liu.hailong6, mm-commits, torvalds

From: Hailong liu <liu.hailong6@zte.com.cn>
Subject: mm/page_alloc: add a missing mm_page_alloc_zone_locked() tracepoint

The trace point *trace_mm_page_alloc_zone_locked()* in __rmqueue() does
not currently cover all branches.  Add the missing tracepoint and check
that the page is non-NULL before tracing it.
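
As an aside, a minimal illustration of the IS_ENABLED() idiom adopted
here (a sketch, not the full __rmqueue()): unlike an #ifdef block, the
disabled branch is still parsed and type-checked and is then removed by
dead-code elimination, so the shared "out:" label is always referenced by
a goto regardless of configuration, which is presumably the warning the
IS_ENABLED() conversion suppresses:

    /* Sketch of the IS_ENABLED() pattern, for illustration only. */
    if (IS_ENABLED(CONFIG_CMA)) {
            /* compiled away entirely when CONFIG_CMA is not set */
            page = __rmqueue_cma_fallback(zone, order);
            if (page)
                    goto out;       /* traced at "out:" like every other branch */
    }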

[akpm@linux-foundation.org: use IS_ENABLED() to suppress warning]
Link: https://lkml.kernel.org/r/20201228132901.41523-1-carver4lio@163.com
Signed-off-by: Hailong liu <liu.hailong6@zte.com.cn>
Reviewed-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
---

 mm/page_alloc.c |   31 ++++++++++++++++---------------
 1 file changed, 16 insertions(+), 15 deletions(-)

--- a/mm/page_alloc.c~mm-page_alloc-add-a-missing-mm_page_alloc_zone_locked-tracepoint
+++ a/mm/page_alloc.c
@@ -2862,20 +2862,20 @@ __rmqueue(struct zone *zone, unsigned in
 {
 	struct page *page;
 
-#ifdef CONFIG_CMA
-	/*
-	 * Balance movable allocations between regular and CMA areas by
-	 * allocating from CMA when over half of the zone's free memory
-	 * is in the CMA area.
-	 */
-	if (alloc_flags & ALLOC_CMA &&
-	    zone_page_state(zone, NR_FREE_CMA_PAGES) >
-	    zone_page_state(zone, NR_FREE_PAGES) / 2) {
-		page = __rmqueue_cma_fallback(zone, order);
-		if (page)
-			return page;
+	if (IS_ENABLED(CONFIG_CMA)) {
+		/*
+		 * Balance movable allocations between regular and CMA areas by
+		 * allocating from CMA when over half of the zone's free memory
+		 * is in the CMA area.
+		 */
+		if (alloc_flags & ALLOC_CMA &&
+		    zone_page_state(zone, NR_FREE_CMA_PAGES) >
+		    zone_page_state(zone, NR_FREE_PAGES) / 2) {
+			page = __rmqueue_cma_fallback(zone, order);
+			if (page)
+				goto out;
+		}
 	}
-#endif
 retry:
 	page = __rmqueue_smallest(zone, order, migratetype);
 	if (unlikely(!page)) {
@@ -2886,8 +2886,9 @@ retry:
 								alloc_flags))
 			goto retry;
 	}
-
-	trace_mm_page_alloc_zone_locked(page, order, migratetype);
+out:
+	if (page)
+		trace_mm_page_alloc_zone_locked(page, order, migratetype);
 	return page;
 }
 
_



* [patch 03/10] mm/memcontrol: fix warning in mem_cgroup_page_lruvec()
  2021-01-12 23:48 incoming Andrew Morton
  2021-01-12 23:49 ` [patch 01/10] mm, slub: consider rest of partial list if acquire_slab() fails Andrew Morton
  2021-01-12 23:49 ` [patch 02/10] mm/page_alloc: add a missing mm_page_alloc_zone_locked() tracepoint Andrew Morton
@ 2021-01-12 23:49 ` Andrew Morton
  2021-01-12 23:49 ` [patch 04/10] arm/kasan: fix the array size of kasan_early_shadow_pte[] Andrew Morton
                   ` (7 subsequent siblings)
  10 siblings, 0 replies; 12+ messages in thread
From: Andrew Morton @ 2021-01-12 23:49 UTC (permalink / raw)
  To: akpm, alex.shi, bhe, chris, guro, hannes, hughd, linux-mm,
	lstoakes, mhocko, mm-commits, sh_def, shakeelb, torvalds, vbabka

From: Hugh Dickins <hughd@google.com>
Subject: mm/memcontrol: fix warning in mem_cgroup_page_lruvec()

Boot a CONFIG_MEMCG=y kernel with "cgroup_disabled=memory" and you are met
by a series of warnings from the VM_WARN_ON_ONCE_PAGE(!memcg, page)
recently added to the inline mem_cgroup_page_lruvec().

An earlier attempt to place that warning, in mem_cgroup_lruvec(), had been
careful to do so after weeding out the mem_cgroup_disabled() case; but was
itself invalid because of the mem_cgroup_lruvec(NULL, pgdat) in
clear_pgdat_congested() and age_active_anon().

The warning in mem_cgroup_page_lruvec() was once useful in detecting a KSM
charge bug, so it may be worth keeping: but skip it when mem_cgroup_disabled().
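
For reference, a simplified sketch of mem_cgroup_lruvec() (illustrative,
not verbatim) shows why a NULL memcg is harmless when memcg is disabled:
the node's own lruvec is returned, which is why the warning only needs to
fire in the memcg-enabled case:

    /* Simplified sketch of mem_cgroup_lruvec(), for illustration. */
    if (mem_cgroup_disabled()) {
            /* cgroup_disable=memory: every page maps to the node lruvec */
            lruvec = &pgdat->__lruvec;
            goto out;
    }
    if (!memcg)
            memcg = root_mem_cgroup;
    /* ... otherwise look up this memcg's per-node lruvec ... */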

Link: https://lkml.kernel.org/r/alpine.LSU.2.11.2101032056260.1093@eggly.anvils
Fixes: 9a1ac2288cf1 ("mm/memcontrol:rewrite mem_cgroup_page_lruvec()")
Signed-off-by: Hugh Dickins <hughd@google.com>
Reviewed-by: Alex Shi <alex.shi@linux.alibaba.com>
Acked-by: Roman Gushchin <guro@fb.com>
Acked-by: Chris Down <chris@chrisdown.name>
Reviewed-by: Baoquan He <bhe@redhat.com>
Acked-by: Vlastimil Babka <vbabka@suse.cz>
Cc: Hui Su <sh_def@163.com>
Cc: Lorenzo Stoakes <lstoakes@gmail.com>
Cc: Michal Hocko <mhocko@suse.com>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Shakeel Butt <shakeelb@google.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
---

 include/linux/memcontrol.h |    2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

--- a/include/linux/memcontrol.h~mm-memcontrol-fix-warning-in-mem_cgroup_page_lruvec
+++ a/include/linux/memcontrol.h
@@ -665,7 +665,7 @@ static inline struct lruvec *mem_cgroup_
 {
 	struct mem_cgroup *memcg = page_memcg(page);
 
-	VM_WARN_ON_ONCE_PAGE(!memcg, page);
+	VM_WARN_ON_ONCE_PAGE(!memcg && !mem_cgroup_disabled(), page);
 	return mem_cgroup_lruvec(memcg, pgdat);
 }
 
_



* [patch 04/10] arm/kasan: fix the array size of kasan_early_shadow_pte[]
  2021-01-12 23:48 incoming Andrew Morton
                   ` (2 preceding siblings ...)
  2021-01-12 23:49 ` [patch 03/10] mm/memcontrol: fix warning in mem_cgroup_page_lruvec() Andrew Morton
@ 2021-01-12 23:49 ` Andrew Morton
  2021-01-12 23:49 ` [patch 05/10] mm/vmalloc.c: fix potential memory leak Andrew Morton
                   ` (6 subsequent siblings)
  10 siblings, 0 replies; 12+ messages in thread
From: Andrew Morton @ 2021-01-12 23:49 UTC (permalink / raw)
  To: akpm, ardb, aryabinin, dvyukov, glider, guo.ziliang,
	linus.walleij, linux-mm, linux, liu.hailong6, mm-commits,
	torvalds

From: Hailong Liu <liu.hailong6@zte.com.cn>
Subject: arm/kasan: fix the array size of kasan_early_shadow_pte[]

The size of kasan_early_shadow_pte[] is currently PTRS_PER_PTE, which is
defined as 512 on the arm architecture.  This means it only covers the
Linux pte entries at the start of the table, but not the hardware
(HWTABLE) pte entries that arm places after them.

The reason it currently works is that the symbol kasan_early_shadow_page,
which immediately follows kasan_early_shadow_pte in memory, is page
aligned, so kasan_early_shadow_pte happens to look like a 4KB array.  But
we cannot rely on that ordering staying correct with a different
compiler/linker, or once more bss symbols are introduced.

We tested this with QEMU + vexpress: put a 512KB-size symbol with
attribute __section(".bss..page_aligned") after kasan_early_shadow_pte,
and poison it after kasan_early_init().  With CONFIG_KASAN enabled, the
kernel then failed to boot.
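
As an illustration of the resulting sizes (the concrete values below are
arm-specific assumptions taken from the classic, non-LPAE MMU layout,
where PTE_HWTABLE_PTRS equals PTRS_PER_PTE):

    #ifndef PTE_HWTABLE_PTRS
    #define PTE_HWTABLE_PTRS 0      /* architectures with no hidden hw entries */
    #endif

    /*
     * arm classic MMU: PTRS_PER_PTE = 512, PTE_HWTABLE_PTRS = 512,
     * sizeof(pte_t) = 4, so the array grows from 2KB to the full 4KB
     * that the hardware table occupies; other architectures see 0 and
     * are unchanged.
     */
    pte_t kasan_early_shadow_pte[PTRS_PER_PTE + PTE_HWTABLE_PTRS];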

Link: https://lkml.kernel.org/r/20210109044622.8312-1-hailongliiu@yeah.net
Signed-off-by: Hailong Liu <liu.hailong6@zte.com.cn>
Signed-off-by: Ziliang Guo <guo.ziliang@zte.com.cn>
Reviewed-by: Linus Walleij <linus.walleij@linaro.org>
Cc: Andrey Ryabinin <aryabinin@virtuozzo.com>
Cc: Russell King <linux@armlinux.org.uk>
Cc: Alexander Potapenko <glider@google.com>
Cc: Dmitry Vyukov <dvyukov@google.com>
Cc: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
---

 include/linux/kasan.h |    6 +++++-
 mm/kasan/init.c       |    3 ++-
 2 files changed, 7 insertions(+), 2 deletions(-)

--- a/include/linux/kasan.h~arm-kasan-fix-the-arry-size-of-kasan_early_shadow_pte
+++ a/include/linux/kasan.h
@@ -35,8 +35,12 @@ struct kunit_kasan_expectation {
 #define KASAN_SHADOW_INIT 0
 #endif
 
+#ifndef PTE_HWTABLE_PTRS
+#define PTE_HWTABLE_PTRS 0
+#endif
+
 extern unsigned char kasan_early_shadow_page[PAGE_SIZE];
-extern pte_t kasan_early_shadow_pte[PTRS_PER_PTE];
+extern pte_t kasan_early_shadow_pte[PTRS_PER_PTE + PTE_HWTABLE_PTRS];
 extern pmd_t kasan_early_shadow_pmd[PTRS_PER_PMD];
 extern pud_t kasan_early_shadow_pud[PTRS_PER_PUD];
 extern p4d_t kasan_early_shadow_p4d[MAX_PTRS_PER_P4D];
--- a/mm/kasan/init.c~arm-kasan-fix-the-arry-size-of-kasan_early_shadow_pte
+++ a/mm/kasan/init.c
@@ -64,7 +64,8 @@ static inline bool kasan_pmd_table(pud_t
 	return false;
 }
 #endif
-pte_t kasan_early_shadow_pte[PTRS_PER_PTE] __page_aligned_bss;
+pte_t kasan_early_shadow_pte[PTRS_PER_PTE + PTE_HWTABLE_PTRS]
+	__page_aligned_bss;
 
 static inline bool kasan_pte_table(pmd_t pmd)
 {
_



* [patch 05/10] mm/vmalloc.c: fix potential memory leak
  2021-01-12 23:48 incoming Andrew Morton
                   ` (3 preceding siblings ...)
  2021-01-12 23:49 ` [patch 04/10] arm/kasan: fix the array size of kasan_early_shadow_pte[] Andrew Morton
@ 2021-01-12 23:49 ` Andrew Morton
  2021-01-12 23:49 ` [patch 06/10] mm: migrate: initialize err in do_migrate_pages Andrew Morton
                   ` (5 subsequent siblings)
  10 siblings, 0 replies; 12+ messages in thread
From: Andrew Morton @ 2021-01-12 23:49 UTC (permalink / raw)
  To: akpm, linmiaohe, linux-mm, luoshijie1, mm-commits, stable,
	torvalds, urezki

From: Miaohe Lin <linmiaohe@huawei.com>
Subject: mm/vmalloc.c: fix potential memory leak

In the VM_MAP_PUT_PAGES case, vfree() should put the pages and free the
page array.  But vmap() never sets area->nr_pages, so __vunmap() fails to
put the pages because area->nr_pages is 0.
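
For context, a simplified sketch of the cleanup path in __vunmap()
(illustrative only; accounting and deferred-free details are elided)
shows why nr_pages must be set for the pages to actually be put:

    /* Simplified sketch of __vunmap()'s page release, for illustration. */
    if (deallocate_pages) {         /* the vfree() path for VM_MAP_PUT_PAGES */
            int i;

            for (i = 0; i < area->nr_pages; i++) {  /* zero iterations if nr_pages == 0 */
                    struct page *page = area->pages[i];

                    __free_pages(page, 0);
            }
            kvfree(area->pages);
    }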

Link: https://lkml.kernel.org/r/20210107123541.39206-1-linmiaohe@huawei.com
Fixes: b944afc9d64d ("mm: add a VM_MAP_PUT_PAGES flag for vmap")
Signed-off-by: Shijie Luo <luoshijie1@huawei.com>
Signed-off-by: Miaohe Lin <linmiaohe@huawei.com>
Reviewed-by: Uladzislau Rezki (Sony) <urezki@gmail.com>
Cc: <stable@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
---

 mm/vmalloc.c |    4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

--- a/mm/vmalloc.c~mm-vmallocc-fix-potential-memory-leak
+++ a/mm/vmalloc.c
@@ -2420,8 +2420,10 @@ void *vmap(struct page **pages, unsigned
 		return NULL;
 	}
 
-	if (flags & VM_MAP_PUT_PAGES)
+	if (flags & VM_MAP_PUT_PAGES) {
 		area->pages = pages;
+		area->nr_pages = count;
+	}
 	return area->addr;
 }
 EXPORT_SYMBOL(vmap);
_



* [patch 06/10] mm: migrate: initialize err in do_migrate_pages
  2021-01-12 23:48 incoming Andrew Morton
                   ` (4 preceding siblings ...)
  2021-01-12 23:49 ` [patch 05/10] mm/vmalloc.c: fix potential memory leak Andrew Morton
@ 2021-01-12 23:49 ` Andrew Morton
  2021-01-12 23:49 ` [patch 07/10] mm/hugetlb: fix potential missing huge page size info Andrew Morton
                   ` (4 subsequent siblings)
  10 siblings, 0 replies; 12+ messages in thread
From: Andrew Morton @ 2021-01-12 23:49 UTC (permalink / raw)
  To: akpm, jack, jstancek, linux-mm, mgorman, mhocko, mm-commits,
	shy828301, songliubraving, torvalds, willy, ziy

From: Jan Stancek <jstancek@redhat.com>
Subject: mm: migrate: initialize err in do_migrate_pages

After commit 236c32eb1096 ("mm: migrate: clean up migrate_prep{_local}"),
do_migrate_pages() can return the uninitialized variable 'err' (which is
propagated to user space as an error) when the 'from' and 'to' nodesets
are identical.  This can be reproduced with LTP migrate_pages01, which
calls migrate_pages() with the same set for both old/new_nodes.

Add 'err' initialization back.
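
A simplified sketch of do_migrate_pages() (illustrative, with the
source/destination selection elided) shows the path that reaches the
final error check without the loop ever assigning 'err':

    /* Simplified sketch of do_migrate_pages(), for illustration only. */
    int busy = 0;
    int err = 0;                    /* was uninitialized before this fix */
    nodemask_t tmp;

    migrate_prep();

    tmp = *from;
    while (!nodes_empty(tmp)) {
            /*
             * Pick a source/destination pair that actually differs.  When
             * 'from' and 'to' are identical no pair qualifies, the loop
             * exits at once and 'err' is never written.
             */
            ...
            err = migrate_to_node(mm, source, dest, flags);
            if (err > 0)
                    busy += err;
            if (err < 0)
                    break;
    }

    if (err < 0)                    /* previously tested an uninitialized value */
            return err;
    return busy;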

Link: https://lkml.kernel.org/r/456a021c7ef3636d7668cec9dcb4a446a4244812.1609855564.git.jstancek@redhat.com
Fixes: 236c32eb1096 ("mm: migrate: clean up migrate_prep{_local}")
Signed-off-by: Jan Stancek <jstancek@redhat.com>
Acked-by: Michal Hocko <mhocko@suse.com>
Acked-by: Yang Shi <shy828301@gmail.com>
Cc: Zi Yan <ziy@nvidia.com>
Cc: Jan Kara <jack@suse.cz>
Cc: Matthew Wilcox <willy@infradead.org>
Cc: Mel Gorman <mgorman@suse.de>
Cc: Song Liu <songliubraving@fb.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
---

 mm/mempolicy.c |    2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

--- a/mm/mempolicy.c~mm-migrate-initialize-err-in-do_migrate_pages
+++ a/mm/mempolicy.c
@@ -1111,7 +1111,7 @@ int do_migrate_pages(struct mm_struct *m
 		     const nodemask_t *to, int flags)
 {
 	int busy = 0;
-	int err;
+	int err = 0;
 	nodemask_t tmp;
 
 	migrate_prep();
_



* [patch 07/10] mm/hugetlb: fix potential missing huge page size info
  2021-01-12 23:48 incoming Andrew Morton
                   ` (5 preceding siblings ...)
  2021-01-12 23:49 ` [patch 06/10] mm: migrate: initialize err in do_migrate_pages Andrew Morton
@ 2021-01-12 23:49 ` Andrew Morton
  2021-01-12 23:49 ` [patch 08/10] MAINTAINERS: add Vlastimil as slab allocators maintainer Andrew Morton
                   ` (3 subsequent siblings)
  10 siblings, 0 replies; 12+ messages in thread
From: Andrew Morton @ 2021-01-12 23:49 UTC (permalink / raw)
  To: akpm, linmiaohe, linux-mm, mike.kravetz, mm-commits, stable, torvalds

From: Miaohe Lin <linmiaohe@huawei.com>
Subject: mm/hugetlb: fix potential missing huge page size info

The huge page size encoded via VM_FAULT_SET_HINDEX() is only consumed for
VM_FAULT_HWPOISON_LARGE errors, so if we return plain VM_FAULT_HWPOISON
here the huge page size is simply ignored.
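
For reference, a sketch of how the size is consumed on the arch side
(modelled on x86's do_sigbus(); simplified, not verbatim): the encoded
hstate index is only looked at for VM_FAULT_HWPOISON_LARGE, so a plain
VM_FAULT_HWPOISON ends up reporting a PAGE_SIZE-sized poison region:

    /* Sketch of the consumer side (modelled on x86 do_sigbus()). */
    unsigned int lsb = 0;

    if (fault & VM_FAULT_HWPOISON_LARGE)
            lsb = hstate_index_to_shift(VM_FAULT_GET_HINDEX(fault));
    if (fault & VM_FAULT_HWPOISON)
            lsb = PAGE_SHIFT;       /* plain HWPOISON: the size info is lost */

    force_sig_mceerr(BUS_MCEERR_AR, (void __user *)address, lsb);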

Link: https://lkml.kernel.org/r/20210107123449.38481-1-linmiaohe@huawei.com
Fixes: aa50d3a7aa81 ("Encode huge page size for VM_FAULT_HWPOISON errors")
Signed-off-by: Miaohe Lin <linmiaohe@huawei.com>
Reviewed-by: Mike Kravetz <mike.kravetz@oracle.com>
Cc: <stable@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
---

 mm/hugetlb.c |    2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

--- a/mm/hugetlb.c~mm-hugetlb-fix-potential-missing-huge-page-size-info
+++ a/mm/hugetlb.c
@@ -4371,7 +4371,7 @@ retry:
 		 * So we need to block hugepage fault by PG_hwpoison bit check.
 		 */
 		if (unlikely(PageHWPoison(page))) {
-			ret = VM_FAULT_HWPOISON |
+			ret = VM_FAULT_HWPOISON_LARGE |
 				VM_FAULT_SET_HINDEX(hstate_index(h));
 			goto backout_unlocked;
 		}
_



* [patch 08/10] MAINTAINERS: add Vlastimil as slab allocators maintainer
  2021-01-12 23:48 incoming Andrew Morton
                   ` (6 preceding siblings ...)
  2021-01-12 23:49 ` [patch 07/10] mm/hugetlb: fix potential missing huge page size info Andrew Morton
@ 2021-01-12 23:49 ` Andrew Morton
  2021-01-12 23:49 ` [patch 09/10] mm,hwpoison: fix printing of page flags Andrew Morton
                   ` (2 subsequent siblings)
  10 siblings, 0 replies; 12+ messages in thread
From: Andrew Morton @ 2021-01-12 23:49 UTC (permalink / raw)
  To: akpm, cl, iamjoonsoo.kim, linux-mm, mm-commits, penberg,
	rientjes, torvalds, vbabka

From: Vlastimil Babka <vbabka@suse.cz>
Subject: MAINTAINERS: add Vlastimil as slab allocators maintainer

I would like to help with slab allocators maintenance, from the
perspective of being responsible for SLAB and more recently also SLUB in
an enterprise distro kernel and supporting its users.  Recently I've been
focusing on improving SLUB's debugging features, and patch review in the
area, including the kmemcg accounting rewrite last year.

Link: https://lkml.kernel.org/r/20210108110353.19971-1-vbabka@suse.cz
Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
Acked-by: Christoph Lameter <cl@linux.com>
Acked-by: Pekka Enberg <penberg@kernel.org>
Cc: David Rientjes <rientjes@google.com>
Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
---

 MAINTAINERS |    1 +
 1 file changed, 1 insertion(+)

--- a/MAINTAINERS~maintainers-add-myself-as-slab-allocators-maintainer
+++ a/MAINTAINERS
@@ -16319,6 +16319,7 @@ M:	Pekka Enberg <penberg@kernel.org>
 M:	David Rientjes <rientjes@google.com>
 M:	Joonsoo Kim <iamjoonsoo.kim@lge.com>
 M:	Andrew Morton <akpm@linux-foundation.org>
+M:	Vlastimil Babka <vbabka@suse.cz>
 L:	linux-mm@kvack.org
 S:	Maintained
 F:	include/linux/sl?b*.h
_



* [patch 09/10] mm,hwpoison: fix printing of page flags
  2021-01-12 23:48 incoming Andrew Morton
                   ` (7 preceding siblings ...)
  2021-01-12 23:49 ` [patch 08/10] MAINTAINERS: add Vlastimil as slab allocators maintainer Andrew Morton
@ 2021-01-12 23:49 ` Andrew Morton
  2021-01-12 23:49 ` [patch 10/10] mm/process_vm_access.c: include compat.h Andrew Morton
  2021-01-15 23:32 ` incoming Linus Torvalds
  10 siblings, 0 replies; 12+ messages in thread
From: Andrew Morton @ 2021-01-12 23:49 UTC (permalink / raw)
  To: akpm, anshuman.khandual, dan.carpenter, linux-mm, mm-commits,
	naoya.horiguchi, osalvador, torvalds

From: Oscar Salvador <osalvador@suse.de>
Subject: mm,hwpoison: fix printing of page flags

The %pG format expects a lower-case 'p' suffix (%pGp) in order to print
the page flags, but the code used an upper-case 'P'.  Fix it.
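
For illustration, the decoding format looks like this (the flag names in
the comment are hypothetical output and depend on the page):

    /* %pGp decodes the page flags pointed to by the argument. */
    pr_info("page flags: %#lx (%pGp)\n", page->flags, &page->flags);
    /* e.g. "page flags: 0x... (uptodate|lru)", output is flag-dependent */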

Link: https://lkml.kernel.org/r/20210108085202.4506-1-osalvador@suse.de
Fixes: 8295d535e2aa ("mm,hwpoison: refactor get_any_page")
Signed-off-by: Oscar Salvador <osalvador@suse.de>
Reported-by: Dan Carpenter <dan.carpenter@oracle.com>
Reviewed-by: Anshuman Khandual <anshuman.khandual@arm.com>
Acked-by: Naoya Horiguchi <naoya.horiguchi@nec.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
---

 mm/memory-failure.c |    2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

--- a/mm/memory-failure.c~mmhwpoison-fix-printing-of-page-flags
+++ a/mm/memory-failure.c
@@ -1940,7 +1940,7 @@ retry:
 			goto retry;
 		}
 	} else if (ret == -EIO) {
-		pr_info("%s: %#lx: unknown page type: %lx (%pGP)\n",
+		pr_info("%s: %#lx: unknown page type: %lx (%pGp)\n",
 			 __func__, pfn, page->flags, &page->flags);
 	}
 
_



* [patch 10/10] mm/process_vm_access.c: include compat.h
  2021-01-12 23:48 incoming Andrew Morton
                   ` (8 preceding siblings ...)
  2021-01-12 23:49 ` [patch 09/10] mm,hwpoison: fix printing of page flags Andrew Morton
@ 2021-01-12 23:49 ` Andrew Morton
  2021-01-15 23:32 ` incoming Linus Torvalds
  10 siblings, 0 replies; 12+ messages in thread
From: Andrew Morton @ 2021-01-12 23:49 UTC (permalink / raw)
  To: akpm, axboe, hch, linux-mm, me, mm-commits, stable, torvalds, viro

From: Andrew Morton <akpm@linux-foundation.org>
Subject: mm/process_vm_access.c: include compat.h

mm/process_vm_access.c:277:5: error: implicit declaration of function 'in_compat_syscall'; did you mean 'in_ia32_syscall'? [-Werror=implicit-function-declaration]
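
A minimal sketch of what the one-line fix restores (the helper below is
hypothetical, purely for illustration): in_compat_syscall() is declared
in <linux/compat.h>, so any file calling it must include that header
explicitly instead of relying on it arriving via another include:

    #include <linux/compat.h>       /* declares in_compat_syscall() */

    /* Hypothetical helper, for illustration only. */
    static bool example_caller_is_compat(void)
    {
            return in_compat_syscall();
    }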

Fixes: 38dc5079da7081e ("Fix compat regression in process_vm_rw()")
Reported-by: syzbot+5b0d0de84d6c65b8dd2b@syzkaller.appspotmail.com
Cc: Kyle Huey <me@kylehuey.com>
Cc: Jens Axboe <axboe@kernel.dk>
Cc: Al Viro <viro@zeniv.linux.org.uk>
Cc: Christoph Hellwig <hch@lst.de>
Cc: <stable@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
---

 mm/process_vm_access.c |    1 +
 1 file changed, 1 insertion(+)

--- a/mm/process_vm_access.c~mm-process_vm_accessc-include-compath
+++ a/mm/process_vm_access.c
@@ -9,6 +9,7 @@
 #include <linux/mm.h>
 #include <linux/uio.h>
 #include <linux/sched.h>
+#include <linux/compat.h>
 #include <linux/sched/mm.h>
 #include <linux/highmem.h>
 #include <linux/ptrace.h>
_



* Re: incoming
  2021-01-12 23:48 incoming Andrew Morton
                   ` (9 preceding siblings ...)
  2021-01-12 23:49 ` [patch 10/10] mm/process_vm_access.c: include compat.h Andrew Morton
@ 2021-01-15 23:32 ` Linus Torvalds
  10 siblings, 0 replies; 12+ messages in thread
From: Linus Torvalds @ 2021-01-15 23:32 UTC (permalink / raw)
  To: Andrew Morton; +Cc: Linux-MM, mm-commits

On Tue, Jan 12, 2021 at 3:48 PM Andrew Morton <akpm@linux-foundation.org> wrote:
>
> 10 patches, based on e609571b5ffa3528bf85292de1ceaddac342bc1c.

Whee. I had completely dropped the ball on this - I had built my usual
"akpm" branch with the patches, but then had completely forgotten
about it after doing my basic build tests.

I tend to leave it for a while to see if people send belated ACK/NAK's
for the patches, but that "for a while" is typically "overnight", not
several days.

So if you ever notice that I haven't merged your patch submission, and
you haven't seen me comment on them, feel free to ping me to remind
me.

Because it might just have gotten lost in the shuffle for some random
reason. Admittedly it's rare - I think this is the first time I just
randomly noticed three days later that I'd never done the actual merge
of the patch-series.

               Linus


