From: Andrew Morton <akpm@linux-foundation.org>
To: alex.shi@linux.alibaba.com, guro@fb.com, hannes@cmpxchg.org,
	hughd@google.com, iamjoonsoo.kim@lge.com, kirill@shutemov.name,
	mhocko@suse.com, mm-commits@vger.kernel.org, shakeelb@google.com
Subject: + mm-memcontrol-switch-to-native-nr_anon_mapped-counter.patch added to -mm tree
Date: Fri, 08 May 2020 16:19:51 -0700	[thread overview]
Message-ID: <20200508231951.UAZYQKg3h%akpm@linux-foundation.org> (raw)
In-Reply-To: <20200507183509.c5ef146c5aaeb118a25a39a8@linux-foundation.org>


The patch titled
     Subject: mm: memcontrol: switch to native NR_ANON_MAPPED counter
has been added to the -mm tree.  Its filename is
     mm-memcontrol-switch-to-native-nr_anon_mapped-counter.patch

This patch should soon appear at
    http://ozlabs.org/~akpm/mmots/broken-out/mm-memcontrol-switch-to-native-nr_anon_mapped-counter.patch
and later at
    http://ozlabs.org/~akpm/mmotm/broken-out/mm-memcontrol-switch-to-native-nr_anon_mapped-counter.patch

Before you just go and hit "reply", please:
   a) Consider who else should be cc'ed
   b) Prefer to cc a suitable mailing list as well
   c) Ideally: find the original patch on the mailing list and do a
      reply-to-all to that, adding suitable additional cc's

*** Remember to use Documentation/process/submit-checklist.rst when testing your code ***

The -mm tree is included into linux-next and is updated
there every 3-4 working days

------------------------------------------------------
From: Johannes Weiner <hannes@cmpxchg.org>
Subject: mm: memcontrol: switch to native NR_ANON_MAPPED counter

Memcg maintains a private MEMCG_RSS counter.  This divergence from the
generic VM accounting adds unnecessary code overhead, and creates a
dependency for memcg that page->mapping be set up at the time of charging,
so that page types can be told apart.

Convert the generic accounting sites to mod_lruvec_page_state and friends
to maintain the per-cgroup vmstat counter of NR_ANON_MAPPED.  We use
lock_page_memcg() to stabilize page->mem_cgroup during rmap changes, the
same way we do for NR_FILE_MAPPED.
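
As an illustrative sketch (not a verbatim excerpt of the diff below), the
stabilized update pattern reads roughly as follows; the wrapper name
account_anon_mapping() is made up for illustration, while the called
functions are the kernel APIs referenced above:

```c
/*
 * Sketch of the locking pattern described above.  lock_page_memcg()
 * pins page->mem_cgroup so the page cannot move between cgroups while
 * the per-lruvec counter is updated -- the same stabilization that
 * NR_FILE_MAPPED accounting already relies on.
 */
static void account_anon_mapping(struct page *page, int nr)
{
	lock_page_memcg(page);
	__mod_lruvec_page_state(page, NR_ANON_MAPPED, nr);
	unlock_page_memcg(page);
}
```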

With the previous patch removing MEMCG_CACHE and the private NR_SHMEM
counter, this patch finally eliminates the need to have page->mapping set
up at charge time.  However, we need to have page->mem_cgroup set up by
the time rmap runs and does the accounting, so switch the commit and the
rmap callbacks around.
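
Schematically, the reordering repeated at each fault-path call site in the
diff below is:

```c
/* Old order: rmap accounting ran before page->mem_cgroup was set up */
page_add_new_anon_rmap(page, vma, addr, false);
mem_cgroup_commit_charge(page, memcg, false);

/*
 * New order: commit the charge first, so that by the time
 * page_add_new_anon_rmap() bumps NR_ANON_MAPPED through
 * __mod_lruvec_page_state(), page->mem_cgroup is already valid.
 */
mem_cgroup_commit_charge(page, memcg, false);
page_add_new_anon_rmap(page, vma, addr, false);
```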

v2: fix temporary accounting bug by switching rmap<->commit (Joonsoo)

Link: http://lkml.kernel.org/r/20200508183105.225460-11-hannes@cmpxchg.org
Signed-off-by: Johannes Weiner <hannes@cmpxchg.org>
Cc: Alex Shi <alex.shi@linux.alibaba.com>
Cc: Hugh Dickins <hughd@google.com>
Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Cc: "Kirill A. Shutemov" <kirill@shutemov.name>
Cc: Michal Hocko <mhocko@suse.com>
Cc: Roman Gushchin <guro@fb.com>
Cc: Shakeel Butt <shakeelb@google.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
---

 include/linux/memcontrol.h |    3 --
 kernel/events/uprobes.c    |    2 -
 mm/huge_memory.c           |    2 -
 mm/khugepaged.c            |    2 -
 mm/memcontrol.c            |   27 ++++++--------------
 mm/memory.c                |   10 +++----
 mm/migrate.c               |    2 -
 mm/rmap.c                  |   47 +++++++++++++++++++++--------------
 mm/swapfile.c              |    4 +-
 mm/userfaultfd.c           |    2 -
 10 files changed, 51 insertions(+), 50 deletions(-)

--- a/include/linux/memcontrol.h~mm-memcontrol-switch-to-native-nr_anon_mapped-counter
+++ a/include/linux/memcontrol.h
@@ -29,8 +29,7 @@ struct kmem_cache;
 
 /* Cgroup-specific page state, on top of universal node page state */
 enum memcg_stat_item {
-	MEMCG_RSS = NR_VM_NODE_STAT_ITEMS,
-	MEMCG_RSS_HUGE,
+	MEMCG_RSS_HUGE = NR_VM_NODE_STAT_ITEMS,
 	MEMCG_SWAP,
 	MEMCG_SOCK,
 	/* XXX: why are these zone and not node counters? */
--- a/kernel/events/uprobes.c~mm-memcontrol-switch-to-native-nr_anon_mapped-counter
+++ a/kernel/events/uprobes.c
@@ -188,8 +188,8 @@ static int __replace_page(struct vm_area
 
 	if (new_page) {
 		get_page(new_page);
-		page_add_new_anon_rmap(new_page, vma, addr, false);
 		mem_cgroup_commit_charge(new_page, memcg, false);
+		page_add_new_anon_rmap(new_page, vma, addr, false);
 		lru_cache_add_active_or_unevictable(new_page, vma);
 	} else
 		/* no new page, just dec_mm_counter for old_page */
--- a/mm/huge_memory.c~mm-memcontrol-switch-to-native-nr_anon_mapped-counter
+++ a/mm/huge_memory.c
@@ -640,8 +640,8 @@ static vm_fault_t __do_huge_pmd_anonymou
 
 		entry = mk_huge_pmd(page, vma->vm_page_prot);
 		entry = maybe_pmd_mkwrite(pmd_mkdirty(entry), vma);
-		page_add_new_anon_rmap(page, vma, haddr, true);
 		mem_cgroup_commit_charge(page, memcg, false);
+		page_add_new_anon_rmap(page, vma, haddr, true);
 		lru_cache_add_active_or_unevictable(page, vma);
 		pgtable_trans_huge_deposit(vma->vm_mm, vmf->pmd, pgtable);
 		set_pmd_at(vma->vm_mm, haddr, vmf->pmd, entry);
--- a/mm/khugepaged.c~mm-memcontrol-switch-to-native-nr_anon_mapped-counter
+++ a/mm/khugepaged.c
@@ -1086,8 +1086,8 @@ static void collapse_huge_page(struct mm
 
 	spin_lock(pmd_ptl);
 	BUG_ON(!pmd_none(*pmd));
-	page_add_new_anon_rmap(new_page, vma, address, true);
 	mem_cgroup_commit_charge(new_page, memcg, false);
+	page_add_new_anon_rmap(new_page, vma, address, true);
 	count_memcg_events(memcg, THP_COLLAPSE_ALLOC, 1);
 	lru_cache_add_active_or_unevictable(new_page, vma);
 	pgtable_trans_huge_deposit(mm, pmd, pgtable);
--- a/mm/memcontrol.c~mm-memcontrol-switch-to-native-nr_anon_mapped-counter
+++ a/mm/memcontrol.c
@@ -836,13 +836,6 @@ static void mem_cgroup_charge_statistics
 					 struct page *page,
 					 int nr_pages)
 {
-	/*
-	 * Here, RSS means 'mapped anon' and anon's SwapCache. Shmem/tmpfs is
-	 * counted as CACHE even if it's on ANON LRU.
-	 */
-	if (PageAnon(page))
-		__mod_memcg_state(memcg, MEMCG_RSS, nr_pages);
-
 	if (abs(nr_pages) > 1) {
 		VM_BUG_ON_PAGE(!PageTransHuge(page), page);
 		__mod_memcg_state(memcg, MEMCG_RSS_HUGE, nr_pages);
@@ -1384,7 +1377,7 @@ static char *memory_stat_format(struct m
 	 */
 
 	seq_buf_printf(&s, "anon %llu\n",
-		       (u64)memcg_page_state(memcg, MEMCG_RSS) *
+		       (u64)memcg_page_state(memcg, NR_ANON_MAPPED) *
 		       PAGE_SIZE);
 	seq_buf_printf(&s, "file %llu\n",
 		       (u64)memcg_page_state(memcg, NR_FILE_PAGES) *
@@ -3300,7 +3293,7 @@ static unsigned long mem_cgroup_usage(st
 
 	if (mem_cgroup_is_root(memcg)) {
 		val = memcg_page_state(memcg, NR_FILE_PAGES) +
-			memcg_page_state(memcg, MEMCG_RSS);
+			memcg_page_state(memcg, NR_ANON_MAPPED);
 		if (swap)
 			val += memcg_page_state(memcg, MEMCG_SWAP);
 	} else {
@@ -3771,7 +3764,7 @@ static int memcg_numa_stat_show(struct s
 
 static const unsigned int memcg1_stats[] = {
 	NR_FILE_PAGES,
-	MEMCG_RSS,
+	NR_ANON_MAPPED,
 	MEMCG_RSS_HUGE,
 	NR_SHMEM,
 	NR_FILE_MAPPED,
@@ -5401,7 +5394,12 @@ static int mem_cgroup_move_account(struc
 
 	lock_page_memcg(page);
 
-	if (!PageAnon(page)) {
+	if (PageAnon(page)) {
+		if (page_mapped(page)) {
+			__mod_lruvec_state(from_vec, NR_ANON_MAPPED, -nr_pages);
+			__mod_lruvec_state(to_vec, NR_ANON_MAPPED, nr_pages);
+		}
+	} else {
 		__mod_lruvec_state(from_vec, NR_FILE_PAGES, -nr_pages);
 		__mod_lruvec_state(to_vec, NR_FILE_PAGES, nr_pages);
 
@@ -6529,7 +6527,6 @@ void mem_cgroup_commit_charge(struct pag
 {
 	unsigned int nr_pages = hpage_nr_pages(page);
 
-	VM_BUG_ON_PAGE(!page->mapping, page);
 	VM_BUG_ON_PAGE(PageLRU(page) && !lrucare, page);
 
 	if (mem_cgroup_disabled())
@@ -6602,8 +6599,6 @@ int mem_cgroup_charge(struct page *page,
 	struct mem_cgroup *memcg;
 	int ret;
 
-	VM_BUG_ON_PAGE(!page->mapping, page);
-
 	ret = mem_cgroup_try_charge(page, mm, gfp_mask, &memcg);
 	if (ret)
 		return ret;
@@ -6615,7 +6610,6 @@ struct uncharge_gather {
 	struct mem_cgroup *memcg;
 	unsigned long nr_pages;
 	unsigned long pgpgout;
-	unsigned long nr_anon;
 	unsigned long nr_kmem;
 	unsigned long nr_huge;
 	struct page *dummy_page;
@@ -6640,7 +6634,6 @@ static void uncharge_batch(const struct
 	}
 
 	local_irq_save(flags);
-	__mod_memcg_state(ug->memcg, MEMCG_RSS, -ug->nr_anon);
 	__mod_memcg_state(ug->memcg, MEMCG_RSS_HUGE, -ug->nr_huge);
 	__count_memcg_events(ug->memcg, PGPGOUT, ug->pgpgout);
 	__this_cpu_add(ug->memcg->vmstats_percpu->nr_page_events, ug->nr_pages);
@@ -6680,8 +6673,6 @@ static void uncharge_page(struct page *p
 	if (!PageKmemcg(page)) {
 		if (PageTransHuge(page))
 			ug->nr_huge += nr_pages;
-		if (PageAnon(page))
-			ug->nr_anon += nr_pages;
 		ug->pgpgout++;
 	} else {
 		ug->nr_kmem += nr_pages;
--- a/mm/memory.c~mm-memcontrol-switch-to-native-nr_anon_mapped-counter
+++ a/mm/memory.c
@@ -2712,8 +2712,8 @@ static vm_fault_t wp_page_copy(struct vm
 		 * thread doing COW.
 		 */
 		ptep_clear_flush_notify(vma, vmf->address, vmf->pte);
-		page_add_new_anon_rmap(new_page, vma, vmf->address, false);
 		mem_cgroup_commit_charge(new_page, memcg, false);
+		page_add_new_anon_rmap(new_page, vma, vmf->address, false);
 		lru_cache_add_active_or_unevictable(new_page, vma);
 		/*
 		 * We call the notify macro here because, when using secondary
@@ -3245,12 +3245,12 @@ vm_fault_t do_swap_page(struct vm_fault
 
 	/* ksm created a completely new copy */
 	if (unlikely(page != swapcache && swapcache)) {
-		page_add_new_anon_rmap(page, vma, vmf->address, false);
 		mem_cgroup_commit_charge(page, memcg, false);
+		page_add_new_anon_rmap(page, vma, vmf->address, false);
 		lru_cache_add_active_or_unevictable(page, vma);
 	} else {
-		do_page_add_anon_rmap(page, vma, vmf->address, exclusive);
 		mem_cgroup_commit_charge(page, memcg, true);
+		do_page_add_anon_rmap(page, vma, vmf->address, exclusive);
 		activate_page(page);
 	}
 
@@ -3392,8 +3392,8 @@ static vm_fault_t do_anonymous_page(stru
 	}
 
 	inc_mm_counter_fast(vma->vm_mm, MM_ANONPAGES);
-	page_add_new_anon_rmap(page, vma, vmf->address, false);
 	mem_cgroup_commit_charge(page, memcg, false);
+	page_add_new_anon_rmap(page, vma, vmf->address, false);
 	lru_cache_add_active_or_unevictable(page, vma);
 setpte:
 	set_pte_at(vma->vm_mm, vmf->address, vmf->pte, entry);
@@ -3654,8 +3654,8 @@ vm_fault_t alloc_set_pte(struct vm_fault
 	/* copy-on-write page */
 	if (write && !(vma->vm_flags & VM_SHARED)) {
 		inc_mm_counter_fast(vma->vm_mm, MM_ANONPAGES);
-		page_add_new_anon_rmap(page, vma, vmf->address, false);
 		mem_cgroup_commit_charge(page, memcg, false);
+		page_add_new_anon_rmap(page, vma, vmf->address, false);
 		lru_cache_add_active_or_unevictable(page, vma);
 	} else {
 		inc_mm_counter_fast(vma->vm_mm, mm_counter_file(page));
--- a/mm/migrate.c~mm-memcontrol-switch-to-native-nr_anon_mapped-counter
+++ a/mm/migrate.c
@@ -2838,8 +2838,8 @@ static void migrate_vma_insert_page(stru
 		goto unlock_abort;
 
 	inc_mm_counter(mm, MM_ANONPAGES);
-	page_add_new_anon_rmap(page, vma, addr, false);
 	mem_cgroup_commit_charge(page, memcg, false);
+	page_add_new_anon_rmap(page, vma, addr, false);
 	if (!is_zone_device_page(page))
 		lru_cache_add_active_or_unevictable(page, vma);
 	get_page(page);
--- a/mm/rmap.c~mm-memcontrol-switch-to-native-nr_anon_mapped-counter
+++ a/mm/rmap.c
@@ -1114,6 +1114,11 @@ void do_page_add_anon_rmap(struct page *
 	bool compound = flags & RMAP_COMPOUND;
 	bool first;
 
+	if (unlikely(PageKsm(page)))
+		lock_page_memcg(page);
+	else
+		VM_BUG_ON_PAGE(!PageLocked(page), page);
+
 	if (compound) {
 		atomic_t *mapcount;
 		VM_BUG_ON_PAGE(!PageLocked(page), page);
@@ -1134,12 +1139,13 @@ void do_page_add_anon_rmap(struct page *
 		 */
 		if (compound)
 			__inc_node_page_state(page, NR_ANON_THPS);
-		__mod_node_page_state(page_pgdat(page), NR_ANON_MAPPED, nr);
+		__mod_lruvec_page_state(page, NR_ANON_MAPPED, nr);
 	}
-	if (unlikely(PageKsm(page)))
-		return;
 
-	VM_BUG_ON_PAGE(!PageLocked(page), page);
+	if (unlikely(PageKsm(page))) {
+		unlock_page_memcg(page);
+		return;
+	}
 
 	/* address might be in next vma when migration races vma_adjust */
 	if (first)
@@ -1181,7 +1187,7 @@ void page_add_new_anon_rmap(struct page
 		/* increment count (starts at -1) */
 		atomic_set(&page->_mapcount, 0);
 	}
-	__mod_node_page_state(page_pgdat(page), NR_ANON_MAPPED, nr);
+	__mod_lruvec_page_state(page, NR_ANON_MAPPED, nr);
 	__page_set_anon_rmap(page, vma, address, 1);
 }
 
@@ -1230,13 +1236,12 @@ static void page_remove_file_rmap(struct
 	int i, nr = 1;
 
 	VM_BUG_ON_PAGE(compound && !PageHead(page), page);
-	lock_page_memcg(page);
 
 	/* Hugepages are not counted in NR_FILE_MAPPED for now. */
 	if (unlikely(PageHuge(page))) {
 		/* hugetlb pages are always mapped with pmds */
 		atomic_dec(compound_mapcount_ptr(page));
-		goto out;
+		return;
 	}
 
 	/* page still mapped by someone else? */
@@ -1246,14 +1251,14 @@ static void page_remove_file_rmap(struct
 				nr++;
 		}
 		if (!atomic_add_negative(-1, compound_mapcount_ptr(page)))
-			goto out;
+			return;
 		if (PageSwapBacked(page))
 			__dec_node_page_state(page, NR_SHMEM_PMDMAPPED);
 		else
 			__dec_node_page_state(page, NR_FILE_PMDMAPPED);
 	} else {
 		if (!atomic_add_negative(-1, &page->_mapcount))
-			goto out;
+			return;
 	}
 
 	/*
@@ -1265,8 +1270,6 @@ static void page_remove_file_rmap(struct
 
 	if (unlikely(PageMlocked(page)))
 		clear_page_mlock(page);
-out:
-	unlock_page_memcg(page);
 }
 
 static void page_remove_anon_compound_rmap(struct page *page)
@@ -1310,7 +1313,7 @@ static void page_remove_anon_compound_rm
 		clear_page_mlock(page);
 
 	if (nr)
-		__mod_node_page_state(page_pgdat(page), NR_ANON_MAPPED, -nr);
+		__mod_lruvec_page_state(page, NR_ANON_MAPPED, -nr);
 }
 
 /**
@@ -1322,22 +1325,28 @@ static void page_remove_anon_compound_rm
  */
 void page_remove_rmap(struct page *page, bool compound)
 {
-	if (!PageAnon(page))
-		return page_remove_file_rmap(page, compound);
+	lock_page_memcg(page);
 
-	if (compound)
-		return page_remove_anon_compound_rmap(page);
+	if (!PageAnon(page)) {
+		page_remove_file_rmap(page, compound);
+		goto out;
+	}
+
+	if (compound) {
+		page_remove_anon_compound_rmap(page);
+		goto out;
+	}
 
 	/* page still mapped by someone else? */
 	if (!atomic_add_negative(-1, &page->_mapcount))
-		return;
+		goto out;
 
 	/*
 	 * We use the irq-unsafe __{inc|mod}_zone_page_stat because
 	 * these counters are not modified in interrupt context, and
 	 * pte lock(a spinlock) is held, which implies preemption disabled.
 	 */
-	__dec_node_page_state(page, NR_ANON_MAPPED);
+	__dec_lruvec_page_state(page, NR_ANON_MAPPED);
 
 	if (unlikely(PageMlocked(page)))
 		clear_page_mlock(page);
@@ -1354,6 +1363,8 @@ void page_remove_rmap(struct page *page,
 	 * Leaving it set also helps swapoff to reinstate ptes
 	 * faster for those pages still in swapcache.
 	 */
+out:
+	unlock_page_memcg(page);
 }
 
 /*
--- a/mm/swapfile.c~mm-memcontrol-switch-to-native-nr_anon_mapped-counter
+++ a/mm/swapfile.c
@@ -1881,11 +1881,11 @@ static int unuse_pte(struct vm_area_stru
 	set_pte_at(vma->vm_mm, addr, pte,
 		   pte_mkold(mk_pte(page, vma->vm_page_prot)));
 	if (page == swapcache) {
-		page_add_anon_rmap(page, vma, addr, false);
 		mem_cgroup_commit_charge(page, memcg, true);
+		page_add_anon_rmap(page, vma, addr, false);
 	} else { /* ksm created a completely new copy */
-		page_add_new_anon_rmap(page, vma, addr, false);
 		mem_cgroup_commit_charge(page, memcg, false);
+		page_add_new_anon_rmap(page, vma, addr, false);
 		lru_cache_add_active_or_unevictable(page, vma);
 	}
 	swap_free(entry);
--- a/mm/userfaultfd.c~mm-memcontrol-switch-to-native-nr_anon_mapped-counter
+++ a/mm/userfaultfd.c
@@ -123,8 +123,8 @@ static int mcopy_atomic_pte(struct mm_st
 		goto out_release_uncharge_unlock;
 
 	inc_mm_counter(dst_mm, MM_ANONPAGES);
-	page_add_new_anon_rmap(page, dst_vma, dst_addr, false);
 	mem_cgroup_commit_charge(page, memcg, false);
+	page_add_new_anon_rmap(page, dst_vma, dst_addr, false);
 	lru_cache_add_active_or_unevictable(page, dst_vma);
 
 	set_pte_at(dst_mm, dst_addr, dst_pte, _dst_pte);
_

Patches currently in -mm which might be from hannes@cmpxchg.org are

mm-fix-numa-node-file-count-error-in-replace_page_cache.patch
mm-memcontrol-fix-stat-corrupting-race-in-charge-moving.patch
mm-memcontrol-drop-compound-parameter-from-memcg-charging-api.patch
mm-memcontrol-move-out-cgroup-swaprate-throttling.patch
mm-memcontrol-convert-page-cache-to-a-new-mem_cgroup_charge-api.patch
mm-memcontrol-prepare-uncharging-for-removal-of-private-page-type-counters.patch
mm-memcontrol-prepare-move_account-for-removal-of-private-page-type-counters.patch
mm-memcontrol-prepare-cgroup-vmstat-infrastructure-for-native-anon-counters.patch
mm-memcontrol-switch-to-native-nr_file_pages-and-nr_shmem-counters.patch
mm-memcontrol-switch-to-native-nr_anon_mapped-counter.patch
mm-memcontrol-switch-to-native-nr_anon_thps-counter.patch
mm-memcontrol-convert-anon-and-file-thp-to-new-mem_cgroup_charge-api.patch
mm-memcontrol-drop-unused-try-commit-cancel-charge-api.patch
mm-memcontrol-prepare-swap-controller-setup-for-integration.patch
mm-memcontrol-make-swap-tracking-an-integral-part-of-memory-control.patch
mm-memcontrol-charge-swapin-pages-on-instantiation.patch
mm-memcontrol-delete-unused-lrucare-handling.patch
mm-memcontrol-update-page-mem_cgroup-stability-rules.patch
