LKML Archive on lore.kernel.org
From: Shakeel Butt <shakeelb@google.com>
To: Roman Gushchin <guro@fb.com>
Cc: Andrew Morton <akpm@linux-foundation.org>,
	Christoph Lameter <cl@linux.com>,
	Johannes Weiner <hannes@cmpxchg.org>,
	Michal Hocko <mhocko@kernel.org>, Linux MM <linux-mm@kvack.org>,
	Vlastimil Babka <vbabka@suse.cz>,
	Kernel Team <kernel-team@fb.com>,
	LKML <linux-kernel@vger.kernel.org>
Subject: Re: [PATCH v6 05/19] mm: memcontrol: decouple reference counting from page accounting
Date: Wed, 17 Jun 2020 17:47:21 -0700
Message-ID: <CALvZod5K8gvZnWT-RPJU=VL4OUiDsu6z11Z1WSfYRWDLUOktZQ@mail.gmail.com>
In-Reply-To: <20200608230654.828134-6-guro@fb.com>

On Mon, Jun 8, 2020 at 4:07 PM Roman Gushchin <guro@fb.com> wrote:
>
> From: Johannes Weiner <hannes@cmpxchg.org>
>
> The reference counting of a memcg is currently coupled directly to how
> many 4k pages are charged to it. This doesn't work well with Roman's
> new slab controller, which maintains pools of objects and doesn't want
> to keep an extra balance sheet for the pages backing those objects.
>
> This unusual refcounting design (reference counts usually track
> pointers to an object) is only for historical reasons: memcg used to
> not take any css references and simply stalled offlining until all
> charges had been reparented and the page counters had dropped to
> zero. When we got rid of the reparenting requirement, the simple
> mechanical translation was to take a reference for every charge.
>
> More historical context can be found in commit e8ea14cc6ead ("mm:
> memcontrol: take a css reference for each charged page"),
> commit 64f219938941 ("mm: memcontrol: remove obsolete kmemcg pinning
> tricks") and commit b2052564e66d ("mm: memcontrol: continue cache
> reclaim from offlined groups").
>
> The new slab controller exposes the limitations in this scheme, so
> let's switch it to a more idiomatic reference counting model based on
> actual kernel pointers to the memcg:
>
> - The per-cpu stock holds a reference to the memcg it's caching
>
> - User pages hold a reference for their page->mem_cgroup. Transparent
>   huge pages will no longer acquire tail references in advance, we'll
>   get them if needed during the split.
>
> - Kernel pages hold a reference for their page->mem_cgroup
>
> - Pages allocated in the root cgroup will acquire and release css
>   references for simplicity. css_get() and css_put() optimize that.
>
> - The current memcg_charge_slab() already hacked around the per-charge
>   references; this change gets rid of that as well.
>
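
For anyone else reviewing: the invariant after this patch, as I read
it, is one css reference per kernel pointer to the memcg, instead of
one per charged page. A minimal userspace model of the rule (my
sketch, not the kernel code; memcg_get()/memcg_put() stand in for
css_get(&memcg->css)/css_put(&memcg->css), and all types here are
simplified stand-ins for the real structures):

    #include <stddef.h>

    struct mem_cgroup { int refcnt; };
    struct page { struct mem_cgroup *mem_cgroup; };

    static void memcg_get(struct mem_cgroup *memcg) { memcg->refcnt++; }
    static void memcg_put(struct mem_cgroup *memcg) { memcg->refcnt--; }

    /* A reference is taken when a pointer to the memcg is installed... */
    static void commit_charge(struct page *page, struct mem_cgroup *memcg)
    {
            memcg_get(memcg);
            page->mem_cgroup = memcg;
    }

    /*
     * ...and dropped when that pointer is cleared. A compound page is
     * still just one pointer, so one reference, regardless of order.
     */
    static void uncharge_page(struct page *page)
    {
            struct mem_cgroup *memcg = page->mem_cgroup;

            page->mem_cgroup = NULL;
            memcg_put(memcg);
    }
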
> Roman:
> 1) Rebased on top of the current mm tree: added css_get() in
>    mem_cgroup_charge(), dropped mem_cgroup_try_charge() part
> 2) I've reformatted commit references in the commit log to make
>    checkpatch.pl happy.
>
> Signed-off-by: Johannes Weiner <hannes@cmpxchg.org>
> Signed-off-by: Roman Gushchin <guro@fb.com>
> Acked-by: Roman Gushchin <guro@fb.com>
> ---
>  mm/memcontrol.c | 37 +++++++++++++++++++++----------------
>  mm/slab.h       |  2 --
>  2 files changed, 21 insertions(+), 18 deletions(-)
>
> diff --git a/mm/memcontrol.c b/mm/memcontrol.c
> index d18bf93e0f19..80282b2e8b7f 100644
> --- a/mm/memcontrol.c
> +++ b/mm/memcontrol.c
> @@ -2094,13 +2094,17 @@ static void drain_stock(struct memcg_stock_pcp *stock)
>  {
>         struct mem_cgroup *old = stock->cached;
>
> +       if (!old)
> +               return;
> +
>         if (stock->nr_pages) {
>                 page_counter_uncharge(&old->memory, stock->nr_pages);
>                 if (do_memsw_account())
>                         page_counter_uncharge(&old->memsw, stock->nr_pages);
> -               css_put_many(&old->css, stock->nr_pages);
>                 stock->nr_pages = 0;
>         }
> +
> +       css_put(&old->css);
>         stock->cached = NULL;
>  }
>
> @@ -2136,6 +2140,7 @@ static void refill_stock(struct mem_cgroup *memcg, unsigned int nr_pages)
>         stock = this_cpu_ptr(&memcg_stock);
>         if (stock->cached != memcg) { /* reset if necessary */
>                 drain_stock(stock);
> +               css_get(&memcg->css);
>                 stock->cached = memcg;
>         }
>         stock->nr_pages += nr_pages;
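
So after these two hunks the per-cpu stock holds exactly one css
reference whenever stock->cached is non-NULL: taken in refill_stock()
when the cached memcg changes, dropped in drain_stock(). The new !old
early return is what makes the unconditional css_put() safe when the
stock is empty. Continuing the simplified model above (again a sketch,
with the page_counter uncharging elided):

    struct memcg_stock { struct mem_cgroup *cached; unsigned int nr_pages; };

    static void drain_stock(struct memcg_stock *stock)
    {
            struct mem_cgroup *old = stock->cached;

            if (!old)
                    return;                 /* nothing cached, no ref to drop */
            stock->nr_pages = 0;            /* uncharging of old->memory elided */
            memcg_put(old);                 /* the ref held for stock->cached */
            stock->cached = NULL;
    }

    static void refill_stock(struct memcg_stock *stock,
                             struct mem_cgroup *memcg, unsigned int nr_pages)
    {
            if (stock->cached != memcg) {
                    drain_stock(stock);
                    memcg_get(memcg);       /* one ref for the new cached pointer */
                    stock->cached = memcg;
            }
            stock->nr_pages += nr_pages;
    }
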
> @@ -2594,12 +2599,10 @@ static int try_charge(struct mem_cgroup *memcg, gfp_t gfp_mask,
>         page_counter_charge(&memcg->memory, nr_pages);
>         if (do_memsw_account())
>                 page_counter_charge(&memcg->memsw, nr_pages);
> -       css_get_many(&memcg->css, nr_pages);
>
>         return 0;
>
>  done_restock:
> -       css_get_many(&memcg->css, batch);
>         if (batch > nr_pages)
>                 refill_stock(memcg, batch - nr_pages);
>
> @@ -2657,8 +2660,6 @@ static void cancel_charge(struct mem_cgroup *memcg, unsigned int nr_pages)
>         page_counter_uncharge(&memcg->memory, nr_pages);
>         if (do_memsw_account())
>                 page_counter_uncharge(&memcg->memsw, nr_pages);
> -
> -       css_put_many(&memcg->css, nr_pages);
>  }
>  #endif
>
> @@ -2964,6 +2965,7 @@ int __memcg_kmem_charge_page(struct page *page, gfp_t gfp, int order)
>                 if (!ret) {
>                         page->mem_cgroup = memcg;
>                         __SetPageKmemcg(page);
> +                       return 0;
>                 }
>         }
>         css_put(&memcg->css);
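
Worth noting: the added "return 0" is load-bearing here. On success
the reference taken at the top of __memcg_kmem_charge_page() is now
owned by page->mem_cgroup, so the function must not fall through to
the final css_put(); on failure the put still runs. In the model above
(charge_ok is a hypothetical stand-in for the __memcg_kmem_charge()
result):

    static int kmem_charge_page(struct page *page, struct mem_cgroup *memcg,
                                int charge_ok)
    {
            memcg_get(memcg);               /* ref held across the charge attempt */
            if (charge_ok) {
                    page->mem_cgroup = memcg;
                    return 0;               /* the page keeps the reference */
            }
            memcg_put(memcg);               /* charge failed, drop the temporary ref */
            return -1;                      /* stands in for -ENOMEM */
    }
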
> @@ -2986,12 +2988,11 @@ void __memcg_kmem_uncharge_page(struct page *page, int order)
>         VM_BUG_ON_PAGE(mem_cgroup_is_root(memcg), page);
>         __memcg_kmem_uncharge(memcg, nr_pages);
>         page->mem_cgroup = NULL;
> +       css_put(&memcg->css);
>
>         /* slab pages do not have PageKmemcg flag set */
>         if (PageKmemcg(page))
>                 __ClearPageKmemcg(page);
> -
> -       css_put_many(&memcg->css, nr_pages);
>  }
>  #endif /* CONFIG_MEMCG_KMEM */
>
> @@ -3003,13 +3004,16 @@ void __memcg_kmem_uncharge_page(struct page *page, int order)
>   */
>  void mem_cgroup_split_huge_fixup(struct page *head)
>  {
> +       struct mem_cgroup *memcg = head->mem_cgroup;
>         int i;
>
>         if (mem_cgroup_disabled())

Should this be "if (mem_cgroup_disabled() || !memcg)" to also bail out
when head->mem_cgroup is NULL?

>                 return;
>
> -       for (i = 1; i < HPAGE_PMD_NR; i++)
> -               head[i].mem_cgroup = head->mem_cgroup;
> +       for (i = 1; i < HPAGE_PMD_NR; i++) {
> +               css_get(&memcg->css);
> +               head[i].mem_cgroup = memcg;
> +       }
>  }
>  #endif /* CONFIG_TRANSPARENT_HUGEPAGE */
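
Independent of the NULL question above, the behavioral change in this
hunk is that tail pages now take their references lazily at split time
instead of the charge path acquiring HPAGE_PMD_NR references up front.
In the model (512 subpages purely for illustration):

    #define HPAGE_NR 512    /* illustrative; HPAGE_PMD_NR on x86-64 */

    static void split_huge_fixup(struct page head[HPAGE_NR])
    {
            struct mem_cgroup *memcg = head[0].mem_cgroup;

            for (int i = 1; i < HPAGE_NR; i++) {
                    memcg_get(memcg);       /* one ref per new tail pointer */
                    head[i].mem_cgroup = memcg;
            }
    }
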
>
> @@ -5454,7 +5458,10 @@ static int mem_cgroup_move_account(struct page *page,
>          */
>         smp_mb();
>
> -       page->mem_cgroup = to;  /* caller should have done css_get */
> +       css_get(&to->css);
> +       css_put(&from->css);
> +
> +       page->mem_cgroup = to;
>
>         __unlock_page_memcg(from);
>
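
The move path follows the same rule: the reference tracks the pointer,
so switching page->mem_cgroup from one cgroup to another is a get on
'to' plus a put on 'from', with no per-page reference counts involved.
Sketched in the same model:

    static void move_account(struct page *page, struct mem_cgroup *from,
                             struct mem_cgroup *to)
    {
            memcg_get(to);          /* ref for the new value of the pointer */
            memcg_put(from);        /* drop the ref held for the old value */
            page->mem_cgroup = to;
    }
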
> @@ -6540,6 +6547,7 @@ int mem_cgroup_charge(struct page *page, struct mm_struct *mm, gfp_t gfp_mask)
>         if (ret)
>                 goto out_put;
>
> +       css_get(&memcg->css);
>         commit_charge(page, memcg);
>
>         local_irq_disable();
> @@ -6594,9 +6602,6 @@ static void uncharge_batch(const struct uncharge_gather *ug)
>         __this_cpu_add(ug->memcg->vmstats_percpu->nr_page_events, ug->nr_pages);
>         memcg_check_events(ug->memcg, ug->dummy_page);
>         local_irq_restore(flags);
> -
> -       if (!mem_cgroup_is_root(ug->memcg))
> -               css_put_many(&ug->memcg->css, ug->nr_pages);
>  }
>
>  static void uncharge_page(struct page *page, struct uncharge_gather *ug)
> @@ -6634,6 +6639,7 @@ static void uncharge_page(struct page *page, struct uncharge_gather *ug)
>
>         ug->dummy_page = page;
>         page->mem_cgroup = NULL;
> +       css_put(&ug->memcg->css);
>  }
>
>  static void uncharge_list(struct list_head *page_list)
> @@ -6739,8 +6745,8 @@ void mem_cgroup_migrate(struct page *oldpage, struct page *newpage)
>         page_counter_charge(&memcg->memory, nr_pages);
>         if (do_memsw_account())
>                 page_counter_charge(&memcg->memsw, nr_pages);
> -       css_get_many(&memcg->css, nr_pages);
>
> +       css_get(&memcg->css);
>         commit_charge(newpage, memcg);
>
>         local_irq_save(flags);
> @@ -6977,8 +6983,7 @@ void mem_cgroup_swapout(struct page *page, swp_entry_t entry)
>         mem_cgroup_charge_statistics(memcg, page, -nr_entries);
>         memcg_check_events(memcg, page);
>
> -       if (!mem_cgroup_is_root(memcg))
> -               css_put_many(&memcg->css, nr_entries);
> +       css_put(&memcg->css);
>  }
>
>  /**
> diff --git a/mm/slab.h b/mm/slab.h
> index 633eedb6bad1..8a574d9361c1 100644
> --- a/mm/slab.h
> +++ b/mm/slab.h
> @@ -373,9 +373,7 @@ static __always_inline int memcg_charge_slab(struct page *page,
>         lruvec = mem_cgroup_lruvec(memcg, page_pgdat(page));
>         mod_lruvec_state(lruvec, cache_vmstat_idx(s), nr_pages << PAGE_SHIFT);
>
> -       /* transer try_charge() page references to kmem_cache */
>         percpu_ref_get_many(&s->memcg_params.refcnt, nr_pages);
> -       css_put_many(&memcg->css, nr_pages);
>  out:
>         css_put(&memcg->css);
>         return ret;
> --
> 2.25.4
>
