From: Johannes Weiner <hannes@cmpxchg.org>
To: Suren Baghdasaryan <surenb@google.com>
Cc: tj@kernel.org, mhocko@kernel.org, vdavydov.dev@gmail.com,
	akpm@linux-foundation.org, shakeelb@google.com, guro@fb.com,
	songmuchun@bytedance.com, shy828301@gmail.com, alexs@kernel.org,
	alexander.h.duyck@linux.intel.com, richard.weiyang@gmail.com,
	vbabka@suse.cz, axboe@kernel.dk, iamjoonsoo.kim@lge.com,
	david@redhat.com, willy@infradead.org, apopple@nvidia.com,
	minchan@kernel.org, linmiaohe@huawei.com,
	linux-kernel@vger.kernel.org, cgroups@vger.kernel.org,
	linux-mm@kvack.org, kernel-team@android.com
Subject: Re: [PATCH 2/3] mm, memcg: inline mem_cgroup_{charge/uncharge} to improve disabled memcg config
Date: Fri, 9 Jul 2021 10:48:48 -0400	[thread overview]
Message-ID: <YOhh0IabpRk/W/qR@cmpxchg.org> (raw)
In-Reply-To: <20210709000509.2618345-3-surenb@google.com>

On Thu, Jul 08, 2021 at 05:05:08PM -0700, Suren Baghdasaryan wrote:
> Inline the mem_cgroup_{charge/uncharge} and mem_cgroup_uncharge_list
> functions to perform the mem_cgroup_disabled static key check inline
> before calling the main body of the function. This minimizes the memcg
> overhead in the pagefault and exit_mmap paths when memcgs are disabled
> using the cgroup_disable=memory command-line option.
> This change results in ~0.4% overhead reduction when running the PFT test,
> comparing {CONFIG_MEMCG=n} against {CONFIG_MEMCG=y, cgroup_disable=memory}
> configurations on an 8-core ARM64 Android device.
> 
> Signed-off-by: Suren Baghdasaryan <surenb@google.com>

Sounds reasonable to me as well. One comment:

> @@ -693,13 +693,59 @@ static inline bool mem_cgroup_below_min(struct mem_cgroup *memcg)
>  		page_counter_read(&memcg->memory);
>  }
>  
> -int mem_cgroup_charge(struct page *page, struct mm_struct *mm, gfp_t gfp_mask);
> +struct mem_cgroup *get_mem_cgroup_from_mm(struct mm_struct *mm);
> +
> +int __mem_cgroup_charge(struct page *page, struct mem_cgroup *memcg,
> +			gfp_t gfp);
> +/**
> + * mem_cgroup_charge - charge a newly allocated page to a cgroup
> + * @page: page to charge
> + * @mm: mm context of the victim
> + * @gfp_mask: reclaim mode
> + *
> + * Try to charge @page to the memcg that @mm belongs to, reclaiming
> + * pages according to @gfp_mask if necessary. If @mm is NULL, try to
> + * charge to the active memcg.
> + *
> + * Do not use this for pages allocated for swapin.
> + *
> + * Returns 0 on success. Otherwise, an error code is returned.
> + */
> +static inline int mem_cgroup_charge(struct page *page, struct mm_struct *mm,
> +				    gfp_t gfp_mask)
> +{
> +	struct mem_cgroup *memcg;
> +	int ret;
> +
> +	if (mem_cgroup_disabled())
> +		return 0;
> +
> +	memcg = get_mem_cgroup_from_mm(mm);
> +	ret = __mem_cgroup_charge(page, memcg, gfp_mask);
> +	css_put(&memcg->css);
> +
> +	return ret;

Why not do

int __mem_cgroup_charge(struct page *page, struct mm_struct *mm,
			gfp_t gfp_mask);

static inline int mem_cgroup_charge(struct page *page, struct mm_struct *mm,
				    gfp_t gfp_mask)
{
	if (mem_cgroup_disabled())
		return 0;

	return __mem_cgroup_charge(page, mm, gfp_mask);
}

like in the other cases as well?

That would avoid inlining two separate function calls into all the
callsites...
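
(For illustration only, with a made-up callsite: something like

	if (mem_cgroup_charge(page, vma->vm_mm, GFP_KERNEL))
		goto oom;

currently expands to the mem_cgroup_disabled() check plus calls to
get_mem_cgroup_from_mm(), __mem_cgroup_charge() and the css_put()
refcounting at every such site; with the wrapper above it's just the
static-key check and a single call.)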

There is an (internal) __mem_cgroup_charge() already, but you can
rename it charge_memcg().
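
Completely untested sketch of how the out-of-line side could then look
in memcontrol.c (it's just your inline body moved out of line, with the
internal function renamed as above):

int __mem_cgroup_charge(struct page *page, struct mm_struct *mm,
			gfp_t gfp_mask)
{
	struct mem_cgroup *memcg;
	int ret;

	memcg = get_mem_cgroup_from_mm(mm);
	ret = charge_memcg(page, memcg, gfp_mask);
	css_put(&memcg->css);

	return ret;
}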




Thread overview: 26+ messages
2021-07-09  0:05 [PATCH 0/3] mm, memcg: Optimizations to minimize overhead when memcgs are disabled Suren Baghdasaryan
2021-07-09  0:05 ` [PATCH 1/3] mm, memcg: add mem_cgroup_disabled checks in vmpressure and swap-related functions Suren Baghdasaryan
2021-07-09 14:41   ` Johannes Weiner
2021-07-09 16:00   ` Shakeel Butt
2021-07-09  0:05 ` [PATCH 2/3] mm, memcg: inline mem_cgroup_{charge/uncharge} to improve disabled memcg config Suren Baghdasaryan
2021-07-09 14:48   ` Johannes Weiner [this message]
2021-07-09 15:18     ` Suren Baghdasaryan
2021-07-09 17:17       ` Suren Baghdasaryan
2021-07-09  0:05 ` [PATCH 3/3] mm, memcg: inline swap-related functions " Suren Baghdasaryan
2021-07-09 14:49   ` Johannes Weiner
2021-07-09 16:05   ` Shakeel Butt
