From: Michal Hocko <mhocko@kernel.org>
To: Shakeel Butt <shakeelb@google.com>
Cc: Andrey Ryabinin <aryabinin@virtuozzo.com>,
	Andrew Morton <akpm@linux-foundation.org>,
	Cgroups <cgroups@vger.kernel.org>,
	LKML <linux-kernel@vger.kernel.org>,
	Linux MM <linux-mm@kvack.org>,
	Johannes Weiner <hannes@cmpxchg.org>,
	Vladimir Davydov <vdavydov.dev@gmail.com>
Subject: Re: [PATCH v5 2/2] mm/memcontrol.c: Reduce reclaim retries in mem_cgroup_resize_limit()
Date: Fri, 19 Jan 2018 16:11:18 +0100	[thread overview]
Message-ID: <20180119151118.GE6584@dhcp22.suse.cz> (raw)
In-Reply-To: <CALvZod7HS6P0OU6Rps8JeMJycaPd4dF5NjxV8k1y2-yosF2bdA@mail.gmail.com>

On Fri 19-01-18 06:49:29, Shakeel Butt wrote:
> On Fri, Jan 19, 2018 at 5:35 AM, Michal Hocko <mhocko@kernel.org> wrote:
> > On Fri 19-01-18 16:25:44, Andrey Ryabinin wrote:
> >> Currently mem_cgroup_resize_limit() retries setting the limit after
> >> reclaiming 32 pages. It makes more sense to reclaim the needed
> >> amount of pages right away.
> >>
> >> This works noticeably faster, especially if 'usage - limit' is big.
> >> E.g. bringing down the limit from 4G to 50M:
> >>
> >> Before:
> >>  # perf stat echo 50M > memory.limit_in_bytes
> >>
> >>      Performance counter stats for 'echo 50M':
> >>
> >>             386.582382      task-clock (msec)         #    0.835 CPUs utilized
> >>                  2,502      context-switches          #    0.006 M/sec
> >>
> >>            0.463244382 seconds time elapsed
> >>
> >> After:
> >>  # perf stat echo 50M > memory.limit_in_bytes
> >>
> >>      Performance counter stats for 'echo 50M':
> >>
> >>             169.403906      task-clock (msec)         #    0.849 CPUs utilized
> >>                     14      context-switches          #    0.083 K/sec
> >>
> >>            0.199536900 seconds time elapsed
> >
> > But I am not going to ack this one. As already stated, this has a
> > risk of over-reclaim if a lot of charges are freed along with this
> > shrinking. This is more of a theoretical concern so I am _not_ going to
> 
> If you don't mind, can you explain why over-reclaim is a concern at
> all? The only side effect of over-reclaim I can think of is that the
> job might suffer a bit more (more swapins & pageins). Shouldn't this
> be within the expectations of a user decreasing the limits?

It is not a disaster. But it is an unexpected side effect of the
implementation. If you have a limit of 1GB and want to reduce it to
500MB, it would be quite surprising to land at 200MB just because
somebody was freeing 300MB in parallel. Is this likely? Probably not,
but the more often the limit is changed and the larger the difference,
the more likely it becomes. Keep retrying in smaller amounts and you
will not see the above happen.
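
To make the race concrete, here is a simplified, hypothetical
interleaving (assuming the reclaim target is computed up front as
usage - new_limit, which is what the patch effectively does):

	writer:   usage = 1GB, new limit = 500MB
	          -> reclaim target = usage - limit = 500MB
	workload: uncharges 300MB in parallel (task exit, truncate, ...)
	writer:   still reclaims the full 500MB target
	          -> usage ends up around 200MB, well below the
	             requested 500MB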

And to be honest, I do not really see why retrying from
mem_cgroup_resize_limit() should be so much faster than retrying from
the direct reclaim path. We are doing SWAP_CLUSTER_MAX batches either
way. The mem_cgroup_resize_limit() loop adds _some_ overhead, but I am
not really sure why it should be that large.
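
For reference, the current loop looks roughly like this (a simplified
sketch; locking, signal checks and memsw handling trimmed):

	do {
		/* try to set the new limit; succeeds once usage fits */
		ret = page_counter_limit(&memcg->memory, limit);
		if (!ret)
			break;

		/*
		 * Ask for a single page; vmscan bumps the request up
		 * to SWAP_CLUSTER_MAX internally, so every iteration
		 * reclaims one small batch and then retries the limit.
		 */
		if (!try_to_free_mem_cgroup_pages(memcg, 1,
						  GFP_KERNEL, true))
			retry_count--;
	} while (retry_count);
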
-- 
Michal Hocko
SUSE Labs
