From: Andrey Ryabinin <aryabinin@virtuozzo.com>
To: Michal Hocko <mhocko@kernel.org>
Cc: Andrew Morton <akpm@linux-foundation.org>,
	Johannes Weiner <hannes@cmpxchg.org>,
	Vladimir Davydov <vdavydov.dev@gmail.com>,
	cgroups@vger.kernel.org, linux-mm@kvack.org,
	linux-kernel@vger.kernel.org, Shakeel Butt <shakeelb@google.com>
Subject: Re: [PATCH v4] mm/memcg: try harder to decrease [memory,memsw].limit_in_bytes
Date: Fri, 12 Jan 2018 00:59:38 +0300	[thread overview]
Message-ID: <560a77b5-02d7-cbae-35f3-0b20a1c384c2@virtuozzo.com> (raw)
In-Reply-To: <20180111162947.GG1732@dhcp22.suse.cz>

On 01/11/2018 07:29 PM, Michal Hocko wrote:
> On Thu 11-01-18 18:23:57, Andrey Ryabinin wrote:
>> On 01/11/2018 03:46 PM, Michal Hocko wrote:
>>> On Thu 11-01-18 15:21:33, Andrey Ryabinin wrote:
>>>>
>>>>
>>>> On 01/11/2018 01:42 PM, Michal Hocko wrote:
>>>>> On Wed 10-01-18 15:43:17, Andrey Ryabinin wrote:
>>>>> [...]
>>>>>> @@ -2506,15 +2480,13 @@ static int mem_cgroup_resize_limit(struct mem_cgroup *memcg,
>>>>>>  		if (!ret)
>>>>>>  			break;
>>>>>>  
>>>>>> -		try_to_free_mem_cgroup_pages(memcg, 1, GFP_KERNEL, !memsw);
>>>>>> -
>>>>>> -		curusage = page_counter_read(counter);
>>>>>> -		/* Usage is reduced ? */
>>>>>> -		if (curusage >= oldusage)
>>>>>> -			retry_count--;
>>>>>> -		else
>>>>>> -			oldusage = curusage;
>>>>>> -	} while (retry_count);
>>>>>> +		usage = page_counter_read(counter);
>>>>>> +		if (!try_to_free_mem_cgroup_pages(memcg, usage - limit,
>>>>>> +						GFP_KERNEL, !memsw)) {
>>>>>
>>>>> If the usage drops below limit in the meantime then you get underflow
>>>>> and reclaim the whole memcg. I do not think this is a good idea. This
>>>>> can also lead to over reclaim. Why don't you simply stick with the
>>>>> original SWAP_CLUSTER_MAX (aka 1 for try_to_free_mem_cgroup_pages)?
>>>>>
>>>>
>>>> Because, if the new limit is gigabytes below the current usage, retrying to set the
>>>> new limit after reclaiming only 32 pages seems unreasonable.
>>>
>>> Who would do insanity like that?
>>>
>>
>> What's insane about that?
> 
> I haven't seen this being done in practice. Why would you want to
> reclaim GBs of memory from a cgroup? Anyway, if you believe this is
> really needed then simply do it in a separate patch.
>  

For the same reason anyone would want to set a memory limit on a job that generates
too much pressure and disrupts others. Whether it's GBs or MBs is just a matter of scale.

A more concrete example is a workload that generates lots of page cache. Without a limit (or with too high a limit)
it wakes up kswapd, which starts thrashing all other cgroups. That's pretty bad for mostly-anon cgroups,
as we may constantly swap hot data back and forth.


>>>> @@ -2487,8 +2487,8 @@ static int mem_cgroup_resize_limit(struct mem_cgroup *memcg,
>>>>  		if (!ret)
>>>>  			break;
>>>>  
>>>> -		usage = page_counter_read(counter);
>>>> -		if (!try_to_free_mem_cgroup_pages(memcg, usage - limit,
>>>> +		nr_pages = max_t(long, 1, page_counter_read(counter) - limit);
>>>> +		if (!try_to_free_mem_cgroup_pages(memcg, nr_pages,
>>>>  						GFP_KERNEL, !memsw)) {
>>>>  			ret = -EBUSY;
>>>>  			break;
>>>
>>> How does this address the over reclaim concern?
>>  
>> It protects from over reclaim due to underflow.
> 
> I do not think so. Consider that this reclaim races with other
> reclaimers. Now you are reclaiming a large chunk so you might end up
> reclaiming more than necessary. SWAP_CLUSTER_MAX would reduce the over
> reclaim to be negligible.
> 

I did consider this, and I think I already explained that sort of race in a previous email.
Whether "Task B" is really a task in the cgroup or actually a bunch of reclaimers
doesn't matter. That doesn't change anything.

