From: Yang Shi <yang.shi@linux.alibaba.com>
To: Greg Thelen <gthelen@google.com>, Michal Hocko <mhocko@kernel.org>
Cc: hannes@cmpxchg.org, akpm@linux-foundation.org,
linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: Re: [RFC PATCH 0/3] mm: memcontrol: delayed force empty
Date: Fri, 4 Jan 2019 14:57:33 -0800 [thread overview]
Message-ID: <5d9579aa-cfdc-8254-bfd9-63c4e1bfa4c5@linux.alibaba.com> (raw)
In-Reply-To: <xr93y380xk9k.fsf@gthelen.svl.corp.google.com>
On 1/4/19 12:03 PM, Greg Thelen wrote:
> Yang Shi <yang.shi@linux.alibaba.com> wrote:
>
>> On 1/3/19 11:23 AM, Michal Hocko wrote:
>>> On Thu 03-01-19 11:10:00, Yang Shi wrote:
>>>> On 1/3/19 10:53 AM, Michal Hocko wrote:
>>>>> On Thu 03-01-19 10:40:54, Yang Shi wrote:
>>>>>> On 1/3/19 10:13 AM, Michal Hocko wrote:
>>> [...]
>>>>>>> Is there any reason for your scripts to be strictly sequential here? In
>>>>>>> other words why cannot you offload those expensive operations to a
>>>>>>> detached context in _userspace_?
>>>>>> I would say it does not have to be strictly sequential. The above script
>>>>>> is just an example to illustrate the pattern. But sometimes we may hit
>>>>>> such a pattern due to the complicated cluster scheduling and container
>>>>>> scheduling in the production environment; for example, the creation
>>>>>> process might be scheduled to the same CPU which is doing force_empty. I
>>>>>> have to say I don't know too much about the internals of the container
>>>>>> scheduling.
>>>>> In that case I do not see a strong reason to implement the offloading
>>>>> into the kernel. It is additional code and semantics to maintain.
>>>> Yes, it does introduce some additional code and semantics, but IMHO it is
>>>> quite simple and straightforward, isn't it? It just utilizes the existing
>>>> css offline worker. And that couple of lines of code does improve
>>>> throughput for some real use cases.
>>> I do not really care that it is a few LOC. It is more important that it
>>> conflates force_empty with the offlining logic. There was a good reason to
>>> remove reparenting/emptying the memcg during offline. Considering that you
>>> can offload force_empty from userspace trivially, I do not see any reason
>>> to implement it in the kernel.
>> Er, I may not have articulated it well in the earlier email: force_empty
>> cannot be offloaded from userspace *trivially*. IOW, the container
>> scheduler may unexpectedly overcommit something due to the stall of
>> synchronous force empty, which can't be figured out by userspace before it
>> actually happens. The scheduler doesn't know how long force_empty will
>> take. If force_empty could be offloaded by the kernel, it would make the
>> scheduler's life much easier. This is not something userspace can do.
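
For context, the userspace-side offloading discussed above would look roughly
like the sketch below. The cgroup v1 mount point and group name are
assumptions, not from the patches; the point is that the write to
memory.force_empty is synchronous and can stall for a long time on a large
memcg, which is why it has to be detached:

```shell
#!/bin/sh
# Sketch: offload the expensive force_empty to a detached userspace
# context so container teardown does not stall the scheduler.

reclaim_and_rmdir() {
    cg="$1"
    # Writing memory.force_empty triggers synchronous reclaim; on a
    # large memcg this write can take a long time.
    if [ -f "$cg/memory.force_empty" ]; then
        echo 1 > "$cg/memory.force_empty"
    fi
    rmdir "$cg"
}

# Usage (backgrounded so the caller is not blocked); path is an assumption:
#   reclaim_and_rmdir /sys/fs/cgroup/memory/mycontainer &
```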
> If kernel workqueues are doing more work (i.e. force_empty processing),
> then it seems like the time to offline could grow. I'm not sure if
> that's important.
One thing I can think of is that this may slow down the recycling of memcg
ids, which could exhaust memcg ids under some extreme workload. But I
don't see this as a problem in our workload.
Thanks,
Yang
>
> I assume that if we make force_empty an async side effect of rmdir, then
> the user space scheduler would be unable to immediately assume the
> rmdir'd container's memory is available without subjecting a new container
> to direct reclaim. So it seems like user space would use a mechanism to
> wait for reclaim: either the existing sync force_empty or polling
> meminfo/etc. waiting for free memory to appear.
>
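
The polling approach Greg mentions could be sketched like this (a minimal
illustration, not from the patches; the helper name, threshold, and the
optional second argument are made up for testability):

```shell
#!/bin/sh
# Sketch: block until at least $1 kB are reported free, by polling
# the MemFree line of /proc/meminfo once per second.

wait_for_free_kb() {
    want_kb="$1"
    src="${2:-/proc/meminfo}"   # second arg only for illustration/testing
    while :; do
        free_kb=$(awk '/^MemFree:/ {print $2}' "$src")
        [ "$free_kb" -ge "$want_kb" ] && return 0
        sleep 1
    done
}

# Usage (illustrative threshold): wait for ~1GiB free before admitting
# a new container:
#   wait_for_free_kb 1048576
```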
>>>>> I think it is more important to discuss whether we want to introduce
>>>>> force_empty in cgroup v2.
>>>> We would prefer to have it in v2 as well.
>>> Then bring this up in a separate email thread please.
>> Sure. Will prepare the patches later.
>>
>> Thanks,
>> Yang