From: Minchan Kim <minchan@kernel.org>
To: Johannes Weiner <hannes@cmpxchg.org>
Cc: Andrew Morton <akpm@linux-foundation.org>,
<linux-kernel@vger.kernel.org>, <linux-mm@kvack.org>,
Rik van Riel <riel@redhat.com>,
Sangwoo Park <sangwoo2.park@lge.com>
Subject: Re: [PATCH v1 3/3] mm: per-process reclaim
Date: Fri, 17 Jun 2016 15:43:30 +0900 [thread overview]
Message-ID: <20160617064330.GD2374@bbox> (raw)
In-Reply-To: <20160616144102.GA17692@cmpxchg.org>
Hi Hannes,
On Thu, Jun 16, 2016 at 10:41:02AM -0400, Johannes Weiner wrote:
> On Wed, Jun 15, 2016 at 09:40:27AM +0900, Minchan Kim wrote:
> > A question is it seems cgroup2 doesn't have per-cgroup swappiness.
> > Why?
> >
> > I think we need it in one-cgroup-per-app model.
>
> Can you explain why you think that?
>
> As we have talked about this recently in the LRU balancing thread,
> swappiness is the cost factor between file IO and swapping, so the
> only situation I can imagine you'd need a memcg swappiness setting is
> when you have different cgroups use different storage devices that do
> not have comparable speeds.
>
> So I'm not sure I understand the relationship to an app-group model.
Sorry for the lack of information; I should have written more clearly.
In fact, what we need is *per-memcg swap device* support.
What I want is to avoid killing background applications when memory
runs short, because cold-launching an app takes a very long time
compared to resuming it (i.e., just switching). I also want to keep
an amount of free pages in memory so that new application startup is
not stalled by reclaim activity.
To get free memory, I want to reclaim from less important apps rather
than killing them. For this, we can support two swap devices: one is
zram, the other is slow storage that is much bigger than the zram
size. Then we can use the storage swap to reclaim pages from
unimportant apps, while we use the zram swap for important apps
(e.g., foreground apps, system services, daemons, and so on).
IOW, we want to support multiple swap devices with one-cgroup-per-app,
where the devices' storage speeds are totally different.
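The routing policy described above can be sketched in userspace
pseudo-form. This is only an illustration of the decision being
argued for, not kernel code: the device paths and the
`pick_swap_device` helper are hypothetical, and the kernel would need
per-memcg swap-device support for such a choice to actually take
effect per cgroup.

```python
# Illustration of the per-app swap routing policy (hypothetical names).
FAST_SWAP = "/dev/zram0"      # small, fast, compressed in-memory swap
SLOW_SWAP = "/dev/mmcblk0p3"  # large but slow storage-backed swap

def pick_swap_device(app_importance: str) -> str:
    """Route important apps (foreground, system services, daemons) to
    zram so resume stays fast; route background apps to the slow
    storage swap, which is bigger and whose latency matters less than
    the cost of a cold launch after an OOM kill."""
    if app_importance in ("foreground", "system", "daemon"):
        return FAST_SWAP
    return SLOW_SWAP
```

Note that swap priorities alone (`swapon -p`) cannot express this:
the kernel fills swap devices in priority order globally, for every
process alike, which is exactly why a per-memcg knob is needed.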
Thread overview: 21+ messages
2016-06-13 7:50 [PATCH v1 0/3] per-process reclaim Minchan Kim
2016-06-13 7:50 ` [PATCH v1 1/3] mm: vmscan: refactoring force_reclaim Minchan Kim
2016-06-13 7:50 ` [PATCH v1 2/3] mm: vmscan: shrink_page_list with multiple zones Minchan Kim
2016-06-13 7:50 ` [PATCH v1 3/3] mm: per-process reclaim Minchan Kim
2016-06-13 15:06 ` Johannes Weiner
2016-06-15 0:40 ` Minchan Kim
2016-06-16 11:07 ` Michal Hocko
2016-06-16 14:41 ` Johannes Weiner
2016-06-17 6:43 ` Minchan Kim [this message]
2016-06-17 7:24 ` Balbir Singh
2016-06-17 7:57 ` Vinayak Menon
2016-06-13 17:06 ` Rik van Riel
2016-06-15 1:01 ` Minchan Kim
2016-06-13 11:50 ` [PATCH v1 0/3] " Chen Feng
2016-06-13 12:22 ` ZhaoJunmin Zhao(Junmin)
2016-06-15 0:43 ` Minchan Kim
2016-06-13 13:29 ` Vinayak Menon
2016-06-15 0:57 ` Minchan Kim
2016-06-16 4:21 ` Vinayak Menon
[not found] <040501d1c55a$81d51910$857f4b30$@alibaba-inc.com>
2016-06-13 10:07 ` [PATCH v1 3/3] mm: " Hillf Danton
2016-06-15 0:46 ` Minchan Kim