From: Johannes Weiner <hannes@cmpxchg.org>
To: Minchan Kim <minchan@kernel.org>
Cc: Andrew Morton <akpm@linux-foundation.org>,
	linux-kernel@vger.kernel.org, linux-mm@kvack.org,
	Rik van Riel <riel@redhat.com>,
	Sangwoo Park <sangwoo2.park@lge.com>
Subject: Re: [PATCH v1 3/3] mm: per-process reclaim
Date: Mon, 13 Jun 2016 11:06:53 -0400
Message-ID: <20160613150653.GA30642@cmpxchg.org>
In-Reply-To: <1465804259-29345-4-git-send-email-minchan@kernel.org>

Hi Minchan,

On Mon, Jun 13, 2016 at 04:50:58PM +0900, Minchan Kim wrote:
> These days, there are many platforms in the embedded market, and
> sometimes they have more hints about the working set than the kernel
> does, so they want to be involved in memory management more heavily,
> like Android's lowmemory killer and ashmem, or a user daemon with a
> low-memory notifier.
> 
> This patch adds a new method for userspace to manage memory
> efficiently via the knob "/proc/<pid>/reclaim", so the platform can
> reclaim any process at any time.
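
For reference, userspace would presumably drive that knob along these
lines (a sketch; the exact keywords are an assumption based on the
series description, e.g. "file", "anon", "all"):

  # ask the kernel to reclaim the file-backed pages of PID 1234
  echo file > /proc/1234/reclaim

  # or reclaim everything, anonymous pages included
  echo all > /proc/1234/reclaim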

Cgroups are our canonical way to control system resources at a
per-process or group-of-processes level. I don't like the idea of
adding ad-hoc interfaces for single-use cases like this.

For this particular case, you can already stick each app into its own
cgroup and use memory.force_empty to target-reclaim it.
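
For example, with the v1 memory controller that could look like this
(a sketch; the mount point and the "app_foo" group name are
hypothetical):

  # put the app into its own memory cgroup
  mkdir /sys/fs/cgroup/memory/app_foo
  echo $APP_PID > /sys/fs/cgroup/memory/app_foo/cgroup.procs

  # when the platform decides the app is cold, reclaim as much
  # of the group's memory as possible
  echo 0 > /sys/fs/cgroup/memory/app_foo/memory.force_empty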

Or, better yet, set soft limits / memory.low to guide physical memory
pressure, once it actually occurs, toward the least-important apps.
We usually prefer doing work on demand rather than proactively.
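
Again a sketch, with made-up values (memory.soft_limit_in_bytes is
the v1 knob; memory.low is its v2 counterpart, which protects a group
instead of marking it for reclaim):

  # v1: pages above the soft limit become preferred reclaim
  # targets once global memory pressure occurs
  echo 64M > /sys/fs/cgroup/memory/app_foo/memory.soft_limit_in_bytes

  # v2: protect the important apps, so pressure falls on the
  # unprotected (least-important) groups first
  echo 256M > /sys/fs/cgroup/foreground/memory.low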

The one-cgroup-per-app model would give Android much more control and
would also remove a *lot* of overhead during task switches; see:
https://lkml.org/lkml/2014/12/19/358


Thread overview: 21+ messages
2016-06-13  7:50 [PATCH v1 0/3] per-process reclaim Minchan Kim
2016-06-13  7:50 ` [PATCH v1 1/3] mm: vmscan: refactoring force_reclaim Minchan Kim
2016-06-13  7:50 ` [PATCH v1 2/3] mm: vmscan: shrink_page_list with multiple zones Minchan Kim
2016-06-13  7:50 ` [PATCH v1 3/3] mm: per-process reclaim Minchan Kim
2016-06-13 15:06   ` Johannes Weiner [this message]
2016-06-15  0:40     ` Minchan Kim
2016-06-16 11:07       ` Michal Hocko
2016-06-16 14:41       ` Johannes Weiner
2016-06-17  6:43         ` Minchan Kim
2016-06-17  7:24     ` Balbir Singh
2016-06-17  7:57       ` Vinayak Menon
2016-06-13 17:06   ` Rik van Riel
2016-06-15  1:01     ` Minchan Kim
2016-06-13 11:50 ` [PATCH v1 0/3] " Chen Feng
2016-06-13 12:22   ` ZhaoJunmin Zhao(Junmin)
2016-06-15  0:43   ` Minchan Kim
2016-06-13 13:29 ` Vinayak Menon
2016-06-15  0:57   ` Minchan Kim
2016-06-16  4:21     ` Vinayak Menon
     [not found] <040501d1c55a$81d51910$857f4b30$@alibaba-inc.com>
2016-06-13 10:07 ` [PATCH v1 3/3] mm: " Hillf Danton
2016-06-15  0:46   ` Minchan Kim
