From: Christian Brauner <christian@brauner.io>
To: Joel Fernandes <joel@joelfernandes.org>
Cc: "Daniel Colascione" <dancol@google.com>,
"Steven Rostedt" <rostedt@goodmis.org>,
"Sultan Alsawaf" <sultan@kerneltoast.com>,
"Tim Murray" <timmurray@google.com>,
"Michal Hocko" <mhocko@kernel.org>,
"Suren Baghdasaryan" <surenb@google.com>,
"Greg Kroah-Hartman" <gregkh@linuxfoundation.org>,
"Arve Hjønnevåg" <arve@android.com>,
"Todd Kjos" <tkjos@android.com>,
"Martijn Coenen" <maco@android.com>,
"Ingo Molnar" <mingo@redhat.com>,
"Peter Zijlstra" <peterz@infradead.org>,
LKML <linux-kernel@vger.kernel.org>,
"open list:ANDROID DRIVERS" <devel@driverdev.osuosl.org>,
linux-mm <linux-mm@kvack.org>,
kernel-team <kernel-team@android.com>
Subject: Re: [RFC] simple_lmk: Introduce Simple Low Memory Killer for Android
Date: Fri, 15 Mar 2019 19:24:28 +0100 [thread overview]
Message-ID: <20190315182426.sujcqbzhzw4llmsa@brauner.io> (raw)
In-Reply-To: <20190315181324.GA248160@google.com>
On Fri, Mar 15, 2019 at 02:13:24PM -0400, Joel Fernandes wrote:
> On Fri, Mar 15, 2019 at 07:03:07PM +0100, Christian Brauner wrote:
> > On Thu, Mar 14, 2019 at 09:36:43PM -0700, Daniel Colascione wrote:
> > > On Thu, Mar 14, 2019 at 8:16 PM Steven Rostedt <rostedt@goodmis.org> wrote:
> > > >
> > > > On Thu, 14 Mar 2019 13:49:11 -0700
> > > > Sultan Alsawaf <sultan@kerneltoast.com> wrote:
> > > >
> > > > > Perhaps I'm missing something, but if you want to know when a process has died
> > > > > after sending a SIGKILL to it, then why not just make the SIGKILL optionally
> > > > > block until the process has died completely? It'd be rather trivial to just
> > > > > store a pointer to an onstack completion inside the victim process' task_struct,
> > > > > and then complete it in free_task().
> > > >
> > > > How would you implement such a method in userspace? kill() doesn't take
> > > > any parameters but the pid of the process you want to send a signal to,
> > > > and the signal to send. This would require a new system call, and be
> > > > quite a bit of work.
> > >
> > > That's what the pidfd work is for. Please read the original threads
> > > about the motivation and design of that facility.
> > >
> > > > If you can solve this with an eBPF program, I
> > > > strongly suggest you do that instead.
> > >
> > > Regarding process death notification: I will absolutely not support
> > > putting eBPF and perf trace events on the critical path of core system
> > > memory management functionality. Tracing and monitoring facilities are
> > > great for learning about the system, but they were never intended to
> > > be load-bearing. The proposed eBPF process-monitoring approach is just
> > > a variant of the netlink proposal we discussed previously on the pidfd
> > > threads; it has all of its drawbacks. We really need a core system
> > > call --- really, we've needed robust process management since the
> > > creation of unix --- and I'm glad that we're finally getting it.
> > > Adding new system calls is not expensive; going to great lengths to
> > > avoid adding one is like calling a helicopter to avoid crossing the
> > > street. I don't think we should present an abuse of the debugging and
> > > performance monitoring infrastructure as an alternative to a robust
> > > and desperately-needed bit of core functionality that's neither hard
> > > to add nor complex to implement nor expensive to use.
> > >
> > > Regarding the proposal for a new kernel-side lmkd: when possible, the
> > > kernel should provide mechanism, not policy. Putting the low memory
> > > killer back into the kernel would undo the significant effort we've
> > > spent making it possible for userspace to do that job. Compared to
> > > kernel code, userspace code is more easily understood, more easily
> > > debugged, more easily updated, and much safer. If we *can* move
> > > something out of the kernel, we should. This patch moves us in exactly
> > > the wrong direction. Yes, we need *something* that sits synchronously
> > > astride the page allocation path and does *something* to stop a
> > > busy-beaver allocator that eats all the available memory before lmkd,
> > > even mlocked and realtime, can respond. The OOM killer is adequate for
> > > this very rare case.
> > >
> > > With respect to kill timing: Tim is right about the need for two
> > > levels of policy: first, a high-level process prioritization and
> > > memory-demand balancing scheme (which is what OOM score adjustment
> > > code in ActivityManager amounts to); and second, a low-level
> > > process-killing methodology that maximizes sustainable memory reclaim
> > > and minimizes unwanted side effects while killing those processes that
> > > should be dead. Both of these policies belong in userspace --- because
> > > they *can* be in userspace --- and userspace needs only a few tools,
> > > most of which already exist, to do a perfectly adequate job.
> > >
> > > We do want killed processes to die promptly. That's why I support
> > > boosting a process's priority somehow when lmkd is about to kill it.
> > > The precise way in which we do that --- involving not only actual
> > > priority, but scheduler knobs, cgroup assignment, core affinity, and
> > > so on --- is a complex topic best left to userspace. lmkd already has
> > > all the knobs it needs to implement whatever priority boosting policy
> > > it wants.
> > >
> > > Hell, once we add a pidfd_wait --- which I plan to work on, assuming
> > > nobody beats me to it, after pidfd_send_signal lands --- you can
> >
> > Daniel,
> >
> > I've just been talking to Joel.
> > I actually "expected" you to work on pidfd_wait() after prior
> > conversations we had on the pidfd_send_signal() patchsets. :) That's why
> > I got a separate git tree on kernel.org since I expect a lot more work
> > to come. I hope that Linus still decides to pull pidfd_send_signal()
> > before Sunday. (For the ones who have missed the link in a prior
> > response of mine:
> > https://lkml.org/lkml/2019/3/12/439)
> >
> > This is the first merge window in which I've sent this PR.
> >
> > The pidfd tree has a branch for-next that is already tracked by Stephen
> > in linux-next since the 5.0 merge window. The patches for
> > pidfd_send_signal() sit in the pidfd branch.
> > I'd be happy to share the tree with you and Joel (we can rename it if
> > you prefer; I don't care).
> > I would really like to centralize this work so that we sort of have a
> > "united front" and end up with a coherent api and can send PRs from a
> > centralized place:
> > https://git.kernel.org/pub/scm/linux/kernel/git/brauner/linux.git/
>
> I am totally onboard with working together / reviewing this work with you all
> on a common tree somewhere (Christian's pidfd tree is fine). I was curious,
Excellent.
> why do we want to add a new syscall (pidfd_wait) though? Why not just use
> standard poll/epoll interface on the proc fd like Daniel was suggesting.
> AFAIK, once the proc file is opened, the struct pid is essentially pinned
> even though the proc number may be reused. Then the caller can just poll.
> We can add a waitqueue to struct pid, and wake up any waiters on process
> death (A quick look shows task_struct can be mapped to its struct pid) and
> also possibly optimize it using Steve's TIF flag idea. No new syscall is
> needed then, let me know if I missed something?
Huh, I thought that Daniel was against the poll/epoll solution?
I have no clear opinion on what is better at the moment since I have
been mostly concerned with getting pidfd_send_signal() into shape and
was reluctant to put more ideas/work into this if it gets shut down.
Once we have pidfd_send_signal(), the wait discussion makes sense.
Thanks!
Christian