Date: Tue, 10 Oct 2017 10:17:33 -0400
From: Johannes Weiner
To: Michal Hocko
Cc: Greg Thelen, Shakeel Butt, Alexander Viro, Vladimir Davydov,
	Andrew Morton, Linux MM, linux-fsdevel@vger.kernel.org, LKML
Subject: Re: [PATCH] fs, mm: account filp and names caches to kmemcg
Message-ID: <20171010141733.GB16710@cmpxchg.org>
References: <20171005222144.123797-1-shakeelb@google.com>
 <20171006075900.icqjx5rr7hctn3zd@dhcp22.suse.cz>
 <20171009062426.hmqedtqz5hkmhnff@dhcp22.suse.cz>
 <20171009202613.GA15027@cmpxchg.org>
 <20171010091430.giflzlayvjblx5bu@dhcp22.suse.cz>
In-Reply-To: <20171010091430.giflzlayvjblx5bu@dhcp22.suse.cz>

On Tue, Oct 10, 2017 at 11:14:30AM +0200, Michal Hocko wrote:
> On Mon 09-10-17 16:26:13, Johannes Weiner wrote:
> > It's consistent in the sense that only page faults enable the memcg
> > OOM killer. It's not the type of memory that decides, it's whether
> > the allocation context has a channel to communicate an error to
> > userspace.
> >
> > Whether userspace is able to handle -ENOMEM from syscalls was a
> > voiced concern at the time this patch was merged, although there
> > haven't been any reports so far,
>
> Well, I remember reports about MAP_POPULATE breaking or at least having
> an unexpected behavior.

Hm, that slipped past me. Did we do something about these? Or did they
fix userspace?

> Well, we should be able to do that with the oom_reaper. At least for v2
> which doesn't have synchronous userspace oom killing.

I don't see how the OOM reaper is a guarantee as long as we have this:

	if (!down_read_trylock(&mm->mmap_sem)) {
		ret = false;
		trace_skip_task_reaping(tsk->pid);
		goto unlock_oom;
	}

What do you mean by 'v2'?

> > > c) Overcharge kmem to oom memcg and queue an async memcg limit
> > >    checker, which will oom kill if needed.
> >
> > This makes the most sense to me. Architecturally, I imagine this
> > would look like b), with an OOM handler at the point of return to
> > userspace, except that we'd overcharge instead of retrying the
> > syscall.
>
> I do not think we should break the hard limit semantic if possible. We
> can currently allow that for allocations which are very short term (oom
> victims) or too important to fail but allowing that for kmem charges in
> general sounds like too easy to runaway.

I'm not sure there is a convenient way out of this.

If we want to respect the hard limit AND guarantee allocation success,
the OOM killer has to free memory reliably - which it doesn't.

But if it did, we could also break the limit temporarily and have the
OOM killer replenish the pool before that userspace app can continue.
The allocation wouldn't have to be short-lived, since memory is
fungible.

Until the OOM killer is 100% reliable, we have the choice between
sometimes deadlocking the cgroup tasks and everything that interacts
with them, returning -ENOMEM for syscalls, or breaking the hard limit
guarantee during memcg OOM.
It seems breaking the limit temporarily in order to reclaim memory is
the best option. There is kernel memory we don't account to the memcg
already because we think it's probably not going to be significant, so
the isolation isn't 100% watertight in the first place. And I'd rather
have the worst-case effect of a cgroup OOMing be spilling over its hard
limit than deadlocking things inside and outside the cgroup.
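[Editorial note: to make the overcharge idea above a bit more concrete,
here is a toy user-space sketch of the "charge kmem past the hard limit,
then run the OOM check on the way back to userspace" flow. This is not
kernel code; the names (struct memcg, try_charge_kmem,
check_memcg_on_return, oom_kill_victim) are invented for illustration
and only model the ordering being discussed: the charge never fails,
and any OOM killing is deferred to a safe point.]

/*
 * Toy model of "overcharge kmem, handle memcg OOM on return to
 * userspace".  All names are made up; none of this is real kernel API.
 */
#include <stdbool.h>
#include <stdio.h>

struct memcg {
	long usage;		/* currently charged bytes */
	long limit;		/* hard limit in bytes */
	bool oom_pending;	/* deferred OOM check requested */
};

/*
 * Kmem charge: never fail the allocation.  If the charge pushes the
 * group over its hard limit, overcharge anyway and remember that an
 * OOM check is owed before the task re-enters userspace.
 */
static void try_charge_kmem(struct memcg *cg, long bytes)
{
	cg->usage += bytes;
	if (cg->usage > cg->limit)
		cg->oom_pending = true;
}

/* Stand-in for the OOM killer freeing a victim's memory. */
static void oom_kill_victim(struct memcg *cg)
{
	printf("OOM: killing a victim (usage %ld > limit %ld)\n",
	       cg->usage, cg->limit);
	cg->usage = cg->limit / 2;	/* pretend the victim's memory is freed */
}

/*
 * Hook run on the way back to userspace: if we overcharged and the
 * group is still above its limit, invoke the OOM killer now, outside
 * of whatever locks the allocation site may have held.
 */
static void check_memcg_on_return(struct memcg *cg)
{
	if (cg->oom_pending && cg->usage > cg->limit)
		oom_kill_victim(cg);
	cg->oom_pending = false;
}

int main(void)
{
	struct memcg cg = { .usage = 900, .limit = 1000 };

	try_charge_kmem(&cg, 300);	/* syscall path: overcharge to 1200 */
	check_memcg_on_return(&cg);	/* deferred OOM handling kicks in */
	printf("usage after return to userspace: %ld\n", cg.usage);
	return 0;
}

[In a real implementation the deferred check would presumably hang off
the task, similar in spirit to how the page fault path already defers
memcg OOM handling to the end of the fault; the point of the sketch is
only the ordering: overcharge first, kill second, so the syscall itself
never has to return -ENOMEM.]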