Date: Thu, 21 Oct 2021 13:49:29 +0200
From: Michal Hocko
To: Vasily Averin
Cc: Johannes Weiner, Vladimir Davydov, Andrew Morton, Roman Gushchin,
    Uladzislau Rezki, Vlastimil Babka, Shakeel Butt, Mel Gorman,
    Tetsuo Handa, cgroups@vger.kernel.org, linux-mm@kvack.org,
    linux-kernel@vger.kernel.org, kernel@openvz.org
Subject: Re: [PATCH memcg 3/3] memcg: handle memcg oom failures

On Wed 20-10-21 18:46:56, Vasily Averin wrote:
> On 20.10.2021 16:02, Michal Hocko wrote:
> > On Wed 20-10-21 15:14:27, Vasily Averin wrote:
> >> mem_cgroup_oom() can fail if the current task was marked unkillable
> >> and the oom killer cannot find any victim.
> >>
> >> Currently we force the memcg charge for such allocations, however
> >> this allows a memcg-limited userspace task to overuse its assigned
> >> limits and potentially trigger a global memory shortage.
> >
> > You should really go into more details whether that is a practical
> > problem to handle. OOM_FAILED means that the memcg oom killer couldn't
> > find any oom victim so it cannot help with a forward progress. There
> > are not that many situations when that can happen. Naming those would
> > be really useful.
>
> I've pointed it out above:
> "if the current task was marked unkillable and the oom killer cannot
> find any victim."
> This may happen when the current task cannot be oom-killed because it
> was marked unkillable, i.e. it has p->signal->oom_score_adj ==
> OOM_SCORE_ADJ_MIN, and the other processes in the memcg are either
> dying, or are kernel threads, or are marked unkillable in the same way.
> Or when the memcg contains this process only.
>
> If we always approve such an allocation, it can be misused. A process
> can mmap a lot of memory, then touch it, generate page faults and make
> overcharged memory allocations. Finally it can consume all of the
> node's memory and trigger a global memory shortage on the host.

Yes, this is true, but a) OOM_SCORE_ADJ_MIN tasks are excluded from the
OOM handling so they have to be careful with their memory consumption,
and b) is this a theoretical or a practical concern? This is mostly what
I wanted to make sure you describe in the changelog.

> >> Let's fail the memory charge in such cases.
> >>
> >> This failure should be somehow recognised in #PF context,
> >
> > explain why
>
> When #PF cannot allocate memory (due to the reason described above),
> handle_mm_fault returns VM_FAULT_OOM, and then its caller executes
> pagefault_out_of_memory(). If the latter cannot recognize the real
> reason for the failure, it assumes a global memory shortage and executes
> the global out_of_memory(), which can kill a random process or even
> panic the node if sysctl vm.panic_on_oom is set to 1.
>
> Currently pagefault_out_of_memory() knows about a possible async memcg
> OOM and handles it correctly. However it is not aware that memcg can
> reject some other allocations, so it does not recognize the fault as
> memcg-related and allows the global OOM to run.

Again, something to be added to the changelog.

> >> so let's use current->memcg_in_oom == (struct mem_cgroup *)OOM_FAILED
> >>
> >> ToDo: what is the best way to notify pagefault_out_of_memory() about
> >> mem_cgroup_out_of_memory failure ?
> >
> > why don't you simply remove out_of_memory from pagefault_out_of_memory
> > and leave it only with the blocking memcg OOM handling? Wouldn't that
> > be a more generic solution? Your first patch already goes that way
> > partially.
>
> I clearly understand that the global out_of_memory should not be
> triggered by memcg restrictions. I also understand that a dying task
> will release some memory soon, so we do not need to run the global oom
> if the current task is dying.
>
> However I'm not sure that I can remove out_of_memory at all. At least I
> do not have good arguments to do it.
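
To make the suggestion a bit more concrete: what I have in mind is
essentially reducing pagefault_out_of_memory() to the blocking memcg OOM
handling and dropping the global OOM invocation from the #PF path
altogether. A very rough and completely untested sketch (helper names
quoted from memory, so treat this as an illustration rather than a patch):

void pagefault_out_of_memory(void)
{
	/* Complete a pending (async) memcg OOM; this may kill and/or block. */
	if (mem_cgroup_oom_synchronize(true))
		return;

	/* The task is already dying and will release its memory soon. */
	if (fatal_signal_pending(current))
		return;

	/*
	 * A VM_FAULT_OOM which was not caused by a memcg limit has leaked
	 * out of the allocator. Warn and let the fault be retried rather
	 * than invoking the global out_of_memory().
	 */
	pr_warn_ratelimited("VM_FAULT_OOM reached the #PF handler, retrying page fault\n");
}

The idea being that for non-memcg allocations the page allocator is
already responsible for invoking the OOM killer when needed, so the #PF
path should not need to do it again.
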
I do understand that handling a very specific case sounds easier, but it
would be better to have a robust fix even if that requires some more head
scratching. So far we have collected several reasons why it is bad to
trigger the oom killer from the #PF path, and there is no single argument
for keeping it, so this sounds like a viable path to pursue. Maybe there
are some very well hidden reasons, but then those should be documented,
and this is a great opportunity to do one or the other.

Moreover, if it turns out that there is a regression, this can be easily
reverted and a different, maybe memcg-specific, solution can be
implemented.
-- 
Michal Hocko
SUSE Labs