Date: Wed, 20 Oct 2021 14:33:43 +0200
From: Michal Hocko
To: Vasily Averin
Cc: Johannes Weiner, Vladimir Davydov, Andrew Morton, Roman Gushchin, Uladzislau Rezki, Vlastimil Babka, Shakeel Butt, Mel Gorman, Tetsuo Handa, cgroups@vger.kernel.org, linux-mm@kvack.org, linux-kernel@vger.kernel.org, kernel@openvz.org
Subject: Re: [PATCH memcg 1/3] mm: do not firce global OOM from inside dying tasks
In-Reply-To: <2c13c739-7282-e6f4-da0a-c0b69e68581e@virtuozzo.com>
References: <2c13c739-7282-e6f4-da0a-c0b69e68581e@virtuozzo.com>
Content-Type: text/plain; charset=us-ascii

s@firce@force@

On Wed 20-10-21 15:12:19, Vasily Averin wrote:
> There is no sense to force global OOM if current task is
> dying.

This really begs for much more information. Feel free to take inspiration from my previous attempt to achieve something similar. At minimum it is important to mention that the OOM killer is already handled at the page allocator level for the global OOM and at the charging level for the memcg one. Both have much more information about the scope of the allocation/charge request. This means that either the OOM killer has been invoked properly and didn't lead to the allocation success, or it has been skipped because it couldn't have been invoked. In both cases triggering it from here is pointless and even harmful.

Another argument is that it is more reasonable to let a killed task die rather than hit the OOM killer and retry the allocation.

> Signed-off-by: Vasily Averin
> ---
>  mm/oom_kill.c | 3 +++
>  1 file changed, 3 insertions(+)
>
> diff --git a/mm/oom_kill.c b/mm/oom_kill.c
> index 831340e7ad8b..1deef8c7a71b 100644
> --- a/mm/oom_kill.c
> +++ b/mm/oom_kill.c
> @@ -1137,6 +1137,9 @@ void pagefault_out_of_memory(void)
>  	if (mem_cgroup_oom_synchronize(true))
>  		return;
>
> +	if (fatal_signal_pending(current))
> +		return;
> +
>  	if (!mutex_trylock(&oom_lock))
>  		return;
>  	out_of_memory(&oc);
> --
> 2.32.0

-- 
Michal Hocko
SUSE Labs