Subject: Re: [PATCH mm] vmalloc: back off when the current task is OOM-killed
From: Vasily Averin
To: Michal Hocko
Cc: Johannes Weiner, Vladimir Davydov, Andrew Morton, Tetsuo Handa,
 cgroups@vger.kernel.org, linux-mm@kvack.org, linux-kernel@vger.kernel.org,
 kernel@openvz.org
Date: Thu, 23 Sep 2021 09:49:57 +0300
On 9/22/21 3:27 PM, Michal Hocko wrote:
> On Fri 17-09-21 11:06:49, Vasily Averin wrote:
>> A huge vmalloc allocation on a heavily loaded node can lead to a
>> global memory shortage. The task doing such a vmalloc can have the
>> worst badness and be chosen by the OOM killer; however, neither the
>> received fatal signal nor the OOM-victim mark interrupts the
>> allocation cycle. Vmalloc will continue allocating pages over and
>> over, exacerbating the crisis and consuming the memory freed up by
>> other killed tasks.
>>
>> This patch allows the OOM killer to break the vmalloc cycle, makes
>> OOM handling more effective, and helps avoid a host panic.
>>
>> Unfortunately it is not 100% safe. A previous attempt to break the
>> vmalloc cycle was reverted by commit b8c8a338f75e ("Revert "vmalloc:
>> back off when the current task is killed"") because some vmalloc
>> callers did not handle failures properly. The issues found then were
>> resolved; however, there may be other similar places.
>>
>> Such failures may be acceptable for emergencies such as OOM. On the
>> other hand, we would like to detect them earlier. However, they are
>> quite rare and will be hidden by OOM messages, so I'm afraid they
>> will have quite a small chance of being noticed and reported.
>>
>> To improve the detection of such places, this patch also interrupts
>> the vmalloc allocation cycle for all fatal signals. The checks are
>> hidden under the DEBUG_VM config option so as not to break unaware
>> production kernels.
>
> I really dislike this. We shouldn't have a semantically different
> behavior for a debugging kernel.

Yes, you're right, thank you.

> Is there any technical reason not to do the fatal_signal_pending
> bailout unconditionally? An OOM-victim-based check will make it less
> likely and therefore any potential bugs are just hidden more. So I
> think we should really go with the fatal_signal_pending check here.

I agree; an OOM victim always has a fatal signal pending, so the
fatal_signal_pending check covers the OOM case. I also agree that
vmalloc callers should expect and handle single vmalloc failures, and
I think it is acceptable to enable the fatal_signal_pending check to
detect such issues quickly.

However, the fatal_signal_pending check can cause serial vmalloc
failures, and I doubt that is acceptable. A rollback after a failed
vmalloc can make new vmalloc calls that will fail too; even when
properly handled, such serial failures can cause trouble.
Hypothetically, a cancelled vmalloc called inside some filesystem's
transaction forces its rollback, which in turn can call its own
vmalloc. Any failure on this path can break the filesystem. I doubt
that is acceptable, especially for non-OOM fatal signals. On the other
hand, I cannot say it is 100% a bug.

Another scenario: as you know, a failed vmalloc calls pr_warn, and the
resulting message may be sent to a remote terminal or to netconsole.
I'm not sure about the execution context, but if this happens in task
context it may call vmalloc in either the terminal or the network
subsystem. Even when handled, such failures are not fatal, but this
behaviour is at least unexpected.

Should we perhaps interrupt the first vmalloc only? A sketch of one
way to do that follows.
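One way to express that "first vmalloc only" idea would be a one-shot
per-task flag: the first vmalloc aborted by a fatal signal sets the
flag, and later vmalloc calls (e.g. from rollback paths) are allowed
to proceed. A minimal, untested sketch; PF_VMALLOC_KILLED is an
invented name with a placeholder bit value, not an existing kernel
flag, and vmalloc_should_abort() is a hypothetical helper:

#include <linux/sched.h>
#include <linux/sched/signal.h>

/* Hypothetical task flag; a real patch would have to claim a free PF_* bit. */
#define PF_VMALLOC_KILLED	0x04000000

static bool vmalloc_should_abort(void)
{
	if (!fatal_signal_pending(current))
		return false;

	/*
	 * Fail only the first vmalloc after the fatal signal; let any
	 * follow-up vmalloc calls made while unwinding proceed.
	 */
	if (current->flags & PF_VMALLOC_KILLED)
		return false;

	current->flags |= PF_VMALLOC_KILLED;
	return true;
}

vmalloc's page-allocation loop would then check vmalloc_should_abort()
instead of calling fatal_signal_pending() directly.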
>> Vmalloc uses the new alloc_pages_bulk subsystem, so the newly added
>> checks can affect other users of this subsystem.
>>
>> Signed-off-by: Vasily Averin
>> ---
>>  mm/page_alloc.c | 5 +++++
>>  mm/vmalloc.c    | 6 ++++++
>>  2 files changed, 11 insertions(+)
>>
>> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
>> index b37435c274cf..133d52e507ff 100644
>> --- a/mm/page_alloc.c
>> +++ b/mm/page_alloc.c
>> @@ -5288,6 +5288,11 @@ unsigned long __alloc_pages_bulk(gfp_t gfp, int preferred_nid,
>>  			continue;
>>  		}
>>
>> +		if (tsk_is_oom_victim(current) ||
>> +		    (IS_ENABLED(CONFIG_DEBUG_VM) &&
>> +		     fatal_signal_pending(current)))
>> +			break;
>
> This allocator interface is used in some real hot paths. It is also
> meant to be a fail-fast interface (e.g. it only allocates from the
> pcp allocator), so it shouldn't bring any additional risk of memory
> depletion under heavy memory pressure.
>
> In other words I do not see any reason to bail out in this code path.

Thank you for the explanation; let's drop this check entirely.

Thank you,
	Vasily Averin
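For illustration, the vmalloc-side counterpart discussed above could
look like the sketch below: the bailout moves out of
__alloc_pages_bulk() and into vmalloc's own per-page allocation loop.
This is a simplified, untested stand-in rather than the upstream code;
the function name and the "caller unwinds partial allocations"
convention are assumptions:

#include <linux/gfp.h>
#include <linux/mm_types.h>
#include <linux/sched/signal.h>

/*
 * Simplified stand-in for vmalloc's per-page fallback allocation loop.
 * Returns the number of pages allocated; when it is less than nr_pages,
 * the caller is expected to free pages[0..ret) and fail the vmalloc.
 */
static unsigned int vm_area_alloc_pages_sketch(gfp_t gfp,
					       struct page **pages,
					       unsigned int nr_pages)
{
	unsigned int i;

	for (i = 0; i < nr_pages; i++) {
		struct page *page;

		/*
		 * Bail out once the task has received a fatal signal
		 * (an OOM kill included); the pages allocated so far
		 * are unwound by the caller.
		 */
		if (fatal_signal_pending(current))
			break;

		page = alloc_page(gfp);
		if (!page)
			break;
		pages[i] = page;
	}

	return i;
}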