Subject: Re: [RFC PATCH 2/2] memcg: do not report racy no-eligible OOM tasks
From: Tetsuo Handa <penguin-kernel@i-love.sakura.ne.jp>
To: Michal Hocko
Cc: Johannes Weiner, linux-mm@kvack.org, David Rientjes, Andrew Morton, LKML
Date: Wed, 7 Nov 2018 18:45:27 +0900
Message-ID: <8725e3b3-3752-fa7f-a88f-5ff4f5b6eace@i-love.sakura.ne.jp>
In-Reply-To: <20181106124224.GM27423@dhcp22.suse.cz>
References: <20181022071323.9550-1-mhocko@kernel.org>
 <20181022071323.9550-3-mhocko@kernel.org>
 <20181026142531.GA27370@cmpxchg.org>
 <20181026192551.GC18839@dhcp22.suse.cz>
 <20181026193304.GD18839@dhcp22.suse.cz>
 <20181106124224.GM27423@dhcp22.suse.cz>

On 2018/11/06 21:42, Michal Hocko wrote:
> On Tue 06-11-18 18:44:43, Tetsuo Handa wrote:
> [...]
>> diff --git a/mm/memcontrol.c b/mm/memcontrol.c
>> index 6e1469b..a97648a 100644
>> --- a/mm/memcontrol.c
>> +++ b/mm/memcontrol.c
>> @@ -1382,8 +1382,13 @@ static bool mem_cgroup_out_of_memory(struct mem_cgroup *memcg, gfp_t gfp_mask,
>>  	};
>>  	bool ret;
>>  
>> -	mutex_lock(&oom_lock);
>> -	ret = out_of_memory(&oc);
>> +	if (mutex_lock_killable(&oom_lock))
>> +		return true;
>> +	/*
>> +	 * A few threads which were not waiting at mutex_lock_killable() can
>> +	 * fail to bail out. Therefore, check again after holding oom_lock.
>> +	 */
>> +	ret = fatal_signal_pending(current) || out_of_memory(&oc);
>>  	mutex_unlock(&oom_lock);
>>  	return ret;
>>  }
>
> If we are going with a memcg specific thingy then I really prefer
> tsk_is_oom_victim approach. Or is there any reason why this is not
> suitable?
>

Why should we wait for mark_oom_victim(), which is only called after the slow
printk() messages? If the current thread received Ctrl-C and can therefore
terminate, what is gained by making it wait for the OOM killer? And what if
several OOM events in multiple memcg domains are all waiting for those
printk() messages to complete? I don't see the point of waiting for oom_lock,
since try_charge() already allows the current thread to terminate thanks to
its fatal_signal_pending() test.
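
To make the comparison concrete, here is a minimal sketch of what a
tsk_is_oom_victim()-based variant of the hunk above could look like. This is
only an illustration for the discussion; the plain mutex_lock() and the exact
placement of the check are my assumptions, not necessarily what Michal has in
mind:

	static bool mem_cgroup_out_of_memory(struct mem_cgroup *memcg, gfp_t gfp_mask,
					     int order)
	{
		struct oom_control oc = {
			/* ... same fields as in the quoted hunk ... */
		};
		bool ret;

		mutex_lock(&oom_lock);
		/*
		 * Illustration only: if this task has already been selected as
		 * an OOM victim, do not invoke the OOM killer again; the task
		 * only needs to return from the charge path and exit.
		 */
		ret = tsk_is_oom_victim(current) || out_of_memory(&oc);
		mutex_unlock(&oom_lock);
		return ret;
	}

The practical difference is when the bail-out can happen: tsk_is_oom_victim()
becomes true only after mark_oom_victim() has run, i.e. after the OOM killer's
printk() output, whereas fatal_signal_pending() is already true the moment the
task receives SIGKILL or Ctrl-C, so the mutex_lock_killable() version in the
quoted hunk lets such a task leave without waiting on oom_lock at all.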