From mboxrd@z Thu Jan 1 00:00:00 1970
MIME-Version: 1.0
References: <1594437481-11144-1-git-send-email-laoar.shao@gmail.com>
 <20200713060154.GA16783@dhcp22.suse.cz>
 <20200713062132.GB16783@dhcp22.suse.cz>
 <20200713124503.GF16783@dhcp22.suse.cz>
In-Reply-To: <20200713124503.GF16783@dhcp22.suse.cz>
From: Yafang Shao
Date: Mon, 13 Jul 2020 21:11:50 +0800
Subject: Re: [PATCH] mm, oom: don't invoke oom killer if current has been reapered
To: Michal Hocko
Cc: David Rientjes, Andrew Morton, Linux MM
Content-Type: text/plain; charset="UTF-8"

On Mon, Jul 13, 2020 at 8:45 PM Michal Hocko wrote:
>
> On Mon 13-07-20 20:24:07, Yafang Shao wrote:
> > On Mon, Jul 13, 2020 at 2:21 PM Michal Hocko wrote:
> > >
> > > On Mon 13-07-20 08:01:57, Michal Hocko wrote:
> > > > On Fri 10-07-20 23:18:01, Yafang Shao wrote:
> > > [...]
> > > > > There are many threads of a multi-threaded task running in parallel in a
> > > > > container on many cpus. Then many threads triggered OOM at the same time:
> > > > >
> > > > > CPU-1               CPU-2           ...     CPU-n
> > > > > thread-1            thread-2        ...     thread-n
> > > > >
> > > > > wait oom_lock       wait oom_lock   ...     hold oom_lock
> > > > >
> > > > >                                             (sigkill received)
> > > > >
> > > > >                                             select current as victim
> > > > >                                             and wakeup oom reaper
> > > > >
> > > > >                                             release oom_lock
> > > > >
> > > > >                                             (MMF_OOM_SKIP set by oom reaper)
> > > > >
> > > > >                                             (lots of pages are freed)
> > > > > hold oom_lock
> > > >
> > > > Could you be more specific please? The page allocator never waits for
> > > > the oom_lock and keeps retrying instead. Also __alloc_pages_may_oom
> > > > tries to allocate with the lock held.
> > >
> > > I suspect that you are looking at memcg oom killer.
> >
> > Right, these threads were waiting for the oom_lock in mem_cgroup_out_of_memory().
> >
> > > Because we do not do
> > > trylock there for some reason I do not immediately remember from top of
> > > my head. If this is really the case then I would recommend looking into
> > > how the page allocator implements this and follow the same pattern for
> > > memcg as well.
> > >
> >
> > That is a good suggestion.
> > But we can't try locking the global oom_lock here, because a task OOMing
> > in memcg foo may not help the tasks in memcg bar.
>
> I do not follow. oom_lock is not about fwd progress. It is a big lock to
> synchronize against oom_disable logic.
>
> I have this in mind
>
> diff --git a/mm/memcontrol.c b/mm/memcontrol.c
> index 248e6cad0095..29d1f8c2d968 100644
> --- a/mm/memcontrol.c
> +++ b/mm/memcontrol.c
> @@ -1563,8 +1563,10 @@ static bool mem_cgroup_out_of_memory(struct mem_cgroup *memcg, gfp_t gfp_mask,
>         };
>         bool ret;
>
> -       if (mutex_lock_killable(&oom_lock))
> +       if (!mutex_trylock(&oom_lock))
>                 return true;

                      root_mem_cgroup
                     /               \
            memcg_a (16G)           memcg_b (32G)
                 |                        |
  process a_1 (reaches memcg_a limit)   process b_1 (reaches memcg_b limit)
  hold oom_lock                         wait oom_lock

So we can see that process a_1 will try to kill a process in memcg_a,
while process b_1 needs to kill a process in memcg_b. IOW, the process
killed in memcg_a can't help the processes in memcg_b, so process b_1
should not trylock the global oom_lock here.

While if the memcg tree is:

                    target mem_cgroup (16G)
                   /                       \
                  |                         |
  process a_1 (reaches the memcg limit)   process a_2 (reaches the memcg limit)
  hold oom_lock                           wait oom_lock

then process a_2 can trylock oom_lock here. IOW, these processes should
be in the same memcg. That's why I said that we should introduce a
per-memcg oom_lock.

> +
> +
>         /*
>          * A few threads which were not waiting at mutex_lock_killable() can
>          * fail to bail out. Therefore, check again after holding oom_lock.
>
> But as I've said I would need to double check the history on why we
> differ here. Btw. I suspect that mem_cgroup_out_of_memory call in
> mem_cgroup_oom_synchronize is bogus and can no longer trigger after
> 29ef680ae7c21 but this needs double checking as well.
>
> > IOW, we need to introduce the per memcg oom_lock, like below,
>
> I do not see why. Besides that we already do have per oom memcg
> hierarchy lock.
> --

Thanks
Yafang
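
A minimal sketch of the per-memcg lock idea described above might look
roughly like the following. This is not the patch referenced in the
thread (that patch is not quoted here); the oom_mutex member and its
initialization are hypothetical additions to struct mem_cgroup, and the
function body is heavily simplified.

/*
 * Illustrative sketch only -- not the patch referenced in the thread.
 * Assumes a hypothetical "struct mutex oom_mutex" member added to
 * struct mem_cgroup (and initialized when the memcg is allocated), so
 * that OOM handling in one memcg no longer serializes against
 * unrelated memcgs through the global oom_lock.
 */
static bool mem_cgroup_out_of_memory(struct mem_cgroup *memcg, gfp_t gfp_mask,
				     int order)
{
	struct oom_control oc = {
		.zonelist = NULL,
		.memcg = memcg,
		.gfp_mask = gfp_mask,
		.order = order,
	};
	bool ret;

	/*
	 * If another task charging against this memcg is already handling
	 * the OOM, bail out and let the charge path retry -- the same
	 * trylock pattern the page allocator uses for the global oom_lock.
	 */
	if (!mutex_trylock(&memcg->oom_mutex))
		return true;

	ret = out_of_memory(&oc);
	mutex_unlock(&memcg->oom_mutex);

	return ret;
}

As Michal notes above, the global oom_lock also synchronizes against the
oom_disable logic, so a per-memcg lock along these lines would still have
to coordinate with that path rather than simply replace the global lock.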