Date: Mon, 23 Sep 2019 10:23:25 +0200
From: Michal Hocko
To: Tetsuo Handa
Cc: Roman Gushchin, Shakeel Butt, linux-mm@kvack.org, Andrew Morton, Linus Torvalds
Subject: Re: [PATCH] mm, oom: avoid printk() iteration under RCU
Message-ID: <20190923082325.GB6016@dhcp22.suse.cz>

On Sun 22-09-19 20:30:51, Tetsuo Handa wrote:
> On 2019/09/22 15:20, Michal Hocko wrote:
> > On Sun 22-09-19 08:47:31, Tetsuo Handa wrote:
> >> On 2019/09/22 5:30, Michal Hocko wrote:
> >>> On Fri 20-09-19 17:10:42, Andrew Morton wrote:
> >>>> On Sat, 20 Jul 2019 20:29:23 +0900 Tetsuo Handa wrote:
> >>>>
> >>>>>>
> >>>>>>> ) under RCU and this patch is one of them (except that we can't remove
> >>>>>>> printk() for the dump_tasks() case).
> >>>>>>
> >>>>>> No, this one adds complexity for something that is not clearly a huge
> >>>>>> win, or whose win is not explained properly.
> >>>>>>
> >>>>>
> >>>>> The win is already explained properly by the past commits. Avoiding RCU stalls
> >>>>> (even without slow consoles) is a clear win.
> >>>>> The duration of the RCU stall avoided
> >>>>> by this patch is roughly the same as for commit b2b469939e934587.
> >>>>>
> >>>>> We haven't succeeded in making printk() asynchronous (and potentially we won't,
> >>>>> because we need synchronous printk() when something critical is happening
> >>>>> outside of out_of_memory()). Thus, bringing printk() outside of the RCU
> >>>>> section is a clear win we can make for now.
> >>>>
> >>>> It's actually not a complex patch and moving all that printing outside
> >>>> the rcu section makes sense. So I'll sit on the patch for a few more
> >>>> days but am inclined to send it upstream.
> >>>
> >>> Look, I am quite tired of arguing about this and other changes following
> >>> a similar pattern. In short, problematic code is shuffled around to
> >>> pretend to solve some problem. In this particular case it is an RCU stall,
> >>> which in itself is not a fatal condition. Sure, it sucks, and the primary
> >>> reason is that printk can take way too long. This is something that is
> >>> currently a WIP to be addressed. What is more important, though, is that
> >>> there is no sign of any _real world_ workload that would require a quick
> >>> workaround to justify a hacky stop-gap solution.
> >>>
> >>> So again, why do we want to add more code for something which is not
> >>> clearly a real-life problem and that will add a maintenance burden
> >>> in the future?
> >>>
> >>
> >> Enqueueing zillions of printk() lines from dump_tasks() will overflow the
> >> printk buffer (i.e. lead to lost messages) if OOM killer messages were
> >> printed asynchronously. I don't think that making printk() asynchronous will
> >> solve this problem. I repeat again: there is no better solution than "printk()
> >> users are careful not to exhaust the printk buffer". This patch is the
> >> first step towards avoiding thoughtless printk().
> >
> > Irrelevant, because this patch doesn't reduce the amount of output.
>
> This patch is just a temporary change before applying
> https://lkml.kernel.org/r/7de2310d-afbd-e616-e83a-d75103b986c6@i-love.sakura.ne.jp and
> https://lkml.kernel.org/r/57be50b2-a97a-e559-e4bd-10d923895f83@i-love.sakura.ne.jp .
>
> Show your solution by patch instead of ignoring or nacking.

I simply suggest the most trivial patch, which doesn't change a single line of
code. This and the two discussions referenced by you simply confirm that a) you
didn't bother to think your change through for other potential corner cases and
b) it adds even more code in order to behave semi-sanely.

> >> Delay from dump_tasks() not only affects the thread holding oom_lock but
> >> also other threads which are directly making concurrent allocation requests
> >> or indirectly waiting for the thread holding oom_lock. Your "it is an RCU
> >> stall which in itself is not a fatal condition" is underestimating the
> >> _real world_ problems (e.g. "the delay can trigger a watchdog timeout and
> >> cause the system to reboot even if the administrator does not want the
> >> system to reboot").
> >
> > Please back your claims with real world examples.
> >

> People have to use /proc/sys/vm/oom_dump_tasks == 0 (and give up obtaining
> some clue) because they worry about stalls caused by
> /proc/sys/vm/oom_dump_tasks != 0, while they have to use
> /proc/sys/vm/panic_on_oom == 0 because they don't want the downtime caused by
> rebooting. And such a situation cannot be solved unless we solve the stalls
> caused by /proc/sys/vm/oom_dump_tasks != 0. I'm working at a support center
> and I have to be able to figure out the system's state, but I have neither an
> environment to run real world workloads nor control of customers' environments
> to enforce /proc/sys/vm/oom_dump_tasks != 0.
>
> In short, your "real world" requirement is a catch-22 problem.

I am pretty sure this would be less of a catch-22 problem if you had more
actual arguments at hand rather than constant hand-waving.
I have told you many times and I will repeat one more time, and hopefully won't
have to again: even if there are issues in the code, we always have to weigh
costs vs. benefits. If no real workloads are hitting these problems while the
fix in question is non-trivial, adds a maintenance burden, or even worse
undermines the functionality (and dump_tasks output printed at an arbitrary
time after the actual OOM, while you keep references to task_structs, really
could be perceived that way), then a patch is simply not worth it.

There are exceptions to that, of course. If a more complex solution would lead
to more robust code or functionality that other parts of the kernel could
benefit from, then this would certainly be an argument to weigh in as well.
E.g. improving the tasks iteration to release the rcu lock and yield, improving
printk, etc.

I completely see how stress-testing corner cases is useful and how it might
help the code in general, but solely focusing on this testing is a free one-way
ticket to an unmaintainable mess.

This is my last email in this thread.
--
Michal Hocko
SUSE Labs