Date: Wed, 1 Nov 2017 13:54:09 -0400
From: Steven Rostedt <rostedt@goodmis.org>
To: Vlastimil Babka
Cc: Tetsuo Handa, akpm@linux-foundation.org, linux-mm@kvack.org,
    linux-kernel@vger.kernel.org, Cong Wang, Dave Hansen, Johannes Weiner,
    Mel Gorman, Michal Hocko, Petr Mladek, Sergey Senozhatsky,
    "yuwang.yuwang"
Subject: Re: [PATCH] mm: don't warn about allocations which stall for too long
Message-ID: <20171101135409.0190afb1@gandalf.local.home>
In-Reply-To: <40ed01d3-1475-cd4a-0dff-f7a6ee24d5e9@suse.cz>
References: <1509017339-4802-1-git-send-email-penguin-kernel@I-love.SAKURA.ne.jp>
 <20171031153225.218234b4@gandalf.local.home>
 <187a38c6-f964-ed60-932d-b7e0bee03316@suse.cz>
 <20171101113336.19758220@gandalf.local.home>
 <40ed01d3-1475-cd4a-0dff-f7a6ee24d5e9@suse.cz>

On Wed, 1 Nov 2017 18:42:25 +0100
Vlastimil Babka wrote:

> On 11/01/2017 04:33 PM, Steven Rostedt wrote:
> > On Wed, 1 Nov 2017 09:30:05 +0100
> > Vlastimil Babka wrote:
> >
> >>
> >> But still, it seems to me that the scheme only works as long as there
> >> are printk()'s coming with some reasonable frequency. There's still a
> >> corner case when a storm of printk()'s can come that will fill the ring
> >> buffers, and while during the storm the printing will be distributed
> >> between CPUs nicely, the last unfortunate CPU after the storm subsides
> >> will be left with a large accumulated buffer to print, and there will be
> >> no waiters to take over if there are no more printk()'s coming. What
> >> then, should it detect such situation and defer the flushing?
> >
> > No!
> >
> > If such a case happened, that means the system is doing something
> > really stupid.
>
> Hm, what about e.g. a soft lockup that triggers backtraces from all
> CPU's? Yes, having softlockups is "stupid" but sometimes they do happen
> and the system still recovers (just some looping operation is missing
> cond_resched() and took longer than expected). It would be sad if it
> didn't recover because of a printk() issue...

I still think such a case would not be huge for the last printer.

> > Btw, each printk that takes over, does one message, so the last one to
> > take over, shouldn't have a full buffer anyway.
>
> There might be multiple messages per each CPU, e.g. the softlockup
> backtraces.

And each one does multiple printks, still spreading the love around.

> > But still, if you have such a hypothetical situation, the system should
> > just crash. The printk is still bounded by the length of the buffer.
> > Although it is slow, it will finish.
>
> Finish, but with single CPU doing the printing, which is wrong?

I don't think so. This is all hypothetical anyway.
I need to implement my solution, and then let's see if this can
actually happen.

> > Which is not the case with the
> > current situation. And the current situation (as which this patch
> > demonstrates) does happen today and is not hypothetical.
>
> Yep, so ideally it can be fixed without corner cases :)

If there are any corner cases. I guess the test would be to trigger a
soft lockup on all CPUs to print out a dump at the same time.

But then again, how is a soft lockup on all CPUs not any worse than a
single CPU finishing up the buffer output?

-- Steve
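
For context, the hand-off scheme under discussion can be illustrated with a
small user-space sketch. This is an approximation under stated assumptions,
not the actual kernel patch; names such as my_printk(), console_waiter and
flush_or_handoff() are invented here for illustration. Each caller appends
its message to a ring buffer, and whichever thread currently owns the
"console" checks for a waiter after every message it emits, handing the lock
over instead of draining the whole backlog itself:

/*
 * Hypothetical user-space model of the printk hand-off idea: the flusher
 * prints one message at a time and yields to any waiting printk() caller.
 */
#include <pthread.h>
#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

#define RING_SIZE 1024

static char ring[RING_SIZE][64];	/* toy message ring buffer */
static atomic_int head;			/* next free slot */
static atomic_int tail;			/* next message to flush */

static pthread_mutex_t logbuf_lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t console_lock = PTHREAD_MUTEX_INITIALIZER;
static atomic_bool console_waiter;	/* somebody wants to take over */

/* Emit one message to the "console"; called with console_lock held. */
static void emit_one(void)
{
	int t = atomic_load(&tail);

	if (t == atomic_load(&head))
		return;
	fprintf(stderr, "console: %s\n", ring[t % RING_SIZE]);
	atomic_store(&tail, t + 1);
}

/* Flush until the ring is empty or a waiter shows up, then drop the lock. */
static void flush_or_handoff(void)
{
	while (atomic_load(&tail) != atomic_load(&head)) {
		emit_one();
		/* After each message, let a waiting printk() take over. */
		if (atomic_exchange(&console_waiter, false)) {
			pthread_mutex_unlock(&console_lock);
			return;
		}
	}
	pthread_mutex_unlock(&console_lock);
}

static void my_printk(const char *msg)
{
	int h;

	/* Store the message under a separate "logbuf" lock. */
	pthread_mutex_lock(&logbuf_lock);
	h = atomic_load(&head);
	snprintf(ring[h % RING_SIZE], sizeof(ring[0]), "%s", msg);
	atomic_store(&head, h + 1);
	pthread_mutex_unlock(&logbuf_lock);

	if (pthread_mutex_trylock(&console_lock) == 0) {
		flush_or_handoff();	/* we own the console */
		return;
	}
	/* Someone else is flushing: ask for a hand-off, then take over. */
	atomic_store(&console_waiter, true);
	pthread_mutex_lock(&console_lock);
	atomic_store(&console_waiter, false);
	flush_or_handoff();
}

static void *storm(void *arg)
{
	char buf[64];
	int i;

	for (i = 0; i < 100; i++) {
		snprintf(buf, sizeof(buf), "thread %ld msg %d", (long)arg, i);
		my_printk(buf);
	}
	return NULL;
}

int main(void)
{
	pthread_t t[4];
	long i;

	for (i = 0; i < 4; i++)
		pthread_create(&t[i], NULL, storm, (void *)i);
	for (i = 0; i < 4; i++)
		pthread_join(t[i], NULL);
	/* Drain anything left after the storm subsides. */
	pthread_mutex_lock(&console_lock);
	flush_or_handoff();
	return 0;
}

Built with "cc -std=c11 -pthread", the four storm threads take turns on the
console rather than one thread printing all 400 messages, which is the
behaviour described above; the open question in the thread is what happens
once the storm subsides and the last lock holder is left with whatever
remains in the buffer.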