From: "Paul E. McKenney" <paulmck@linux.ibm.com>
To: "Theodore Ts'o" <tytso@mit.edu>,
	Dmitry Vyukov <dvyukov@google.com>,
	syzbot <syzbot+4bfbbf28a2e50ab07368@syzkaller.appspotmail.com>,
	Andreas Dilger <adilger.kernel@dilger.ca>,
	David Miller <davem@davemloft.net>,
	eladr@mellanox.com, Ido Schimmel <idosch@mellanox.com>,
	Jiri Pirko <jiri@mellanox.com>,
	John Stultz <john.stultz@linaro.org>,
	linux-ext4@vger.kernel.org, LKML <linux-kernel@vger.kernel.org>,
	netdev <netdev@vger.kernel.org>,
	syzkaller-bugs <syzkaller-bugs@googlegroups.com>,
	Thomas Gleixner <tglx@linutronix.de>,
	Peter Zijlstra <peterz@infradead.org>,
	Ingo Molnar <mingo@kernel.org>
Subject: Re: INFO: rcu detected stall in ext4_write_checks
Date: Sat, 6 Jul 2019 18:16:55 -0700
Message-ID: <20190707011655.GA22081@linux.ibm.com>
In-Reply-To: <20190706180311.GW26519@linux.ibm.com>

On Sat, Jul 06, 2019 at 11:03:11AM -0700, Paul E. McKenney wrote:
> On Sat, Jul 06, 2019 at 11:02:26AM -0400, Theodore Ts'o wrote:
> > On Fri, Jul 05, 2019 at 11:16:31PM -0700, Paul E. McKenney wrote:
> > > I suppose RCU could take the dueling-banjos approach and use increasingly
> > > aggressive scheduler policies itself, up to and including SCHED_DEADLINE,
> > > until it started getting decent forward progress.  However, that
> > > sounds like something that just might have unintended consequences,
> > > particularly if other kernel subsystems were to also play similar
> > > games of dueling banjos.
> > 
> > So long as the RCU threads are well-behaved, using SCHED_DEADLINE
> > shouldn't have much of an impact on the system --- and the scheduling
> > parameters that you can specify with SCHED_DEADLINE allow you to
> > bound the worst-case impact on the system while also guaranteeing
> > that the SCHED_DEADLINE tasks will run in the first place.  After all,
> > that's the whole point of SCHED_DEADLINE.
> > 
> > So I wonder if the right approach is this: upon the first userspace
> > system call to sched_setattr() that enables a (any) real-time
> > scheduling policy (SCHED_DEADLINE, SCHED_FIFO, or SCHED_RR) on a
> > userspace thread, and before that call is allowed to proceed, the RCU
> > kernel threads are promoted to SCHED_DEADLINE with appropriately set
> > deadline parameters.  That way, a root user won't be able to shoot the system
> > in the foot, and since the vast majority of the time, there shouldn't
> > be any processes running with real-time priorities, we won't be
> > changing the behavior of a normal server system.
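
For concreteness, the userspace side of that trigger looks something
like the following minimal sketch, adapted from the sched_setattr(2)
man page.  The parameter values are illustrative only, and the struct
and raw syscall are open-coded because glibc provides no wrapper:

	#define _GNU_SOURCE
	#include <stdint.h>
	#include <stdio.h>
	#include <unistd.h>
	#include <sys/syscall.h>
	#include <linux/sched.h>	/* SCHED_DEADLINE */

	struct sched_attr {
		uint32_t size;
		uint32_t sched_policy;
		uint64_t sched_flags;
		int32_t  sched_nice;		/* SCHED_OTHER, SCHED_BATCH */
		uint32_t sched_priority;	/* SCHED_FIFO, SCHED_RR */
		uint64_t sched_runtime;		/* SCHED_DEADLINE, nanoseconds */
		uint64_t sched_deadline;
		uint64_t sched_period;
	};

	int main(void)
	{
		struct sched_attr attr = {
			.size		= sizeof(attr),
			.sched_policy	= SCHED_DEADLINE,
			/* 10 ms of CPU time every 100 ms, deadline == period. */
			.sched_runtime	= 10ULL * 1000 * 1000,
			.sched_deadline	= 100ULL * 1000 * 1000,
			.sched_period	= 100ULL * 1000 * 1000,
		};

		/* Under the proposal above, this call would first promote
		   the RCU kthreads, then run admission control as usual. */
		if (syscall(SYS_sched_setattr, 0, &attr, 0)) {
			perror("sched_setattr");	/* e.g., EBUSY or EPERM */
			return 1;
		}
		return 0;	/* real-time work would go here */
	}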
> 
> It might well be.  However, running the RCU kthreads at real-time
> priority does not come for free.  For example, it tends to crank up the
> context-switch rate.
> 
> Plus I have taken several runs at computing SCHED_DEADLINE parameters,
> but things like the rcuo callback-offload threads have computational
> requirements that are controlled not by RCU, and not just by the rest of
> the kernel, but also by userspace (keeping in mind the example of opening
> and closing a file in a tight loop, each pass of which queues a callback).
> I suspect that RCU is not the only kernel subsystem whose computational
> requirements are set not by the subsystem, but rather by external code.
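
(For reference, the userspace loop in question is nothing exotic; a
sketch, using a hypothetical scratch-file path:

	#include <fcntl.h>
	#include <unistd.h>

	int main(void)
	{
		for (;;) {
			int fd = open("/tmp/scratch", O_CREAT | O_RDWR, 0600);

			if (fd >= 0)
				close(fd);	/* as noted above, each pass
						   queues an RCU callback */
		}
	}

Nothing here requires privilege, yet it dictates how much work RCU's
callback-handling kthreads must do.)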
> 
> OK, OK, I suppose I could just set insanely large SCHED_DEADLINE
> parameters, following syzkaller's example, and then trust my ability to
> keep the RCU code from abusing the resulting awesome power.  But wouldn't
> a much nicer approach be to put SCHED_DEADLINE between SCHED_RR/SCHED_FIFO
> priorities 98 and 99 or some such?  Then the same (admittedly somewhat
> scary) result could be obtained much more simply via SCHED_FIFO or
> SCHED_RR priority 99.
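
(For the SCHED_FIFO variant there is existing precedent: booting with,
for example,

	rcutree.kthread_prio=99

runs RCU's kthreads at SCHED_FIFO priority 99, per
Documentation/admin-guide/kernel-parameters.txt.)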
> 
> Some might argue that this is one of those situations where simplicity
> is not necessarily an advantage, but then again, you can find someone
> who will complain about almost anything.  ;-)
> 
> > (I suspect there might be some audio applications that might try to
> > set real-time priorities, but for desktop systems, it's probably more
> > important that the system not tie itself into knots since the
> > average desktop user isn't going to be well equipped to debug the
> > problem.)
> 
> Not only that, but if core counts continue to increase, and if reliance
> on cloud computing continues to grow, there are going to be an increasing
> variety of mixed workloads in increasingly less-controlled environments.
> 
> So, yes, it would be good to solve this problem in some reasonable way.
> 
> I don't see this as urgent just yet, but I am sure you all will let
> me know if I am mistaken on that point.
> 
> > > Alternatively, is it possible to provide stricter admission control?
> > 
> > I think that's an orthogonal issue; better admission control would be
> > nice, but it looks to me that it's going to be fundamentally an issue
> > of tweaking heuristics, and a foolproof solution that will protect
> > against all malicious userspace applications (including syzkaller) is
> > going to require solving the halting problem.  So while it would be
> > nice to improve the admission control, I don't think that's going to
> > be a general solution.
> 
> Agreed, and my earlier point about the need to trust the coding abilities
> of those writing ultimate-priority code is all too consistent with your
> point about needing to solve the halting problem.  Nevertheless, I believe
> that we could make something that worked reasonably well in practice.
> 
> Here are a few components of a possible solution, in practice, but
> of course not in theory:
> 
> 1.	We set limits on SCHED_DEADLINE parameters, perhaps novel ones.
> 	For one example, insist on (say) 10 milliseconds of idle time
> 	every second on each CPU.  Yes, you can configure beyond that
> 	given sufficient permissions, but if you do so, you just voided
> 	your warranty.  (See the note on existing precedent just after
> 	this list.)
> 
> 2.	Only allow SCHED_DEADLINE on nohz_full CPUs.  (Partial solution,
> 	given that such a CPU might be running in the kernel or have
> 	more than one runnable task.  Just for fun, I will suggest the
> 	option of disabling SCHED_DEADLINE during such times.)
> 
> 3.	RCU detects slowdowns, and does something TBD to increase its
> 	priority, but only while the slowdown persists.  This likely
> 	relies on scheduling-clock interrupts to detect the slowdowns,
> 	so there might be additional challenges on a fully nohz_full
> 	system.

4.	SCHED_DEADLINE treats the other three scheduling classes as each
	having a period, deadline, and a modest CPU consumption budget
	for the members of the class in aggregate.  But this has to have
	been discussed before.  How did that go?

> 5.	Your idea here.
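
On #1, note that a global version of this already exists: the RT
throttling knobs default to kernel.sched_rt_runtime_us = 950000 out of
kernel.sched_rt_period_us = 1000000, holding back 50 milliseconds of
each second from SCHED_FIFO/SCHED_RR tasks, and (if I am reading the
code correctly) bounding total SCHED_DEADLINE utilization by the same
0.95 ratio.  The 10-milliseconds-idle-per-second rule in #1 would be
the same idea, but enforced per CPU.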

							Thanx, Paul

