From: "Michael S. Tsirkin" <mst@redhat.com>
To: "Paul E. McKenney" <paulmck@linux.ibm.com>
Cc: Matthew Wilcox <willy@infradead.org>,
aarcange@redhat.com, akpm@linux-foundation.org,
christian@brauner.io, davem@davemloft.net, ebiederm@xmission.com,
elena.reshetova@intel.com, guro@fb.com, hch@infradead.org,
james.bottomley@hansenpartnership.com, jasowang@redhat.com,
jglisse@redhat.com, keescook@chromium.org, ldv@altlinux.org,
linux-arm-kernel@lists.infradead.org,
linux-kernel@vger.kernel.org, linux-mm@kvack.org,
linux-parisc@vger.kernel.org, luto@amacapital.net,
mhocko@suse.com, mingo@kernel.org, namit@vmware.com,
peterz@infradead.org, syzkaller-bugs@googlegroups.com,
viro@zeniv.linux.org.uk, wad@chromium.org
Subject: Re: RFC: call_rcu_outstanding (was Re: WARNING in __mmdrop)
Date: Mon, 22 Jul 2019 03:52:05 -0400
Message-ID: <20190722035042-mutt-send-email-mst@kernel.org>
In-Reply-To: <20190721233113.GV14271@linux.ibm.com>

On Sun, Jul 21, 2019 at 04:31:13PM -0700, Paul E. McKenney wrote:
> On Sun, Jul 21, 2019 at 02:08:37PM -0700, Matthew Wilcox wrote:
> > On Sun, Jul 21, 2019 at 06:17:25AM -0700, Paul E. McKenney wrote:
> > > Also, the overhead is important. For example, as far as I know,
> > > current RCU gracefully handles close(open(...)) in a tight userspace
> > > loop. But there might be trouble due to tight userspace loops around
> > > lighter-weight operations.
> >
> > I thought you believed that RCU was antifragile, in that it would scale
> > better as it was used more heavily?
>
> You are referring to this? https://paulmck.livejournal.com/47933.html
>
> If so, the last few paragraphs might be worth re-reading. ;-)
>
> And in this case, the heuristics RCU uses to decide when to schedule
> invocation of the callbacks need some help. One component of that help
> is a time-based limit to the number of consecutive callback invocations
> (see my crude prototype and Eric Dumazet's more polished patch). Another
> component is an overload warning.
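
To make sure I follow the time-based limit part, is it roughly the
sketch below? (Everything in it is invented for illustration, the
function name and the 3ms budget included; this is not the actual
patch.)

	/*
	 * Sketch: invoke pending callbacks, but stop once a time
	 * budget is exhausted, so a long callback list cannot
	 * monopolize the CPU.  Returns the first callback not
	 * invoked, for the caller to requeue, or NULL if done.
	 */
	static struct rcu_head *invoke_cbs_bounded(struct rcu_head *list)
	{
		u64 deadline = ktime_get_ns() + 3 * NSEC_PER_MSEC;
		struct rcu_head *next;

		while (list) {
			next = list->next;	/* func may free list */
			list->func(list);
			list = next;
			if (ktime_get_ns() > deadline)
				break;		/* budget exhausted */
		}
		return list;
	}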
>
> Why would an overload warning be needed if RCU's callback-invocation
> scheduling heuristics were upgraded? Because someone could boot a
> 100-CPU system with rcu_nocbs=0-99, bind all of the resulting
> rcuo kthreads to (say) CPU 0, and then run a callback-heavy workload
> on all of the CPUs. Given the constraints, CPU 0 cannot keep up.
>
> So warnings are required as well.
>
> > Would it make sense to have call_rcu() check to see if there are many
> > outstanding requests on this CPU and if so process them before returning?
> > That would ensure that frequent callers usually ended up doing their
> > own processing.
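
(If I read the suggestion right, it is something like the sketch
below, where every name is invented for illustration:)

	/*
	 * Hypothetical: queue the callback as usual, but when this
	 * CPU's backlog is long, also invoke ready callbacks right
	 * here, in the caller's context.
	 */
	void call_rcu_draining(struct rcu_head *head, rcu_callback_t func)
	{
		call_rcu(head, func);
		if (this_cpu_outstanding_cbs() > DRAIN_THRESHOLD)
			invoke_ready_cbs_inline();
	}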
>
> Unfortunately, no. Here is a code fragment illustrating why:
>
> void my_cb(struct rcu_head *rhp)
> {
> 	unsigned long flags;
>
> 	/* The callback takes my_lock too. */
> 	spin_lock_irqsave(&my_lock, flags);
> 	handle_cb(rhp);
> 	spin_unlock_irqrestore(&my_lock, flags);
> }
>
> . . .
>
> 	spin_lock_irqsave(&my_lock, flags);
> 	p = look_something_up();
> 	remove_that_something(p);
> 	/* Running my_cb() here, with my_lock held, would deadlock. */
> 	call_rcu(p, my_cb);
> 	spin_unlock_irqrestore(&my_lock, flags);
>
> Invoking the extra callbacks directly from call_rcu() would thus result
> in self-deadlock. Documentation/RCU/UP.txt contains a few more examples
> along these lines.
We could add an option that simply fails if overloaded, right?
Have the caller recover...
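
Roughly like the sketch below, say. call_rcu_outstanding() is the
helper from this RFC; the wrapper, the error convention and the
limit are all made up here.

	/*
	 * Hypothetical: refuse to queue when this CPU already has
	 * too many callbacks pending, so the caller can fall back
	 * to some other reclamation scheme.
	 */
	#define CB_OVERLOAD_LIMIT	10000

	int call_rcu_nonblock(struct rcu_head *head, rcu_callback_t func)
	{
		if (call_rcu_outstanding() > CB_OVERLOAD_LIMIT)
			return -EBUSY;	/* caller recovers */
		call_rcu(head, func);
		return 0;
	}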
--
MST