From: Steven Rostedt <rostedt@goodmis.org>
To: "Paul E. McKenney" <paulmck@kernel.org>
Cc: Thomas Gleixner <tglx@linutronix.de>,
Ankur Arora <ankur.a.arora@oracle.com>,
linux-kernel@vger.kernel.org, peterz@infradead.org,
torvalds@linux-foundation.org, linux-mm@kvack.org,
x86@kernel.org, akpm@linux-foundation.org, luto@kernel.org,
bp@alien8.de, dave.hansen@linux.intel.com, hpa@zytor.com,
mingo@redhat.com, juri.lelli@redhat.com,
vincent.guittot@linaro.org, willy@infradead.org, mgorman@suse.de,
jon.grimm@amd.com, bharata@amd.com, raghavendra.kt@amd.com,
boris.ostrovsky@oracle.com, konrad.wilk@oracle.com,
jgross@suse.com, andrew.cooper3@citrix.com, mingo@kernel.org,
bristot@kernel.org, mathieu.desnoyers@efficios.com,
geert@linux-m68k.org, glaubitz@physik.fu-berlin.de,
anton.ivanov@cambridgegreys.com, mattst88@gmail.com,
krypton@ulrich-teichert.org, David.Laight@aculab.com,
richard@nod.at, mjguzik@gmail.com,
Simon Horman <horms@verge.net.au>, Julian Anastasov <ja@ssi.bg>,
Alexei Starovoitov <ast@kernel.org>,
Daniel Borkmann <daniel@iogearbox.net>
Subject: Re: [RFC PATCH 47/86] rcu: select PREEMPT_RCU if PREEMPT
Date: Thu, 7 Dec 2023 08:44:57 -0500 [thread overview]
Message-ID: <20231207084457.78ab7d31@gandalf.local.home> (raw)
In-Reply-To: <842f589e-5ea3-4c2b-9376-d718c14fabf5@paulmck-laptop>
On Wed, 6 Dec 2023 20:34:11 -0800
"Paul E. McKenney" <paulmck@kernel.org> wrote:
> > > I like the concept, but those with mutex_lock() of rarely-held mutexes
> > > in their fastpaths might have workloads that have a contrary opinion.
> >
> > I don't understand your above statement. Maybe I wasn't clear with my
> > statement? The above is more about PREEMPT_FULL, as it currently will
> > preempt immediately. My above comment is that we can have an option for
> > PREEMPT_FULL where if the scheduler decided to preempt even in a fast path,
> > it would at least hold off until there's no mutex held. Who cares if it's a
> > fast path when a task needs to give up the CPU for another task? What I
> > worry about is scheduling out while holding a mutex which increases the
> > chance of that mutex being contended upon. Which does have drastic impact
> > on performance.
>
> As I understand the current mutex_lock() code, the fastpaths leave no
> scheduler-visible clue that a mutex is in fact held. If there is no
> such clue, it is quite likely that those fastpaths will need to do some
> additional clue-leaving work, increasing their overhead. And while it
> is always possible that this overhead will be down in the noise, if it
> was too far down in the noise there would be no need for those fastpaths.
>
> So it is possible (but by no means certain) that some workloads will end
> up caring.
OK, that makes more sense, and I do agree with that statement. It would
need to do something like spin locks do with preempt_disable(), but I agree,
this would need to be done in a way that does not cause performance regressions.
>
> > > > > Another is the aforementioned situations where removing the cond_resched()
> > > > > increases latency. Yes, capping the preemption latency is a wonderful
> > > > > thing, and the people I chatted with are all for that, but it is only
> > > > > natural that there would be a corresponding level of concern about the
> > > > > cases where removing the cond_resched() calls increases latency.
> > > >
> > > > With the "capped preemption" I'm not sure that would still be the case.
> > > > cond_resched() currently only preempts if NEED_RESCHED is set. That means
> > > > the system had to already be in a situation that a schedule needs to
> > > > happen. There's lots of places in the kernel that run for over a tick
> > > > without any cond_resched(). The cond_resched() is usually added for
> > > > locations that show tremendous latency (where either a watchdog triggered,
> > > > or showed up in some analysis that had a latency that was much greater than
> > > > a tick).
> > >
> > > For non-real-time workloads, the average case is important, not just the
> > > worst case. In the new lazily preemptible mode of thought, a preemption
> > > by a non-real-time task will wait a tick. Earlier, it would have waited
> > > for the next cond_resched(). Which, in the average case, might have
> > > arrived much sooner than one tick.
> >
> > Or much later. It's random. And what's nice about this model, we can add
> > more models than just "NONE", "VOLUNTARY", "FULL". We could have a way to
> > say "this task needs to preempt immediately" and not just for RT tasks.
> >
> > This allows the user to decide which task preempts more and which does not
> > (defined by the scheduler), instead of some random cond_resched() that can
> > also preempt a higher priority task that just finished its quota to run a
> > low priority task causing latency for the higher priority task.
> >
> > This is what I mean by "think differently".
>
> I did understand your meaning, and it is a source of some concern. ;-)
>
> When things become sufficiently stable, larger-scale tests will of course
> be needed, not just different thought.
Fair enough.
>
> > > > The point is, if/when we switch to the new preemption model, we would need
> > > > to re-evaluate if any cond_resched() is needed. Yes, testing needs to be
> > > > done to prevent regressions. But the reasons I see cond_resched() being
> > > > added today, should no longer exist with this new model.
> > >
> > > This I agree with. Also, with the new paradigm and new mode of thought
> > > in place, it should be safe to drop any cond_resched() that is in a loop
> > > that consumes more than a tick of CPU time per iteration.
> >
> > Why does that matter? Is the loop not important? Why stop it from finishing
> > for some random task that may not be important, and cond_resched() has no
> > idea if it is or not.
>
> Because if it takes more than a tick to reach the next cond_resched(),
> lazy preemption is likely to preempt before that cond_resched() is
> reached. Which suggests that such a cond_resched() would not be all
> that valuable in the new thought paradigm. Give or take potential issues
> with exactly where the preemption happens.
I'm just saying there are lots of places where the above happens, which is why
we are still scattering cond_resched() all over the place.
>
> > > > > There might be others as well. These are the possibilities that have
> > > > > come up thus far.
> > > > >
> > > > > > They all suck and keeping some of them is just counterproductive as
> > > > > > again people will sprinkle them all over the place for the very wrong
> > > > > > reasons.
> > > > >
> > > > > Yes, but do they suck enough and are they counterproductive enough to
> > > > > be useful and necessary? ;-)
> > > >
> > > > They are only useful and necessary because of the way we handle preemption
> > > > today. With the new preemption model, they are all likely to be useless and
> > > > unnecessary ;-)
> > >
> > > The "all likely" needs some demonstration. I agree that a great many
> > > of them would be useless and unnecessary. Maybe even the vast majority.
> > > But that is different than "all". ;-)
> >
> > I'm betting it is "all" ;-) But I also agree that this "needs some
> > demonstration". We are not there yet, and likely will not be until the
> > second half of next year. So we have plenty of time to speak rhetorically
> > to each other!
>
> You know, we usually find time to engage in rhetorical conversation. ;-)
>
> > > > The conflict is with the new paradigm (I love that word! It's so "buzzy").
> > > > As I mentioned above, cond_resched() is usually added when a problem was
> > > > seen. I really believe that those problems would never had been seen if
> > > > the new paradigm had already been in place.
> > >
> > > Indeed, that sort of wording does quite the opposite of raising my
> > > confidence levels. ;-)
> >
> > Yes, I admit the "manager speak" isn't something to brag about here. But I
> > really do like that word. It's just fun to say (and spell)! Paradigm,
> > paradigm, paradigm! It's that silent 'g'. Although, I wonder if we should
> > be like gnu, and pronounce it when speaking about free software? Although,
> > that makes the word sound worse. :-p
>
> Pair a' dime, pair a' quarter, pair a' fifty-cent pieces, whatever it takes!
Pair a' two-bits : that's all it's worth
Or
Pair a' two-cents : as it's my two cents that I'm giving.
>
> > > You know, the ancient Romans would have had no problem dealing with the
> > > dot-com boom, cryptocurrency, some of the shadier areas of artificial
> > > intelligence and machine learning, and who knows what all else. As the
> > > Romans used to say, "Beware of geeks bearing grifts."
> > >
> > > > > > 3) Looking at the initial problem Ankur was trying to solve there is
> > > > > > absolutely no acceptable solution to solve that unless you think
> > > > > > that the semantically invers 'allow_preempt()/disallow_preempt()'
> > > > > > is anywhere near acceptable.
> > > > >
> > > > > I am not arguing for allow_preempt()/disallow_preempt(), so for that
> > > > > argument, you need to find someone else to argue with. ;-)
> > > >
> > > > Anyway, there's still a long path before cond_resched() can be removed. It
> > > > was a mistake by Ankur to add those removals this early (and he has
> > > > acknowledged that mistake).
> > >
> > > OK, that I can live with. But that seems to be a bit different of a
> > > take than that of some earlier emails in this thread. ;-)
> >
> > Well, we are also stating the final goal as well. I think there's some
> > confusion to what's going to happen immediately and what's going to happen
> > in the long run.
>
> If I didn't know better, I might suspect that in addition to the
> confusion, there are a few differences of opinion. ;-)
Confusion enhances differences of opinion.
>
> > > > First we need to get the new preemption modeled implemented. When it is, it
> > > > can be just a config option at first. Then when that config option is set,
> > > > you can enable the NONE, VOLUNTARY or FULL preemption modes, even switch
> > > > between them at run time as they are just a way to tell the scheduler when
> > > > to set NEED_RESCHED_LAZY vs NEED_RESCHED.
> > >
> > > Assuming CONFIG_PREEMPT_RCU=y, agreed. With CONFIG_PREEMPT_RCU=n,
> > > the runtime switching needs to be limited to NONE and VOLUNTARY.
> > > Which is fine.
> >
> > But why? Because the run time switches of NONE and VOLUNTARY are no
> > different than FULL.
> >
> > Why do I say that? Because:
> >
> > For all modes, once NEED_RESCHED_LAZY is set, the kernel has one tick to get
> > out or NEED_RESCHED will be set (of course that one tick may be configurable).
> > Once the NEED_RESCHED is set, then the kernel is converted to PREEMPT_FULL.
> >
> > Even if the user sets the mode to "NONE", after the above scenario (one tick
> > after NEED_RESCHED_LAZY is set) the kernel will be behaving no differently
> > than PREEMPT_FULL.
> >
> > So why make the distinction for CONFIG_PREEMPT_RCU=n and limit it to only
> > NONE and VOLUNTARY? It must work with FULL or it will be broken for NONE
> > and VOLUNTARY after one tick from NEED_RESCHED_LAZY being set.
>
> Because PREEMPT_FULL=y plus PREEMPT_RCU=n appears to be a useless
> combination. All of the gains from PREEMPT_FULL=y are more than lost
> due to PREEMPT_RCU=n, especially when the kernel decides to do something
> like walk a long task list under RCU protection. We should not waste
> people's time getting burned by this combination, nor should we waste
> cycles testing it.
The issue I see here is that PREEMPT_RCU is not something that we can
convert at run time, whereas NONE, VOLUNTARY, and FULL (and more to come)
can be. And you have stated that PREEMPT_RCU adds some more overhead that
people may not care about. But even though you say PREEMPT_RCU=n makes no
sense with PREEMPT_FULL, it doesn't mean we should not allow it. Especially
since we have to make sure that it still works (even NONE and VOLUNTARY turn
to FULL after that one tick).
Remember, what we are looking at is having:
N : NEED_RESCHED - schedule at next possible location
L : NEED_RESCHED_LAZY - schedule when going into user space.
When to set what for a task needing to schedule?
Model SCHED_OTHER RT/DL(or user specified)
----- ----------- ------------------------
NONE L L
VOLUNTARY L N
FULL N N
By saying FULL, you are saying that you want the SCHED_OTHER as well as
RT/DL tasks to schedule as soon as possible and not wait until going into
user space. This is still applicable even with PREEMPT_RCU=n.
It may be that someone wants better latency for all tasks (like VOLUNTARY)
but not the overhead that PREEMPT_RCU gives, and is OK with the added
latency as a result.
-- Steve