From: Ankur Arora <ankur.a.arora@oracle.com>
To: linux-kernel@vger.kernel.org
Cc: tglx@linutronix.de, peterz@infradead.org,
torvalds@linux-foundation.org, paulmck@kernel.org,
linux-mm@kvack.org, x86@kernel.org, akpm@linux-foundation.org,
luto@kernel.org, bp@alien8.de, dave.hansen@linux.intel.com,
hpa@zytor.com, mingo@redhat.com, juri.lelli@redhat.com,
vincent.guittot@linaro.org, willy@infradead.org, mgorman@suse.de,
jon.grimm@amd.com, bharata@amd.com, raghavendra.kt@amd.com,
boris.ostrovsky@oracle.com, konrad.wilk@oracle.com,
jgross@suse.com, andrew.cooper3@citrix.com, mingo@kernel.org,
bristot@kernel.org, mathieu.desnoyers@efficios.com,
geert@linux-m68k.org, glaubitz@physik.fu-berlin.de,
anton.ivanov@cambridgegreys.com, mattst88@gmail.com,
krypton@ulrich-teichert.org, rostedt@goodmis.org,
David.Laight@ACULAB.COM, richard@nod.at, mjguzik@gmail.com,
Ankur Arora <ankur.a.arora@oracle.com>
Subject: [RFC PATCH 44/86] sched: voluntary preemption
Date: Tue, 7 Nov 2023 13:57:30 -0800
Message-ID: <20231107215742.363031-45-ankur.a.arora@oracle.com>
In-Reply-To: <20231107215742.363031-1-ankur.a.arora@oracle.com>
The no-preemption model allows tasks in kernel context to run to
completion. For the voluntary preemption model, additionally allow
preemption by higher scheduling classes.

To do this, resched_curr() now takes a parameter specifying whether
the resched is on behalf of a scheduling class above that of the
runqueue's current task and, if so, reschedules eagerly. In condensed
form (eliding the pre-existing checks), the resulting policy is:
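  void resched_curr(struct rq *rq, bool above)
  {
  	resched_t rs = RESCHED_lazy;	/* default: run to completion */

  	/* ... pre-existing checks, which can force RESCHED_eager ... */

  	/*
  	 * Voluntary preemption: a task from a scheduling class above
  	 * the current one wants the CPU, so reschedule eagerly.
  	 */
  	if (sched_feat(PREEMPT_PRIORITY) && above)
  		rs = RESCHED_eager;

  	__resched_curr(rq, rs);
  }
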
Also define a scheduler feature, PREEMPT_PRIORITY, which can be used
to toggle the voluntary preemption model at runtime.
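(With CONFIG_SCHED_DEBUG, for instance, writing NO_PREEMPT_PRIORITY to
the sched features debugfs file -- /sys/kernel/debug/sched/features on
current kernels -- should fall back to the no-preemption behaviour.)
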
TODO: Both RT and deadline work, but I'm almost certainly not doing
all the right things for either.

Signed-off-by: Ankur Arora <ankur.a.arora@oracle.com>
---
kernel/Kconfig.preempt | 19 ++++++-------------
kernel/sched/core.c | 28 +++++++++++++++++-----------
kernel/sched/core_sched.c | 2 +-
kernel/sched/deadline.c | 22 +++++++++++-----------
kernel/sched/fair.c | 18 +++++++++---------
kernel/sched/features.h | 5 +++++
kernel/sched/idle.c | 2 +-
kernel/sched/rt.c | 26 +++++++++++++-------------
kernel/sched/sched.h | 2 +-
9 files changed, 64 insertions(+), 60 deletions(-)
diff --git a/kernel/Kconfig.preempt b/kernel/Kconfig.preempt
index 074fe5e253b5..e16114b679e3 100644
--- a/kernel/Kconfig.preempt
+++ b/kernel/Kconfig.preempt
@@ -20,23 +20,16 @@ config PREEMPT_NONE
at runtime.
config PREEMPT_VOLUNTARY
- bool "Voluntary Kernel Preemption (Desktop)"
+ bool "Voluntary Kernel Preemption"
depends on !ARCH_NO_PREEMPT
select PREEMPTION
help
- This option reduces the latency of the kernel by adding more
- "explicit preemption points" to the kernel code. These new
- preemption points have been selected to reduce the maximum
- latency of rescheduling, providing faster application reactions,
- at the cost of slightly lower throughput.
+ This option reduces the latency of the kernel by allowing
+ processes in higher scheduling policy classes to preempt ones
+ in lower classes.
- This allows reaction to interactive events by allowing a
- low priority process to voluntarily preempt itself even if it
- is in kernel mode executing a system call. This allows
- applications to run more 'smoothly' even when the system is
- under load.
-
- Select this if you are building a kernel for a desktop system.
+ Higher priority processes in the same scheduling policy class
+ do not preempt lower priority ones.
config PREEMPT
bool "Preemptible Kernel (Low-Latency Desktop)"
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 2a50a64255c6..3fa78e8afb7d 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -256,7 +256,7 @@ void sched_core_dequeue(struct rq *rq, struct task_struct *p, int flags)
*/
if (!(flags & DEQUEUE_SAVE) && rq->nr_running == 1 &&
rq->core->core_forceidle_count && rq->curr == rq->idle)
- resched_curr(rq);
+ resched_curr(rq, false);
}
static int sched_task_is_throttled(struct task_struct *p, int cpu)
@@ -1074,9 +1074,12 @@ void __resched_curr(struct rq *rq, resched_t rs)
*
* - in userspace: run to completion semantics are only for kernel tasks
*
- * Otherwise (regardless of priority), run to completion.
+ * - running under voluntary preemption (sched_feat(PREEMPT_PRIORITY))
+ * and a task from a sched_class above wants the CPU
+ *
+ * Otherwise, run to completion.
*/
-void resched_curr(struct rq *rq)
+void resched_curr(struct rq *rq, bool above)
{
resched_t rs = RESCHED_lazy;
int context;
@@ -1112,6 +1115,9 @@ void resched_curr(struct rq *rq)
goto resched;
}
+ if (sched_feat(PREEMPT_PRIORITY) && above)
+ rs = RESCHED_eager;
+
resched:
__resched_curr(rq, rs);
}
@@ -1123,7 +1129,7 @@ void resched_cpu(int cpu)
raw_spin_rq_lock_irqsave(rq, flags);
if (cpu_online(cpu) || cpu == smp_processor_id())
- resched_curr(rq);
+ resched_curr(rq, true);
raw_spin_rq_unlock_irqrestore(rq, flags);
}
@@ -2277,7 +2283,7 @@ void check_preempt_curr(struct rq *rq, struct task_struct *p, int flags)
if (p->sched_class == rq->curr->sched_class)
rq->curr->sched_class->check_preempt_curr(rq, p, flags);
else if (sched_class_above(p->sched_class, rq->curr->sched_class))
- resched_curr(rq);
+ resched_curr(rq, true);
/*
* A queue event has occurred, and we're going to schedule. In
@@ -2764,7 +2770,7 @@ int push_cpu_stop(void *arg)
deactivate_task(rq, p, 0);
set_task_cpu(p, lowest_rq->cpu);
activate_task(lowest_rq, p, 0);
- resched_curr(lowest_rq);
+ resched_curr(lowest_rq, true);
}
double_unlock_balance(rq, lowest_rq);
@@ -3999,7 +4005,7 @@ void wake_up_if_idle(int cpu)
if (is_idle_task(rcu_dereference(rq->curr))) {
guard(rq_lock_irqsave)(rq);
if (is_idle_task(rq->curr))
- resched_curr(rq);
+ resched_curr(rq, true);
}
}
@@ -6333,7 +6339,7 @@ pick_next_task(struct rq *rq, struct task_struct *prev, struct rq_flags *rf)
continue;
}
- resched_curr(rq_i);
+ resched_curr(rq_i, false);
}
out_set_next:
@@ -6388,7 +6394,7 @@ static bool try_steal_cookie(int this, int that)
set_task_cpu(p, this);
activate_task(dst, p, 0);
- resched_curr(dst);
+ resched_curr(dst, false);
success = true;
break;
@@ -8743,7 +8749,7 @@ int __sched yield_to(struct task_struct *p, bool preempt)
* fairness.
*/
if (preempt && rq != p_rq)
- resched_curr(p_rq);
+ resched_curr(p_rq, true);
}
out_unlock:
@@ -10300,7 +10306,7 @@ void sched_move_task(struct task_struct *tsk)
* throttled one but it's still the running task. Trigger a
* resched to make sure that task can still run.
*/
- resched_curr(rq);
+ resched_curr(rq, true);
}
unlock:
diff --git a/kernel/sched/core_sched.c b/kernel/sched/core_sched.c
index a57fd8f27498..32f234f2a210 100644
--- a/kernel/sched/core_sched.c
+++ b/kernel/sched/core_sched.c
@@ -89,7 +89,7 @@ static unsigned long sched_core_update_cookie(struct task_struct *p,
* next scheduling edge, rather than always forcing a reschedule here.
*/
if (task_on_cpu(rq, p))
- resched_curr(rq);
+ resched_curr(rq, false);
task_rq_unlock(rq, p, &rf);
diff --git a/kernel/sched/deadline.c b/kernel/sched/deadline.c
index e6815c3bd2f0..ecb47b5e9588 100644
--- a/kernel/sched/deadline.c
+++ b/kernel/sched/deadline.c
@@ -1177,7 +1177,7 @@ static enum hrtimer_restart dl_task_timer(struct hrtimer *timer)
if (dl_task(rq->curr))
check_preempt_curr_dl(rq, p, 0);
else
- resched_curr(rq);
+ resched_curr(rq, false);
#ifdef CONFIG_SMP
/*
@@ -1367,7 +1367,7 @@ static void update_curr_dl(struct rq *rq)
enqueue_task_dl(rq, curr, ENQUEUE_REPLENISH);
if (!is_leftmost(curr, &rq->dl))
- resched_curr(rq);
+ resched_curr(rq, false);
}
/*
@@ -1914,7 +1914,7 @@ static void check_preempt_equal_dl(struct rq *rq, struct task_struct *p)
cpudl_find(&rq->rd->cpudl, p, NULL))
return;
- resched_curr(rq);
+ resched_curr(rq, false);
}
static int balance_dl(struct rq *rq, struct task_struct *p, struct rq_flags *rf)
@@ -1943,7 +1943,7 @@ static void check_preempt_curr_dl(struct rq *rq, struct task_struct *p,
int flags)
{
if (dl_entity_preempt(&p->dl, &rq->curr->dl)) {
- resched_curr(rq);
+ resched_curr(rq, false);
return;
}
@@ -2307,7 +2307,7 @@ static int push_dl_task(struct rq *rq)
if (dl_task(rq->curr) &&
dl_time_before(next_task->dl.deadline, rq->curr->dl.deadline) &&
rq->curr->nr_cpus_allowed > 1) {
- resched_curr(rq);
+ resched_curr(rq, false);
return 0;
}
@@ -2353,7 +2353,7 @@ static int push_dl_task(struct rq *rq)
activate_task(later_rq, next_task, 0);
ret = 1;
- resched_curr(later_rq);
+ resched_curr(later_rq, false);
double_unlock_balance(rq, later_rq);
@@ -2457,7 +2457,7 @@ static void pull_dl_task(struct rq *this_rq)
}
if (resched)
- resched_curr(this_rq);
+ resched_curr(this_rq, false);
}
/*
@@ -2654,7 +2654,7 @@ static void switched_to_dl(struct rq *rq, struct task_struct *p)
if (dl_task(rq->curr))
check_preempt_curr_dl(rq, p, 0);
else
- resched_curr(rq);
+ resched_curr(rq, false);
} else {
update_dl_rq_load_avg(rq_clock_pelt(rq), rq, 0);
}
@@ -2687,7 +2687,7 @@ static void prio_changed_dl(struct rq *rq, struct task_struct *p,
* runqueue.
*/
if (dl_time_before(rq->dl.earliest_dl.curr, p->dl.deadline))
- resched_curr(rq);
+ resched_curr(rq, false);
} else {
/*
* Current may not be deadline in case p was throttled but we
@@ -2697,14 +2697,14 @@ static void prio_changed_dl(struct rq *rq, struct task_struct *p,
*/
if (!dl_task(rq->curr) ||
dl_time_before(p->dl.deadline, rq->curr->dl.deadline))
- resched_curr(rq);
+ resched_curr(rq, false);
}
#else
/*
* We don't know if p has a earlier or later deadline, so let's blindly
* set a (maybe not needed) rescheduling point.
*/
- resched_curr(rq);
+ resched_curr(rq, false);
#endif
}
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index fe7e5e9b2207..448fe36e7bbb 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -1046,7 +1046,7 @@ static void update_deadline(struct cfs_rq *cfs_rq,
if (tick && test_tsk_thread_flag(rq->curr, TIF_NEED_RESCHED_LAZY))
__resched_curr(rq, RESCHED_eager);
else
- resched_curr(rq);
+ resched_curr(rq, false);
clear_buddies(cfs_rq, se);
}
@@ -5337,7 +5337,7 @@ entity_tick(struct cfs_rq *cfs_rq, struct sched_entity *curr, int queued)
* validating it and just reschedule.
*/
if (queued) {
- resched_curr(rq_of(cfs_rq));
+ resched_curr(rq_of(cfs_rq), false);
return;
}
/*
@@ -5483,7 +5483,7 @@ static void __account_cfs_rq_runtime(struct cfs_rq *cfs_rq, u64 delta_exec)
* hierarchy can be throttled
*/
if (!assign_cfs_rq_runtime(cfs_rq) && likely(cfs_rq->curr))
- resched_curr(rq_of(cfs_rq));
+ resched_curr(rq_of(cfs_rq), false);
}
static __always_inline
@@ -5743,7 +5743,7 @@ void unthrottle_cfs_rq(struct cfs_rq *cfs_rq)
/* Determine whether we need to wake up potentially idle CPU: */
if (rq->curr == rq->idle && rq->cfs.nr_running)
- resched_curr(rq);
+ resched_curr(rq, false);
}
#ifdef CONFIG_SMP
@@ -6448,7 +6448,7 @@ static void hrtick_start_fair(struct rq *rq, struct task_struct *p)
if (delta < 0) {
if (task_current(rq, p))
- resched_curr(rq);
+ resched_curr(rq, false);
return;
}
hrtick_start(rq, delta);
@@ -8143,7 +8143,7 @@ static void check_preempt_wakeup(struct rq *rq, struct task_struct *p, int wake_
return;
preempt:
- resched_curr(rq);
+ resched_curr(rq, false);
}
#ifdef CONFIG_SMP
@@ -12294,7 +12294,7 @@ static inline void task_tick_core(struct rq *rq, struct task_struct *curr)
*/
if (rq->core->core_forceidle_count && rq->cfs.nr_running == 1 &&
__entity_slice_used(&curr->se, MIN_NR_TASKS_DURING_FORCEIDLE))
- resched_curr(rq);
+ resched_curr(rq, false);
}
/*
@@ -12459,7 +12459,7 @@ prio_changed_fair(struct rq *rq, struct task_struct *p, int oldprio)
*/
if (task_current(rq, p)) {
if (p->prio > oldprio)
- resched_curr(rq);
+ resched_curr(rq, false);
} else
check_preempt_curr(rq, p, 0);
}
@@ -12561,7 +12561,7 @@ static void switched_to_fair(struct rq *rq, struct task_struct *p)
* if we can still preempt the current task.
*/
if (task_current(rq, p))
- resched_curr(rq);
+ resched_curr(rq, false);
else
check_preempt_curr(rq, p, 0);
}
diff --git a/kernel/sched/features.h b/kernel/sched/features.h
index 9b4c2967b2b7..9bf30732b03f 100644
--- a/kernel/sched/features.h
+++ b/kernel/sched/features.h
@@ -92,6 +92,11 @@ SCHED_FEAT(HZ_BW, true)
#if defined(CONFIG_PREEMPT)
SCHED_FEAT(FORCE_PREEMPT, true)
+SCHED_FEAT(PREEMPT_PRIORITY, true)
+#elif defined(CONFIG_PREEMPT_VOLUNTARY)
+SCHED_FEAT(FORCE_PREEMPT, false)
+SCHED_FEAT(PREEMPT_PRIORITY, true)
#else
SCHED_FEAT(FORCE_PREEMPT, false)
+SCHED_FEAT(PREEMPT_PRIORITY, false)
#endif
diff --git a/kernel/sched/idle.c b/kernel/sched/idle.c
index eacd204e2879..3ef039869be9 100644
--- a/kernel/sched/idle.c
+++ b/kernel/sched/idle.c
@@ -403,7 +403,7 @@ balance_idle(struct rq *rq, struct task_struct *prev, struct rq_flags *rf)
*/
static void check_preempt_curr_idle(struct rq *rq, struct task_struct *p, int flags)
{
- resched_curr(rq);
+ resched_curr(rq, true);
}
static void put_prev_task_idle(struct rq *rq, struct task_struct *prev)
diff --git a/kernel/sched/rt.c b/kernel/sched/rt.c
index 5fdb93f1b87e..8d87e42d30d8 100644
--- a/kernel/sched/rt.c
+++ b/kernel/sched/rt.c
@@ -589,7 +589,7 @@ static void sched_rt_rq_enqueue(struct rt_rq *rt_rq)
enqueue_rt_entity(rt_se, 0);
if (rt_rq->highest_prio.curr < curr->prio)
- resched_curr(rq);
+ resched_curr(rq, false);
}
}
@@ -682,7 +682,7 @@ static inline void sched_rt_rq_enqueue(struct rt_rq *rt_rq)
return;
enqueue_top_rt_rq(rt_rq);
- resched_curr(rq);
+ resched_curr(rq, false);
}
static inline void sched_rt_rq_dequeue(struct rt_rq *rt_rq)
@@ -1076,7 +1076,7 @@ static void update_curr_rt(struct rq *rq)
rt_rq->rt_time += delta_exec;
exceeded = sched_rt_runtime_exceeded(rt_rq);
if (exceeded)
- resched_curr(rq);
+ resched_curr(rq, false);
raw_spin_unlock(&rt_rq->rt_runtime_lock);
if (exceeded)
do_start_rt_bandwidth(sched_rt_bandwidth(rt_rq));
@@ -1691,7 +1691,7 @@ static void check_preempt_equal_prio(struct rq *rq, struct task_struct *p)
* to try and push the current task away:
*/
requeue_task_rt(rq, p, 1);
- resched_curr(rq);
+ resched_curr(rq, false);
}
static int balance_rt(struct rq *rq, struct task_struct *p, struct rq_flags *rf)
@@ -1718,7 +1718,7 @@ static int balance_rt(struct rq *rq, struct task_struct *p, struct rq_flags *rf)
static void check_preempt_curr_rt(struct rq *rq, struct task_struct *p, int flags)
{
if (p->prio < rq->curr->prio) {
- resched_curr(rq);
+ resched_curr(rq, false);
return;
}
@@ -2074,7 +2074,7 @@ static int push_rt_task(struct rq *rq, bool pull)
* just reschedule current.
*/
if (unlikely(next_task->prio < rq->curr->prio)) {
- resched_curr(rq);
+ resched_curr(rq, false);
return 0;
}
@@ -2162,7 +2162,7 @@ static int push_rt_task(struct rq *rq, bool pull)
deactivate_task(rq, next_task, 0);
set_task_cpu(next_task, lowest_rq->cpu);
activate_task(lowest_rq, next_task, 0);
- resched_curr(lowest_rq);
+ resched_curr(lowest_rq, false);
ret = 1;
double_unlock_balance(rq, lowest_rq);
@@ -2456,7 +2456,7 @@ static void pull_rt_task(struct rq *this_rq)
}
if (resched)
- resched_curr(this_rq);
+ resched_curr(this_rq, false);
}
/*
@@ -2555,7 +2555,7 @@ static void switched_to_rt(struct rq *rq, struct task_struct *p)
rt_queue_push_tasks(rq);
#endif /* CONFIG_SMP */
if (p->prio < rq->curr->prio && cpu_online(cpu_of(rq)))
- resched_curr(rq);
+ resched_curr(rq, false);
}
}
@@ -2583,11 +2583,11 @@ prio_changed_rt(struct rq *rq, struct task_struct *p, int oldprio)
* then reschedule.
*/
if (p->prio > rq->rt.highest_prio.curr)
- resched_curr(rq);
+ resched_curr(rq, false);
#else
/* For UP simply resched on drop of prio */
if (oldprio < p->prio)
- resched_curr(rq);
+ resched_curr(rq, false);
#endif /* CONFIG_SMP */
} else {
/*
@@ -2596,7 +2596,7 @@ prio_changed_rt(struct rq *rq, struct task_struct *p, int oldprio)
* then reschedule.
*/
if (p->prio < rq->curr->prio)
- resched_curr(rq);
+ resched_curr(rq, false);
}
}
@@ -2668,7 +2668,7 @@ static void task_tick_rt(struct rq *rq, struct task_struct *p, int queued)
if (test_tsk_thread_flag(rq->curr, TIF_NEED_RESCHED_LAZY))
__resched_curr(rq, RESCHED_eager);
else
- resched_curr(rq);
+ resched_curr(rq, false);
return;
}
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index e29a8897f573..9a745dd7482f 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -2435,7 +2435,7 @@ extern void init_sched_fair_class(void);
extern void reweight_task(struct task_struct *p, int prio);
extern void __resched_curr(struct rq *rq, resched_t rs);
-extern void resched_curr(struct rq *rq);
+extern void resched_curr(struct rq *rq, bool above);
extern void resched_cpu(int cpu);
extern struct rt_bandwidth def_rt_bandwidth;
--
2.31.1