[PATCH] sched/rt: Rework for_each_process_thread() iterations in tg_has_rt_tasks()
From: Kirill Tkhai @ 2018-04-19 17:29 UTC
  To: mingo, peterz, linux-kernel, ktkhai

tg_rt_schedulable() runs for all child task groups (it is
called from walk_tg_tree()), while tg_has_rt_tasks() iterates
over every task on the task list. On systems with a large
number of tasks this may take a long time.
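
Schematically, the cost of a single write before this patch is
(a simplified sketch of the call chain from the backtrace below,
not the exact kernel code):

	tg_set_rt_bandwidth()
	  walk_tg_tree(tg_rt_schedulable, ...)	/* every task group */
	    tg_rt_schedulable(tg, ...)
	      tg_has_rt_tasks(tg)
	        for_each_process_thread(g, p)	/* every task in the system, */
	          ...				/* under read-held tasklist_lock */

i.e. one write may visit on the order of nr_task_groups * nr_tasks
task_structs while tasklist_lock is held.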

I observed a hard LOCKUP on a machine with 20000+ processes
after a write to "cpu.rt_period_us" of a cpu cgroup with
39 children. The problem occurred because tasklist_lock was
held for a long time, so other processes could not fork().

PID: 1036268  TASK: ffff88766c310000  CPU: 36  COMMAND: "criu"
 #0 [ffff887f7f408e48] crash_nmi_callback at ffffffff81050601
 #1 [ffff887f7f408e58] nmi_handle at ffffffff816e0cc7
 #2 [ffff887f7f408eb0] do_nmi at ffffffff816e0fb0
 #3 [ffff887f7f408ef0] end_repeat_nmi at ffffffff816e00b9
    [exception RIP: tg_rt_schedulable+463]
    RIP: ffffffff810bf49f  RSP: ffff886537ad7d50  RFLAGS: 00000202
    RAX: 0000000000000000  RBX: 000000003b9aca00  RCX: ffff883e9cb4b1b0
    RDX: ffff887d0be43608  RSI: ffff886537ad7dd8  RDI: ffff8840a6ad0000
    RBP: ffff886537ad7d68   R8: ffff887d0be431b0   R9: 00000000000e7ef0
    R10: ffff88164fc39400  R11: 0000000000023380  R12: ffffffff81ef8d00
    R13: ffffffff810bea40  R14: 0000000000000000  R15: ffff8840a6ad0000
    ORIG_RAX: ffffffffffffffff  CS: 0010  SS: 0018
--- <NMI exception stack> ---
 #4 [ffff886537ad7d50] tg_rt_schedulable at ffffffff810bf49f
 #5 [ffff886537ad7d70] walk_tg_tree_from at ffffffff810c6c91
 #6 [ffff886537ad7dc0] tg_set_rt_bandwidth at ffffffff810c6dd0
 #7 [ffff886537ad7e28] cpu_rt_period_write_uint at ffffffff810c6eea
 #8 [ffff886537ad7e38] cgroup_file_write at ffffffff8111cfd3
 #9 [ffff886537ad7ec8] vfs_write at ffffffff8121eced
#10 [ffff886537ad7f08] sys_write at ffffffff8121faff
#11 [ffff886537ad7f50] system_call_fastpath at ffffffff816e8a7d

The patch reworks tg_has_rt_tasks() to check
rt_rq::rt_nr_running instead of iterating over the task list.
This makes the function scale well: its execution time no
longer depends on the number of processes in the system.

Note that, since tasklist_lock does not protect a task against
a sched_class change, we do not introduce any new races
compared to what we had before. Also, rt_rq::rt_nr_running
counts queued child rt_rqs in addition to queued tasks. Since
tg_has_rt_tasks() is only used in the !runtime case:

	if (rt_bandwidth_enabled() && !runtime && tg_has_rt_tasks(tg))
		return -EBUSY;

the behaviour does not change. The only difference is that
walk_tg_tree(), calling tg_rt_schedulable(), will now break
its iteration at a parent rt_rq, i.e. earlier.
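
For reference, the walk aborts as soon as the "down" visitor
returns non-zero; conceptually (a minimal recursive sketch, the
real walk_tg_tree_from() in kernel/sched/core.c is iterative and
also takes an "up" visitor):

	static int walk_tg_tree_sketch(struct task_group *tg,
				       int (*down)(struct task_group *, void *),
				       void *data)
	{
		struct task_group *child;
		int ret;

		ret = down(tg, data);		/* tg_rt_schedulable() */
		if (ret)
			return ret;		/* -EBUSY stops the whole walk */

		list_for_each_entry_rcu(child, &tg->children, siblings) {
			ret = walk_tg_tree_sketch(child, down, data);
			if (ret)
				break;
		}
		return ret;
	}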

Signed-off-by: Kirill Tkhai <ktkhai@virtuozzo.com>
---
 kernel/sched/rt.c |   22 ++++++++++++----------
 1 file changed, 12 insertions(+), 10 deletions(-)

diff --git a/kernel/sched/rt.c b/kernel/sched/rt.c
index 7aef6b4e885a..601151bb9322 100644
--- a/kernel/sched/rt.c
+++ b/kernel/sched/rt.c
@@ -2395,10 +2395,10 @@ const struct sched_class rt_sched_class = {
  */
 static DEFINE_MUTEX(rt_constraints_mutex);
 
-/* Must be called with tasklist_lock held */
 static inline int tg_has_rt_tasks(struct task_group *tg)
 {
-	struct task_struct *g, *p;
+	struct rt_rq *rt_rq;
+	int cpu, ret = 0;
 
 	/*
 	 * Autogroups do not have RT tasks; see autogroup_create().
@@ -2406,12 +2406,18 @@ static inline int tg_has_rt_tasks(struct task_group *tg)
 	if (task_group_is_autogroup(tg))
 		return 0;
 
-	for_each_process_thread(g, p) {
-		if (rt_task(p) && task_group(p) == tg)
-			return 1;
+	preempt_disable();
+
+	for_each_online_cpu(cpu) {
+		rt_rq = tg->rt_rq[cpu];
+		if (READ_ONCE(rt_rq->rt_nr_running)) {
+			ret = 1;
+			break;
+		}
 	}
 
-	return 0;
+	preempt_enable();
+	return ret;
 }
 
 struct rt_schedulable_data {
@@ -2510,7 +2516,6 @@ static int tg_set_rt_bandwidth(struct task_group *tg,
 		return -EINVAL;
 
 	mutex_lock(&rt_constraints_mutex);
-	read_lock(&tasklist_lock);
 	err = __rt_schedulable(tg, rt_period, rt_runtime);
 	if (err)
 		goto unlock;
@@ -2528,7 +2533,6 @@ static int tg_set_rt_bandwidth(struct task_group *tg,
 	}
 	raw_spin_unlock_irq(&tg->rt_bandwidth.rt_runtime_lock);
 unlock:
-	read_unlock(&tasklist_lock);
 	mutex_unlock(&rt_constraints_mutex);
 
 	return err;
@@ -2582,9 +2586,7 @@ static int sched_rt_global_constraints(void)
 	int ret = 0;
 
 	mutex_lock(&rt_constraints_mutex);
-	read_lock(&tasklist_lock);
 	ret = __rt_schedulable(NULL, 0, 0);
-	read_unlock(&tasklist_lock);
 	mutex_unlock(&rt_constraints_mutex);
 
 	return ret;
