* [PATCH 0/2] rt: Increase/decrease the nr of migratory tasks when enabling/disabling migration
@ 2017-06-21 19:28 Daniel Bristot de Oliveira
  2017-06-21 19:29 ` [PATCH 1/2] sched/debug: Inform the number of rt/dl task that can migrate Daniel Bristot de Oliveira
  2017-06-21 19:29 ` [PATCH 2/2] rt: Increase/decrease the nr of migratory tasks when enabling/disabling migration Daniel Bristot de Oliveira
  0 siblings, 2 replies; 11+ messages in thread
From: Daniel Bristot de Oliveira @ 2017-06-21 19:28 UTC (permalink / raw)
  To: linux-rt-users
  Cc: Luis Claudio R . Goncalves, Clark Williams, Luiz Capitulino,
	Sebastian Andrzej Siewior, Thomas Gleixner, Steven Rostedt,
	Peter Zijlstra, Ingo Molnar, LKML

This is the v2 of:
  [RFC] rt: Some fixes for migrate_disable/enable

However, migrate_disable/enable was reworked during the 4.11-rt
window, so it turns out that 2 of the 3 problems were already fixed.
Good! :-)

But one problem is still there: the rt/dl_nr_migratory increment/decrement.

The problem is reproducible with the following command (on a 4-CPU box):

  # chrt -f 1 taskset -c 3 cat /dev/full | taskset -c 0-2 grep 'batman'

With only patch 1/2 applied, the problem can be observed with the
following command:

  # cat /proc/sched_debug | grep rt_nr_migratory
    .rt_nr_migratory               : 18446744073709542849
    .rt_nr_migratory               : 18446744073709538566
    .rt_nr_migratory               : 18446744073709548257
    .rt_nr_migratory               : 0
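
(Those huge values are the signature of an unsigned underflow: e.g.
18446744073709542849 == 2^64 - 8767, i.e. the counter ended up 8767
decrements below zero.)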

A detailed description of the bug, and of the fix, is in the changelog
of patch 2/2.

Changes from RFC:

 - The problems addressed by the patches:
   x  rt: Update nr_cpus_allowed if the affinity of a task changes while its
      migration is disabled
   x  rt: Checks if task needs migration when re-enabling migration

   were already fixed, so those patches are no longer needed, while the patch:

   x  rt: Increase/decrease the nr of migratory tasks when
      enabling/disabling migration

   is still needed, so it was reworked for the new implementation.

 - A patch showing rt/dl_nr_migratory in the sched_debug output was added.

Daniel Bristot de Oliveira (2):
  sched/debug: Inform the number of rt/dl task that can migrate
  rt: Increase/decrease the nr of migratory tasks when
    enabling/disabling migration

 kernel/sched/core.c  | 40 ++++++++++++++++++++++++++++++++--------
 kernel/sched/debug.c | 15 +++++++++++++--
 2 files changed, 45 insertions(+), 10 deletions(-)

-- 
2.9.4


* [PATCH 1/2] sched/debug: Inform the number of rt/dl task that can migrate
  2017-06-21 19:28 [PATCH 0/2] rt: Increase/decrease the nr of migratory tasks when enabling/disabling migration Daniel Bristot de Oliveira
@ 2017-06-21 19:29 ` Daniel Bristot de Oliveira
  2017-06-22  9:29   ` Ingo Molnar
  2017-06-21 19:29 ` [PATCH 2/2] rt: Increase/decrease the nr of migratory tasks when enabling/disabling migration Daniel Bristot de Oliveira
  1 sibling, 1 reply; 11+ messages in thread
From: Daniel Bristot de Oliveira @ 2017-06-21 19:29 UTC (permalink / raw)
  To: linux-rt-users
  Cc: Luis Claudio R . Goncalves, Clark Williams, Luiz Capitulino,
	Sebastian Andrzej Siewior, Thomas Gleixner, Steven Rostedt,
	Peter Zijlstra, Ingo Molnar, LKML

Add the values of rt_rq.rt_nr_migratory and dl_rq.dl_nr_migratory
to the sched_debug output, for instance:

rt_rq[0]:
  .rt_nr_running                 : 2
  .rt_nr_migratory               : 1     <--- Like this
  .rt_throttled                  : 0
  .rt_time                       : 828.645877
  .rt_runtime                    : 1000.000000

This is useful for debugging problems related to the rt/dl schedulers.

This also fixes the printing format of some variables that are
unsigned but were being printed as signed.

[ I am sending this patch to be able to demonstrate       ]
[ the problem in the kernel-rt; however, this patch can   ]
[ be merged in the vanilla kernel, if people decide it    ]
[ is worth having.                                        ]

Signed-off-by: Daniel Bristot de Oliveira <bristot@redhat.com>
Cc: Luis Claudio R. Goncalves <lgoncalv@redhat.com>
Cc: Clark Williams <williams@redhat.com>
Cc: Luiz Capitulino <lcapitulino@redhat.com>
Cc: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Ingo Molnar <mingo@kernel.org>
Cc: LKML <linux-kernel@vger.kernel.org>
Cc: linux-rt-users <linux-rt-users@vger.kernel.org>
---
 kernel/sched/debug.c | 15 +++++++++++++--
 1 file changed, 13 insertions(+), 2 deletions(-)

diff --git a/kernel/sched/debug.c b/kernel/sched/debug.c
index 0e2af53..2bdeda8 100644
--- a/kernel/sched/debug.c
+++ b/kernel/sched/debug.c
@@ -552,15 +552,19 @@ void print_rt_rq(struct seq_file *m, int cpu, struct rt_rq *rt_rq)
 
 #define P(x) \
 	SEQ_printf(m, "  .%-30s: %Ld\n", #x, (long long)(rt_rq->x))
+#define PU(x) \
+	SEQ_printf(m, "  .%-30s: %lu\n", #x, (unsigned long)(rt_rq->x))
 #define PN(x) \
 	SEQ_printf(m, "  .%-30s: %Ld.%06ld\n", #x, SPLIT_NS(rt_rq->x))
 
-	P(rt_nr_running);
+	PU(rt_nr_running);
+	PU(rt_nr_migratory);
 	P(rt_throttled);
 	PN(rt_time);
 	PN(rt_runtime);
 
 #undef PN
+#undef PU
 #undef P
 }
 
@@ -569,7 +573,12 @@ void print_dl_rq(struct seq_file *m, int cpu, struct dl_rq *dl_rq)
 	struct dl_bw *dl_bw;
 
 	SEQ_printf(m, "\ndl_rq[%d]:\n", cpu);
-	SEQ_printf(m, "  .%-30s: %ld\n", "dl_nr_running", dl_rq->dl_nr_running);
+
+#define PU(x) \
+	SEQ_printf(m, "  .%-30s: %lu\n", #x, (unsigned long)(dl_rq->x))
+
+	PU(dl_nr_running);
+	PU(dl_nr_migratory);
 #ifdef CONFIG_SMP
 	dl_bw = &cpu_rq(cpu)->rd->dl_bw;
 #else
@@ -577,6 +586,8 @@ void print_dl_rq(struct seq_file *m, int cpu, struct dl_rq *dl_rq)
 #endif
 	SEQ_printf(m, "  .%-30s: %lld\n", "dl_bw->bw", dl_bw->bw);
 	SEQ_printf(m, "  .%-30s: %lld\n", "dl_bw->total_bw", dl_bw->total_bw);
+
+#undef PU
 }
 
 extern __read_mostly int sched_clock_running;
-- 
2.9.4


* [PATCH 2/2] rt: Increase/decrease the nr of migratory tasks when enabling/disabling migration
  2017-06-21 19:28 [PATCH 0/2] rt: Increase/decrease the nr of migratory tasks when enabling/disabling migration Daniel Bristot de Oliveira
  2017-06-21 19:29 ` [PATCH 1/2] sched/debug: Inform the number of rt/dl task that can migrate Daniel Bristot de Oliveira
@ 2017-06-21 19:29 ` Daniel Bristot de Oliveira
  2017-06-22  8:38   ` Ingo Molnar
  1 sibling, 1 reply; 11+ messages in thread
From: Daniel Bristot de Oliveira @ 2017-06-21 19:29 UTC (permalink / raw)
  To: linux-rt-users
  Cc: Luis Claudio R . Goncalves, Clark Williams, Luiz Capitulino,
	Sebastian Andrzej Siewior, Thomas Gleixner, Steven Rostedt,
	Peter Zijlstra, Ingo Molnar, LKML

There is a problem in the migrate_disable()/enable() implementation
regarding the number of migratory tasks in the rt/dl RQs. The problem
is the following:

When a task is attached to the rt runqueue, it is checked whether the
task can run on more than one CPU and whether its migration is
disabled. The rt_rq->rt_nr_migratory counter is increased only if the
task can run on more than one CPU and its migration is not disabled.

When the task is detached, the same check is done, and the counter is
decreased under the same condition. The dl scheduler does the
equivalent with dl_rq->dl_nr_migratory.
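
A minimal sketch of that rule (illustrative only; the real accounting
lives in helpers such as inc_rt_migration() in kernel/sched/rt.c and
their dl counterparts, and differs in detail):

	/* on attach: count the task only if it can actually move */
	if (p->nr_cpus_allowed > 1 && !p->migrate_disable)
		rt_rq->rt_nr_migratory++;

	/* on detach: same condition, decrement instead */
	if (p->nr_cpus_allowed > 1 && !p->migrate_disable)
		rt_rq->rt_nr_migratory--;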

One important detail is that migrate_disable()/enable() does not touch
this counter for tasks attached to the rt rq. With that in mind,
consider the following chain of events.

Assumptions:
Task A is the only runnable task on CPU A   Task B runs on CPU B
Task A runs on CFS (non-rt)                 Task B has RT priority
Thus, rt_nr_migratory is 0                  B is running
Task A can run on all CPUs.
Timeline:
        CPU A/TASK A                        CPU B/TASK B
A takes the rt mutex X                           .
A disables migration                             .
           .                          B tries to take the rt mutex X
           .                          As it is held by A {
           .                            A inherits the rt priority of B
           .                            A is dequeued from CFS RQ of CPU A
           .                            A is enqueued in the RT RQ of CPU A
           .                            As migration is disabled
           .                              rt_nr_migratory in A is not increased
           .
A enables migration
A releases the rt mutex X {
  A returns to its original priority
  A asks to be dequeued from the RT RQ {
    As migration is now enabled and A can run on all CPUs {
       rt_nr_migratory should be decreased
       As rt_nr_migratory is already 0, rt_nr_migratory underflows
    }
  }
}

This counter is important because it tells whether there is more than
one runnable & migratory task in the runqueue. If there is, the rt_rq
is marked as overloaded, and the scheduler then tries to push tasks to
other CPUs. This rule is important to keep the scheduler
work-conserving, that is, in a system with M CPUs, the M
highest-priority tasks should be running.
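
For reference, a simplified sketch of the overload decision that
consumes this counter (roughly what update_rt_migration() in
kernel/sched/rt.c does; the real code also updates the root domain
overload mask):

	if (rt_rq->rt_nr_migratory && rt_rq->rt_nr_total > 1)
		rt_rq->overloaded = 1;	/* there is something to push */
	else
		rt_rq->overloaded = 0;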

As rt_nr_migratory is unsigned, the underflow turns it into a huge
value > 0, reporting the RQ as overloaded and activating the push
mechanism without need.

This patch fixes the problem by decreasing/increasing
rt/dl_nr_migratory in the migrate_disable()/enable() operations.

Reported-by: Pei Zhang <pezhang@redhat.com>
Reported-by: Luiz Capitulino <lcapitulino@redhat.com>
Signed-off-by: Daniel Bristot de Oliveira <bristot@redhat.com>
Cc: Luis Claudio R. Goncalves <lgoncalv@redhat.com>
Cc: Clark Williams <williams@redhat.com>
Cc: Luiz Capitulino <lcapitulino@redhat.com>
Cc: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Ingo Molnar <mingo@kernel.org>
Cc: LKML <linux-kernel@vger.kernel.org>
Cc: linux-rt-users <linux-rt-users@vger.kernel.org>
---
 kernel/sched/core.c | 40 ++++++++++++++++++++++++++++++++--------
 1 file changed, 32 insertions(+), 8 deletions(-)

diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index ce34e4f..2b78189 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -7569,6 +7569,9 @@ const u32 sched_prio_to_wmult[40] = {
 void migrate_disable(void)
 {
 	struct task_struct *p = current;
+	struct rq *rq;
+	struct rq_flags rf;
+
 
 	if (in_atomic() || irqs_disabled()) {
 #ifdef CONFIG_SCHED_DEBUG
@@ -7593,10 +7596,21 @@ void migrate_disable(void)
 	preempt_disable();
 	preempt_lazy_disable();
 	pin_current_cpu();
-	p->migrate_disable = 1;
 
-	p->cpus_ptr = cpumask_of(smp_processor_id());
+	rq = task_rq_lock(p, &rf);
+	if (unlikely((p->sched_class == &rt_sched_class ||
+		      p->sched_class == &dl_sched_class) &&
+		      p->nr_cpus_allowed > 1)) {
+		if (p->sched_class == &rt_sched_class)
+			task_rq(p)->rt.rt_nr_migratory--;
+		else
+			task_rq(p)->dl.dl_nr_migratory--;
+	}
 	p->nr_cpus_allowed = 1;
+	task_rq_unlock(rq, p, &rf);
+	p->cpus_ptr = cpumask_of(smp_processor_id());
+	p->migrate_disable = 1;
+
 
 	preempt_enable();
 }
@@ -7605,6 +7619,9 @@ EXPORT_SYMBOL(migrate_disable);
 void migrate_enable(void)
 {
 	struct task_struct *p = current;
+	struct rq *rq;
+	struct rq_flags rf;
+
 
 	if (in_atomic() || irqs_disabled()) {
 #ifdef CONFIG_SCHED_DEBUG
@@ -7628,17 +7645,24 @@ void migrate_enable(void)
 
 	preempt_disable();
 
-	p->cpus_ptr = &p->cpus_mask;
-	p->nr_cpus_allowed = cpumask_weight(&p->cpus_mask);
 	p->migrate_disable = 0;
+	p->cpus_ptr = &p->cpus_mask;
 
-	if (p->migrate_disable_update) {
-		struct rq *rq;
-		struct rq_flags rf;
+	rq = task_rq_lock(p, &rf);
+	p->nr_cpus_allowed = cpumask_weight(&p->cpus_mask);
+	if (unlikely((p->sched_class == &rt_sched_class ||
+		      p->sched_class == &dl_sched_class) &&
+		      p->nr_cpus_allowed > 1)) {
+		if (p->sched_class == &rt_sched_class)
+			task_rq(p)->rt.rt_nr_migratory++;
+		else
+			task_rq(p)->dl.dl_nr_migratory++;
+	}
+	task_rq_unlock(rq, p, &rf);
 
+	if (unlikely(p->migrate_disable_update)) {
 		rq = task_rq_lock(p, &rf);
 		update_rq_clock(rq);
-
 		__do_set_cpus_allowed_tail(p, &p->cpus_mask);
 		task_rq_unlock(rq, p, &rf);
 
-- 
2.9.4


* Re: [PATCH 2/2] rt: Increase/decrease the nr of migratory tasks when enabling/disabling migration
  2017-06-21 19:29 ` [PATCH 2/2] rt: Increase/decrease the nr of migratory tasks when enabling/disabling migration Daniel Bristot de Oliveira
@ 2017-06-22  8:38   ` Ingo Molnar
  2017-06-22 13:31     ` Daniel Bristot de Oliveira
  0 siblings, 1 reply; 11+ messages in thread
From: Ingo Molnar @ 2017-06-22  8:38 UTC (permalink / raw)
  To: Daniel Bristot de Oliveira
  Cc: linux-rt-users, Luis Claudio R . Goncalves, Clark Williams,
	Luiz Capitulino, Sebastian Andrzej Siewior, Thomas Gleixner,
	Steven Rostedt, Peter Zijlstra, LKML


* Daniel Bristot de Oliveira <bristot@redhat.com> wrote:

>  void migrate_disable(void)
>  {
>  	struct task_struct *p = current;
> +	struct rq *rq;
> +	struct rq_flags rf;
> +
>  
>  	if (in_atomic() || irqs_disabled()) {
>  #ifdef CONFIG_SCHED_DEBUG
> @@ -7593,10 +7596,21 @@ void migrate_disable(void)
>  	preempt_disable();
>  	preempt_lazy_disable();
>  	pin_current_cpu();
> -	p->migrate_disable = 1;
>  
> -	p->cpus_ptr = cpumask_of(smp_processor_id());
> +	rq = task_rq_lock(p, &rf);
> +	if (unlikely((p->sched_class == &rt_sched_class ||
> +		      p->sched_class == &dl_sched_class) &&
> +		      p->nr_cpus_allowed > 1)) {
> +		if (p->sched_class == &rt_sched_class)
> +			task_rq(p)->rt.rt_nr_migratory--;
> +		else
> +			task_rq(p)->dl.dl_nr_migratory--;
> +	}
>  	p->nr_cpus_allowed = 1;
> +	task_rq_unlock(rq, p, &rf);
> +	p->cpus_ptr = cpumask_of(smp_processor_id());
> +	p->migrate_disable = 1;
> +
>  
>  	preempt_enable();
>  }
> @@ -7605,6 +7619,9 @@ EXPORT_SYMBOL(migrate_disable);
>  void migrate_enable(void)
>  {
>  	struct task_struct *p = current;
> +	struct rq *rq;
> +	struct rq_flags rf;
> +
>  
>  	if (in_atomic() || irqs_disabled()) {
>  #ifdef CONFIG_SCHED_DEBUG
> @@ -7628,17 +7645,24 @@ void migrate_enable(void)
>  
>  	preempt_disable();
>  
> -	p->cpus_ptr = &p->cpus_mask;
> -	p->nr_cpus_allowed = cpumask_weight(&p->cpus_mask);
>  	p->migrate_disable = 0;
> +	p->cpus_ptr = &p->cpus_mask;
>  
> -	if (p->migrate_disable_update) {
> -		struct rq *rq;
> -		struct rq_flags rf;
> +	rq = task_rq_lock(p, &rf);
> +	p->nr_cpus_allowed = cpumask_weight(&p->cpus_mask);
> +	if (unlikely((p->sched_class == &rt_sched_class ||
> +		      p->sched_class == &dl_sched_class) &&
> +		      p->nr_cpus_allowed > 1)) {
> +		if (p->sched_class == &rt_sched_class)
> +			task_rq(p)->rt.rt_nr_migratory++;
> +		else
> +			task_rq(p)->dl.dl_nr_migratory++;
> +	}
> +	task_rq_unlock(rq, p, &rf);

The fix looks good to me, but AFAICS the repeat pattern introduced here could be 
factored out into a helper function instead, right?

Thanks,

	Ingo


* Re: [PATCH 1/2] sched/debug: Inform the number of rt/dl task that can migrate
  2017-06-21 19:29 ` [PATCH 1/2] sched/debug: Inform the number of rt/dl task that can migrate Daniel Bristot de Oliveira
@ 2017-06-22  9:29   ` Ingo Molnar
  2017-06-22 13:02     ` Daniel Bristot de Oliveira
  0 siblings, 1 reply; 11+ messages in thread
From: Ingo Molnar @ 2017-06-22  9:29 UTC (permalink / raw)
  To: Daniel Bristot de Oliveira
  Cc: linux-rt-users, Luis Claudio R . Goncalves, Clark Williams,
	Luiz Capitulino, Sebastian Andrzej Siewior, Thomas Gleixner,
	Steven Rostedt, Peter Zijlstra, LKML


* Daniel Bristot de Oliveira <bristot@redhat.com> wrote:

> Add the value of the rt_rq.rt_nr_migratory and dl_rq.dl_nr_migratory
> to the sched_debug output, for instance:
> 
> rt_rq[0]:
>   .rt_nr_running                 : 2
>   .rt_nr_migratory               : 1     <--- Like this
>   .rt_throttled                  : 0
>   .rt_time                       : 828.645877
>   .rt_runtime                    : 1000.000000
> 
> This is useful to debug problems related to the dl/rt schedulers.
> 
> This also fixes the format of some variables, that were unsigned, rather
> than signed.
> 
> [ I am sending this patch to be able to demonstrate       ]
> [ the problem in the kernel-rt, however, this patch can   ]
> [ be merged in the vanilla kernel, if people decide it    ]
> [ is worth having it.                                     ]

Note that this patch breaks in various cross-builds - here's the Alpha defconfig 
build:

/home/mingo/tip/kernel/sched/debug.c: In function 'print_rt_rq':
/home/mingo/tip/kernel/sched/debug.c:556:60: error: 'struct rt_rq' has no member 
named 'rt_nr_migratory'
  SEQ_printf(m, "  .%-30s: %lu\n", #x, (unsigned long)(rt_rq->x))
                                                            ^
/home/mingo/tip/kernel/sched/debug.c:33:17: note: in definition of macro 
'SEQ_printf'
   seq_printf(m, x);  \
...

Thanks,

	Ingo


* Re: [PATCH 1/2] sched/debug: Inform the number of rt/dl task that can migrate
  2017-06-22  9:29   ` Ingo Molnar
@ 2017-06-22 13:02     ` Daniel Bristot de Oliveira
  0 siblings, 0 replies; 11+ messages in thread
From: Daniel Bristot de Oliveira @ 2017-06-22 13:02 UTC (permalink / raw)
  To: Ingo Molnar
  Cc: linux-rt-users, Luis Claudio R . Goncalves, Clark Williams,
	Luiz Capitulino, Sebastian Andrzej Siewior, Thomas Gleixner,
	Steven Rostedt, Peter Zijlstra, LKML

On 06/22/2017 11:29 AM, Ingo Molnar wrote:
> Note that this patch breaks in various cross-builds - here's the Alpha defconfig 
> build:
> 
> /home/mingo/tip/kernel/sched/debug.c: In function 'print_rt_rq':
> /home/mingo/tip/kernel/sched/debug.c:556:60: error: 'struct rt_rq' has no member 
> named 'rt_nr_migratory'
>   SEQ_printf(m, "  .%-30s: %lu\n", #x, (unsigned long)(rt_rq->x))
>                                                             ^
> /home/mingo/tip/kernel/sched/debug.c:33:17: note: in definition of macro 
> 'SEQ_printf'
>    seq_printf(m, x);  \
> ...

Oops...

The rt_nr_migratory field is only defined under:

#ifdef CONFIG_SMP

Sorry for the oversight, I will fix this.
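
For instance (an illustrative sketch only; the actual v2 may differ),
guarding the new print, and likewise for dl_nr_migratory:

#ifdef CONFIG_SMP
	PU(rt_nr_migratory);
#endif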

-- Daniel


* Re: [PATCH 2/2] rt: Increase/decrease the nr of migratory tasks when enabling/disabling migration
  2017-06-22  8:38   ` Ingo Molnar
@ 2017-06-22 13:31     ` Daniel Bristot de Oliveira
  2017-06-22 19:49       ` Ingo Molnar
  0 siblings, 1 reply; 11+ messages in thread
From: Daniel Bristot de Oliveira @ 2017-06-22 13:31 UTC (permalink / raw)
  To: Ingo Molnar
  Cc: linux-rt-users, Luis Claudio R . Goncalves, Clark Williams,
	Luiz Capitulino, Sebastian Andrzej Siewior, Thomas Gleixner,
	Steven Rostedt, Peter Zijlstra, LKML

On 06/22/2017 10:38 AM, Ingo Molnar wrote:
> 
> * Daniel Bristot de Oliveira <bristot@redhat.com> wrote:
> 
>>  void migrate_disable(void)
>>  {
>>  	struct task_struct *p = current;
>> +	struct rq *rq;
>> +	struct rq_flags rf;
>> +
>>  
>>  	if (in_atomic() || irqs_disabled()) {
>>  #ifdef CONFIG_SCHED_DEBUG
>> @@ -7593,10 +7596,21 @@ void migrate_disable(void)
>>  	preempt_disable();
>>  	preempt_lazy_disable();
>>  	pin_current_cpu();
>> -	p->migrate_disable = 1;
>>  
>> -	p->cpus_ptr = cpumask_of(smp_processor_id());
>> +	rq = task_rq_lock(p, &rf);
>> +	if (unlikely((p->sched_class == &rt_sched_class ||
>> +		      p->sched_class == &dl_sched_class) &&
>> +		      p->nr_cpus_allowed > 1)) {
>> +		if (p->sched_class == &rt_sched_class)
>> +			task_rq(p)->rt.rt_nr_migratory--;
>> +		else
>> +			task_rq(p)->dl.dl_nr_migratory--;
>> +	}
>>  	p->nr_cpus_allowed = 1;
>> +	task_rq_unlock(rq, p, &rf);
>> +	p->cpus_ptr = cpumask_of(smp_processor_id());
>> +	p->migrate_disable = 1;
>> +
>>  
>>  	preempt_enable();
>>  }
>> @@ -7605,6 +7619,9 @@ EXPORT_SYMBOL(migrate_disable);
>>  void migrate_enable(void)
>>  {
>>  	struct task_struct *p = current;
>> +	struct rq *rq;
>> +	struct rq_flags rf;
>> +
>>  
>>  	if (in_atomic() || irqs_disabled()) {
>>  #ifdef CONFIG_SCHED_DEBUG
>> @@ -7628,17 +7645,24 @@ void migrate_enable(void)
>>  
>>  	preempt_disable();
>>  
>> -	p->cpus_ptr = &p->cpus_mask;
>> -	p->nr_cpus_allowed = cpumask_weight(&p->cpus_mask);
>>  	p->migrate_disable = 0;
>> +	p->cpus_ptr = &p->cpus_mask;
>>  
>> -	if (p->migrate_disable_update) {
>> -		struct rq *rq;
>> -		struct rq_flags rf;
>> +	rq = task_rq_lock(p, &rf);
>> +	p->nr_cpus_allowed = cpumask_weight(&p->cpus_mask);
>> +	if (unlikely((p->sched_class == &rt_sched_class ||
>> +		      p->sched_class == &dl_sched_class) &&
>> +		      p->nr_cpus_allowed > 1)) {
>> +		if (p->sched_class == &rt_sched_class)
>> +			task_rq(p)->rt.rt_nr_migratory++;
>> +		else
>> +			task_rq(p)->dl.dl_nr_migratory++;
>> +	}
>> +	task_rq_unlock(rq, p, &rf);
> 
> The fix looks good to me, but AFAICS the repeat pattern introduced here could be 
> factored out into a helper function instead, right?

Like:

static inline int task_in_rt_class(struct task_struct *p)
{
	return p->sched_class == &rt_sched_class;
}

static inline int task_in_dl_class(struct task_struct *p)
{
	return p->sched_class == &dl_sched_class;
}

?
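
With those, the duplicated block would read something like this
(illustrative only):

	rq = task_rq_lock(p, &rf);
	p->nr_cpus_allowed = cpumask_weight(&p->cpus_mask);
	if (unlikely((task_in_rt_class(p) || task_in_dl_class(p)) &&
		      p->nr_cpus_allowed > 1)) {
		if (task_in_rt_class(p))
			task_rq(p)->rt.rt_nr_migratory++;
		else
			task_rq(p)->dl.dl_nr_migratory++;
	}
	task_rq_unlock(rq, p, &rf);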

Thanks!

-- Daniel


* Re: [PATCH 2/2] rt: Increase/decrease the nr of migratory tasks when enabling/disabling migration
  2017-06-22 13:31     ` Daniel Bristot de Oliveira
@ 2017-06-22 19:49       ` Ingo Molnar
  2017-06-23 13:36         ` Daniel Bristot de Oliveira
  0 siblings, 1 reply; 11+ messages in thread
From: Ingo Molnar @ 2017-06-22 19:49 UTC (permalink / raw)
  To: Daniel Bristot de Oliveira
  Cc: linux-rt-users, Luis Claudio R . Goncalves, Clark Williams,
	Luiz Capitulino, Sebastian Andrzej Siewior, Thomas Gleixner,
	Steven Rostedt, Peter Zijlstra, LKML


* Daniel Bristot de Oliveira <bristot@redhat.com> wrote:

> On 06/22/2017 10:38 AM, Ingo Molnar wrote:
> > 
> > * Daniel Bristot de Oliveira <bristot@redhat.com> wrote:
> > 
> >>  void migrate_disable(void)
> >>  {
> >>  	struct task_struct *p = current;
> >> +	struct rq *rq;
> >> +	struct rq_flags rf;
> >> +
> >>  
> >>  	if (in_atomic() || irqs_disabled()) {
> >>  #ifdef CONFIG_SCHED_DEBUG
> >> @@ -7593,10 +7596,21 @@ void migrate_disable(void)
> >>  	preempt_disable();
> >>  	preempt_lazy_disable();
> >>  	pin_current_cpu();
> >> -	p->migrate_disable = 1;
> >>  
> >> -	p->cpus_ptr = cpumask_of(smp_processor_id());
> >> +	rq = task_rq_lock(p, &rf);
> >> +	if (unlikely((p->sched_class == &rt_sched_class ||
> >> +		      p->sched_class == &dl_sched_class) &&
> >> +		      p->nr_cpus_allowed > 1)) {
> >> +		if (p->sched_class == &rt_sched_class)
> >> +			task_rq(p)->rt.rt_nr_migratory--;
> >> +		else
> >> +			task_rq(p)->dl.dl_nr_migratory--;
> >> +	}
> >>  	p->nr_cpus_allowed = 1;
> >> +	task_rq_unlock(rq, p, &rf);
> >> +	p->cpus_ptr = cpumask_of(smp_processor_id());
> >> +	p->migrate_disable = 1;
> >> +
> >>  
> >>  	preempt_enable();
> >>  }
> >> @@ -7605,6 +7619,9 @@ EXPORT_SYMBOL(migrate_disable);
> >>  void migrate_enable(void)
> >>  {
> >>  	struct task_struct *p = current;
> >> +	struct rq *rq;
> >> +	struct rq_flags rf;
> >> +
> >>  
> >>  	if (in_atomic() || irqs_disabled()) {
> >>  #ifdef CONFIG_SCHED_DEBUG
> >> @@ -7628,17 +7645,24 @@ void migrate_enable(void)
> >>  
> >>  	preempt_disable();
> >>  
> >> -	p->cpus_ptr = &p->cpus_mask;
> >> -	p->nr_cpus_allowed = cpumask_weight(&p->cpus_mask);
> >>  	p->migrate_disable = 0;
> >> +	p->cpus_ptr = &p->cpus_mask;
> >>  
> >> -	if (p->migrate_disable_update) {
> >> -		struct rq *rq;
> >> -		struct rq_flags rf;
> >> +	rq = task_rq_lock(p, &rf);
> >> +	p->nr_cpus_allowed = cpumask_weight(&p->cpus_mask);
> >> +	if (unlikely((p->sched_class == &rt_sched_class ||
> >> +		      p->sched_class == &dl_sched_class) &&
> >> +		      p->nr_cpus_allowed > 1)) {
> >> +		if (p->sched_class == &rt_sched_class)
> >> +			task_rq(p)->rt.rt_nr_migratory++;
> >> +		else
> >> +			task_rq(p)->dl.dl_nr_migratory++;
> >> +	}
> >> +	task_rq_unlock(rq, p, &rf);
> > 
> > The fix looks good to me, but AFAICS the repeat pattern introduced here could be 
> > factored out into a helper function instead, right?
> 
> Like:
> 
> static inline int task_in_rt_class(struct task_struct *p)
> {
> 	return p->sched_class == &rt_sched_class;
> }
> 
> static inline int task_in_dl_class(struct task_struct *p)
> {
> 	return p->sched_class == &dl_sched_class;
> }
> 
> ?

So AFAICS it's this block that is used twice:

> >> +	rq = task_rq_lock(p, &rf);
> >> +	p->nr_cpus_allowed = cpumask_weight(&p->cpus_mask);
> >> +	if (unlikely((p->sched_class == &rt_sched_class ||
> >> +		      p->sched_class == &dl_sched_class) &&
> >> +		      p->nr_cpus_allowed > 1)) {
> >> +		if (p->sched_class == &rt_sched_class)
> >> +			task_rq(p)->rt.rt_nr_migratory++;
> >> +		else
> >> +			task_rq(p)->dl.dl_nr_migratory++;
> >> +	}
> >> +	task_rq_unlock(rq, p, &rf);

or is there some difference I haven't noticed?

Thanks,

	Ingo


* Re: [PATCH 2/2] rt: Increase/decrease the nr of migratory tasks when enabling/disabling migration
  2017-06-22 19:49       ` Ingo Molnar
@ 2017-06-23 13:36         ` Daniel Bristot de Oliveira
  2017-06-24  6:41           ` Ingo Molnar
  0 siblings, 1 reply; 11+ messages in thread
From: Daniel Bristot de Oliveira @ 2017-06-23 13:36 UTC (permalink / raw)
  To: Ingo Molnar
  Cc: linux-rt-users, Luis Claudio R . Goncalves, Clark Williams,
	Luiz Capitulino, Sebastian Andrzej Siewior, Thomas Gleixner,
	Steven Rostedt, Peter Zijlstra, LKML

On 06/22/2017 09:49 PM, Ingo Molnar wrote:
> So AFAICS it's this block that is used twice:
> 
>>>> +	rq = task_rq_lock(p, &rf);
>>>> +	p->nr_cpus_allowed = cpumask_weight(&p->cpus_mask);
>>>> +	if (unlikely((p->sched_class == &rt_sched_class ||
>>>> +		      p->sched_class == &dl_sched_class) &&
>>>> +		      p->nr_cpus_allowed > 1)) {
>>>> +		if (p->sched_class == &rt_sched_class)
>>>> +			task_rq(p)->rt.rt_nr_migratory++;
>>>> +		else
>>>> +			task_rq(p)->dl.dl_nr_migratory++;
>>>> +	}
>>>> +	task_rq_unlock(rq, p, &rf);
> or is there some difference I haven't noticed?

One block increases the number of migratory tasks, and the other one
decreases it...

How about this version? (if it is good, I will polish it in a v2).

diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index ce34e4f..0f66376 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -7566,10 +7566,57 @@ const u32 sched_prio_to_wmult[40] = {
 
 #if defined(CONFIG_PREEMPT_COUNT) && defined(CONFIG_SMP)
 
+enum inc_dec_migratory {
+	DEC_NR_MIGRATORY = -1,
+	INC_NR_MIGRATORY = 1,
+};
+
+static inline void
+inc_dec_nr_migratory(struct task_struct *p, enum inc_dec_migratory id)
+{
+	if (unlikely((p->sched_class == &rt_sched_class ||
+		      p->sched_class == &dl_sched_class) &&
+		      p->nr_cpus_allowed > 1)) {
+		if (p->sched_class == &rt_sched_class)
+			task_rq(p)->rt.rt_nr_migratory += id;
+		else
+			task_rq(p)->dl.dl_nr_migratory += id;
+	}
+}
+
+static inline void
+migrate_disable_update_cpus_allowed(struct task_struct *p)
+{
+	struct rq *rq;
+	struct rq_flags rf;
+
+	p->cpus_ptr = cpumask_of(smp_processor_id());
+
+	rq = task_rq_lock(p, &rf);
+	inc_dec_nr_migratory(p, DEC_NR_MIGRATORY);
+	p->nr_cpus_allowed = 1;
+	task_rq_unlock(rq, p, &rf);
+}
+
+static inline void
+migrate_enable_update_cpus_allowed(struct task_struct *p)
+{
+	struct rq *rq;
+	struct rq_flags rf;
+
+	p->cpus_ptr = &p->cpus_mask;
+
+	rq = task_rq_lock(p, &rf);
+	p->nr_cpus_allowed = cpumask_weight(&p->cpus_mask);
+	inc_dec_nr_migratory(p, INC_NR_MIGRATORY);
+	task_rq_unlock(rq, p, &rf);
+}
+
 void migrate_disable(void)
 {
 	struct task_struct *p = current;
 
+
 	if (in_atomic() || irqs_disabled()) {
 #ifdef CONFIG_SCHED_DEBUG
 		p->migrate_disable_atomic++;
@@ -7593,10 +7640,9 @@ void migrate_disable(void)
 	preempt_disable();
 	preempt_lazy_disable();
 	pin_current_cpu();
-	p->migrate_disable = 1;
 
-	p->cpus_ptr = cpumask_of(smp_processor_id());
-	p->nr_cpus_allowed = 1;
+	migrate_disable_update_cpus_allowed(p);
+	p->migrate_disable = 1;
 
 	preempt_enable();
 }
@@ -7606,6 +7652,7 @@ void migrate_enable(void)
 {
 	struct task_struct *p = current;
 
+
 	if (in_atomic() || irqs_disabled()) {
 #ifdef CONFIG_SCHED_DEBUG
 		p->migrate_disable_atomic--;
@@ -7628,9 +7675,8 @@ void migrate_enable(void)
 
 	preempt_disable();
 
-	p->cpus_ptr = &p->cpus_mask;
-	p->nr_cpus_allowed = cpumask_weight(&p->cpus_mask);
 	p->migrate_disable = 0;
+	migrate_enable_update_cpus_allowed(p);
 
 	if (p->migrate_disable_update) {
 		struct rq *rq;


* Re: [PATCH 2/2] rt: Increase/decrease the nr of migratory tasks when enabling/disabling migration
  2017-06-23 13:36         ` Daniel Bristot de Oliveira
@ 2017-06-24  6:41           ` Ingo Molnar
  2017-06-26 12:16             ` Daniel Bristot de Oliveira
  0 siblings, 1 reply; 11+ messages in thread
From: Ingo Molnar @ 2017-06-24  6:41 UTC (permalink / raw)
  To: Daniel Bristot de Oliveira
  Cc: linux-rt-users, Luis Claudio R . Goncalves, Clark Williams,
	Luiz Capitulino, Sebastian Andrzej Siewior, Thomas Gleixner,
	Steven Rostedt, Peter Zijlstra, LKML


* Daniel Bristot de Oliveira <bristot@redhat.com> wrote:

> On 06/22/2017 09:49 PM, Ingo Molnar wrote:
> > So AFAICS it's this block that is used twice:
> > 
> >>>> +	rq = task_rq_lock(p, &rf);
> >>>> +	p->nr_cpus_allowed = cpumask_weight(&p->cpus_mask);
> >>>> +	if (unlikely((p->sched_class == &rt_sched_class ||
> >>>> +		      p->sched_class == &dl_sched_class) &&
> >>>> +		      p->nr_cpus_allowed > 1)) {
> >>>> +		if (p->sched_class == &rt_sched_class)
> >>>> +			task_rq(p)->rt.rt_nr_migratory++;
> >>>> +		else
> >>>> +			task_rq(p)->dl.dl_nr_migratory++;
> >>>> +	}
> >>>> +	task_rq_unlock(rq, p, &rf);
> > or is there some difference I haven't noticed?
> 
> One block increases the number of migratory tasks, and the
> other one decreases... 
> 
> How about this version? (if it is good, I will polish it in a v2).
> 
> diff --git a/kernel/sched/core.c b/kernel/sched/core.c
> index ce34e4f..0f66376 100644
> --- a/kernel/sched/core.c
> +++ b/kernel/sched/core.c
> @@ -7566,10 +7566,57 @@ const u32 sched_prio_to_wmult[40] = {
>  
>  #if defined(CONFIG_PREEMPT_COUNT) && defined(CONFIG_SMP)
>  
> +enum inc_dec_migratory {
> +	DEC_NR_MIGRATORY = -1,
> +	INC_NR_MIGRATORY = 1,
> +};
> +
> +static inline void
> +inc_dec_nr_migratory(struct task_struct *p, enum inc_dec_migratory id)
> +{
> +	if (unlikely((p->sched_class == &rt_sched_class ||
> +		      p->sched_class == &dl_sched_class) &&
> +		      p->nr_cpus_allowed > 1)) {
> +		if (p->sched_class == &rt_sched_class)
> +			task_rq(p)->rt.rt_nr_migratory += id;
> +		else
> +			task_rq(p)->dl.dl_nr_migratory += id;
> +	}
> +}

How about just 'long delta', pass in +1 or -1 and do away with the 
inc_dec_migratory complication?

Thanks,

	Ingo


* Re: [PATCH 2/2] rt: Increase/decrease the nr of migratory tasks when enabling/disabling migration
  2017-06-24  6:41           ` Ingo Molnar
@ 2017-06-26 12:16             ` Daniel Bristot de Oliveira
  0 siblings, 0 replies; 11+ messages in thread
From: Daniel Bristot de Oliveira @ 2017-06-26 12:16 UTC (permalink / raw)
  To: Ingo Molnar
  Cc: linux-rt-users, Luis Claudio R . Goncalves, Clark Williams,
	Luiz Capitulino, Sebastian Andrzej Siewior, Thomas Gleixner,
	Steven Rostedt, Peter Zijlstra, LKML

On 06/24/2017 08:41 AM, Ingo Molnar wrote:
>> +enum inc_dec_migratory {
>> +	DEC_NR_MIGRATORY = -1,
>> +	INC_NR_MIGRATORY = 1,
>> +};
>> +
>> +static inline void
>> +inc_dec_nr_migratory(struct task_struct *p, enum inc_dec_migratory id)
>> +{
>> +	if (unlikely((p->sched_class == &rt_sched_class ||
>> +		      p->sched_class == &dl_sched_class) &&
>> +		      p->nr_cpus_allowed > 1)) {
>> +		if (p->sched_class == &rt_sched_class)
>> +			task_rq(p)->rt.rt_nr_migratory += id;
>> +		else
>> +			task_rq(p)->dl.dl_nr_migratory += id;
>> +	}
>> +}
> How about just 'long delta', pass in +1 or -1 and do away with the 
> inc_dec_migratory complication?

Ack! I am cooking a v2 using a long delta, rather than the
inc_dec_migratory complication (yeah, I overdid it hehe).
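
Something along these lines (a sketch only; the actual v2 may differ):

static inline void
update_nr_migratory(struct task_struct *p, long delta)
{
	if (unlikely((p->sched_class == &rt_sched_class ||
		      p->sched_class == &dl_sched_class) &&
		      p->nr_cpus_allowed > 1)) {
		if (p->sched_class == &rt_sched_class)
			task_rq(p)->rt.rt_nr_migratory += delta;
		else
			task_rq(p)->dl.dl_nr_migratory += delta;
	}
}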

Thanks!

-- Daniel

